WorldWideScience

Sample records for accurate reduced models

  1. Capturing dopaminergic modulation and bimodal membrane behaviour of striatal medium spiny neurons in accurate, reduced models

    Mark D Humphries

    2009-11-01

    Full Text Available Loss of dopamine from the striatum can both cause profound motor deficits, as in Parkinson's disease, and disrupt learning. Yet the effect of dopamine on striatal neurons remains a complex and controversial topic, and is in need of a comprehensive framework. We extend a reduced model of the striatal medium spiny neuron (MSN) to account for dopaminergic modulation of its intrinsic ion channels and synaptic inputs. We tune our D1 and D2 receptor MSN models using data from a recent large-scale compartmental model. The new models capture the input-output relationships for both current injection and spiking input with remarkable accuracy, despite the order of magnitude decrease in system size. They also capture the paired-pulse facilitation shown by MSNs. Our dopamine models predict that synaptic effects dominate intrinsic effects for all levels of D1 and D2 receptor activation. We analytically derive a full set of equilibrium points and their stability for the original and dopamine-modulated forms of the MSN model. We find that the stability types are not changed by dopamine activation, and our models predict that the MSN is never bistable. Nonetheless, the MSN models can produce a spontaneously bimodal membrane potential similar to that recently observed in vitro following application of NMDA agonists. We demonstrate that this bimodality is created by modelling the agonist effects as slow, irregular and massive jumps in NMDA conductance and, rather than a form of bistability, is due to the voltage-dependent blockade of NMDA receptors. Our models also predict a more pronounced membrane potential bimodality following D1 receptor activation. This work thus establishes reduced yet accurate dopamine-modulated models of MSNs, suitable for use in large-scale models of the striatum. More importantly, these provide a tractable framework for further study of dopamine's effects on computation by individual neurons.
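
    As a rough illustration of the voltage-dependent NMDA blockade that the abstract identifies as the source of the bimodal membrane potential, the following Python sketch uses the widely cited Jahr-Stevens form of the magnesium-block factor. This is an assumption for illustration only, not the parameterization used in the Humphries model.

        # Minimal sketch (not the authors' exact parameterization): the voltage-dependent
        # magnesium block of NMDA receptors, using the common Jahr-Stevens form.
        import numpy as np

        def nmda_current(V, g_nmda, E_nmda=0.0, mg=1.0):
            """NMDA current (pA) at membrane potential V (mV) for conductance g_nmda (nS)."""
            block = 1.0 / (1.0 + (mg / 3.57) * np.exp(-0.062 * V))  # fraction unblocked
            return g_nmda * block * (V - E_nmda)

        # Large, slow jumps in g_nmda (the abstract's agonist model) push the neuron
        # between a hyperpolarized state (mostly blocked) and a depolarized state.
        for V in (-80.0, -40.0):
            print(V, nmda_current(V, g_nmda=5.0))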

  2. Accurate Modeling of Advanced Reflectarrays

    Zhou, Min

    … of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical Near-Field Antenna Test Facility, it was concluded that the three latter factors are particularly important … to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared … using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured-beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility …

  3. Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.

    Robinson, David

    2014-12-01

    A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all valence occupied orbitals and half of the virtual orbitals included but for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation. PMID:26583218

  4. Towards accurate modeling of moving contact lines

    Holmgren, Hanna

    2015-01-01

    The present thesis treats the numerical simulation of immiscible incompressible two-phase flows with moving contact lines. The conventional Navier–Stokes equations combined with a no-slip boundary condition leads to a non-integrable stress singularity at the contact line. The singularity in the model can be avoided by allowing the contact line to slip. Implementing slip conditions in an accurate way is not straight-forward and different regularization techniques exist where ad-hoc procedures ...

  5. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    Bonney, Matthew S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)]; Brake, Matthew R.W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)]

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against the other models along with the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite difference, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results with the Craig-Bampton reduction having the least accurate results. The models are also compared based on time requirements for the evaluation of each model where the Meta-Model requires the least amount of time for computation by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use is dependent on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.
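
    As a hedged sketch of the simplest of the five methods listed above (a first-order Taylor series built from finite-difference sensitivities), the following Python snippet shows the idea on a toy model. The function full_model and the nominal parameter values are hypothetical stand-ins, not the Brake-Reuss beam model used in the report.

        # First-order Taylor-series PROM from finite-difference sensitivities (toy example).
        import numpy as np

        def full_model(p):                      # placeholder "truth" model
            return np.array([np.sin(p[0]) + p[1] ** 2, p[0] * p[1]])

        p0 = np.array([0.3, 1.2])               # nominal parameter values
        y0 = full_model(p0)
        h = 1e-6
        J = np.column_stack([(full_model(p0 + h * e) - y0) / h for e in np.eye(len(p0))])

        def taylor_prom(p):
            """Cheap surrogate: y(p) ~= y(p0) + J (p - p0)."""
            return y0 + J @ (p - p0)

        p_new = np.array([0.35, 1.25])
        print(taylor_prom(p_new), full_model(p_new))   # surrogate vs. full model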

  6. Accurate sky background modelling for ESO facilities

    Full text: Ground-based measurements, such as high-resolution spectroscopy, are heavily influenced by several physical processes. Amongst others, line absorption/emission, airglow from OH molecules, and scattering of photons within the Earth's atmosphere make observations, in particular from facilities like the future European Extremely Large Telescope, a challenge. Additionally, emission from unresolved extrasolar objects, the zodiacal light, the moon and even thermal emission from the telescope and the instrument contribute significantly to the broad-band background over a wide wavelength range. In our talk we review these influences and give an overview of how they can be accurately modeled to increase the overall precision of spectroscopic and imaging measurements. (author)

  7. A new, accurate predictive model for incident hypertension

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  8. Spectropolarimetrically accurate magnetohydrostatic sunspot model for forward modelling in helioseismology

    Przybylski, D; Cally, P S

    2015-01-01

    We present a technique to construct a spectropolarimetrically accurate magneto-hydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion and absorption in the solar interior and photosphere with the sunspot embedded into it. With the $6173\\mathrm{\\AA}$ magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as full Stokes vector for the simulation at various positions at the solar disk, and analyse the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterised. An increase in acoustic power in the simulated observ...

  9. Reduced Order Podolsky Model

    Thibes, Ronaldo

    2016-01-01

    We perform the canonical and path integral quantizations of a lower-order derivatives model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at classical and quantum levels. Concerning the dynamical time evolution we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivatives order permeating the equations of motion, Dirac brackets and effective action.

  10. On nonlinear reduced order modeling

    When applied to a model that receives n input parameters and predicts m output responses, a reduced order model estimates the variations in the m outputs of the original model resulting from variations in its n inputs. While direct execution of the forward model could provide these variations, reduced order modeling plays an indispensable role for most real-world complex models. This follows because the solutions of complex models are expensive in terms of required computational overhead, thus rendering their repeated execution computationally infeasible. To overcome this problem, reduced order modeling determines a relationship (often referred to as a surrogate model) between the input and output variations that is much cheaper to evaluate than the original model. While it is desirable to seek highly accurate surrogates, the computational overhead quickly becomes intractable, especially for high-dimensional models, n ≫ 10. In this manuscript, we demonstrate a novel reduced order modeling method for building a surrogate model that employs only 'local first-order' derivatives and a new tensor-free expansion to efficiently identify all the important features of the original model to reach a predetermined level of accuracy. This is achieved via a hybrid approach in which local first-order derivatives (i.e., gradient) of a pseudo response (a pseudo response represents a random linear combination of original model’s responses) are randomly sampled utilizing a tensor-free expansion around some reference point, with the resulting gradient information aggregated in a subspace (denoted by the active subspace) of dimension much less than the dimension of the input parameter space. The active subspace is then sampled employing the state-of-the-art techniques for global sampling methods. The proposed method hybridizes the use of global sampling methods for uncertainty quantification and local variational methods for sensitivity analysis. In a similar manner to
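
    The following Python sketch illustrates the active-subspace step described above: gradients of a pseudo response (a random linear combination of the model outputs) are sampled around a reference point and aggregated, and the dominant singular vectors span the active subspace. The toy linear model and gradient routine are assumptions for illustration, not the authors' implementation.

        # Active subspace from randomly sampled gradients of a pseudo response (toy model).
        import numpy as np

        rng = np.random.default_rng(0)
        n, m, n_samples = 20, 3, 50                      # inputs, outputs, gradient samples

        A = rng.standard_normal((m, n))                  # toy linear model y = A x
        def grad_pseudo_response(x, w):                  # gradient of w.y (constant here)
            return A.T @ w

        x_ref = np.zeros(n)
        G = np.column_stack([grad_pseudo_response(x_ref + 0.01 * rng.standard_normal(n),
                                                   rng.standard_normal(m))
                             for _ in range(n_samples)])
        U, s, _ = np.linalg.svd(G, full_matrices=False)
        k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99)) + 1
        active_basis = U[:, :k]                          # columns span the active subspace
        print("active subspace dimension:", k)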

  11. ACCURATE FORECAST AS AN EFFECTIVE WAY TO REDUCE THE ECONOMIC RISK OF AGRO-INDUSTRIAL COMPLEX

    Kymratova A. M.

    2014-11-01

    Full Text Available This article discusses ways of reducing financial, economic and social risks on the basis of an accurate prediction. We study natural time series of winter wheat yield and of minimum winter and winter-spring daily temperatures. A distinctive feature of this class of time series is that it does not follow a normal distribution and shows no visible trend.

  12. Accurate Load Modeling Based on Analytic Hierarchy Process

    Zhenshu Wang

    2016-01-01

    Full Text Available Establishing an accurate load model is a critical problem in power system modeling, with significant implications for power system digital simulation and dynamic security analysis. The synthesis load model (SLM) considers the impact of the power distribution network and compensation capacitor, while the randomness of power load is more precisely described by the traction power system load model (TPSLM). On the basis of these two load models, a load modeling method that combines synthesis load with traction power load is proposed in this paper. This method uses the analytic hierarchy process (AHP) to combine the two load models. Weight coefficients of the two models can be calculated after formulating criteria and judgment matrices, and a synthesis model is then established from these weight coefficients. The effectiveness of the proposed method was examined through simulation. The results show that accurate load modeling based on AHP can effectively improve the accuracy of the load model and prove the validity of this method.
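
    A minimal Python sketch of the AHP weighting step mentioned above: the weight vector is the normalized principal eigenvector of the pairwise judgment matrix. The example judgment values below are hypothetical, not the values used in the paper.

        # AHP weights from a pairwise judgment matrix (illustrative values).
        import numpy as np

        def ahp_weights(judgment):
            vals, vecs = np.linalg.eig(judgment)
            k = np.argmax(vals.real)
            w = np.abs(vecs[:, k].real)
            return w / w.sum()

        # e.g. the synthesis load model judged three times as important as the traction model
        J = np.array([[1.0, 3.0],
                      [1.0 / 3.0, 1.0]])
        w_slm, w_tpslm = ahp_weights(J)
        print(w_slm, w_tpslm)        # weights used to blend the two load models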

  13. An accurate and simple quantum model for liquid water.

    Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A

    2006-11-14

    The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in a significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics

  14. Mouse models of human AML accurately predict chemotherapy response

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to co...

  15. An accurate RLGC circuit model for dual tapered TSV structure

    A fast RLGC circuit model with analytical expression is proposed for the dual tapered through-silicon via (TSV) structure in three-dimensional integrated circuits under different slope angles at the wide frequency region. By describing the electrical characteristics of the dual tapered TSV structure, the RLGC parameters are extracted based on the numerical integration method. The RLGC model includes metal resistance, metal inductance, substrate resistance, outer inductance with skin effect and eddy effect taken into account. The proposed analytical model is verified to be nearly as accurate as the Q3D extractor but more efficient. (semiconductor integrated circuits)

  16. Robust Small Sample Accurate Inference in Moment Condition Models

    Serigne N. Lo; Elvezio Ronchetti

    2006-01-01

    Procedures based on the Generalized Method of Moments (GMM) (Hansen, 1982) are basic tools in modern econometrics. In most cases, the theory available for making inference with these procedures is based on first order asymptotic theory. It is well-known that the (first order) asymptotic distribution does not provide accurate p-values and confidence intervals in moderate to small samples. Moreover, in the presence of small deviations from the assumed model, p-values and confidence intervals ba...

  17. Bayesian calibration of power plant models for accurate performance prediction

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard for uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.
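
    As a highly simplified illustration of calibrating a plant-model parameter against measured data, the Python sketch below runs a plain random-walk Metropolis sampler on one parameter with a Gaussian likelihood. The efficiency model, data and priors are invented for illustration; the full Kennedy-O'Hagan framework used in the paper additionally models systematic model discrepancy, which is not shown here.

        # Toy Bayesian calibration of one model parameter via random-walk Metropolis.
        import numpy as np

        rng = np.random.default_rng(1)

        def plant_model(load, eta):            # hypothetical part-load efficiency model
            return eta * (1.0 - 0.15 * (1.0 - load) ** 2)

        loads = np.linspace(0.5, 1.0, 20)
        measured = plant_model(loads, 0.58) + rng.normal(0.0, 0.005, loads.size)

        def log_post(eta, sigma=0.005):
            if not 0.0 < eta < 1.0:            # flat prior on (0, 1)
                return -np.inf
            resid = measured - plant_model(loads, eta)
            return -0.5 * np.sum((resid / sigma) ** 2)

        eta, chain = 0.5, []
        for _ in range(5000):
            prop = eta + rng.normal(0.0, 0.01)
            if np.log(rng.uniform()) < log_post(prop) - log_post(eta):
                eta = prop
            chain.append(eta)
        print("posterior mean efficiency:", np.mean(chain[1000:]))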

  18. On the importance of having accurate data for astrophysical modelling

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from far infrared to sub-millimeter with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundance in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present the recent work on the ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para-H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  19. Accurate method of modeling cluster scaling relations in modified gravity

    He, Jian-hua; Li, Baojiu

    2016-06-01

    We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the X-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.

  20. Accurate macroscale modelling of spatial dynamics in multiple dimensions

    Roberts, A. J.; Bunder, J. E.

    2011-01-01

    Developments in dynamical systems theory provide new support for the macroscale modelling of PDEs and other microscale systems such as Lattice Boltzmann, Monte Carlo or Molecular Dynamics simulators. By systematically resolving subgrid microscale dynamics, the dynamical systems approach constructs accurate closures of macroscale discretisations of the microscale system. Here we specifically explore reaction-diffusion problems in two spatial dimensions as a prototype of generic systems in multiple dimensions. Our approach unifies into one the modelling of systems by a type of finite elements, and the 'equation free' macroscale modelling of microscale simulators efficiently executing only on small patches of the spatial domain. Centre manifold theory ensures that a closed model exists on the macroscale grid, is emergent, and is systematically approximated. Dividing space either into overlapping finite elements or into spatially separated small patches, the specially crafted inter-element/patch coupling als...

  1. Congestion Control in WMSNs by Reducing Congestion and Free Resources to Set Accurate Rates and Priority

    Akbar Majidi

    2014-08-01

    Full Text Available The main intention of this paper is to focus on a mechanism for reducing congestion in the network by freeing resources to set accurate rates and priorities according to data needs. If two nodes send their packets along the shortest path to the parent node in a congested area, the source node must prioritize the data and route lower-priority data through suitable detour nodes that are lightly loaded or inactive. The proposed algorithm is applied to the nodes near the base station (which convey more traffic) after the congestion detection mechanism has detected congestion. Results obtained from simulation tests done with the NS-2 simulator demonstrate the innovation and validity of the proposed method, with better performance in comparison with the CCF, PCCP and DCCP protocols.

  2. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
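
    The Python sketch below shows the kind of model structure described above: the standard target-cell-limited model extended with two infected-cell classes, I1 (transitioning into production) and I2 (actively producing virus). The parameter values and initial conditions are illustrative assumptions, not the fitted values from the paper.

        # Target-cell-limited model with two infected-cell populations (illustrative parameters).
        import numpy as np
        from scipy.integrate import odeint

        def rhs(y, t, beta=1e-7, k=1.0, delta=0.5, p=1000.0, c=10.0):
            T, I1, I2, V = y
            dT = -beta * T * V                  # target cells infected
            dI1 = beta * T * V - k * I1         # transitioning cells
            dI2 = k * I1 - delta * I2           # productively infected cells
            dV = p * I2 - c * V                 # free virus
            return [dT, dI1, dI2, dV]

        t = np.linspace(0.0, 30.0, 301)                 # days post-infection
        sol = odeint(rhs, [1e6, 0.0, 0.0, 1e-3], t)     # (T, I1, I2, V) initial state
        print("peak viral load:", sol[:, 3].max())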

  3. Accurate Modeling of Buck Converters with Magnetic-Core Inductors

    Astorino, Antonio; Antonini, Giulio; Swaminathan, Madhavan

    2015-01-01

    In this paper, a modeling approach for buck converters with magnetic-core inductors is presented. Due to the high nonlinearity of magnetic materials, frequency-domain analysis of such circuits is not suitable for an accurate description of their behaviour. Hence, in this work, a time-domain model of buck converters with magnetic-core inductors in a Simulink environment is proposed. As an example, the presented approach is used to simulate an eight-phase buck converter. The simulation results show that an unexpected system behaviour in terms of current ripple amplitude needs the inductor core...

  4. Velocity potential formulations of highly accurate Boussinesq-type models

    Bingham, Harry B.; Madsen, Per A.; Fuhrman, David R.

    2009-01-01

    … interest because it reduces the computational effort by approximately a factor of two and facilitates a coupling to other potential flow solvers. A new shoaling enhancement operator is introduced to derive new models (in both formulations) with a velocity profile which is always consistent with the … satisfy a potential flow and/or conserve mass up to the order of truncation of the model. The performance of the new formulation is validated using computations of linear and nonlinear shoaling problems. The behaviour on a rapidly varying bathymetry is also checked using linear wave reflection from a …

  5. A Method to Build a Super Small but Practically Accurate Language Model for Handheld Devices

    WU GenQing (吴根清); ZHENG Fang (郑方)

    2003-01-01

    In this paper, an important question, whether a small language model can be practically accurate enough, is raised. Afterwards, the purpose of a language model, the problems that a language model faces, and the factors that affect the performance of a language model, are analyzed. Finally, a novel method for language model compression is proposed, which makes the large language model usable for applications in handheld devices, such as mobiles, smart phones, personal digital assistants (PDAs), and handheld personal computers (HPCs). In the proposed language model compression method, three aspects are included. First, the language model parameters are analyzed and a criterion based on the importance measure of n-grams is used to determine which n-grams should be kept and which removed. Second, a piecewise linear warping method is proposed to be used to compress the uni-gram count values in the full language model. And third, a rank-based quantization method is adopted to quantize the bi-gram probability values. Experiments show that by using this compression method the language model can be reduced dramatically to only about 1M bytes while the performance hardly decreases. This provides good evidence that a language model compressed by means of a well-designed compression technique is practically accurate enough, and it makes the language model usable in handheld devices.
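
    As a simplified Python sketch of two of the three compression steps described above, the snippet prunes bigrams by an importance score and then applies rank-based quantization to the surviving probabilities. The importance measure used here is just the raw count, a stand-in for the criterion actually proposed in the paper, and the toy counts are invented.

        # N-gram pruning plus rank-based quantization (toy counts, simplified importance measure).
        import numpy as np

        bigram_counts = {("i", "am"): 120, ("am", "a"): 95, ("a", "cat"): 3, ("cat", "sat"): 1}
        total = sum(bigram_counts.values())

        # 1) prune low-importance n-grams
        kept = {bg: c for bg, c in bigram_counts.items() if c >= 5}

        # 2) rank-based quantization of probabilities into a small codebook
        probs = np.array(sorted(c / total for c in kept.values()))
        n_levels = 2
        codebook = [chunk.mean() for chunk in np.array_split(probs, n_levels)]
        quantized = {bg: min(range(n_levels), key=lambda i: abs(codebook[i] - c / total))
                     for bg, c in kept.items()}
        print(codebook, quantized)     # store only the codebook plus per-bigram level indices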

  6. BWR stability using a reduced dynamical model

    BWR stability can be treated with reduced order dynamical models. When the parameters of the model came from experimental data, the predictions are accurate. In this work an alternative derivation for the void fraction equation is made, while highlighting the physical structure of the parameters. As the poles of the power/reactivity transfer function are related to the parameters, the measurement of the poles by other techniques such as noise analysis will lead to the parameters, but the system of equations is non-linear. Simple parametric calculations of the decay ratio are performed, showing why BWRs become unstable when they are operated at low flow and high power. (Author)

  7. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models

    Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.

    2015-09-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic -2Yℓm waveform modes resolved by the NR code up to ℓ=8 . We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  8. Fast and accurate prediction of numerical relativity waveforms from binary black hole mergers using surrogate models

    Blackman, Jonathan; Galley, Chad R; Szilagyi, Bela; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-01-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from $1$ to $10$ and durations corresponding to about $15$ orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were not used for the surrogate's training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second dependin...

  9. Can Raters with Reduced Job Descriptive Information Provide Accurate Position Analysis Questionnaire (PAQ) Ratings?

    Friedman, Lee; Harvey, Robert J.

    1986-01-01

    Job-naive raters provided with job descriptive information made Position Analysis Questionnaire (PAQ) ratings which were validated against ratings of job analysts who were also job content experts. None of the reduced job descriptive information conditions enabled job-naive raters to obtain either acceptable levels of convergent validity with…

  10. BWR stability using a reduced dynamical model

    BWR stability can be treated with reduced order dynamical models. When the parameters of the model came from experimental data, the predictions are accurate. In this work an alternative derivation for the void fraction equation is made, while highlighting the physical structure of the parameters. As the poles of the power/reactivity transfer function are related to the parameters, the measurement of the poles by other techniques such as noise analysis will lead to the parameters, but the system of equations is non-linear. Simple parametric calculations of the decay ratio are performed, showing why BWRs become unstable when they are operated at low flow and high power. (Author). 7 refs
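
    A short Python sketch of the link stated above between the poles of the power/reactivity transfer function and the decay ratio used to judge BWR stability: for a dominant complex pole pair p = a ± jb, the decay ratio of the corresponding oscillatory mode is DR = exp(2πa/|b|). The example pole values are arbitrary.

        # Decay ratio of the dominant oscillatory mode from its complex pole.
        import numpy as np

        def decay_ratio(pole):
            return np.exp(2.0 * np.pi * pole.real / abs(pole.imag))

        print(decay_ratio(complex(-0.5, 3.0)))   # stable mode, DR < 1
        print(decay_ratio(complex(0.1, 3.0)))    # unstable mode, DR > 1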

  11. Coupling Efforts to the Accurate and Efficient Tsunami Modelling System

    Son, S.

    2015-12-01

    In the present study, we couple two different types of tsunami models, i.e., a nondispersive shallow water model of characteristic form (MOST ver. 4) and a dispersive Boussinesq model of non-characteristic form (Son et al. (2011)), in an attempt to improve modelling accuracy and efficiency. Since each model deals with a different type of primary variables, additional care in matching the boundary condition is required. Using an absorbing-generating boundary condition developed by Van Dongeren and Svendsen (1997), model coupling and integration is achieved. Characteristic variables (i.e., Riemann invariants) in MOST are converted to non-characteristic variables for the Boussinesq solver without any loss of physical consistency. The established modelling system has been validated on problems ranging from typical test cases to realistic tsunami events. Simulated results reveal good performance of the developed modelling system. Since the coupled modelling system provides advantageous flexibility during implementation, great efficiency and accuracy are expected to be gained through spot-focused application of the Boussinesq model inside the entire domain of tsunami propagation.
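
    As a Python sketch of the variable conversion mentioned above, for the 1-D shallow-water equations the Riemann invariants are R± = u ± 2√(gh), so characteristic (MOST-style) and primitive (Boussinesq-style) variables can be exchanged at the coupling boundary. This is a textbook relation used for illustration, not the authors' coupling code.

        # Conversion between primitive (u, h) and characteristic (R+, R-) shallow-water variables.
        import numpy as np

        g = 9.81

        def to_characteristic(u, h):
            return u + 2.0 * np.sqrt(g * h), u - 2.0 * np.sqrt(g * h)

        def to_primitive(r_plus, r_minus):
            u = 0.5 * (r_plus + r_minus)
            h = (r_plus - r_minus) ** 2 / (16.0 * g)
            return u, h

        u, h = 1.5, 20.0
        print(to_primitive(*to_characteristic(u, h)))   # recovers (1.5, 20.0)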

  12. Fully Automated Generation of Accurate Digital Surface Models with Sub-Meter Resolution from Satellite Imagery

    Wohlfeil, J.; Hirschmüller, H.; Piltz, B.; Börner, A.; Suppa, M.

    2012-07-01

    Modern pixel-wise image matching algorithms like Semi-Global Matching (SGM) are able to compute high resolution digital surface models from airborne and spaceborne stereo imagery. Although image matching itself can be performed automatically, there are prerequisites, like high geometric accuracy, which are essential for ensuring the high quality of resulting surface models. Especially for line cameras, these prerequisites currently require laborious manual interaction using standard tools, which is a growing problem due to continually increasing demand for such surface models. The tedious work includes partly or fully manual selection of tie- and/or ground control points for ensuring the required accuracy of the relative orientation of images for stereo matching. It also includes masking of large water areas that seriously reduce the quality of the results. Furthermore, a good estimate of the depth range is required, since accurate estimates can seriously reduce the processing time for stereo matching. In this paper an approach is presented that allows all these steps to be performed fully automatically. It includes very robust and precise tie point selection, enabling the accurate calculation of the images' relative orientation via bundle adjustment. It is also shown how water masking and elevation range estimation can be performed automatically on the basis of freely available SRTM data. Extensive tests with a large number of different satellite images from QuickBird and WorldView are presented as proof of the robustness and reliability of the proposed method.

  13. Reduced-Order Model Development for Airfoil Forced Response

    Ramana V. Grandhi

    2008-04-01

    Full Text Available Two new reduced-order models are developed to accurately and rapidly predict geometry deviation effects on airfoil forced response. Both models have significant application to improved mistuning analysis. The first developed model integrates a principal component analysis approach to reduce the number of defining geometric parameters, semianalytic eigensensitivity analysis, and first-order Taylor series approximation to allow rapid as-measured airfoil response analysis. A second developed model extends this approach and quantifies both random and bias errors between the reduced and full models. Adjusting for the bias significantly improves reduced-order model accuracy. The error model is developed from a regression analysis of the relationship between airfoil geometry parameters and reduced-order model error, leading to physics-based error quantification. Both models are demonstrated on an advanced fan airfoil's frequency, modal force, and forced response.

  14. A more accurate model of wetting transitions with liquid helium

    Up to now the analysis of the liquid helium prewetting line on alkali metal substrates has been made using the simple model proposed by Saam et al. Some improvements on this model are considered within a mean-field, sharp-kink model. The temperature variations of the substrate-liquid interface energy and of the liquid density are considered, as well as a more realistic effective potential for the film-substrate interaction. A comparison is made with the experimental data on rubidium and cesium.

  15. Visual texture accurate material appearance measurement, representation and modeling

    Haindl, Michal

    2013-01-01

    This book surveys the state of the art in multidimensional, physically-correct visual texture modeling. Features: reviews the entire process of texture synthesis, including material appearance representation, measurement, analysis, compression, modeling, editing, visualization, and perceptual evaluation; explains the derivation of the most common representations of visual texture, discussing their properties, advantages, and limitations; describes a range of techniques for the measurement of visual texture, including BRDF, SVBRDF, BTF and BSSRDF; investigates the visualization of textural info

  16. Accurate wind farm development and operation. Advanced wake modelling

    Brand, A.; Bot, E.; Ozdemir, H. [ECN Unit Wind Energy, P.O. Box 1, NL 1755 ZG Petten (Netherlands)]; Steinfeld, G.; Drueke, S.; Schmidt, M. [ForWind, Center for Wind Energy Research, Carl von Ossietzky Universitaet Oldenburg, D-26129 Oldenburg (Germany)]; Mittelmeier, N. [REpower Systems SE, D-22297 Hamburg (Germany)]

    2013-11-15

    The ability is demonstrated to calculate wind farm wakes on the basis of ambient conditions that were calculated with an atmospheric model. Specifically, comparisons are described between predicted and observed ambient conditions, and between power predictions from three wind farm wake models and power measurements, for a single and a double wake situation. The comparisons are based on performance indicators and test criteria, with the objective to determine the percentage of predictions that fall within a given range about the observed value. The Alpha Ventus site is considered, which consists of a wind farm with the same name and the met mast FINO1. Data from the 6 REpower wind turbines and the FINO1 met mast were employed. The atmospheric model WRF predicted the ambient conditions at the location and the measurement heights of the FINO1 mast. While the predictability of the wind speed and the wind direction may be reasonable if sufficiently sized tolerances are employed, it is practically impossible to predict the ambient turbulence intensity and vertical shear. Three wind farm wake models predicted the individual turbine powers: FLaP-Jensen and FLaP-Ainslie from ForWind Oldenburg, and FarmFlow from ECN. The reliabilities of the FLaP-Ainslie and the FarmFlow wind farm wake models are of equal order, and higher than FLaP-Jensen. Any difference between the predictions from these models is most clear in the double wake situation. Here FarmFlow slightly outperforms FLaP-Ainslie.

  17. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    Mead, Alexander; Heymans, Catherine; Joudaki, Shahab; Heavens, Alan

    2015-01-01

    We present an optimised variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically-motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of $\\Lambda$CDM and $w$CDM models the halo-model power is accurate to $\\simeq 5$ per cent for $k\\leq 10h\\,\\mathrm{Mpc}^{-1}$ and $z\\leq 2$. We compare our results with recent revisions of the popular HALOFIT model and show that our predictions are more accurate. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limi...

  18. The slow-scale linear noise approximation: an accurate, reduced stochastic description of biochemical networks under timescale separation conditions

    Thomas Philipp

    2012-05-01

    Full Text Available Abstract Background It is well known that the deterministic dynamics of biochemical reaction networks can be more easily studied if timescale separation conditions are invoked (the quasi-steady-state assumption). In this case the deterministic dynamics of a large network of elementary reactions are well described by the dynamics of a smaller network of effective reactions. Each of the latter represents a group of elementary reactions in the large network and has associated with it an effective macroscopic rate law. A popular method to achieve model reduction in the presence of intrinsic noise consists of using the effective macroscopic rate laws to heuristically deduce effective probabilities for the effective reactions which then enables simulation via the stochastic simulation algorithm (SSA). The validity of this heuristic SSA method is a priori doubtful because the reaction probabilities for the SSA have only been rigorously derived from microscopic physics arguments for elementary reactions. Results We here obtain, by rigorous means and in closed form, a reduced linear Langevin equation description of the stochastic dynamics of monostable biochemical networks in conditions characterized by small intrinsic noise and timescale separation. The slow-scale linear noise approximation (ssLNA), as the new method is called, is used to calculate the intrinsic noise statistics of enzyme and gene networks. The results agree very well with SSA simulations of the non-reduced network of elementary reactions. In contrast the conventional heuristic SSA is shown to overestimate the size of noise for Michaelis-Menten kinetics, considerably underestimate the size of noise for Hill-type kinetics and in some cases even miss the prediction of noise-induced oscillations. Conclusions A new general method, the ssLNA, is derived and shown to correctly describe the statistics of intrinsic noise about the macroscopic concentrations under timescale separation conditions.
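
    For reference, the Python sketch below is a minimal Gillespie SSA for an elementary enzyme network (E + S ⇌ ES → E + P), the kind of non-reduced network against which the ssLNA is validated above. Rate constants and copy numbers are illustrative assumptions; this is the exact SSA for elementary reactions, not the heuristic reduced SSA or the ssLNA discussed in the abstract.

        # Exact stochastic simulation algorithm (Gillespie) for E + S <-> ES -> E + P.
        import numpy as np

        rng = np.random.default_rng(0)
        k1, km1, k2 = 0.01, 0.1, 0.5
        x = np.array([50, 200, 0, 0])                     # copy numbers of E, S, ES, P
        stoich = np.array([[-1, -1, +1, 0],               # E + S -> ES
                           [+1, +1, -1, 0],               # ES -> E + S
                           [+1, 0, -1, +1]])              # ES -> E + P
        t, t_end = 0.0, 50.0
        while t < t_end:
            a = np.array([k1 * x[0] * x[1], km1 * x[2], k2 * x[2]])   # propensities
            a0 = a.sum()
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)                # time to next reaction
            x = x + stoich[rng.choice(3, p=a / a0)]       # fire one reaction
        print("final copy numbers (E, S, ES, P):", x)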

  19. Accurate Sliding-Mode Control System Modeling for Buck Converters

    Høyerby, Mikkel Christian Wendelboe; Andersen, Michael Andreas E.

    2007-01-01

    This paper shows that classical sliding mode theory fails to correctly predict the output impedance of the highly useful sliding mode PID compensated buck converter. The reason for this is identified as the assumption of the sliding variable being held at zero during sliding mode, effectively modeling the hysteretic comparator as an infinite gain. Correct prediction of output impedance is shown to be enabled by the use of a more elaborate, finite-gain model of the hysteretic comparator, which takes the effects of time delay and finite switching frequency into account. The demonstrated modeling approach also predicts the self-oscillating switching action of the sliding-mode control system correctly. Analytical findings are verified by simulation as well as experimentally in a 10-30V/3A buck converter.

  20. Accurate models of collisions in glow discharge simulation

    Very detailed, self-consistent kinetic glow discharge simulations are used to examine the effect of various models of collisional processes. The effects of allowing anisotropy in elastic electron collisions with neutral atoms instead of using the momentum transfer cross-section, the effects of using an isotropic distribution in inelastic electron-atom collisions, and the effects of including a Coulomb electron-electron collision operator are all described. It is shown that changes in any of the collisional models, especially the second and third described above, can make a profound difference in the simulation results. This confirms that many discharge simulations have great sensitivity to the physical and numerical approximations used. The results reinforce the importance of using a kinetic theory approach with highly realistic models of various collisional processes

  1. An accurate and efficient Lagrangian sub-grid model

    Mazzitelli, I M; Lanotte, A S

    2014-01-01

    A computationally efficient model is introduced to account for the sub-grid scale velocities of tracer particles dispersed in statistically homogeneous and isotropic turbulent flows. The model embeds the multi-scale nature of turbulent temporal and spatial correlations, that are essential to reproduce multi-particle dispersion. It is capable of describing the Lagrangian diffusion and dispersion of temporally and spatially correlated clouds of particles. Although the model neglects intermittent corrections, we show that pair and tetrad dispersion results compare well with Direct Numerical Simulations of statistically isotropic and homogeneous $3D$ turbulence. This is in agreement with recent observations that deviations from self-similar pair dispersion statistics are rare events.

  2. Accurate modelling of flow induced stresses in rigid colloidal aggregates

    Vanni, Marco

    2015-07-01

    A method has been developed to estimate the motion and the internal stresses induced by a fluid flow on a rigid aggregate. The approach couples Stokesian dynamics and structural mechanics in order to accurately take into account the effect of the complex geometry of the aggregates on hydrodynamic forces and the internal redistribution of stresses. The intrinsic error of the method, due to the low-order truncation of the multipole expansion of the Stokes solution, has been assessed by comparison with the analytical solution for the case of a doublet in a shear flow. In addition, it has been shown that the error becomes smaller as the number of primary particles in the aggregate increases and hence it is expected to be negligible for realistic reproductions of large aggregates. The evaluation of internal forces is performed by an adaptation of the matrix methods of structural mechanics to the geometric features of the aggregates and to the particular stress-strain relationship that occurs at intermonomer contacts. A preliminary investigation on the stress distribution in rigid aggregates and their mode of breakup has been performed by studying the response to an elongational flow of both realistic reproductions of colloidal aggregates (made of several hundred monomers) and highly simplified structures. A very different behaviour has been evidenced between low-density aggregates with isostatic or weakly hyperstatic structures and compact aggregates with highly hyperstatic configuration. In low-density clusters breakup is caused directly by the failure of the most stressed intermonomer contact, which is typically located in the inner region of the aggregate and hence gives rise to fragments of similar size. On the contrary, breakup of compact and highly cross-linked clusters is seldom caused by the failure of a single bond. When this happens, it proceeds through the removal of a tiny fragment from the external part of the structure. More commonly, however

  3. Double Layered Sheath in Accurate HV XLPE Cable Modeling

    Gudmundsdottir, Unnur Stella; Silva, J. De; Bak, Claus Leth;

    2010-01-01

    This paper discusses modelling of high voltage AC underground cables. For long cables, when crossbonding points are present, not only the coaxial mode of propagation is excited during transient phenomena, but also the intersheath mode. This causes inaccurate simulation results for high frequency...

  4. Parameterized reduced-order models using hyper-dual numbers.

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
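
    As a small Python sketch of the hyper-dual arithmetic underlying the parameterization described above: a hyper-dual number a + b·e1 + c·e2 + d·e1e2 (with e1² = e2² = 0) carries exact first and second derivatives through a computation, with no finite-difference step-size error. Only addition, multiplication and a sample polynomial are shown; this is an illustration, not the report's implementation.

        # Minimal hyper-dual number type demonstrating exact first/second derivatives.
        class HyperDual:
            def __init__(self, f, f1=0.0, f2=0.0, f12=0.0):
                self.f, self.f1, self.f2, self.f12 = f, f1, f2, f12

            def __add__(self, o):
                return HyperDual(self.f + o.f, self.f1 + o.f1, self.f2 + o.f2, self.f12 + o.f12)

            def __mul__(self, o):
                return HyperDual(self.f * o.f,
                                 self.f * o.f1 + self.f1 * o.f,
                                 self.f * o.f2 + self.f2 * o.f,
                                 self.f * o.f12 + self.f1 * o.f2
                                 + self.f2 * o.f1 + self.f12 * o.f)

        def poly(x):                        # example function f(x) = x^3
            return x * x * x

        x = HyperDual(2.0, 1.0, 1.0, 0.0)   # seed both perturbation directions with x
        y = poly(x)
        print(y.f, y.f1, y.f12)             # f(2)=8, f'(2)=12, f''(2)=12 -- exact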

  5. Relevance of accurate Monte Carlo modeling in nuclear medical imaging

    Zaidi, H

    1999-01-01

    Monte Carlo techniques have become popular in different areas of medical physics with the advantage of powerful computing systems. In particular, they have been extensively applied to simulate processes involving random behavior and to quantify physical parameters that are difficult or even impossible to calculate by experimental measurements. Recent nuclear medical imaging innovations such as single-photon emission computed tomography (SPECT), positron emission tomography (PET), and multiple emission tomography (MET) are ideal for Monte Carlo modeling techniques because of the stochastic nature of radiation emission, transport and detection processes. Factors which have contributed to the wider use include improved models of radiation transport processes, the practicality of application with the development of acceleration schemes and the improved speed of computers. This paper presents the derivation and methodological basis for this approach and critically reviews its areas of application in nuclear imaging. An ...

  6. Compact and Accurate Turbocharger Modelling for Engine Control

    Sorenson, Spencer C; Hendricks, Elbert; Magnússon, Sigurjón;

    2005-01-01

    With the current trend towards engine downsizing, the use of turbochargers to obtain extra engine power has become common. A great difficulty in the use of turbochargers is in the modelling of the compressor map. In general this is done by inserting the compressor map directly into the engine ECU … turbochargers with radial compressors for either Spark Ignition (SI) or diesel engines...

  7. Accurate numerical solutions for elastic-plastic models

    The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated
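
    The Python sketch below is a hedged illustration of the elastic-predictor/radial-return idea discussed above, written for the standard 3-D small-strain J2 (von Mises) model with linear isotropic hardening. The plane-stress case studied in the abstract requires an extra constraint iteration, and Mendelson's radial-corrector variant differs in detail; neither is shown here. Material constants are illustrative.

        # Radial return for small-strain J2 plasticity with linear isotropic hardening.
        import numpy as np

        def radial_return(strain_inc, stress, eps_p_bar, E=200e3, nu=0.3, H=1e3, sy0=250.0):
            mu = E / (2.0 * (1.0 + nu))
            lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
            # elastic predictor (stress and strain increment stored as 3x3 tensors)
            stress_tr = stress + lam * np.trace(strain_inc) * np.eye(3) + 2.0 * mu * strain_inc
            s_tr = stress_tr - np.trace(stress_tr) / 3.0 * np.eye(3)
            q_tr = np.sqrt(1.5 * np.tensordot(s_tr, s_tr))        # von Mises stress
            f = q_tr - (sy0 + H * eps_p_bar)
            if f <= 0.0:
                return stress_tr, eps_p_bar                       # step is purely elastic
            dgamma = f / (3.0 * mu + H)                           # plastic multiplier
            n = 1.5 * s_tr / q_tr                                 # return direction
            stress_new = stress_tr - 2.0 * mu * dgamma * n        # radial return to the yield surface
            return stress_new, eps_p_bar + dgamma

        stress = np.zeros((3, 3))
        d_eps = np.zeros((3, 3)); d_eps[0, 0] = 2e-3              # uniaxial strain increment
        print(radial_return(d_eps, stress, 0.0)[0][0, 0])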

  8. Nonlinear thermal reduced model for Microwave Circuit Analysis

    Chang, Christophe; Sommet, Raphael; Quéré, Raymond; Dueme, Ph.

    2004-01-01

    With the constant increase of transistor power density, electro-thermal modeling is becoming a necessity for accurate prediction of device electrical performance. For this reason, this paper deals with a methodology to obtain a precise nonlinear thermal model based on Model Order Reduction of a three-dimensional thermal Finite Element (FE) description. This reduced thermal model is based on the Ritz vector approach, which ensures the steady-state solution in every case. An equi...

  9. Reduced cost mission design using surrogate models

    Feldhacker, Juliana D.; Jones, Brandon A.; Doostan, Alireza; Hampton, Jerrad

    2016-01-01

    This paper uses surrogate models to reduce the computational cost associated with spacecraft mission design in three-body dynamical systems. Sampling-based least squares regression is used to project the system response onto a set of orthogonal bases, providing a representation of the ΔV required for rendezvous as a reduced-order surrogate model. Models are presented for mid-field rendezvous of spacecraft in orbits in the Earth-Moon circular restricted three-body problem, including a halo orbit about the Earth-Moon L2 libration point (EML-2) and a distant retrograde orbit (DRO) about the Moon. In each case, the initial position of the spacecraft, the time of flight, and the separation between the chaser and the target vehicles are all considered as design inputs. The results show that sample sizes on the order of 10² are sufficient to produce accurate surrogates, with RMS errors reaching 0.2 m/s for the halo orbit and falling below 0.01 m/s for the DRO. A single function call to the resulting surrogate is up to two orders of magnitude faster than computing the same solution using full fidelity propagators. The expansion coefficients solved for in the surrogates are then used to conduct a global sensitivity analysis of the ΔV on each of the input parameters, which identifies the separation between the spacecraft as the primary contributor to the ΔV cost. Finally, the models are demonstrated to be useful for cheap evaluation of the cost function in constrained optimization problems seeking to minimize the ΔV required for rendezvous. These surrogate models show significant advantages for mission design in three-body systems, in terms of both computational cost and capabilities, over traditional Monte Carlo methods.
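
    The Python sketch below illustrates the surrogate construction described above: sampling-based least-squares regression of a response onto an orthogonal (here Legendre) basis. The "response" is a toy one-dimensional function standing in for the ΔV returned by a full-fidelity propagator; the sample size and polynomial degree are arbitrary choices, not those of the paper.

        # Least-squares projection of a sampled response onto a Legendre basis.
        import numpy as np
        from numpy.polynomial import legendre

        rng = np.random.default_rng(2)

        def delta_v(x):                           # hypothetical stand-in for the propagator
            return 1.0 + 0.5 * x + 0.2 * x**2

        x_train = rng.uniform(-1.0, 1.0, 100)     # ~1e2 samples, as reported above
        y_train = delta_v(x_train)
        degree = 4
        Phi = legendre.legvander(x_train, degree) # design matrix of Legendre polynomials
        coef, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

        x_test = 0.37
        print(legendre.legval(x_test, coef), delta_v(x_test))   # surrogate vs. "truth"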

  10. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10h Mpc-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.
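
    For orientation, the untuned starting point of such a fit is the standard halo-model split of the matter power spectrum into two- and one-halo terms; the physically motivated free parameters described above modify the ingredients of these integrals. The textbook form (an assumed reference, not the paper's tuned expressions) reads

        \[ P(k) = P_{2\mathrm{H}}(k) + P_{1\mathrm{H}}(k), \qquad
           P_{1\mathrm{H}}(k) = \int \Big(\frac{M}{\bar{\rho}}\Big)^{2} u^{2}(k,M)\, n(M)\, \mathrm{d}M, \]
        \[ P_{2\mathrm{H}}(k) \simeq P_{\mathrm{lin}}(k) \left[ \int b(M)\, \frac{M}{\bar{\rho}}\, u(k,M)\, n(M)\, \mathrm{d}M \right]^{2}, \]

    where n(M) is the halo mass function, b(M) the linear halo bias and u(k,M) the normalized Fourier transform of the halo density profile.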

  11. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements.

    Ajay Seth

    Full Text Available The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual's anthropometry. We compared the model to "gold standard" bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2 mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models.

  12. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements.

    Seth, Ajay; Matias, Ricardo; Veloso, António P; Delp, Scott L

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual's anthropometry. We compared the model to "gold standard" bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2 mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761

  13. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979

  14. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. PMID:22658682
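
    The octant-assignment step lends itself to a compact illustration: each encoded 3D point receives an octree cell identifier by interleaving one split bit per axis per level, and the densest cell is found by counting identifiers (in MapReduce terms, map each point to its identifier and reduce by summation). The sketch below is a serial toy version with synthetic points, assumed for illustration only, not the Hadoop implementation of the paper.

        import numpy as np
        from collections import Counter

        def octant_ids(points, lo, hi, depth):
            """Octree cell identifier for each 3D point at the given depth, built by
            interleaving one split bit per axis per level (a Morton-style code)."""
            p = (np.asarray(points, dtype=float) - lo) / (hi - lo)   # normalise to the unit cube
            ids = np.zeros(len(p), dtype=np.int64)
            for level in range(depth):
                bits = np.floor(p * (1 << (level + 1))).astype(np.int64) & 1
                ids = (ids << 3) | (bits[:, 0] << 2) | (bits[:, 1] << 1) | bits[:, 2]
            return ids

        # Hypothetical encoded docking poses: each ligand conformation reduced to one 3D point
        rng = np.random.default_rng(1)
        pts = np.vstack([rng.normal([2.0, 2.0, 2.0], 0.3, size=(300, 3)),   # native-like cluster
                         rng.uniform(0.0, 10.0, size=(200, 3))])            # scattered decoys
        ids = octant_ids(pts, lo=np.zeros(3), hi=np.full(3, 10.0), depth=4)
        cell, count = Counter(ids.tolist()).most_common(1)[0]               # the "reduce" step
        print(f"densest octant holds {count} of {len(pts)} poses")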

  15. Accurate Modeling of the Polysilicon-Insulator-Well (PIW) Capacitor in CMOS Technologies

    JAMASB, Shahriar; MOOSAVİ, Roya

    2015-01-01

    Abstract. A practical method enabling rapid development of an accurate device model for the PIW MOS capacitor is introduced. The simultaneous improvement in accuracy and development time can be achieved without having to perform extensive measurements on specialized test structures by taking advantage of the MOS transistor model parameters routinely extracted in support of analog circuit design activities. This method affords accurate modeling of the voltage coefficient of capacitance over th...

  16. Reducing the Need for Accurate Stream Flow Forecasting for Water Supply Planning by Augmenting Reservoir Operations with Seawater Desalination and Wastewater Recycling

    Bhushan, R.; Ng, T. L.

    2014-12-01

    Accurate stream flow forecasts are critical for reservoir operations for water supply planning. As the world urban population increases, the demand for water in cities is also increasing, making accurate forecasts even more important. However, accurate forecasting of stream flows is difficult owing to short- and long-term weather variations. We propose to reduce this need for accurate stream flow forecasts by augmenting reservoir operations with seawater desalination and wastewater recycling. We develop a robust operating policy for the joint operation of the three sources. With the joint model, we tap into the unlimited reserve of seawater through desalination, and make use of local supplies of wastewater through recycling. However, both seawater desalination and recycling are energy intensive and relatively expensive. Reservoir water on the other hand, is generally cheaper but is limited and variable in its availability, increasing the risk of water shortage during extreme climate events. We operate the joint system by optimizing it using a genetic algorithm to maximize water supply reliability and resilience while minimizing vulnerability subject to a budget constraint and for a given stream flow forecast. To compute the total cost of the system, we take into account the pumping cost of transporting reservoir water to its final destination, and the capital and operating costs of desalinating seawater and recycling wastewater. We produce results for different hydro climatic regions based on artificial stream flows we generate using a simple hydrological model and an autoregressive time series model. The artificial flows are generated from precipitation and temperature data from the Canadian Regional Climate model for present and future scenarios. We observe that the joint operation is able to effectively minimize the negative effects of stream flow forecast uncertainty on system performance at an overall cost that is not significantly greater than the cost of a

  17. Generalized Reduced Order Model Generation Project

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  18. PconsD: ultra rapid, accurate model quality assessment for protein structure prediction

    Skwark, M. J.; Elofsson, A.

    2013-01-01

    Clustering methods are often needed for accurately assessing the quality of modeled protein structures. Recent blind evaluation of quality assessment methods in CASP10 showed that there is very little difference between many different methods as far as ranking models and selecting the best model are concerned. When comparing many models, the computational cost of the model comparison can become significant. Here, we present PconsD, a very fast, stream-computing method for distance-driven model qua...

  19. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  20. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    Leng, Wei [Chinese Academy of Sciences]; Ju, Lili [University of South Carolina]; Gunzburger, Max [Florida State University]; Price, Stephen [Los Alamos National Laboratory]; Ringler, Todd [Los Alamos National Laboratory]

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
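
    For readers unfamiliar with the governing equations, the nonlinear Stokes system referred to above is conventionally written with a strain-rate-dependent viscosity from Glen's flow law. The generic form below is a textbook statement assumed for context, not the paper's discretized equations:

        \[ -\nabla \cdot \big[ 2\,\eta\, \dot{\boldsymbol{\varepsilon}}(\mathbf{u}) \big] + \nabla p = \rho\, \mathbf{g}, \qquad
           \nabla \cdot \mathbf{u} = 0, \qquad
           \eta = \tfrac{1}{2}\, A^{-1/n}\, \dot{\varepsilon}_{e}^{(1-n)/n}, \]

    where u is the velocity, p the pressure, \dot{\varepsilon}(u) the strain-rate tensor with effective value \dot{\varepsilon}_e, A the temperature-dependent rate factor and n ≈ 3 the Glen exponent; no-slip or sliding laws close the system at the bed.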

  1. A Reducing Resistance to Change Model

    Daniela Braduţanu

    2015-01-01

    The aim of this scientific paper is to present an original model for reducing resistance to change. After analyzing the existing literature, I have concluded that the resistance to change subject has gained popularity over the years, but there are not many models that could help managers implement an organizational change process more smoothly and, at the same time, effectively reduce employees’ resistance. The proposed model is very helpful for managers and change agents who are c...

  2. Bilinear reduced order approximate model of parabolic distributed solar collectors

    Elmetennani, Shahrazed

    2015-07-01

    This paper proposes a novel, low dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low dimensional bilinear state representation, enables the reproduction of the heat transfer dynamics along the collector tube for system analysis. Moreover, presented as a reduced order bilinear state space model, the well established control theory for this class of systems can be applied. The approximation efficiency has been proven by several simulation tests, which have been performed considering parameters of the Acurex field with real external working conditions. Model accuracy has been evaluated by comparison to the analytical solution of the hyperbolic distributed model and its semi-discretized approximation, highlighting the benefits of using the proposed numerical scheme. Furthermore, model sensitivity to the different parameters of the Gaussian interpolation has been studied.
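
    For context, "bilinear state representation" refers to the standard model class in which the input enters both additively and multiplicatively with the state. The generic continuous-time form below is an assumed reference, not the paper's specific matrices:

        \[ \dot{x}(t) = A\,x(t) + \sum_{i=1}^{m} N_{i}\, x(t)\, u_{i}(t) + B\,u(t), \qquad y(t) = C\,x(t), \]

    where, for a collector model of this kind, one would expect the state x to collect the interpolated temperature profile along the tube and the inputs u to include quantities such as the fluid flow rate and solar irradiance; the N_i terms carry the input-state coupling that makes the model bilinear rather than linear.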

  3. Energy-accurate simulation models for evaluating the energy efficiency; Energieexakte Simulationsmodelle zur Bewertung der Energieeffizienz

    Blank, Frederic; Roth-Stielow, Joerg [Stuttgart Univ. (Germany). Inst. fuer Leistungselektronik und Elektrische Antriebe]

    2011-07-01

    For the evaluation of the energy efficiency of electrical drive systems in start-stop operations, the amount of energy per cycle is used. This variable of comparison ''energy'' is determined by simulating the whole drive system with special simulation models. These models have to be energy-accurate in order to implement the significant losses. Two simulation models are presented, which were optimized for these simulations: models of a permanent synchronous motor and a frequency inverter. The models are parameterized with measurements and the calculations are verified. Using these models, motion cycles can be simulated and the necessary energy per cycle can be determined. (orig.)

  4. In-Situ Residual Tracking in Reduced Order Modelling

    Joseph C. Slater

    2002-01-01

    Full Text Available Proper orthogonal decomposition (POD) based reduced-order modelling is demonstrated to be a weighted residual technique similar to Galerkin's method. Estimates of weighted residuals of neglected modes are used to determine the relative importance of neglected modes to the model. The cumulative effects of neglected modes can be used to estimate error in the reduced order model. Thus, once the snapshots have been obtained under prescribed training conditions, the need to perform full-order simulations for comparison is eliminated. This has the potential to allow the analyst to initiate further training when the reduced modes are no longer sufficient to accurately represent the predominant phenomenon of interest. The response of a fluid moving at Mach 1.2 above a panel to a forced localized oscillation of the panel at and away from the training operating conditions is used to demonstrate the evaluation method.
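
    A minimal sketch of the machinery behind this idea, assuming a generic synthetic snapshot matrix: the POD modes are the left singular vectors of the snapshots, and the projection coefficients of a new field onto the neglected modes provide exactly the kind of residual estimate described above.

        import numpy as np

        def pod_basis(snapshots, energy=0.999):
            """POD basis from a snapshot matrix (one column per snapshot) via the thin SVD,
            retaining the leading modes that capture the requested energy fraction."""
            U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            cum = np.cumsum(s**2) / np.sum(s**2)
            r = int(np.searchsorted(cum, energy)) + 1
            return U[:, :r], U[:, r:]

        # Synthetic training snapshots: a 500-DOF field sampled at 60 training instants
        x = np.linspace(0.0, 1.0, 500)
        t = np.linspace(0.0, 2.0 * np.pi, 60)
        snaps = np.sin(np.outer(x, t)) + 0.1 * np.cos(3.0 * np.outer(x, t))
        kept, neglected = pod_basis(snaps)
        print(f"retained {kept.shape[1]} of {snaps.shape[1]} modes")

        # Residual check for a new field: large coefficients on the neglected modes signal
        # that the training set no longer spans the dominant dynamics.
        q_new = np.sin(1.3 * x)
        print("max neglected-mode coefficient:", np.abs(neglected.T @ q_new).max())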

  5. Development of an accurate cavitation coupled spray model for diesel engine simulation

    Highlights: • A new hybrid spray model was implemented into the KIVA4 CFD code. • A cavitation sub-model was coupled with the classic KHRT model. • The new model predicts better than classical spray models. • The new model predicts spray and combustion characteristics with accuracy. - Abstract: The combustion process in diesel engines is essentially controlled by the dynamics of the fuel spray. Thus accurate modeling of the spray process is vital to accurately model the combustion process in diesel engines. In this work, a new hybrid spray model was developed by coupling a cavitation-induced spray sub-model to the KHRT spray model. This new model was implemented into the KIVA4 CFD code. The newly developed spray model was extensively validated against the experimental data of non-vaporizing and vaporizing sprays obtained from a constant volume combustion chamber (CVCC) available in the literature. The results were compared on the basis of liquid length, spray penetration and spray images. The model was also validated against engine combustion characteristics data such as in-cylinder pressure and heat release rate. The new spray model captures both spray and combustion characteristics very well

  6. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, UV(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing UV, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that UV accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model

  7. An accurate and efficient system model of iterative image reconstruction in high-resolution pinhole SPECT for small animal research

    Accurate modeling of the photon acquisition process in pinhole SPECT is essential for optimizing resolution. In this work, the authors develop an accurate system model in which the pinhole finite aperture and depth-dependent geometric sensitivity are explicitly included. To achieve high-resolution pinhole SPECT, the voxel size is usually set in the sub-millimeter range, so the total number of image voxels increases accordingly. It is inevitable that a system matrix that models a variety of favorable physical factors will become extremely sophisticated. An efficient implementation of such an accurate system model is proposed in this research. We first use geometric symmetries to reduce redundant entries in the matrix. Due to the sparseness of the matrix, only non-zero terms are stored. A novel center-to-radius recording rule is also developed to effectively describe the relation between a voxel and its related detectors at every projection angle. The proposed system matrix is also suitable for multi-threaded computing. Finally, the accuracy and effectiveness of the proposed system model are evaluated on a workstation equipped with two Quad-Core Intel Xeon processors.

  8. Mining tandem mass spectral data to develop a more accurate mass error model for peptide identification.

    Fu, Yan; Gao, Wen; He, Simin; Sun, Ruixiang; Zhou, Hu; Zeng, Rong

    2007-01-01

    The assumption on the mass error distribution of fragment ions plays a crucial role in peptide identification by tandem mass spectra. Previous mass error models are the simplistic uniform or normal distribution with empirically set parameter values. In this paper, we propose a more accurate mass error model, namely conditional normal model, and an iterative parameter learning algorithm. The new model is based on two important observations on the mass error distribution, i.e. the linearity between the mean of mass error and the ion mass, and the log-log linearity between the standard deviation of mass error and the peak intensity. To our knowledge, the latter quantitative relationship has never been reported before. Experimental results demonstrate the effectiveness of our approach in accurately quantifying the mass error distribution and the ability of the new model to improve the accuracy of peptide identification. PMID:17990507
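
    Written out, the two empirical observations translate into a conditional normal model of roughly the following form; the parametrization is a plausible reading of the abstract rather than the paper's exact notation. For a fragment ion of mass m matched to a peak of intensity I, the mass error ε satisfies

        \[ \varepsilon \mid (m, I) \sim \mathcal{N}\big(\mu(m),\, \sigma^{2}(I)\big), \qquad
           \mu(m) = a\,m + b, \qquad \log \sigma(I) = c\, \log I + d, \]

    with the parameters (a, b, c, d) learned iteratively from confidently identified spectra.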

  9. Efficient and Accurate Log-Levy Approximations of Levy-Driven LIBOR Models

    Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David

    2012-01-01

    -driven LIBOR model and aim to develop accurate and efficient log-Lévy approximations for the dynamics of the rates. The approximations are based on the truncation of the drift term and on Picard approximation of suitable processes. Numerical experiments for forward-rate agreements, caps, swaptions and sticky...

  10. In-situ measurements of material thermal parameters for accurate LED lamp thermal modelling

    Vellvehi, M.; Perpina, X.; Jorda, X.; Werkhoven, R.J.; Kunen, J.M.G.; Jakovenko, J.; Bancken, P.; Bolt, P.J.

    2013-01-01

    This work deals with the extraction of key thermal parameters for accurate thermal modelling of LED lamps: air exchange coefficient around the lamp, emissivity and thermal conductivity of all lamp parts. As a case study, an 8W retrofit lamp is presented. To assess simulation results, temperature is

  11. Development of an Accurate Urban Modeling System Using CAD/GIS Data for Atmosphere Environmental Simulation

    Tomosato Takada; Kazuo Kashiyama

    2008-01-01

    This paper presents an urban modeling system using CAD/GIS data for atmospheric environmental simulation, such as wind flow and contaminant spread in urban areas. The CAD data are used for shape modeling of high-rise buildings and civil structures with complicated shapes, since such data are not accurately included in the 3D GIS data. An unstructured mesh based on tetrahedral elements is employed in order to express urban structures with complicated shapes accurately. It is difficult to assess the quality of the shape model and mesh with conventional visualization techniques. In this paper, stereoscopic visualization using virtual reality (VR) technology is employed for the verification of the quality of the shape model and mesh. The present system is applied to atmospheric environmental simulation in an urban area and is shown to be a useful planning and design tool for investigating atmospheric environmental problems.

  12. Accurate Monte Carlo modelling of the back compartments of SPECT cameras

    Today, new single photon emission computed tomography (SPECT) reconstruction techniques rely on accurate Monte Carlo (MC) simulations to optimize reconstructed images. However, existing MC scintillation camera models which usually include an accurate description of the collimator and crystal, lack correct implementation of the gamma camera's back compartments. In the case of dual isotope simultaneous acquisition (DISA), where backscattered photons from the highest energy isotope are detected in the imaging energy window of the second isotope, this approximation may induce simulation errors. Here, we investigate the influence of backscatter compartment modelling on the simulation accuracy of high-energy isotopes. Three models of a scintillation camera were simulated: a simple model (SM), composed only of a collimator and a NaI(Tl) crystal; an intermediate model (IM), adding a simplified description of the backscatter compartments to the previous model and a complete model (CM), accurately simulating the materials and geometries of the camera. The camera models were evaluated with point sources (67Ga, 99mTc, 111In, 123I, 131I and 18F) in air without a collimator, in air with a collimator and in water with a collimator. In the latter case, sensitivities and point-spread functions (PSFs) simulated in the photopeak window with the IM and CM are close to the measured values (error below 10.5%). In the backscatter energy window, however, the IM and CM overestimate the FWHM of the detected PSF by 52% and 23%, respectively, while the SM underestimates it by 34%. The backscatter peak fluence is also overestimated by 20% and 10% with the IM and CM, respectively, whereas it is underestimated by 60% with the SM. The results show that an accurate description of the backscatter compartments is required for SPECT simulations of high-energy isotopes (above 300 keV) when the backscatter energy window is of interest.

  13. Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation

    Marc Pierrot-Deseilligny

    2011-12-01

    Full Text Available The accurate 3D documentation of architecture and heritage is becoming increasingly common and required in different application contexts. The potential of the image-based approach is nowadays very well known, but there is a lack of reliable, precise and flexible solutions, possibly open-source, which could be used for metric and accurate documentation or digital conservation and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed in order to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experiences, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).

  14. Reducing the invasiveness of modelling frameworks

    Donchyts, G.; Baart, F.

    2010-12-01

    There are several modelling frameworks available that allow environmental models to exchange data with other models. Many efforts have been made in recent years promoting solutions aimed at integrating different numerical models with each other, as well as at simplifying the way to set them up, enter the data, and run them. While the development of many modelling frameworks concentrated on the interoperability of different model engines, several standards were introduced, such as ESMF, OMS and OpenMI. One issue with applying modelling frameworks is invasiveness: the more the model has to know about the framework, the more intrusive it is. Another issue when applying modelling frameworks is that many environmental models are written in a procedural style and in FORTRAN, which is one of the few languages that does not have a proper interface with other programming languages. Most modelling frameworks are written in object-oriented languages like Java/C#, and the FORTRAN modelling framework ESMF is also object-oriented. In this research we show how the application of domain-driven, object-oriented development techniques to environmental models can reduce the invasiveness of modelling frameworks. Our approach is based on four different steps: 1) application of OO techniques and reflection to the existing model to allow introspection; 2) programming-language interoperability between the model, written in a procedural programming language, and the modelling framework, written in an object-oriented programming language; 3) domain mapping between the data types used by the model and the other components being integrated; 4) connecting the models using the framework (wrapper). We compare coupling of an existing model as it was to the same model adapted using the four-step approach. We connect both versions of the model using two different integrated modelling frameworks. As an example of a model we use the coastal morphological model XBeach. By adapting this model it allows for

  15. Particle Image Velocimetry Measurements in an Anatomically-Accurate Scaled Model of the Mammalian Nasal Cavity

    Rumple, Christopher; Krane, Michael; Richter, Joseph; Craven, Brent

    2013-11-01

    The mammalian nose is a multi-purpose organ that houses a convoluted airway labyrinth responsible for respiratory air conditioning, filtering of environmental contaminants, and chemical sensing. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of respiratory airflow and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture an anatomically-accurate transparent model for stereoscopic particle image velocimetry (SPIV) measurements. Challenges in the design and manufacture of an index-matched anatomical model are addressed. PIV measurements are presented, which are used to validate concurrent computational fluid dynamics (CFD) simulations of mammalian nasal airflow. Supported by the National Science Foundation.

  16. An accurate model for numerical prediction of piezoelectric energy harvesting from fluid structure interaction problems

    Piezoelectric energy harvesting (PEH) from ambient energy sources, particularly vibrations, has attracted considerable interest throughout the last decade. Since fluid flow has a high energy density, it is one of the best candidates for PEH. Indeed, a piezoelectric energy harvesting process from fluid flow takes the form of a natural three-way coupling of the turbulent fluid flow, the electromechanical effect of the piezoelectric material and the electrical circuit. There are some experimental and numerical studies on piezoelectric energy harvesting from fluid flow in the literature. Nevertheless, accurate modeling for predicting the characteristics of this three-way coupling has not yet been developed. In the present study, accurate modeling of this triple coupling is developed and validated against experimental results. A new code based on this modeling is developed on the OpenFOAM platform. (paper)

  17. Bayesian reduced-order models for multiscale dynamical systems

    Koutsourelakis, P S

    2010-01-01

    While existing mathematical descriptions can accurately account for phenomena at microscopic scales (e.g. molecular dynamics), these are often high-dimensional, stochastic and their applicability over macroscopic time scales of physical interest is computationally infeasible or impractical. In complex systems, with limited physical insight on the coherent behavior of their constituents, the only available information is data obtained from simulations of the trajectories of huge numbers of degrees of freedom over microscopic time scales. This paper discusses a Bayesian approach to deriving probabilistic coarse-grained models that simultaneously address the problems of identifying appropriate reduced coordinates and the effective dynamics in this lower-dimensional representation. At the core of the models proposed lie simple, low-dimensional dynamical systems which serve as the building blocks of the global model. These approximate the latent, generating sources and parameterize the reduced-order dynamics. We d...

  18. Towards more accurate wind and solar power prediction by improving NWP model physics

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m height above ground are used for the estimation of the (NWP) wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  19. The accurate and comprehensive model of thin fluid flows with inertia on curved substrates

    Roberts, A J; Li, Zhenquan

    1999-01-01

    Consider the 3D flow of a viscous Newtonian fluid upon a curved 2D substrate when the fluid film is thin as occurs in many draining, coating and biological flows. We derive a comprehensive model of the dynamics of the film, the model being expressed in terms of the film thickness and the average lateral velocity. Based upon centre manifold theory, we are assured that the model accurately includes the effects of the curvature of substrate, gravitational body force, fluid inertia and dissipatio...

  20. A Reduced High Frequency Transformer Model To Detect The Partial Discharge Locations

    El-Sayed M. El-Refaie

    2014-03-01

    Full Text Available Transformer modeling is the first step in improving partial discharge localization techniques. Different transformer models have been used for this purpose. This paper presents a reduced transformer model that can be used accurately for partial discharge localization. The model is investigated in the Alternative Transient Program (ATPDraw) for the partial discharge localization application. A comparison between different transformer models is studied; the results achieved with the reduced model demonstrate high efficiency.

  1. An improved model for reduced-order physiological fluid flows

    San, Omer; 10.1142/S0219519411004666

    2012-01-01

    An improved one-dimensional mathematical model based on Pulsed Flow Equations (PFE) is derived by integrating the axial component of the momentum equation over the transient Womersley velocity profile, providing a dynamic momentum equation whose coefficients are smoothly varying functions of the spatial variable. The resulting momentum equation along with the continuity equation and pressure-area relation form our reduced-order model for physiological fluid flows in one dimension, and are aimed at providing accurate and fast-to-compute global models for physiological systems represented as networks of quasi one-dimensional fluid flows. The consequent nonlinear coupled system of equations is solved by the Lax-Wendroff scheme and is then applied to an open model arterial network of the human vascular system containing the largest fifty-five arteries. The proposed model with functional coefficients is compared with current classical one-dimensional theories which assume steady state Hagen-Poiseuille velocity pro...
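
    The classical fixed-profile equations that this work generalizes are worth stating for comparison. In the standard one-dimensional theory the cross-sectional area A(x, t) and flow rate Q(x, t) obey (generic textbook form, with constant momentum-flux and friction coefficients rather than the spatially varying functions derived in the paper)

        \[ \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0, \qquad
           \frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\Big(\alpha \frac{Q^{2}}{A}\Big) + \frac{A}{\rho}\frac{\partial p}{\partial x} = -K_{R}\,\frac{Q}{A}, \qquad
           p = p_{\mathrm{ext}} + \beta\big(\sqrt{A} - \sqrt{A_{0}}\big), \]

    where α is the momentum-flux correction fixed by the assumed velocity profile (α = 4/3 for a Hagen-Poiseuille profile), K_R the corresponding friction coefficient and β the wall stiffness parameter of the pressure-area relation.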

  2. Protein Structure Idealization: How accurately is it possible to model protein structures with dihedral angles?

    Cui, Xuefeng; Li, Shuai Cheng; Bu, Dongbo; Alipanahi, Babak; Li, Ming

    2013-01-01

    Previous studies show that the same type of bond lengths and angles fit Gaussian distributions well with small standard deviations on high resolution protein structure data. The mean values of these Gaussian distributions have been widely used as ideal bond lengths and angles in bioinformatics. However, we are not aware of any research done to evaluate how accurately we can model protein structures with dihedral angles and ideal bond lengths and angles. Here, we introduce the protein structur...

  3. Blasting Vibration Safety Criterion Analysis with Equivalent Elastic Boundary: Based on Accurate Loading Model

    Qingwen Li; Lan Qiao; Gautam Dasgupta; Siwei Ma; Liping Wang; Jianghui Dong

    2015-01-01

    In the tunnel and underground space engineering, the blasting wave will attenuate from shock wave to stress wave to elastic seismic wave in the host rock. Also, the host rock will form crushed zone, fractured zone, and elastic seismic zone under the blasting loading and waves. In this paper, an accurate mathematical dynamic loading model was built. And the crushed zone as well as fractured zone was considered as the blasting vi...

  4. Accurate and interpretable nanoSAR models from genetic programming-based decision tree construction approaches.

    Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z

    2016-09-01

    The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models. PMID:26956430

  5. A rapid and accurate two-point ray tracing method in horizontally layered velocity model

    TIAN Yue; CHEN Xiao-fei

    2005-01-01

    A rapid and accurate method for two-point ray tracing in a horizontally layered velocity model is presented in this paper. Numerical experiments show that this method provides stable and rapid convergence with high accuracy, regardless of the 1-D velocity structure, takeoff angle and epicentral distance. This two-point ray tracing method is compared with the pseudobending technique and the method advanced by Kim and Baag (2002). It turns out that the method in this paper is much more efficient and accurate than the pseudobending technique, but is only applicable to 1-D velocity models. Kim's method is equivalent to ours for cases without large takeoff angles, but it fails to work when the takeoff angle is close to 90°. On the other hand, the method presented in this paper is applicable to cases with any takeoff angle, with rapid and accurate convergence. Therefore, this method is a good choice for two-point ray tracing problems in horizontally layered velocity models and is efficient enough to be applied to a wide range of seismic problems.
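
    For illustration only, the sketch below solves the same two-point problem in a horizontally layered model by plain bisection on the ray parameter (Snell's law keeps p constant across flat layers). It is a slow-but-robust baseline, not the accelerated scheme of the paper, and the velocity model is a made-up example.

        import numpy as np

        def offset(p, v, h):
            """Horizontal distance travelled by a transmitted ray with ray parameter p
            through layers with velocities v and thicknesses h (Snell: sin i_k = p * v_k)."""
            sin_i = p * v
            return np.sum(h * sin_i / np.sqrt(1.0 - sin_i**2))

        def two_point_ray(v, h, x_target, tol=1e-10, itmax=200):
            """Bisect on the ray parameter until the ray reaches the target offset;
            returns the ray parameter and the travel time along the ray."""
            p_lo, p_hi = 0.0, (1.0 - 1e-12) / np.max(v)    # transmitted (non-turning) rays only
            for _ in range(itmax):
                p = 0.5 * (p_lo + p_hi)
                if offset(p, v, h) < x_target:
                    p_lo = p
                else:
                    p_hi = p
                if p_hi - p_lo < tol:
                    break
            sin_i = p * v
            return p, np.sum(h / (v * np.sqrt(1.0 - sin_i**2)))

        v = np.array([3.0, 5.0, 6.5])     # layer velocities, km/s (illustrative)
        h = np.array([2.0, 5.0, 10.0])    # layer thicknesses, km
        p, t = two_point_ray(v, h, x_target=12.0)
        print(f"ray parameter {p:.6f} s/km, travel time {t:.3f} s")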

  6. Accelerated gravitational-wave parameter estimation with reduced order modeling

    Canizares, Priscilla; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2014-01-01

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current parameter estimation approaches for such scenarios can lead to computationally intractable problems in practice. Therefore there is a pressing need for new, fast and accurate Bayesian inference techniques. In this letter we demonstrate that a reduced order modeling approach enables rapid parameter estimation studies. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of non-spinning binary neutron star inspirals can be sped up by a factor of 30 for the early advanced detectors' configurations. This speed-up will increase to about $150$ as the detectors improve their low-frequency limit to 10Hz, reducing to hours analyses which would otherwise take months to complete. Although thes...

  7. A Reducing Resistance to Change Model

    Daniela Braduţanu

    2015-10-01

    Full Text Available The aim of this scientific paper is to present an original model for reducing resistance to change. After analyzing the existing literature, I have concluded that the resistance to change subject has gained popularity over the years, but there are not many models that could help managers implement an organizational change process more smoothly and, at the same time, effectively reduce employees’ resistance. The proposed model is very helpful for managers and change agents who are confronted with a high degree of resistance when trying to implement a new change, as well as for researchers. The key contribution of this paper is that resistance is not necessarily bad and, if used appropriately, it can actually represent an asset. Managers must use employees’ resistance.

  8. Accurate Analytic Results for the Steady State Distribution of the Eigen Model

    Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun

    2016-04-01

    The Eigen model of molecular evolution is popular in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have calculated analytic equations for the steady state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied to the case of small genome length N, as well as to cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.
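
    For reference, the Eigen (quasispecies) model whose steady state is analyzed above is conventionally written as the mutation-selection system below; the steady-state distribution is the stationary point of these equations. This is the standard form, not the specific fitness landscape treated in the paper:

        \[ \dot{x}_{i} = \sum_{j} Q_{ij}\, f_{j}\, x_{j} - x_{i} \sum_{j} f_{j}\, x_{j}, \qquad \sum_{i} x_{i} = 1, \]

    where x_i is the relative frequency of sequence i, f_i its fitness, and Q_{ij} the probability that replication of sequence j produces sequence i; for a two-letter alphabet with per-site copying fidelity q the mutation matrix is Q_{ij} = q^{N - d_{ij}} (1 - q)^{d_{ij}}, with d_{ij} the Hamming distance between the sequences.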

  9. An Accurate Thermoviscoelastic Rheological Model for Ethylene Vinyl Acetate Based on Fractional Calculus

    Marco Paggi

    2015-01-01

    Full Text Available The thermoviscoelastic rheological properties of ethylene vinyl acetate (EVA) used to embed solar cells have to be accurately described to assess the deformation and the stress state of photovoltaic (PV) modules and their durability. In the present work, considering the stress as dependent on a noninteger derivative of the strain, a two-parameter model is proposed to approximate the power-law relation between the relaxation modulus and time for a given temperature level. Experimental validation with EVA uniaxial relaxation data at different constant temperatures proves the great advantage of the proposed approach over classical rheological models based on exponential solutions.
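
    One common way to realize such a two-parameter power law is the fractional "springpot" element, in which the stress depends on a non-integer derivative of order α of the strain; whether the paper uses exactly this normalization is an assumption, but the resulting relaxation modulus has the required form:

        \[ \sigma(t) = E\,\tau^{\alpha}\, \frac{\mathrm{d}^{\alpha} \varepsilon(t)}{\mathrm{d} t^{\alpha}}, \qquad
           G(t) = \frac{E}{\Gamma(1 - \alpha)} \Big(\frac{t}{\tau}\Big)^{-\alpha}, \qquad 0 < \alpha < 1, \]

    so that a single pair of fitted constants per temperature reproduces the linear relation between log relaxation modulus and log time.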

  10. Fast and accurate calculations for cumulative first-passage time distributions in Wiener diffusion models

    Blurton, Steven Paul; Kesselmeier, M.; Gondan, Matthias

    2012-01-01

    We propose an improved method for calculating the cumulative first-passage time distribution in Wiener diffusion models with two absorbing barriers. This distribution function is frequently used to describe responses and error probabilities in choice reaction time tasks. The present work extends...... related work on the density of first-passage times [Navarro, D.J., Fuss, I.G. (2009). Fast and accurate calculations for first-passage times in Wiener diffusion models. Journal of Mathematical Psychology, 53, 222-230]. Two representations exist for the distribution, both including infinite series. We...

  11. Accurate halo-model matter power spectra with dark energy, massive neutrinos and modified gravitational forces

    Mead, Alexander; Lombriser, Lucas; Peacock, John; Steele, Olivia; Winther, Hans

    2016-01-01

    We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead (2015b). We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo model method can predict the non-linear matter power spectrum measured from simulations of parameterised $w(a)$ dark energy models at the few per cent level for $k \leq 0.5\,h\mathrm{Mpc}^{-1}$. An updated version of our publicly available HMcode can be found at https://github.com/alexander-mead/HMcode

  12. Accurate corresponding point search using sphere-attribute-image for statistical bone model generation

    Statistical deformable model based two-dimensional/three-dimensional (2-D/3-D) registration is a promising method for estimating the position and shape of patient bone in the surgical space. Since its accuracy depends on the statistical model capacity, we propose a method for accurately generating a statistical bone model from a CT volume. Our method employs the Sphere-Attribute-Image (SAI) and has improved the accuracy of corresponding point search in statistical model generation. At first, target bone surfaces are extracted as SAIs from the CT volume. Then the textures of SAIs are classified to some regions using Maximally-stable-extremal-regions methods. Next, corresponding regions are determined using Normalized cross-correlation (NCC). Finally, corresponding points in each corresponding region are determined using NCC. The application of our method to femur bone models was performed, and worked well in the experiments. (author)

  13. Accurate halo-model matter power spectra with dark energy, massive neutrinos and modified gravitational forces

    Mead, A. J.; Heymans, C.; Lombriser, L.; Peacock, J. A.; Steele, O. I.; Winther, H. A.

    2016-06-01

    We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead et al. We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases, we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo-model method can predict the non-linear matter power spectrum measured from simulations of parametrized w(a) dark energy models at the few per cent level for k ≤ 0.5 h Mpc-1. An updated version of our publicly available HMCODE can be found at https://github.com/alexander-mead/hmcode.

  14. Accurate and efficient prediction of fine-resolution hydrologic and carbon dynamic simulations from coarse-resolution models

    Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning

    2016-02-01

    The topography and the biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables, and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with an O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from the ones obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.

  15. An accurate simulation model for single-photon avalanche diodes including important statistical effects

    An accurate and complete circuit simulation model for single-photon avalanche diodes (SPADs) is presented. The derived model is not only able to simulate the static DC and dynamic AC behaviors of an SPAD operating in Geiger-mode, but also can emulate the second breakdown and the forward bias behaviors. In particular, it considers important statistical effects, such as dark-counting and after-pulsing phenomena. The developed model is implemented using the Verilog-A description language and can be directly performed in commercial simulators such as Cadence Spectre. The Spectre simulation results give a very good agreement with the experimental results reported in the open literature. This model shows a high simulation accuracy and very fast simulation rate. (semiconductor devices)

  16. Improvement of a land surface model for accurate prediction of surface energy and water balances

    In order to predict energy and water balances between the biosphere and atmosphere accurately, sophisticated schemes to calculate evaporation and adsorption processes in the soil, and cloud (fog) water deposition on vegetation, were implemented in the one-dimensional atmosphere-soil-vegetation model including the CO2 exchange process (SOLVEG2). Performance tests in arid areas showed that the above schemes have a significant effect on surface energy and water balances. The framework of the above schemes incorporated in SOLVEG2 and instructions for running the model are documented. With further modifications of the model to implement the carbon exchanges between vegetation and soil, deposition processes of materials on the land surface, vegetation stress-growth dynamics, etc., the model is suited to evaluating the effect of environmental loads on ecosystems from atmospheric pollutants and radioactive substances under climate changes such as global warming and drought. (author)

  17. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    Owing to the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained by the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulations under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method

  18. Development of accurate contact force models for use with Discrete Element Method (DEM) modelling of bulk fruit handling processes

    Dintwa, Edward

    2006-01-01

    This thesis is primarily concerned with the development of accurate, simplified and validated contact force models for the discrete element modelling (DEM) of fruit bulk handling systems. The DEM is essentially a numerical technique to model a system of particles interacting with one another and with the system boundaries through collisions. The specific area of application envisaged is in postharvest agriculture, where DEM could be used in simulation of many unit operations with bulk fruit,...
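
    As background on the soft-sphere DEM force calculation that such work builds on, a minimal sketch of a Hertzian normal contact force with viscous damping is given below. It is a generic textbook-style contact law, not the validated fruit-specific models developed in the thesis, and every parameter value is illustrative.

```python
import numpy as np

def hertz_contact_force(pos_i, pos_j, vel_i, vel_j, radius, e_eff, damping):
    """Normal contact force on particle i from particle j (soft-sphere DEM).

    e_eff:   effective elastic modulus of the pair (illustrative units)
    damping: normal viscous damping coefficient
    """
    r_ij = pos_i - pos_j
    dist = np.linalg.norm(r_ij)
    overlap = 2.0 * radius - dist            # > 0 only when the spheres interpenetrate
    if overlap <= 0.0:
        return np.zeros(3)
    normal = r_ij / dist
    rel_vn = np.dot(vel_i - vel_j, normal)   # normal relative velocity (< 0 on approach)
    r_eff = radius / 2.0                     # effective radius of two equal spheres
    k = (4.0 / 3.0) * e_eff * np.sqrt(r_eff)
    # Hertzian elastic term plus a simple viscous term opposing the approach
    f_n = k * overlap ** 1.5 - damping * rel_vn
    return max(f_n, 0.0) * normal            # contacts cannot pull particles together

# Two 4 cm "fruits" overlapping by 1 mm and approaching at 0.1 m/s
f = hertz_contact_force(np.array([0.0, 0.0, 0.0]), np.array([0.079, 0.0, 0.0]),
                        np.array([0.05, 0.0, 0.0]), np.array([-0.05, 0.0, 0.0]),
                        radius=0.04, e_eff=1.0e6, damping=50.0)
print(f)
```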

  19. Reduced Complexity Channel Models for IMT-Advanced Evaluation

    Yu Zhang

    2009-01-01

    Accuracy and complexity are two crucial aspects of the applicability of a channel model for wideband multiple input multiple output (MIMO) systems. For a small number of antenna element pairs, correlation-based models have lower computational complexity, while geometry-based stochastic models (GBSMs) can provide more accurate modeling of real radio propagation. This paper investigates several potential simplifications of the GBSM to reduce the complexity with minimal impact on accuracy. In addition, we develop a set of broadband metrics which enable a thorough investigation of the differences between the GBSMs and the simplified models. The impact of various random variables which are employed by the original GBSM on the system-level simulation is also studied. Both simulation results and a measurement campaign show that complexity can be reduced significantly with a negligible loss of accuracy in the proposed metrics. As an example, in the presented scenarios, the computational time can be reduced by up to 57% while keeping the relative deviation of the 5% outage capacity within 5%.

  20. Accurate tissue area measurements with considerably reduced radiation dose achieved by patient-specific CT scan parameters

    Brandberg, J.; Bergelin, E.; Sjostrom, L.;

    2008-01-01

    for muscle tissue. Image noise was quantified by standard deviation measurements. The area deviation was ... The radiation dose of the low-dose technique was reduced to 2-3% for diameters of 31-35 cm and to 7.5-50% for diameters of 36-47 cm ... as compared with the integral dose by the standard diagnostic technique. The CT numbers of muscle tissue remained unchanged with reduced radiation dose. Image noise was on average 20.9 HU (Hounsfield units) for subjects with diameters of 31-35 cm and 11.2 HU for subjects with diameters in the range of 36...

  1. Reduced order modeling of wall turbulence

    Moin, Parviz

    2015-11-01

    Modeling turbulent flow near a wall is a pacing item in computational fluid dynamics for aerospace applications and geophysical flows. Gradual progress has been made in statistical modeling of near-wall turbulence using the Reynolds-averaged equations of motion, an area of research where John Lumley has made numerous seminal contributions. More recently, Lumley and co-workers pioneered dynamical-systems modeling of near-wall turbulence, and demonstrated that the experimentally observed turbulence dynamics can be predicted using low-dimensional dynamical systems. The discovery of the minimal flow unit provides further evidence that near-wall turbulence is amenable to reduced order modeling. The underlying rationale for potential success in using low-dimensional dynamical systems theory is that the Reynolds number is low in close proximity to the wall. Presumably for the same reason, low-dimensional models are expected to be successful in modeling the laminar/turbulence transition region. This has been shown recently using dynamic mode decomposition. Furthermore, it is shown that the near-wall flow structure and statistics in the late and nonlinear transition region are strikingly similar to those in higher Reynolds number fully developed turbulence. In this presentation, I will argue that the accumulated evidence suggests that wall modeling for LES using low-dimensional dynamical systems is a profitable avenue to pursue. The main challenge would be the numerical integration of such wall models into LES methodology.
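
    Dynamic mode decomposition, mentioned above, extracts a low-dimensional linear operator and its modes from a sequence of flow snapshots. A minimal exact-DMD sketch follows, using synthetic travelling-wave data in place of real velocity fields; it is illustrative only and not tied to the transition study cited.

```python
import numpy as np

def dmd(snapshots, rank):
    """Exact DMD: snapshots is (n_states, n_times); returns eigenvalues and modes."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Reduced linear operator mapping each snapshot to the next one
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W / eigvals    # exact DMD modes
    return eigvals, modes

# Synthetic data: two travelling waves sampled on 128 points over 60 time steps
x = np.linspace(0, 2 * np.pi, 128)
t = np.arange(60) * 0.1
data = (np.sin(x)[:, None] * np.cos(2.3 * t)
        + 0.5 * np.cos(3 * x)[:, None] * np.sin(5.1 * t))
eigvals, modes = dmd(data, rank=4)
print(np.abs(eigvals))   # near 1 for neutrally stable oscillatory dynamics
```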

  2. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available. PMID:24472756

  3. A complete and accurate surface-potential based large-signal model for compound semiconductor HEMTs

    A complete and accurate surface-potential-based large-signal model for compound semiconductor HEMTs is presented. A surface potential equation resembling the one used in conventional MOSFET models is achieved. The analytic solutions from the traditional surface potential theory that were developed for MOSFET models are inherited. For the core model derivation, a novel method is used to realize a direct application of the standard surface potential model of MOSFETs to HEMT modeling, without breaking the mathematical structure. The high-order derivatives of I-V/C-V remain continuous, making the model suitable for RF large-signal applications. Furthermore, the self-heating effects and the transconductance dispersion are also modelled. The model has been verified through comparison with measured DC I-V, the Gummel symmetry test, C-V, minimum noise figure, small-signal S-parameters up to 66 GHz and a single-tone input power sweep at 29 GHz for a 4 × 75 μm × 0.1 μm InGaAs/GaAs power pHEMT fabricated at a commercial foundry. (semiconductor devices)

  4. Reducing Spatial Data Complexity for Classification Models

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics gradually becomes a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the

  6. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    Stovgaard Kasper

    2010-08-01

    Background: Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS) is an established low resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can for example be used to construct a likelihood function, which is paramount for structure determination based on statistical inference. Results: We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy atom representations of amino acids. Such a method avoids the computationally costly iteration over all atoms. We estimated the form factors using generated data from a set of high quality protein structures. No ad hoc scaling or correction factors are applied in the calculation of the curves. Two coarse-grained representations of protein structure were investigated; two scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN. This resulted in a significant improvement in the decoy recognition performance. In conclusion, the presented method shows great promise for
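
    The Debye formula at the heart of the method sums pairwise scattering contributions over all coarse-grained bodies. A generic sketch is given below with a constant form factor and random coordinates; the fitted dummy-atom form factors and the two-body-per-residue representation from the paper are not reproduced.

```python
import numpy as np

def debye_intensity(q_values, coords, form_factors):
    """Debye formula: I(q) = sum_ij f_i(q) f_j(q) sin(q r_ij) / (q r_ij).

    coords:       (n_bodies, 3) positions of coarse-grained scattering bodies
    form_factors: (n_q, n_bodies) form factor of each body at each q
    """
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.linalg.norm(diff, axis=-1)                 # pairwise distances
    intensity = np.empty_like(q_values)
    for k, q in enumerate(q_values):
        f = form_factors[k]
        sinc = np.sinc(q * r / np.pi)                 # np.sinc(x) = sin(pi x)/(pi x)
        intensity[k] = f @ sinc @ f
    return intensity

# Illustrative use: 50 dummy bodies with a constant form factor of 1
rng = np.random.default_rng(2)
coords = rng.normal(scale=15.0, size=(50, 3))         # Angstrom-scale point cloud
q = np.linspace(0.01, 0.5, 100)                       # 1/Angstrom
ff = np.ones((q.size, coords.shape[0]))
print(debye_intensity(q, coords, ff)[:3])
```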

  7. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements

    Seth, Ajay; Matias, Ricardo; António P Veloso; Delp, Scott L.

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic join...

  8. Accurate Modeling of a Transverse Flux Permanent Magnet Generator Using 3D Finite Element Analysis

    Hosseini, Seyedmohsen; Moghani, Javad Shokrollahi; Jensen, Bogi Bech

    2011-01-01

    This paper presents an accurate modeling method that is applied to a single-sided outer-rotor transverse flux permanent magnet generator. The inductances and the induced electromotive force for a typical generator are calculated using the magnetostatic three-dimensional finite element method. A new method is then proposed that reveals the behavior of the generator under any load. Finally, torque calculations are carried out using three-dimensional finite element analyses. It is shown that although in the single-phase generator the cogging torque is very high, this can be improved significantly by combining three single-phase modules into a three-phase generator.

  9. Applying an accurate spherical model to gamma-ray burst afterglow observations

    Leventis, K.; van der Horst, A. J.; van Eerten, H. J.; Wijers, R. A. M. J.

    2013-05-01

    We present results of model fits to afterglow data sets of GRB 970508, GRB 980703 and GRB 070125, characterized by long and broad-band coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB 970508, for which we find a circumburst medium density n ∝ r⁻². We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the fraction of energy density carried by relativistic electrons and magnetic field. We also find that for the blast wave to be adiabatic, the fraction of electrons accelerated at the shock has to be smaller than 1. We present best-fitting parameters for the afterglows of all three bursts, including uncertainties in the parameters of GRB 970508, and compare the inferred values to those obtained by different authors.

  10. Particle Image Velocimetry Measurements in Anatomically-Accurate Models of the Mammalian Nasal Cavity

    Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.

    2012-11-01

    A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.

  11. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R⁻⁵ term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.

  12. Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm

    Huang, Yanhua; Gu, Lizhi

    2015-09-01

    The main methods of existing multi-spiral surface geometry modeling include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. However, these methods have shortcomings such as a large amount of calculation, complex procedures, and visible errors, which have considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with multi-spiral surfaces and of the spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of the coupling point cluster with singular points removed, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and expressed digitally, and coupling coalescence of the surfaces with multiple coupling point clusters is carried out in the Pro/E environment. Digitally accurate modeling of the spatially parallel coupling body with multi-spiral surfaces is thus realized. Smoothing and fairing are applied to the end section area of a three-blade end-milling cutter by means of the spatially parallel coupling principle, and the resulting entity model is machined in a four-axis machining center. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in solving essentially the problems of considerable modeling errors in computer graphics and

  13. Using the Neumann series expansion for assembling Reduced Order Models

    Nasisi S.

    2014-06-01

    An efficient method to remove the limitation in selecting the master degrees of freedom in a finite element model by means of model order reduction is presented. A major difficulty of the Guyan reduction and the IRS (Improved Reduced System) method is the need to appropriately select the master and slave degrees of freedom for the rate of convergence to be high. This study approaches the above limitation by using a particular arrangement of the rows and columns of the assembled matrices K and M and by employing a combination of the IRS method and a variant of the analytical selection of masters presented in (Shah, V. N., Raymund, M., Analytical selection of masters for the reduced eigenvalue problem, International Journal for Numerical Methods in Engineering 18 (1), 1982) for cases where the first (lowest) frequencies are sought. One of the most significant characteristics of the approach is the use of the Neumann series expansion, which motivates this particular arrangement of the matrices' entries. The method shows a higher rate of convergence when compared to the standard IRS and gives very accurate results for the lowest reduced frequencies. To show the effectiveness of the proposed method, two test structures and the human vocal tract model employed in (Vampola, T., Horacek, J., Svec, J. G., FE modeling of human vocal tract acoustics. Part I: Production of Czech vowels, Acta Acustica United with Acustica 94 (3), 2008) are presented.
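
    For orientation, static (Guyan) condensation, on which the IRS method and the Neumann-series rearrangement discussed above build, eliminates the slave degrees of freedom through the stiffness matrix alone. A minimal sketch on a small spring-mass chain follows; the system and the choice of masters are illustrative, not the test structures from the paper.

```python
import numpy as np

def guyan_reduce(K, M, masters):
    """Static (Guyan) condensation of K and M onto the master DOFs."""
    n = K.shape[0]
    slaves = [i for i in range(n) if i not in masters]
    Kss = K[np.ix_(slaves, slaves)]
    Ksm = K[np.ix_(slaves, masters)]
    # Transformation x = T x_m, with slave DOFs following the masters statically
    T = np.zeros((n, len(masters)))
    T[masters, np.arange(len(masters))] = 1.0
    T[np.ix_(slaves, np.arange(len(masters)))] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T

# 6-DOF spring-mass chain, fixed at the left end
n, k, m = 6, 1000.0, 1.0
K = np.zeros((n, n))
for i in range(n):
    K[i, i] += 2 * k if i < n - 1 else k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k
M = m * np.eye(n)
Kr, Mr = guyan_reduce(K, M, masters=[1, 3, 5])

full = np.sort(np.sqrt(np.linalg.eigvals(np.linalg.solve(M, K)).real))
red = np.sort(np.sqrt(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real))
print(full[:3])   # lowest full-model natural frequencies (rad/s)
print(red)        # Guyan estimates: accurate for the lowest modes
```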

  14. Validation of an Accurate Three-Dimensional Helical Slow-Wave Circuit Model

    Kory, Carol L.

    1997-01-01

    The helical slow-wave circuit embodies a helical coil of rectangular tape supported in a metal barrel by dielectric support rods. Although the helix slow-wave circuit remains the mainstay of the traveling-wave tube (TWT) industry because of its exceptionally wide bandwidth, a full helical circuit, without significant dimensional approximations, has not been successfully modeled until now. Numerous attempts have been made to analyze the helical slow-wave circuit so that the performance could be accurately predicted without actually building it, but because of its complex geometry, many geometrical approximations became necessary rendering the previous models inaccurate. In the course of this research it has been demonstrated that using the simulation code, MAFIA, the helical structure can be modeled with actual tape width and thickness, dielectric support rod geometry and materials. To demonstrate the accuracy of the MAFIA model, the cold-test parameters including dispersion, on-axis interaction impedance and attenuation have been calculated for several helical TWT slow-wave circuits with a variety of support rod geometries including rectangular and T-shaped rods, as well as various support rod materials including isotropic, anisotropic and partially metal coated dielectrics. Compared with experimentally measured results, the agreement is excellent. With the accuracy of the MAFIA helical model validated, the code was used to investigate several conventional geometric approximations in an attempt to obtain the most computationally efficient model. Several simplifications were made to a standard model including replacing the helical tape with filaments, and replacing rectangular support rods with shapes conforming to the cylindrical coordinate system with effective permittivity. The approximate models are compared with the standard model in terms of cold-test characteristics and computational time. The model was also used to determine the sensitivity of various

  15. A reduced model for shock and detonation waves. I. The inert case

    Stoltz, G.

    2006-01-01

    We present a model of mesoparticles, very much in the Dissipative Particle Dynamics spirit, in which a molecule is replaced by a particle with an internal thermodynamic degree of freedom (temperature or energy). The model is shown to give quantitatively accurate results for the simulation of shock waves in a crystalline polymer, and opens the way to a reduced model of detonation waves.

  16. LogGPO: An accurate communication model for performance prediction of MPI programs

    CHEN WenGuang; ZHAI JiDong; ZHANG Jin; ZHENG WeiMin

    2009-01-01

    Message passing interface (MPI) is the de facto standard for writing parallel scientific applications on distributed memory systems. Performance prediction of MPI programs on current or future parallel systems can help to find system bottlenecks or optimize programs. To effectively analyze and predict the performance of a large and complex MPI program, an efficient and accurate communication model is highly needed. A series of communication models have been proposed, such as the LogP model family, which assume that the sending overhead, message transmission, and receiving overhead of a communication are not overlapped and that there is a maximum overlap degree between computation and communication. However, this assumption does not always hold for MPI programs, because either the sending or the receiving overhead introduced by MPI implementations can decrease the potential overlap for large messages. In this paper, we present a new communication model, named LogGPO, which captures the potential overlap of computation with communication in MPI programs. We design and implement a trace-driven simulator to verify the LogGPO model by predicting the performance of point-to-point communication and two real applications, CG and Sweep3D. The average prediction errors of the LogGPO model are 2.4% and 2.0% for these two applications respectively, while the average prediction errors of the LogGP model are 38.3% and 9.1% respectively.
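
    As background, the LogP/LogGP family estimates point-to-point time from latency L, per-message overhead o, per-message gap g, and per-byte gap G; LogGPO additionally models how the send/receive overheads limit the overlap between computation and communication. A sketch of the plain LogGP estimate is shown below with made-up parameter values; it does not include the overlap modelling that distinguishes LogGPO.

```python
def loggp_ptp_time(msg_bytes, L, o, g, G):
    """LogGP estimate of point-to-point time for a single message.

    L: network latency, o: send/receive overhead, G: gap per byte (all in seconds).
    g (minimum gap between consecutive messages) is not needed for a single message.
    """
    # Sender overhead + per-byte injection cost + wire latency + receiver overhead
    return o + (msg_bytes - 1) * G + L + o

# Illustrative parameters (not measured values): 5 us latency, 1 us overhead,
# 0.5 ns per byte (roughly 2 GB/s bandwidth)
for size in (1_000, 100_000, 10_000_000):
    t = loggp_ptp_time(size, L=5e-6, o=1e-6, g=2e-6, G=0.5e-9)
    print(f"{size:>10} B -> {t * 1e6:8.1f} us")
```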

  17. Physical modeling of real-world slingshots for accurate speed predictions

    Yeats, Bob

    2016-01-01

    We discuss the physics and modeling of latex-rubber slingshots. The goal is to get accurate speed predictions in spite of the significant real-world difficulties of force drift, force hysteresis, rubber ageing, and the very nonlinear, non-ideal force vs. pull distance curves of slingshot rubber bands. Slingshots are known to shoot faster under some circumstances when the bands are tapered rather than having constant width and stiffness. We give both qualitative understanding and numerical predictions of this effect. We consider two models. The first is based on conservation of energy and is easier to implement, but cannot determine the speeds along the rubber bands without making assumptions. The second treats the bands as a series of mass points, each pulled by the immediately adjacent mass points according to how much the rubber has been stretched on the two adjacent sides. This is a classic many-body F=ma problem, but convergence requires using a particular numerical technique. It gives accurate p...
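
    The second model described above treats the band as a chain of mass points, each pulled by its neighbours according to the local stretch. The sketch below integrates such a chain with a linear force-extension law and symplectic Euler stepping; real latex is strongly nonlinear and the pouch/fork geometry is ignored, so all numbers are purely illustrative.

```python
import numpy as np

# Band discretized into mass points; the last point also carries the projectile mass.
n_points = 20
band_mass, projectile_mass = 0.01, 0.01            # kg
rest_seg = 0.20 / n_points                          # 20 cm unstretched band
k_seg = 400.0 * n_points                            # stiffness of one segment (N/m)
dt, max_steps = 2e-6, 200_000

mass = np.full(n_points, band_mass / n_points)
mass[-1] += projectile_mass
# Initial condition: band stretched to 3x its rest length, one end fixed at x = 0
x = np.linspace(0.0, 3 * 0.20, n_points + 1)[1:]
v = np.zeros(n_points)

for _ in range(max_steps):
    stretch = np.diff(np.concatenate(([0.0], x))) - rest_seg
    tension = k_seg * np.maximum(stretch, 0.0)      # rubber cannot push
    # Each point is pulled back by the segment behind it and forward by the one ahead
    force = -tension
    force[:-1] += tension[1:]
    v += dt * force / mass
    x += dt * v
    if stretch[-1] <= 0.0:                          # band slack at the pouch: release
        break

print(f"projectile speed at release: {abs(v[-1]):.1f} m/s")
```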

  18. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement.

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-01-01

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553

  19. Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques

    Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We will present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.

  20. Simple and accurate modelling of the gravitational potential produced by thick and thin exponential disks

    Smith, Rory; Candlish, Graeme N; Fellhauer, Michael; Gibson, Bradley K

    2015-01-01

    We present accurate models of the gravitational potential produced by a radially exponential disk mass distribution. The models are produced by combining three separate Miyamoto-Nagai disks. Such models have been used previously to model the disk of the Milky Way, but here we extend this framework to allow its application to disks of any mass, scalelength, and a wide range of thickness from infinitely thin to near spherical (ellipticities from 0 to 0.9). The models have the advantage of simplicity of implementation, and we expect faster run speeds over a double exponential disk treatment. The potentials are fully analytical, and differentiable at all points. The mass distribution of our models deviates from the radial mass distribution of a pure exponential disk by <0.4% out to 4 disk scalelengths, and <1.9% out to 10 disk scalelengths. We tabulate fitting parameters which facilitate construction of exponential disks for any scalelength, and a wide range of disk thickness (a user-friendly, web-based int...
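
    Each component of such a model is a Miyamoto-Nagai potential, and the total is the sum of three with different masses and scale parameters. The evaluation is sketched below; the component parameters are placeholders, not the fitted values tabulated by the authors.

```python
import numpy as np

G = 4.301e-6   # gravitational constant in kpc * (km/s)^2 / Msun

def miyamoto_nagai(R, z, M, a, b):
    """Potential of a single Miyamoto-Nagai disk at cylindrical (R, z)."""
    return -G * M / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)

def triple_mn_potential(R, z, components):
    """Sum of three Miyamoto-Nagai disks approximating an exponential disk."""
    return sum(miyamoto_nagai(R, z, M, a, b) for (M, a, b) in components)

# Placeholder components (mass in Msun, scale parameters in kpc); the actual
# combination for a given exponential disk comes from the paper's fit tables,
# and one component may carry a negative mass.
components = [(5.0e10, 3.0, 0.3), (-2.0e10, 6.0, 0.3), (1.0e10, 1.5, 0.3)]

R = np.linspace(0.1, 20.0, 5)
print(triple_mn_potential(R, 0.0, components))   # midplane potential values
```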

  1. Accurate and efficient modeling of the detector response in small animal multi-head PET systems

    In fully three-dimensional PET imaging, iterative image reconstruction techniques usually outperform analytical algorithms in terms of image quality provided that an appropriate system model is used. In this study we concentrate on the calculation of an accurate system model for the YAP-(S)PET II small animal scanner, with the aim to obtain fully resolution- and contrast-recovered images at low levels of image roughness. For this purpose we calculate the system model by decomposing it into a product of five matrices: (1) a detector response component obtained via Monte Carlo simulations, (2) a geometric component which describes the scanner geometry and which is calculated via a multi-ray method, (3) a detector normalization component derived from the acquisition of a planar source, (4) a photon attenuation component calculated from x-ray computed tomography data, and finally, (5) a positron range component is formally included. This system model factorization allows the optimization of each component in terms of computation time, storage requirements and accuracy. The main contribution of this work is a new, efficient way to calculate the detector response component for rotating, planar detectors, that consists of a GEANT4 based simulation of a subset of lines of flight (LOFs) for a single detector head whereas the missing LOFs are obtained by using intrinsic detector symmetries. Additionally, we introduce and analyze a probability threshold for matrix elements of the detector component to optimize the trade-off between the matrix size in terms of non-zero elements and the resulting quality of the reconstructed images. In order to evaluate our proposed system model we reconstructed various images of objects, acquired according to the NEMA NU 4-2008 standard, and we compared them to the images reconstructed with two other system models: a model that does not include any detector response component and a model that approximates analytically the depth of interaction

  2. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish

    2016-04-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
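
    The pipeline described, projection of the stimulus onto a low-dimensional electrical receptive field followed by a static nonlinearity giving spike probability, can be sketched generically as below. The synthetic data, the spike-triggered-average stand-in for the PCA subspace estimate, and the binned nonlinearity are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: stimulation amplitudes on 20 electrodes over 5000 trials, with
# binary spike outcomes drawn from a hidden "true" electrical receptive field.
n_electrodes, n_trials = 20, 5000
stimuli = rng.standard_normal((n_trials, n_electrodes))
true_erf = np.exp(-0.5 * ((np.arange(n_electrodes) - 8) / 2.0) ** 2)
p_true = 1.0 / (1.0 + np.exp(-(stimuli @ true_erf - 1.0)))
spikes = rng.random(n_trials) < p_true

# Step 1: estimate a one-dimensional electrical receptive field (ERF).
# (The paper estimates the subspace with PCA; the simpler spike-triggered
#  average is used here as a stand-in for the dominant direction.)
erf = stimuli[spikes].mean(axis=0)
erf /= np.linalg.norm(erf)

# Step 2: project stimuli onto the ERF and estimate the static nonlinearity by
# binning projections and measuring the empirical spike probability per bin.
proj = stimuli @ erf
edges = np.quantile(proj, np.linspace(0.0, 1.0, 11))
idx = np.clip(np.digitize(proj, edges) - 1, 0, 9)
p_spike = np.array([spikes[idx == k].mean() for k in range(10)])

print("ERF vs true ERF correlation:", np.round(np.corrcoef(erf, true_erf)[0, 1], 3))
print("spike probability per projection decile:", np.round(p_spike, 2))
```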

  3. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    Matias I Maturana

    2016-04-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy.

  4. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina

    Maturana, Matias I.; Apollo, Nicholas V.; Hadjinicolaou, Alex E.; Garrett, David J.; Cloherty, Shaun L.; Kameneva, Tatiana; Grayden, David B.; Ibbotson, Michael R.; Meffin, Hamish

    2016-01-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron’s electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143

  5. Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation

    Poddar, Banibrata; Giurgiutiu, Victor

    2016-04-01

    Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and little attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. On top of that, the problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties. Typically this problem may also be ill-posed. Due to all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved with the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse problem solver.

  6. Do Ecological Niche Models Accurately Identify Climatic Determinants of Species Ranges?

    Searcy, Christopher A; Shaffer, H Bradley

    2016-04-01

    Defining species' niches is central to understanding their distributions and is thus fundamental to basic ecology and climate change projections. Ecological niche models (ENMs) are a key component of making accurate projections and include descriptions of the niche in terms of both response curves and rankings of variable importance. In this study, we evaluate Maxent's ranking of environmental variables based on their importance in delimiting species' range boundaries by asking whether these same variables also govern annual recruitment based on long-term demographic studies. We found that Maxent-based assessments of variable importance in setting range boundaries in the California tiger salamander (Ambystoma californiense; CTS) correlate very well with how important those variables are in governing ongoing recruitment of CTS at the population level. This strong correlation suggests that Maxent's ranking of variable importance captures biologically realistic assessments of factors governing population persistence. However, this result holds only when Maxent models are built using best-practice procedures and variables are ranked based on permutation importance. Our study highlights the need for building high-quality niche models and provides encouraging evidence that when such models are built, they can reflect important aspects of a species' ecology. PMID:27028071

  7. An accurate and efficient Lagrangian sub-grid model for multi-particle dispersion

    Toschi, Federico; Mazzitelli, Irene; Lanotte, Alessandra S.

    2014-11-01

    Many natural and industrial processes involve the dispersion of particles in turbulent flows. Despite recent theoretical progress in the understanding of particle dynamics in simple turbulent flows, complex geometries often call for numerical approaches based on Eulerian Large Eddy Simulation (LES). One important issue related to the Lagrangian integration of tracers in under-resolved velocity fields is the lack of spatial correlations at unresolved scales. Here we propose a computationally efficient Lagrangian model for the sub-grid velocity of tracers dispersed in statistically homogeneous and isotropic turbulent flows. The model incorporates the multi-scale nature of turbulent temporal and spatial correlations that are essential to correctly reproduce the dynamics of multi-particle dispersion. The new model is able to describe the Lagrangian temporal and spatial correlations in clouds of particles. In particular we show that pair and tetrad dispersion compare well with results from Direct Numerical Simulations of statistically isotropic and homogeneous 3D turbulence. This model may offer an accurate and efficient way to describe multi-particle dispersion in under-resolved turbulent velocity fields such as those employed in Eulerian LES. This work is part of the research programme FP112 of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). We acknowledge support from the EU COST Action MP0806.

  8. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter in order to fulfill the pharmacokinetics of medications or the time response of medical services. This paper presents a study about the prediction limits of a chronic disease with symptomatic crises: the migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782

  9. Development of accurate inelastic analysis models for materials constituting penetrations in reactor vessel

    Evaluation of the structural integrity of lower head penetrations in reactor vessels is required for investigating severe accident scenarios in nuclear power plants under the loss of core-cooling capacity. Materials are exposed to temperatures much higher than those experienced in normal operation, and the capability to evaluate material behavior under such circumstances needs to be developed to attain reliable results. Inelastic deformation behavior changes significantly with temperature, and its consideration is of critical importance in the development of inelastic constitutive models for application to such situations. A number of tensile tests have been performed on three materials constituting the lower-head penetrations, i.e. JIS SQV2A, SUS316 and NCF600, and the results were used for the development of accurate inelastic constitutive models for these materials. The models, based on the combination of initial yield stress, hardening and softening characteristics, were found to be successful in describing the deformation behavior of these materials over a wide range of temperatures between room temperature and 1100°C and strain rates covering three orders of magnitude. Ways to generalize the models to varying temperature conditions have also been presented. (author)

  10. A general pairwise interaction model provides an accurate description of in vivo transcription factor binding sites.

    Marc Santolini

    The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIP-seq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting
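
    As background for the comparison drawn above, a PWM scores a candidate site by summing independent per-position log-odds; the PIM adds pairwise coupling terms on top of this. A sketch of plain PWM log-odds scoring follows; the count matrix is a made-up example, not a real TF motif.

```python
import numpy as np

BASES = "ACGT"

def pwm_log_odds(counts, background=None, pseudocount=1.0):
    """Turn a position-by-base count matrix into a log-odds PWM."""
    if background is None:
        background = np.full(4, 0.25)
    probs = (counts + pseudocount) / (counts + pseudocount).sum(axis=1, keepdims=True)
    return np.log2(probs / background)

def score_site(pwm, site):
    """PWM score: sum of per-position log-odds (positions treated as independent)."""
    return sum(pwm[i, BASES.index(b)] for i, b in enumerate(site))

# Made-up 4-position motif counts (rows: positions, columns: A, C, G, T)
counts = np.array([[80,  5, 10,  5],
                   [ 2, 90,  4,  4],
                   [ 5,  5, 85,  5],
                   [70, 10, 10, 10]], dtype=float)
pwm = pwm_log_odds(counts)
print(score_site(pwm, "ACGA"))   # consensus-like site scores high
print(score_site(pwm, "TTTT"))   # poor match scores low
```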

  11. SMARTIES: Spheroids Modelled Accurately with a Robust T-matrix Implementation for Electromagnetic Scattering

    Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.

    2016-03-01

    SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease-of-use as Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics, including calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field calculations as well as near-fields, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements which may be calculated more accurately than with competing codes.

  12. Spiral CT scanning plan to generate accurate Fe models of the human femur

    In spiral computed tomography (CT), source rotation, patient translation, and data acquisition are conducted continuously. Settings of the detector collimation and the table increment affect the image quality in terms of spatial and contrast resolution. This study assessed and measured the efficacy of spiral CT in those applications where the accurate reconstruction of bone morphology is critical: custom-made prosthesis design or three-dimensional modelling of the mechanical behaviour of long bones. Results show that conventional CT grants the highest accuracy. Spiral CT with D=5 mm and P=1.5 in the regions where the morphology is more regular slightly degrades the image quality, but allows a higher number of images to be acquired at comparable cost, increasing the longitudinal resolution of the acquired data set. (author)

  13. An Efficient Hybrid DSMC/MD Algorithm for Accurate Modeling of Micro Gas Flows

    Liang, Tengfei

    2013-01-01

    Aiming at simulating micro gas flows with accurate boundary conditions, an efficient hybrid algorithm is developed by combining the molecular dynamics (MD) method with the direct simulation Monte Carlo (DSMC) method. The efficiency comes from the fact that the MD method is applied only within the gas-wall interaction layer, characterized by the cut-off distance of the gas-solid interaction potential, to resolve accurately the gas-wall interaction process, while the DSMC method is employed in the remaining portion of the flow field to efficiently simulate rarefied gas transport outside the gas-wall interaction layer. A unique feature of the present scheme is that the coupling between the two methods is realized by matching the molecular velocity distribution function at the DSMC/MD interface, hence there is no need for one-to-one mapping between an MD gas molecule and a DSMC simulation particle. Further improvement in efficiency is achieved by taking advantage of gas rarefaction inside the gas-wall interaction layer and by employing the "smart-wall model" proposed by Barisik et al. The developed hybrid algorithm is validated on two classical benchmarks, namely the 1-D Fourier thermal problem and the Couette shear flow problem. Both the accuracy and efficiency of the hybrid algorithm are discussed. As an application, the hybrid algorithm is employed to simulate the thermal transpiration coefficient in the free-molecule regime for a system with an atomically smooth surface. The result is utilized to validate the coefficients calculated from the pure DSMC simulation with Maxwell and Cercignani-Lampis gas-wall interaction models.

  14. A non-contact method based on multiple signal classification algorithm to reduce the measurement time for accurately heart rate detection

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic measurement system through direct contact.
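
    MUSIC estimates narrow-band frequencies from the noise subspace of the signal's sample autocorrelation matrix, which is what allows a stable rate estimate from a short record. A generic single-tone sketch follows using synthetic data; the sampling rate, window length, and search range are illustrative, not the paper's measurement chain.

```python
import numpy as np

def music_spectrum(x, n_signal, m, freqs, fs):
    """MUSIC pseudospectrum of signal x at the trial frequencies `freqs` (Hz).

    n_signal: assumed signal-subspace dimension (2 per real sinusoid)
    m:        correlation matrix order (snapshot length)
    """
    # Build the snapshot matrix and the sample autocorrelation matrix
    snaps = np.array([x[i:i + m] for i in range(len(x) - m)])
    R = snaps.T @ snaps / snaps.shape[0]
    # Noise subspace: eigenvectors of the smallest eigenvalues
    w, V = np.linalg.eigh(R)
    En = V[:, :m - n_signal]
    spectrum = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * np.arange(m) / fs)    # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# Synthetic "heart" signal: 1.2 Hz tone (72 bpm) in noise, 10 s at 50 Hz sampling
fs, duration = 50.0, 10.0
t = np.arange(0, duration, 1 / fs)
x = np.cos(2 * np.pi * 1.2 * t) + 0.8 * np.random.default_rng(4).standard_normal(t.size)

freqs = np.linspace(0.7, 3.0, 400)                        # 42-180 bpm search range
spec = music_spectrum(x, n_signal=2, m=40, freqs=freqs, fs=fs)
print(f"estimated heart rate: {freqs[np.argmax(spec)] * 60:.1f} bpm")
```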

  15. Discrete state model and accurate estimation of loop entropy of RNA secondary structures.

    Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie

    2008-03-28

    Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone based on known RNA structures for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers the excluded volume effect. It is general and can be applied to calculating the entropy of loops with longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurement. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study on the effect of asymmetric loop sizes suggests that the loop entropy of internal loops is largely determined by the total loop length, and is only marginally affected by the asymmetric size of the two loops. Our finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html. PMID:18376982
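
    The Jacobson-Stockmayer extrapolation referenced here estimates the penalty of a long loop from a tabulated value at a reference length plus a logarithmic term. A sketch in the free-energy form commonly used in nearest-neighbour RNA models follows; the coefficient of 1.75 and the reference values are illustrative defaults, not the parameters derived in the paper.

```python
import numpy as np

R_GAS = 1.987e-3   # kcal/(mol*K)

def jacobson_stockmayer_dg(loop_len, ref_len=30, dg_ref=6.0, coeff=1.75, temp=310.15):
    """Extrapolate loop initiation free energy (kcal/mol) beyond a tabulated length.

    dg_ref: tabulated penalty at ref_len; coeff is ~1.75 in common nearest-neighbour sets.
    """
    return dg_ref + coeff * R_GAS * temp * np.log(loop_len / ref_len)

for n in (30, 40, 50):
    print(f"hairpin loop of {n} nt: ~{jacobson_stockmayer_dg(n):.2f} kcal/mol")
```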

  16. A Reduced High Frequency Transformer Model To Detect The Partial Discharge Locations

    El-Sayed M. El-Refaie; El-Sayed H. Shehab El-Dein

    2014-01-01

    Transformer modeling is the first step in improving partial discharge localization techniques. Different transformer models have been used for this purpose. This paper presents a reduced transformer model that can be used accurately for partial discharge localization. The model is investigated in ATPDraw (Alternative Transients Program) for the partial discharge localization application. A comparison between different transformer models is studied, the achieved results of...

  17. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    Kostelich Eric J

    2011-12-01

    Full Text Available Abstract Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck).

  18. SPARC: Mass Models for 175 Disk Galaxies with Spitzer Photometry and Accurate Rotation Curves

    Lelli, Federico; Schombert, James M

    2016-01-01

    We introduce SPARC (Spitzer Photometry & Accurate Rotation Curves): a sample of 175 nearby galaxies with new surface photometry at 3.6 um and high-quality rotation curves from previous HI/Halpha studies. SPARC spans a broad range of morphologies (S0 to Irr), luminosities (~5 dex), and surface brightnesses (~4 dex). We derive [3.6] surface photometry and study structural relations of stellar and gas disks. We find that both the stellar mass-HI mass relation and the stellar radius-HI radius relation have significant intrinsic scatter, while the HI mass-radius relation is extremely tight. We build detailed mass models and quantify the ratio of baryonic-to-observed velocity (Vbar/Vobs) for different characteristic radii and values of the stellar mass-to-light ratio (M/L) at [3.6]. Assuming M/L=0.5 Msun/Lsun (as suggested by stellar population models) we find that (i) the gas fraction linearly correlates with total luminosity, (ii) the transition from star-dominated to gas-dominated galaxies roughly correspond...
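
    As a hedged illustration of the baryonic-to-observed velocity ratio discussed above, the snippet below assumes the usual decomposition of the baryonic contribution into gas, disk, and bulge terms scaled by stellar mass-to-light ratios; the numbers are invented and are not SPARC data.

```python
# Baryonic rotation velocity from component velocities (km/s), under the
# common decomposition Vbar^2 = Vgas^2 + ML_disk*Vdisk^2 + ML_bulge*Vbulge^2.
import numpy as np

def vbar(v_gas, v_disk, v_bulge, ml_disk=0.5, ml_bulge=0.7):
    return np.sqrt(v_gas**2 + ml_disk * v_disk**2 + ml_bulge * v_bulge**2)

v_obs = 150.0                              # observed velocity at one radius, made up
print(vbar(40.0, 180.0, 0.0) / v_obs)      # Vbar/Vobs for illustrative inputs
```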

  19. An Approach to More Accurate Model Systems for Purple Acid Phosphatases (PAPs).

    Bernhardt, Paul V; Bosch, Simone; Comba, Peter; Gahan, Lawrence R; Hanson, Graeme R; Mereacre, Valeriu; Noble, Christopher J; Powell, Annie K; Schenk, Gerhard; Wadepohl, Hubert

    2015-08-01

    The active site of mammalian purple acid phosphatases (PAPs) has a dinuclear iron site in two accessible oxidation states (Fe(III)2 and Fe(III)Fe(II)), and the heterovalent form is the active one, involved in the regulation of phosphate and phosphorylated metabolite levels in a wide range of organisms. Therefore, catalytically competent model systems are believed to require two sites with different coordination geometries, to stabilize the heterovalent active form, and, in addition, hydrogen bond donors to enable fixation of the substrate and release of the product. Two ligands and their dinuclear iron complexes have been studied in detail. The solid-state structures and properties, studied by X-ray crystallography, magnetism, and Mössbauer spectroscopy, and the solution structural and electronic properties, investigated by mass spectrometry, electronic, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and Mössbauer spectroscopies and electrochemistry, are discussed in detail in order to understand the structures and relative stabilities in solution. In particular, with one of the ligands, a heterovalent Fe(III)Fe(II) species has been produced by chemical oxidation of the Fe(II)2 precursor. The phosphatase reactivities of the complexes, in particular also of the heterovalent complex, are reported. These studies include pH-dependent as well as substrate-concentration-dependent studies, leading to pH profiles, catalytic efficiencies and turnover numbers, and indicate that the heterovalent diiron complex discussed here is an accurate PAP model system. PMID:26196255

  20. New possibilities of accurate particle characterisation by applying direct boundary models to analytical centrifugation

    Walter, Johannes; Thajudeen, Thaseem; Süß, Sebastian; Segets, Doris; Peukert, Wolfgang

    2015-04-01

    Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles.

  1. Blasting Vibration Safety Criterion Analysis with Equivalent Elastic Boundary: Based on Accurate Loading Model

    Qingwen Li

    2015-01-01

    Full Text Available In tunnel and underground space engineering, the blasting wave attenuates from a shock wave to a stress wave to an elastic seismic wave in the host rock. The host rock also forms a crushed zone, a fractured zone, and an elastic seismic zone under the blasting loading and waves. In this paper, an accurate mathematical dynamic loading model was built, and the crushed and fractured zones were considered as the blasting vibration source, thus deducting the portion of energy spent on breaking the host rock. This complicated dynamic problem of segmented differential blasting was thereby treated as an equivalent elastic boundary problem by taking advantage of Saint-Venant's theorem. Finally, a 3D model in the finite element software FLAC3D, using the constitutive parameters, the uniformly distributed time-varying loading, and the cylindrical attenuation law, was employed to predict the velocity curves and effective tensile stress curves for calculating safety criterion formulas for the surrounding rock and tunnel liner, which were verified well against the in situ monitoring data.

  2. Quad-Band Bowtie Antenna Design for Wireless Communication System Using an Accurate Equivalent Circuit Model

    Mohammed Moulay

    2015-01-01

    Full Text Available A novel quad-band bowtie antenna configuration suitable for wireless applications is proposed based on an accurate equivalent circuit model. The simple configuration and low-profile nature of the proposed antenna lead to easy multifrequency operation. The proposed antenna is designed to satisfy specific bandwidth specifications for current communication systems, including Bluetooth (frequency range 2.4–2.485 GHz), the Unlicensed National Information Infrastructure (U-NII) low band (frequency range 5.15–5.35 GHz), the U-NII mid band (frequency range 5.47–5.725 GHz), and the band used for mobile WiMAX (frequency range 3.3–3.6 GHz). To validate the proposed equivalent circuit model, the simulation results are compared with those obtained by the method of moments of the Momentum software, the finite integration technique of CST Microwave Studio, and the finite element method of the HFSS software. Excellent agreement is achieved for all the designed antennas. The analysis of the simulated results confirms the successful design of the quad-band bowtie antenna.

  3. Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft

    Bartels, Robert E.

    2013-01-01

    The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.
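
    A minimal sketch of the snapshot-POD step described above follows; the snapshot matrix is a synthetic low-rank stand-in, not the CFD pressure data of the record, and the retained mode count is arbitrary.

```python
# Snapshot POD sketch: assemble a snapshot matrix, extract POD modes with a
# thin SVD, and reconstruct one snapshot from a few modal coefficients.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots, rank = 5000, 200, 5
snapshots = (rng.standard_normal((n_points, rank))
             @ rng.standard_normal((rank, n_snapshots))
             + 0.01 * rng.standard_normal((n_points, n_snapshots)))

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

r = 10                                    # retained POD modes
modes = U[:, :r]                          # spatial modes
coeffs = np.diag(s[:r]) @ Vt[:r, :]       # temporal coefficients

k = 37                                    # reconstruct one snapshot
approx = mean_field[:, 0] + modes @ coeffs[:, k]
err = np.linalg.norm(approx - snapshots[:, k]) / np.linalg.norm(snapshots[:, k])
print(f"relative reconstruction error with {r} modes: {err:.2e}")
```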

  4. ACCURATE UNIVERSAL MODELS FOR THE MASS ACCRETION HISTORIES AND CONCENTRATIONS OF DARK MATTER HALOS

    A large number of observations have constrained cosmological parameters and the initial density fluctuation spectrum to a very high accuracy. However, cosmological parameters change with time and the power index of the power spectrum dramatically varies with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the universe age when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass, when

  5. Inflation model building with an accurate measure of e-folding

    Chongchitnan, Sirichai

    2016-01-01

    We revisit the problem of measuring the number of e-foldings during inflation. It has become standard practice to take the logarithmic growth of the scale factor as a measure of the amount of inflation. However, this is only an approximation to the true amount of inflation required to solve the horizon and flatness problems. The aim of this work is to quantify the error in this approximation and show how it can be avoided. We present an alternative framework for inflation model building using the inverse Hubble radius, aH, as the key parameter. We show that in this formalism the correct number of e-foldings arises naturally as a measure of inflation. As an application, we present an interesting model in which the entire inflationary dynamics can be solved analytically and exactly, and which, in special cases, reduces to the familiar class of power-law models.
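
    The distinction drawn in the abstract can be restated compactly; the notation below is mine, not the paper's.

```latex
% Conventional e-fold count versus a count based on the comoving Hubble radius aH.
\[
  N_{\mathrm{conv}} = \ln\frac{a_{\mathrm{end}}}{a_{\mathrm{ini}}}
  \qquad\text{vs.}\qquad
  \tilde{N} = \ln\frac{(aH)_{\mathrm{end}}}{(aH)_{\mathrm{ini}}}
            = N_{\mathrm{conv}} + \ln\frac{H_{\mathrm{end}}}{H_{\mathrm{ini}}},
\]
% so the two counts agree only when H is constant (exact de Sitter); otherwise
% the conventional count misstates how much the comoving Hubble radius shrinks.
```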

  6. Reduced order model of draft tube flow

    Swirling flow with compact coherent structures is a very good candidate for proper orthogonal decomposition (POD), i.e. for decomposition into eigenmodes, which are the cornerstones of the flow field. The present paper focuses on POD of steady flows, which correspond to different operating points of Francis turbine draft tube flow. A set of eigenmodes is built using a limited number of snapshots from computational simulations. The resulting reduced order model (ROM) describes the whole operating range of the draft tube. The ROM makes it possible to interpolate between the operating points, exploiting the knowledge about the significance of particular eigenmodes, and thus to reconstruct the velocity field at any operating point within the given range. A practical example, which employs axisymmetric simulations of the draft tube flow, illustrates the accuracy of the ROM in regions without vortex breakdown, together with the need for a higher-resolution snapshot database close to locations of sudden flow changes (e.g. vortex breakdown). A ROM based on POD interpolation is a very suitable tool for gaining insight into the flow physics of draft tube flows (especially energy transfers between different operating points), for supplying data for subsequent stability analysis, or as an initialization database for advanced flow simulations

  7. The accurate simulation of the tension test for stainless steel sheet: the plasticity model

    Full text: The overall aim of this research project is to achieve an accurate simulation of a hydroforming process chain, in this case the manufacturing of a metal bellows. The work is done in cooperation with the project group for numerical research at the computer centre of the University of Karlsruhe, which is responsible for the simulation itself, while the Institute for Metal Forming Technology (IFU) of the University of Stuttgart is responsible for the material modeling and the resulting differential equations describing the material behavior. Hydroforming technology uses highly compressed fluid media (up to 4200 bar) to form the basic, mostly metallic, material. One hydroforming field is tube hydroforming (THF), which uses tubes or extrusions as the basic material. The forming conditions created by hydroforming are quite different from those originating in other processes such as deep drawing. This is why currently available simulation software is not always able to produce satisfactory results when a hydroforming process is simulated. The partners in this project address this problem with the FDEM simulation software, developed by W. Schoenauer at the University of Karlsruhe, Germany. It was designed to solve systems of partial differential equations, which in this project are delivered by the IFU. The manufacturing of a metal bellows by hydroforming leads to tensile stress in the longitudinal and tangential directions and to bending loads due to the shifting and roll-forming process. Therefore, as a first step, the standardized tensile test is simulated. For plastic deformation a material model developed by D. Banabic is used. It describes the plastic behavior of orthotropic sheet metal. For elastic deformation Hooke's law for isotropic materials is used. In permanent iteration with the simulation, the material model used has to be checked for validity and modified if necessary. Refs. 3 (author)

  8. Bacteriophage Infection of Model Metal Reducing Bacteria

    Weber, K. A.; Bender, K. S.; Gandhi, K.; Coates, J. D.

    2008-12-01

    filtered through a 0.22 μm sterile nylon filter, stained with phosphotungstic acid (PTA), and examined using transmission electron microscopy (TEM). TEM revealed the presence of virus-like particles in the culture exposed to mitomycin C. Together these results suggest an active infection with a lysogenic bacteriophage in the model metal reducing bacteria, Geobacter spp., which could affect metabolic physiology and subsequently metal reduction in environmental systems.

  9. Quantitative evaluation of gas entrainment by numerical simulation with accurate physics model

    In the design study on a large-scale sodium-cooled fast reactor (JSFR), the reactor vessel is made compact to reduce construction costs and enhance economic competitiveness. However, such a reactor vessel induces higher coolant flows in the vessel and causes several thermal-hydraulics issues, e.g. the gas entrainment (GE) phenomenon. GE in the JSFR may occur at the cover gas-coolant interface in the vessel due to a strong vortex at the interface. This type of GE has been studied experimentally, numerically and theoretically. Therefore, the onset condition of GE can be evaluated conservatively. However, to clarify the negative influences of GE on the JSFR, not only the onset condition of GE but also the entrained gas (bubble) flow rate has to be evaluated. As far as we know, studies on entrained gas flow rates are quite limited in both experimental and numerical fields. In this study, the authors perform numerical simulations to investigate the entrained gas amount in a hollow vortex experiment (a cylindrical vessel experiment). To simulate interfacial deformations accurately, a high-precision numerical simulation algorithm for gas-liquid two-phase flows is employed. First, fine cells are applied to the region near the center of the vortex to reproduce the steep radial gradient of the circumferential velocity in this region. Then, the entrained gas flow rates are evaluated from the simulation results and compared to the experimental data. As a result, the numerical simulation gives a somewhat larger entrained gas flow rate than the experiment. However, both the numerical simulation and the experiment show entrained gas flow rates that are proportional to the outlet water velocity. In conclusion, it is confirmed that the developed numerical simulation algorithm can be applied to the quantitative evaluation of GE. (authors)

  10. Studies of accurate multi-component lattice Boltzmann models on benchmark cases required for engineering applications

    Otomo, Hiroshi; Li, Yong; Dressler, Marco; Staroselsky, Ilya; Zhang, Raoyang; Chen, Hudong

    2016-01-01

    We present recent developments in lattice Boltzmann modeling for multi-component flows, implemented on the platform of a general purpose, arbitrary geometry solver PowerFLOW. The presented benchmark cases demonstrate the method's accuracy and robustness necessary for handling real-world engineering applications at practical resolution and computational cost. The key requirements for such an approach are that the relevant physical properties and flow characteristics do not strongly depend on numerics. In particular, the strength of surface tension obtained using our new approach is independent of viscosity and resolution, while spurious currents are significantly suppressed. Using a much improved surface wetting model, undesirable numerical artifacts, including thin films and artificial droplet movement on inclined walls, are significantly reduced.

  11. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant at the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases, or inaccuracy in their calculation, may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of a `family of secular functions', which we herein call `adaptive mode observers', is thus naturally introduced to implement this strategy; the underlying idea has been distinctly noted for the first time and may be generalized to other applications such as free oscillations, or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee, at the same time, no loss of any physically existent modes and high precision, without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of the `turning point', our algorithm is remarkably efficient as well as stable and accurate, and can be used as a powerful tool for a wide range of related applications.
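
    As a generic illustration of the root-searching step such normal-mode codes rely on, here is a bracket-and-refine sketch with a toy secular function; the actual secular functions ("adaptive mode observers") of the record are built from generalized reflection/transmission coefficients, which this placeholder does not attempt to reproduce.

```python
# Scan a secular function over phase velocity, bracket sign changes, and
# refine each bracket by bisection-type root finding.
import numpy as np
from scipy.optimize import brentq

def secular(c):
    # toy stand-in with several roots in [1, 5]
    return np.cos(3.0 * c) - 0.2 * c

c_grid = np.linspace(1.0, 5.0, 400)
vals = secular(c_grid)
roots = []
for a, b, fa, fb in zip(c_grid[:-1], c_grid[1:], vals[:-1], vals[1:]):
    if fa * fb < 0:                       # sign change => bracketed root
        roots.append(brentq(secular, a, b))
print(roots)
```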

  12. Precise and accurate assessment of uncertainties in model parameters from stellar interferometry. Application to stellar diameters

    Lachaume, Regis; Rabus, Markus; Jordan, Andres

    2015-08-01

    In stellar interferometry, the assumption that the observables can be treated as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offers means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are correlated by construction, though. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if the uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and there is no generic implementation available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or make Gaussian assumptions. By repeatedly picking at random which interferograms and which calibrator stars enter the analysis, and which errors are assigned to their diameters, and by performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
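
    The bootstrap idea described above can be illustrated with a toy one-parameter fit; the visibility data and the fitting model below are synthetic placeholders, not PIONIER observations.

```python
# Resample the observations with replacement, redo the fit each time, and
# read the parameter uncertainty off the resulting sample.
import numpy as np

rng = np.random.default_rng(1)
n_obs = 60
true_diam = 1.0                                # "diameter" parameter, made up
baselines = rng.uniform(40, 130, n_obs)        # projected baselines (m), made up
vis2_obs = np.exp(-(baselines * true_diam / 200.0) ** 2) \
           + 0.02 * rng.standard_normal(n_obs)

def fit_diameter(b, v2):
    # crude 1-parameter grid fit of a Gaussian-like visibility model
    diams = np.linspace(0.5, 1.5, 500)
    chi2 = [np.sum((v2 - np.exp(-(b * d / 200.0) ** 2)) ** 2) for d in diams]
    return diams[int(np.argmin(chi2))]

samples = []
for _ in range(1000):
    idx = rng.integers(0, n_obs, n_obs)        # bootstrap resample
    samples.append(fit_diameter(baselines[idx], vis2_obs[idx]))

samples = np.array(samples)
print(f"diameter = {samples.mean():.3f} +/- {samples.std():.3f} (arbitrary units)")
```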

  13. Accurate modeling of cache replacement policies in a Data-Grid.

    Otoo, Ekow J.; Shoshani, Arie

    2003-01-23

    Caching techniques have been used to improve the performance gap of storage hierarchies in computing systems. In data intensive applications that access large data files over a wide area network environment, such as a data grid, caching mechanisms can significantly improve data access performance under appropriate workloads. In a data grid, it is envisioned that local disk storage resources retain or cache the data files being used by local applications. Under a workload of shared access and high locality of reference, the performance of the caching techniques depends heavily on the replacement policies being used. A replacement policy effectively determines which set of objects must be evicted when space is needed. Unlike cache replacement policies in virtual memory paging or database buffering, developing an optimal replacement policy for data grids is complicated by the fact that the file objects being cached have varying sizes and transfer and processing costs that vary with time. We present an accurate model for evaluating various replacement policies and propose a new replacement algorithm referred to as "Least Cost Beneficial based on K backward references (LCB-K)". Using this modeling technique, we compare LCB-K with various replacement policies such as Least Frequently Used (LFU), Least Recently Used (LRU), Greedy DualSize (GDS), etc., using synthetic and actual workloads of accesses to and from tertiary storage systems. The results obtained show that LCB-K and GDS are the most cost-effective cache replacement policies for storage resource management in data grids.
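
    A loose sketch of a cost-beneficial eviction policy in the spirit of LCB-K follows; the scoring rule used here (estimated re-fetch cost per byte, weighted by the access rate over the last K references) is an illustrative guess, not the exact definition given in the record.

```python
# Toy cache that evicts the file whose estimated benefit of retention
# (re-fetch cost * recent access rate / size) is lowest.
import time

class CostAwareCache:
    def __init__(self, capacity_bytes, k=3):
        self.capacity = capacity_bytes
        self.k = k
        self.files = {}          # name -> (size, fetch_cost, [recent access times])

    def access(self, name, size, fetch_cost):
        now = time.time()
        if name in self.files:
            self.files[name][2].append(now)
            del self.files[name][2][:-self.k]      # keep only the last k references
            return "hit"
        while sum(s for s, _, _ in self.files.values()) + size > self.capacity:
            self.files.pop(min(self.files, key=self._score))   # evict lowest score
        self.files[name] = (size, fetch_cost, [now])
        return "miss"

    def _score(self, name):
        size, cost, times = self.files[name]
        span = max(time.time() - times[0], 1e-6)
        rate = len(times) / span                   # accesses per second (last k refs)
        return cost * rate / size                  # benefit of keeping the file

cache = CostAwareCache(capacity_bytes=10_000)
print(cache.access("a.dat", 4000, fetch_cost=5.0))
print(cache.access("b.dat", 4000, fetch_cost=1.0))
print(cache.access("c.dat", 4000, fetch_cost=8.0))   # forces an eviction
```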

  14. Accurate Locally Conservative Discretizations for Modeling Multiphase Flow in Porous Media on General Hexahedra Grids

    Wheeler, M.F.

    2010-09-06

    For many years there have been formulations considered for modeling single phase flow on general hexahedra grids. These include the extended mixed finite element method, and families of mimetic finite difference methods. In most of these schemes either no rate of convergence of the algorithm has been demonstrated both theoretically and computationally, or a more complicated saddle point system needs to be solved for an accurate solution. Here we describe a multipoint flux mixed finite element (MFMFE) method [5, 2, 3]. This method is motivated by the multipoint flux approximation (MPFA) method [1]. The MFMFE method is locally conservative with continuous flux approximations and is a cell-centered scheme for the pressure. Compared to the MPFA method, the MFMFE has a variational formulation, since it can be viewed as a mixed finite element with special approximating spaces and quadrature rules. The framework allows handling of hexahedral grids with non-planar faces by applying trilinear mappings from physical elements to reference cubic elements. In addition, there are several multiscale and multiphysics extensions such as the mortar mixed finite element method that allows the treatment of non-matching grids [4]. Extensions to two-phase oil-water flow are considered. We reformulate the two-phase model in terms of total velocity, capillary velocity, water pressure, and water saturation. We choose water pressure and water saturation as primary variables. The total velocity is driven by the gradient of the water pressure and total mobility. An iterative coupling scheme is employed for the coupled system. This scheme allows treatments of different time scales for the water pressure and water saturation. In each time step, we first solve the pressure equation using the MFMFE method; we then

  15. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm3) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm3, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm3, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm, and 1

  16. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  17. A semi-implicit, second-order-accurate numerical model for multiphase underexpanded volcanic jets

    S. Carcano

    2013-11-01

    Full Text Available An improved version of the PDAC (Pyroclastic Dispersal Analysis Code; Esposti Ongaro et al., 2007) numerical model for the simulation of multiphase volcanic flows is presented and validated for the simulation of multiphase volcanic jets in supersonic regimes. The present version of PDAC includes second-order time and space discretizations and fully multidimensional advection discretizations in order to reduce numerical diffusion and enhance the accuracy of the original model. The model is tested on the problem of jet decompression in both two and three dimensions. For homogeneous jets, numerical results are consistent with experimental results at the laboratory scale (Lewis and Carlson, 1964). For nonequilibrium gas–particle jets, we consider monodisperse and bidisperse mixtures, and we quantify nonequilibrium effects in terms of the ratio between the particle relaxation time and a characteristic jet timescale. For coarse particles and low particle load, numerical simulations reproduce well both laboratory experiments and numerical simulations carried out with an Eulerian–Lagrangian model (Sommerfeld, 1993). At the volcanic scale, we consider steady-state conditions associated with the development of Vulcanian and sub-Plinian eruptions. For the finest particles produced in these regimes, we demonstrate that the solid phase is in mechanical and thermal equilibrium with the gas phase and that the jet decompression structure is well described by a pseudogas model (Ogden et al., 2008). Coarse particles, on the other hand, display significant nonequilibrium effects, which are associated with their larger relaxation time. Deviations from the equilibrium regime, with maximum velocity and temperature differences on the order of 150 m s−1 and 80 K across shock waves, occur especially during the rapid acceleration phases, and are able to modify the jet dynamics substantially with respect to the homogeneous case.

  18. Towards a More Accurate Solar Power Forecast By Improving NWP Model Physics

    Köhler, C.; Lee, D.; Steiner, A.; Ritter, B.

    2014-12-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the uncertainties associated with the large share of weather-dependent power sources. Precise power forecasts, well-timed energy trading on the stock market, and electrical grid stability can then be maintained. The research project EWeLiNE is a collaboration of the German Weather Service (DWD), the Fraunhofer Institute (IWES) and three German transmission system operators (TSOs). Together, wind and photovoltaic (PV) power forecasts shall be improved by combining optimized NWP and enhanced power forecast models. The work conducted focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. Not only the representation of the model cloud characteristics, but also special events such as Saharan dust over Germany and the solar eclipse in 2015 are treated and their effect on solar power accounted for. An overview of the EWeLiNE project and results of the ongoing research will be presented.

  19. Reduced form models of bond portfolios

    Matti Koivu; Teemu Pennanen

    2010-01-01

    We derive simple return models for several classes of bond portfolios. With only one or two risk factors our models are able to explain most of the return variations in portfolios of fixed rate government bonds, inflation linked government bonds and investment grade corporate bonds. The underlying risk factors have natural interpretations which make the models well suited for risk management and portfolio design.

  20. PSI/TM-Coffee: a web server for fast and accurate multiple sequence alignments of regular and transmembrane proteins using homology extension on reduced databases.

    Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming

    2016-07-01

    The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. The server also offers the possibility of using reference databases of transmembrane proteins (TMPs) to allow even faster homology extension for this important category of proteins. Aside from an MSA, the server also outputs a topology prediction for TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. PMID:27106060

  1. Rapid nonlinear analysis for electrothermal microgripper using reduced order model based Krylov subspace

    Conventional numerical analysis methods cannot perform rapid system-level simulation of MEMS, especially when sensing and testing integrated circuits are included. Reduced-order models can simulate the behavioral characteristics of multiphysics energy-domain models, including nonlinear analysis. This paper sets up a reduced-order model of an electrothermal microgripper using the Krylov subspace projection method. The system functions were assembled through finite element analysis using Ansys. A structural-electro-thermal analysis was applied to the microgripper finite element model, and the model order was reduced through a second-order Krylov subspace projection method based on the Arnoldi process. The simulation results from the electrothermal reduced-order model of the microgripper are accurate compared with the finite element analysis, while requiring only a small amount of computation
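
    The flavour of Krylov-subspace model order reduction can be sketched as below; for brevity this is a first-order Arnoldi moment-matching example with random stand-in matrices, not the second-order electrothermal microgripper model of the record.

```python
# Project a large linear state-space model E*dx/dt = A*x + b*u, y = c^T x
# onto a Krylov subspace so that leading transfer-function moments at s0 = 0
# are matched by the reduced model.
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 10                              # full and reduced orders
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))
E = np.eye(n)
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Arnoldi on K_r(A^{-1} E, A^{-1} b)
M = np.linalg.solve(A, E)
v = np.linalg.solve(A, b)
V = np.zeros((n, r))
V[:, 0] = v / np.linalg.norm(v)
for j in range(1, r):
    w = M @ V[:, j - 1]
    w -= V[:, :j] @ (V[:, :j].T @ w)        # Gram-Schmidt orthogonalization
    V[:, j] = w / np.linalg.norm(w)

Ar, Er, br, cr = V.T @ A @ V, V.T @ E @ V, V.T @ b, V.T @ c

# Compare transfer functions H(s) = c^T (s E - A)^{-1} b at a test frequency
s = 1j * 2.0
H_full = c @ np.linalg.solve(s * E - A, b)
H_red = cr @ np.linalg.solve(s * Er - Ar, br)
print(abs(H_full - H_red) / abs(H_full))    # relative reduction error
```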

  2. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California 90024 (United States)

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  3. Causal transmission in reduced-form models

    Vassili Bazinas; Bent Nielsen

    2015-01-01

    We propose a method to explore the causal transmission of a catalyst variable through two endogenous variables of interest. The method is based on the reduced-form system formed from the conditional distribution of the two endogenous variables given the catalyst. The method combines elements from instrumental variable analysis and Cholesky decomposition of structural vector autoregressions. We give conditions for uniqueness of the causal transmission.

  4. Test of the standard model at low energy: accurate measurements of the branching rates of 62Ga; accurate measurements of the half-life of 38Ca

    Precise measurements of Fermi superallowed 0+ → 0+ β decays provide a powerful tool to study the weak interaction properties in the framework of the Standard Model (SM). Collectively, the comparative half-lives (ft) of these transitions allow a sensitive probe of the CVC (Conserved Vector Current) hypothesis and contribute to the most demanding test of the unitarity of the quark-mixing CKM matrix top row, by providing, so far, the most accurate determination of its dominant element (Vud). Until recently, an apparent departure from unity cast doubt on the validity of the minimal SM and thus stimulated a considerable effort to extend the study to other available Fermi emitters. 62Ga and 38Ca are among the key nuclei for achieving these precision tests and verifying the reliability of the corrections applied to the experimental ft-values. The 62Ga β-decay was investigated at the IGISOL separator, with an experimental setup composed of 3 EUROBALL Clover detectors for γ-ray detection. Very weak intensity (62Zn. The newly established analogue branching ratio (B.R.A = 99.893(24)%) was used to compute the universal Ft-value of 62Ga. The latter turned out to be in good agreement with the 12 well-known cases. Compatibility between the upper limit set here on the term (δIM) and the theoretical prediction suggests that the isospin-symmetry-breaking correction is indeed large for the heavy (A ≥ 62) β-emitters. The study of the 38Ca decay was performed at the CERN-ISOLDE facility. Injection of fluorine into the ion source, in order to chemically select the isotopes of interest, assisted by the REXTRAP Penning trap facility and a time-of-flight analysis, enabled us to eliminate the troublesome 38mK efficiently. For the first time, the 38Ca half-life has been measured with a highly purified radioactive sample. The preliminary result obtained, T1/2(38Ca) = 445.8(10) ms, improves the precision on the half-life as determined from previous measurements by a factor close to 10

  5. Reducing the Ising model to matchings

    Huber, Mark

    2009-01-01

    Canonical paths is one of the most powerful tools available to show that a Markov chain is rapidly mixing, thereby enabling approximate sampling from complex high-dimensional distributions. Two success stories for the canonical paths method are chains for drawing matchings in a graph, and a chain for a version of the Ising model called the subgraphs world. In this paper, it is shown that a subgraphs-world draw can be obtained by taking a draw from matchings on a graph that is linear in the size of the original graph. This provides a partial answer to why canonical paths works so well for both problems, as well as providing a new source of algorithms for the Ising model. For instance, this new reduction immediately yields a fully polynomial-time approximation scheme for the Ising model on a bounded-degree graph when the magnetization is bounded away from 0.

  6. A simple and accurate model for Love wave based sensors: Dispersion equation and mass sensitivity

    Jiansheng Liu

    2014-01-01

    The dispersion equation is an important tool for analyzing the propagation properties of acoustic waves in layered structures. For Love wave (LW) sensors, the dispersion equation with the substrate treated as isotropic is too rough to give accurate solutions; the full dispersion equation with the substrate treated as piezoelectric is too complicated to give simple and practical expressions for optimizing LW-based sensors. In this work, a dispersion equation is introduced for Love waves in a layered struct...

  7. Accurate SPICE Modeling of Poly-silicon Resistor in 40nm CMOS Technology Process for Analog Circuit Simulation

    Sun Lijie

    2015-01-01

    Full Text Available In this paper, a SPICE model of a poly-silicon resistor is accurately developed based on silicon data. To describe the non-linear R-V trend, a new correlation between temperature and voltage is found for the non-silicide poly-silicon resistor. A scalable model is developed from the temperature-dependent characteristics (TDC) and the temperature-dependent voltage characteristics (TDVC) extracted from the R-V data. In addition, the parasitic capacitances between poly and substrate are extracted from real silicon structures in place of conventional simulation data. The capacitance data are measured using an on-wafer charge-induced-injection error-free charge-based capacitance measurement (CIEF-CBCM) technique driven by a non-overlapping clock generation circuit. All modeling test structures are designed and fabricated using a 40 nm CMOS technology process. The new SPICE model of the poly-silicon resistor matches silicon more accurately for analog circuit simulation.

  8. Small pores in soils: Is the physico-chemical environment accurately reflected in biogeochemical models ?

    Weber, Tobias K. D.; Riedel, Thomas

    2015-04-01

    Free water is a prerequisite for the chemical reactions and biological activity in earth's upper crust that are essential to life. The void volume between the solid compounds provides space for water, air, and organisms that thrive on the consumption of minerals and organic matter, thereby regulating soil carbon turnover. However, not all water in the pore space of soils and sediments is in its liquid state. This is a result of the adhesive forces which reduce the water activity in small pores and at charged mineral surfaces. This water has a lower tendency to react chemically in solution, as this additional binding energy lowers its activity. In this work, we estimated the amount of soil pore water that is thermodynamically different from a simple aqueous solution. The quantity of soil pore water with properties different from liquid water was found to increase systematically with increasing clay content. The significance of this is that grain size and surface area apparently affect the thermodynamic state of water. This implies that current methods to determine water content, traditionally based on bulk density or gravimetric water content after drying at 105 °C, overestimate the amount of free water in a soil, especially at higher clay content. Our findings have consequences for biogeochemical processes in soils; e.g. nutrients may be contained in water which is not free, which could enhance preservation. From water activity measurements on a set of various soils with 0 to 100 wt-% clay, we can show that 5 to 130 mg H2O per g of soil can generally be considered as unsuitable for microbial respiration. These results may therefore provide a unifying explanation for the grain size dependency of organic matter preservation in sedimentary environments and call for a revised view of the biogeochemical environment in soils and sediments. This could allow a different type of process-oriented modelling.

  9. Fast and Accurate Icepak-PSpice Co-Simulation of IGBTs under Short-Circuit with an Advanced PSpice Model

    Wu, Rui; Iannuzzo, Francesco; Wang, Huai;

    2014-01-01

    A basic problem in the IGBT short-circuit failure mechanism study is to obtain a realistic temperature distribution inside the chip, which demands accurate electrical simulation to obtain the power loss distribution as well as detailed IGBT geometry and material information. This paper describes an unprecedentedly fast and accurate approach to electro-thermal simulation of power IGBTs, suitable for simulating normal as well as abnormal conditions, based on an advanced physics-based PSpice model together with the ANSYS/Icepak FEM thermal simulator in a closed loop. Through this approach, significantly faster simulation speed with respect to conventional double-physics simulations, together with very accurate results, can be achieved. A case study is given which presents detailed electrical and thermal simulation results of an IGBT module under short-circuit conditions. Furthermore, thermal maps in the case of...

  10. Effective and accurate approach for modeling of commensurate–incommensurate transition in krypton monolayer on graphite

    The commensurate–incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there remains room for generalization of the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs–Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for the determination of phase diagrams of 2D argon layers on a uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. The applicability of the methodology is demonstrated on the krypton–graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of coexisting phases, and a method for predicting adsorption isotherms are considered, accounting for a compression of the graphite due to the krypton–carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas–solid and solid–solid system

  11. Surface electron density models for accurate ab initio molecular dynamics with electronic friction

    Novko, D.; Blanco-Rey, M.; Alducin, M.; Juaristi, J. I.

    2016-06-01

    Ab initio molecular dynamics with electronic friction (AIMDEF) is a valuable methodology to study the interaction of atomic particles with metal surfaces. This method, in which the effect of low-energy electron-hole (e-h) pair excitations is treated within the local density friction approximation (LDFA) [Juaristi et al., Phys. Rev. Lett. 100, 116102 (2008), 10.1103/PhysRevLett.100.116102], can provide an accurate description of both e-h pair and phonon excitations. In practice, its application becomes a complicated task in situations of substantial surface atom displacements because the LDFA requires knowledge at each integration step of the bare surface electron density. In this work, we propose three different methods of calculating the electron density of the distorted surface on the fly, and we discuss their suitability under typical surface distortions. The investigated methods are used in AIMDEF simulations for three illustrative adsorption cases, namely, dissociated H2 on Pd(100), N on Ag(111), and N2 on Fe(110). Our AIMDEF calculations performed with the three approaches highlight the importance of going beyond the frozen surface density to accurately describe the energy released into e-h pair excitations in the case of large surface atom displacements.

  12. An accurate elasto-plastic frictional tangential force displacement model for granular-flow simulations: Displacement-driven formulation

    Zhang, Xiang; Vu-Quoc, Loc

    2007-07-01

    We present in this paper the displacement-driven version of a tangential force-displacement (TFD) model that accounts for both elastic and plastic deformations together with interfacial friction occurring in collisions of spherical particles. This elasto-plastic frictional TFD model, with its force-driven version presented in [L. Vu-Quoc, L. Lesburg, X. Zhang. An accurate tangential force-displacement model for granular-flow simulations: contacting spheres with plastic deformation, force-driven formulation, Journal of Computational Physics 196(1) (2004) 298-326], is consistent with the elasto-plastic frictional normal force-displacement (NFD) model presented in [L. Vu-Quoc, X. Zhang. An elasto-plastic contact force-displacement model in the normal direction: displacement-driven version, Proceedings of the Royal Society of London, Series A 455 (1991) 4013-4044]. Both the NFD model and the present TFD model are based on the concept of additive decomposition of the radius of contact area into an elastic part and a plastic part. The effect of permanent indentation after impact is represented by a correction to the radius of curvature. The effect of material softening due to plastic flow is represented by a correction to the elastic moduli. The proposed TFD model is accurate, and is validated against nonlinear finite element analyses involving plastic flows in both the loading and unloading conditions. The proposed consistent displacement-driven, elasto-plastic NFD and TFD models are designed for implementation in computer codes using the discrete-element method (DEM) for granular-flow simulations. The model is shown to be accurate and is validated against nonlinear elasto-plastic finite-element analysis.

  13. On the fast convergence modeling and accurate calculation of PV output energy for operation and planning studies

    Highlights: • A comprehensive modeling framework for photovoltaic power plants is presented. • Parameters for various modules are obtained using weather and manufacturer's data. • A fast and accurate algorithm calculates the five-parameter model of the PV module. • The output energy results are closer to measured data than those of SAM and RETScreen. • The overall plant model is recommended for simulation in optimal planning problems. - Abstract: Optimal planning of energy systems relies greatly upon the models used for system components. In this paper, a thorough modeling framework for photovoltaic (PV) power plants is developed for application to operation and planning studies. The model is precise and flexible, reflecting all the environmental and weather parameters that affect the performance of the PV module and inverter, the main components of a PV power plant. These parameters are surface radiation, ambient temperature and wind speed. The presented model can be used to estimate the plant's output energy for any time period and operating condition. Using a simple iterative process, the presented method demonstrates fast and accurate convergence while using only the limited information provided by manufacturers. The results obtained by the model are verified against the results of the System Advisor Model (SAM) and RETScreen in various operational scenarios. Furthermore, comparison of the simulation results with the outputs of a real power plant and a comparative statistical error analysis confirm that our calculation procedure outperforms SAM and RETScreen, two modern and popular commercial PV simulation tools
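
    As a hedged sketch of the kind of iterative evaluation a five-parameter single-diode model requires, the function below solves the implicit module current at a given voltage by damped fixed-point iteration; all parameter values are generic illustrative numbers, not those extracted by the paper's algorithm.

```python
# Evaluate I from I = IL - I0*(exp((V + I*Rs)/(Ns*n*Vt)) - 1) - (V + I*Rs)/Rsh
# for assumed single-diode parameters (IL, I0, n, Rs, Rsh) of a 60-cell module.
import math

def diode_current(V, IL=8.0, I0=1e-9, n=1.3, Rs=0.3, Rsh=300.0,
                  Ns=60, T=298.15):
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = k * T / q                            # thermal voltage
    I = IL                                    # initial guess
    for _ in range(200):                      # damped fixed-point iteration
        I_new = IL - I0 * (math.exp((V + I * Rs) / (Ns * n * Vt)) - 1.0) \
                   - (V + I * Rs) / Rsh
        I = 0.5 * I + 0.5 * I_new
        if abs(I_new - I) < 1e-9:
            break
    return I

for V in (0.0, 20.0, 30.0, 35.0):
    print(V, round(diode_current(V), 3))      # current (A) at each voltage (V)
```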

  14. Fast and Accurate Recurrent Neural Network Acoustic Models for Speech Recognition

    SAK, Haşim; Senior, Andrew; Rao, Kanishka; Beaufays, Françoise

    2015-01-01

    We have recently shown that deep Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform feed forward deep neural networks (DNNs) as acoustic models for speech recognition. More recently, we have shown that the performance of sequence trained context dependent (CD) hidden Markov model (HMM) acoustic models using such LSTM RNNs can be equaled by sequence trained phone models initialized with connectionist temporal classification (CTC). In this paper, we present techniques tha...
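
    For readers unfamiliar with the recurrence underlying such acoustic models, the following numpy sketch implements a single step of a standard LSTM cell on random data; the dimensions and initialization are arbitrary, and it says nothing about the CD phone targets, CTC objective or distributed training discussed in the paper.

      import numpy as np

      def lstm_step(x, h_prev, c_prev, W, U, b):
          """One step of a standard LSTM cell.

          x: input (d,), h_prev/c_prev: previous hidden/cell state (n,)
          W: (4n, d), U: (4n, n), b: (4n,) hold the input, forget, candidate and
          output gate parameters stacked row-wise."""
          n = h_prev.size
          z = W @ x + U @ h_prev + b
          i = 1.0 / (1.0 + np.exp(-z[0:n]))          # input gate
          f = 1.0 / (1.0 + np.exp(-z[n:2*n]))        # forget gate
          g = np.tanh(z[2*n:3*n])                    # candidate cell update
          o = 1.0 / (1.0 + np.exp(-z[3*n:4*n]))      # output gate
          c = f * c_prev + i * g
          h = o * np.tanh(c)
          return h, c

      rng = np.random.default_rng(0)
      d, n, T = 40, 8, 5                             # feature dim, state size, frames
      W = rng.normal(0, 0.1, (4*n, d))
      U = rng.normal(0, 0.1, (4*n, n))
      b = np.zeros(4*n)
      h, c = np.zeros(n), np.zeros(n)
      for t in range(T):
          h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
      print(h)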

  15. Multi Sensor Data Integration for AN Accurate 3d Model Generation

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique of data integration between two different data sets, i.e. a laser scanned RGB point cloud and a 3D model derived from oblique imageries, to create a 3D model with more details and better accuracy. In general, aerial imageries are used to create a 3D city model. Aerial imageries produce overall decent 3D city models and are generally suited to generating 3D models of building roofs and some non-complex terrain. However, the automatically generated 3D model from aerial imageries generally suffers from a lack of accuracy in deriving the 3D model of roads under bridges, details under tree canopies, isolated trees, etc. Moreover, the automatically generated 3D model from aerial imageries also suffers from undulated road surfaces, non-conforming building shapes, loss of minute details like street furniture, etc. in many cases. On the other hand, laser scanned data and images taken from a mobile vehicle platform can produce more detailed 3D road models, street furniture models, 3D models of details under bridges, etc. However, laser scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi sensor data compensated for the weaknesses of each data set and helped to create a very detailed 3D model with better accuracy. Moreover, the additional details like isolated trees, street furniture, etc., which were missing in the original 3D model derived from aerial imageries, could also be integrated in the final model automatically. During the process, the noise in the laser scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two datasets were acquired in different time periods, the integrated data set, or the final 3D model, was generally noise free and without unnecessary details.

  16. Parameterized Reduced Order Modeling of Misaligned Stacked Disks Rotor Assemblies

    Ganine, Vladislav; Laxalde, Denis; Michalska, Hannah; Pierre, Christophe

    2011-01-01

    Light and flexible rotating parts of modern turbine engines operating at supercritical speeds necessitate application of more accurate but rather computationally expensive 3D FE modeling techniques. Stacked disks misalignment due to manufacturing variability in the geometry of individual components constitutes a particularly important aspect to be included in the analysis because of its impact on system dynamics. A new parametric model order reduction algorithm is presented to achieve this go...

  17. Credit Risk Modelling Under the Reduced Form Approach

    Cãlin Adrian Cantemir; Popovici Oana Cristina

    2012-01-01

    Credit risk is one of the most important aspects that need to be considered by financial institutions involved in credit-granting. It is defined as the risk of loss that arises from a borrower who does not make payments as promised. For modelling credit risk there are two main approaches: the structural models and the reduced form models. The purpose of this paper is to review the evolution of reduced form models from the pioneering days of Jarrow and Turnbull to present
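
    In the simplest reduced form (intensity-based) setting in the spirit of Jarrow and Turnbull, default is the first jump of a Poisson process with hazard rate λ, so survival to time t has probability exp(-λt). The sketch below prices a defaultable zero-coupon bond under these textbook assumptions; the rate, hazard and recovery values are purely illustrative.

      import math

      def risky_zero_coupon(face, r, lam, recovery, T):
          """Price of a defaultable zero-coupon bond in a constant-intensity model.

          Default time is exponential with rate lam; on default the holder recovers
          `recovery` times face value, paid at maturity for simplicity."""
          survival = math.exp(-lam * T)
          expected_payoff = face * (survival + recovery * (1.0 - survival))
          return math.exp(-r * T) * expected_payoff

      price = risky_zero_coupon(face=100.0, r=0.03, lam=0.02, recovery=0.4, T=5.0)
      spread = -math.log(price / (100.0 * math.exp(-0.03 * 5.0))) / 5.0
      print(f"price = {price:.2f}, implied credit spread = {spread*1e4:.0f} bp")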

  18. Towards more accurate isoscapes encouraging results from wine, water and marijuana data/model and model/model comparisons.

    West, J. B.; Ehleringer, J. R.; Cerling, T.

    2006-12-01

    Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, then, has led to increased urgency in the scientific community to try to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterns over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and as such, if modeled correctly over Earth's surface allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across

  19. An approach to estimating and extrapolating model error based on inverse problem methods: towards accurate numerical weather prediction

    Model error is one of the key factors restricting the accuracy of numerical weather prediction (NWP). Considering the continuous evolution of the atmosphere, the observed data (ignoring the measurement error) can be viewed as a series of solutions of an accurate model governing the actual atmosphere. Model error is represented as an unknown term in the accurate model, thus NWP can be considered as an inverse problem to uncover the unknown error term. The inverse problem models can absorb long periods of observed data to generate model error correction procedures. They thus resolve the deficiencies of NWP schemes that employ only the initial-time data. In this study we construct two inverse problem models to estimate and extrapolate the time-varying and spatially varying model errors in both the historical and forecast periods by using recent observations and analogue phenomena of the atmosphere. A numerical experiment on Burgers' equation illustrates the substantial forecast improvement obtained using inverse problem algorithms. The proposed inverse problem methods of suppressing NWP errors will be useful in future high accuracy applications of NWP. (geophysics, astronomy, and astrophysics)
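
    To make the idea concrete, the following sketch sets up a toy version of the approach on Burgers' equation: a constant-in-time model-error tendency is estimated from a window of past "observations" generated with a perturbed model and then used to correct the forecast. The discretization, the additive form of the error term and all parameter values are simplifying assumptions, not the authors' inverse-problem formulation.

      import numpy as np

      nx, nu, dt, dx = 128, 0.05, 1e-3, 2 * np.pi / 128
      x = np.arange(nx) * dx

      def step(u, forcing=0.0):
          """One explicit step of viscous Burgers u_t + u*u_x = nu*u_xx + forcing
          on a periodic grid (central differences, small time step)."""
          ux  = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
          uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
          return u + dt * (-u * ux + nu * uxx + forcing)

      true_error = 0.5 * np.sin(2 * x)          # unknown model-error term of the "real" system
      u_true = np.sin(x).copy()
      obs = [u_true.copy()]
      for n in range(200):                      # generate a window of past observations
          u_true = step(u_true, forcing=true_error)
          obs.append(u_true.copy())

      # estimate the error term from consecutive observations: s = (u^{n+1} - M(u^n)) / dt
      residuals = [(obs[n + 1] - step(obs[n])) / dt for n in range(len(obs) - 1)]
      s_est = np.mean(residuals, axis=0)        # least-squares estimate of a constant-in-time term

      # forecast with and without the estimated correction and compare to the "truth"
      u_plain = obs[-1].copy()
      u_corr = obs[-1].copy()
      u_ref = obs[-1].copy()
      for n in range(200):
          u_plain = step(u_plain)
          u_corr = step(u_corr, forcing=s_est)
          u_ref = step(u_ref, forcing=true_error)
      print("RMS forecast error, uncorrected:", np.sqrt(np.mean((u_plain - u_ref) ** 2)))
      print("RMS forecast error, corrected:  ", np.sqrt(np.mean((u_corr - u_ref) ** 2)))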

  20. Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3

    Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.

    2016-04-01

    Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U =4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol +U ) is most appropriate for studying structure versus spin state, while the local density approximation (LDA +U ) is most appropriate for determining accurate energetics for defect properties.

  1. The Impact of Accurate Extinction Measurements for X-ray Spectral Models

    Smith, Randall K; Corrales, Lia

    2016-01-01

    Interstellar extinction includes both absorption and scattering of photons from interstellar gas and dust grains, and it has the effect of altering a source's spectrum and its total observed intensity. However, while multiple absorption models exist, there are no useful scattering models in standard X-ray spectrum fitting tools, such as XSPEC. Nonetheless, X-ray halos, created by scattering from dust grains, are detected around even moderately absorbed sources and the impact on an observed source spectrum can be significant, if modest, compared to direct absorption. By convolving the scattering cross section with dust models, we have created a spectral model as a function of energy, type of dust, and extraction region that can be used with models of direct absorption. This will ensure the extinction model is consistent and enable direct connections to be made between a source's X-ray spectral fits and its UV/optical extinction.

  2. GLOBAL THRESHOLD AND REGION-BASED ACTIVE CONTOUR MODEL FOR ACCURATE IMAGE SEGMENTATION

    Nuseiba M. Altarawneh; Suhuai Luo; Brian Regan; Changming Sun; Fucang Jia

    2014-01-01

    In this contribution, we develop a novel global threshold-based active contour model. This model deploys a new edge-stopping function to control the direction of the evolution and to stop the evolving contour at weak or blurred edges. An implementation of the model requires the use of selective binary and Gaussian filtering regularized level set (SBGFRLS) method. The method uses either a selective local or global segmentation property. It penalizes the level set function to force ...

  3. EXAMINING THE MOVEMENTS OF MOBILE NODES IN THE REAL WORLD TO PRODUCE ACCURATE MOBILITY MODELS

    TANWEER ALAM

    2010-09-01

    Full Text Available All communication occurs through a wireless medium in an ad hoc network. Ad hoc networks are dynamically created and maintained by the individual nodes comprising the network. The Random Waypoint Mobility Model is a model that includes pause times between changes in destination and speed. To produce a real-world environment within which an ad hoc network can be formed among a set of nodes, there is a need for the development of realistic, generic and comprehensive mobility models. In this paper, we examine the movements of entities in the real world and present the production of a mobility model for ad hoc networks.
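
    A minimal simulation of the Random Waypoint Mobility Model mentioned above fits in a few lines; the area size, speed range and pause-time range below are arbitrary illustrative choices.

      import random

      def random_waypoint(n_steps, area=(1000.0, 1000.0), speed=(1.0, 20.0),
                          pause=(0.0, 5.0), dt=1.0, seed=1):
          """Yield (t, x, y) positions of one node following the Random Waypoint model:
          pick a uniform random destination and speed, move towards it, pause, repeat."""
          rng = random.Random(seed)
          x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
          dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
          v = rng.uniform(*speed)
          pause_left = 0.0
          for step in range(n_steps):
              if pause_left > 0.0:
                  pause_left -= dt
              else:
                  dx, dy = dest[0] - x, dest[1] - y
                  dist = (dx * dx + dy * dy) ** 0.5
                  if dist <= v * dt:                      # destination reached: pause, re-draw
                      x, y = dest
                      pause_left = rng.uniform(*pause)
                      dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
                      v = rng.uniform(*speed)
                  else:
                      x += v * dt * dx / dist
                      y += v * dt * dy / dist
              yield step * dt, x, y

      for t, x, y in random_waypoint(10):
          print(f"t={t:4.0f}s  x={x:7.1f}  y={y:7.1f}")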

  4. Fault Tolerance for Industrial Actuators in Absence of Accurate Models and Hardware Redundancy

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik;

    2015-01-01

    This paper investigates Fault-Tolerant Control for closed-loop systems where only coarse models are available and there is lack of actuator and sensor redundancies. The problem is approached in the form of a typical servomotor in closed-loop. A linear model is extracted from input/output data...

  5. Accurate calculation of binding energies for molecular clusters - Assessment of different models

    Friedrich, Joachim; Fiedler, Benjamin

    2016-06-01

    In this work we test different strategies to compute high-level benchmark energies for medium-sized molecular clusters. We use the incremental scheme to obtain CCSD(T)/CBS energies for our test set and carefully validate the accuracy for binding energies by statistical measures. The local errors of the incremental scheme are checked as part of this validation; the benchmark values are ΔE = -278.01 kJ/mol for (H2O)10, ΔE = -221.64 kJ/mol for (HF)10, ΔE = -45.63 kJ/mol for (CH4)10, ΔE = -19.52 kJ/mol for (H2)20 and ΔE = -7.38 kJ/mol for (H2)10. Furthermore, we test state-of-the-art wave-function-based and DFT methods. Our benchmark data will be very useful for critical validations of new methods. We find focal-point methods for estimating CCSD(T)/CBS energies to be highly accurate and efficient. For foQ-i3CCSD(T)-MP2/TZ we get a mean error of 0.34 kJ/mol and a standard deviation of 0.39 kJ/mol.

  6. A reduced order model of a quadruped walking system

    Trot walking has recently been studied by several groups because of its stability and realizability. In the trot, diagonally opposed legs form pairs. While one pair of legs provides support, the other pair of legs swings forward in preparation for the next step. In this paper, we propose a reduced order model for the trot walking. The reduced order model is derived by using two dominant modes of the closed loop system in which the local feedback at each joint is implemented. It is shown by numerical examples that the obtained reduced order model can well approximate the original higher order model. (author)
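
    The reduction step described above, projecting a closed-loop linear model onto its dominant modes, can be sketched as follows; the system matrix is a random stable placeholder rather than an actual quadruped walking model, and the two retained modes are simply the slowest-decaying ones.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10
      M = rng.normal(size=(n, n))
      A = -(M @ M.T) / n - 0.1 * np.eye(n)     # stable, symmetric closed-loop matrix (placeholder)
      B = rng.normal(size=(n, 1))

      # dominant (slowest-decaying) modes = eigenvalues closest to zero
      w, V = np.linalg.eigh(A)                 # ascending eigenvalues, orthonormal mode shapes
      T = V[:, -2:]                            # keep the two dominant mode shapes
      A_r = T.T @ A @ T                        # 2x2 reduced system (T is orthonormal)
      B_r = T.T @ B

      print("full spectrum (two slowest modes):", w[-2:])
      print("reduced-model spectrum:           ", np.linalg.eigvalsh(A_r))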

  7. Highly Accurate Tree Models Derived from Terrestrial Laser Scan Data: A Method Description

    Jan Hackenberg

    2014-05-01

    Full Text Available This paper presents a method for fitting cylinders into a point cloud, derived from a terrestrial laser-scanned tree. Utilizing high scan quality data as the input, the resulting models describe the branching structure of the tree, capable of detecting branches with a diameter smaller than a centimeter. The cylinders are stored as a hierarchical tree-like data structure encapsulating parent-child neighbor relations and incorporating the tree’s direction of growth. This structure enables the efficient extraction of tree components, such as the stem or a single branch. The method was validated both by applying a comparison of the resulting cylinder models with ground truth data and by an analysis between the input point clouds and the models. Tree models were accomplished representing more than 99% of the input point cloud, with an average distance from the cylinder model to the point cloud within sub-millimeter accuracy. After validation, the method was applied to build two allometric models based on 24 tree point clouds as an example of the application. Computation terminated successfully within less than 30 min. For the model predicting the total above ground volume, the coefficient of determination was 0.965, showing the high potential of terrestrial laser-scanning for forest inventories.

  8. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Wicke, Jason; Dumas, Geneviève A

    2014-06-01

    Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
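
    The core geometric idea, treating a segment as a stack of elliptical slices, reduces to simple sums. The sketch below computes mass, centre-of-mass height and a frontal-plane moment of inertia from per-slice semi-axes and a density profile; the taper and density values are synthetic, and the sectioned-ellipse case and the sex-specific density functions of the paper are not reproduced.

      import numpy as np

      def segment_inertia(a, b, rho, dz):
          """Mass properties of a segment modelled as a stack of elliptical slices.

          a, b : arrays of slice semi-axes (m)
          rho  : density of each slice (kg/m^3)
          dz   : slice thickness (m)
          Returns mass, centre of mass (from the distal end) and the frontal-plane
          moment of inertia about the centre of mass (thin-slice approximation)."""
          z = (np.arange(len(a)) + 0.5) * dz         # slice centre heights
          m = rho * np.pi * a * b * dz               # slice masses
          mass = m.sum()
          com = (m * z).sum() / mass
          # I = sum of m_i*(z_i - com)^2 plus the in-plane term m_i*b_i^2/4 of each lamina
          inertia = (m * ((z - com) ** 2 + b ** 2 / 4.0)).sum()
          return mass, com, inertia

      # synthetic forearm-like segment: 25 slices tapering from 4.0x3.5 cm to 2.5x2.0 cm semi-axes
      n = 25
      a = np.linspace(0.040, 0.025, n)
      b = np.linspace(0.035, 0.020, n)
      rho = np.full(n, 1.1e3)                        # placeholder uniform density (kg/m^3)
      mass, com, inertia = segment_inertia(a, b, rho, dz=0.01)
      print(f"mass = {mass:.2f} kg, COM = {com*100:.1f} cm, I = {inertia*1e4:.1f} kg.cm^2")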

  9. Accurate Fabrication of Hydroxyapatite Bone Models with Porous Scaffold Structures by Using Stereolithography

    Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed by using a computer aided design. The scaffold models composed of acryl resin with hydroxyapatite particles at 45vol. % were fabricated by using stereolithography of a computer aided manufacturing. After dewaxing and sintering heat treatment processes, the ceramics scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. By using a computer aided analysis, it was found that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulations and promote regenerations of new bones.

  10. Accurate Fabrication of Hydroxyapatite Bone Models with Porous Scaffold Structures by Using Stereolithography

    Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu, E-mail: c-maeda@jwri.osaka-u.ac.jp [Joining and Welding Research Institute, Osaka University, 11-1 Mihogaoka, Ibaraki City, Osaka 567-0047 (Japan)

    2011-05-15

    Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed by using a computer aided design. The scaffold models composed of acryl resin with hydroxyapatite particles at 45vol. % were fabricated by using stereolithography of a computer aided manufacturing. After dewaxing and sintering heat treatment processes, the ceramics scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. By using a computer aided analysis, it was found that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulations and promote regenerations of new bones.

  11. HIGH ACCURATE LOW COMPLEX FACE DETECTION BASED ON KL TRANSFORM AND YCBCR GAUSSIAN MODEL

    Epuru Nithish Kumar

    2013-05-01

    Full Text Available This paper presents a skin color model for face detection based on YCbCr Gauss model and KL transform. The simple gauss model and the region model of the skin color are designed in both KL color space and YCbCr space according to clustering. Skin regions are segmented using optimal threshold value obtained from adaptive algorithm. The segmentation results are then used to eliminate likely skin region in the gauss-likelihood image. Different morphological processes are then used to eliminate noise from binary image. In order to locate the face, the obtained regions are grouped out with simple detection algorithms. The proposed algorithm works well for complex background and many faces.
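
    The single-Gaussian skin-colour test in CbCr space that underlies such detectors can be sketched as follows; the mean and covariance are illustrative values rather than the ones trained in the paper, and the RGB-to-YCbCr conversion uses the common BT.601 coefficients.

      import numpy as np

      # illustrative skin-colour statistics in (Cb, Cr); real systems train these from data
      MEAN = np.array([117.0, 152.0])
      COV = np.array([[90.0, 15.0], [15.0, 120.0]])
      COV_INV = np.linalg.inv(COV)

      def rgb_to_cbcr(img):
          """img: float array (H, W, 3) with RGB in [0, 255]; returns (H, W, 2) CbCr (BT.601)."""
          r, g, b = img[..., 0], img[..., 1], img[..., 2]
          cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
          cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
          return np.stack([cb, cr], axis=-1)

      def skin_likelihood(img):
          """Gaussian likelihood (up to a constant) of each pixel being skin."""
          d = rgb_to_cbcr(img) - MEAN
          md2 = np.einsum('...i,ij,...j->...', d, COV_INV, d)   # squared Mahalanobis distance
          return np.exp(-0.5 * md2)

      rng = np.random.default_rng(0)
      frame = rng.uniform(0, 255, size=(4, 4, 3))
      mask = skin_likelihood(frame) > 0.5            # in practice the threshold is tuned adaptively
      print(mask)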

  12. Empirical approaches to more accurately predict benthic-pelagic coupling in biogeochemical ocean models

    Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus

    2016-04-01

    The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?

  13. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners.

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-01-01

    Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by a finding that for push broom scanners, angular rotations of EOPs can be estimated independent of the altitude data and only involving the geographic data at the GCPs, which are already provided, hence, we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data for increasing the error tolerance. Experimental results evidence that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang'E-1, compared to the existing space resection model. PMID:27077855

  14. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners

    Xuemiao Xu

    2016-04-01

    Full Text Available Exterior orientation parameters’ (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by a finding that for push broom scanners, angular rotations of EOPs can be estimated independent of the altitude data and only involving the geographic data at the GCPs, which are already provided, hence, we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinear equations for space resection are simplified into a linear problem, and the global optimal solution for the spatial position of EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data for increasing the error tolerance. Experimental results evidence that our model can obtain more accurate EOPs and topographic maps not only for the simulated data, but also for the real data from Chang’E-1, compared to the existing space resection model.

  15. Restricted Collapsed Draw: Accurate Sampling for Hierarchical Chinese Restaurant Process Hidden Markov Models

    Makino, Takaki; Takei, Shunsuke; Sato, Issei; Mochihashi, Daichi

    2011-01-01

    We propose a restricted collapsed draw (RCD) sampler, a general Markov chain Monte Carlo sampler of simultaneous draws from a hierarchical Chinese restaurant process (HCRP) with restriction. Models that require simultaneous draws from a hierarchical Dirichlet process with restriction, such as infinite hidden Markov models (iHMMs), have had difficulty enjoying the benefits of the HCRP due to combinatorial explosion in calculating distributions of coupled draws. By constructing a proposal of se...

  16. Parameterized reduced order modeling of misaligned stacked disks rotor assemblies

    Ganine, Vladislav; Laxalde, Denis; Michalska, Hannah; Pierre, Christophe

    2011-01-01

    Light and flexible rotating parts of modern turbine engines operating at supercritical speeds necessitate application of more accurate but rather computationally expensive 3D FE modeling techniques. Stacked disks misalignment due to manufacturing variability in the geometry of individual components constitutes a particularly important aspect to be included in the analysis because of its impact on system dynamics. A new parametric model order reduction algorithm is presented to achieve this goal at affordable computational costs. It is shown that the disks misalignment leads to significant changes in nominal system properties that manifest themselves as additional blocks coupling neighboring spatial harmonics in Fourier space. Consequently, the misalignment effects can no longer be accurately modeled as equivalent forces applied to a nominal unperturbed system. The fact that the mode shapes become heavily distorted by extra harmonic content renders the nominal modal projection-based methods inaccurate and thus numerically ineffective in the context of repeated analysis of multiple misalignment realizations. The significant numerical bottleneck is removed by employing an orthogonal projection onto the subspace spanned by first few Fourier harmonic basis vectors. The projected highly sparse systems are shown to accurately approximate the specific misalignment effects, to be inexpensive to solve using direct sparse methods and easy to parameterize with a small set of measurable eccentricity and tilt angle parameters. Selected numerical examples on an industrial scale model are presented to illustrate the accuracy and efficiency of the algorithm implementation.
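
    The key numerical device, an orthogonal projection of the assembled system onto a few spatial Fourier harmonics around the circumference, can be illustrated on a toy cyclic stiffness matrix; the matrix below is a nominally cyclic chain with a small random "misalignment" perturbation, and the number of sectors and retained harmonics are arbitrary.

      import numpy as np

      N = 24                                          # sectors around the circumference
      rng = np.random.default_rng(0)

      # nominal cyclic (circulant) stiffness plus a small non-cyclic misalignment perturbation
      K = 2.0 * np.eye(N) - np.roll(np.eye(N), 1, axis=1) - np.roll(np.eye(N), -1, axis=1)
      K += 0.05 * np.diag(rng.normal(size=N))

      # orthonormal basis spanning the first few Fourier harmonics (0, 1c, 1s, 2c, 2s)
      theta = 2.0 * np.pi * np.arange(N) / N
      cols = [np.ones(N)]
      for h in (1, 2):
          cols += [np.cos(h * theta), np.sin(h * theta)]
      Q, _ = np.linalg.qr(np.column_stack(cols))      # orthonormalize the harmonic basis

      K_r = Q.T @ K @ Q                               # small projected (reduced) system
      print("reduced size:", K_r.shape)
      print("lowest eigenvalues, full vs reduced:")
      print(np.sort(np.linalg.eigvalsh(K))[:5])
      print(np.sort(np.linalg.eigvalsh(K_r)))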

  17. Accurate modeling of a DOI capable small animal PET scanner using GATE

    In this work we developed a Monte Carlo (MC) model of the Sedecal Argus pre-clinical PET scanner, using GATE (Geant4 Application for Tomographic Emission). This is a dual-ring scanner which features DOI compensation by means of two layers of detector crystals (LYSO and GSO). Geometry of detectors and sources, pulses readout and selection of coincidence events were modeled with GATE, while a separate code was developed in order to emulate the processing of digitized data (for example, customized time windows and data flow saturation), the final binning of the lines of response and to reproduce the data output format of the scanner's acquisition software. Validation of the model was performed by modeling several phantoms used in experimental measurements, in order to compare the results of the simulations. Spatial resolution, sensitivity, scatter fraction, count rates and NECR were tested. Moreover, the NEMA NU-4 phantom was modeled in order to check for the image quality yielded by the model. Noise, contrast of cold and hot regions and recovery coefficient were calculated and compared using images of the NEMA phantom acquired with our scanner. The energy spectrum of coincidence events due to the small amount of 176Lu in LYSO crystals, which was suitably included in our model, was also compared with experimental measurements. Spatial resolution, sensitivity and scatter fraction showed an agreement within 7%. Comparison of the count rates curves resulted satisfactory, being the values within the uncertainties, in the range of activities practically used in research scans. Analysis of the NEMA phantom images also showed a good agreement between simulated and acquired data, within 9% for all the tested parameters. This work shows that basic MC modeling of this kind of system is possible using GATE as a base platform; extension through suitably written customized code allows for an adequate level of accuracy in the results. Our careful validation against experimental

  18. An accurate, fast and stable material model for shape memory alloys

    Shape memory alloys possess several features that make them interesting for industrial applications. However, due to their complex and thermo-mechanically coupled behavior, direct use of shape memory alloys in engineering construction is problematic. There is thus a demand for tools to achieve realistic, predictive simulations that are numerically robust when computing complex, coupled load states, are fast enough to calculate geometries of industrial interest, and yield realistic and reliable results without the use of fitting curves. In this paper a new and numerically fast material model for shape memory alloys is presented. It is based solely on energetic quantities, which thus creates a quite universal approach. In the beginning, a short derivation is given before it is demonstrated how this model can be easily calibrated by means of tension tests. Then, several examples of engineering applications under mechanical and thermal loads are presented to demonstrate the numerical stability and high computation speed of the model. (paper)

  19. Modelling of Limestone Dissolution in Wet FGD Systems: The Importance of an Accurate Particle Size Distribution

    Kiil, Søren; Johnsson, Jan Erik; Dam-Johansen, Kim

    1999-01-01

    In wet flue gas desulphurisation (FGD) plants, the most common sorbent is limestone. Over the past 25 years, many attempts to model the transient dissolution of limestone particles in aqueous solutions have been performed, due to the importance for the development of reliable FGD simulation tools. ... Danish limestone types with very different particle size distributions (PSDs). All limestones were of a high purity. Model predictions were found to be qualitatively in good agreement with experimental data without any use of adjustable parameters. Deviations between measurements and simulations were attributed primarily to the PSD measurements of the limestone particles, which were used as model inputs. The PSDs, measured using a laser diffraction-based Malvern analyser, were probably not representative of the limestone samples because agglomeration phenomena took place when the particles were...

  20. Toward Accurate Modeling of the Effect of Ion-Pair Formation on Solute Redox Potential.

    Qu, Xiaohui; Persson, Kristin A

    2016-09-13

    A scheme to model the dependence of a solute redox potential on the supporting electrolyte is proposed, and the results are compared to experimental observations and other reported theoretical models. An improved agreement with experiment is exhibited if the effect of the supporting electrolyte on the redox potential is modeled through a concentration change induced via ion pair formation with the salt, rather than by only considering the direct impact on the redox potential of the solute itself. To exemplify the approach, the scheme is applied to the concentration-dependent redox potential of select molecules proposed for nonaqueous flow batteries. However, the methodology is general and enables rational computational electrolyte design through tuning of the operating window of electrochemical systems by shifting the redox potential of its solutes; including potentially both salts as well as redox active molecules. PMID:27500744

  1. High-order accurate finite-volume formulations for the pressure gradient force in layered ocean models

    Engwirda, Darren; Marshall, John

    2016-01-01

    The development of a set of high-order accurate finite-volume formulations for evaluation of the pressure gradient force in layered ocean models is described. A pair of new schemes are presented, both based on an integration of the contact pressure force about the perimeter of an associated momentum control-volume. The two proposed methods differ in their choice of control-volume geometries. High-order accurate numerical integration techniques are employed in both schemes to account for non-linearities in the underlying equation-of-state definitions and thermodynamic profiles, and details of an associated vertical interpolation and quadrature scheme are discussed in detail. Numerical experiments are used to confirm the consistency of the two formulations, and it is demonstrated that the new methods maintain hydrostatic and thermobaric equilibrium in the presence of strongly-sloping layer-wise geometry, non-linear equation-of-state definitions and non-uniform vertical stratification profiles. Additionally, one...

  2. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036, Spain and Servei de Física Mèdica i Protecció Radiològica, Institut Català d’Oncologia, L’Hospitalet de Llobregat 08907 (Spain); Roé, Nuria [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036 (Spain); Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Complexo Hospitalario Universitario de Santiago de Compostela 15706, Spain and Grupo de Imagen Molecular, Instituto de Investigacións Sanitarias de Santiago de Compostela (IDIS), Galicia 15782 (Spain); Falcon, Carles; Ros, Domènec [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036, Spain and CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Pavía, Javier [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036 (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); and Servei de Medicina Nuclear, Hospital Clínic, Barcelona 08036 (Spain)

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  3. A fast and accurate SystemC-AMS model for PLL

    Ma, K.; Leuken, R. van; Vidojkovic, M.; Romme, J.; Rampu, S.; Pflug, H.; Huang, L.; Dolmans, G.

    2011-01-01

    PLLs have become an important part of electrical systems. When designing a PLL, an efficient and reliable simulation platform for system evaluation is needed. However, the closed loop simulation of a PLL is time consuming. To address this problem, in this paper, a new PLL model containing both digit

  4. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what

  5. Analysis of computational models for an accurate study of electronic excitations in GFP

    Schwabe, Tobias; Beerepoot, Maarten; Olsen, Jógvan Magnus Haugaard; Kongsted, Jacob

    2015-01-01

    Using the chromophore of the green fluorescent protein (GFP), the performance of a hybrid RI-CC2 / polarizable embedding (PE) model is tested against a quantum chemical cluster pproach. Moreover, the effect of the rest of the protein environment is studied by systematically increasing the size of...

  6. Accurate reduction of a model of circadian rhythms by delayed quasi steady state assumptions

    Vejchodský, Tomáš

    2014-01-01

    Vol. 139, No. 4 (2014), pp. 577-585. ISSN 0862-7959. Other grants: European Commission (XE) StochDetBioModel (328008). Institutional support: RVO:67985840. Keywords: biochemical networks * gene regulatory networks * oscillating systems * periodic solution. Subject RIV: BA - General Mathematics. http://hdl.handle.net/10338.dmlcz/144135

  7. A Framework for Accurate Geospatial Modeling of Recharge and Discharge Maps using Image Ranking and Machine Learning

    Yahja, A.; Kim, C.; Lin, Y.; Bajcsy, P.

    2008-12-01

    This paper addresses the problem of accurate estimation of geospatial models from a set of groundwater recharge & discharge (R&D) maps and from auxiliary remote sensing and terrestrial raster measurements. The motivation for our work is driven by the cost of field measurements, and by the limitations of currently available physics-based modeling techniques that do not include all relevant variables and allow accurate predictions only at coarse spatial scales. The goal is to improve our understanding of the underlying physical phenomena and increase the accuracy of geospatial models--with a combination of remote sensing, field measurements and physics-based modeling. Our approach is to process a set of R&D maps generated from interpolated sparse field measurements using existing physics-based models, and identify the R&D map that would be the most suitable for extracting a set of rules between the auxiliary variables of interest and the R&D map labels. We implemented this approach by ranking R&D maps using information entropy and mutual information criteria, and then by deriving a set of rules using a machine learning technique, such as the decision tree method. The novelty of our work is in developing a general framework for building geospatial models with the ultimate goal of minimizing cost and maximizing model accuracy. The framework is demonstrated for groundwater R&D rate models but could be applied to other similar studies, for instance, to understanding hypoxia based on physics-based models and remotely sensed variables. Furthermore, our key contribution is in designing a ranking method for R&D maps that allows us to analyze multiple plausible R&D maps with a different number of zones which was not possible in our earlier prototype of the framework called Spatial Pattern to Learn. We will present experimental results using examples R&D and other maps from an area in Wisconsin.
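
    The ranking step, scoring candidate R&D maps by the information they share with an auxiliary raster, can be sketched with plain histogram estimates of entropy and mutual information; the synthetic rasters below merely stand in for zoned R&D maps and a co-registered auxiliary variable.

      import numpy as np

      def entropy(labels):
          """Shannon entropy (bits) of a discrete label raster."""
          _, counts = np.unique(labels, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log2(p))

      def mutual_information(labels, aux, bins=8):
          """Mutual information (bits) between a label raster and a binned auxiliary raster."""
          edges = np.quantile(aux, np.linspace(0, 1, bins + 1)[1:-1])
          aux_binned = np.digitize(aux, edges)
          joint, _, _ = np.histogram2d(labels.ravel(), aux_binned.ravel(),
                                       bins=[len(np.unique(labels)), bins])
          p = joint / joint.sum()
          px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
          nz = p > 0
          return np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))

      rng = np.random.default_rng(0)
      aux = rng.normal(size=(100, 100))                     # auxiliary raster (e.g. a terrain index)
      maps = {"map_A": (aux > 0).astype(int),               # zoning strongly related to aux
              "map_B": rng.integers(0, 2, size=aux.shape)}  # unrelated zoning
      for name, m in maps.items():
          print(name, "H =", round(entropy(m), 3), "MI =", round(mutual_information(m, aux), 3))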

  8. Accurate Modeling of The Siemens S7 SCADA Protocol For Intrusion Detection And Digital Forensic

    Amit Kleinmann

    2014-09-01

    Full Text Available The Siemens S7 protocol is commonly used in SCADA systems for communications between a Human Machine Interface (HMI) and the Programmable Logic Controllers (PLCs). This paper presents a model-based Intrusion Detection System (IDS) designed for S7 networks. The approach is based on the key observation that S7 traffic to and from a specific PLC is highly periodic; as a result, each HMI-PLC channel can be modeled using its own unique Deterministic Finite Automaton (DFA). The resulting DFA-based IDS is very sensitive and is able to flag anomalies such as a message appearing out of its position in the normal sequence or a message referring to a single unexpected bit. The intrusion detection approach was evaluated on traffic from two production systems. Despite its high sensitivity, the system had a very low false positive rate - over 99.82% of the traffic was identified as normal.
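
    The essence of the DFA approach, learning the set of message-type transitions observed on an HMI-PLC channel and flagging anything outside it, fits in a few lines; the message symbols below are made up and do not correspond to actual S7 telegram fields.

      def learn_dfa(training_sequence):
          """Learn allowed (current, next) message-type transitions from normal traffic."""
          return set(zip(training_sequence, training_sequence[1:]))

      def detect(sequence, transitions):
          """Yield positions where a transition was never seen during training."""
          for i, pair in enumerate(zip(sequence, sequence[1:])):
              if pair not in transitions:
                  yield i + 1, pair

      # hypothetical periodic HMI->PLC polling pattern (symbols are illustrative only)
      normal = ["read_db1", "ack", "read_db2", "ack", "write_q0", "ack"] * 50
      dfa = learn_dfa(normal)

      observed = normal[:20] + ["write_db9"] + normal[20:30]   # one injected, unseen request
      for pos, pair in detect(observed, dfa):
          print(f"anomaly at message {pos}: unexpected transition {pair}")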

  9. An accurate two-phase approximate solution to the acute viral infection model

    Perelson, Alan S [Los Alamos National Laboratory

    2009-01-01

    During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
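
    The target-cell limited model referred to above is a small ODE system; the sketch below integrates it numerically so that the roughly log-linear growth and decay phases motivating the two-phase approximation can be seen. The parameter values are order-of-magnitude illustrations, not the patient-specific estimates used in the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      def target_cell_limited(t, y, beta, delta, p, c):
          """dT/dt = -beta*T*V,  dI/dt = beta*T*V - delta*I,  dV/dt = p*I - c*V."""
          T, I, V = y
          return [-beta * T * V, beta * T * V - delta * I, p * I - c * V]

      # illustrative parameters (per day), roughly in the range reported for influenza A
      pars = dict(beta=2.7e-5, delta=4.0, p=1.2e-2, c=3.0)
      y0 = [4.0e8, 0.0, 7.5e-2]                     # target cells, infected cells, virus titre
      sol = solve_ivp(target_cell_limited, (0.0, 8.0), y0, args=tuple(pars.values()),
                      t_eval=np.linspace(0.0, 8.0, 9), rtol=1e-8, atol=1e-12)

      for t, V in zip(sol.t, sol.y[2]):
          print(f"day {t:3.0f}  log10 V = {np.log10(max(V, 1e-12)):6.2f}")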

  10. Accurate Simulation of 802.11 Indoor Links: A "Bursty" Channel Model Based on Real Measurements

    Agüero Ramón

    2010-01-01

    Full Text Available We propose a novel channel model to be used for simulating indoor wireless propagation environments. An extensive measurement campaign was carried out to assess the performance of different transport protocols over 802.11 links. This enabled us to better adjust our approach, which is based on an autoregressive filter. One of the main advantages of this proposal lies in its ability to reflect the "bursty" behavior which characterizes indoor wireless scenarios, having a great impact on the behavior of upper layer protocols. We compare this channel model, integrated within the Network Simulator (ns-2) platform, with other traditional approaches, showing that it is able to better reflect the real behavior which was empirically assessed.
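
    The core mechanism, an autoregressive filter whose output is thresholded to decide packet success so that errors cluster in bursts, can be sketched as follows; the filter coefficient and target loss rate are arbitrary illustrative values rather than the ones fitted to the measurement campaign.

      import numpy as np

      def bursty_channel(n_packets, rho=0.95, loss_rate=0.1, seed=0):
          """Packet loss trace from a thresholded AR(1) process.

          rho close to 1 makes the underlying channel state strongly correlated in
          time, so losses arrive in bursts while the long-run loss rate is loss_rate."""
          rng = np.random.default_rng(seed)
          x = 0.0
          state = np.empty(n_packets)
          for i in range(n_packets):
              x = rho * x + np.sqrt(1.0 - rho**2) * rng.normal()   # unit-variance AR(1)
              state[i] = x
          threshold = np.quantile(state, loss_rate)                # worst channel states -> losses
          return state < threshold

      losses = bursty_channel(10_000)
      # measure burst lengths: count consecutive runs of lost packets
      burst_lengths, run = [], 0
      for lost in losses:
          if lost:
              run += 1
          elif run:
              burst_lengths.append(run)
              run = 0
      if run:
          burst_lengths.append(run)
      print("loss rate:", losses.mean(), " mean burst length:", np.mean(burst_lengths))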

  11. Features of creation of highly accurate models of triumphal pylons for archaeological reconstruction

    Grishkanich, A. S.; Sidorov, I. S.; Redka, D. N.

    2015-12-01

    A measuring procedure is described for determining the geometric characteristics of objects in space and for the geodetic survey of objects on the ground. In the course of the work, data were obtained on the relative positioning of the pylons in space; there are deviations from verticality. In comparison with traditional surveying, this testing method is preferable because it allows a CAD model of the object to be obtained in a semi-automated mode for subsequent analysis, which is more economically advantageous.

  12. Morphometric analysis of Russian Plain's small lakes on the base of accurate digital bathymetric models

    Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana

    2016-04-01

    Lake morphometry refers to physical factors (shape, size, structure, etc.) that determine the lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters, as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. During recent years, lake bathymetric surveys were carried out using an echo sounder with high bottom-depth resolution and GPS coordinate determination. A few digital bathymetric models have been created on a 10 m * 10 m spatial grid for some small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of depths and slopes of small lakes, as well as the advantages of digital models over traditional methods.

  13. Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum

    Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.

    2013-02-01

    Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that effort, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and some other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made by using a structured light scanner consisting of two machine vision cameras that are used for the determination of the geometry of the object, a high resolution camera for the recording of the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and tiring procedure which includes the collection of geometric data, the creation of the surface, the noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the provision of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial software and in-house software developed to automate various steps of the procedure was used. The results derived from the above procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time consuming, while the use of various software packages presumes the services of a specialist.

  14. The Reduced RUM as a Logit Model: Parameterization and Constraints.

    Chiu, Chia-Yi; Köhn, Hans-Friedrich

    2016-06-01

    Cognitive diagnosis models (CDMs) for educational assessment are constrained latent class models. Examinees are assigned to classes of intellectual proficiency defined in terms of cognitive skills called attributes, which an examinee may or may not have mastered. The Reduced Reparameterized Unified Model (Reduced RUM) has received considerable attention among psychometricians. Markov Chain Monte Carlo (MCMC) or Expectation Maximization (EM) are typically used for estimating the Reduced RUM. Commercial implementations of the EM algorithm are available in the latent class analysis (LCA) routines of Latent GOLD and Mplus, for example. Fitting the Reduced RUM with an LCA routine requires that it be reparameterized as a logit model, with constraints imposed on the parameters. For models involving two attributes, these have been worked out. However, for models involving more than two attributes, the parameterization and the constraints are nontrivial and currently unknown. In this article, the general parameterization of the Reduced RUM as a logit model involving any number of attributes and the associated parameter constraints are derived. As a practical illustration, the LCA routine in Mplus is used for fitting the Reduced RUM to two synthetic data sets and to a real-world data set; for comparison, the results obtained by using the MCMC implementation in OpenBUGS are also provided. PMID:25838247

  15. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of dynamic nature. The recurrent neural network method [1] is applied to construct a reduced order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis

  16. An accurate higher order displacement model with shear and normal deformations effects for functionally graded plates

    Jha, D.K., E-mail: dkjha@barc.gov.in [Civil Engineering Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India); Kant, Tarun [Department of Civil Engineering, Indian Institute of Technology Bombay, Powai, Mumbai 400 076 (India); Srinivas, K. [Civil Engineering Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India); Singh, R.K. [Reactor Safety Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India)

    2013-12-15

    Highlights: • We model through-thickness variation of material properties in functionally graded (FG) plates. • Effect of material grading index on deformations, stresses and natural frequency of FG plates is studied. • Effect of higher order terms in displacement models is studied for plate statics. • The benchmark solutions for the static analysis and free vibration of thick FG plates are presented. -- Abstract: Functionally graded materials (FGMs) are the potential candidates under consideration for designing the first wall of fusion reactors with a view to make best use of potential properties of available materials under severe thermo-mechanical loading conditions. A higher order shear and normal deformations plate theory is employed for stress and free vibration analyses of functionally graded (FG) elastic, rectangular, and simply (diaphragm) supported plates. Although FGMs are highly heterogeneous in nature, they are generally idealized as continua with mechanical properties changing smoothly with respect to spatial coordinates. The material properties of FG plates are assumed here to vary through the thickness of the plate in a continuous manner. Young's moduli and material densities are considered to be varying continuously in the thickness direction according to the volume fraction of constituents, which are mathematically modeled here as exponential and power law functions. The effects of variation of material properties in terms of material gradation index on deformations, stresses and natural frequency of FG plates are investigated. The accuracy of present numerical solutions has been established with respect to exact three-dimensional (3D) elasticity solutions and the other models’ solutions available in the literature.

  17. Generation of Accurate Lateral Boundary Conditions for a Surface-Water Groundwater Interaction Model

    Khambhammettu, P.; Tsou, M.; Panday, S. M.; Kool, J.; Wei, X.

    2010-12-01

    The 106 mile long Peace River in Florida flows south from Lakeland to Charlotte Harbor and has a drainage basin of approximately 2,350 square miles. A long-term decline in stream flows and groundwater potentiometric levels has been observed in the region. Long-term trends in rainfall, along with effects of land use changes on runoff, surface-water storage, recharge and evapotranspiration patterns, and increased groundwater and surface-water withdrawals have contributed to this decline. The South West Florida Water Management District (SWFWMD) has funded the development of the Peace River Integrated Model (PRIM) to assess the effects of land use, water use, and climatic changes on stream flows and to evaluate the effectiveness of various management alternatives for restoring stream flows. The PRIM was developed using MODHMS, a fully integrated surface-water groundwater flow and transport simulator developed by HydroGeoLogic, Inc. The development of the lateral boundary conditions (groundwater inflow and outflow) for the PRIM in both historical and predictive contexts is discussed in this presentation. Monthly-varying specified heads were used to define the lateral boundary conditions for the PRIM. These head values were derived from the coarser Southern District Groundwater Model (SDM). However, there were discrepancies between the simulated SDM heads and measured heads: the likely causes being spatial (use of a coarser grid) and temporal (monthly average pumping rates and recharge rates) approximations in the regional SDM. Finer re-calibration of the SDM was not feasible, therefore, an innovative approach was adopted to remove the discrepancies. In this approach, point discrepancies/residuals between the observed and simulated heads were kriged with an appropriate variogram to generate a residual surface. This surface was then added to the simulated head surface of the SDM to generate a corrected head surface. This approach preserves the trends associated with

  18. A simple and accurate numerical network flow model for bionic micro heat exchangers

    Pieper, M.; Klein, P. [Fraunhofer Institute (ITWM), Kaiserslautern (Germany)

    2011-05-15

    Heat exchangers are often associated with drawbacks like a large pressure drop or a non-uniform flow distribution. Recent research shows that bionic structures can provide possible improvements. We considered a set of such structures that were designed with M. Hermann's FracTherm® algorithm. In order to optimize and compare them with conventional heat exchangers, we developed a numerical method to determine their performance. We simulated the flow in the heat exchanger applying a network model and coupled these results with a finite volume method to determine the heat distribution in the heat exchanger. (orig.)

  19. Reduced Lorenz models for anomalous transport and profile resilience

    Rypdal, K.; Garcia, Odd Erik

    2007-01-01

    resilience of the profile. Particular emphasis is put on the diffusionless limit, where these equations reduce to a simple dynamical system depending only on one single forcing parameter. This model is studied numerically, stressing experimentally observable signatures, and some of the perils of dimension-reducing...

  20. Considering mask pellicle effect for more accurate OPC model at 45nm technology node

    Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo

    2008-11-01

    The 45 nm technology node is expected to be the first generation of immersion micro-lithography. The brand-new lithography tools mean that many optical effects which could be ignored at the 90 nm and 65 nm nodes now have a significant impact on the pattern transmission process from design to silicon. Among all these effects, one that needs attention is the impact of the mask pellicle on critical dimension variation. With the implementation of hyper-NA lithography tools, the assumption that light passes through the mask pellicle at normal incidence is no longer a good approximation, and the image blurring induced by the mask pellicle should be taken into account in computational microlithography. In this work, we investigate how the mask pellicle impacts the accuracy of the OPC model, and we show that, considering the extremely tight critical dimension control spec for the 45 nm node, taking the mask pellicle effect into the OPC model becomes necessary.

  1. Accurate modeling of SiPM detectors coupled to FE electronics for timing performance analysis

    Ciciriello, F.; Corsi, F.; Licciulli, F.; Marzocca, C. [DEE-Politecnico di Bari, Via Orabona 4, I-70125 Bari (Italy); Matarrese, G., E-mail: matarrese@deemail.poliba.it [DEE-Politecnico di Bari, Via Orabona 4, I-70125 Bari (Italy); Del Guerra, A.; Bisogni, M.G. [Department of Physics, University of Pisa, Largo Bruno Pontecorvo 3, I-56127 Pisa (Italy)

    2013-08-01

    It has already been shown that the shape of the current pulse produced by a SiPM in response to an incident photon is appreciably affected by the characteristics of the front-end electronics (FEE) used to read out the detector. When the application requires approaching the best theoretical timing performance of the detection system, the influence of all the parasitics associated with the SiPM–FEE coupling can play a relevant role and must be adequately modeled. In particular, it has been reported that the shape of the current pulse is affected by the parasitic inductance of the wiring connection between the SiPM and the FEE. In this contribution, we extend a previously presented SiPM model to account for the wiring inductance. Various combinations of the main performance parameters of the FEE (input resistance and bandwidth) have been simulated in order to evaluate their influence on the timing accuracy of the detection system when the time pick-off of each event is extracted by means of a leading edge discriminator (LED) technique.
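
    For reference, the leading-edge-discriminator time pick-off mentioned above amounts to finding the first threshold crossing of the pulse and refining it by interpolation. The bi-exponential pulse in the sketch below is a generic stand-in, not the authors' circuit-level SiPM/FEE model.

```python
# Leading-edge-discriminator (LED) time pick-off: the time stamp is the first
# crossing of a fixed threshold, refined by linear interpolation.
import numpy as np

def led_time(t, v, threshold):
    """Return the interpolated time at which v first crosses `threshold`."""
    above = np.nonzero(v >= threshold)[0]
    if above.size == 0:
        return None                      # no crossing found
    k = above[0]
    if k == 0:
        return t[0]
    # Linear interpolation between sample k-1 (below) and sample k (above).
    frac = (threshold - v[k - 1]) / (v[k] - v[k - 1])
    return t[k - 1] + frac * (t[k] - t[k - 1])

t = np.linspace(0.0, 200e-9, 4001)                          # 50 ps sampling
pulse = 1.0e-3 * (np.exp(-t / 40e-9) - np.exp(-t / 2e-9))   # arbitrary bi-exponential [A]
print(led_time(t, pulse, threshold=0.2e-3))
```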

  2. Accurate programmable electrocardiogram generator using a dynamical model implemented on a microcontroller

    Chien Chang, Jia-Ren; Tai, Cheng-Chi

    2006-07-01

    This article reports on the design and development of a complete, programmable electrocardiogram (ECG) generator, which can be used for the testing, calibration and maintenance of electrocardiograph equipment. A modified mathematical model, developed from the three coupled ordinary differential equations of McSharry et al. [IEEE Trans. Biomed. Eng. 50, 289 (2003)], was used to locate precisely the positions of the onset, termination, angle, and duration of individual components in an ECG. Generator facilities are provided so the user can adjust the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave settings. The heart rate can be adjusted in increments of 1 BPM (beats per minute), from 20 to 176 BPM, while the amplitude of the ECG signal can be set from 0.1 to 400 mV with a 0.1 mV resolution. Experimental results show that the proposed concept and the resulting system are feasible.
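
    The dynamical model referred to above can be sketched as follows: a limit cycle in the (x, y) plane drives Gaussian-shaped PQRST events in z. The event parameters below are representative values from the literature, not the article's calibrated generator settings, and baseline wander is omitted.

```python
# Sketch of the three coupled ODEs of McSharry et al.: a circular limit cycle
# drives Gaussian PQRST deflections in the z component (the synthetic ECG).
import numpy as np
from scipy.integrate import solve_ivp

theta_i = np.array([-np.pi/3, -np.pi/12, 0.0, np.pi/12, np.pi/2])  # P,Q,R,S,T angles
a_i = np.array([1.2, -5.0, 30.0, -7.5, 0.75])                      # event amplitudes
b_i = np.array([0.25, 0.1, 0.1, 0.1, 0.4])                         # event widths
omega = 2*np.pi * 60/60.0        # angular rate for 60 beats per minute

def ecg_rhs(t, s):
    x, y, z = s
    alpha = 1.0 - np.hypot(x, y)
    theta = np.arctan2(y, x)
    dtheta = np.mod(theta - theta_i + np.pi, 2*np.pi) - np.pi      # wrapped phase difference
    dx = alpha*x - omega*y
    dy = alpha*y + omega*x
    dz = -np.sum(a_i * dtheta * np.exp(-dtheta**2 / (2*b_i**2))) - z   # baseline wander omitted
    return [dx, dy, dz]

sol = solve_ivp(ecg_rhs, (0.0, 5.0), [1.0, 0.0, 0.0],
                max_step=1e-3, dense_output=True)
t = np.linspace(0, 5, 2000)
ecg = sol.sol(t)[2]              # synthetic ECG trace (arbitrary units)
```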

  3. How to build accurate macroscopic models of actinide ions in aqueous solvents?

    Classical molecular dynamics (MD) simulations based on parameterized force fields allow one to simulate large molecular systems over significantly long simulation times (usually at the ns scale and above). Hence, they provide statistically relevant sampled sets of data, which may then be post-processed to estimate specific properties. However, the study of ligand coordination dynamics around heavy ions requires sophisticated force fields accounting in particular for polarization phenomena, as well as for the charge-transfer effects affecting ion/ligand interactions, which are shown to be significant in several heavy-element systems. Our current efforts focus on the development of force-field models for radionuclides, with the intention of pushing as far as possible the accuracy of all competing interactions between the various elements present in solution, that is, the metal, the ligands, the solvent, and the counter-ions.

  4. Extrapolation of Urn Models via Poissonization: Accurate Measurements of the Microbial Unknown

    Lladser, Manuel; Reeder, Jens; 10.1371/journal.pone.0021105

    2011-01-01

    The availability of high-throughput parallel methods for sequencing microbial communities is increasing our knowledge of the microbial world at an unprecedented rate. Though most attention has focused on determining lower-bounds on the alpha-diversity i.e. the total number of different species present in the environment, tight bounds on this quantity may be highly uncertain because a small fraction of the environment could be composed of a vast number of different species. To better assess what remains unknown, we propose instead to predict the fraction of the environment that belongs to unsampled classes. Modeling samples as draws with replacement of colored balls from an urn with an unknown composition, and under the sole assumption that there are still undiscovered species, we show that conditionally unbiased predictors and exact prediction intervals (of constant length in logarithmic scale) are possible for the fraction of the environment that belongs to unsampled classes. Our predictions are based on a P...

  5. Combined model of non-conformal layer growth for accurate optical simulation of thin-film silicon solar cells

    Sever, M.; Lipovsek, B.; Krc, J.; Campa, A.; Topic, M. [University of Ljubljana, Faculty of Electrical Engineering Trzaska cesta 25, Ljubljana 1000 (Slovenia); Sanchez Plaza, G. [Technical University of Valencia, Valencia Nanophotonics Technology Center (NTC) Valencia 46022 (Spain); Haug, F.J. [Ecole Polytechnique Federale de Lausanne EPFL, Institute of Microengineering IMT, Photovoltaics and Thin-Film Electronics Laboratory, Neuchatel 2000 (Switzerland); Duchamp, M. [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons Institute for Microstructure Research, Research Centre Juelich, Juelich D-52425 (Germany); Soppe, W. [ECN-Solliance, High Tech Campus 5, Eindhoven 5656 AE (Netherlands)

    2013-12-15

    In thin-film silicon solar cells, textured interfaces are introduced, leading to improved antireflection and light-trapping capabilities of the devices. Thin layers are deposited on surface-textured substrates or superstrates, and the texture is translated to internal interfaces. For accurate optical modelling of thin-film silicon solar cells it is important to define and include the morphology of textured interfaces as realistically as possible. In this paper we present a model of thin-layer growth on textured surfaces which combines two growth principles: conformal and isotropic. With the model we can predict the morphology of subsequent internal interfaces in thin-film silicon solar cells based on the known morphology of the substrate or superstrate. Calibration of the model for different materials grown under certain conditions is done on various cross-sectional scanning electron microscopy images of realistic devices. Advantages over existing growth modelling approaches are demonstrated; one of them is the model's ability to predict, and thus avoid, textures with a high likelihood of defective regions forming inside the Si absorber layers. The developed model of layer growth is used in rigorous 3-D optical simulations employing the COMSOL simulator. A sinusoidal texture of the substrate is optimised for the case of a micromorph silicon solar cell. More than a 50% increase in the short-circuit current density of the bottom cell with respect to the flat case is predicted, assuming defect-free absorber layers. The developed approach enables accurate prediction and powerful design of current-matched top and bottom cells.

  6. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    Tao, Jianmin, E-mail: jianmin.tao@temple.edu [Department of Physics, Temple University, Philadelphia, Pennsylvania 19122 (United States); Rappe, Andrew M. [Department of Chemistry, University of Pennsylvania, Philadelphia, Pennsylvania 19104-6323 (United States)

    2016-01-21

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
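
    To illustrate the model-polarizability idea at leading order only, the sketch below evaluates the Casimir-Polder integral for C6 with a one-pole dipole polarizability. This is not the article's modified single-frequency treatment of C8 and C10; the sample parameters are placeholders in atomic units.

```python
# Leading-order (C6) Casimir-Polder integral with a one-pole model dipole
# polarizability alpha(i*w) = alpha0 / (1 + (w/w1)^2), in atomic units.
import numpy as np
from scipy.integrate import quad

def alpha_model(w, alpha0, w1):
    return alpha0 / (1.0 + (w / w1)**2)

def c6(alpha0_a, w_a, alpha0_b, w_b):
    integrand = lambda w: alpha_model(w, alpha0_a, w_a) * alpha_model(w, alpha0_b, w_b)
    val, _ = quad(integrand, 0.0, np.inf)
    return 3.0 / np.pi * val

# Placeholder parameters for two species A and B (static polarizability, pole frequency).
print(c6(10.0, 0.5, 12.0, 0.45))
# For the one-pole model the integral has the closed London form
# C6 = 1.5 * a0A * a0B * wA * wB / (wA + wB), which the quadrature reproduces.
```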

  7. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry

  8. A fuzzy-logic-based approach to accurate modeling of a double gate MOSFET for nanoelectronic circuit design

    The double gate (DG) silicon MOSFET with an extremely short-channel length has the appropriate features to constitute the devices for nanoscale circuit design. To develop a physical model for extremely scaled DG MOSFETs, the drain current in the channel must be accurately determined under the application of drain and gate voltages. However, modeling the transport mechanism for the nanoscale structures requires the use of overkill methods and models in terms of their complexity and computation time (self-consistent, quantum computations, ...). Therefore, new methods and techniques are required to overcome these constraints. In this paper, a new approach based on the fuzzy logic computation is proposed to investigate nanoscale DG MOSFETs. The proposed approach has been implemented in a device simulator to show the impact of the proposed approach on the nanoelectronic circuit design. The approach is general and thus is suitable for any type of nanoscale structure investigation problems in the nanotechnology industry. (semiconductor devices)

  9. A fuzzy-logic-based approach to accurate modeling of a double gate MOSFET for nanoelectronic circuit design

    F. Djeffal; A. Ferdi; M. Chahdi

    2012-01-01

    The double gate (DG) silicon MOSFET with an extremely short-channel length has the appropriate features to constitute the devices for nanoscale circuit design. To develop a physical model for extremely scaled DG MOSFETs, the drain current in the channel must be accurately determined under the application of drain and gate voltages. However, modeling the transport mechanism for the nanoscale structures requires the use of overkill methods and models in terms of their complexity and computation time (self-consistent, quantum computations, ...). Therefore, new methods and techniques are required to overcome these constraints. In this paper, a new approach based on the fuzzy logic computation is proposed to investigate nanoscale DG MOSFETs. The proposed approach has been implemented in a device simulator to show the impact of the proposed approach on the nanoelectronic circuit design. The approach is general and thus is suitable for any type of nanoscale structure investigation problems in the nanotechnology industry.

  10. Wind-tunnel tests and modeling indicate that aerial dispersant delivery operations are highly accurate

    Hoffman, C.; Fritz, B. [United States Dept. of Agriculture, College Station, TX (United States); Nedwed, T. [ExxonMobil Upstream Research Co., Houston, TX (United States); Coolbaugh, T. [ExxonMobil Research and Engineering Co., Fairfax, VA (United States); Huber, C.A. [CAH Inc., Williamsburg, VA (United States)

    2009-07-01

    Oil dispersants are used to accelerate the dispersion of floating oil slicks. This study was conducted to select application equipment that will help to optimize the application of oil dispersants from aircraft. Oil spill responders have a broad range of oil dispersants at their disposal because the physical and chemical interaction between the oil and dispersant is critical to successful mitigation. In order to make efficient use of dispersants, it is important to evaluate how each one atomizes once released from an aircraft. The specific goal of this study was to evaluate current spray nozzles used to spray oil dispersants from aircraft. The United States Department of Agriculture's high-speed wind tunnel facility in College Station, Texas was used to determine droplet size distributions generated by dispersant delivery nozzles at wind speeds similar to those used in aerial dispersant applications. Droplet distribution was quantified using a laser particle size analyzer. Wind-tunnel tests were conducted using water, Corexit 9500 and 9527, as well as a new dispersant gel being developed by ExxonMobil. The measured drop-size distributions were then used in an agricultural spray model to predict the delivery efficiency and swath width of dispersant delivered at flight speeds and altitudes commonly used for dispersant application. It was concluded that current practices for aerial application of dispersants lead to very efficient application. 19 refs., 5 tabs., 10 figs.

  11. Wind-tunnel tests and modeling indicate that aerial dispersant delivery operations are highly accurate

    Oil dispersants are used to accelerate the dispersion of floating oil slicks. This study was conducted to select application equipment that will help to optimize the application of oil dispersants from aircraft. Oil spill responders have a broad range of oil dispersants at their disposal because the physical and chemical interaction between the oil and dispersant is critical to successful mitigation. In order to make efficient use of dispersants, it is important to evaluate how each one atomizes once released from an aircraft. The specific goal of this study was to evaluate current spray nozzles used to spray oil dispersants from aircraft. The United States Department of Agriculture's high-speed wind tunnel facility in College Station, Texas was used to determine droplet size distributions generated by dispersant delivery nozzles at wind speeds similar to those used in aerial dispersant applications. Droplet distribution was quantified using a laser particle size analyzer. Wind-tunnel tests were conducted using water, Corexit 9500 and 9527, as well as a new dispersant gel being developed by ExxonMobil. The measured drop-size distributions were then used in an agricultural spray model to predict the delivery efficiency and swath width of dispersant delivered at flight speeds and altitudes commonly used for dispersant application. It was concluded that current practices for aerial application of dispersants lead to very efficient application. 19 refs., 5 tabs., 10 figs.

  12. The human skin/chick chorioallantoic membrane model accurately predicts the potency of cosmetic allergens.

    Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S

    2009-04-01

    The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin. PMID:19054059

  13. GPS satellite and receiver instrumental biases estimation using least squares method for accurate ionosphere modelling

    G Sasibhushana Rao

    2007-10-01

    The positional accuracy of the Global Positioning System (GPS) is limited due to several error sources. The major error source is the ionosphere. By augmenting the GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits the positional accuracy is instrumental biases. Calibration of these biases is particularly important in achieving the CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the TEC using a 4th-order polynomial. The algorithm uses values corresponding to a single station for a one-month period, and the results confirm the validity of the algorithm. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 ns. The observed mean bias errors are of the order of −3.638 ns and −4.71 ns for satellites 1 and 31, respectively. It is found that the results are consistent over the period.
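
    The least-squares setup described above can be sketched as follows: each slant-TEC observation is modelled as a low-order polynomial vertical-TEC model mapped to the slant path, plus a combined satellite-plus-receiver bias, and all unknowns are estimated jointly. The polynomial variables, mapping function and data below are simplified placeholders, not the paper's exact 4th-order formulation.

```python
# Joint least-squares estimation of polynomial TEC coefficients and combined
# satellite-plus-receiver biases from slant-TEC observations.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_sats = 200, 4

lt   = rng.uniform(0, 24, n_obs)            # local time [h]
lat  = rng.uniform(10, 20, n_obs)           # geographic latitude [deg]
elev = rng.uniform(20, 90, n_obs)           # elevation angle [deg]
sat  = rng.integers(0, n_sats, n_obs)       # satellite index of each observation
mf   = 1.0 / np.sin(np.radians(elev))       # simple slant mapping function

# Columns: polynomial terms (2nd order here for brevity) scaled by the mapping
# function, followed by one bias column per satellite (receiver bias absorbed).
poly = np.column_stack([np.ones(n_obs), lt, lat, lt**2, lt*lat, lat**2]) * mf[:, None]
bias_cols = np.zeros((n_obs, n_sats))
bias_cols[np.arange(n_obs), sat] = 1.0
A = np.hstack([poly, bias_cols])

# Synthetic "measured" slant TEC consistent with some assumed true coefficients.
x_true = np.concatenate([[50, 1.0, -0.5, -0.02, 0.01, 0.03], [2.0, -1.5, 0.7, 3.1]])
stec = A @ x_true + rng.normal(0, 0.5, n_obs)

x_hat, *_ = np.linalg.lstsq(A, stec, rcond=None)
tec_coeffs, biases = x_hat[:6], x_hat[6:]
print(biases)        # estimated combined satellite-plus-receiver biases [TECU]
```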

  14. A Comparison of Digital Elevation Models to Accurately Predict Stream Locations

    Trowbridge, Spencer

    Three separate digital elevation models (DEMs) were compared in their ability to predict stream locations. The first DEM, from the Shuttle Radar Topography Mission, had a resolution of 90 meters; the second DEM, from the National Elevation Dataset, had a resolution of 30 meters; and the third DEM was created from Light Detection and Ranging (LiDAR) data and had a resolution of 4.34 meters. Ultimately, stream locations were created from these DEMs and compared to the National Hydrography Dataset (NHD) and stream channels traced from aerial photographs. Each bank of the named streams of the Papillion Creek Watershed was traced, and samples were obtained that represent the error in the placement of the derived stream locations. Measurements were taken from the centerline of the traced stream channels to where orthogonal transects intersected with the derived stream channels of the DEMs and the streams of the NHD. This study found that DEMs with differing resolutions delineate stream channels differently and that, without human assistance in processing the elevation data, the finest-resolution DEM was not the best at reproducing stream locations.

  15. Reduced order modeling of some fluid flows of industrial interest

    Some basic ideas are presented for the construction of robust, computationally efficient reduced order models amenable to be used in industrial environments, combined with somewhat rough computational fluid dynamics solvers. These ideas result from a critical review of the basic principles of proper orthogonal decomposition-based reduced order modeling of both steady and unsteady fluid flows. In particular, the extent to which some artifacts of the computational fluid dynamics solvers can be ignored is addressed, which opens up the possibility of obtaining quite flexible reduced order models. The methods are illustrated with the steady aerodynamic flow around a horizontal tail plane of a commercial aircraft in transonic conditions, and the unsteady lid-driven cavity problem. In both cases, the approximations are fairly good, thus reducing the computational cost by a significant factor. (review)
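
    The proper-orthogonal-decomposition construction underlying such reduced order models can be sketched generically: collect snapshots from the full-order solver, extract the dominant modes with an SVD, and project the full operator onto those modes. The toy 1-D diffusion operator below is only a stand-in for a CFD solver, not the aerodynamic or cavity cases of the record.

```python
# Generic POD/Galerkin reduced order model built from snapshots via an SVD.
import numpy as np

n, r = 200, 6

# Full-order linear operator: 1-D diffusion (tridiagonal Laplacian), dx/dt = A x.
A = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
     + np.diag(np.ones(n-1), -1)) * n**2 * 1e-4

# Snapshots from a few short full-order runs (explicit Euler, random initial states).
rng = np.random.default_rng(1)
snaps = []
for _ in range(4):
    x = rng.standard_normal(n)
    for _ in range(10):
        x = x + 1e-3 * (A @ x)
        snaps.append(x.copy())
S = np.array(snaps).T                         # snapshot matrix, shape (n, n_snapshots)

U, s, _ = np.linalg.svd(S, full_matrices=False)
Phi = U[:, :r]                                # POD basis (first r modes)
energy = np.cumsum(s**2) / np.sum(s**2)       # captured "energy" versus mode count

# Galerkin projection: the reduced state evolves in r dimensions.
A_r = Phi.T @ A @ Phi
x0 = rng.standard_normal(n)
a = Phi.T @ x0                                # reduced coordinates
for _ in range(100):
    a = a + 1e-3 * (A_r @ a)
x_rom = Phi @ a                               # lifted back to the full space
print(energy[:r])
```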

  16. Reduced order modeling of some fluid flows of industrial interest

    Alonso, D; Terragni, F; Velazquez, A; Vega, J M, E-mail: josemanuel.vega@upm.es [E.T.S.I. Aeronauticos, Universidad Politecnica de Madrid, 28040 Madrid (Spain)

    2012-06-01

    Some basic ideas are presented for the construction of robust, computationally efficient reduced order models amenable to be used in industrial environments, combined with somewhat rough computational fluid dynamics solvers. These ideas result from a critical review of the basic principles of proper orthogonal decomposition-based reduced order modeling of both steady and unsteady fluid flows. In particular, the extent to which some artifacts of the computational fluid dynamics solvers can be ignored is addressed, which opens up the possibility of obtaining quite flexible reduced order models. The methods are illustrated with the steady aerodynamic flow around a horizontal tail plane of a commercial aircraft in transonic conditions, and the unsteady lid-driven cavity problem. In both cases, the approximations are fairly good, thus reducing the computational cost by a significant factor. (review)

  17. What makes an accurate and reliable subject-specific finite element model? A case study of an elephant femur.

    Panagiotopoulou, O; Wilshin, S D; Rayfield, E J; Shefelbine, S J; Hutchinson, J R

    2012-02-01

    Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form-function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reliable model. We detail how choices of element size, type and material properties in FEA influence the results of simulations. We also present an empirical model for estimating heterogeneous material properties throughout an elephant femur (but of broad applicability to FEA). We then use an ex vivo experimental validation test of a cadaveric femur to check our FEA results and find that the heterogeneous model matches the experimental results extremely well, and far better than the homogeneous model. We emphasize how considering heterogeneous material properties in FEA may be critical, so this should become standard practice in comparative FEA studies along with convergence analyses, consideration of element size, type and experimental validation. These steps may be required to obtain accurate models and derive reliable conclusions from them. PMID:21752810

  18. Accurate Modeling of the Cubic and Antiferrodistortive Phases of SrTiO3 with Screened Hybrid Density Functional Theory

    El-Mellouhi, Fadwa; Lucero, Melissa J; Scuseria, Gustavo E

    2011-01-01

    We have calculated the properties of SrTiO3 (STO) using a wide array of density functionals ranging from standard semi-local functionals to modern range-separated hybrids, combined with several basis sets of varying size/quality. We show how these combinations' predictive ability varies significantly, both for STO's cubic and antiferrodistortive (AFD) phases, with the greatest variation in functional/basis set efficacy seen in modeling the AFD phase. The screened hybrid functionals we utilized predict the structural properties of both phases in very good agreement with experiment, especially if used with large (but still computationally tractable) basis sets. The most accurate results presented in this study, namely those from HSE06/modified-def2-TZVP, stand as the most accurate modeling of STO to date when compared to the literature; these results agree well with experimental structural and electronic properties as well as providing insight into the band structure alteration during the phase transition.

  19. Accurate prediction of interference minima in linear molecular harmonic spectra by a modified two-center model

    Xin, Cui; Di-Yu, Zhang; Gao, Chen; Ji-Gen, Chen; Si-Liang, Zeng; Fu-Ming, Guo; Yu-Jun, Yang

    2016-03-01

    We demonstrate that the interference minima in the linear molecular harmonic spectra can be accurately predicted by a modified two-center model. Based on systematically investigating the interference minima in the linear molecular harmonic spectra by the strong-field approximation (SFA), it is found that the locations of the harmonic minima are related not only to the nuclear distance between the two main atoms contributing to the harmonic generation, but also to the symmetry of the molecular orbital. Therefore, we modify the initial phase difference between the double wave sources in the two-center model, and predict the harmonic minimum positions consistent with those simulated by SFA. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant Nos. 11274001, 11274141, 11304116, 11247024, and 11034003), and the Jilin Provincial Research Foundation for Basic Research, China (Grant Nos. 20130101012JC and 20140101168JC).

  20. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    Shiyao Wang

    2016-02-01

    Full Text Available A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required for the removal of outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform extensive field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.
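
    The predictive-gating idea can be sketched in one dimension: fit a short autoregressive model to the recent position track, predict the next fix, and reject the incoming GPS measurement when it disagrees with the prediction by more than a grid-sized bound. A plain AR(p) least-squares fit stands in here for the paper's bank of ARMA models; all numbers are illustrative placeholders.

```python
# AR(p) one-step prediction used as an outlier gate before fusing a GPS fix.
import numpy as np

def ar_predict(history, p=3):
    """Fit AR(p) by least squares and return the one-step-ahead prediction."""
    y = np.asarray(history, dtype=float)
    X = np.array([y[i:i + p] for i in range(len(y) - p)])
    t = y[p:]
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)
    return float(y[-p:] @ coef)

track = [0.0, 0.52, 1.01, 1.49, 2.02, 2.51, 3.00, 3.52]   # past positions [m]
new_fix = 9.7                                              # incoming GPS measurement [m]
grid_size = 0.5                                            # pre-specified grid constraint [m]

pred = ar_predict(track)
if abs(new_fix - pred) > 3 * grid_size:
    fused = pred                        # outlier: fall back on the prediction / DR
else:
    fused = 0.5 * (new_fix + pred)      # simple blend once the consensus check passes
print(pred, fused)
```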

  1. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required for the removal of outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform extensive field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car. PMID:26927108

  2. A method for accurate modelling of the crystal response function at a crystal sub-level applied to PET reconstruction

    Stute, S.; Benoit, D.; Martineau, A.; Rehfeld, N. S.; Buvat, I.

    2011-02-01

    Positron emission tomography (PET) images suffer from low spatial resolution and signal-to-noise ratio. Accurate modelling of the effects affecting resolution within iterative reconstruction algorithms can improve the trade-off between spatial resolution and signal-to-noise ratio in PET images. In this work, we present an original approach for modelling the resolution loss introduced by physical interactions between and within the crystals of the tomograph and we investigate the impact of such modelling on the quality of the reconstructed images. The proposed model includes two components: modelling of the inter-crystal scattering and penetration (interC) and modelling of the intra-crystal count distribution (intraC). The parameters of the model were obtained using a Monte Carlo simulation of the Philips GEMINI GXL response. Modelling was applied to the raw line-of-response geometric histograms along the four dimensions and introduced in an iterative reconstruction algorithm. The impact of modelling interC, intraC or combined interC and intraC on spatial resolution, contrast recovery and noise was studied using simulated phantoms. The feasibility of modelling interC and intraC in two clinical 18F-NaF scans was also studied. Measurements on Monte Carlo simulated data showed that, without any crystal interaction modelling, the radial spatial resolution in air varied from 5.3 mm FWHM at the centre of the field-of-view (FOV) to 10 mm at 266 mm from the centre. Resolution was improved with interC modelling (from 4.4 mm in the centre to 9.6 mm at the edge), or with intraC modelling only (from 4.8 mm in the centre to 4.3 mm at the edge), and it became stationary across the FOV (4.2 mm FWHM) when combining interC and intraC modelling. This improvement in resolution yielded significant contrast enhancement, e.g. from 65 to 76% and 55.5 to 68% for a 6.35 mm radius sphere with a 3.5 sphere-to-background activity ratio at 55 and 215 mm from the centre of the FOV, respectively

  3. A method for accurate modelling of the crystal response function at a crystal sub-level applied to PET reconstruction

    Positron emission tomography (PET) images suffer from low spatial resolution and signal-to-noise ratio. Accurate modelling of the effects affecting resolution within iterative reconstruction algorithms can improve the trade-off between spatial resolution and signal-to-noise ratio in PET images. In this work, we present an original approach for modelling the resolution loss introduced by physical interactions between and within the crystals of the tomograph and we investigate the impact of such modelling on the quality of the reconstructed images. The proposed model includes two components: modelling of the inter-crystal scattering and penetration (interC) and modelling of the intra-crystal count distribution (intraC). The parameters of the model were obtained using a Monte Carlo simulation of the Philips GEMINI GXL response. Modelling was applied to the raw line-of-response geometric histograms along the four dimensions and introduced in an iterative reconstruction algorithm. The impact of modelling interC, intraC or combined interC and intraC on spatial resolution, contrast recovery and noise was studied using simulated phantoms. The feasibility of modelling interC and intraC in two clinical 18F-NaF scans was also studied. Measurements on Monte Carlo simulated data showed that, without any crystal interaction modelling, the radial spatial resolution in air varied from 5.3 mm FWHM at the centre of the field-of-view (FOV) to 10 mm at 266 mm from the centre. Resolution was improved with interC modelling (from 4.4 mm in the centre to 9.6 mm at the edge), or with intraC modelling only (from 4.8 mm in the centre to 4.3 mm at the edge), and it became stationary across the FOV (4.2 mm FWHM) when combining interC and intraC modelling. This improvement in resolution yielded significant contrast enhancement, e.g. from 65 to 76% and 55.5 to 68% for a 6.35 mm radius sphere with a 3.5 sphere-to-background activity ratio at 55 and 215 mm from the centre of the FOV, respectively

  4. Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests.

    Gao, Yaozong; Shao, Yeqin; Lian, Jun; Wang, Andrew Z; Chen, Ronald C; Shen, Dinggang

    2016-06-01

    Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to low tissue contrast of CT images, as well as large variations of shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape prior can be easily incorporated to regularize the segmentation. Nonetheless, the sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a non-local external force for each vertex of deformable model, thus overcoming the initialization problem suffered by the traditional deformable models. To learn a reliable displacement regressor, two strategies are particularly proposed. 1) A multi-task random forest is proposed to learn the displacement regressor jointly with the organ classifier; 2) an auto-context model is used to iteratively enforce structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification or regression based methods, and also several other existing methods in CT pelvic organ segmentation. PMID:26800531
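
    The displacement-regressor idea described above can be sketched with a multi-output random forest that maps local patch features of a voxel to the 3-D displacement toward the organ boundary. The features and labels below are random placeholders, and the multi-task coupling with a classifier and the auto-context iterations are not reproduced.

```python
# Random-forest displacement regressor: patch features -> 3-D displacement
# from a voxel to the nearest organ-boundary point (multi-output regression).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_voxels, n_features = 5000, 64

X = rng.standard_normal((n_voxels, n_features))        # patch appearance features
y = rng.standard_normal((n_voxels, 3)) * 10.0          # 3-D displacements [mm]

reg = RandomForestRegressor(n_estimators=50, max_depth=12, n_jobs=-1, random_state=0)
reg.fit(X, y)                                          # multi-output regression

# At test time each voxel votes for a boundary location (its position plus the
# predicted displacement); these votes drive the deformable model's vertices.
test_features = rng.standard_normal((10, n_features))
predicted_disp = reg.predict(test_features)            # shape (10, 3)
print(predicted_disp.shape)
```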

  5. Fast and accurate Monte Carlo modeling of a kilovoltage X-ray therapy unit using a photon-source approximation for treatment planning in complex media

    B Zeinali-Rafsanjani

    2015-01-01

    Full Text Available To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL, percentage depth doses (PDDs and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies including chest-wall irradiation using kilovoltage beam.

  6. SU-E-T-475: An Accurate Linear Model of Tomotherapy MLC-Detector System for Patient Specific Delivery QA

    Purpose: An accurate leaf fluence model can be used in applications such as patient specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is known that the total fluence is not a linear combination of individual leaf fluences due to leakage-transmission, tongue-and-groove, and source occlusion effects. Here we propose a method to model the nonlinear effects as linear terms, thus making the MLC-detector system a linear system. Methods: A leaf pattern basis (LPB) consisting of no-leaf-open, single-leaf-open, double-leaf-open and triple-leaf-open patterns is chosen to represent linear and major nonlinear effects of leaf fluence as a linear system. An arbitrary leaf pattern can be expressed as (or decomposed to) a linear combination of the LPB either pulse by pulse or weighted by dwelling time. The exit detector responses to the LPB are obtained by processing returned detector signals resulting from the predefined leaf patterns for each jaw setting. Through forward transformation, the detector signal can be predicted given a delivery plan. An equivalent leaf open time (LOT) sinogram containing output variation information can also be inversely calculated from the measured detector signals. Twelve patient plans were delivered in air. The equivalent LOT sinograms were compared with their planned sinograms. Results: The whole calibration process was done in 20 minutes. For two randomly generated leaf patterns, 98.5% of the active channels showed differences within 0.5% of the local maximum between the predicted and measured signals. Averaged over the twelve plans, 90% of LOT errors were within ±10 ms. The LOT systematic error increases and shows an oscillating pattern when LOT is shorter than 50 ms. Conclusion: The LPB method models the MLC-detector response accurately, which improves patient specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is sensitive enough to detect systematic LOT errors as small as 10 ms.
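
    The linear-system view can be sketched as follows: stack the calibrated detector responses to the leaf-pattern basis as columns of a matrix, predict the exit-detector signal of a delivery as a weighted sum of those columns, and recover equivalent leaf-open-time weights from a measured signal by nonnegative least squares. All matrices below are random placeholders rather than calibrated TomoTherapy data.

```python
# Forward and inverse use of a leaf-pattern-basis linear model of the
# MLC-detector system.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_channels, n_basis = 640, 120

B = np.abs(rng.standard_normal((n_channels, n_basis)))   # calibrated basis responses
w_true = np.abs(rng.standard_normal(n_basis)) * 20.0     # "true" dwell-time weights [ms]

# Forward model: predicted detector signal for the planned delivery.
d_pred = B @ w_true

# Inverse model: equivalent leaf-open-time weights from a (noisy) measurement.
d_meas = d_pred + rng.normal(0, 0.01 * d_pred.mean(), n_channels)
w_hat, _ = nnls(B, d_meas)
print(np.max(np.abs(w_hat - w_true)))
```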

  7. Use of a clay modeling task to reduce chocolate craving.

    Andrade, Jackie; Pears, Sally; May, Jon; Kavanagh, David J

    2012-06-01

    Elaborated Intrusion theory (EI theory; Kavanagh, Andrade, & May, 2005) posits two main cognitive components in craving: associative processes that lead to intrusive thoughts about the craved substance or activity, and elaborative processes supporting mental imagery of the substance or activity. We used a novel visuospatial task to test the hypothesis that visual imagery plays a key role in craving. Experiment 1 showed that spending 10 min constructing shapes from modeling clay (plasticine) reduced participants' craving for chocolate compared with spending 10 min 'letting your mind wander'. Increasing the load on verbal working memory using a mental arithmetic task (counting backwards by threes) did not reduce craving further. Experiment 2 compared effects on craving of a simpler verbal task (counting by ones) and clay modeling. Clay modeling reduced overall craving strength and strength of craving imagery, and reduced the frequency of thoughts about chocolate. The results are consistent with EI theory, showing that craving is reduced by loading the visuospatial sketchpad of working memory but not by loading the phonological loop. Clay modeling might be a useful self-help tool to help manage craving for chocolate, snacks and other foods. PMID:22369958

  8. On Modeling CPU Utilization of MapReduce Applications

    Rizvandi, Nikzad Babaii; Zomaya, Albert Y

    2012-01-01

    In this paper, we present an approach to predict the total CPU utilization, in terms of CPU clock ticks, of applications running on the MapReduce framework. Our approach has two key phases: profiling and modeling. In the profiling phase, an application is run several times with different sets of MapReduce configuration parameters to profile the total CPU clock ticks of the application on a given platform. In the modeling phase, multiple linear regression is used to map the sets of MapReduce configuration parameters (number of mappers, number of reducers, size of the file system (HDFS) and the size of the input file) to the total CPU clock ticks of the application. This derived model can be used for predicting the total CPU requirements of the same application when using the MapReduce framework on the same platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. Three standard applications (WordCount, Exim Mainlog parsing and Terasort) are used to evaluate our modeling technique on pseu...
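
    The modeling phase amounts to an ordinary least-squares fit of CPU clock ticks against the four configuration parameters named above. The profiling numbers in the sketch below are fabricated placeholders.

```python
# Multiple linear regression mapping MapReduce configuration parameters to
# total CPU clock ticks of a profiled application.
import numpy as np

# Each row: [mappers, reducers, HDFS block size (MB), input size (GB)]
runs = np.array([
    [ 4, 2,  64, 1.0],
    [ 8, 2,  64, 1.0],
    [ 8, 4, 128, 2.0],
    [16, 4, 128, 4.0],
    [16, 8, 256, 4.0],
    [32, 8, 256, 8.0],
], dtype=float)
cpu_ticks = np.array([2.1e11, 2.3e11, 4.0e11, 7.9e11, 8.2e11, 1.6e12])

X = np.column_stack([np.ones(len(runs)), runs])      # intercept + 4 parameters
beta, *_ = np.linalg.lstsq(X, cpu_ticks, rcond=None)

# Predict the CPU demand of the same application for a new configuration.
new_cfg = np.array([1.0, 24, 6, 128, 6.0])
print(new_cfg @ beta)
```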

  9. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communications across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins whose domains occur in more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. PMID:24375512

  10. Comment on ''Accurate analytic model potentials for D2 and H2 based on the perturbed-Morse-oscillator model''

    Huffaker and Cohen (ref. 1) claim that the perturbed-Morse-oscillator (PMO) model for the potential energy function of hydrogen gives very high accuracy, surpassing that of the RKR potential. A more efficient approach to formulating analytical functions based on the PMO model is given, and some defects of the PMO model are discussed.

  11. A homotopy-based sparse representation for fast and accurate shape prior modeling in liver surgical planning.

    Wang, Guotai; Zhang, Shaoting; Xie, Hongzhi; Metaxas, Dimitris N; Gu, Lixu

    2015-01-01

    Shape prior plays an important role in accurate and robust liver segmentation. However, liver shapes have complex variations and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy, but it limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing efficiency when dealing with large-scale training data, we investigate an effective and scalable shape prior modeling method that is more applicable in a clinical liver surgical planning system. We employed the Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling, and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solutions are fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had a high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had a high computational efficiency, and its runtime increased very slowly when the repository's capacity and vertex number grew large. When the repository's capacity was 10,000, with 2000 vertices on each shape, the homotopy method took only about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than the interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement
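
    The sparse-composition step can be sketched with a generic L1-penalized fit: an input shape (vertex coordinates stacked into a vector) is approximated by a sparse combination of repository shapes, and the full L1 regularization path is traced in homotopy style with LARS. Shape alignment, local refinement and the paper's own homotopy updates are omitted; the data are random placeholders.

```python
# Sparse shape composition via the lasso path (LARS/homotopy-style computation).
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n_coords, n_shapes = 3 * 500, 200            # 500 vertices in 3-D, 200 training shapes

D = rng.standard_normal((n_coords, n_shapes))        # shape repository (columns)
true_w = np.zeros(n_shapes)
true_w[[3, 17, 42]] = [0.6, 0.3, 0.1]                # input built from three shapes
y = D @ true_w + 0.01 * rng.standard_normal(n_coords)

# Follow the L1 (lasso) path and take the solution at the smallest penalty.
alphas, active, coefs = lars_path(D, y, method='lasso')
w_sparse = coefs[:, -1]                              # coefficients at the end of the path
recon = D @ w_sparse                                 # shape prior fed to segmentation
print(np.flatnonzero(np.abs(w_sparse) > 1e-3))
```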

  12. User Guide for SMARTIES: Spheroids Modelled Accurately with a Robust T-matrix Implementation for Electromagnetic Scattering

    Somerville, W R C; Ru, E C Le

    2015-01-01

    We provide a detailed user guide for SMARTIES, a suite of Matlab codes for the calculation of the optical properties of oblate and prolate spheroidal particles, with capabilities and ease of use comparable to Mie theory for spheres. SMARTIES is a Matlab implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. The theory behind the improvements in numerical accuracy and convergence is briefly summarised, with reference to the original publications. Instructions for use, a detailed description of the code structure, its range of applicability, as well as guidelines for further developments by advanced users, are discussed in separate sections of this user guide. The code may be useful to researchers seeking a fast, accurate and reliable tool to simulate the near-field and far-field optical properties of elongated particles, but will also appeal to other developers of light-scattering software seeking a reliable benchmark for...

  13. Reduced Models in Chemical Kinetics via Nonlinear Data-Mining

    Eliodoro Chiavazzo

    2014-01-01

    Full Text Available The adoption of detailed mechanisms for chemical kinetics often poses two types of severe challenges: First, the number of degrees of freedom is large; and second, the dynamics is characterized by widely disparate time scales. As a result, reactive flow solvers with detailed chemistry often become intractable even for large clusters of CPUs, especially when dealing with direct numerical simulation (DNS) of turbulent combustion problems. This has motivated the development of several techniques for reducing the complexity of such kinetics models, where, eventually, only a few variables are considered in the development of the simplified model. Unfortunately, no generally applicable a priori recipe for selecting suitable parameterizations of the reduced model is available, and the choice of slow variables often relies upon intuition and experience. We present an automated approach to this task, consisting of three main steps. First, the low dimensional manifold of slow motions is (approximately) sampled by brief simulations of the detailed model, starting from a rich enough ensemble of admissible initial conditions. Second, a global parametrization of the manifold is obtained through the Diffusion Map (DMAP) approach, which has recently emerged as a powerful tool in data analysis/machine learning. Finally, a simplified model is constructed and solved on the fly in terms of the above reduced (slow) variables. Clearly, closing this latter model requires nontrivial interpolation calculations, enabling restriction (mapping from the full ambient space to the reduced one) and lifting (mapping from the reduced space to the ambient one). This is a key step in our approach, and a variety of interpolation schemes are reported and compared. The scope of the proposed procedure is presented and discussed by means of an illustrative combustion example.
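
    The DMAP parametrization step can be sketched generically: build a Gaussian kernel on the sampled states, normalise it into a Markov matrix, and use its leading non-trivial eigenvectors as data-driven slow variables. This is a simplified version (no density normalisation), and the sampled states below are random placeholders for detailed-kinetics snapshots.

```python
# Minimal diffusion-map (DMAP) coordinates from sampled states.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))                 # 300 sampled states, 10 species

# Pairwise squared distances and Gaussian kernel with a median-based bandwidth.
d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
eps = np.median(d2)
K = np.exp(-d2 / eps)

# Row-normalise into a Markov transition matrix and take its eigenvectors.
P = K / K.sum(axis=1, keepdims=True)
evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
evals, evecs = evals.real[order], evecs.real[:, order]

# Skip the trivial constant eigenvector; the next few give the DMAP coordinates
# used as reduced (slow) variables.
dmap_coords = evecs[:, 1:4] * evals[1:4]
print(evals[:5])
```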

  14. Accurate Monte Carlo simulations on FCC and HCP Lennard-Jones solids at very low temperatures and high reduced densities up to 1.30

    Adidharma, Hertanto; Tan, Sugata P.

    2016-07-01

    Canonical Monte Carlo simulations on face-centered cubic (FCC) and hexagonal closed packed (HCP) Lennard-Jones (LJ) solids are conducted at very low temperatures (0.10 ≤ T∗ ≤ 1.20) and high densities (0.96 ≤ ρ∗ ≤ 1.30). A simple and robust method is introduced to determine whether or not the cutoff distance used in the simulation is large enough to provide accurate thermodynamic properties, which enables us to distinguish the properties of FCC from that of HCP LJ solids with confidence, despite their close similarities. Free-energy expressions derived from the simulation results are also proposed, not only to describe the properties of those individual structures but also the FCC-liquid, FCC-vapor, and FCC-HCP solid phase equilibria.

  15. Mobile phone model with metamaterials to reduce the exposure

    Pinto, Yenny; Begaud, Xavier

    2016-04-01

    This work presents a terminal mobile model where an Inverted-F Antenna (IFA) is associated with three different kinds of metamaterials: artificial magnetic conductor (AMC), electromagnetic band gap (EBG) and resistive high-impedance surface (RHIS). The objective was to evaluate whether some metamaterials may be used to reduce exposure while preserving the antenna performances. The exposure has been evaluated using a simplified phantom model. Two configurations, antenna in front of the phantom and antenna hidden by the ground plane, have been evaluated. Results show that using an optimized RHIS, the SAR 10 g is reduced and the antenna performances are preserved. With RHIS solution, the SAR 10 g peak is reduced by 8 % when the antenna is located in front of the phantom and by 6 % when the antenna is hidden by ground plane.

  16. Subspace methods for multi-physics reduced order modeling in nuclear engineering applications

    This manuscript proposes a new extension to a reduced order modeling algorithm, previously introduced for single-physics models, to multi-physics models. This manuscript focuses on loosely-coupled physics models wherein the output of one physics model is fed as input to the next physics model, and each physics model is solved separately while freezing all other physics models. The idea is to perform three reductions at each physics-to-physics interface, one based on the upstream physics, another for the downstream physics, and a third for the interaction thereof. Accurately capturing the interaction between the reduced physics models is an essential feature of the proposed algorithm, and will be the key measure for its success. For standard model execution, this interaction is often captured using an iterative technique that loops over the different physics until convergence or meeting some stopping criterion. We adopt a similar approach in which the effective dimensionality of each physics-to-physics interface is updated iteratively until a user-defined error tolerance is met. A quarter PWR fuel assembly depleted to 32 GWD/MTU by iteratively solving the quasi-static transport-depletion approximation is used to exemplify the application of the proposed algorithm. Active subspaces for the nuclear cross-sections and neutron flux are determined, and compared to the active subspaces obtained without the physics coupling.(author)

  17. Development of Boundary Condition Independent Reduced Order Thermal Models using Proper Orthogonal Decomposition

    Raghupathy, Arun; Ghia, Karman; Ghia, Urmila

    2008-11-01

    Compact Thermal Models (CTMs) to represent IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with fewer computational resources can be effectively used in system-level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom or variables in the computations for such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary condition independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.

  18. A POD reduced order model for resolving angular direction in neutron/photon transport problems

    Buchan, A.G., E-mail: andrew.buchan@imperial.ac.uk [Applied Modelling and Computation Group, Department of Earth Science and Engineering, Imperial College London, SW7 2AZ (United Kingdom); Calloo, A.A.; Goffin, M.G.; Dargaville, S.; Fang, F.; Pain, C.C. [Applied Modelling and Computation Group, Department of Earth Science and Engineering, Imperial College London, SW7 2AZ (United Kingdom); Navon, I.M. [Department of Scientific Computing, Florida State University, Tallahassee, FL 32306-4120 (United States)

    2015-09-01

    This article presents the first Reduced Order Model (ROM) that efficiently resolves the angular dimension of the time independent, mono-energetic Boltzmann Transport Equation (BTE). It is based on Proper Orthogonal Decomposition (POD) and uses the method of snapshots to form optimal basis functions for resolving the direction of particle travel in neutron/photon transport problems. A unique element of this work is that the snapshots are formed from the vector of angular coefficients relating to a high resolution expansion of the BTE's angular dimension. In addition, the individual snapshots are not recorded through time, as in standard POD, but instead they are recorded through space. In essence this work swaps the roles of the dimensions space and time in standard POD methods, with angle and space respectively. It is shown here how the POD model can be formed from the POD basis functions in a highly efficient manner. The model is then applied to two radiation problems; one involving the transport of radiation through a shield and the other through an infinite array of pins. Both problems are selected for their complex angular flux solutions in order to provide an appropriate demonstration of the model's capabilities. It is shown that the POD model can resolve these fluxes efficiently and accurately. In comparison to high resolution models this POD model can reduce the size of a problem by up to two orders of magnitude without compromising accuracy. Solving times are also reduced by similar factors.

  19. A POD reduced order model for resolving angular direction in neutron/photon transport problems

    This article presents the first Reduced Order Model (ROM) that efficiently resolves the angular dimension of the time independent, mono-energetic Boltzmann Transport Equation (BTE). It is based on Proper Orthogonal Decomposition (POD) and uses the method of snapshots to form optimal basis functions for resolving the direction of particle travel in neutron/photon transport problems. A unique element of this work is that the snapshots are formed from the vector of angular coefficients relating to a high resolution expansion of the BTE's angular dimension. In addition, the individual snapshots are not recorded through time, as in standard POD, but instead they are recorded through space. In essence this work swaps the roles of the dimensions space and time in standard POD methods, with angle and space respectively. It is shown here how the POD model can be formed from the POD basis functions in a highly efficient manner. The model is then applied to two radiation problems; one involving the transport of radiation through a shield and the other through an infinite array of pins. Both problems are selected for their complex angular flux solutions in order to provide an appropriate demonstration of the model's capabilities. It is shown that the POD model can resolve these fluxes efficiently and accurately. In comparison to high resolution models this POD model can reduce the size of a problem by up to two orders of magnitude without compromising accuracy. Solving times are also reduced by similar factors

  20. Fast and accurate global multiphase arrival tracking: the irregular shortest-path method in a 3-D spherical earth model

    Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart

    2013-09-01

    The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
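    A toy illustration of the shortest-path idea referred to above, not the authors' multistage irregular scheme: first-arrival traveltimes computed by Dijkstra's algorithm on a small 2-D Cartesian grid with 8-connected edges. The slowness field, grid spacing and source location are made up.

        # First-arrival traveltimes via Dijkstra's shortest-path algorithm on a grid.
        import heapq
        import numpy as np

        slowness = np.ones((50, 50))          # 1/velocity on each node (illustrative)
        slowness[20:30, :] = 2.0              # a slow layer
        h = 1.0                               # grid spacing
        src = (0, 0)

        nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
        tt = np.full(slowness.shape, np.inf)
        tt[src] = 0.0
        heap = [(0.0, src)]
        while heap:
            t, (i, j) = heapq.heappop(heap)
            if t > tt[i, j]:
                continue                      # stale heap entry
            for di, dj in nbrs:
                ni, nj = i + di, j + dj
                if 0 <= ni < tt.shape[0] and 0 <= nj < tt.shape[1]:
                    # edge time = distance * average slowness of the two end nodes
                    d = h * np.hypot(di, dj)
                    cand = t + d * 0.5 * (slowness[i, j] + slowness[ni, nj])
                    if cand < tt[ni, nj]:
                        tt[ni, nj] = cand
                        heapq.heappush(heap, (cand, (ni, nj)))

        print(tt[-1, -1])                     # first-arrival time at the far corner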

  1. Accurate modeling of size and strain broadening in the Rietveld refinement: The "double-Voigt" approach

    Balzar, D. [Ruder Boskovic Inst., Zagreb (Croatia); Ledbetter, H. [National Inst. of Standards and Technology, Boulder, CO (United States)

    1995-12-31

    In the "double-Voigt" approach, an exact Voigt function describes both size- and strain-broadened profiles. The lattice strain is defined in terms of physically credible mean-square strain averaged over a distance in the diffracting domains. Analysis of Fourier coefficients in a harmonic approximation for strain coefficients leads to the Warren-Averbach method for the separation of size and strain contributions to diffraction line broadening. The model is introduced in the Rietveld refinement program in the following way: Line widths are modeled with only four parameters in the isotropic case. Varied parameters are both surface- and volume-weighted domain sizes and root-mean-square strains averaged over two distances. Refined parameters determine the physically broadened Voigt line profile. Instrumental Voigt line profile parameters are added to obtain the observed (Voigt) line profile. To speed computation, the corresponding pseudo-Voigt function is calculated and used as a fitting function in refinement. This approach allows for both fast computer code and accurate modeling in terms of physically identifiable parameters.

  2. Reduced order modeling of fluid/structure interaction.

    Barone, Matthew Franklin; Kalashnikova, Irina; Segalman, Daniel Joseph; Brake, Matthew Robert

    2009-11-01

    This report describes work performed from October 2007 through September 2009 under the Sandia Laboratory Directed Research and Development project titled 'Reduced Order Modeling of Fluid/Structure Interaction.' This project addresses fundamental aspects of techniques for construction of predictive Reduced Order Models (ROMs). A ROM is defined as a model, derived from a sequence of high-fidelity simulations, that preserves the essential physics and predictive capability of the original simulations but at a much lower computational cost. Techniques are developed for construction of provably stable linear Galerkin projection ROMs for compressible fluid flow, including a method for enforcing boundary conditions that preserves numerical stability. A convergence proof and error estimates are given for this class of ROM, and the method is demonstrated on a series of model problems. A reduced order method, based on the method of quadratic components, for solving the von Karman nonlinear plate equations is developed and tested. This method is applied to the problem of nonlinear limit cycle oscillations encountered when the plate interacts with an adjacent supersonic flow. A stability-preserving method for coupling the linear fluid ROM with the structural dynamics model for the elastic plate is constructed and tested. Methods for constructing efficient ROMs for nonlinear fluid equations are developed and tested on a one-dimensional convection-diffusion-reaction equation. These methods are combined with a symmetrization approach to construct a ROM technique for application to the compressible Navier-Stokes equations.

  3. Accelerating transient simulation of linear reduced order models.

    Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad

    2011-10-01

    Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.

  4. Reduced Complexity Volterra Models for Nonlinear System Identification

    Hacıoğlu Rıfat

    2001-01-01

    Full Text Available A broad class of nonlinear systems and filters can be modeled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filter's structure. The parametric complexity also complicates design procedures based upon such a model. This limitation for system identification is addressed in this paper using a Fixed Pole Expansion Technique (FPET) within the Volterra model structure. The FPET approach employs orthonormal basis functions derived from fixed (real or complex) pole locations to expand the Volterra kernels and reduce the number of estimated parameters. The ability of the FPET to considerably reduce the number of estimated parameters is demonstrated by a digital satellite channel example in which we use the proposed method to identify the channel dynamics. Furthermore, a gradient-descent procedure that adaptively selects the pole locations in the FPET structure is developed in the paper.
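    As a rough illustration of the kernel-expansion idea (not the FPET implementation of the paper), the sketch below expands a second-order Volterra model on discrete Laguerre filters with a single fixed real pole a and estimates only the expansion weights by least squares. The test system, pole value and basis size are assumptions; the paper's technique additionally allows complex poles and adaptive pole selection.

        # Second-order Volterra identification with kernels expanded on a small set
        # of discrete Laguerre filters (fixed real pole a); only the expansion
        # weights are estimated, greatly reducing the parameter count.
        import numpy as np
        from scipy.signal import lfilter

        def laguerre_outputs(u, a, n_basis):
            """Outputs of a chain of discrete Laguerre filters driven by u."""
            x = lfilter([np.sqrt(1 - a**2)], [1.0, -a], u)   # first-order low-pass stage
            outs = [x]
            for _ in range(n_basis - 1):
                x = lfilter([-a, 1.0], [1.0, -a], x)         # all-pass stage
                outs.append(x)
            return np.column_stack(outs)                     # shape (N, n_basis)

        rng = np.random.default_rng(1)
        N, a, n_basis = 2000, 0.6, 4
        u = rng.standard_normal(N)

        # "True" system for demonstration: a mildly nonlinear second-order channel.
        y_lin = lfilter([0.5, 0.3], [1.0, -0.7], u)
        y = y_lin + 0.2 * y_lin**2 + 0.01 * rng.standard_normal(N)

        X = laguerre_outputs(u, a, n_basis)
        # Regressors: constant, linear terms, and all pairwise products (2nd-order kernel).
        quad = [X[:, i] * X[:, j] for i in range(n_basis) for j in range(i, n_basis)]
        Phi = np.column_stack([np.ones(N), X] + quad)
        theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        print("parameters estimated:", theta.size, " relative fit error:",
              np.linalg.norm(y - Phi @ theta) / np.linalg.norm(y))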

  5. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER− patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic

  6. Observed allocations of productivity and biomass, and turnover times in tropical forests are not accurately represented in CMIP5 Earth system models

    A significant fraction of anthropogenic CO2 emissions is assimilated by tropical forests and stored as biomass, slowing the accumulation of CO2 in the atmosphere. Because different plant tissues have different functional roles and turnover times, predictions of carbon balance of tropical forests depend on how earth system models (ESMs) represent the dynamic allocation of productivity to different tree compartments. This study shows that observed allocation of productivity, biomass, and turnover times of main tree compartments (leaves, wood, and roots) are not accurately represented in Coupled Model Intercomparison Project Phase 5 ESMs. In particular, observations indicate that biomass saturates with increasing productivity. In contrast, most models predict continuous increases in biomass with increases in productivity. This bias may lead to an over-prediction of carbon uptake in response to CO2 or climate-driven changes in productivity. Compartment-specific productivity and biomass are useful benchmarks to assess terrestrial ecosystem model performance. Improvements in the predicted allocation patterns and turnover times by ESMs will reduce uncertainties in climate predictions. (letter)

  7. Reduced order modeling and analysis of combustion instabilities

    Tamanampudi, Gowtham Manikanta Reddy

    The coupling between unsteady heat release and pressure fluctuations in a combustor leads to the complex phenomenon of combustion instability. Combustion instability can lead to enormous pressure fluctuations and high rates of combustor heat transfer which play a very important role in determining the life and performance of the engine. Although high fidelity simulations are starting to yield detailed understanding of the underlying physics of combustion instability, the enormous computing power required restricts their application to a few runs and fairly simple geometries. To overcome this, low order models are being employed for prediction and analysis. Since low order models cannot by themselves account for the coupling between heat release and pressure fluctuations, combustion response models are required. One such attempt is made through the work presented here using the commercial software COMSOL. The linearized Euler Equations with combustion response models were solved in the frequency domain, implementing the Arnoldi algorithm using the 3D Finite Element solver COMSOL. This work is part of a larger effort to investigate a low order, computationally inexpensive and accurate solver which accounts for mean flow effects, complex boundary conditions and combustion response. This tool was tested against a number of cases presenting longitudinal instabilities. Further, combustion instabilities in a transverse instability chamber were studied and compared with experiments. Both sets of results are in good agreement with experiment. In addition, the effect of nozzle length on the mode shapes in the transverse instability chamber was studied and presented.
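    The Arnoldi algorithm mentioned above is the standard Krylov procedure sketched below in bare numpy form, purely to show what it computes: an orthonormal Krylov basis and a small Hessenberg matrix whose eigenvalues (Ritz values) approximate dominant eigenvalues and hence acoustic modes. The random test matrix is a stand-in for the discretized linearized Euler operator; the study itself relies on COMSOL's built-in eigensolver.

        # Bare-bones Arnoldi iteration (modified Gram-Schmidt orthogonalisation).
        import numpy as np

        def arnoldi(A, v0, m):
            n = A.shape[0]
            Q = np.zeros((n, m + 1))
            H = np.zeros((m + 1, m))
            Q[:, 0] = v0 / np.linalg.norm(v0)
            for k in range(m):
                w = A @ Q[:, k]
                for j in range(k + 1):                 # orthogonalise against previous vectors
                    H[j, k] = Q[:, j] @ w
                    w -= H[j, k] * Q[:, j]
                H[k + 1, k] = np.linalg.norm(w)
                if H[k + 1, k] < 1e-12:                # exact breakdown: Krylov space exhausted
                    return Q[:, :k + 1], H[:k + 1, :k + 1]
                Q[:, k + 1] = w / H[k + 1, k]
            return Q, H

        rng = np.random.default_rng(2)
        A = rng.standard_normal((500, 500))            # stand-in system matrix
        Q, H = arnoldi(A, rng.standard_normal(500), m=40)
        ritz = np.linalg.eigvals(H[:40, :40])
        print(np.sort_complex(ritz)[-3:])              # Ritz values with largest real part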

  8. The i-V curve characteristics of burner-stabilized premixed flames: detailed and reduced models

    Han, Jie

    2016-07-17

    The i-V curve describes the current drawn from a flame as a function of the voltage difference applied across the reaction zone. Since combustion diagnostics and flame control strategies based on electric fields depend on the amount of current drawn from flames, there is significant interest in modeling and understanding i-V curves. We implement and apply a detailed model for the simulation of the production and transport of ions and electrons in one-dimensional premixed flames. An analytical reduced model is developed based on the detailed one, and analytical expressions are used to gain insight into the characteristics of the i-V curve for various flame configurations. In order for the reduced model to capture the spatial distribution of the electric field accurately, the concept of a dead zone region, where voltage is constant, is introduced, and a suitable closure for the spatial extent of the dead zone is proposed and validated. The results from the reduced modeling framework are found to be in good agreement with those from the detailed simulations. The saturation voltage is found to depend significantly on the flame location relative to the electrodes, and on the sign of the voltage difference applied. Furthermore, at sub-saturation conditions, the current is shown to increase linearly or quadratically with the applied voltage, depending on the flame location. These limiting behaviors exhibited by the reduced model elucidate the features of i-V curves observed experimentally. The reduced model relies on the existence of a thin layer where charges are produced, corresponding to the reaction zone of a flame. Consequently, the analytical model we propose is not limited to the study of premixed flames, and may be applied easily to other configurations, e.g. nonpremixed counterflow flames.

  9. BWR stability using a reducing dynamical model; Estabilidad de un BWR con un modelo dinamico reducido

    Ballestrin Bolea, J. M.; Blazquez Martinez, J. B.

    1990-07-01

    BWR stability can be treated with reduced order dynamical models. When the parameters of the model come from experimental data, the predictions are accurate. In this work an alternative derivation of the void fraction equation is made, remarking on the physical structure of the parameters. As the poles of the power/reactivity transfer function are related to the parameters, measuring the poles by other techniques such as noise analysis will lead to the parameters, although the resulting system of equations is non-linear. Simple parametric calculations of the decay ratio are performed, showing why BWRs become unstable when they are operated at low flow and high power. (Author)
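    A worked illustration of the link stated above between the dominant poles of the power/reactivity transfer function and stability: for a dominant pole pair s = sigma +/- j*omega, successive oscillation peaks are one period 2*pi/omega apart, so the decay ratio is DR = exp(2*pi*sigma/omega). The numerical pole values below are invented for illustration.

        # Decay ratio and resonance frequency from a dominant pole pair.
        import numpy as np

        sigma, omega = -0.15, 2.8          # rad/s, a lightly damped dominant pole pair
        decay_ratio = np.exp(2 * np.pi * sigma / omega)
        print(f"decay ratio = {decay_ratio:.2f}")          # ~0.71; DR -> 1 means marginal stability
        print(f"resonance frequency = {omega / (2 * np.pi):.2f} Hz")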

  10. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  11. An accurate locally active memristor model for S-type negative differential resistance in NbOx

    A number of important commercial applications would benefit from the introduction of easily manufactured devices that exhibit current-controlled, or “S-type,” negative differential resistance (NDR). A leading example is emerging non-volatile memory based on crossbar array architectures. Due to the inherently linear current vs. voltage characteristics of candidate non-volatile memristor memory elements, individual memory cells in these crossbar arrays can be addressed only if a highly non-linear circuit element, termed a “selector,” is incorporated in the cell. Selectors based on a layer of niobium oxide sandwiched between two electrodes have been investigated by a number of groups because the NDR they exhibit provides a promisingly large non-linearity. We have developed a highly accurate compact dynamical model for their electrical conduction that shows that the NDR in these devices results from a thermal feedback mechanism. A series of electrothermal measurements and numerical simulations corroborate this model. These results reveal that the leakage currents can be minimized by thermally isolating the selector or by incorporating materials with larger activation energies for electron motion

  12. An accurate locally active memristor model for S-type negative differential resistance in NbOx

    Gibson, Gary A.; Musunuru, Srinitya; Zhang, Jiaming; Vandenberghe, Ken; Lee, James; Hsieh, Cheng-Chih; Jackson, Warren; Jeon, Yoocharn; Henze, Dick; Li, Zhiyong; Stanley Williams, R.

    2016-01-01

    A number of important commercial applications would benefit from the introduction of easily manufactured devices that exhibit current-controlled, or "S-type," negative differential resistance (NDR). A leading example is emerging non-volatile memory based on crossbar array architectures. Due to the inherently linear current vs. voltage characteristics of candidate non-volatile memristor memory elements, individual memory cells in these crossbar arrays can be addressed only if a highly non-linear circuit element, termed a "selector," is incorporated in the cell. Selectors based on a layer of niobium oxide sandwiched between two electrodes have been investigated by a number of groups because the NDR they exhibit provides a promisingly large non-linearity. We have developed a highly accurate compact dynamical model for their electrical conduction that shows that the NDR in these devices results from a thermal feedback mechanism. A series of electrothermal measurements and numerical simulations corroborate this model. These results reveal that the leakage currents can be minimized by thermally isolating the selector or by incorporating materials with larger activation energies for electron motion.

  13. An accurate locally active memristor model for S-type negative differential resistance in NbO{sub x}

    Gibson, Gary A.; Musunuru, Srinitya; Zhang, Jiaming; Lee, James; Hsieh, Cheng-Chih; Jackson, Warren; Jeon, Yoocharn; Henze, Dick; Li, Zhiyong; Stanley Williams, R. [Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, California 94304 (United States); Vandenberghe, Ken [PTD-PPS, Hewlett-Packard Company, 1070 NE Circle Boulevard, Corvallis, Oregon 97330 (United States)

    2016-01-11

    A number of important commercial applications would benefit from the introduction of easily manufactured devices that exhibit current-controlled, or “S-type,” negative differential resistance (NDR). A leading example is emerging non-volatile memory based on crossbar array architectures. Due to the inherently linear current vs. voltage characteristics of candidate non-volatile memristor memory elements, individual memory cells in these crossbar arrays can be addressed only if a highly non-linear circuit element, termed a “selector,” is incorporated in the cell. Selectors based on a layer of niobium oxide sandwiched between two electrodes have been investigated by a number of groups because the NDR they exhibit provides a promisingly large non-linearity. We have developed a highly accurate compact dynamical model for their electrical conduction that shows that the NDR in these devices results from a thermal feedback mechanism. A series of electrothermal measurements and numerical simulations corroborate this model. These results reveal that the leakage currents can be minimized by thermally isolating the selector or by incorporating materials with larger activation energies for electron motion.

  14. An accurate numerical solution to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in rivers

    Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid

    2016-07-01

    We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].

  15. Toward an Accurate Modeling of Hydrodynamic Effects on the Translational and Rotational Dynamics of Biomolecules in Many-Body Systems.

    Długosz, Maciej; Antosiewicz, Jan M

    2015-07-01

    Proper treatment of hydrodynamic interactions is of importance in evaluation of rigid-body mobility tensors of biomolecules in Stokes flow and in simulations of their folding and solution conformation, as well as in simulations of the translational and rotational dynamics of either flexible or rigid molecules in biological systems at low Reynolds numbers. With macromolecules conveniently modeled in calculations or in dynamic simulations as ensembles of spherical frictional elements, various approximations to hydrodynamic interactions, such as the two-body, far-field Rotne-Prager approach, are commonly used, either without concern or as a compromise between the accuracy and the numerical complexity. Strikingly, even though the analytical Rotne-Prager approach fails to describe (both in the qualitative and quantitative sense) mobilities in the simplest system consisting of two spheres, when the distance between their surfaces is of the order of their size, it is commonly applied to model hydrodynamic effects in macromolecular systems. Here, we closely investigate hydrodynamic effects in two- and three-body systems, consisting of bead-shell molecular models, using either the analytical Rotne-Prager approach, or an accurate numerical scheme that correctly accounts for the many-body character of hydrodynamic interactions and their short-range behavior. We analyze mobilities, and translational and rotational velocities of bodies resulting from direct forces acting on them. We show that, with a sufficient number of frictional elements in hydrodynamic models of interacting bodies, the far-field approximation is able to provide a description of hydrodynamic effects that is in reasonable qualitative as well as quantitative agreement with the description resulting from the application of the virtually exact numerical scheme, even for small separations between bodies. PMID:26068580
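    For reference, a sketch of the far-field Rotne-Prager(-Yamakawa) pair mobility discussed above, valid for two equal beads with centre-to-centre separation r >= 2a; viscosity, bead radius and the applied force are placeholder values. This is the two-body approximation the paper contrasts with the near-field-accurate many-body scheme.

        # Far-field Rotne-Prager-Yamakawa pair mobility between two equal beads.
        import numpy as np

        def rpy_pair_mobility(r_vec, a, eta):
            r = np.linalg.norm(r_vec)
            rhat = np.outer(r_vec, r_vec) / r**2            # projector along the separation
            I = np.eye(3)
            pref = 1.0 / (8.0 * np.pi * eta * r)
            return pref * ((I + rhat) + (2.0 * a**2 / (3.0 * r**2)) * (I - 3.0 * rhat))

        a, eta = 1.0, 1.0
        mu_self = np.eye(3) / (6.0 * np.pi * eta * a)        # Stokes self-mobility
        mu_12 = rpy_pair_mobility(np.array([2.5, 0.0, 0.0]), a, eta)

        # Velocities of both beads when a force F acts only on bead 1:
        F = np.array([1.0, 0.0, 0.0])
        print("v1 =", mu_self @ F)
        print("v2 =", mu_12 @ F)          # hydrodynamically induced motion of bead 2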

  16. Identification of the reduced order models of a BWR reactor

    The objective of the present work is to analyze the relative stability of a BWR-type reactor. It is examined how well the parameters of a reduced order model can be identified so that the model reproduces a given instability condition. The case considered is a real event that occurred at the La Salle plant under certain power and coolant-flow operating conditions. The parametric identification is carried out by means of a recursive least squares algorithm and an Output Error model, measuring the output power of the reactor while the instability is present and assuming that it is produced by a step-like change in the reactivity of the system. An analytical comparison of the relative stability is also carried out, analyzing two types of responses: the original instability response of the reactor versus the response obtained by identifying the parameters of the reduced order model. The conclusion is that it is quite viable to fit a reduced order model to study the stability of a reactor, under the single condition that the reactivity dynamics is assumed to be of step type. (Author)
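    A generic sketch of the recursive least squares identification referred to above, applied to a synthetic second-order plant driven by a step-type input. For simplicity the regressor is ARX-style; the cited work uses an Output Error structure, which in practice requires a pseudo-linear-regression variant. All numerical values are illustrative.

        # Recursive least squares (RLS) identification of a low order discrete model
        # from measured output data under a step-type input.
        import numpy as np

        rng = np.random.default_rng(3)
        N = 400
        u = np.ones(N)                                   # step-type input
        y = np.zeros(N)
        for k in range(2, N):                            # "true" second-order plant
            y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.1 * u[k-1] + 0.01 * rng.standard_normal()

        n_par = 3                                        # parameters [a1, a2, b1]
        theta = np.zeros(n_par)
        P = 1e3 * np.eye(n_par)
        lam = 1.0                                        # forgetting factor (1.0 = ordinary LS)
        for k in range(2, N):
            phi = np.array([y[k-1], y[k-2], u[k-1]])
            K = P @ phi / (lam + phi @ P @ phi)
            theta += K * (y[k] - phi @ theta)
            P = (P - np.outer(K, phi @ P)) / lam

        print("estimated [a1, a2, b1]:", np.round(theta, 3))   # ~ [1.5, -0.7, 0.1]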

  17. Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation

    Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno

    2014-05-01

    A universal problem of the calibration of hydrological models is the equifinality of different parameter sets derived from the calibration of models against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation by the model. However, discharge data contains additional information which can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is the Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components which are interpreted as base flow, inter-flow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim of identifying structural model deficiencies, assessing the internal process representation and tackling equifinality. We developed a model dependent (MDA) approach calibrating the model runoff components against the FSD components, and a model independent (MIA) approach comparing the FSD of the model results and the FSD of calibration data. The results indicate that the decomposition provides valuable information for the calibration. In particular, MDA highlights and discards a number of standard GLUE behavioural models that underestimate the contribution of soil water to river discharge. Both MDA and MIA yield a reduction of the parameter ranges by a factor of up to 3 in comparison to standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest by applying MDA and shows only minor reductions for MIA. Besides

  18. A Reduced Wind Power Grid Model for Research and Education

    Akhmatov, V. [Energinet.dk, Fjordvejen 1-11, DK-7000 Fredericia (Denmark); Lund, T.; Hansen, A.D.; Sorensen, P.E. [Risoe National Laboratory, DK-4000 Roskilde (Denmark); Nielsen, A.H. [Centre for Electric Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby (Denmark)

    2006-07-01

    A reduced grid model of a transmission system with a number of central power plants, consumption centers, local wind turbines and a large offshore wind farm is developed and implemented in the simulation tool PowerFactory (DIgSILENT). The reduced grid model is given by Energinet.dk, Transmission System Operator of Denmark (TSO) for Natural Gas and Electricity, to the Danish Universities and the Risoe National Laboratory. Its intended usage is education and the study of the interaction between electricity-producing wind turbines and a realistic transmission system. Focus in these studies is on voltage stability issues and on the ride-through capability of different wind turbine concepts, equipped with advanced controllers, developed by the Risoe National Laboratory.

  19. Predictive modeling and reducing cyclic variability in autoignition engines

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  20. Reduced model design of a floating wind turbine

    Sandner, Frank

    2012-01-01

    Floating platform concepts offer the prospect of harvesting offshore wind energy at deep water locations for countries with a limited number of suitable shallow water locations for bottom-mounted offshore wind turbines. The floating spar-buoy concept has shown promising experimental and theoretical results. Although various codes for a detailed simulation exist, the purpose of this work is to elaborate a reduced Floating Offshore Wind Turbine (FOWT) model that mainly reproduces the overall...

  1. Model protocells photochemically reduce carbonate to organic carbon

    Folsome, C.; Brittain, A.

    1981-06-11

    Synthetic cell-sized organic microstructures effect the long-wavelength uv photosynthesis of organic products from carbonate. Formaldehyde is the most abundant photoproduct and water is the major proton donor for this reduced form of carbon. We show here that these results for model phase-bounded systems are consistent with the postulate that metabolism of progenitors to the earliest living cells could have been, at least in part, photosynthetic.

  2. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    M. Montes-Hugo

    2014-06-01

    Full Text Available The understanding of phytoplankton dynamics in the Gulf of the Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC, Lee's quasi-analytical, QAA, and Garver-Siegel-Maritorena semi-empirical, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated based on SeaWiFS images and shipboard measurements obtained during May of 2000 and April 2001. In general, aph(443) estimates derived from coupling KU and QAA models presented the smallest differences with respect to in situ determinations as measured by High Pressure Liquid Chromatography measurements (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using SeaDAS (SeaWiFS Data Analysis System) default values for the optical cross section of phytoplankton (i.e., a*ph(443) = aph(443)/chl = 0.056 m2 mg−1), the median relative bias of our chl estimates as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations increased up to 29%.
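    Worked arithmetic for the constant optical cross-section conversion quoted above, under the stated SeaDAS assumption a*ph(443) = 0.056 m2 mg−1; the input absorption value is made up for illustration.

        # Chlorophyll from the retrieved phytoplankton absorption at 443 nm,
        # assuming a constant chlorophyll-specific absorption (SeaDAS default).
        aph_443 = 0.028          # m^-1, retrieved phytoplankton absorption (illustrative)
        aph_star = 0.056         # m^2 mg^-1, assumed constant optical cross section
        chl = aph_443 / aph_star # mg m^-3
        print(f"chl = {chl:.2f} mg m^-3")   # 0.50 mg m^-3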

  3. Hydrate Model for CCS Relevant Gases Compatible with Highly Accurate Equations of State I. Parameter Study and Model Fitting

    Vinš, Václav; Jäger, A.; Hrubý, Jan; Span, R.

    Boulder Colorado: National Institute of Standards and Technology, 2015. PaperID 2868. [Symposium on Thermophysical Properties /19./. 21.06.2015-26.06.2015, Boulder Colorado] R&D Projects: GA MŠk(CZ) 7F14466 Other grants: Rada Programu interní podpory projektů mezinárodní spolupráce AV ČR(CZ) M100761201 Institutional support: RVO:61388998 Keywords: gas hydrates * carbon capture and storage * modelling Subject RIV: BJ - Thermodynamics http://thermosymposium.nist.gov/pdf/Abstract_2868.pdf ; http://thermosymposium.nist.gov/program.html

  4. A reduced model for fingering instability in miscible displacement

    The classical problem of fingering instability in miscible displacement is revisited. The finger-forming dynamics is considered as a multiple-scale process involving a thin inter-diffusion layer and large-scale background flow affected by the viscosity and/or density stratification. Upon an appropriate separation of 'fast' and 'slow' variables one ends up with a reduced model dealing directly with the evolving displacement front. As an illustration, the new model is applied for description of fingering in a source-supported flow and in a flow within a vertical channel

  5. Anchanling reduces pathology in a lactacystin- induced Parkinson's disease model

    Yinghong Li; Zhengzhi Wu; Xiaowei Gao; Qingwei Zhu; Yu Jin; Anmin Wu; Andrew C. J. Huang

    2012-01-01

    A rat model of Parkinson's disease was induced by injecting lactacystin stereotaxically into the left mesencephalic ventral tegmental area and substantia nigra pars compacta. After rats were intragastrically perfused with Anchanling, a Chinese medicine, mainly composed of magnolol, for 5 weeks, when compared with Parkinson's disease model rats, tyrosine hydroxylase expression was increased, α-synuclein and ubiquitin expression was decreased, substantia nigra cell apoptosis was reduced, and apomorphine-induced rotational behavior was improved. Results suggested that Anchanling can ameliorate Parkinson's disease pathology possibly by enhancing degradation activity of the ubiquitin-proteasome system.

  6. Frequency-domain reduced order models for gravitational waves from aligned-spin compact binaries

    Black-hole binary coalescences are one of the most promising sources for the first detection of gravitational waves. Fast and accurate theoretical models of the gravitational radiation emitted from these coalescences are highly important for the detection and extraction of physical parameters. Spinning effective-one-body models for binaries with aligned-spins have been shown to be highly faithful, but are slow to generate and thus have not yet been used for parameter estimation (PE) studies. I provide a frequency-domain singular value decomposition-based surrogate reduced order model that is thousands of times faster for typical system masses and has a faithfulness mismatch of better than ∼0.1% with the original SEOBNRv1 model for advanced LIGO detectors. This model enables PE studies up to signal-to-noise ratios (SNRs) of 20 and even up to 50 for total masses below 50 M⊙. This paper discusses various choices for approximations and interpolation over the parameter space that can be made for reduced order models of spinning compact binaries, provides a detailed discussion of errors arising in the construction and assesses the fidelity of such models. (paper)

  7. The capabilities and limitations of conductance-based compartmental neuron models with reduced branched or unbranched morphologies and active dendrites.

    Hendrickson, Eric B; Edgerton, Jeremy R; Jaeger, Dieter

    2011-04-01

    Conductance-based neuron models are frequently employed to study the dynamics of biological neural networks. For speed and ease of use, these models are often reduced in morphological complexity. Simplified dendritic branching structures may process inputs differently than full branching structures, however, and could thereby fail to reproduce important aspects of biological neural processing. It is not yet well understood which processing capabilities require detailed branching structures. Therefore, we analyzed the processing capabilities of full or partially branched reduced models. These models were created by collapsing the dendritic tree of a full morphological model of a globus pallidus (GP) neuron while preserving its total surface area and electrotonic length, as well as its passive and active parameters. Dendritic trees were either collapsed into single cables (unbranched models) or the full complement of branch points was preserved (branched models). Both reduction strategies allowed us to compare dynamics between all models using the same channel density settings. Full model responses to somatic inputs were generally preserved by both types of reduced model while dendritic input responses could be more closely preserved by branched than unbranched reduced models. However, features strongly influenced by local dendritic input resistance, such as active dendritic sodium spike generation and propagation, could not be accurately reproduced by any reduced model. Based on our analyses, we suggest that there are intrinsic differences in processing capabilities between unbranched and branched models. We also indicate suitable applications for different levels of reduction, including fast searches of full model parameter space. PMID:20623167
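    One simple way to honour the two constraints named above, conserving total membrane area and total electrotonic length, is to solve for a single equivalent cylinder, as sketched below. The passive parameters and target values are assumptions, not those of the globus pallidus model, and the paper's collapsing procedure (especially for the branched variants) is more elaborate.

        # Equivalent cylinder that matches a dendritic tree's total membrane area A
        # and total electrotonic length Le, using A = pi*d*L and Le = L/lambda with
        # lambda = sqrt(Rm*d/(4*Ri)); eliminating L gives a closed form for d.
        import numpy as np

        Rm = 20000.0       # membrane resistivity, ohm*cm^2 (assumed)
        Ri = 150.0         # axial resistivity, ohm*cm (assumed)
        A_target = 8.0e-5  # total dendritic membrane area, cm^2 (illustrative)
        Le_target = 0.9    # total electrotonic length, dimensionless (illustrative)

        d = ((2.0 * A_target / (np.pi * Le_target)) * np.sqrt(Ri / Rm)) ** (2.0 / 3.0)
        L = A_target / (np.pi * d)
        lam = np.sqrt(Rm * d / (4.0 * Ri))

        print(f"equivalent cylinder: d = {d*1e4:.2f} um, L = {L*1e4:.0f} um")
        print(f"check: area = {np.pi*d*L:.2e} cm^2, Le = {L/lam:.3f}")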

  8. Reduced Complexity Modeling (RCM): toward more use of less

    Paola, Chris; Voller, Vaughan

    2014-05-01

    Although not exact, there is a general correspondence between reductionism and detailed, high-fidelity models, while 'synthesism' is often associated with reduced-complexity modeling. There is no question that high-fidelity reduction-based computational models are extremely useful in simulating the behaviour of complex natural systems. In skilled hands they are also a source of insight and understanding. We focus here on the case for the other side (reduced-complexity models), not because we think they are 'better' but because their value is more subtle, and their natural constituency less clear. What kinds of problems and systems lend themselves to the reduced-complexity approach? RCM is predicated on the idea that the mechanism of the system or phenomenon in question is, for whatever reason, insensitive to the full details of the underlying physics. There are multiple ways in which this can happen. B.T. Werner argued for the importance of process hierarchies in which processes at larger scales depend on only a small subset of everything going on at smaller scales. Clear scale breaks would seem like a way to test systems for this property but to our knowledge have not been used in this way. We argue that scale-independent physics, as for example exhibited by natural fractals, is another. We also note that the same basic criterion - independence of the process in question from details of the underlying physics - underpins 'unreasonably effective' laboratory experiments. There is thus a link between suitability for experimentation at reduced scale and suitability for RCM. Examples from RCM approaches to erosional landscapes, braided rivers, and deltas illustrate these ideas, and suggest that they are insufficient. There is something of a 'wild west' nature to RCM that puts some researchers off by suggesting a departure from traditional methods that have served science well for centuries. We offer two thoughts: first, that in the end the measure of a model is its

  9. Modelling the Constraints of Spatial Environment in Fauna Movement Simulations: Comparison of a Boundaries Accurate Function and a Cost Function

    Jolivet, L.; Cohen, M.; Ruas, A.

    2015-08-01

    Landscape influences fauna movement at different levels, from habitat selection to choices of movements' direction. Our goal is to provide a development frame in order to test simulation functions for animal's movement. We describe our approach for such simulations and we compare two types of functions to calculate trajectories. To do so, we first modelled the role of landscape elements to differentiate between elements that facilitate movements and the ones being hindrances. Different influences are identified depending on landscape elements and on animal species. Knowledge was gathered from ecologists, literature and observation datasets. Second, we analysed the description of animal movement recorded with GPS at fine scale, corresponding to high temporal frequency and good location accuracy. Analysing this type of data provides information on the relation between landscape features and movements. We implemented an agent-based simulation approach to calculate potential trajectories constrained by the spatial environment and individual's behaviour. We tested two functions that consider space differently: one function takes into account the geometry and the types of landscape elements and one cost function sums up the spatial surroundings of an individual. Results highlight the fact that the cost function exaggerates the distances travelled by an individual and simplifies movement patterns. The geometry accurate function represents a good bottom-up approach for discovering interesting areas or obstacles for movements.

  10. MODELLING THE CONSTRAINTS OF SPATIAL ENVIRONMENT IN FAUNA MOVEMENT SIMULATIONS: COMPARISON OF A BOUNDARIES ACCURATE FUNCTION AND A COST FUNCTION

    L. Jolivet

    2015-08-01

    Full Text Available Landscape influences fauna movement at different levels, from habitat selection to choices of movements’ direction. Our goal is to provide a development frame in order to test simulation functions for animal’s movement. We describe our approach for such simulations and we compare two types of functions to calculate trajectories. To do so, we first modelled the role of landscape elements to differentiate between elements that facilitate movements and the ones being hindrances. Different influences are identified depending on landscape elements and on animal species. Knowledge was gathered from ecologists, literature and observation datasets. Second, we analysed the description of animal movement recorded with GPS at fine scale, corresponding to high temporal frequency and good location accuracy. Analysing this type of data provides information on the relation between landscape features and movements. We implemented an agent-based simulation approach to calculate potential trajectories constrained by the spatial environment and individual’s behaviour. We tested two functions that consider space differently: one function takes into account the geometry and the types of landscape elements and one cost function sums up the spatial surroundings of an individual. Results highlight the fact that the cost function exaggerates the distances travelled by an individual and simplifies movement patterns. The geometry accurate function represents a good bottom-up approach for discovering interesting areas or obstacles for movements.

  11. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which is not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate
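    The solver pattern described above, an algebraic multigrid preconditioned Krylov iteration with separate setup and solve phases, can be illustrated in a much-simplified setting with the open-source pyamg package on a Poisson model problem. This is emphatically not the paper's custom AMG preconditioner for nonlinear elasticity; pyamg, the test problem and its size are assumptions made purely for illustration.

        # AMG-preconditioned Krylov solve on a sparse SPD Poisson test matrix.
        import numpy as np
        import pyamg

        A = pyamg.gallery.poisson((400, 400), format='csr')   # sparse SPD model problem
        b = np.random.default_rng(4).standard_normal(A.shape[0])

        ml = pyamg.smoothed_aggregation_solver(A)              # AMG setup phase
        residuals = []
        x = ml.solve(b, tol=1e-8, accel='cg', residuals=residuals)  # AMG-preconditioned CG

        print(ml)                                               # grid hierarchy summary
        print("iterations:", len(residuals) - 1,
              "final relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))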

  12. Efficient subspace construction for reduced order modeling in reactor analysis

    Subspace-based reduced order modeling (ROM) is a powerful technique for reducing the computational burden required for the analysis of complex engineering models, such as nuclear reactor core calculations. It identifies a relatively small subspace that represents the variations for the quantities of interest with quantifiable user-defined accuracy. Focusing on neutron transport calculations employed in reactor analysis, the subspace is defined by a set of basis vectors that are extracted from the converged flux solution associated with a set of randomized forward model executions. In each execution, the input cross-sections are randomly perturbed, and the corresponding converged flux solution, referred to as snapshot, is recorded. This work mathematically proves and numerically demonstrates that the non-converged flux iterates can be used to approximate the reduction subspace without compromising the user-defined accuracy. It is shown that although this subspace is expected to be bigger than (and inclusive of) the one based on converged flux snapshots, the computational savings resulting from the early termination of the iterative solution are significant even for situations where the convergence is slow, such as for systems with high dominance ratio (e.g., BWR). A quarter BWR fuel assembly was modeled and the variations of multiplication factor and neutron multi-group flux distribution were used to assess the adequacy of the proposed approach. (author)

  13. Low-dose biplanar radiography can be used in children and adolescents to accurately assess femoral and tibial torsion and greatly reduce irradiation

    To evaluate in children the agreement between femoral and tibial torsion measurements obtained with low-dose biplanar radiography (LDBR) and CT, and to study dose reduction ratio between these two techniques both in vitro and in vivo. Thirty children with lower limb torsion abnormalities were included in a prospective study. Biplanar radiographs and CTs were performed for measurements of lower limb torsion on each patient. Values were compared using Bland-Altman plots. Interreader and intrareader agreements were evaluated by intraclass correlation coefficients. Comparative dosimetric study was performed using an ionization chamber in a tissue-equivalent phantom, and with thermoluminescent dosimeters in 5 patients. Average differences between CT and LDBR measurements were -0.1 ±1.1 for femoral torsion and -0.7 ±1.4 for tibial torsion. Interreader agreement for LDBR measurements was very good for both femoral torsion (FT) (0.81) and tibial torsion (TT) (0.87). Intrareader agreement was excellent for FT (0.97) and TT (0.89). The ratio between CT scan dose and LDBR dose was 22 in vitro (absorbed dose) and 32 in vivo (skin dose). Lower limb torsion measurements obtained with LDBR are comparable to CT measurements in children and adolescents, with a considerably reduced radiation dose. (orig.)

  14. Low-dose biplanar radiography can be used in children and adolescents to accurately assess femoral and tibial torsion and greatly reduce irradiation

    Meyrignac, Olivier; Baunin, Christiane; Vial, Julie; Sans, Nicolas [CHU Toulouse Purpan, Department of Radiology, Toulouse Cedex 9 (France); Moreno, Ramiro [ALARA Expertise, Oberhausbergen (France); Accadbled, Franck; Gauzy, Jerome Sales de [Hopital des Enfants, Department of Orthopedics, Toulouse Cedex 9 (France); Sommet, Agnes [Universite Paul Sabatier, Department of Fundamental Pharmaco-Clinical Pharmacology, Toulouse (France)

    2015-06-01

    To evaluate in children the agreement between femoral and tibial torsion measurements obtained with low-dose biplanar radiography (LDBR) and CT, and to study dose reduction ratio between these two techniques both in vitro and in vivo. Thirty children with lower limb torsion abnormalities were included in a prospective study. Biplanar radiographs and CTs were performed for measurements of lower limb torsion on each patient. Values were compared using Bland-Altman plots. Interreader and intrareader agreements were evaluated by intraclass correlation coefficients. Comparative dosimetric study was performed using an ionization chamber in a tissue-equivalent phantom, and with thermoluminescent dosimeters in 5 patients. Average differences between CT and LDBR measurements were -0.1 ±1.1 for femoral torsion and -0.7 ±1.4 for tibial torsion. Interreader agreement for LDBR measurements was very good for both femoral torsion (FT) (0.81) and tibial torsion (TT) (0.87). Intrareader agreement was excellent for FT (0.97) and TT (0.89). The ratio between CT scan dose and LDBR dose was 22 in vitro (absorbed dose) and 32 in vivo (skin dose). Lower limb torsion measurements obtained with LDBR are comparable to CT measurements in children and adolescents, with a considerably reduced radiation dose. (orig.)
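    A small sketch of the Bland-Altman agreement statistics used in the comparison above: the bias (mean difference) and 95% limits of agreement between paired LDBR and CT torsion measurements. The paired values below are synthetic, generated only to mirror the order of magnitude of the reported differences.

        # Bland-Altman bias and 95% limits of agreement for paired measurements.
        import numpy as np

        rng = np.random.default_rng(5)
        ct = rng.normal(25.0, 8.0, 30)          # femoral torsion by CT, degrees (synthetic)
        ldbr = ct + rng.normal(-0.1, 1.1, 30)   # LDBR on the same cases (synthetic)

        diff = ldbr - ct
        bias = diff.mean()
        sd = diff.std(ddof=1)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
        print(f"bias = {bias:.2f} deg, limits of agreement = "
              f"[{loa[0]:.2f}, {loa[1]:.2f}] deg")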

  15. Accurate Monte Carlo modeling of cyclotrons for optimization of shielding and activation calculations in the biomedical field

    Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano

    2015-11-01

    Biomedical cyclotrons for production of Positron Emission Tomography (PET) radionuclides and radiotherapy with hadrons or ions are widespread and established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods of calculation of both shielding and materials activation, in approximate or idealized geometry set up. The availability of Monte Carlo codes with accurate and up-to-date libraries for transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes systematic use of simulations with realistic geometries possible, yielding equipment and site specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclides production, including their targetry; and one type of proton therapy cyclotron including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurement and simulations of 0.99±0.07 was found. Saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended
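    The saturation yield mentioned above relates to the activity produced in a finite irradiation through A(t) = Y_sat * I * (1 - exp(-ln2 * t / T1/2)). The sketch below evaluates this for 18F; the saturation yield and beam current are invented for illustration and are not the measured values of the paper.

        # 18F activity build-up towards saturation during a finite irradiation.
        import numpy as np

        T12 = 109.77 * 60          # 18F half-life, s
        Y_sat = 8.0                # saturation yield, GBq per uA (illustrative)
        I_beam = 40.0              # proton beam current on target, uA (illustrative)
        t_irr = 60 * 60            # one-hour irradiation, s

        A = Y_sat * I_beam * (1.0 - np.exp(-np.log(2) * t_irr / T12))
        print(f"18F activity after {t_irr/3600:.0f} h: {A:.0f} GBq "
              f"(saturation value {Y_sat * I_beam:.0f} GBq)")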

  16. Construction of energy-stable Galerkin reduced order models.

    Kalashnikova, Irina; Barone, Matthew Franklin; Arunajatesan, Srinivasan; van Bloemen Waanders, Bart Gustaaf

    2013-05-01

    This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the "symmetry inner product". Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the "Lyapunov inner product", is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product. The performance of ROMs constructed
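    A compact sketch of the Lyapunov-inner-product idea described above for a stable LTI system dx/dt = Ax: solve a Lyapunov equation for the weighting matrix P and project with a P-weighted Galerkin formula, after which the reduced operator is stable for any choice of basis. The test matrix, Q and basis are arbitrary stand-ins, and scipy's Lyapunov solver is used here rather than the report's black-box procedure.

        # P-weighted ("Lyapunov inner product") Galerkin projection of a stable LTI system.
        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        rng = np.random.default_rng(6)
        n, r = 50, 6
        A = -3.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))   # stable test matrix
        Q = np.eye(n)

        # solve_continuous_lyapunov(a, q) solves a X + X a^T = q; with a = A^T, q = -Q
        # this gives A^T P + P A = -Q, so P defines the energy (inner product).
        P = solve_continuous_lyapunov(A.T, -Q)
        assert np.all(np.linalg.eigvalsh(0.5 * (P + P.T)) > 0)     # P is positive definite

        Phi = np.linalg.qr(rng.standard_normal((n, r)))[0]         # arbitrary reduced basis
        A_r = np.linalg.solve(Phi.T @ P @ Phi, Phi.T @ P @ A @ Phi)  # P-weighted projection

        # Energy stability of the ROM: all reduced eigenvalues lie in the left half-plane.
        print(np.max(np.linalg.eigvals(A_r).real))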

  17. Probabilistic Rotor Life Assessment Using Reduced Order Models

    Brian K. Beachkofski

    2009-01-01

    Full Text Available Probabilistic failure assessments for integrally bladed disks are system reliability problems where a failure in at least one blade constitutes a rotor system failure. Turbine engine fan and compressor blade life is dominated by High Cycle Fatigue (HCF) initiated either by pure HCF or by Foreign Object Damage (FOD). To date, performing an HCF life assessment for the entire rotor system has been too costly in analysis time to be practical. Although the substantial run-time has previously precluded a full-rotor probabilistic analysis, reduced order models make this process tractable, as demonstrated in this work. The system model includes frequency prediction, modal stress variation, mistuning amplification, FOD effect, and random material capability. The model has many random variables, which are most easily handled through simple random sampling.
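
    The "simple random sampling" treatment of the system reliability problem can be illustrated with a toy Monte Carlo calculation in which the rotor fails if any blade's vibratory stress exceeds its material capability. The distributions, sample counts and blade count below are invented for illustration and are not the paper's data.

```python
# Minimal Monte Carlo sketch of a "weakest link" rotor reliability calculation
# (illustrative numbers only; variable names and distributions are assumptions,
# not from the paper). The rotor fails if at least one of its blades fails.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_blades = 200_000, 24

# Random vibratory stress per blade (mistuning amplification, FOD, etc. lumped
# into a lognormal spread) and random material HCF capability per blade.
stress = rng.lognormal(mean=np.log(300.0), sigma=0.15, size=(n_samples, n_blades))   # MPa
capability = rng.normal(loc=450.0, scale=40.0, size=(n_samples, n_blades))           # MPa

blade_fails = stress > capability
rotor_fails = blade_fails.any(axis=1)          # system failure = any blade failure

print("estimated rotor failure probability:", rotor_fails.mean())
```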

  18. Reduced order methods for modeling and computational reduction

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  19. Reduced order component models for flexible multibody dynamics simulations

    Tsuha, Walter S.; Spanos, John T.

    1990-01-01

    Many flexible multibody dynamics simulation codes require some form of component description that properly characterizes the dynamic behavior of the system. A model reduction procedure for producing low order component models for flexible multibody simulation is described. Referred to as projection and assembly, the method is a Rayleigh-Ritz approach that uses partitions of the system modal matrix as component Ritz transformation matrices. It is shown that the projection and assembly method yields a reduced system model that preserves a specified set of the full order system modes. Unlike classical component mode synthesis methods, the exactness of the method described is obtained at the expense of having to compute the full order system modes. The paper provides a comprehensive description of the method, a proof of exactness, and numerical results demonstrating the method's effectiveness.
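
    A minimal sketch of the Rayleigh-Ritz projection underlying the method: using a partition of the full-order system modes as the transformation exactly preserves those modes in the reduced model. The toy chain system and the choice of four retained modes are assumptions for illustration; the component partitioning and assembly steps of the actual method are not shown.

```python
# Minimal Rayleigh-Ritz reduction sketch (illustrative; the matrices and the choice of
# retained modes are assumptions, not data from the paper). A partition of selected
# full-order system modes is used as the Ritz transformation, and the mass/stiffness
# matrices are projected onto it.
import numpy as np
from scipy.linalg import eigh

# Toy full-order system: a chain of unit masses and springs.
n = 10
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)

# Full-order system modes (generalized eigenproblem K phi = w^2 M phi).
w2, Phi = eigh(K, M)

# Keep the partition of the first 4 system modes as the Ritz basis.
T = Phi[:, :4]

# Projected (reduced) matrices.
Mr = T.T @ M @ T
Kr = T.T @ K @ T

# The reduced model reproduces the retained full-order frequencies exactly.
w2_r, _ = eigh(Kr, Mr)
print(np.allclose(np.sort(w2_r), np.sort(w2[:4])))
```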

  20. Model-based design approach to reducing mechanical vibrations

    P. Czop

    2013-09-01

    Full Text Available Purpose: The paper presents a sensitivity analysis method based on a first-principle model in order to reduce mechanical vibrations of a hydraulic damper. Design/methodology/approach: The first-principle model is formulated using a system of continuous ordinary differential equations capturing the usually nonlinear relations among the variables of the hydraulic damper model. The model applies three categories of parameters: geometrical, physical and phenomenological. Geometrical and physical parameters are deduced from construction and operational documentation. The phenomenological parameters are the adjustable ones, which are estimated or adjusted based on their roughly known values, e.g. friction/damping coefficients. Findings: The sensitivity analysis method identifies the major contributors to vibration and their magnitudes. Research limitations/implications: The method's accuracy is limited by the model accuracy and inherent nonlinear effects. Practical implications: The proposed model-based sensitivity method can be used to optimize prototypes of hydraulic dampers. Originality/value: The proposed sensitivity-analysis method minimizes the risk that a hydraulic damper does not meet the customer specification.

  1. Reduced Lorenz models for anomalous transport and profile resilience

    The physical basis for the Lorenz equations for convective cells in stratified fluids, and for magnetized plasmas embedded in curved magnetic fields, is reexamined with emphasis on anomalous transport. It is shown that the Galerkin truncation leading to the Lorenz equations for the closed boundary problem is incompatible with finite fluxes through the system in the limit of vanishing diffusion. An alternative formulation leading to the Lorenz equations is proposed, invoking open boundaries and the notion of convective streamers and their back-reaction on the profile gradient, giving rise to resilience of the profile. Particular emphasis is put on the diffusionless limit, where these equations reduce to a simple dynamical system depending on only a single forcing parameter. This model is studied numerically, stressing experimentally observable signatures, and some of the perils of dimension-reducing approximations are discussed
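
    For concreteness, the classic three-mode Lorenz system that serves as the starting point of the discussion can be integrated as below; the open-boundary, diffusionless variant proposed in the paper is not reproduced here, and the parameter values are the usual textbook ones rather than the authors'.

```python
# Sketch of the classic three-mode (Galerkin-truncated) Lorenz system, included only to
# make the starting point of the discussion concrete; sigma, rho and beta are the usual
# textbook values, not the paper's forcing parameter.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, y, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, yv, z = y
    return [sigma * (yv - x), x * (rho - z) - yv, x * yv - beta * z]

sol = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0], max_step=0.01)
print(sol.y.shape)  # trajectory samples of the three convective amplitudes
```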

  2. Package Equivalent Reactor Networks as Reduced Order Models for Use with CAPE-OPEN Compliant Simulation

    Meeks, E.; Chou, C. -P.; Garratt, T.

    2013-03-31

    Engineering simulations of coal gasifiers are typically performed using computational fluid dynamics (CFD) software, where a 3-D representation of the gasifier equipment is used to model the fluid flow in the gasifier and source terms from the coal gasification process are captured using discrete-phase model source terms. Simulations using this approach can be very time consuming, making it difficult to embed such models into overall system simulations for plant design and optimization. For such system-level designs, process flowsheet software is typically used, such as Aspen Plus® [1], where each component is modeled using a reduced-order model. For advanced power-generation systems, such as integrated gasifier/gas-turbine combined-cycle systems (IGCC), the critical components determining overall process efficiency and emissions are usually the gasifier and combustor. Providing more accurate and more computationally efficient reduced-order models for these components, then, enables much more effective plant-level design optimization and design for control. Based on the CHEMKIN-PRO and ENERGICO software, we have developed an automated methodology for generating an advanced form of reduced-order model for gasifiers and combustors. The reduced-order model offers representation of key unit operations in flowsheet simulations, while allowing simulation that is fast enough to be used in iterative flowsheet calculations. Using high-fidelity fluid-dynamics models as input, Reaction Design's ENERGICO® [2] software can automatically extract equivalent reactor networks (ERNs) from a CFD solution. For the advanced reduced-order concept, we introduce into the ERN a much more detailed kinetics model than can be included practically in the CFD simulation. The state-of-the-art chemistry solver technology within CHEMKIN-PRO allows that to be accomplished while still maintaining a very fast model turn-around time. In this way, the ERN becomes the basis for

  3. Linear stability analysis of flow instabilities with a nodalized reduced order model in heated channel

    The prime objective of the presented work is to develop a Nodalized Reduced Order Model (NROM) to carry out linear stability analysis of flow instabilities in a two-phase flow system. The model is developed by dividing the single-phase and two-phase regions of a uniformly heated channel into N nodes, followed by time-dependent spatial linear approximations for single-phase enthalpy and two-phase quality between consecutive nodes. A moving boundary scheme has been adopted in the model, where all the node boundaries vary with time due to the variation of the boiling boundary inside the heated channel. Using a state space approach, the instability thresholds are delineated by stability maps plotted in parameter planes of phase change number (Npch) and subcooling number (Nsub). The prime feature of the present model is that, although the model equations are simpler due to the presence of linear-linear approximations for single-phase enthalpy and two-phase quality, the results are in good agreement with the existing models (Karve [33]; Dokhane [34]), whose model equations run for several pages, and with experimental data (Solberg [41]). Unlike the existing ROMs, different two-phase friction factor multiplier correlations have been incorporated in the model. The applicability of various two-phase friction factor multipliers and their effects on stability behaviour have been examined by carrying out a comparative study. It is also observed that the Friedel model for friction factor calculations produces the most accurate results with respect to the available experimental data. (authors)

  4. Glyburide reduces bacterial dissemination in a mouse model of melioidosis.

    Gavin C K W Koh

    Full Text Available BACKGROUND: Burkholderia pseudomallei infection (melioidosis) is an important cause of community-acquired Gram-negative sepsis in Northeast Thailand, where it is associated with a ~40% mortality rate despite antimicrobial chemotherapy. We showed in a previous cohort study that patients taking glyburide (= glibenclamide) prior to admission have lower mortality and attenuated inflammatory responses compared to patients not taking glyburide. We sought to define the mechanism underlying this observation in a murine model of melioidosis. METHODS: Mice (C57BL/6) with streptozocin-induced diabetes were inoculated with ~6 × 10^2 cfu B. pseudomallei intranasally, then treated with therapeutic ceftazidime (600 mg/kg intraperitoneally twice daily starting 24 h after inoculation) in order to mimic the clinical scenario. Glyburide (50 mg/kg) or vehicle was started 7 d before inoculation and continued until sacrifice. The minimum inhibitory concentration of glyburide for B. pseudomallei was determined by broth microdilution. We also examined the effect of glyburide on interleukin (IL) 1β production by bone-marrow-derived macrophages (BMDM). RESULTS: Diabetic mice had increased susceptibility to melioidosis, with increased bacterial dissemination, but no effect of diabetes on inflammation was seen compared to non-diabetic controls. Glyburide treatment did not affect glucose levels but was associated with reduced pulmonary cellular influx, reduced bacterial dissemination to both liver and spleen, and reduced IL1β production when compared to untreated controls. Other cytokines were not different in glyburide-treated animals. There was no direct effect of glyburide on B. pseudomallei growth in vitro or in vivo. Glyburide directly reduced the secretion of IL1β by BMDMs in a dose-dependent fashion. CONCLUSIONS: Diabetes increases the susceptibility to melioidosis. We further show, for the first time in any model of sepsis, that glyburide acts as an anti-inflammatory agent by

  5. Randomized Wilson loops, reduced models and the large D expansion

    Evnin, Oleg

    2011-01-01

    Reduced models are matrix integrals believed to be related to the large N limit of gauge theories. These integrals are known to simplify further when the number of matrices D (corresponding to the number of space-time dimensions in the gauge theory) becomes large. Even though this limit appears to be of little use for computing the standard rectangular Wilson loop (which always singles out two directions out of D), a meaningful large D limit can be defined for a randomized Wilson loop (in whi...

  6. Reduced parameter model on trajectory tracking data with applications

    王正明; 朱炬波

    1999-01-01

    The data fusion in tracking the same trajectory by a multi-measurement unit (MMU) is considered. Firstly, the reduced parameter models (RPM) of the trajectory parameter (TP), the system error and the random error are presented, and then the RPM for trajectory tracking data (TTD) is obtained; a weighted method for the measuring elements (ME) is studied, and criteria for the selection of ME based on residual and accuracy estimation are put forward. According to the RPM, the problem of ME selection and TTD self-calibration is thoroughly investigated. The method clearly improves data accuracy in trajectory tracking and simultaneously provides an accuracy evaluation of the trajectory tracking system.

  7. Formulation of a 1D finite element of heat exchanger for accurate modelling of the grouting behaviour: Application to cyclic thermal loading

    Cerfontaine, Benjamin; Radioti, Georgia; Collin, Frédéric; Charlier, Robert

    2016-01-01

    This paper presents a comprehensive formulation of a finite element for the modelling of borehole heat exchangers. This work focuses on the accurate modelling of the grouting and the field of temperature near a single borehole. Therefore the grouting of the BHE is explicitly modelled. The purpose of this work is to provide tools necessary to the further modelling of thermo-mechanical couplings. The finite element discretises the classical governing equation of advection-diffusion of heat w...

  8. Fragile DNA Repair Mechanism Reduces Ageing in Multicellular Model

    Bendtsen, Kristian Moss; Juul, Jeppe Søgaard; Trusina, Ala

    2012-01-01

    DNA damages, as well as mutations, increase with age. It is believed that these result from increased genotoxic stress and decreased capacity for DNA repair. The two causes are not independent, DNA damage can, for example, through mutations, compromise the capacity for DNA repair, which in turn increases the amount of unrepaired DNA damage. Despite this vicious circle, we ask, can cells maintain a high DNA repair capacity for some time or is repair capacity bound to continuously decline with age? We here present a simple mathematical model for ageing in multicellular systems where cells subjected to DNA damage can undergo full repair, go apoptotic, or accumulate mutations thus reducing DNA repair capacity. Our model predicts that at the tissue level repair rate does not continuously decline with age, but instead has a characteristic extended period of high and non-declining DNA repair...

  9. Reduced Modeling of Electron Trapping Nonlinearity in Raman Scattering

    Strozzi, D. J.; Berger, R. L.; Rose, H. A.; Langdon, A. B.; Williams, E. A.

    2009-11-01

    The trapping of resonant electrons in Langmuir waves generated by stimulated Raman scattering (SRS) gives rise to several nonlinear effects, which can either increase or decrease the reflectivity. We have implemented a reduced model of these nonlinearities in the paraxial propagation code pF3D [R. L. Berger et al., Phys. Plasmas 5 (1998)], consisting of a Landau damping reduction and a Langmuir-wave frequency downshift. Both effects depend on the local wave amplitude and gradually turn on with amplitude. This model is compared with 1D seeded Vlasov simulations that include a Krook relaxation operator to mimic, e.g., transverse sideloss out of a multi-D, finite laser speckle. SRS in these runs develops from a counter-propagating seed light wave. Applications to ICF experiments will also be presented.

  10. A Reduced Order, One Dimensional Model of Joint Response

    DOHNER,JEFFREY L.

    2000-11-06

    As a joint is loaded, the tangent stiffness of the joint reduces due to slip at interfaces. This stiffness reduction continues until the direction of the applied load is reversed or the total interface slips. Total interface slippage in joints is called macro-slip. For joints not undergoing macro-slip, when load reversal occurs the tangent stiffness immediately rebounds to its maximum value. This occurs due to stiction effects at the interface. Thus, for periodic loads, a softening and rebound hardening cycle is produced which defines a hysteretic, energy absorbing trajectory. For many jointed sub-structures, this hysteretic trajectory can be approximated using simple polynomial representations. This allows for complex joint substructures to be represented using simple non-linear models. In this paper a simple one dimensional model is discussed.

  11. Improvement of fluorescence-enhanced optical tomography with improved optical filtering and accurate model-based reconstruction algorithms

    Lu, Yujie; Zhu, Banghe; Darne, Chinmay; Tan, I.-Chih; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-12-01

    The goal of preclinical fluorescence-enhanced optical tomography (FEOT) is to provide three-dimensional fluorophore distribution for a myriad of drug and disease discovery studies in small animals. Effective measurements, as well as fast and robust image reconstruction, are necessary for extensive applications. Compared to bioluminescence tomography (BLT), FEOT may result in improved image quality through higher detected photon count rates. However, background signals that arise from excitation illumination affect the reconstruction quality, especially when tissue fluorophore concentration is low and/or fluorescent target is located deeply in tissues. We show that near-infrared fluorescence (NIRF) imaging with an optimized filter configuration significantly reduces the background noise. Model-based reconstruction with a high-order approximation to the radiative transfer equation further improves the reconstruction quality compared to the diffusion approximation. Improvements in FEOT are demonstrated experimentally using a mouse-shaped phantom with targets of pico- and subpico-mole NIR fluorescent dye.

  12. A Taxonomic Reduced-Space Pollen Model for Paleoclimate Reconstruction

    Wahl, E. R.; Schoelzel, C.

    2010-12-01

    Paleoenvironmental reconstruction from fossil pollen often attempts to take advantage of the rich taxonomic diversity in such data. Here, a taxonomically "reduced-space" reconstruction model is explored that would be parsimonious in introducing parameters needing to be estimated within a Bayesian Hierarchical Modeling context. This work involves a refinement of the traditional pollen ratio method. This method is useful when one (or a few) dominant pollen type(s) in a region have a strong positive correlation with a climate variable of interest and another (or a few) dominant pollen type(s) have a strong negative correlation. When, e.g., counts of pollen taxa a and b (r > 0) are combined with counts of pollen types c and d (r < 0) into a ratio, the relationship to climate can be described by a binomial logistic generalized linear model (GLM). The GLM can readily model this relationship in the forward form, pollen = g(climate), which is more physically realistic than the inverse models often used in paleoclimate reconstruction [climate = f(pollen)]. The specification of the model is: rnum ~ Bin(n, p), where E(r|T) = p = exp(η)/[1 + exp(η)] and η = α + βT; r is the pollen ratio formed as above, rnum is the ratio numerator, n is the ratio denominator (i.e., the sum of pollen counts), the denominator-specific count is (n - rnum), and T is the temperature at each site corresponding to a specific value of r. Ecological and empirical screening identified the model (Spruce+Birch) / (Spruce+Birch+Oak+Hickory) for use in temperate eastern N. America. α and β were estimated using both "traditional" and Bayesian GLM algorithms (in R). Although it includes only four pollen types, the ratio model yields more explained variation (~80%) in the pollen-temperature relationship of the study region than a 64-taxon modern analog technique (MAT). Thus, the new pollen ratio method represents an information-rich, reduced-space data model that can be efficiently employed in a BHM framework. The ratio model can directly reconstruct past temperature by solving the GLM
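
    The forward binomial-logistic specification above can be sketched as follows; the temperatures, counts and coefficient values are synthetic stand-ins (not the study's data), and statsmodels is used here in place of the R routines mentioned in the abstract.

```python
# Minimal sketch of the forward binomial-logistic pollen-ratio model
# rnum ~ Bin(n, p), logit(p) = alpha + beta * T, fit to synthetic data
# (the counts and temperatures below are made up, not the study's data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = rng.uniform(5.0, 25.0, size=60)                 # site temperatures (deg C)
alpha_true, beta_true = 4.0, -0.35                  # cool sites -> high spruce+birch fraction
p = 1.0 / (1.0 + np.exp(-(alpha_true + beta_true * T)))
n = rng.integers(150, 400, size=T.size)             # total pollen counts per site
r_num = rng.binomial(n, p)                          # spruce+birch counts (ratio numerator)

X = sm.add_constant(T)
fit = sm.GLM(np.column_stack([r_num, n - r_num]), X,
             family=sm.families.Binomial()).fit()
print(fit.params)   # estimates of (alpha, beta)
```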

  13. Triptolide reduces cystogenesis in a model of ADPKD.

    Leuenroth, Stephanie J; Bencivenga, Natasha; Igarashi, Peter; Somlo, Stefan; Crews, Craig M

    2008-09-01

    Mutations in PKD1 result in autosomal dominant polycystic kidney disease, which is characterized by increased proliferation of tubule cells leading to cyst initiation and subsequent expansion. Given the cell proliferation associated with cyst growth, an attractive therapeutic strategy has been to target the hyperproliferative nature of the disease. We previously demonstrated that the small molecule triptolide induces cellular calcium release through a polycystin-2-dependent pathway, arrests Pkd1(-/-) cell growth, and reduces cystic burden in Pkd1(-/-) embryonic mice. To assess cyst progression in neonates, we used the kidney-specific Pkd1(flox/-);Ksp-Cre mouse model of autosomal dominant polycystic kidney disease, in which the burden of cysts is negligible at birth but then progresses rapidly over days. The number, size, and proliferation rate of cysts were examined. Treatment with triptolide significantly improved renal function at postnatal day 8 by inhibition of the early phases of cyst growth. Because the proliferative index of kidney epithelium in neonates versus adults is significantly different, future studies will need to address whether triptolide delays or reduces cyst progression in the Pkd1 adult model. PMID:18650476

  14. Informing Investment to Reduce Inequalities: A Modelling Approach

    McAuley, Andrew; Denny, Cheryl; Taulbut, Martin; Mitchell, Rory; Fischbacher, Colin; Graham, Barbara; Grant, Ian; O’Hagan, Paul; McAllister, David; McCartney, Gerry

    2016-01-01

    Background Reducing health inequalities is an important policy objective but there is limited quantitative information about the impact of specific interventions. Objectives To provide estimates of the impact of a range of interventions on health and health inequalities. Materials and Methods Literature reviews were conducted to identify the best evidence linking interventions to mortality and hospital admissions. We examined interventions across the determinants of health: a ‘living wage’; changes to benefits, taxation and employment; active travel; tobacco taxation; smoking cessation, alcohol brief interventions, and weight management services. A model was developed to estimate mortality and years of life lost (YLL) in intervention and comparison populations over a 20-year time period following interventions delivered only in the first year. We estimated changes in inequalities using the relative index of inequality (RII). Results Introduction of a ‘living wage’ generated the largest beneficial health impact, with modest reductions in health inequalities. Benefits increases had modest positive impacts on health and health inequalities. Income tax increases had negative impacts on population health but reduced inequalities, while council tax increases worsened both health and health inequalities. Active travel increases had minimally positive effects on population health but widened health inequalities. Increases in employment reduced inequalities only when targeted to the most deprived groups. Tobacco taxation had modestly positive impacts on health but little impact on health inequalities. Alcohol brief interventions had modestly positive impacts on health and health inequalities only when strongly socially targeted, while smoking cessation and weight-reduction programmes had minimal impacts on health and health inequalities even when socially targeted. Conclusions Interventions have markedly different effects on mortality, hospitalisations and

  15. Reduced M(atrix) theory models: ground state solutions

    López, J L

    2015-01-01

    We propose a method to find exact ground state solutions to reduced models of the SU($N$) invariant matrix model arising from the quantization of the 11-dimensional supermembrane action in the light-cone gauge. We illustrate the method by applying it to lower dimensional toy models and for the SU(2) group. This approach could, in principle, be used to find ground state solutions to the complete 9-dimensional model and for any SU($N$) group. The Hamiltonian, the supercharges and the constraints related to the SU($2$) symmetry are built from operators that generate a multicomponent spinorial wave function. The procedure is based on representing the fermionic degrees of freedom by means of Dirac-like gamma matrices, as was already done in the first proposal of supersymmetric (SUSY) quantum cosmology. We exhibit a relation between these finite $N$ matrix theory ground state solutions and SUSY quantum cosmology wave functions giving a possible physical significance of the theory even for finite $N$.

  16. A reduced model of pulsatile flow in an arterial compartment

    In this article we propose a reduced model of the input-output behaviour of an arterial compartment, including the short systolic phase where wave phenomena are predominant. The objective is to provide a basis for model-based signal processing methods for the estimation of the characteristics of these waves from non-invasive measurements and for their interpretation. Due to phenomena such as peaking and steepening, the considered pressure pulse waves behave more like solitons generated by a Korteweg-de Vries (KdV) model than like linear waves. So we start with a quasi-1D Navier-Stokes equation taking into account the radial acceleration of the wall: the radial acceleration term being supposed small, a two-scale singular perturbation technique is used to separate the fast wave propagation phenomena, taking place in a boundary layer in time and space and described by a KdV equation, from the slow phenomena represented by a parabolic equation leading to two-element windkessel models. Some particular solutions of the KdV equation, the 2- and 3-soliton solutions, seem to be good candidates to match the observed pressure pulse waves. Some very promising preliminary comparisons of numerical results obtained along this line with real pressure data are shown
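
    For reference, the single-soliton solution of the KdV equation in a standard normalization is evaluated below; the paper's KdV equation is written in physiological variables with different coefficients, so the speed and offset used here are purely illustrative.

```python
# Single-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0, shown only to
# illustrate the kind of wave matched to pressure pulses; c and x0 are arbitrary
# illustrative values, and the normalization differs from the paper's.
import numpy as np

def kdv_soliton(x, t, c=2.0, x0=0.0):
    """Travelling-wave soliton of speed c: u = (c/2) sech^2(sqrt(c)/2 (x - c t - x0))."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t - x0)) ** 2

x = np.linspace(-10.0, 10.0, 401)
print(kdv_soliton(x, t=0.0).max())   # peak amplitude equals c/2
```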

  17. Optimal spatiotemporal reduced order modeling for nonlinear dynamical systems

    LaBryer, Allen

    Proposed in this dissertation is a novel reduced order modeling (ROM) framework called optimal spatiotemporal reduced order modeling (OPSTROM) for nonlinear dynamical systems. The OPSTROM approach is a data-driven methodology for the synthesis of multiscale reduced order models (ROMs) which can be used to enhance the efficiency and reliability of under-resolved simulations for nonlinear dynamical systems. In the context of nonlinear continuum dynamics, the OPSTROM approach relies on the concept of embedding subgrid-scale models into the governing equations in order to account for the effects due to unresolved spatial and temporal scales. Traditional ROMs neglect these effects, whereas most other multiscale ROMs account for these effects in ways that are inconsistent with the underlying spatiotemporal statistical structure of the nonlinear dynamical system. The OPSTROM framework presented in this dissertation begins with a general system of partial differential equations, which are modified for an under-resolved simulation in space and time with an arbitrary discretization scheme. Basic filtering concepts are used to demonstrate the manner in which residual terms, representing subgrid-scale dynamics, arise with a coarse computational grid. Models for these residual terms are then developed by accounting for the underlying spatiotemporal statistical structure in a consistent manner. These subgrid-scale models are designed to provide closure by accounting for the dynamic interactions between spatiotemporal macroscales and microscales which are otherwise neglected in a ROM. For a given resolution, the predictions obtained with the modified system of equations are optimal (in a mean-square sense) as the subgrid-scale models are based upon principles of mean-square error minimization, conditional expectations and stochastic estimation. Methods are suggested for efficient model construction, appraisal, error measure, and implementation with a couple of well-known time

  18. Accurate, precise modeling of cell proliferation kinetics from time-lapse imaging and automated image analysis of agar yeast culture arrays

    Zhao Lue

    2007-01-01

    Full Text Available Abstract Background Genome-wide mutant strain collections have increased demand for high throughput cellular phenotyping (HTCP. For example, investigators use HTCP to investigate interactions between gene deletion mutations and additional chemical or genetic perturbations by assessing differences in cell proliferation among the collection of 5000 S. cerevisiae gene deletion strains. Such studies have thus far been predominantly qualitative, using agar cell arrays to subjectively score growth differences. Quantitative systems level analysis of gene interactions would be enabled by more precise HTCP methods, such as kinetic analysis of cell proliferation in liquid culture by optical density. However, requirements for processing liquid cultures make them relatively cumbersome and low throughput compared to agar. To improve HTCP performance and advance capabilities for quantifying interactions, YeastXtract software was developed for automated analysis of cell array images. Results YeastXtract software was developed for kinetic growth curve analysis of spotted agar cultures. The accuracy and precision for image analysis of agar culture arrays was comparable to OD measurements of liquid cultures. Using YeastXtract, image intensity vs. biomass of spot cultures was linearly correlated over two orders of magnitude. Thus cell proliferation could be measured over about seven generations, including four to five generations of relatively constant exponential phase growth. Spot area normalization reduced the variation in measurements of total growth efficiency. A growth model, based on the logistic function, increased precision and accuracy of maximum specific rate measurements, compared to empirical methods. The logistic function model was also more robust against data sparseness, meaning that less data was required to obtain accurate, precise, quantitative growth phenotypes. Conclusion Microbial cultures spotted onto agar media are widely used for genotype
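
    The logistic growth-curve fitting step described above can be sketched with synthetic data as follows; the parameter names, noise level and time grid are assumptions for illustration and are not YeastXtract output.

```python
# Sketch of fitting a logistic growth model to a culture growth curve to extract the
# maximum specific growth rate (synthetic data; parameters and noise are assumptions).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, N0):
    """Logistic growth: N(t) = K / (1 + ((K - N0)/N0) * exp(-r t))."""
    return K / (1.0 + ((K - N0) / N0) * np.exp(-r * t))

rng = np.random.default_rng(3)
t = np.linspace(0.0, 24.0, 49)                        # hours
N = logistic(t, K=1.0, r=0.45, N0=0.01)
N_obs = N + rng.normal(scale=0.01, size=t.size)       # noisy "image intensity" proxy

popt, _ = curve_fit(logistic, t, N_obs, p0=[1.0, 0.3, 0.02])
K_fit, r_fit, N0_fit = popt
print("maximum specific growth rate r =", r_fit)
```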

  19. The modelling of bolted flange joints used with disc springs and tube spacers to reduce relaxation

    Bolted flange joints are prone to leakage when exposed to high temperature. In several cases, the root cause is relaxation that takes place as a result of material creep of the gasket, the bolt and the flange. One way to overcome this problem is to make the joint less stiff by introducing disc springs or by using longer bolts with spacers. Although widely used, these two methods have no reliable analytical model that could be used to evaluate the exact number of washers or the length of the bolts required to reduce relaxation to a minimum acceptable level. This paper describes an analytical model, based on the flexibility and deflection interactions of the different elements of the joint, including the axial stiffness of the flange and bolts, used to evaluate relaxation. The developed analytical flange model can accommodate either disc springs or longer bolts with spacer tubes to reduce the bolt load loss to a maximum acceptable value. This model is validated by comparison with more accurate FEA findings. Calculation examples on a bolted flange joint are presented to illustrate the suggested analytical calculation procedure.

  20. Parameterized reduced order models from a single mesh using hyper-dual numbers

    Brake, M. R. W.; Fike, J. A.; Topping, S. D.

    2016-06-01

    In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, could potentially necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers, and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is two-fold: accuracy of the derivatives to machine precision, and the need to only generate a single mesh of the system of interest. The theory is applied to a stepped beam system in order to demonstrate proof of concept. The results demonstrate that the hyper-dual number multivariate parameterization of geometric variations, which are largely neglected in the literature, is accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the foundation to create a parameterized reduced order model based on a single mesh is expected to dramatically reduce the time necessary to analyze multiple realizations of a component's possible geometry.
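
    The key property exploited above, that hyper-dual arithmetic returns exact first and second derivatives from a single function evaluation, can be illustrated with a minimal class; only addition and multiplication are overloaded here, and the test function is a simple stand-in rather than a structural model.

```python
# Minimal hyper-dual number sketch: one evaluation of f returns the value plus exact
# first and second derivatives (machine precision, no step-size issues). Only + and *
# are overloaded; a full implementation would cover the whole arithmetic.
class HyperDual:
    def __init__(self, f, e1=0.0, e2=0.0, e12=0.0):
        self.f, self.e1, self.e2, self.e12 = f, e1, e2, e12

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f + o.f, self.e1 + o.e1, self.e2 + o.e2, self.e12 + o.e12)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f * o.f,
                         self.f * o.e1 + self.e1 * o.f,
                         self.f * o.e2 + self.e2 * o.f,
                         self.f * o.e12 + self.e1 * o.e2 + self.e2 * o.e1 + self.e12 * o.f)

    __rmul__ = __mul__

def f(x):
    return x * x * x + 2.0 * x        # f' = 3x^2 + 2, f'' = 6x

x = HyperDual(2.0, e1=1.0, e2=1.0)    # seed both perturbation directions with 1
y = f(x)
print(y.f, y.e1, y.e12)               # value 12, exact f'(2) = 14, exact f''(2) = 12
```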

  1. Reduced models of networks of coupled enzymatic reactions

    Kumar, Ajit

    2011-01-01

    The Michaelis-Menten equation has played a central role in our understanding of biochemical processes. It has long been understood how this equation approximates the dynamics of irreversible enzymatic reactions. However, a similar approximation in the case of networks, where the product of one reaction can act as an enzyme in another, has not been fully developed. Here we rigorously derive such an approximation in a class of coupled enzymatic networks where the individual interactions are of Michaelis-Menten type. We show that the sufficient conditions for the validity of the total quasi steady state assumption (tQSSA), obtained in a single protein case by Borghans, de Boer and Segel can be extended to sufficient conditions for the validity of the tQSSA in a large class of enzymatic networks. Secondly, we derive reduced equations that approximate the network's dynamics and involve only protein concentrations. This significantly reduces the number of equations necessary to model such systems. We prove the vali...
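
    The single-reaction building block that the paper generalizes can be sketched by comparing a full mass-action model with its reduced Michaelis-Menten description; the rate constants and concentrations below are arbitrary illustrative values chosen so that the reduction is expected to be valid.

```python
# Sketch comparing a full mass-action model of one irreversible enzymatic reaction with
# its reduced Michaelis-Menten description (rate constants are illustrative values,
# not taken from the paper).
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 10.0, 1.0, 0.5     # binding, unbinding, catalytic rates
E0, S0 = 0.1, 2.0                # total enzyme and initial substrate

def full(t, y):
    S, C = y                                  # free substrate, enzyme-substrate complex
    return [-k1 * (E0 - C) * S + km1 * C,
            k1 * (E0 - C) * S - (km1 + k2) * C]

def reduced(t, y):
    S = y[0]
    Km = (km1 + k2) / k1
    return [-k2 * E0 * S / (Km + S)]          # Michaelis-Menten rate law

tspan = (0.0, 60.0)
sol_full = solve_ivp(full, tspan, [S0, 0.0], max_step=0.01)
sol_red = solve_ivp(reduced, tspan, [S0], max_step=0.01)
print(sol_full.y[0, -1], sol_red.y[0, -1])    # final substrate, full vs reduced
```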

  2. Assimilating in-situ Measurements into a Reduced-Dimensionality Model of an Estuary- Plume System.

    Frolov, S.; Baptista, A.; Leen, T.; Lu, Z.; van der Merwe, R.

    2006-12-01

    A very fast, model-independent, fully non-linear extension to the reduced-space Kalman filter has recently been proposed and demonstrated for the assimilation of non-linear circulation in both a synthetic estuary and the river-dominated Columbia River estuary. Here, we extend the application to another complex problem: the simulation of a coupled estuary-plume system. Our data assimilation method is based on the same three stages as in our previous work: (1) generate a database of hindcast runs with a forward numerical circulation model like SELFE; (2) use examples from the hindcast database to train a fast, non-linear neural network model surrogate that approximates the dynamics of the forward model; and (3) use a Sigma Point Kalman filter, incorporating the model surrogate dynamics, to estimate the true state of the system. Both the model surrogates (2) and the state estimation (3) operate in the reduced space spanned by the Empirical Orthogonal Functions (EOFs, aka principal components). The key modifications that we are introducing are an improved EOF analysis for more accurate dimension reduction in the plume region, a more compact noise model for faster DA, and improved treatment of wetting and drying. The resulting data assimilation system is ~100 times faster than the forward model and ~10,000 times faster than existing variational and sequential methods for data assimilation. As a test of the system, we assimilate in-situ data from four offshore moorings and 14 estuarine stations during May-September of 2004. For validation of the experiments we use cross-validation against in-situ data, data from research cruises, and satellite imagery. We show that data assimilation is effective for improving the simulation of several highly non-linear processes: the dynamics of the estuarine salt-wedge, the response of the plume to wind shifts, the propagation of the shallow water tides, and the wetting and drying of tidal flats.

  3. Improved Reduced Models for Single-Pass and Reflective Semiconductor Optical Amplifiers

    Dúill, Seán P Ó

    2014-01-01

    We present highly accurate and easy to implement, improved lumped semiconductor optical amplifier (SOA) models for both single-pass and reflective semiconductor optical amplifiers (RSOA). The key feature of the model is the inclusion of the internal losses and we show that a few subdivisions are required to achieve an accuracy of 0.12 dB. For the case of RSOAs, we generalize a recently published model to account for the internal losses that are vital to replicate observed RSOA behavior. The results of the improved reduced RSOA model show large overlap when compared to a full bidirectional travelling wave model over a 40 dB dynamic range of input powers and a 20 dB dynamic range of reflectivity values. The models would be useful for the rapid system simulation of signals in communication systems, i.e. passive optical networks that employ RSOAs, signal processing using SOAs and for implementing digital back propagation to undo amplifier induced signal distortions.

  4. Fragile DNA repair mechanism reduces ageing in multicellular model.

    Kristian Moss Bendtsen

    Full Text Available DNA damages, as well as mutations, increase with age. It is believed that these result from increased genotoxic stress and decreased capacity for DNA repair. The two causes are not independent, DNA damage can, for example, through mutations, compromise the capacity for DNA repair, which in turn increases the amount of unrepaired DNA damage. Despite this vicious circle, we ask, can cells maintain a high DNA repair capacity for some time or is repair capacity bound to continuously decline with age? We here present a simple mathematical model for ageing in multicellular systems where cells subjected to DNA damage can undergo full repair, go apoptotic, or accumulate mutations thus reducing DNA repair capacity. Our model predicts that at the tissue level repair rate does not continuously decline with age, but instead has a characteristic extended period of high and non-declining DNA repair capacity, followed by a rapid decline. Furthermore, the time of high functionality increases, and consequently slows down the ageing process, if the DNA repair mechanism itself is vulnerable to DNA damages. Although counterintuitive at first glance, a fragile repair mechanism allows for a faster removal of compromised cells, thus freeing the space for healthy peers. This finding might be a first step toward understanding why a mutation in single DNA repair protein (e.g. Wrn or Blm is not buffered by other repair proteins and therefore, leads to severe ageing disorders.

  5. Testing the importance of accurate meteorological input fields and parameterizations in atmospheric transport modelling using DREAM - Validation against ETEX-1

    Brandt, J.; Bastrup-Birk, A.; Christensen, J.H.;

    1998-01-01

    A tracer model, the DREAM, which is based on a combination of a near-range Lagrangian model and a long-range Eulerian model, has been developed. The meteorological meso-scale model, MM5V1, is implemented as a meteorological driver for the tracer model. The model system is used for studying transp...

  6. High-resolution LES of the rotating stall in a reduced scale model pump-turbine

    Extending the operating range of modern pump-turbines becomes increasingly important in the course of the integration of renewable energy sources in the existing power grid. However, at partial load condition in pumping mode, the occurrence of rotating stall is critical to the operational safety of the machine and to grid stability. The understanding of the mechanisms behind this flow phenomenon still remains vague and incomplete. Past numerical simulations using a RANS approach often led to inconclusive results concerning the physical background. For the first time, the rotating stall is investigated by performing a large scale LES calculation on the HYDRODYNA pump-turbine scale model featuring approximately 100 million elements. The computations were performed on the PRIMEHPC FX10 of the University of Tokyo using the overset Finite Element open source code FrontFlow/blue with the dynamic Smagorinsky turbulence model and the no-slip wall condition. The internal flow computed is the one obtained when operating the pump-turbine at 76% of the best efficiency point in pumping mode, as previous experimental research showed the presence of four rotating cells. The rotating stall phenomenon is accurately reproduced for a reduced Reynolds number using the LES approach with acceptable computing resources. The results show an excellent agreement with available experimental data from the reduced scale model testing at the EPFL Laboratory for Hydraulic Machines. The number of stall cells as well as the propagation speed corroborates the experiment

  7. Charging and discharging tests for obtaining an accurate dynamic electro-thermal model of high power lithium-ion pack system for hybrid and EV applications

    Mihet-Popa, Lucian; Camacho, Oscar Mauricio Forero; Nørgård, Per Bromand

    2013-01-01

    This paper presents a battery test platform including two Li-ion battery designed for hybrid and EV applications, and charging/discharging tests under different operating conditions carried out for developing an accurate dynamic electro-thermal model of a high power Li-ion battery pack system. The...

  8. Modelling obesity outcomes : reducing obesity risk in adulthood may have greater impact than reducing obesity prevalence in childhood

    Lhachimi, S. K.; Nusselder, W. J.; Lobstein, T. J.; Smit, H. A.; Baili, P.; Bennett, K.; Kulik, M. C.; Jackson-Leach, R.; Boshuizen, H. C.; Mackenbach, J. P.

    2013-01-01

    A common policy response to the rise in obesity prevalence is to undertake interventions in childhood, but it is an open question whether this is more effective than reducing the risk of becoming obese during adulthood. In this paper, we model the effect on health outcomes of (i) reducing the preval

  9. Modelling obesity outcomes: reducing obesity risk in adulthood may have greater impact than reducing obesity prevalence in childhood

    Lhachimi, S.K.; Nusselder, W.J.; Lobstein, T.J.; Smit, H.A.; Baili, P.; Bennett, K.; Kulik, M.C.; Jackson-Leach, R.; Boshuizen, H.C.; Mackenbach, J.P.

    2013-01-01

    A common policy response to the rise in obesity prevalence is to undertake interventions in childhood, but it is an open question whether this is more effective than reducing the risk of becoming obese during adulthood. In this paper, we model the effect on health outcomes of (i) reducing the preval

  10. Design of multivariable feedback control systems via spectral assignment using reduced-order models and reduced-order observers

    Mielke, R. R.; Tung, L. J.; Carraway, P. I., III

    1985-01-01

    The feasibility of using reduced order models and reduced order observers with eigenvalue/eigenvector assignment procedures is investigated. A review of spectral assignment synthesis procedures is presented. Then, a reduced order model which retains essential system characteristics is formulated. A constant state feedback matrix which assigns desired closed loop eigenvalues and approximates specified closed loop eigenvectors is calculated for the reduced order model. It is shown that the eigenvalue and eigenvector assignments made in the reduced order system are retained when the feedback matrix is implemented about the full order system. In addition, those modes and associated eigenvectors which are not included in the reduced order model remain unchanged in the closed loop full order system. The full state feedback design is then implemented by using a reduced order observer. It is shown that the eigenvalue and eigenvector assignments of the closed loop full order system remain unchanged when a reduced order observer is used. The design procedure is illustrated by an actual design problem.
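
    The spectral-assignment step on a reduced-order model can be sketched with a standard pole-placement routine; this shows only the computation of a constant feedback gain for an assumed two-state model, not the paper's eigenvector approximation, modal-retention argument or reduced-order observer.

```python
# Minimal sketch of assigning desired closed-loop eigenvalues to a (reduced-order)
# state-space model via constant state feedback (matrices and pole locations are
# made up for illustration).
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])          # reduced-order plant
B = np.array([[0.0],
              [1.0]])

desired = np.array([-2.0, -3.0])      # desired closed-loop eigenvalues
K = place_poles(A, B, desired).gain_matrix

print(np.sort(np.linalg.eigvals(A - B @ K)))   # should be approximately [-3, -2]
```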

  11. Scoring predictive models using a reduced representation of proteins: model and energy definition

    Corazza Alessandra; Giugliarelli Gilberto; Bortolussi Luca; Dovier Agostino; Pieri Lidia; Fogolari Federico; Esposito Gennaro; Viglino Paolo

    2007-01-01

    Abstract Background Reduced representations of proteins have been playing a key role in the study of protein folding. Many such models are available, with different representation detail. Although the usefulness of many such models for structural bioinformatics applications has been demonstrated in recent years, there are few intermediate resolution models endowed with an energy model capable, for instance, of detecting native or native-like structures among decoy sets. The aim of the present ...

  12. Reduced Moment-Based Models for Oxygen Precipitates and Dislocation Loops in Silicon

    Trzynadlowski, Bart

    The demand for ever smaller, higher-performance integrated circuits and more efficient, cost-effective solar cells continues to push the frontiers of process technology. Fabrication of silicon devices requires extremely precise control of impurities and crystallographic defects. Failure to do so not only reduces performance, efficiency, and yield, it threatens the very survival of commercial enterprises in today's fiercely competitive and price-sensitive global market. The presence of oxygen in silicon is an unavoidable consequence of the Czochralski process, which remains the most popular method for large-scale production of single-crystal silicon. Oxygen precipitates that form during thermal processing cause distortion of the surrounding silicon lattice and can lead to the formation of dislocation loops. Localized deformation caused by both of these defects introduces potential wells that trap diffusing impurities such as metal atoms, which is highly desirable if done far away from sensitive device regions. Unfortunately, dislocations also reduce the mechanical strength of silicon, which can cause wafer warpage and breakage. Engineers must negotiate this and other complex tradeoffs when designing fabrication processes. Accomplishing this in a complex, modern process involving a large number of thermal steps is impossible without the aid of computational models. In this dissertation, new models for oxygen precipitation and dislocation loop evolution are described. An oxygen model using kinetic rate equations to evolve the complete precipitate size distribution was developed first. This was then used to create a reduced model tracking only the moments of the size distribution. The moment-based model was found to run significantly faster than its full counterpart while accurately capturing the evolution of oxygen precipitates. The reduced model was fitted to experimental data and a sensitivity analysis was performed to assess the robustness of the results. Source

  13. Wind Farm Flow Modeling using an Input-Output Reduced-Order Model

    Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter

    2016-08-01

    Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost, but retains the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
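
    The first stage described above, extracting dominant structures by proper orthogonal decomposition, amounts to an SVD of a snapshot matrix; in the sketch below the snapshots are random stand-ins for the large-eddy-simulation flow field, and the subsequent input-output system identification step is not shown.

```python
# Sketch of POD mode extraction from a snapshot matrix via the SVD (random data stands
# in for the flow field; mode count and sizes are illustrative).
import numpy as np

rng = np.random.default_rng(4)
n_points, n_snapshots = 5000, 200

# Columns are flow snapshots (mean already subtracted in a real application).
X = rng.standard_normal((n_points, n_snapshots))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 10
Phi = U[:, :r]                      # first r POD modes
a = np.diag(s[:r]) @ Vt[:r, :]      # their temporal coefficients

energy = np.sum(s[:r] ** 2) / np.sum(s ** 2)
print("fraction of snapshot energy captured:", energy)
```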

  14. A reduced order aerothermodynamic modeling framework for hypersonic vehicles based on surrogate and POD

    Chen Xin

    2015-10-01

    Full Text Available Aerothermoelasticity is one of the key technologies for hypersonic vehicles. Accurate and efficient computation of the aerothermodynamics is one of the primary challenges for hypersonic aerothermoelastic analysis. Aimed at overcoming the shortcomings of engineering calculation, computational fluid dynamics (CFD) and experimental investigation, a reduced order modeling (ROM) framework for aerothermodynamics based on CFD predictions using an enhanced algorithm of fast maximin Latin hypercube design is developed. Both proper orthogonal decomposition (POD) and surrogate approaches are considered and compared to construct ROMs. Two surrogate approaches named Kriging and optimized radial basis function (ORBF) are utilized to construct ROMs. Furthermore, an enhanced algorithm of fast maximin Latin hypercube design is proposed, which proves to be helpful in improving the precision of the ROMs. Test results for the three-dimensional aerothermodynamics over a hypersonic surface indicate that the precision of ROMs based on Kriging is better than that of ROMs based on ORBF, and that ROMs based on Kriging are marginally more accurate than ROMs based on POD-Kriging. In a word, the ROM framework for hypersonic aerothermodynamics has good precision and efficiency.
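
    The sampling-plus-surrogate workflow can be sketched with a Latin hypercube design and a radial basis function interpolant (SciPy 1.7 or later); the "expensive model", parameter ranges and sample sizes below are invented stand-ins, and neither the fast maximin enhancement nor the Kriging and POD variants are reproduced.

```python
# Sketch: Latin hypercube sampling of a parameter space, evaluation of an (assumed)
# expensive model at the samples, and an RBF surrogate fitted to the results.
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def expensive_model(x):
    """Cheap stand-in for a CFD aerothermodynamic evaluation at parameters x = (Mach, AoA)."""
    return np.sin(3.0 * x[:, 0]) * np.cos(2.0 * x[:, 1]) + 0.1 * x[:, 0] ** 2

sampler = qmc.LatinHypercube(d=2, seed=5)
X_unit = sampler.random(n=40)
X = qmc.scale(X_unit, l_bounds=[5.0, -2.0], u_bounds=[10.0, 6.0])   # assumed Mach, AoA ranges
y = expensive_model(X)

surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

X_test = qmc.scale(qmc.LatinHypercube(d=2, seed=6).random(200),
                   [5.0, -2.0], [10.0, 6.0])
err = np.max(np.abs(surrogate(X_test) - expensive_model(X_test)))
print("max surrogate error on test points:", err)
```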

  15. An ONIOM study of the Bergman reaction: a computationally efficient and accurate method for modeling the enediyne anticancer antibiotics

    Feldgus, Steven; Shields, George C.

    2001-10-01

    The Bergman cyclization of large polycyclic enediyne systems that mimic the cores of the enediyne anticancer antibiotics was studied using the ONIOM hybrid method. Tests on small enediynes show that ONIOM can accurately match experimental data. The effect of the triggering reaction in the natural products is investigated, and we support the argument that it is strain effects that lower the cyclization barrier. The barrier for the triggered molecule is very low, leading to a reasonable half-life at biological temperatures. No evidence is found that would suggest a concerted cyclization/H-atom abstraction mechanism is necessary for DNA cleavage.

  16. A Two-Scale Reduced Model for Darcy Flow in Fractured Porous Media

    Chen, Huangxin

    2016-06-01

    In this paper, we develop a two-scale reduced model for simulating the Darcy flow in two-dimensional porous media with conductive fractures. We apply the approach motivated by the embedded fracture model (EFM) to simulate the flow on the coarse scale, and the effect of fractures on each coarse scale grid cell intersecting with fractures is represented by the discrete fracture model (DFM) on the fine scale. In the DFM used on the fine scale, the matrix-fracture system is resolved on an unstructured grid which represents the fractures accurately, while in the EFM used on the coarse scale, the flux interaction between fractures and matrix is dealt with as a source term, and the matrix-fracture system can be resolved on a structured grid. The Raviart-Thomas mixed finite element methods are used for the solution of the coupled flows in the matrix and the fractures on both fine and coarse scales. Numerical results are presented to demonstrate the efficiency of the proposed model for the simulation of flow in fractured porous media.

  17. A reduced-form intensity-based model under fuzzy environments

    Wu, Liang; Zhuang, Yaming

    2015-05-01

    External shocks and internal contagion are important sources of default events. However, the effect of external shocks and internal contagion on a company is not observed directly, so the exact size of the shocks cannot be obtained. The information available to investors about the default process exhibits a certain fuzziness. Therefore, using randomness and fuzziness to study problems such as derivative pricing or default probability meets a practical need. But the idea of fuzzifying credit risk models is little exploited, especially in a reduced-form model. This paper proposes a new default intensity model with fuzziness, presents a fuzzy default probability and a fuzzy default loss rate, and applies them to the pricing of defaultable debt and credit derivatives. Finally, a simulation analysis verifies the rationality of the model. Using fuzzy numbers and random analysis, one can consider more sources of uncertainty in the default process, as well as investors' subjective judgment of the financial markets under varying degrees of fuzzy reliability, so as to broaden the scope of possible credit spreads.

  18. Prediction of a Francis turbine prototype full load instability from investigations on the reduced scale model

    The growing development of renewable energies combined with the process of privatization leads to a change in economic energy market strategies. Instantaneous pricing of electricity as a function of demand or predictions induces profitable peak production, which is mainly covered by hydroelectric power plants. Therefore, operators harness more hydroelectric facilities at full load operating conditions. However, the Francis Turbine features an axi-symmetric rope leaving the runner which may act under certain conditions as an internal energy source leading to instability. Undesired power and pressure fluctuations are induced which may limit the maximum available power output. BC Hydro experiences such constraints in a hydroelectric power plant consisting of four 435 MW Francis Turbine generating units, which is located in Canada's province of British Columbia. Under specific full load operating conditions, one unit experiences power and pressure fluctuations at 0.46 Hz. The aim of the paper is to present a methodology allowing prediction of this prototype's instability frequency from investigations on the reduced scale model. A new hydro acoustic vortex rope model has been developed in SIMSEN software, taking into account the energy dissipation due to the thermodynamic exchange between the gas and the surrounding liquid. A combination of measurements, CFD simulations and computation of eigenmodes of the reduced scale model installed on the test rig allows the accurate calibration of the vortex rope model parameters at the model scale. Then, transposition of parameters to the prototype according to similitude laws is applied and stability analysis of the power plant is performed. The eigenfrequency of 0.39 Hz related to the first eigenmode of the power plant is determined to be unstable. The predicted frequency of the full load power and pressure fluctuations at the unit's unstable operating point is found to be in general agreement with the prototype measurements.

  19. Prediction of a Francis turbine prototype full load instability from investigations on the reduced scale model

    Alligné, S.; Maruzewski, P.; Dinh, T.; Wang, B.; Fedorov, A.; Iosfin, J.; Avellan, F.

    2010-08-01

    The growing development of renewable energies combined with the process of privatization leads to a change in economic energy market strategies. Instantaneous pricing of electricity as a function of demand or predictions induces profitable peak production, which is mainly covered by hydroelectric power plants. Therefore, operators harness more hydroelectric facilities at full load operating conditions. However, the Francis Turbine features an axi-symmetric rope leaving the runner which may act under certain conditions as an internal energy source leading to instability. Undesired power and pressure fluctuations are induced which may limit the maximum available power output. BC Hydro experiences such constraints in a hydroelectric power plant consisting of four 435 MW Francis Turbine generating units, which is located in Canada's province of British Columbia. Under specific full load operating conditions, one unit experiences power and pressure fluctuations at 0.46 Hz. The aim of the paper is to present a methodology allowing prediction of this prototype's instability frequency from investigations on the reduced scale model. A new hydro acoustic vortex rope model has been developed in SIMSEN software, taking into account the energy dissipation due to the thermodynamic exchange between the gas and the surrounding liquid. A combination of measurements, CFD simulations and computation of eigenmodes of the reduced scale model installed on the test rig allows the accurate calibration of the vortex rope model parameters at the model scale. Then, transposition of parameters to the prototype according to similitude laws is applied and stability analysis of the power plant is performed. The eigenfrequency of 0.39 Hz related to the first eigenmode of the power plant is determined to be unstable. The predicted frequency of the full load power and pressure fluctuations at the unit's unstable operating point is found to be in general agreement with the prototype measurements.

  20. Reduced-order model based feedback control of the modified Hasegawa-Wakatani model

    Goumiri, I. R.; Rowley, C. W.; Ma, Z. [Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, New Jersey 08544 (United States); Gates, D. A.; Krommes, J. A.; Parker, J. B. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08544 (United States)

    2013-04-15

    In this work, model-based feedback control that stabilizes an unstable equilibrium is developed for the Modified Hasegawa-Wakatani (MHW) equations, a classic model in plasma turbulence. First, balanced truncation (a model reduction technique that has proven successful in flow control design problems) is applied to obtain a low-dimensional model of the linearized MHW equations. Then, a model-based feedback controller is designed for the reduced-order model using linear quadratic regulators. Finally, a linear quadratic Gaussian controller, which is more resistant to disturbances, is deduced. The controller is applied to the non-reduced, nonlinear MHW equations to stabilize the equilibrium and suppress the transition to drift-wave induced turbulence.
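
    For illustration, the reduction-then-control pipeline described above (balanced truncation of a stable linear system, followed by LQR design on the reduced model) can be sketched in generic Python. The random stable system below is only a stand-in for the linearized MHW operator, which is not reproduced here, and the weighting matrices are arbitrary.

```python
# Sketch of balanced truncation followed by LQR on the reduced model.
# The random stable (A, B, C) is NOT the actual plasma model, only an illustration.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

rng = np.random.default_rng(0)
n, m, p, r = 30, 2, 2, 6                      # full order, inputs, outputs, reduced order
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)   # shift to make A stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

def psd_factor(W):
    """Robust square-root factor L with W ~ L @ L.T for a (numerically) PSD Gramian."""
    w, V = np.linalg.eigh(W)
    return V * np.sqrt(np.clip(w, 1e-12, None))

# Controllability/observability Gramians: A Wc + Wc A^T + B B^T = 0 (and the dual equation)
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
Lc, Lo = psd_factor(Wc), psd_factor(Wo)

# Square-root balanced truncation: Hankel singular values and projection matrices
U, s, Vt = np.linalg.svd(Lo.T @ Lc)
sr = np.sqrt(s[:r])
T = Lc @ Vt[:r, :].T / sr                      # maps reduced -> full coordinates
Ti = (U[:, :r] / sr).T @ Lo.T                  # maps full -> reduced coordinates
Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T

# LQR on the reduced model (output weighting with a small regularisation, identity input weight)
Q, R = Cr.T @ Cr + 1e-6 * np.eye(r), np.eye(m)
P = solve_continuous_are(Ar, Br, Q, R)
K = np.linalg.solve(R, Br.T @ P)

print("leading Hankel singular values:", np.round(s[:r], 4))
print("closed-loop reduced eigenvalues (real parts):",
      np.round(np.linalg.eigvals(Ar - Br @ K).real, 3))
```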

  1. Nonlinear dynamics of an electrically actuated imperfect microbeam resonator: Experimental investigation and reduced-order modeling

    Ruzziconi, Laura

    2013-06-10

    We present a study of the dynamic behavior of a microelectromechanical systems (MEMS) device consisting of an imperfect clamped-clamped microbeam subjected to electrostatic and electrodynamic actuation. Our objective is to develop a theoretical analysis, which is able to describe and predict all the main relevant aspects of the experimental response. Extensive experimental investigation is conducted, where the main imperfections coming from microfabrication are detected, the first four experimental natural frequencies are identified and the nonlinear dynamics are explored at increasing values of electrodynamic excitation, in a neighborhood of the first symmetric resonance. Several backward and forward frequency sweeps are acquired. The nonlinear behavior is highlighted, which includes ranges of multistability, where the nonresonant and the resonant branch coexist, and intervals where superharmonic resonances are clearly visible. Numerical simulations are performed. Initially, two single mode reduced-order models are considered. One is generated via the Galerkin technique, and the other one via the combined use of the Ritz method and the Padé approximation. Both of them are able to provide a satisfactory agreement with the experimental data. This occurs not only at low values of electrodynamic excitation, but also at higher ones. Their computational efficiency is discussed in detail, since this is an essential aspect for systematic local and global simulations. Finally, the theoretical analysis is further improved and a two-degree-of-freedom reduced-order model is developed, which is also capable of capturing the measured second symmetric superharmonic resonance. Despite the apparent simplicity, it is shown that all the proposed reduced-order models are able to describe the experimental complex nonlinear dynamics of the device accurately and properly, which validates the proposed theoretical approach. © 2013 IOP Publishing Ltd.
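
    As a rough illustration of how a single-mode Galerkin reduction can reproduce the backward/forward sweep hysteresis mentioned above, the sketch below integrates a Duffing-type oscillator (linear stiffness normalized to one, cubic hardening from mid-plane stretching, harmonic forcing) through quasi-static frequency sweeps. The coefficients are made up for the illustration and are not the calibrated values of the device.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical nondimensional coefficients of a single-mode Galerkin reduction
zeta, alpha, F = 0.02, 0.5, 0.08

def rhs(t, y, om):
    q, v = y
    return [v, -2.0 * zeta * v - q - alpha * q**3 + F * np.cos(om * t)]

def sweep(freqs):
    """Quasi-static frequency sweep: the final state at one frequency seeds the next."""
    y, amps = [0.0, 0.0], []
    for om in freqs:
        T = 2.0 * np.pi / om
        sol = solve_ivp(rhs, (0.0, 150.0 * T), y, args=(om,), max_step=T / 30, rtol=1e-7)
        y = list(sol.y[:, -1])
        tail = sol.t > sol.t[-1] - 20.0 * T          # steady-state portion of the response
        amps.append(np.abs(sol.y[0, tail]).max())
    return np.array(amps)

freqs = np.linspace(0.8, 1.4, 41)
forward = sweep(freqs)
backward = sweep(freqs[::-1])[::-1]
for i in range(0, len(freqs), 8):
    print(f"Omega = {freqs[i]:.3f}   forward amp = {forward[i]:.3f}   backward amp = {backward[i]:.3f}")
```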

  2. Nonlinear dynamics of an electrically actuated imperfect microbeam resonator: experimental investigation and reduced-order modeling

    We present a study of the dynamic behavior of a microelectromechanical systems (MEMS) device consisting of an imperfect clamped–clamped microbeam subjected to electrostatic and electrodynamic actuation. Our objective is to develop a theoretical analysis, which is able to describe and predict all the main relevant aspects of the experimental response. Extensive experimental investigation is conducted, where the main imperfections coming from microfabrication are detected, the first four experimental natural frequencies are identified and the nonlinear dynamics are explored at increasing values of electrodynamic excitation, in a neighborhood of the first symmetric resonance. Several backward and forward frequency sweeps are acquired. The nonlinear behavior is highlighted, which includes ranges of multistability, where the nonresonant and the resonant branch coexist, and intervals where superharmonic resonances are clearly visible. Numerical simulations are performed. Initially, two single mode reduced-order models are considered. One is generated via the Galerkin technique, and the other one via the combined use of the Ritz method and the Padé approximation. Both of them are able to provide a satisfactory agreement with the experimental data. This occurs not only at low values of electrodynamic excitation, but also at higher ones. Their computational efficiency is discussed in detail, since this is an essential aspect for systematic local and global simulations. Finally, the theoretical analysis is further improved and a two-degree-of-freedom reduced-order model is developed, which is also capable of capturing the measured second symmetric superharmonic resonance. Despite the apparent simplicity, it is shown that all the proposed reduced-order models are able to describe the experimental complex nonlinear dynamics of the device accurately and properly, which validates the proposed theoretical approach. (paper)

  3. A pharmacokinetic/pharmacodynamic mathematical model accurately describes the activity of voriconazole against Candida spp. in vitro

    Li, Yanjun; Nguyen, M. Hong; Cheng, Shaoji; Schmidt, Stephan; Zhong, Li; Derendorf, Hartmut; Clancy, Cornelius J.

    2008-01-01

    We developed a pharmacokinetic/pharmacodynamic (PK/PD) mathematical model that fits voriconazole time–kill data against Candida isolates in vitro and used the model to simulate the expected kill curves for typical intravenous and oral dosing regimens. A series of Emax mathematical models were used to fit time–kill data for two isolates each of Candida albicans, Candida glabrata and Candida parapsilosis. PK parameters extracted from human data sets were used in the model to simulate kill curve...
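
    A minimal form of such an Emax PK/PD time-kill model couples logistic fungal growth to a sigmoidal, concentration-dependent kill term, with the drug concentration following first-order elimination after a single dose. The sketch below uses placeholder parameter values, not the fitted voriconazole/Candida estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (illustrative only)
kg, Nmax = 0.5, 1e8                 # growth rate (1/h), carrying capacity (CFU/mL)
kmax, EC50, hill = 0.8, 0.5, 2.0    # max kill rate (1/h), EC50 (mg/L), Hill coefficient
ke, C0 = 0.1, 3.0                   # elimination rate (1/h), initial concentration (mg/L)

def conc(t):
    return C0 * np.exp(-ke * t)     # one-compartment PK after a single dose

def rhs(t, y):
    N = y[0]
    C = conc(t)
    kill = kmax * C**hill / (EC50**hill + C**hill)   # sigmoidal Emax effect
    return [kg * N * (1.0 - N / Nmax) - kill * N]

sol = solve_ivp(rhs, (0.0, 48.0), [1e5], dense_output=True, rtol=1e-8)
for t in (0, 6, 12, 24, 48):
    print(f"t={t:2d} h   C={conc(t):5.2f} mg/L   N={sol.sol(t)[0]:.3e} CFU/mL")
```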

  4. Can crop-climate models be accurate and precise? A case study for wheat production in Denmark

    Martin, M M -S; Olesen, Jørgen E; Porter, John Roy

    2015-01-01

    Crop models, used to make projections of climate change impacts, differ greatly in structural detail. Complexity of model structure has generic effects on uncertainty and error propagation in climate change impact assessments. We applied Bayesian calibration to three distinctly different empirical...... make them suitable for generic model ensembles for near-term agricultural impact assessments of climate change....

  5. Reduced Order Aeroservoelastic Models with Rigid Body Modes Project

    National Aeronautics and Space Administration — Complex aeroelastic and aeroservoelastic phenomena can be modeled on complete aircraft configurations generating models with millions of degrees of freedom....

  6. Reduced Order Aeroservoelastic Models with Rigid Body Modes Project

    National Aeronautics and Space Administration — Complex aeroelastic and aeroservoelastic phenomena can be modeled on complete aircraft configurations, generating models with millions of degrees of freedom....

  7. Reduced-Order Model Based Feedback Control For Modified Hasegawa-Wakatani Model

    Goumiri, I. R.; Rowley, C. W.; Ma, Z.; Gates, D. A.; Krommes, J. A.; Parker, J. B.

    2013-01-28

    In this work, model-based feedback control that stabilizes an unstable equilibrium is developed for the Modified Hasegawa-Wakatani (MHW) equations, a classic model in plasma turbulence. First, balanced truncation (a model reduction technique that has proven successful in flow control design problems) is applied to obtain a low-dimensional model of the linearized MHW equation. Then, a model-based feedback controller is designed for the reduced-order model using linear quadratic regulators (LQR). Finally, a linear quadratic Gaussian (LQG) controller, which is more resistant to disturbances, is deduced. The controller is applied to the non-reduced, nonlinear MHW equations to stabilize the equilibrium and suppress the transition to drift-wave induced turbulence.

  8. Testing the importance of accurate meteorological input fields and parameterizations in atmospheric transport modelling using DREAM - Validation against ETEX-1

    Brandt, J.; Bastrup-Birk, A.; Christensen, J.H.; Mikkelsen, T.; Thykier-Nielsen, S.; Zlatev, Z.

    1998-01-01

    transport and dispersion of air pollutants caused by a single but strong source as, e.g. an accidental release from a nuclear power plant. The model system including the coupling of the Lagrangian model with the Eulerian model are described. Various simple and comprehensive parameterizations of the mixing...... the parameterizations and meterological input data in order to find the best performing solution. (C) 1998 Elsevier Science Ltd. All rights reserved....

  9. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    Shiyao Wang; Zhidong Deng; Gang Yin

    2016-01-01

    A high-performance differential global positioning system (GPS)  receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ...

  10. The PRESTO-EPA MODEL - A user friendly, comprehensive, efficient, and accurate health effects simulation model for assessing low-level radioactive waste disposal sites

    This paper presents the characteristics of the PRESTO-EPA-CPG model with emphasis on its application features, efficiency and accuracy. The original model published in 1985 was designed to assess the maximum individual dose to a critical population group and the genetic and somatic health effects to the general population due to the disposal of radioactive wastes in near surface trenches. This model was subsequently modified to improve its efficiency and accuracy and to expand its potential application to various practices, including waste disposal, soil cleanup, and agricultural land application of waste materials. Accuracy of analysis was emphasised as one of the important goals of the model design. To achieve this goal, a dynamic infiltration submodel, a multiphase leaching submodel and a dynamic well mechanics submodel were used. As a result, model complexity was considerably increased. To reduce the complexity and to increase the efficiency of the model, simplified equations were used. For instance, the original partial differential equation system for the infiltration submodel was transformed into an ordinary differential equation system by dividing the soil moisture into three components: gravity water, pellicular water, and hygroscopic water. The transformed model was validated using field data obtained from Barnwell, South Carolina. In addition, an ad hoc leaching submodel was also developed based on results obtained using EPA's multiphase leaching model. Hung's one-dimensional groundwater model was adopted for the groundwater submodel, which has greatly simplified the simulation, especially with daughter nuclide ingrowth effects built in. A one-dimensional model could inherit significant theoretical errors relative to a three-dimensional model. However, these theoretical errors can be minimised in normal PRESTO-EPA model applications, as verified in a recent benchmarking study on solute transport. The PRESTO-EPA Operation System also includes interface

  11. Transgenic Mouse Model for Reducing Oxidative Damage in Bone

    Schreurs, A.-S.; Torres, S.; Truong, T.; Kumar, A.; Alwood, J. S.; Limoli, C. L.; Globus, R. K.

    2014-01-01

    Exposure to musculoskeletal disuse and radiation results in bone loss; we hypothesized that these catabolic treatments cause excess reactive oxygen species (ROS), and thereby alter the tight balance between bone resorption by osteoclasts and bone formation by osteoblasts, culminating in bone loss. To test this, we used transgenic mice which over-express the human gene for catalase, targeted to mitochondria (MCAT). Catalase is an anti-oxidant that converts the ROS hydrogen peroxide into water and oxygen. MCAT mice were shown previously to display reduced mitochondrial oxidative stress and radiosensitivity of the CNS compared to wild type controls (WT). As expected, MCAT mice expressed the transgene in skeletal tissue, and in marrow-derived osteoblasts and osteoclast precursors cultured ex vivo, and also showed greater catalase activity compared to wild type (WT) mice (3-6 fold). Colony expansion in marrow cells cultured under osteoblastogenic conditions was 2-fold greater in the MCAT mice compared to WT mice, while the extent of mineralization was unaffected. MCAT mice had slightly longer tibiae than WT mice (2%, p less than 0.01), although cortical bone area was slightly lower in MCAT mice than WT mice (10%, p=0.09). To challenge the skeletal system, mice were treated by exposure to combined disuse (2 wk Hindlimb Unloading) and total body irradiation Cs(137) (2 Gy, 0.8 Gy/min), then bone parameters were analyzed by 2-factor ANOVA to detect possible interaction effects. Treatment caused a 2-fold increase (p=0.015) in malondialdehyde levels of bone tissue (ELISA) in WT mice, but had no effect in MCAT mice. These findings indicate that the transgene conferred protection from oxidative damage caused by treatment. Unexpected differences between WT and MCAT mice emerged in skeletal responses to treatment. In WT mice, treatment did not alter osteoblastogenesis, cortical bone area, moment of inertia, or bone perimeter, whereas in MCAT mice, treatment increased these

  12. Optimization models for reducing air emissions from ships

    Balland, Océane

    2013-01-01

    This research deals with the reduction of air emissions from ships. Ships are large contributors to air pollution, with implications for climate change and human health. Regulations have entered into force, or will in the near future, forcing shipowners to reduce the air emissions from their vessels. Multiple technologies and operational measures for reducing these main pollutants are available, and the term air emission control has here been defined as any effort made by ship-owners, operat...

  13. Efficient and accurate modeling of multi-wavelength propagation in SOAs: a generalized coupled-mode approach

    Antonelli, Cristian; Li, Wangzhe; Coldren, Larry

    2015-01-01

    We present a model for multi-wavelength mixing in semiconductor optical amplifiers (SOAs) based on coupled-mode equations. The proposed model applies to all kinds of SOA structures, takes into account the longitudinal dependence of carrier density caused by saturation, accommodates arbitrary functional dependencies of the material gain and carrier recombination rate on the local value of carrier density, and is computationally more efficient by orders of magnitude as compared with the standard full model based on space-time equations. We apply the coupled-mode equations model to a recently demonstrated phase-sensitive amplifier based on an integrated SOA and prove its results to be consistent with the experimental data. The accuracy of the proposed model is certified by means of a meticulous comparison with the results obtained by integrating the space-time equations.

  14. Caries risk assessment in school children using a reduced Cariogram model without saliva tests

    Petersson, Gunnel Hänsel; Isberg, Per-Erik; Twetman, Svante

    2010-01-01

    To investigate the caries predictive ability of a reduced Cariogram model without salivary tests in schoolchildren....

  15. An Accurate Analytical Model for 802.11e EDCA under Different Traffic Conditions with Contention-Free Bursting

    Nada Chendeb Taher

    2011-01-01

    Extensive research addressing IEEE 802.11e enhanced distributed channel access (EDCA) performance analysis by means of analytical models exists in the literature. Unfortunately, the currently proposed models, even though numerous, do not reach the required accuracy, owing to the great number of simplifications that have been made. In particular, none of these models considers the 802.11e contention free burst (CFB) mode, which allows a given station to transmit a burst of frames without contention during a given transmission opportunity limit (TXOPLimit) time interval. Despite its influence on the global performance, TXOPLimit is ignored in almost all existing models. To fill in this gap, we develop in this paper a new and complete analytical model that (i) reflects the correct functioning of EDCA, (ii) includes all the 802.11e EDCA differentiation parameters, (iii) takes into account all the features of the protocol, and (iv) can be applied to all network conditions, going from nonsaturation to saturation. Additionally, this model is developed to be used in an admission control procedure, so it was designed to have low complexity and an acceptable response time. The proposed model is validated by means of both calculations and extensive simulations.

  16. What makes an accurate and reliable subject-specific finite element model? A case study of an elephant femur

    Panagiotopoulou, O.; Wilshin, S. D.; Rayfield, E J; Shefelbine, S. J.; Hutchinson, J. R.

    2014-01-01

    Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form–function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reli...

  17. Minocycline reduces reactive gliosis in the rat model of hydrocephalus

    Xu Hao

    2012-12-01

    Background: Reactive gliosis has been implicated in the injury and recovery patterns associated with hydrocephalus. Our aim is to determine the efficacy of minocycline, an antibiotic known for its anti-inflammatory properties, in reducing reactive gliosis and inhibiting the development of hydrocephalus. Results: Ventricular dilatation was evaluated by MRI at one week after drug treatment, while GFAP and Iba-1 were detected by RT-PCR, immunohistochemistry and Western blot. The expression of GFAP and Iba-1 was significantly higher in the hydrocephalic group than in the saline control group. Minocycline treatment of hydrocephalic animals significantly reduced the expression of GFAP and Iba-1. Likewise, the severity of ventricular dilatation was lower in minocycline-treated hydrocephalic animals than in the untreated group. Conclusion: Minocycline treatment is effective in reducing gliosis and delaying the development of hydrocephalus, with the prospect of becoming an auxiliary therapeutic option for hydrocephalus.

  18. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Rajib Kar

    2010-09-01

    This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnects of high-speed CMOS circuits for ramp inputs. Our metric is based on Burr's distribution function, which is used to characterize the normalized homogeneous portion of the step response. We use the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is verified by comparison with SPICE simulations.
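
    The general idea of a Burr-distribution-based delay/slew metric can be sketched as follows: approximate the normalized step response by a Burr Type XII CDF whose parameters are chosen to match the mean and standard deviation implied by the interconnect's circuit moments, then read the 50% delay and 10-90% slew off the fitted quantiles. The moments below are hypothetical, the second shape parameter is fixed arbitrarily, and the moment matching is done numerically rather than with the closed-form mapping of the paper.

```python
import numpy as np
from scipy.stats import burr12
from scipy.optimize import least_squares

# Hypothetical first two moments of the node's impulse response (ns); in practice
# these would come from the RC network's circuit moments.
mu, sigma = 0.8, 0.35
d_shape = 2.0                        # second Burr shape parameter, fixed here for simplicity

def residual(x):
    c, scale = np.exp(x)             # log-parameterisation keeps both values positive
    dist = burr12(c, d_shape, scale=scale)
    m, s = dist.mean(), dist.std()
    if not (np.isfinite(m) and np.isfinite(s)):
        return [1e6, 1e6]            # penalise parameter regions without finite moments
    return [m - mu, s - sigma]

fit = least_squares(residual, x0=np.log([3.0, 1.0]))
c, scale = np.exp(fit.x)
step = burr12(c, d_shape, scale=scale)   # Burr CDF approximates the normalized step response

delay_50 = step.ppf(0.5)
slew_10_90 = step.ppf(0.9) - step.ppf(0.1)
print(f"Burr fit: c={c:.3f}, scale={scale:.3f} ns")
print(f"50% delay   = {delay_50:.3f} ns")
print(f"10-90% slew = {slew_10_90:.3f} ns")
```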

  19. RCK: accurate and efficient inference of sequence- and structure-based protein–RNA binding models from RNAcompete data

    Orenstein, Yaron; Wang, Yuhao; Berger, Bonnie

    2016-01-01

    Motivation: Protein–RNA interactions, which play vital roles in many processes, are mediated through both RNA sequence and structure. CLIP-based methods, which measure protein–RNA binding in vivo, suffer from experimental noise and systematic biases, whereas in vitro experiments capture a clearer signal of protein RNA-binding. Among them, RNAcompete provides binding affinities of a specific protein to more than 240 000 unstructured RNA probes in one experiment. The computational challenge is to infer RNA structure- and sequence-based binding models from these data. The state-of-the-art in sequence models, Deepbind, does not model structural preferences. RNAcontext models both sequence and structure preferences, but is outperformed by GraphProt. Unfortunately, GraphProt cannot detect structural preferences from RNAcompete data due to the unstructured nature of the data, as noted by its developers, nor can it be tractably run on the full RNACompete dataset. Results: We develop RCK, an efficient, scalable algorithm that infers both sequence and structure preferences based on a new k-mer based model. Remarkably, even though RNAcompete data is designed to be unstructured, RCK can still learn structural preferences from it. RCK significantly outperforms both RNAcontext and Deepbind in in vitro binding prediction for 244 RNAcompete experiments. Moreover, RCK is also faster and uses less memory, which enables scalability. While currently on par with existing methods in in vivo binding prediction on a small scale test, we demonstrate that RCK will increasingly benefit from experimentally measured RNA structure profiles as compared to computationally predicted ones. By running RCK on the entire RNAcompete dataset, we generate and provide as a resource a set of protein–RNA structure-based models on an unprecedented scale. Availability and Implementation: Software and models are freely available at http://rck.csail.mit.edu/ Contact: bab@mit.edu Supplementary information

  20. Fast and accurate two-dimensional modelling of high-current, high-voltage air-cored transformers

    This paper presents a detailed two-dimensional model for high-voltage air-cored pulse transformers of two quite different designs. A filamentary technique takes magnetic diffusion fully into account and enables the resistances and self and mutual inductances that are effective under fast transient conditions to be calculated. Very good agreement between calculated and measured results for typical transformers has been obtained in several cases, and the model is now regularly used in the design of compact high-power sources

  1. Development of accurate UWB dielectric properties dispersion at CST simulation tool for modeling microwave interactions with numerical breast phantoms

    In this paper, a reformulation of the recently published dielectric property dispersion models of breast tissues is carried out for use with the CST simulation tool. The reformulation includes tabulation of the real and imaginary parts of these models versus frequency over the ultra-wideband (UWB) range using MATLAB programs. The tables are imported into the CST simulation tool and fitted to second- or first-order general equations. The results show good agreement between the original and the imported data. The MATLAB programs are included in the appendix.
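
    The tabulation step described above (real and imaginary parts of a tissue dispersion model versus frequency, exported for the field solver to re-fit) can be sketched with a single-pole Debye model plus a static conductivity term. The pole parameters below are placeholders, not the published breast-tissue coefficients, and the script only prints the table rather than writing a CST import file.

```python
import numpy as np

eps0 = 8.854e-12
# Hypothetical single-pole Debye parameters for an illustrative tissue (not the published values)
eps_inf, delta_eps, tau, sigma_s = 7.0, 40.0, 10.0e-12, 0.5   # -, -, s, S/m

f = np.linspace(0.5e9, 10.5e9, 11)          # UWB frequency grid (Hz)
w = 2.0 * np.pi * f
eps_c = eps_inf + delta_eps / (1.0 + 1j * w * tau) + sigma_s / (1j * w * eps0)

# Table of real and imaginary parts that a field solver could import and re-fit
print("f (GHz)   eps'      eps''")
for fi, e in zip(f, eps_c):
    print(f"{fi/1e9:6.1f}   {e.real:7.2f}   {-e.imag:7.2f}")
```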

  2. How accurately can subject-specific finite element models predict strains and strength of human femora? Investigation using full-field measurements.

    Grassi, Lorenzo; Väänänen, Sami P; Ristinmaa, Matti; Jurvelin, Jukka S; Isaksson, Hanna

    2016-03-21

    Subject-specific finite element models have been proposed as a tool to improve fracture risk assessment in individuals. A thorough laboratory validation against experimental data is required before introducing such models in clinical practice. Results from digital image correlation can provide full-field strain distribution over the specimen surface during in vitro tests, instead of at a few pre-defined locations as with strain gauges. The aim of this study was to validate finite element models of human femora against experimental data from three cadaver femora, both in terms of femoral strength and of the full-field strain distribution collected with digital image correlation. The results showed a high accuracy between predicted and measured principal strains (R²=0.93, RMSE=10%, 1600 validated data points per specimen). Femoral strength was predicted using a rate-dependent material model with specific strain limit values for yield and failure, which provided an accurate prediction of the experimental strength. The strain accuracy was comparable to that obtained in state-of-the-art studies which validated their prediction accuracy against 10-16 strain gauge measurements. Fracture force was accurately predicted, with the predicted failure location being very close to the experimental fracture rim. Despite the low sample size and the single loading condition tested, the present combined numerical-experimental method showed that finite element models can predict femoral strength by providing a thorough description of the local bone mechanical response. PMID:26944687
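
    The validation statistics quoted above (R² and RMSE between FE-predicted and DIC-measured principal strains) amount to a pointwise comparison over the full-field data. A generic sketch with synthetic numbers is shown below; the normalization of the RMSE by the measured strain range is one possible convention and not necessarily the one used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for ~1600 DIC-measured and FE-predicted principal strains (microstrain)
measured = rng.normal(0.0, 2000.0, size=1600)
predicted = measured + rng.normal(0.0, 200.0, size=1600)   # imperfect model

ss_res = np.sum((measured - predicted) ** 2)
ss_tot = np.sum((measured - measured.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

rmse = np.sqrt(np.mean((measured - predicted) ** 2))
nrmse = rmse / (measured.max() - measured.min())           # one common normalisation choice

print(f"R^2 = {r2:.3f},  RMSE = {rmse:.0f} microstrain,  NRMSE = {100*nrmse:.1f} %")
```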

  3. Efficient and physically accurate modeling and simulation of anisoplanatic imaging through the atmosphere: a space-variant volumetric image blur method

    Reinhardt, Colin N.; Ritcey, James A.

    2015-09-01

    We present a novel method for efficient and physically accurate modeling and simulation of anisoplanatic imaging through the atmosphere; in particular, we present a new space-variant volumetric image blur algorithm. The method is based on the use of physical atmospheric meteorology models, such as vertical turbulence profiles and aerosol/molecular profiles, which can in general be fully spatially varying in 3 dimensions and also evolving in time. The space-variant modeling method relies on the metadata provided by 3D computer graphics modeling and rendering systems to decompose the image into a set of slices which can be treated in an independent but physically consistent manner, to achieve simulated image blur effects which are more accurate and realistic than the homogeneous and stationary blurring methods commonly used today. We also present a simple illustrative example of the application of our algorithm, and show that its results and performance are in agreement with the expected relative trends and behavior of the prescribed turbulence profile physical model used to define the initial spatially varying environmental scenario conditions. We present the details of an efficient Fourier-transform-domain formulation of the space-variant volumetric blur algorithm, a detailed pseudocode description of the method implementation, and clarification of some non-obvious technical details.
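
    The slice-wise idea can be sketched with a toy compositor: each depth slice is blurred independently with its own point-spread function applied in the Fourier domain, then the blurred slices are composited back to front. The Gaussian PSFs and the two synthetic slices below are placeholders for the turbulence-profile-driven kernels and renderer metadata of the actual method.

```python
import numpy as np

def fft_blur(img, sigma):
    """Blur a 2-D slice with a Gaussian PSF applied in the Fourier domain."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    H = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))   # FT of a Gaussian of std sigma (pixels)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

rng = np.random.default_rng(7)
ny = nx = 128
# Two synthetic depth slices (far background and a near object) with an alpha mask for the near slice
far = rng.random((ny, nx))
near = np.zeros((ny, nx)); near[40:90, 40:90] = 1.0
alpha_near = (near > 0).astype(float)

# Per-slice blur strengths, e.g. derived from a turbulence profile along each path length
sigma_far, sigma_near = 3.0, 1.0

far_b = fft_blur(far, sigma_far)
near_b = fft_blur(near, sigma_near)
alpha_b = np.clip(fft_blur(alpha_near, sigma_near), 0.0, 1.0)

# Back-to-front compositing of the independently blurred slices
out = far_b * (1.0 - alpha_b) + near_b * alpha_b
print("composited image stats:", out.min(), out.max(), out.mean())
```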

  4. Scoring predictive models using a reduced representation of proteins: model and energy definition

    Corazza Alessandra

    2007-03-01

    Background: Reduced representations of proteins have been playing a key role in the study of protein folding. Many such models are available, with different levels of representation detail. Although the usefulness of many such models for structural bioinformatics applications has been demonstrated in recent years, there are few intermediate-resolution models endowed with an energy model capable, for instance, of detecting native or native-like structures among decoy sets. The aim of the present work is to provide a discrete empirical potential for a reduced protein model termed here PC2CA, because it employs a PseudoCovalent structure with only 2 Centers of interaction per Amino acid, suitable for protein model quality assessment. Results: All protein structures in the set top500H have been converted to reduced form. The distributions of pseudobonds, pseudoangles, pseudodihedrals and distances between centers of interaction have been converted into potentials of mean force. A suitable reference distribution has been defined for non-bonded interactions which takes into account excluded-volume effects and the finite size of proteins. The correlation between adjacent main-chain pseudodihedrals has been converted into an additional energetic term which is able to account for cooperative effects in secondary structure elements. Local energy surface exploration is performed in order to increase the robustness of the energy function. Conclusion: The model and the energy definition proposed have been tested on all the multiple-decoy sets in the Decoys'R'us database. The energetic model is able to recognize, for almost all sets, native-like structures (RMSD less than 2.0 Å). These results, and those obtained in the blind CASP7 quality assessment experiment, suggest that the model compares well with scoring potentials of finer granularity and could be useful for fast exploration of conformational space. Parameters are available at the url: http://www.dstb.uniud.it/~ffogolari/download/.
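
    The conversion of observed geometric distributions into potentials of mean force follows the usual Boltzmann-inversion relation E(x) = -kT ln[p_obs(x)/p_ref(x)]. The sketch below applies it to a synthetic pseudo-bond-length histogram with a flat reference; the actual PC2CA statistics and reference distributions are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
kT = 0.6  # roughly kcal/mol near 300 K

# Synthetic "observed" pseudo-bond lengths (Angstrom) and a flat reference distribution
observed = rng.normal(3.8, 0.15, size=20000)
bins = np.linspace(3.0, 4.6, 33)

p_obs, _ = np.histogram(observed, bins=bins, density=True)
p_ref = np.full_like(p_obs, 1.0 / (bins[-1] - bins[0]))   # uniform reference on the same range

eps = 1e-12                        # avoid log(0) in empty bins
pmf = -kT * np.log((p_obs + eps) / (p_ref + eps))
pmf -= pmf.min()                   # shift so the minimum of the potential is zero

centers = 0.5 * (bins[:-1] + bins[1:])
for x, e in zip(centers[::4], pmf[::4]):
    print(f"r = {x:.2f} A   PMF = {e:5.2f} kcal/mol")
```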

  5. Reduced model-based decision-making in schizophrenia.

    Culbreth, Adam J; Westbrook, Andrew; Daw, Nathaniel D; Botvinick, Matthew; Barch, Deanna M

    2016-08-01

    Individuals with schizophrenia have a diminished ability to use reward history to adaptively guide behavior. However, tasks traditionally used to assess such deficits often rely on multiple cognitive and neural processes, leaving etiology unresolved. In the current study, we adopted recent computational formalisms of reinforcement learning to distinguish between model-based and model-free decision-making in hopes of specifying mechanisms associated with reinforcement-learning dysfunction in schizophrenia. Under this framework, decision-making is model-free to the extent that it relies solely on prior reward history, and model-based if it relies on prospective information such as motivational state, future consequences, and the likelihood of obtaining various outcomes. Model-based and model-free decision-making was assessed in 33 schizophrenia patients and 30 controls using a 2-stage 2-alternative forced choice task previously demonstrated to discern individual differences in reliance on the 2 forms of reinforcement-learning. We show that, compared with controls, schizophrenia patients demonstrate decreased reliance on model-based decision-making. Further, parameter estimates of model-based behavior correlate positively with IQ and working memory measures, suggesting that model-based deficits seen in schizophrenia may be partially explained by higher-order cognitive deficits. These findings demonstrate specific reinforcement-learning and decision-making deficits and thereby provide valuable insights for understanding disordered behavior in schizophrenia. (PsycINFO Database Record PMID:27175984

  6. Numerical simulations of a reduced model for blood coagulation

    Pavlova, Jevgenija; Fasano, Antonio; Sequeira, Adélia

    2016-04-01

    In this work, the three-dimensional numerical resolution of a complex mathematical model for the blood coagulation process is presented. The model was illustrated in Fasano et al. (Clin Hemorheol Microcirc 51:1-14, 2012), Pavlova et al. (Theor Biol 380:367-379, 2015). It incorporates the action of the biochemical and cellular components of blood as well as the effects of the flow. The model is characterized by a reduction in the biochemical network and considers the impact of the blood slip at the vessel wall. Numerical results showing the capacity of the model to predict different perturbations in the hemostatic system are discussed.

  7. Damage Detection in Flexible Plates through Reduced-Order Modeling and Hybrid Particle-Kalman Filtering.

    Capellari, Giovanni; Azam, Saeed Eftekhar; Mariani, Stefano

    2015-01-01

    Health monitoring of lightweight structures, like thin flexible plates, is of interest in several engineering fields. In this paper, a recursive Bayesian procedure is proposed to monitor the health of such structures through data collected by a network of optimally placed inertial sensors. As a main drawback of standard monitoring procedures is linked to the computational costs, two remedies are jointly considered: first, an order-reduction of the numerical model used to track the structural dynamics, enforced with proper orthogonal decomposition; and, second, an improved particle filter, which features an extended Kalman updating of each evolving particle before the resampling stage. The former remedy can reduce the number of effective degrees-of-freedom of the structural model to a few only (depending on the excitation), whereas the latter one allows to track the evolution of damage and to locate it thanks to an intricate formulation. To assess the effectiveness of the proposed procedure, the case of a plate subject to bending is investigated; it is shown that, when the procedure is appropriately fed by measurements, damage is efficiently and accurately estimated. PMID:26703615
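
    The order-reduction step mentioned above, proper orthogonal decomposition, can be sketched as a thin SVD of a snapshot matrix with the retained modes chosen from the singular-value energy content. The synthetic snapshots below stand in for plate displacement histories; the particle/Kalman filtering stage is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
n_dof, n_snap = 500, 200

# Synthetic snapshot matrix: a few smooth spatial modes with random time histories plus noise
x = np.linspace(0.0, 1.0, n_dof)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(3)], axis=1)      # (n_dof, 3)
snapshots = modes @ rng.standard_normal((3, n_snap)) + 1e-3 * rng.standard_normal((n_dof, n_snap))

# POD basis from the thin SVD of the mean-removed snapshot matrix
mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999) + 1)      # smallest basis capturing 99.9% of the energy
Phi = U[:, :r]                                   # reduced basis (n_dof x r)

# Reduced coordinates of one snapshot, and its reconstruction error
q = Phi.T @ (snapshots[:, [0]] - mean)
recon = mean + Phi @ q
err = np.linalg.norm(snapshots[:, [0]] - recon) / np.linalg.norm(snapshots[:, [0]])
print(f"retained POD modes: {r},  relative reconstruction error: {err:.2e}")
```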

  8. Damage Detection in Flexible Plates through Reduced-Order Modeling and Hybrid Particle-Kalman Filtering

    Giovanni Capellari

    2015-12-01

    Health monitoring of lightweight structures, like thin flexible plates, is of interest in several engineering fields. In this paper, a recursive Bayesian procedure is proposed to monitor the health of such structures through data collected by a network of optimally placed inertial sensors. As a main drawback of standard monitoring procedures is linked to the computational costs, two remedies are jointly considered: first, an order-reduction of the numerical model used to track the structural dynamics, enforced with proper orthogonal decomposition; and, second, an improved particle filter, which features an extended Kalman updating of each evolving particle before the resampling stage. The former remedy can reduce the number of effective degrees-of-freedom of the structural model to a few only (depending on the excitation), whereas the latter one allows to track the evolution of damage and to locate it thanks to an intricate formulation. To assess the effectiveness of the proposed procedure, the case of a plate subject to bending is investigated; it is shown that, when the procedure is appropriately fed by measurements, damage is efficiently and accurately estimated.

  9. A reduced-dimensional model for near-wall transport in cardiovascular flows.

    Hansen, Kirk B; Shadden, Shawn C

    2016-06-01

    Near-wall mass transport plays an important role in many cardiovascular processes, including the initiation of atherosclerosis, endothelial cell vasoregulation, and thrombogenesis. These problems are characterized by large Péclet and Schmidt numbers as well as a wide range of spatial and temporal scales, all of which impose computational difficulties. In this work, we develop an analytical relationship between the flow field and near-wall mass transport for high-Schmidt-number flows. This allows for the development of a wall-shear-stress-driven transport equation that lies on a codimension-one vessel-wall surface, significantly reducing computational cost in solving the transport problem. Separate versions of this equation are developed for the reaction-rate-limited and transport-limited cases, and numerical results in an idealized abdominal aortic aneurysm are compared to those obtained by solving the full transport equations over the entire domain. The reaction-rate-limited model matches the expected results well. The transport-limited model is accurate in the developed flow regions, but overpredicts wall flux at entry regions and reattachment points in the flow. PMID:26298313

  10. Accurate determination of the superfluid-insulator transition in the one-dimensional Bose-Hubbard model

    Zakrzewski, Jakub; Delande, Dominique

    2007-01-01

    The quantum phase transition point between the insulator and the superfluid phase at unit filling factor of the infinite one-dimensional Bose-Hubbard model is numerically computed with a high accuracy, better than current state of the art calculations. The method uses the infinite system version of the time evolving block decimation algorithm, here tested in a challenging case.

  11. Effects of Modeling and Desensitation in Reducing Dentist Phobia

    Shaw, David W.; Thoresen, Carl E.

    1974-01-01

    Many persons avoid dentists and dental work. The present study explored the effects of systematic desensitization and social-modeling treatments with placebo and assessment control groups. Modeling was more effective than desensitization as shown by the number of subjects who went to a dentist. (Author)

  12. Accurate small and wide angle x-ray scattering profiles from atomic models of proteins and nucleic acids

    Nguyen, Hung T.; Pabit, Suzette A.; Meisburger, Steve P.; Pollack, Lois; Case, David A.

    2014-12-01

    A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb+ and Sr2+) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein-Zernike equations, with results from the Kovalenko-Hirata closure being closest to experiment for the cases studied here.
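
    For orientation only, the sketch below computes an orientationally averaged scattering intensity from an atomic model with the elementary Debye formula and unit form factors. This is the textbook in-vacuo starting point, not the 3D-RISM solvent-aware calculation described in the record.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical toy "molecule": 60 point scatterers with q-independent unit form factors
coords = rng.normal(scale=10.0, size=(60, 3))     # Angstrom
f = np.ones(len(coords))

# Pairwise distances between scatterers
diff = coords[:, None, :] - coords[None, :, :]
r = np.linalg.norm(diff, axis=-1)

q = np.linspace(0.01, 0.5, 50)                     # momentum transfer (1/Angstrom)
I = np.empty_like(q)
for i, qi in enumerate(q):
    qr = qi * r
    sinc = np.ones_like(qr)
    mask = qr > 0
    sinc[mask] = np.sin(qr[mask]) / qr[mask]       # Debye kernel sin(qr)/(qr)
    I[i] = np.sum(np.outer(f, f) * sinc)

print("q (1/A)    I(q)/I(0)")
for qi, Ii in zip(q[::10], I[::10] / I[0]):
    print(f"{qi:6.3f}   {Ii:8.4f}")
```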

  13. Traveled Distance Is a Sensitive and Accurate Marker of Motor Dysfunction in a Mouse Model of Multiple Sclerosis

    Takemiya, Takako; Takeuchi, Chisen

    2013-01-01

    Multiple sclerosis (MS) is a common central nervous system disease associated with progressive physical impairment. To study the mechanisms of the disease, we used experimental autoimmune encephalomyelitis (EAE), an animal model of MS. EAE is induced by myelin oligodendrocyte glycoprotein 35–55 peptide, and the severity of paralysis in the disease is generally measured using the EAE score. Here, we compared EAE scores and traveled distance using the open-field test for an assessment of EAE pro...

  14. Dixon sequence with superimposed model-based bone compartment provides highly accurate PET/MR attenuation correction of the brain

    Koesters, Thomas; Friedman, Kent P.; Fenchel, Matthias; Zhan, Yiqiang; Hermosillo, Gerardo; Babb, James; Jelescu, Ileana O.; Faul, David; Boada, Fernando E.; Shepherd, Timothy M.

    2016-01-01

    Simultaneous PET/MR of the brain is a promising new technology for characterizing patients with suspected cognitive impairment or epilepsy. Unlike CT though, MR signal intensities do not provide a direct correlate to PET photon attenuation correction (AC) and inaccurate radiotracer standard uptake value (SUV) estimation could limit future PET/MR clinical applications. We tested a novel AC method that supplements standard Dixon-based tissue segmentation with a superimposed model-based bone com...

  15. ACCURATE 3D TEXTURED MODELS OF VESSELS FOR THE IMPROVEMENT OF THE EDUCATIONAL TOOLS OF A MUSEUM

    S. Soile; Adam, K.; C. Ioannidis; A. Georgopoulos

    2013-01-01

    Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. To that end, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museu...

  16. ADMET evaluation in drug discovery: 15. Accurate prediction of rat oral acute toxicity using relevance vector machine and consensus modeling

    Lei, Tailong; Li, Youyong; Song, Yunlong; Li, Dan; Sun, Huiyong; Hou, Tingjun

    2016-01-01

    Background Determination of acute toxicity, expressed as median lethal dose (LD50), is one of the most important steps in drug discovery pipeline. Because in vivo assays for oral acute toxicity in mammals are time-consuming and costly, there is thus an urgent need to develop in silico prediction models of oral acute toxicity. Results In this study, based on a comprehensive data set containing 7314 diverse chemicals with rat oral LD50 values, relevance vector machine (RVM) technique was employ...

  17. Rotating Arc Jet Test Model: Time-Accurate Trajectory Heat Flux Replication in a Ground Test Environment

    Laub, Bernard; Grinstead, Jay; Dyakonov, Artem; Venkatapathy, Ethiraj

    2011-01-01

    Though arc jet testing has been the proven method employed for development testing and certification of TPS and TPS instrumentation, the operational aspects of arc jets limit testing to selected, but constant, conditions. Flight, on the other hand, produces time-varying entry conditions in which the heat flux increases, peaks, and recedes as a vehicle descends through an atmosphere. As a result, we are unable to "test as we fly." Attempts to replicate the time-dependent aerothermal environment of atmospheric entry by varying the arc jet facility operating conditions during a test have proven to be difficult, expensive, and only partially successful. A promising alternative is to rotate the test model exposed to a constant-condition arc jet flow to yield a time-varying test condition at a point on a test article (Fig. 1). The model shape and rotation rate can be engineered so that the heat flux at a point on the model replicates the predicted profile for a particular point on a flight vehicle. This simple concept will enable, for example, calibration of the TPS sensors on the Mars Science Laboratory (MSL) aeroshell for anticipated flight environments.

  18. Can AERONET data be used to accurately model the monochromatic beam and circumsolar irradiances under cloud-free conditions in desert environment?

    Eissa, Y.; Blanc, P.; Wald, L.; Ghedira, H.

    2015-12-01

    Routine measurements of the beam irradiance at normal incidence include the irradiance originating from within the extent of the solar disc only (DNIS), whose angular extent is 0.266° ± 1.7 %, and from a larger circumsolar region, called the circumsolar normal irradiance (CSNI). This study investigates whether the spectral aerosol optical properties of the AERONET stations are sufficient for an accurate modelling of the monochromatic DNIS and CSNI under cloud-free conditions in a desert environment. The data from an AERONET station in Abu Dhabi, United Arab Emirates, and the collocated Sun and Aureole Measurement instrument, which offers reference measurements of the monochromatic profile of solar radiance, were exploited. Using the AERONET data, both the radiative transfer models libRadtran and SMARTS offer an accurate estimate of the monochromatic DNIS, with a relative root mean square error (RMSE) of 6 % and a coefficient of determination greater than 0.96. The observed relative bias obtained with libRadtran is +2 %, while that obtained with SMARTS is -1 %. After testing two configurations in SMARTS and three in libRadtran for modelling the monochromatic CSNI, libRadtran exhibits the most accurate results when the AERONET aerosol phase function is represented as a two-term Henyey-Greenstein phase function. In this case libRadtran exhibited a relative RMSE and a bias of 27 % and -24 %, respectively, and a coefficient of determination of 0.882. Therefore, AERONET data may very well be used to model the monochromatic DNIS and the monochromatic CSNI. The results are promising and pave the way towards reporting the contribution of the broadband circumsolar irradiance to standard measurements of the beam irradiance.
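
    Representing an aerosol phase function by a two-term Henyey-Greenstein function, as in the best-performing libRadtran configuration above, amounts to a small bounded least-squares fit of a forward- and a backward-scattering lobe. The tabulated phase function below is synthetic, not an AERONET retrieval.

```python
import numpy as np
from scipy.optimize import least_squares

def hg(mu, g):
    """Henyey-Greenstein phase function vs. cosine of scattering angle (normalised over 4*pi)."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * mu) ** 1.5)

def two_term_hg(mu, f, g1, g2):
    return f * hg(mu, g1) + (1.0 - f) * hg(mu, g2)

# Synthetic "tabulated" aerosol phase function standing in for a retrieved one
angles = np.linspace(0.0, np.pi, 181)
mu = np.cos(angles)
p_tab = two_term_hg(mu, 0.95, 0.72, -0.35) * (1.0 + 0.02 * np.random.default_rng(5).standard_normal(mu.size))

def residual(x):
    f, g1, g2 = x
    # fit in log space so the forward peak and the backscatter lobe both matter
    return np.log(two_term_hg(mu, f, g1, g2)) - np.log(p_tab)

fit = least_squares(residual, x0=[0.9, 0.6, -0.2],
                    bounds=([0.0, 0.0, -0.99], [1.0, 0.99, 0.0]))
f, g1, g2 = fit.x
print(f"fitted weight/asymmetries: f={f:.3f}, g1={g1:.3f}, g2={g2:.3f}")
```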

  19. Even faster and even more accurate first-passage time densities and distributions for the Wiener diffusion model

    Gondan, Matthias; Blurton, Steven Paul; Kesselmeier, Miriam

    2014-01-01

    The Wiener diffusion model with two absorbing barriers is often used to describe response times and error probabilities in two-choice decisions. Different representations exist for the density and cumulative distribution of first-passage times, all including infinite series, but with different...... convergence for small and large times. We present a method that controls the approximation error of the small-time representation that occurs due to finite truncation of these series. Our approach improves and simplifies related work by Navarro and Fuss (2009) and Blurton et al. (2012, both in the Journal...... of Mathematical Psychology)....
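
    For reference, the small-time representation of the first-passage density at the lower barrier of a Wiener diffusion (drift v, barrier separation a, relative start point w = z/a) is typically evaluated as a finitely truncated sum. The sketch below fixes the number of series terms instead of reproducing the adaptive truncation bound developed in the paper, and the parameter values are arbitrary.

```python
import numpy as np

def wfpt_lower(t, v, a, w, K=10):
    """Small-time series for the Wiener first-passage density at the lower barrier.

    t : decision time (scalar), v : drift, a : barrier separation, w : relative start (z/a).
    K : number of series terms on each side (fixed here; an adaptive choice based on an
        error bound is what the cited work improves).
    """
    tau = t / a**2                                   # standardised time
    k = np.arange(-K, K + 1)
    terms = (w + 2.0 * k) * np.exp(-((w + 2.0 * k) ** 2) / (2.0 * tau))
    f_std = terms.sum() / np.sqrt(2.0 * np.pi * tau**3)
    return float(np.exp(-v * a * w - 0.5 * v**2 * t) * f_std / a**2)

# Hypothetical diffusion-model parameters
v, a, w = 1.0, 1.5, 0.5
for t in (0.2, 0.5, 1.0, 2.0):
    print(f"t={t:.1f} s   f_lower(t) = {wfpt_lower(t, v, a, w):.5f}")
```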

  20. Reducing uncertainty in high-resolution sea ice models.

    Peterson, Kara J.; Bochev, Pavel Blagoveston

    2013-07-01

    Arctic sea ice is an important component of the global climate system, reflecting a significant amount of solar radiation, insulating the ocean from the atmosphere and influencing ocean circulation by modifying the salinity of the upper ocean. The thickness and extent of Arctic sea ice have shown a significant decline in recent decades with implications for global climate as well as regional geopolitics. Increasing interest in exploration as well as climate feedback effects make predictive mathematical modeling of sea ice a task of tremendous practical import. Satellite data obtained over the last few decades have provided a wealth of information on sea ice motion and deformation. The data clearly show that ice deformation is focused along narrow linear features and this type of deformation is not well-represented in existing models. To improve sea ice dynamics we have incorporated an anisotropic rheology into the Los Alamos National Laboratory global sea ice model, CICE. Sensitivity analyses were performed using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) to determine the impact of material parameters on sea ice response functions. Two material strength parameters that exhibited the most significant impact on responses were further analyzed to evaluate their influence on quantitative comparisons between model output and data. The sensitivity analysis along with ten year model runs indicate that while the anisotropic rheology provides some benefit in velocity predictions, additional improvements are required to make this material model a viable alternative for global sea ice simulations.

  1. A new expression of Ns versus Ef to an accurate control charge model for AlGaAs/GaAs

    Bouneb, I.; Kerrour, F.

    2016-03-01

    Semiconductor components have become the privileged support of information and communication, particularly owing to the development of the internet. Today, MOS transistors on silicon largely dominate the semiconductor market; however, reducing the transistor gate length alone is not enough to enhance performance and keep pace with Moore's law, particularly for broadband telecommunication systems, where faster components are required. For this reason, alternative structures such as IV-IV or III-V heterostructures [1] have been proposed. The most effective components in this area are High Electron Mobility Transistors (HEMTs) on III-V substrates. This work contributes to the development of a numerical model based on physical and numerical modelling of the potential at the AlGaAs/GaAs heterostructure interface. We have developed a calculation using projective methods that allows the integration of the Hamiltonian using Green functions in the Schrödinger equation, for a rigorous self-consistent resolution with the Poisson equation. A simple analytical approach for charge control in the quantum well region of an AlGaAs/GaAs HEMT structure is presented. A charge-control equation accounting for a variable average distance of the 2-DEG from the interface is introduced. Our approach, which aims to obtain the ns-Vg characteristics, is mainly based on a new linear expression of the Fermi-level variation with the two-dimensional electron gas density in the high-electron-mobility structure, on the notion of effective doping, and on a new expression of ΔEc.
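
    With a linear approximation EF(ns) ≈ EF0 + a·ns, the familiar HEMT charge-control balance ns = [ε/(q(d+Δd))]·(Vg − Voff − EF/q) has a closed-form solution for ns(Vg). The sketch below uses illustrative AlGaAs/GaAs constants and a free-electron estimate of the Fermi-level slope, not the coefficients fitted in the paper.

```python
import numpy as np

# Illustrative constants (not the paper's fitted values)
eps = 12.2 * 8.854e-12      # AlGaAs permittivity (F/m)
q = 1.602e-19               # elementary charge (C)
d_eff = 38e-9               # barrier thickness + effective 2DEG offset, d + dd (m)
V_off = -0.8                # threshold / off voltage (V)
EF0 = 0.0                   # Fermi-level offset at ns = 0 (V)
a = 3.5e-18                 # linear Fermi-level coefficient dEF/dns (V m^2)

C = eps / (q * d_eff)       # capacitive charge-control factor (1/(V m^2))

def ns(Vg):
    """Closed-form 2DEG density from the charge-control balance with linear EF(ns)."""
    n = C * (Vg - V_off - EF0) / (1.0 + C * a)
    return np.maximum(n, 0.0)          # clamp below threshold

for Vg in (-0.8, -0.4, 0.0, 0.4):
    print(f"Vg = {Vg:+.1f} V   ns = {ns(Vg):.2e} m^-2  ({ns(Vg)*1e-4:.2e} cm^-2)")
```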

  2. Accurate metamodels of device parameters and their applications in performance modeling and optimization of analog integrated circuits

    Techniques for constructing metamodels of device parameters at BSIM3v3 level accuracy are presented to improve knowledge-based circuit sizing optimization. Based on the analysis of the prediction error of analytical performance expressions, operating point driven (OPD) metamodels of MOSFETs are introduced to capture the circuit's characteristics precisely. In the algorithm of metamodel construction, radial basis functions are adopted to interpolate the scattered multivariate data obtained from a well tailored data sampling scheme designed for MOSFETs. The OPD metamodels can be used to automatically bias the circuit at a specific DC operating point. Analytical-based performance expressions composed by the OPD metamodels show obvious improvement for most small-signal performances compared with simulation-based models. Both operating-point variables and transistor dimensions can be optimized in our nesting-loop optimization formulation to maximize design flexibility. The method is successfully applied to a low-voltage low-power amplifier. (semiconductor integrated circuits)
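
    The core of the metamodel construction described above, radial-basis-function interpolation of scattered device-parameter samples, is available off the shelf. The sketch below fits a toy two-variable transconductance surface; the surrogate function, sampling ranges and noise level are invented and are not tied to any particular BSIM3v3 extraction or operating-point-driven scheme.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)

# Scattered samples of a toy device characteristic gm(Vgs, Vds), standing in for
# simulator-generated MOSFET samples.
X = rng.uniform([0.4, 0.1], [1.2, 1.8], size=(200, 2))        # (Vgs, Vds) sample points
def gm_true(x):
    vgs, vds = x[:, 0], x[:, 1]
    return 1e-3 * (vgs - 0.35) ** 2 * np.tanh(3.0 * vds)       # A/V, illustrative only
y = gm_true(X) * (1.0 + 0.01 * rng.standard_normal(len(X)))    # 1% "simulation noise"

metamodel = RBFInterpolator(X, y, kernel="thin_plate_spline", smoothing=1e-12)

# Evaluate the metamodel on a few unseen operating points
X_test = np.array([[0.6, 0.5], [0.9, 1.0], [1.1, 1.5]])
pred, ref = metamodel(X_test), gm_true(X_test)
for p_val, r_val, x in zip(pred, ref, X_test):
    print(f"Vgs={x[0]:.1f} V Vds={x[1]:.1f} V   gm_meta={p_val:.3e}  gm_ref={r_val:.3e}  rel.err={abs(p_val-r_val)/r_val:.1%}")
```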

  3. Testing models of basin inversion in the eastern North Sea using exceptionally accurate thermal and maturity data

    Nielsen, S.B.; Clausen, O.R.; Gallagher, Kerry; Balling, N.

    One difficulty of testing models of basin inversion against data is that erosion has erased the stratigraphic record along the inversion ridge. The depth of erosion therefore cannot be determined. However, thermal maturity data may contain a signal of deeper burial in the past. Here we consider the...... background heat flow, matrix thermal conductivity of sand, shale and chalk, and depositional and erosional episodes during the Cenozoic hiatus. The results show that the data are consistent with none or very limited deposition and erosion during the Cenozoic hiatus after the late Cretaceous compressional...... Cretaceous. A thick (c. 1600 m) late Cretaceous and Danian chalk sequence has recorded the associated marginal trough formation. A hiatus of duration c. 60 Myr follows until the deposition of thin Quaternary sediments. The question we address here is if the thermal data from the wells contain information...

  4. The CPA Equation of State and an Activity Coefficient Model for Accurate Molar Enthalpy Calculations of Mixtures with Carbon Dioxide and Water/Brine

    Myint, P. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hao, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Firoozabadi, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-03-27

    Thermodynamic property calculations of mixtures containing carbon dioxide (CO2) and water, including brines, are essential in theoretical models of many natural and industrial processes. The properties of greatest practical interest are density, solubility, and enthalpy. Many models for density and solubility calculations have been presented in the literature, but there exists only one study, by Spycher and Pruess, that has compared theoretical molar enthalpy predictions with experimental data [1]. In this report, we recommend two different models for enthalpy calculations: the CPA equation of state by Li and Firoozabadi [2], and the CO2 activity coefficient model by Duan and Sun [3]. We show that the CPA equation of state, which has been demonstrated to provide good agreement with density and solubility data, also accurately calculates molar enthalpies of pure CO2, pure water, and both CO2-rich and aqueous (H2O-rich) mixtures of the two species. It is applicable to a wider range of conditions than the Spycher and Pruess model. In aqueous sodium chloride (NaCl) mixtures, we show that Duan and Sun’s model yields accurate results for the partial molar enthalpy of CO2. It can be combined with another model for the brine enthalpy to calculate the molar enthalpy of H2O-CO2-NaCl mixtures. We conclude by explaining how the CPA equation of state may be modified to further improve agreement with experiments. This generalized CPA is the basis of our future work on this topic.

  5. Charging and discharging tests for obtaining an accurate dynamic electro-thermal model of high power lithium-ion pack system for hybrid and EV applications

    Mihet-Popa, Lucian; Camacho, Oscar Mauricio Forero; Nørgård, Per Bromand

    2013-01-01

    This paper presents a battery test platform including two Li-ion batteries designed for hybrid and EV applications, and charging/discharging tests under different operating conditions carried out for developing an accurate dynamic electro-thermal model of a high power Li-ion battery pack system. The...... aim of the tests has been to study the impact of the battery degradation and to find out the dynamic characteristics of the cells including nonlinear open circuit voltage, series resistance and parallel transient circuit at different charge/discharge currents and cell temperature. An equivalent...

  6. Highly accurate stability-preserving optimization of the Zener viscoelastic model, with application to wave propagation in the presence of strong attenuation

    Blanc, Émilie; Komatitsch, Dimitri; Chaljub, Emmanuel; Lombard, Bruno; Xie, Zhinan

    2016-04-01

    This paper concerns the numerical modelling of time-domain mechanical waves in viscoelastic media based on a generalized Zener model. To do so, relaxation mechanisms are classically introduced in the literature, resulting in a set of so-called memory variables and thus in large computational arrays that need to be stored. A challenge is thus to accurately mimic a given attenuation law using a minimal set of relaxation mechanisms. For this purpose, we replace the classical linear approach of Emmerich & Korn with a nonlinear optimization approach with positivity constraints. We show that this technique is more accurate than the linear approach. Moreover, it ensures that physically meaningful relaxation times are obtained that always honour the constraint of decay of total energy with time. As a result, these relaxation times can always be used in a stable way in a modelling algorithm, even in the case of very strong attenuation, for which the classical linear approach may provide some negative and thus unusable coefficients.
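
    In the spirit of the positivity-constrained nonlinear optimization described above, the sketch below fits the coefficients and relaxation times of a generalized Zener approximation to a constant target quality factor over a frequency band, with positivity enforced through bounds. The low-loss approximation Q⁻¹(ω) ≈ Σ_l y_l ωτ_l/(1 + ω²τ_l²) used as the objective is a standard simplification and not necessarily the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares

Q_target = 20.0                                  # constant quality factor to mimic
f_band = np.logspace(-1, 1, 60)                  # 0.1-10 Hz optimisation band
omega = 2.0 * np.pi * f_band
L = 3                                            # number of relaxation mechanisms

def q_inv(params, omega):
    y = params[:L]                               # anelastic coefficients
    tau = params[L:]                             # relaxation times (s)
    wt = omega[:, None] * tau[None, :]
    return np.sum(y[None, :] * wt / (1.0 + wt**2), axis=1)

def residual(params):
    return q_inv(params, omega) - 1.0 / Q_target

# Initial guess: log-spaced relaxation times across the band, small equal coefficients
tau0 = 1.0 / (2.0 * np.pi * np.logspace(-1, 1, L))
x0 = np.concatenate([np.full(L, 2.0 / (Q_target * L)), tau0])
lower = np.zeros(2 * L)                          # positivity constraints on y_l and tau_l
fit = least_squares(residual, x0, bounds=(lower, np.inf))

y_fit, tau_fit = fit.x[:L], fit.x[L:]
err = np.max(np.abs(q_inv(fit.x, omega) * Q_target - 1.0))
print("coefficients :", np.round(y_fit, 4))
print("relax. times :", np.round(tau_fit, 4), "s")
print(f"max relative misfit of 1/Q over the band: {err:.2%}")
```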

  7. A Hybrid Mode Model of the Blazhko Effect, Shown to Accurately Fit Kepler Data for RR Lyr

    Bryant, Paul H

    2013-01-01

    A new hypothesis is presented for the Blazhko effect in RRab stars. A nonlinear model is developed for the first overtone mode which, if excited to large amplitude, is found to drop strongly in frequency while becoming highly nonsinusoidal. Its frequency is shown to drop sufficiently to become equal to that of the fundamental mode. It is proposed that this may lead to phase-locking between the fundamental and the overtone, forming a hybrid mode at the fundamental frequency. The fundamental mode, excited less strongly than the overtone, remains nearly sinusoidal and constant in frequency. By varying the fundamental's peak amplitude and its phase relative to the overtone, the hybrid mode can produce a variety of forms that match those observed in various parts of the Blazhko cycle. The presence of the fundamental also serves to stabilize the period of the hybrid, which is found in real Blazhko data to be extremely stable. It is proposed that the variations in amplitude and phase might result from a nonlinear intera...

  8. The Type IIP Supernova 2012aw in M95: hydrodynamical modelling of the photospheric phase from accurate spectrophotometric monitoring

    Dall'Ora, M; Pumo, M L; Zampieri, L; Tomasella, L; Pignata, G; Bayless, A J; Pritchard, T A; Taubenberger, S; Kotak, R; Inserra, C; Della Valle, M; Cappellaro, E; Benetti, S; Benitez, S; Bufano, F; Elias-Rosa, N; Fraser, M; Haislip, J B; Harutyunyan, A; Howell, D A; Hsiao, E Y; Iijima, T; Kankare, E; Kuin, P; Maund, J R; Morales-Garoffolo, A; Morrell, N; Munari, U; Ochner, P; Pastorello, A; Patat, F; Phillips, M M; Reichart, D; Roming, P W A; Siviero, A; Smartt, S J; Sollerman, J; Taddia, F; Valenti, S; Wright, D

    2014-01-01

    We present an extensive optical and near-infrared photometric and spectroscopic campaign of the type IIP supernova SN 2012aw. The dataset densely covers the evolution of SN 2012aw shortly after the explosion up to the end of the photospheric phase, with two additional photometric observations collected during the nebular phase, to fit the radioactive tail and estimate the $^{56}$Ni mass. Also included in our analysis is the already published Swift UV data, therefore providing a complete view of the ultraviolet-optical-infrared evolution of the photospheric phase. On the basis of our dataset, we estimate all the relevant physical parameters of SN 2012aw with our radiation-hydrodynamics code: envelope mass $M_{env} \sim 20 M_\odot$, progenitor radius $R \sim 3 \times 10^{13}$ cm ($\sim 430 R_\odot$), explosion energy $E \sim 1.5$ foe, and initial $^{56}$Ni mass $\sim 0.06$ $M_\odot$. These mass and radius values are reasonably well supported by independent evolutionary models of the progenitor, and ma...

  9. The type IIP supernova 2012aw in M95: Hydrodynamical modeling of the photospheric phase from accurate spectrophotometric monitoring

    Dall'Ora, M.; Botticella, M. T.; Della Valle, M. [INAF, Osservatorio Astronomico di Capodimonte, Napoli (Italy); Pumo, M. L.; Zampieri, L.; Tomasella, L.; Cappellaro, E.; Benetti, S. [INAF, Osservatorio Astronomico di Padova, I-35122 Padova (Italy); Pignata, G.; Bufano, F. [Departamento de Ciencias Fisicas, Universidad Andres Bello, Avda. Republica 252, Santiago (Chile); Bayless, A. J. [Southwest Research Institute, Department of Space Science, 6220 Culebra Road, San Antonio, TX 78238 (United States); Pritchard, T. A. [Department of Astronomy and Astrophysics, Penn State University, 525 Davey Lab, University Park, PA 16802 (United States); Taubenberger, S.; Benitez, S. [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85741 Garching (Germany); Kotak, R.; Inserra, C.; Fraser, M. [Astrophysics Research Centre, School of Mathematics and Physics, Queen's University Belfast, Belfast, BT7 1NN (United Kingdom); Elias-Rosa, N. [Institut de Ciències de l'Espai (CSIC-IEEC) Campus UAB, Torre C5, 2a planta, E-08193 Bellaterra, Barcelona (Spain); Haislip, J. B. [Department of Physics and Astronomy, University of North Carolina at Chapel Hill, 120 E. Cameron Ave., Chapel Hill, NC 27599 (United States); Harutyunyan, A. [Fundación Galileo Galilei - Telescopio Nazionale Galileo, Rambla José Ana Fernández Pérez 7, E-38712 Breña Baja, TF - Spain (Spain); and others

    2014-06-01

    We present an extensive optical and near-infrared photometric and spectroscopic campaign of the Type IIP supernova SN 2012aw. The data set densely covers the evolution of SN 2012aw shortly after the explosion through the end of the photospheric phase, with two additional photometric observations collected during the nebular phase, to fit the radioactive tail and estimate the {sup 56}Ni mass. Also included in our analysis is the previously published Swift UV data, therefore providing a complete view of the ultraviolet-optical-infrared evolution of the photospheric phase. On the basis of our data set, we estimate all the relevant physical parameters of SN 2012aw with our radiation-hydrodynamics code: envelope mass M {sub env} ∼ 20 M {sub ☉}, progenitor radius R ∼ 3 × 10{sup 13} cm (∼430 R {sub ☉}), explosion energy E ∼ 1.5 foe, and initial {sup 56}Ni mass ∼0.06 M {sub ☉}. These mass and radius values are reasonably well supported by independent evolutionary models of the progenitor, and may suggest a progenitor mass higher than the observational limit of 16.5 ± 1.5 M {sub ☉} of the Type IIP events.

  10. A new approach and model for accurate determination of the dynamic pull-in parameters of microbeams actuated by a step voltage

    Accurate determination of the pull-in voltage and pull-in position is crucial in the design of electrostatically actuated microbeam-based devices. In the past, there have been many works on analytical modeling of the static pull-in of microbeams. However, unlike the static pull-in of microbeams, for which analytical models have been well established, there are few works on analytical modeling of the dynamic pull-in of microbeams actuated by a step voltage. This paper presents two analytical approximate models for calculating the dynamic pull-in voltage and pull-in position of a cantilever beam and a clamped–clamped beam, respectively. The effects of the fringing field are included in the two models. The two models are derived based on the energy balance method. An N-order algebraic equation for the dynamic pull-in position is derived. An approximate solution of the N-order algebraic equation yields the dynamic pull-in position and voltage. The accuracy of the present models is verified by comparing their results with experimental results and the published models available in the literature. (paper)
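
    The lumped parallel-plate actuator below is a simplified stand-in, not the beam models of the paper, and fringing fields are neglected; it shows how the energy balance method yields a dynamic pull-in voltage for a step input of roughly 92% of the static value. All parameter values are hypothetical.

        import numpy as np

        # Energy balance for a step voltage: stored elastic energy equals the work done
        # by the electrostatic force at the turning point, giving pull-in at x = g/2,
        # versus x = g/3 for the static (quasi-static) case.
        eps0 = 8.854e-12
        k, g, A = 1.0, 2.0e-6, 100e-6 * 100e-6          # stiffness (N/m), gap (m), plate area (m^2)

        v_static = np.sqrt(8.0*k*g**3 / (27.0*eps0*A))   # static pull-in voltage
        v_dynamic = np.sqrt(k*g**3 / (4.0*eps0*A))       # dynamic pull-in voltage for a step input
        print(f"static pull-in:  {v_static:.3f} V")
        print(f"dynamic pull-in: {v_dynamic:.3f} V (ratio {v_dynamic/v_static:.3f})")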

  11. More Accurate Prediction of Metastatic Pancreatic Cancer Patients' Survival with Prognostic Model Using Both Host Immunity and Tumor Metabolic Activity.

    Younak Choi

    Full Text Available Neutrophil to lymphocyte ratio (NLR) and standard uptake value (SUV) by 18F-FDG PET represent host immunity and tumor metabolic activity, respectively. We investigated NLR and maximum SUV (SUVmax) as prognostic markers in metastatic pancreatic cancer (MPC) patients who receive palliative chemotherapy. We reviewed 396 MPC patients receiving palliative chemotherapy. NLR was obtained before and after the first cycle of chemotherapy. In 118 patients with PET prior to chemotherapy, SUVmax was collected. Cut-off values were determined by ROC curve. In multivariate analysis of all patients, NLR and change in NLR after the first cycle of chemotherapy (ΔNLR) were independent prognostic factors for overall survival (OS). We scored the risk considering NLR and ΔNLR and identified 4 risk groups with different prognosis (risk score 0 vs 1 vs 2 vs 3: OS 9.7 vs 7.9 vs 5.7 vs 2.6 months, HR 1 vs 1.329 vs 2.137 vs 7.915, respectively; P<0.001). In the PET cohort, NLR and SUVmax were independently prognostic for OS. A prognostication model using both NLR and SUVmax could define 4 risk groups with different OS (risk score 0 vs 1 vs 2 vs 3: OS 11.8 vs 9.8 vs 7.2 vs 4.6 months, HR 1 vs 1.536 vs 2.958 vs 5.336, respectively; P<0.001). NLR and SUVmax, as simple parameters of host immunity and tumor metabolic activity, respectively, are independent prognostic factors for OS in MPC patients undergoing palliative chemotherapy.

  12. Safety-relevant mode confusions-modelling and reducing them

    Mode confusions are a significant safety concern in safety-critical systems, for example in aircraft. A mode confusion occurs when the observed behaviour of a technical system is out of sync with the user's mental model of its behaviour. But the notion is described only informally in the literature. We present a rigorous way of modelling the user and the machine in a shared-control system. This enables us to propose precise definitions of 'mode' and 'mode confusion' for safety-critical systems. We then validate these definitions against the informal notions in the literature. A new classification of mode confusions by cause leads to a number of design recommendations for shared-control systems. These help in avoiding mode confusion problems. Our approach supports the automated detection of remaining mode confusion problems. We apply our approach practically to a wheelchair robot.

  13. The i-V curve characteristics of burner-stabilized premixed flames: detailed and reduced models

    Han, Jie; Casey, Tiernan A; Bisetti, Fabrizio; Im, Hong G; Chen, Jyh-Yuan

    2016-01-01

    The i-V curve describes the current drawn from a flame as a function of the voltage difference applied across the reaction zone. Since combustion diagnostics and flame control strategies based on electric fields depend on the amount of current drawn from flames, there is significant interest in modeling and understanding i-V curves. We implement and apply a detailed model for the simulation of the production and transport of ions and electrons in one dimensional premixed flames. An analytical reduced model is developed based on the detailed one, and analytical expressions are used to gain insight into the characteristics of the i-V curve for various flame configurations. In order for the reduced model to capture the spatial distribution of the electric field accurately, the concept of a dead zone region, where voltage is constant, is introduced, and a suitable closure for the spatial extent of the dead zone is proposed and validated. The results from the reduced modeling framework are found to be in good agre...

  14. An Efficient Reduced-Order Model for the Nonlinear Dynamics of Carbon Nanotubes

    Xu, Tiantian

    2014-08-17

    Because of the inherent nonlinearities involving the behavior of CNTs when excited by electrostatic forces, modeling and simulating their behavior is challenging. The complicated form of the electrostatic force describing the interaction of their cylindrical shape, forming upper electrodes, with lower electrodes poses serious computational challenges. This presents an obstacle against applying and using several nonlinear dynamics tools that are typically used to analyze the behavior of complicated nonlinear systems, such as shooting, continuation, and integrity analysis techniques. This work presents an attempt to resolve this issue. We present an investigation of the nonlinear dynamics of carbon nanotubes when actuated by large electrostatic forces. We study expanding the complicated form of the electrostatic force into a sufficient number of terms of its Taylor series. We plot and compare the expanded form of the electrostatic force to the exact form and find that at least twenty terms are needed to capture accurately the strong nonlinear form of the force over the full range of motion. Then, we utilize this form along with an Euler–Bernoulli beam model to study the static and dynamic behavior of CNTs. The geometric nonlinearity and the nonlinear electrostatic force are considered. An efficient reduced-order model (ROM) based on the Galerkin method is developed and utilized to simulate the static and dynamic responses of the CNTs. We found that the use of the new expanded form of the electrostatic force enables avoiding the cumbersome evaluation of the spatial integrals involving the electrostatic force during the modal projection procedure in the Galerkin method, which needs to be done at every time step. Hence, the new method proves to be much more efficient computationally.
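
    The following sketch illustrates why so many Taylor terms are needed near the full range of motion. For brevity it uses the simpler parallel-plate-like kernel 1/(1 - w)^2 with the gap normalized to one, not the cylindrical-electrode force expression of the paper.

        import numpy as np

        # The Taylor series of 1/(1 - w)^2 about w = 0 is sum_{n>=0} (n + 1) w^n; its
        # partial sums converge slowly when evaluated close to the full gap travel.
        w = 0.8                                       # deflection at 80% of the gap
        exact = 1.0 / (1.0 - w)**2
        for n_terms in (5, 10, 20, 30):
            n = np.arange(n_terms)
            partial = np.sum((n + 1) * w**n)
            print(f"{n_terms:2d} terms: relative error = {abs(partial - exact)/exact:.3e}")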

  15. On reduced models for gravity waves generated by moving bodies

    Trinh, Philippe H

    2015-01-01

    In 1982, Marshall P. Tulin published a report proposing a framework for reducing the equations for gravity waves generated by moving bodies into a single nonlinear differential equation solvable in closed form [Proc. 14th Symp. on Naval Hydrodynamics, 1982, pp.19-51]. Several new and puzzling issues were highlighted by Tulin, notably the existence of weak and strong wave-making regimes, and the paradoxical fact that the theory seemed to be applicable to flows at low speeds, "but not too low speeds". These important issues were left unanswered, and despite the novelty of the ideas, Tulin's report fell into relative obscurity. Now thirty years later, we will revive Tulin's observations, and explain how an asymptotically consistent framework allows us to address these concerns. Most notably, we will explain, using the asymptotic method of steepest descents, how the production of free-surface waves can be related to the arrangement of integration contours connected to the shape of the moving body. This approach p...

  16. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate and calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of extracting accurately the concentration of glucose from a complex biological medium.

  17. Noether derivation of exact conservation laws for dissipationless reduced-fluid models

    The energy-momentum conservation laws for general reduced-fluid (e.g., gyrofluid) models are derived by Noether method from a general reduced variational principle. The reduced canonical energy-momentum tensor (which is explicitly asymmetric and has the Minkowski form) exhibits polarization and magnetization effects associated with dynamical reduction. In particular, the asymmetry in the reduced canonical momentum-stress tensor produces a nonvanishing reduced intrinsic torque that can drive spontaneous toroidal rotation in axisymmetric tokamak plasmas.

  18. Reducing Uncertainty in Chemistry Climate Model Predictions of Stratospheric Ozone

    Douglass, A. R.; Strahan, S. E.; Oman, L. D.; Stolarski, R. S.

    2014-01-01

    Chemistry climate models (CCMs) are used to predict the future evolution of stratospheric ozone as ozone-depleting substances decrease and greenhouse gases increase, cooling the stratosphere. CCM predictions exhibit many common features, but also a broad range of values for quantities such as year of ozone-return-to-1980 and global ozone level at the end of the 21st century. Multiple linear regression is applied to each of 14 CCMs to separate ozone response to chlorine change from that due to climate change. We show that the sensitivity of lower atmosphere ozone to chlorine change ΔO3/ΔCly is a near linear function of partitioning of total inorganic chlorine (Cly) into its reservoirs; both Cly and its partitioning are controlled by lower atmospheric transport. CCMs with realistic transport agree with observations for chlorine reservoirs and produce similar ozone responses to chlorine change. After 2035 differences in response to chlorine contribute little to the spread in CCM results as the anthropogenic contribution to Cly becomes unimportant. Differences among upper stratospheric ozone increases due to temperature decreases are explained by differences in ozone sensitivity to temperature change ΔO3/ΔT due to different contributions from various ozone loss processes, each with their own temperature dependence. In the lower atmosphere, tropical ozone decreases caused by a predicted speed-up in the Brewer-Dobson circulation may or may not be balanced by middle and high latitude increases, contributing most to the spread in late 21st century predictions.

  19. Chaotic vibrations of circular cylindrical shells: Galerkin versus reduced-order models via the proper orthogonal decomposition method

    Amabili, M.; Sarkar, A.; Païdoussis, M. P.

    2006-03-01

    The geometric nonlinear response of a water-filled, simply supported circular cylindrical shell to harmonic excitation in the spectral neighbourhood of the fundamental natural frequency is investigated. The response is investigated for a fixed excitation frequency by using the excitation amplitude as bifurcation parameter for a wide range of variation. Bifurcation diagrams of Poincaré maps obtained from direct time integration and calculation of the Lyapunov exponents and Lyapunov dimension have been used to study the system. By increasing the excitation amplitude, the response undergoes (i) a period-doubling bifurcation, (ii) subharmonic response, (iii) quasi-periodic response and (iv) chaotic behaviour with up to 16 positive Lyapunov exponents (hyperchaos). The model is based on Donnell's nonlinear shallow-shell theory, and the reference solution is obtained by the Galerkin method. The proper orthogonal decomposition (POD) method is used to extract proper orthogonal modes that describe the system behaviour from time-series response data. These time-series have been obtained via the conventional Galerkin approach (using normal modes as a projection basis) with an accurate model involving 16 degrees of freedom (dofs), validated in previous studies. The POD method, in conjunction with the Galerkin approach, makes it possible to build a lower-dimensional model as compared to those obtainable via the conventional Galerkin approach. Periodic and quasi-periodic response around the fundamental resonance for fixed excitation amplitude can be very successfully simulated with a 3-dof reduced-order model. However, in the case of large variation of the excitation, even a 5-dof reduced-order model is not fully accurate. Results show that the POD methodology is not as "robust" as the Galerkin method.
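
    A minimal sketch of the POD step on synthetic time-series snapshots (not the shell-model data): the proper orthogonal modes are obtained from the singular value decomposition of the mean-subtracted snapshot matrix, and the retained modes define the reduced degrees of freedom.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 2000)
        x = np.linspace(0.0, 1.0, 64)
        # synthetic response: two dominant spatial structures plus small broadband noise
        snapshots = (np.outer(np.sin(2*np.pi*3*t), np.sin(np.pi*x))
                     + 0.3*np.outer(np.sin(2*np.pi*7*t + 0.5), np.sin(2*np.pi*x))
                     + 0.01*rng.standard_normal((t.size, x.size)))

        mean = snapshots.mean(axis=0)
        fluct = snapshots - mean                      # work with fluctuations about the mean
        U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        n_dof = np.searchsorted(energy, 0.999) + 1    # modes needed for 99.9% of the energy
        print("POD modes for 99.9% energy:", n_dof)

        # Galerkin-style projection of the snapshots onto the retained POD basis
        basis = Vt[:n_dof].T                          # spatial POD modes as columns
        reduced_coords = fluct @ basis                # time histories of the reduced dofs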

  20. Comparison of reduced models for blood flow using Runge-Kutta discontinuous Galerkin methods

    Puelz, Charles; Canic, Suncica; Rusin, Craig G

    2015-01-01

    Reduced, or one-dimensional blood flow models take the general form of nonlinear hyperbolic systems, but differ greatly in their formulation. One class of models considers the physically conserved quantities of mass and momentum, while another class describes mass and velocity. Further, the averaging process employed in the model derivation requires the specification of the axial velocity profile; this choice differentiates models within each class. Discrepancies among differing models have yet to be investigated. In this paper, we systematically compare several reduced models of blood flow for physiologically relevant vessel parameters, network topology, and boundary data. The models are discretized by a class of Runge-Kutta discontinuous Galerkin methods.

  1. Pseudo-spectral Maxwell solvers for an accurate modeling of Doppler harmonic generation on plasma mirrors with Particle-In-Cell codes

    Blaclard, G; Lehe, R; Vay, J L

    2016-01-01

    With the advent of PW class lasers, the very large laser intensities attainable on-target should enable the production of intense high order Doppler harmonics from relativistic laser-plasma mirror interactions. At present, the modeling of these harmonics with Particle-In-Cell (PIC) codes is extremely challenging as it implies an accurate description of tens of harmonic orders on a broad range of angles. In particular, we show here that standard Finite Difference Time Domain (FDTD) Maxwell solvers used in most PIC codes partly fail to model Doppler harmonic generation because they induce numerical dispersion of electromagnetic waves in vacuum, which is responsible for a spurious angular deviation of harmonic beams. This effect was extensively studied and a simple toy-model based on the Snell-Descartes law was developed that allows us to finely predict the angular deviation of harmonics depending on the spatio-temporal resolution and the Maxwell solver used in the simulations. Our model demonstrates that the miti...
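
    The spurious deviation originates from the direction-dependent numerical phase velocity of the FDTD scheme. The sketch below evaluates the textbook dispersion relation of the standard 2D Yee scheme for a hypothetical resolution; it illustrates the effect but is not the authors' toy model.

        import numpy as np
        from scipy.optimize import brentq

        c = 1.0
        dx = dy = 1.0 / 20.0                 # 20 cells per vacuum wavelength (hypothetical)
        dt = 0.7 * dx / (c * np.sqrt(2.0))   # time step below the 2D CFL limit
        omega = 2.0 * np.pi * c / 1.0        # angular frequency for a unit wavelength

        def residual(k, theta):
            # standard 2D Yee dispersion relation: LHS(omega) - RHS(kx, ky)
            kx, ky = k*np.cos(theta), k*np.sin(theta)
            lhs = (np.sin(0.5*omega*dt) / (c*dt))**2
            rhs = (np.sin(0.5*kx*dx)/dx)**2 + (np.sin(0.5*ky*dy)/dy)**2
            return lhs - rhs

        for deg in (0.0, 22.5, 45.0):
            theta = np.radians(deg)
            k_num = brentq(residual, 0.5*omega/c, 2.0*omega/c, args=(theta,))
            print(f"angle {deg:5.1f} deg: v_phase/c = {omega/(k_num*c):.5f}")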

  2. The effect of audio and video modeling on beginning guitar students' ability to accurately sing and accompany a familiar melody on guitar by ear.

    Wlodarczyk, Natalie

    2010-01-01

    The purpose of this research was to determine the effect of audio and visual modeling on music and nonmusic majors' ability to accurately sing and accompany a familiar melody on guitar by ear. Two studies were run to investigate the impact of musical training on the ability to play by ear. All participants were student volunteers enrolled in sections of a beginning class guitar course and were randomly assigned to one of three groups: control, audio modeling only, or audio and visual modeling. All participants were asked to sing the same familiar song in the same key and accompany on guitar. Study 1 compared music majors with nonmusic majors and showed no significant difference between treatment conditions; however, there was a significant difference between music majors and nonmusic majors across all conditions. There was no significant interaction between groups and treatment conditions. Study 2 investigated the operational definition of "musically trained" and compared musically trained with nonmusically trained participants across the same three conditions. Results of Study 2 showed no significant difference between musically trained and nonmusically trained participants; however, there was a significant difference between treatment conditions, with the audio-visual group completing the task in the shortest amount of time. There was no significant interaction between groups and treatment conditions. Results of these analyses support the use of instructor modeling for beginning guitar students and suggest that previous musical knowledge does not play a role in guitar skills acquisition at the beginning level. PMID:21141772

  3. Time-Dependent Solutions to the Fokker-Planck Equation of Maximum Reduced Air-Sea Coupling Climate Model

    FENG Guolin; DONG Wenjie; GAO Hongxing

    2005-01-01

    The time-dependent solution of a reduced air-sea coupling stochastic-dynamic model is obtained accurately by using the Fokker-Planck equation and the quantum mechanical method. The analysis of the time-dependent solution suggests that when the climate system is in the ground state, the behavior of the system appears to be a Brownian movement, thus providing a basis for Hasselmann's stochastic climate model; when the system is in the first excitation state, the motion of the system exhibits a time-decaying form or, under certain conditions, a periodic oscillation with a main period of 2.3 yr. Finally, the results are used to discuss the impact of the doubling of carbon dioxide on climate.

  4. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

    The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.

  5. An Efficient and Accurate Numerical Algorithm for Multi-Dimensional Modeling of Casting Solidification, Part Ⅱ: Combination of FEM and FDM

    Jin Xuesong; Tsai Hailung

    1994-01-01

    This paper is a continuation of Ref. [1]. It employs a first-order accurate Taylor-Galerkin-based finite element approach for casting solidification. The approach expresses the transient time derivative of temperature with a finite-difference approximation, while the governing equations are discretized in space via the classical Galerkin scheme using finite-element formulations. The detailed technique is reported in this study. Several casting solidification examples are solved to demonstrate the excellent agreement with results obtained by using the control volume method, and to show the viability of combining the finite element method and the finite difference method in multi-dimensional modeling of casting solidification.
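
    A minimal sketch of the FEM/FDM combination on a 1D heat-conduction test problem (hypothetical parameters, not the casting model): linear finite elements in space and a first-order finite-difference (backward Euler) approximation of the transient temperature derivative.

        import numpy as np

        nx, L, alpha = 21, 1.0, 1.0e-2            # nodes, domain length, diffusivity
        dx, dt, nsteps = L/(nx-1), 0.1, 200
        T = np.zeros(nx); T[0] = 1.0               # initial field with a hot boundary at x = 0

        # assemble consistent mass and stiffness matrices for linear elements
        M = np.zeros((nx, nx)); K = np.zeros((nx, nx))
        me = (dx/6.0)*np.array([[2.0, 1.0], [1.0, 2.0]])
        ke = (alpha/dx)*np.array([[1.0, -1.0], [-1.0, 1.0]])
        for e in range(nx-1):
            idx = np.ix_([e, e+1], [e, e+1])
            M[idx] += me; K[idx] += ke

        A = M/dt + K                               # backward-Euler system matrix
        A[0, :] = 0.0; A[0, 0] = 1.0               # Dirichlet condition at x = 0
        A[-1, :] = 0.0; A[-1, -1] = 1.0            # Dirichlet condition at x = L
        for _ in range(nsteps):
            b = (M/dt) @ T
            b[0], b[-1] = 1.0, 0.0
            T = np.linalg.solve(A, b)
        print("temperature at midpoint:", T[nx//2])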

  6. Reducing uncertainty for estimating forest carbon stocks and dynamics using integrated remote sensing, forest inventory and process-based modeling

    Poulter, B.; Ciais, P.; Joetzjer, E.; Maignan, F.; Luyssaert, S.; Barichivich, J.

    2015-12-01

    Accurately estimating forest biomass and forest carbon dynamics requires new integrated remote sensing, forest inventory, and carbon cycle modeling approaches. Presently, there is an increasing and urgent need to reduce forest biomass uncertainty in order to meet the requirements of carbon mitigation treaties, such as Reducing Emissions from Deforestation and forest Degradation (REDD+). Here we describe a new parameterization and assimilation methodology used to estimate tropical forest biomass using the ORCHIDEE-CAN dynamic global vegetation model. ORCHIDEE-CAN simulates carbon uptake and allocation to individual trees using a mechanistic representation of photosynthesis, respiration and other first-order processes. The model is first parameterized using forest inventory data to constrain background mortality rates, i.e., self-thinning, and productivity. Satellite remote sensing data for forest structure, i.e., canopy height, is used to constrain simulated forest stand conditions using a look-up table approach to match canopy height distributions. The resulting forest biomass estimates are provided for spatial grids that match REDD+ project boundaries and aim to provide carbon estimates for the criteria described in the IPCC Good Practice Guidelines Tier 3 category. With the increasing availability of forest structure variables derived from high-resolution LIDAR, RADAR, and optical imagery, new methodologies and applications with process-based carbon cycle models are becoming more readily available to inform land management.

  7. Bayesian State-Space Modelling of Conventional Acoustic Tracking Provides Accurate Descriptors of Home Range Behavior in a Small-Bodied Coastal Fish Species.

    Josep Alós

    Full Text Available State-space models (SSM) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems in marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time-series of detection events across the tracking period. Here we report a novel Bayesian fitting of a SSM application that couples mechanistic movement properties within a home range (a specific case of random walk weighted by an Ornstein-Uhlenbeck process) with a model of observational error typical for data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling the number of detections over long time-steps) the model produced some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM we present here constitutes an alternative for those used to the Bayesian way of reasoning. Our Bayesian SSM can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals on the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa.
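
    For readers unfamiliar with the movement model, the sketch below simulates a home range as a two-dimensional Ornstein-Uhlenbeck random walk observed with positioning error, as in acoustic receiver arrays; the parameter values are hypothetical and the full Bayesian fitting step is not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)
        n, dt = 5000, 60.0                    # number of steps, time-step in seconds
        centre = np.array([0.0, 0.0])
        tau, sigma = 3600.0, 0.5              # attraction time scale (s), diffusion scale (m/sqrt(s))
        obs_sd = 5.0                          # receiver-array positioning error (m)

        x = np.zeros((n, 2))
        for i in range(1, n):
            drift = -(x[i-1] - centre) * dt / tau            # pull toward the home-range centre
            x[i] = x[i-1] + drift + sigma*np.sqrt(dt)*rng.standard_normal(2)
        y = x + obs_sd * rng.standard_normal(x.shape)        # observed (detected) positions

        # crude check of the home-range size implied by the OU stationary variance
        print("empirical SD of true positions:", x.std(axis=0))
        print("theoretical OU stationary SD  :", np.sqrt(sigma**2 * tau / 2.0))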

  8. Bayesian State-Space Modelling of Conventional Acoustic Tracking Provides Accurate Descriptors of Home Range Behavior in a Small-Bodied Coastal Fish Species

    Alós, Josep; Palmer, Miquel; Balle, Salvador; Arlinghaus, Robert

    2016-01-01

    State-space models (SSM) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems in marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time-series of detection events across the tracking period. Here we report a novel Bayesian fitting of a SSM application that couples mechanistic movement properties within a home range (a specific case of random walk weighted by an Ornstein-Uhlenbeck process) with a model of observational error typical for data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling the number of detections over long time-steps) the model produced some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM we present here constitutes an alternative for those used to the Bayesian way of reasoning. Our Bayesian SSM can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals on the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa. PMID:27119718

  9. Bayesian State-Space Modelling of Conventional Acoustic Tracking Provides Accurate Descriptors of Home Range Behavior in a Small-Bodied Coastal Fish Species.

    Alós, Josep; Palmer, Miquel; Balle, Salvador; Arlinghaus, Robert

    2016-01-01

    State-space models (SSM) are increasingly applied in studies involving biotelemetry-generated positional data because they are able to estimate movement parameters from positions that are unobserved or have been observed with non-negligible observational error. Popular telemetry systems in marine coastal fish consist of arrays of omnidirectional acoustic receivers, which generate a multivariate time-series of detection events across the tracking period. Here we report a novel Bayesian fitting of a SSM application that couples mechanistic movement properties within a home range (a specific case of random walk weighted by an Ornstein-Uhlenbeck process) with a model of observational error typical for data obtained from acoustic receiver arrays. We explored the performance and accuracy of the approach through simulation modelling and extensive sensitivity analyses of the effects of various configurations of movement properties and time-steps among positions. Model results show an accurate and unbiased estimation of the movement parameters, and in most cases the simulated movement parameters were properly retrieved. Only in extreme situations (when fast swimming speeds are combined with pooling the number of detections over long time-steps) the model produced some bias that needs to be accounted for in field applications. Our method was subsequently applied to real acoustic tracking data collected from a small marine coastal fish species, the pearly razorfish, Xyrichtys novacula. The Bayesian SSM we present here constitutes an alternative for those used to the Bayesian way of reasoning. Our Bayesian SSM can be easily adapted and generalized to any species, thereby allowing studies in freely roaming animals on the ecological and evolutionary consequences of home ranges and territory establishment, both in fishes and in other taxa. PMID:27119718

  10. The role of chemistry and pH of solid surfaces for specific adsorption of biomolecules in solution—accurate computational models and experiment

    Adsorption of biomolecules and polymers to inorganic nanostructures plays a major role in the design of novel materials and therapeutics. The behavior of flexible molecules on solid surfaces at a scale of 1–1000 nm remains difficult and expensive to monitor using current laboratory techniques, while playing a critical role in energy conversion and composite materials as well as in understanding the origin of diseases. Approaches to implement key surface features and pH in molecular models of solids are explained, and distinct mechanisms of peptide recognition on metal nanostructures, silica and apatite surfaces in solution are described as illustrative examples. The influence of surface energies, specific surface features and protonation states on the structure of aqueous interfaces and selective biomolecular adsorption is found to be critical, comparable to the well-known influence of the charge state and pH of proteins and surfactants on their conformations and assembly. The representation of such details in molecular models according to experimental data and available chemical knowledge enables accurate simulations of unknown complex interfaces in atomic resolution in quantitative agreement with independent experimental measurements. In this context, the benefits of a uniform force field for all material classes and of a mineral surface structure database are discussed. (paper)

  11. Reduced-order LPV model of flexible wind turbines from high fidelity aeroelastic codes

    Adegas, Fabiano Daher; Sønderby, Ivan Bergquist; Hansen, Morten Hartvig; Stoustrup, Jakob

    Linear aeroelastic models used for stability analysis of wind turbines are commonly of very high order. These high-order models are generally not suitable for control analysis and synthesis. This paper presents a methodology to obtain a reduced-order linear parameter varying (LPV) model from a se...

  12. Reduced order model (ROM) of a pouch type lithium polymer battery based on electrochemical thermal principles for real time applications

    Accurate and fast estimation of state of charge and state of health during operation plays a pivotal role in preventing overcharge or undercharge and in accurately monitoring cell degradation, which requires a model that can be embedded in the battery management system. Currently, the models used are based on empirical equations, electric equivalent circuit components with voltage sources, or a combination of the two. These models are relatively simple, but limited in their ability to represent a wide range of operating behaviors, including the effects of temperature, state of charge (SOC) and degradation, to name a few. On the other hand, full order models (FOM) are multi-dimensional or multi-scale models based on electrochemical and thermal principles that are capable of representing the details of cell behavior, but they are inadequate for real time applications simply because of their high computational time. Therefore, there is a need for a model with intermediate performance and real time capability, which is accomplished by reduction of the FOM and is called a reduced order model (ROM). The battery used for development of the ROM is a pouch type lithium ion polymer battery (LiPB) made of LiMn2O4 (LMO)/carbon. The model reduction was carried out for ion concentrations, potentials and kinetics: the ion concentration in the electrode using the polynomial approach, the ion concentration in the electrolyte using the state space method, and the potentials and electrochemical kinetics by linearization. In addition, the energy equation is used to calculate the cell temperature, on which the diffusion coefficient and the solid electrolyte interphase (SEI) resistance depend. The computational time step is determined based on the total computational time and the errors over a given SOC range at different C-rates. ROM responses are compared with those of the FOM and with experimental data for single and multiple cycles under different operating conditions.
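
    The "polynomial approach" for the electrode ion concentration mentioned above is sketched here in its textbook two-parameter (parabolic profile) form; the parameter values are hypothetical and this is not necessarily the authors' exact reduction.

        # Parabolic-profile approximation of solid-phase diffusion in a spherical particle:
        # the volume-averaged concentration obeys a simple ODE, and the surface value is
        # offset from the average in proportion to the pore-wall flux.
        Rs, Ds = 5.0e-6, 1.0e-14        # particle radius (m), solid diffusivity (m^2/s)
        c_avg, c_max = 15000.0, 25000.0 # average and maximum concentration (mol/m^3)
        j_n = 1.0e-5                    # constant pore-wall molar flux out of the particle (mol/m^2/s)
        dt, t_end = 1.0, 600.0

        for _ in range(int(t_end/dt)):
            c_avg += dt * (-3.0 * j_n / Rs)              # volume-averaged concentration ODE
            c_surf = c_avg - j_n * Rs / (5.0 * Ds)       # parabolic-profile surface value
        print("average/surface stoichiometry:", c_avg/c_max, c_surf/c_max)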

  13. Novel Reduced Order in Time Models for Problems in Nonlinear Aeroelasticity Project

    National Aeronautics and Space Administration — Research is proposed for the development and implementation of state of the art, reduced order models for problems in nonlinear aeroelasticity. Highly efficient and...

  14. Parametric reduced models for the nonlinear Schrödinger equation.

    Harlim, John; Li, Xiantao

    2015-05-01

    Reduced models for the (defocusing) nonlinear Schrödinger equation are developed. In particular, we develop reduced models that only involve the low-frequency modes given noisy observations of these modes. The ansatz of the reduced parametric models is obtained by employing a rational approximation and a colored-noise approximation, respectively, on the memory terms and the random noise of a generalized Langevin equation that is derived from the standard Mori-Zwanzig formalism. The parameters in the resulting reduced models are inferred from noisy observations with a recently developed ensemble Kalman filter-based parametrization method. The forecasting skill across different temperature regimes is verified by comparing the moments up to order four, two-time correlation function statistics, and marginal densities of the coarse-grained variables. PMID:26066278

  15. Quantum corrections of (fuzzy) spacetimes from a supersymmetric reduced model with Filippov 3-algebra

    Tomino, Dan

    2010-01-01

    1-loop vacuum energies of (fuzzy) spacetimes from a supersymmetric reduced model with Filippov 3-algebra are discussed. A_{2,2} algebra, Nambu-Poisson algebra in flat spacetime, and a Lorentzian 3-algebra are examined as 3-algebras.

  16. Modeling and Optimization of Direct Chill Casting to Reduce Ingot Cracking

    Das, Subodh K.

    2006-01-09

    reheating-cooling method (RCM), was developed and validated for measuring mechanical properties in the nonequilibrium mushy zones of alloys. The new method captures the brittle nature of aluminum alloys at temperatures close to the nonequilibrium solidus temperature, while specimens tested using the reheating method exhibit significant ductility. The RCM has been used for determining the mechanical properties of alloys at nonequilibrium mushy zone temperatures. Accurate data obtained during this project show that the metal becomes more brittle at high temperatures and high strain rates. (4) The elevated-temperature mechanical properties of the alloy were determined. Constitutive models relating the stress and strain relationship at elevated temperatures were also developed. The experimental data fit the model well. (5) An integrated 3D DC casting model has been used to simulate heat transfer, fluid flow, solidification, and thermally induced stress-strain during casting. A temperature-dependent HTC between the cooling water and the ingot surface, cooling water flow rate, and air gap were coupled in this model. An elasto-viscoplastic model based on high-temperature mechanical testing was used to calculate the stress during casting. The 3D integrated model can be used for the prediction of temperature, fluid flow, stress, and strain distribution in DC cast ingots. (6) The cracking propensity of DC cast ingots can be predicted using the 3D integrated model as well as thermodynamic models. Thus, an ingot cracking index based on the ratio of local stress to local alloy strength was established. Simulation results indicate that cracking propensity increases with increasing casting speed. The composition of the ingots also has a major effect on cracking formation. It was found that copper and zinc increase the cracking propensity of DC cast ingots. The goal of this Aluminum Industry of the Future (IOF) project was to assist the aluminum industry in reducing the incidence of stress

  17. Accurate Finite Difference Algorithms

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  18. A random forest based risk model for reliable and accurate prediction of receipt of transfusion in patients undergoing percutaneous coronary intervention.

    Hitinder S Gurm

    Full Text Available BACKGROUND: Transfusion is a common complication of Percutaneous Coronary Intervention (PCI) and is associated with adverse short and long term outcomes. There is no risk model for identifying patients most likely to receive transfusion after PCI. The objective of our study was to develop and validate a tool for predicting receipt of blood transfusion in patients undergoing contemporary PCI. METHODS: Random forest models were developed utilizing 45 pre-procedural clinical and laboratory variables to estimate the receipt of transfusion in patients undergoing PCI. The most influential variables were selected for inclusion in an abbreviated model. Model performance estimating transfusion was evaluated in an independent validation dataset using area under the ROC curve (AUC), with net reclassification improvement (NRI) used to compare full and reduced model prediction after grouping in low, intermediate, and high risk categories. The impact of procedural anticoagulation on observed versus predicted transfusion rates was assessed for the different risk categories. RESULTS: Our study cohort was comprised of 103,294 PCI procedures performed at 46 hospitals between July 2009 and December 2012 in Michigan, of which 72,328 (70%) were randomly selected for training the models and 30,966 (30%) for validation. The models demonstrated excellent calibration and discrimination (AUC: full model = 0.888 (95% CI 0.877-0.899), reduced model AUC = 0.880 (95% CI 0.868-0.892), p for difference 0.003; NRI = 2.77%, p = 0.007). Procedural anticoagulation and radial access significantly influenced transfusion rates in the intermediate and high risk patients, but no clinically relevant impact was noted in low risk patients, who made up 70% of the total cohort. CONCLUSIONS: The risk of transfusion among patients undergoing PCI can be reliably calculated using a novel easy to use computational tool (https://bmc2.org/calculators/transfusion). This risk prediction
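
    A generic sketch of the modelling recipe described above (random forest risk model, variable-importance ranking for an abbreviated model, AUC on a held-out validation set) using synthetic data; the real model uses 45 pre-procedural clinical and laboratory variables from the registry, which are not reproduced here.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # synthetic stand-in for the registry: 45 features, ~5% event (transfusion) rate
        X, y = make_classification(n_samples=20000, n_features=45, n_informative=10,
                                   weights=[0.95, 0.05], random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                            random_state=0, stratify=y)

        model = RandomForestClassifier(n_estimators=300, min_samples_leaf=20,
                                       random_state=0, n_jobs=-1)
        model.fit(X_train, y_train)
        prob = model.predict_proba(X_test)[:, 1]
        print("validation AUC:", round(roc_auc_score(y_test, prob), 3))

        # rank variables to pick the most influential ones for an abbreviated model
        top = np.argsort(model.feature_importances_)[::-1][:10]
        print("ten most influential features (indices):", top)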

  19. Reduced thermal quadrupole heat transport modeling in harmonic and transient regime scanning thermal microscopy using nanofabricated thermal probes

    Bodzenta, J.; Chirtoc, M.; Juszczyk, J.

    2014-08-01

    A thermal model of a nanofabricated thermal probe (NTP) used in scanning thermal microscopy is proposed. It is based on consideration of the heat exchange channels between the electrically heated probe, a sample, and their surroundings, in transient and harmonic regimes. Three zones in the probe-sample system were distinguished and modeled by using electrical analogies of heat flow through a chain of quadrupoles built from thermal resistances and thermal capacitances. The analytical transfer functions for two- and three-cell quadrupoles are derived. A reduced thermal quadrupole with merged RC elements allows for thermo-electrical modeling of the complex architecture of an NTP, with a minimum of independent parameters (two resistance ratios and two time constants). The validity of the model is examined by comparing computed values of discrete RC elements with results of finite element simulations and with experimental data. It is shown that a model consisting of a two- or three-cell quadrupole is sufficient for accurate interpretation of experimental results. The bandwidth of the NTP is limited to 10 kHz. The performance in the dc regime can simply be obtained in the limit of zero frequency. One concludes that the low NTP sensitivity to sample thermal conductivity is due, much as in the dc regime, to significant heat by-pass by conduction through the cantilever, and to the presence of probe-sample contact resistance in series with the sample.
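
    The quadrupole idea can be sketched as follows: each merged RC cell is a 2x2 matrix linking temperature and heat flux at its two ports, and cascading the cells gives a harmonic-regime thermal impedance. The R and C values below are hypothetical, not the fitted NTP parameters.

        import numpy as np

        def rc_cell(R, C, omega):
            # series thermal resistance followed by a shunt thermal capacitance
            Y = 1j * omega * C
            return np.array([[1.0 + R*Y, R],
                             [Y,         1.0]])

        cells = [(2.0e5, 5.0e-9), (1.0e6, 2.0e-9)]   # two-cell quadrupole: (K/W, J/K)
        for f in (10.0, 1.0e3, 1.0e4, 1.0e5):        # spans the ~10 kHz bandwidth noted above
            omega = 2.0 * np.pi * f
            M = np.eye(2, dtype=complex)
            for R, C in cells:
                M = M @ rc_cell(R, C, omega)
            Z = M[0, 1] / M[1, 1]                    # input impedance with an isothermal far end
            print(f"f = {f:8.0f} Hz: |Z| = {abs(Z):.3e} K/W, "
                  f"phase = {np.degrees(np.angle(Z)):6.1f} deg")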

  20. Reduced-order modeling for cardiac electrophysiology. Application to parameter identification

    Boulakia, Muriel; Gerbeau, Jean-Frédéric

    2011-01-01

    A reduced-order model based on Proper Orthogonal Decomposition (POD) is proposed for the bidomain equations of cardiac electrophysiology. Its accuracy is assessed through electrocardiograms in various configurations, including myocardium infarctions and long-time simulations. We show in particular that a restitution curve can efficiently be approximated by this approach. The reduced-order model is then used in an inverse problem solved by an evolutionary algorithm. Some attempts are presented to identify ionic parameters and infarction locations from synthetic ECGs.

  1. Rheological and thermophysical properties of model compounds for ice-cream with reduced fat and sugar

    Drago Šubarić; Jurislav Babić; Đurđica Ačkar; Borislav Miličević; Mirela Kopjar; Vedran Slačanac

    2010-01-01

    The aim of this research was to investigate the effect of hydrocolloid carrageenan, native tapioca starch and powdered whey on viscosity and thermophysical properties of model ice-cream mixtures with reduced content of sugar and fat. Measurements were performed immediately after mixture preparation and after two months of storage at -18 °C. Results showed that rheological properties of model ice-cream mixtures with reduced content of sugar and fat can be improved by addition of starch and whe...

  2. Reduced models of doubly fed induction generator system for wind turbine simulations

    Sørensen, Poul Ejnar; Hansen, Anca Daniela; Lund, Torsten; Bindner, Henrik W.

    2005-01-01

    This article compares three reduced models with a detailed model of a doubly fed induction generator system for wind turbine applications. The comparisons are based on simulations only. The main idea is to provide reduced generator models which are appropriate to simulate normal wind turbine...... fed induction generator system is very well represented by the dynamics due to the generator inertia and the generator control system, whereas the electromagnetic characteristics of the generator can be represented by the steady state relations. The parameters for the proposed models are derived from......

  3. On the validity of drift-reduced fluid models for tokamak plasma simulation

    Leddy, Jarrod; Romanelli, Michele

    2015-01-01

    Drift-reduced plasma fluid models are commonly used in plasma physics for analytics and simulations; however, the validity of such models must be verified for the regions of parameter space in which tokamak plasmas exist. By looking at the linear behaviour of drift-reduced and full-velocity models one can determine that the physics lost through the simplification that the drift-reduction provides is important in the core region of the tokamak. It is more acceptable for the edge-region but one must determine specifically for a given simulation if such a model is appropriate.

  4. Study on dynamic characteristics' change of hippocampal neuron reduced models caused by the Alzheimer's disease.

    Peng, Yueping; Wang, Jue; Zheng, Chongxun

    2016-12-01

    In this paper, based on electrophysiological experimental data, a reduced hippocampal neuron model under the pathological condition of Alzheimer's disease (AD) is built by modifying parameter values. The dynamic characteristics of the reduced neuron model under the effect of AD are studied comparatively. Under direct current stimulation, the dynamic characteristics of the AD neuron model are clearly changed compared with those of the normal neuron model. The neuron model under the AD condition undergoes a supercritical Andronov-Hopf bifurcation from the rest state to the continuous discharge state, unlike the neuron model under the normal condition, which undergoes a saddle-node bifurcation. Thus, under the action of AD the neuron model changes from an integrator with a bistable state into a resonator with a monostable state. The research reveals how the neuron model's dynamic characteristics change under the effect of AD, and provides a theoretical basis for AD research from the perspective of neurodynamics. PMID:26998957

  5. CLASH-VLT: Insights on the mass substructures in the Frontier Fields Cluster MACS J0416.1-2403 through accurate strong lens modeling

    Grillo, C; Rosati, P; Mercurio, A; Balestra, I; Munari, E; Nonino, M; Caminha, G B; Lombardi, M; De Lucia, G; Borgani, S; Gobat, R; Biviano, A; Girardi, M; Umetsu, K; Coe, D; Koekemoer, A M; Postman, M; Zitrin, A; Halkola, A; Broadhurst, T; Sartoris, B; Presotto, V; Annunziatella, M; Maier, C; Fritz, A; Vanzella, E; Frye, B

    2014-01-01

    We present a detailed mass reconstruction and a novel study on the substructure properties in the core of the CLASH and Frontier Fields galaxy cluster MACS J0416.1-2403. We show and employ our extensive spectroscopic data set taken with the VIMOS instrument as part of our CLASH-VLT program, to confirm spectroscopically 10 strong lensing systems and to select a sample of 175 plausible cluster members to a limiting stellar mass of log(M_*/M_Sun) ~ 8.6. We reproduce the measured positions of 30 multiple images with a remarkable median offset of only 0.3" by means of a comprehensive strong lensing model comprised of 2 cluster dark-matter halos, represented by cored elliptical pseudo-isothermal mass distributions, and the cluster member components. The latter have total mass-to-light ratios increasing with the galaxy HST/WFC3 near-IR (F160W) luminosities. The measurement of the total enclosed mass within the Einstein radius is accurate to ~5%, including systematic uncertainties. We emphasize that the use of multip...

  6. Detection of (15)NNH+ in L1544: non-LTE modelling of dyazenilium hyperfine line emission and accurate (14)N/(15)N values

    Bizzocchi, Luca; Leonardo, Elvira; Dore, Luca

    2013-01-01

    Samples of pristine Solar System material found in meteorites and interplanetary dust particles are highly enriched in (15)N. Conspicuous nitrogen isotopic anomalies have also been measured in comets, and the (14)N/(15)N abundance ratio of the Earth is itself larger than the recognised pre-solar value by almost a factor of two. Ion-molecule, low-temperature chemical reactions in the proto-solar nebula have repeatedly been indicated as responsible for these (15)N-enhancements. We have searched for (15)N variants of the N2H+ ion in L1544, a prototypical starless cloud core which is one of the best candidate sources for detection owing to its low central core temperature and high CO depletion. The goal is the evaluation of accurate and reliable (14)N/(15)N ratio values for this species in the interstellar gas. A deep integration of the (15)NNH+ (1-0) line at 90.4 GHz has been obtained with the IRAM 30 m telescope. Non-LTE radiative transfer modelling has been performed on the J=1-0 emissions of the parent and ...

  7. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model.

    Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866

  8. Complex functionality with minimal computation: Promise and pitfalls of reduced-tracer ocean biogeochemistry models

    Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh

    2015-12-01

    Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers, that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.

  9. A nonlinear manifold-based reduced order model for multiscale analysis of heterogeneous hyperelastic materials

    Bhattacharjee, Satyaki; Matouš, Karel

    2016-05-01

    A new manifold-based reduced order model for nonlinear problems in multiscale modeling of heterogeneous hyperelastic materials is presented. The model relies on a global geometric framework for nonlinear dimensionality reduction (Isomap), and the macroscopic loading parameters are linked to the reduced space using a Neural Network. The proposed model provides both homogenization and localization of the multiscale solution in the context of computational homogenization. To construct the manifold, we perform a number of large three-dimensional simulations of a statistically representative unit cell using a parallel finite strain finite element solver. The manifold-based reduced order model is verified using common principles from the machine-learning community. Both homogenization and localization of the multiscale solution are demonstrated on a large three-dimensional example and the local microscopic fields as well as the homogenized macroscopic potential are obtained with acceptable engineering accuracy.
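
    A conceptual sketch of the manifold-based ROM workflow on synthetic data (the actual model is built from large three-dimensional finite-strain unit-cell simulations): Isomap embeds the high-dimensional snapshots, and a small neural network learns the map from the macroscopic loading parameters to the reduced coordinates.

        import numpy as np
        from sklearn.manifold import Isomap
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        loading = rng.uniform(0.0, 1.0, size=(400, 2))            # macroscopic loading parameters
        x = np.linspace(0.0, 1.0, 200)
        # synthetic "microscopic fields" that depend nonlinearly on the loading
        snapshots = np.array([np.sin(2*np.pi*(1 + 2*a)*x) * np.exp(-b*x) for a, b in loading])

        embedding = Isomap(n_neighbors=10, n_components=3)        # nonlinear dimensionality reduction
        reduced = embedding.fit_transform(snapshots)               # reduced-space coordinates

        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
        net.fit(loading, reduced)                                   # loading parameters -> manifold coords
        print("training R^2 of the loading-to-manifold map:", round(net.score(loading, reduced), 3))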

  10. An optimal control model for reducing and trading of carbon emissions

    Guo, Huaying; Liang, Jin

    2016-03-01

    A stochastic optimal control model of reducing and trading for carbon emissions is established in this paper. With considerations of reducing the carbon emission growth and the price of the allowances in the market, an optimal policy is sought that achieves the agreed emission reduction targets at minimum total cost. The model leads to a two-dimensional Hamilton-Jacobi-Bellman (HJB) equation problem. By dimension reduction and the Cole-Hopf transformation, a semi-closed-form solution of the corresponding HJB problem is obtained under some assumptions. For more general cases, numerical calculations, analysis, and comparisons are presented.

  11. Parameter Identification and On-line Estimation of a Reduced Kinetic Model

    Dellorco, P.C.; Flesner, R.L.; Le, L.A.; Littell, J.D.; Muske, K.R.

    1999-02-01

    In this work, we present the estimation techniques used to update the model parameters in a reduced kinetic model describing the oxidation-reduction reactions in a hydrothermal oxidation reactor. The model is used in a nonlinear model-based controller that minimizes the total aqueous nitrogen in the reactor effluent. Model reduction is accomplished by combining similar reacting compounds into one of four component groups and considering the global reaction pathways for each of these groups. The reduced kinetic model developed for this reaction system provides a means to characterize the complex chemical reaction system without considering each chemical species present and the reaction kinetics of every possible reaction pathway. For the reaction system under study, model reduction is essential in order to reduce the computational requirement so that on-line implementation of the nonlinear model-based controller is possible and also to reduce the amount of a priori information required for the model.

  12. Reduced Order Modeling for Prediction and Control of Large-Scale Systems.

    Kalashnikova, Irina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Computational Mathematics; Arunajatesan, Srinivasan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Aerosciences Dept.; Barone, Matthew Franklin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Aerosciences Dept.; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Uncertainty Quantification and Optimization Dept.; Fike, Jeffrey A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Component Science and Mechanics Dept.

    2014-05-01

    This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Attention is then turned to nonlinear conservation laws. A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier
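
    For readers unfamiliar with the POD step underlying the Galerkin ROMs discussed in this report, the following minimal sketch (not Sandia's code) extracts a POD basis from a synthetic snapshot matrix via the thin SVD and truncates it with an energy criterion; all sizes and the 99% threshold are illustrative assumptions.

```python
# Minimal POD sketch: the POD modes are the left singular vectors of a
# (mean-subtracted) snapshot matrix assembled from high-fidelity solutions.
import numpy as np

rng = np.random.default_rng(1)
n_dofs, n_snapshots = 1000, 60
snapshots = rng.normal(size=(n_dofs, n_snapshots))   # columns = high-fidelity states

mean_state = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_state, full_matrices=False)

# Keep enough modes to capture, say, 99% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :r]                                      # POD basis for the Galerkin projection
print(f"retained {r} of {n_snapshots} modes")
```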

  13. Stochastic reduced order computational model of structures having numerous local elastic modes in low frequency dynamics

    Arnoux, A.; Batou, A.; Soize, C.; Gagliardini, L.

    2013-08-01

    This paper is devoted to the construction of a stochastic reduced order computational model of structures having numerous local elastic modes in low frequency dynamics. We are particularly interested in automotive vehicles, which are made up of stiff parts and flexible components. This type of structure is characterized by the fact that it exhibits, in the low frequency range, not only the classical global elastic modes but also numerous local elastic modes which cannot easily be separated from the global elastic modes. To solve this difficult problem, an innovative method has recently been proposed for constructing a reduced order computational dynamical model adapted to this particular situation for the low frequency range. A new adapted generalized eigenvalue problem is then introduced, which allows a global vector basis to be constructed for the global displacement space. This method requires decomposing the domain of the structure into sub-domains. Such a decomposition is carried out using the Fast Marching Method. This global vector basis is then used to construct the reduced order computational model. Since there are model uncertainties induced by modeling errors in the computational model, the nonparametric probabilistic approach of uncertainties is used and implemented in the reduced order computational model. The methodology is applied to a complex computational model of an automotive vehicle.

  14. Forward Modeling of Reduced Power Spectra From Three-Dimensional k-Space

    von Papen, Michael; Saur, Joachim

    2015-01-01

    We present results from a numerical forward model to evaluate one-dimensional reduced power spectral densities (PSD) from arbitrary energy distributions in $\mathbf{k}$-space. In this model, we can separately calculate the diagonal elements of the spectral tensor for incompressible axisymmetric turbulence with vanishing helicity. Given a critically balanced turbulent cascade with $k_\| \sim k_\perp^\alpha$ and $\alpha

  15. A Reduced Form Framework for Modeling Volatility of Speculative Prices based on Realized Variation Measures

    Andersen, Torben G.; Bollerslev, Tim; Huang, Xin

    Building on realized variance and bi-power variation measures constructed from high-frequency financial prices, we propose a simple reduced form framework for effectively incorporating intraday data into the modeling of daily return volatility. We decompose the total daily return variability into...... combination of an ACH model for the time-varying jump intensities coupled with a relatively simple log-linear structure for the jump sizes. Lastly, we discuss how the resulting reduced form model structure for each of the three components may be used in the construction of out-of-sample forecasts for the...

  16. A reduced model for ion temperature gradient turbulent transport in helical plasmas

    A novel reduced model for ion temperature gradient (ITG) turbulent transport in helical plasmas is presented. The model enables one to predict nonlinear gyrokinetic simulation results from linear gyrokinetic analyses. It is shown from nonlinear gyrokinetic simulations of the ITG turbulence in helical plasmas that the transport coefficient can be expressed as a function of the turbulent fluctuation level and the averaged zonal flow amplitude. Then, the reduced model for the turbulent ion heat diffusivity is derived by representing the nonlinear turbulent fluctuations and zonal flow amplitude in terms of the linear growth rate of the ITG instability and the linear response of the zonal flow potentials. It is confirmed that the reduced transport model results are in good agreement with those from nonlinear gyrokinetic simulations for high ion temperature plasmas in the Large Helical Device. (author)

  17. Application of a reduced order model to BWR corewide stability analysis

    The determination of system stability parameters from power readings is a problem usually solved by time series techniques such as autoregressive modeling. These techniques are capable of determining the system stability, but they ignore the physics of the process and focus on the determination of an nth-order linear model. A nonlinear reduced order system is used in conjunction with estimation techniques to present a different approach to stability determination. The simulation of the reduced order model shows the importance of the feedback reactivity imposed by the thermal-hydraulics; the dominant contribution to this feedback is provided by the void reactivity, which is a function of power, burnup, power distribution, and in general of the operating conditions of the system. The feedback reactivity is estimated from power measurements and used in conjunction with a reduced order model to determine the system stability properties in terms of the decay ratio

  18. Incorporating Prior Knowledge for Quantifying and Reducing Model-Form Uncertainty in RANS Simulations

    Wang, Jian-Xun; Xiao, Heng

    2015-01-01

    Simulations based on Reynolds-Averaged Navier-Stokes (RANS) models have been used to support high-consequence decisions related to turbulent flows. Apart from the deterministic model predictions, decision makers are often equally concerned about the confidence in those predictions. Among the uncertainties in RANS simulations, model-form uncertainty is an important or even dominant source. Therefore, quantifying and reducing the model-form uncertainties in RANS simulations is of critical importance for making risk-informed decisions. Researchers in the statistics community have made efforts on this issue by considering numerical models as black boxes. However, this physics-neutral approach is not the most efficient use of data, and is not practical for most engineering problems. Recently, we proposed an open-box, Bayesian framework for quantifying and reducing model-form uncertainties in RANS simulations by incorporating observation data and physics-prior knowledge. It can incorporate the information from the vast...

  19. Can hypoxia-PET map hypoxic cell density heterogeneity accurately in an animal tumor model at a clinically obtainable image contrast?

    Background: PET allows non-invasive mapping of tumor hypoxia, but the combination of low resolution, slow tracer adduct-formation and slow clearance of unbound tracer remains problematic. Using a murine tumor with a hypoxic fraction within the clinical range and a tracer post-injection sampling time that results in clinically obtainable tumor-to-reference tissue activity ratios, we have analyzed to what extent inherent limitations actually compromise the validity of PET-generated hypoxia maps. Materials and methods: Mice bearing SCCVII tumors were injected with the PET hypoxia-marker fluoroazomycin arabinoside (FAZA), and the immunologically detectable hypoxia marker, pimonidazole. Tumors and reference tissue (muscle, blood) were harvested 0.5, 2 and 4 h after FAZA administration. Tumors were analyzed for global (well counter) and regional (autoradiography) tracer distribution and compared to pimonidazole as visualized using immunofluorescence microscopy. Results: Hypoxic fraction as measured by pimonidazole staining ranged from 0.09 to 0.32. FAZA tumor to reference tissue ratios were close to unity 0.5 h post-injection but reached values of 2 and 6 when tracer distribution time was prolonged to 2 and 4 h, respectively. A fine-scale pixel-by-pixel comparison of autoradiograms and immunofluorescence images revealed a clear spatial link between FAZA and pimonidazole-adduct signal intensities at 2 h and later. Furthermore, when using a pixel size that mimics the resolution in PET, an excellent correlation between pixel FAZA mean intensity and density of hypoxic cells was observed already at 2 h post-injection. Conclusions: Despite inherent weaknesses, PET-hypoxia imaging is able to generate quantitative tumor maps that accurately reflect the underlying microscopic reality (i.e., hypoxic cell density) in an animal model with a clinically realistic image contrast.

  20. Study of the nutrient and plankton dynamics in Lake Tanganyika using a reduced-gravity model

    Naithani, Jaya; Darchambeau, François; Deleersnijder, Eric; Descy, Jean-Pierre; Wolanski, Eric

    2007-01-01

    An eco-hydrodynamic (ECOH) model is proposed for Lake Tanganyika to study the plankton productivity. The hydrodynamic sub-model solves the non-linear, reduced-gravity equations in which wind is the dominant forcing. The ecological sub-model for the epilimnion comprises nutrients, primary production, phytoplankton biomass and zooplankton biomass. In the absence of significant terrestrial input of nutrients, the nutrient loss is compensated for by seasonal, wind-driven, turbulent entrainment of...

  1. Interpolation-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    Mudunuru, M. K.; Karra, S.; D. R. Harp; Guthrie, G. D.; Viswanathan, H. S.

    2016-01-01

    The goal of this paper is to assess the utility of Reduced-Order Models (ROMs) developed from 3D physics-based models for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on Latin Hypercube Sampling (LHS) of model inputs drawn from uniform probability distributions. Key sensitive parameters are identified from these simulatio...
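
    A hedged sketch of the sampling step mentioned above: Latin Hypercube Sampling of a few uncertain reservoir inputs with SciPy. The parameter names and ranges are purely illustrative assumptions, not values from the paper.

```python
# Latin Hypercube Sampling of uncertain inputs that would drive an ensemble of
# 3D physics-based simulations; the (inputs, thermal-output) pairs would then
# train the interpolation-based ROM.
import numpy as np
from scipy.stats import qmc

# Illustrative uncertain inputs: permeability [m^2], injection rate [kg/s], fracture spacing [m]
l_bounds = [1e-15, 20.0, 30.0]
u_bounds = [1e-13, 80.0, 120.0]

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_samples = sampler.random(n=100)                  # 100 runs in the unit hypercube
inputs = qmc.scale(unit_samples, l_bounds, u_bounds)  # physical parameter sets

print(inputs[:3])
```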

  2. Accurate market price formation model with both supply-demand and trend-following for global food prices providing policy recommendations.

    Lagi, Marco; Bar-Yam, Yavni; Bertrand, Karla Z; Bar-Yam, Yaneer

    2015-11-10

    Recent increases in basic food prices are severely affecting vulnerable populations worldwide. Proposed causes such as shortages of grain due to adverse weather, increasing meat consumption in China and India, conversion of corn to ethanol in the United States, and investor speculation on commodity markets lead to widely differing implications for policy. A lack of clarity about which factors are responsible reinforces policy inaction. Here, for the first time to our knowledge, we construct a dynamic model that quantitatively agrees with food prices. The results show that the dominant causes of price increases are investor speculation and ethanol conversion. Models that just treat supply and demand are not consistent with the actual price dynamics. The two sharp peaks in 2007/2008 and 2010/2011 are specifically due to investor speculation, whereas an underlying upward trend is due to increasing demand from ethanol conversion. The model includes investor trend following as well as shifting between commodities, equities, and bonds to take advantage of increased expected returns. Claims that speculators cannot influence grain prices are shown to be invalid by direct analysis of price-setting practices of granaries. Both causes of price increase, speculative investment and ethanol conversion, are promoted by recent regulatory changes: deregulation of the commodity markets, and policies promoting the conversion of corn to ethanol. Rapid action is needed to reduce the impacts of the price increases on global hunger. PMID:26504216

  3. Charge density distributions derived from smoothed electrostatic potential functions: design of protein reduced point charge models.

    Leherte, Laurence; Vercauteren, Daniel P

    2011-10-01

    To generate reduced point charge models of proteins, we developed an original approach to hierarchically locate extrema in charge density distribution functions built from the Poisson equation applied to smoothed molecular electrostatic potential (MEP) functions. A charge fitting program was used to assign charge values to the so-obtained reduced representations. In continuation to a previous work, the Amber99 force field was selected. To easily generate reduced point charge models for protein structures, a library of amino acid templates was designed. Applications to four small peptides, a set of 53 protein structures, and four KcsA ion channel models, are presented. Electrostatic potential and solvation free energy values generated by the reduced models are compared with the corresponding values obtained using the original set of atomic charges. Results are in closer agreement with the original all-atom electrostatic properties than those obtained with a previous reduced model that was directly built from the smoothed MEP functions [Leherte and Vercauteren in J Chem Theory Comput 5:3279-3298, 2009]. PMID:21915750

  4. EXPERIMENTS OF A REDUCED GRID IN LASG/IAP WORLD OCEAN GENERAL CIRCULATION MODELS (OGCMs)

    LIU Xiying; LIU Hailong; ZHANG Xuehong; YU Rucong

    2006-01-01

    Due to the decrease in grid size associated with the convergence of meridians toward the poles in spherical coordinates, the time steps in many global climate models using the finite-difference method are restricted to be unpleasantly small. To overcome this problem, a reduced grid is introduced into the LASG/IAP world ocean general circulation models. The reduced grid is first implemented successfully in the coarser-resolution version of the model, L30T63. It is then carried out in the improved version of the model, LICOM, with finer resolutions. In the experiment with model L30T63, even with the time step unchanged, the execution time per model run is shortened significantly owing to the reduced number of grid points and the reduced filtering work in high latitudes. Results from additional experiments with L30T63 show that the time step of integration can be quadrupled at most in the reduced grid with a refinement ratio of 3. In the experiment with model LICOM, with the model's original time step unchanged, the model domain is extended to the whole globe from its original coverage, with the North Pole grid point treated as an isolated island, and the results of the experiment are shown to be acceptable.

  5. CLASH-VLT: INSIGHTS ON THE MASS SUBSTRUCTURES IN THE FRONTIER FIELDS CLUSTER MACS J0416.1–2403 THROUGH ACCURATE STRONG LENS MODELING

    Grillo, C. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark); Suyu, S. H.; Umetsu, K. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Rosati, P.; Caminha, G. B. [Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Ferrara, Via Saragat 1, I-44122 Ferrara (Italy); Mercurio, A. [INAF - Osservatorio Astronomico di Capodimonte, Via Moiariello 16, I-80131 Napoli (Italy); Balestra, I.; Munari, E.; Nonino, M.; De Lucia, G.; Borgani, S.; Biviano, A.; Girardi, M. [INAF - Osservatorio Astronomico di Trieste, via G. B. Tiepolo 11, I-34143, Trieste (Italy); Lombardi, M. [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, I-20133 Milano (Italy); Gobat, R. [Laboratoire AIM-Paris-Saclay, CEA/DSM-CNRS-Universitè Paris Diderot, Irfu/Service d' Astrophysique, CEA Saclay, Orme des Merisiers, F-91191 Gif sur Yvette (France); Coe, D.; Koekemoer, A. M.; Postman, M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21208 (United States); Zitrin, A. [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); Halkola, A., E-mail: grillo@dark-cosmology.dk; and others

    2015-02-10

    We present a detailed mass reconstruction and a novel study on the substructure properties in the core of the Cluster Lensing And Supernova survey with Hubble (CLASH) and Frontier Fields galaxy cluster MACS J0416.1–2403. We show and employ our extensive spectroscopic data set taken with the VIsible Multi-Object Spectrograph instrument as part of our CLASH-VLT program, to confirm spectroscopically 10 strong lensing systems and to select a sample of 175 plausible cluster members to a limiting stellar mass of log (M {sub *}/M {sub ☉}) ≅ 8.6. We reproduce the measured positions of a set of 30 multiple images with a remarkable median offset of only 0.''3 by means of a comprehensive strong lensing model comprised of two cluster dark-matter halos, represented by cored elliptical pseudo-isothermal mass distributions, and the cluster member components, parameterized with dual pseudo-isothermal total mass profiles. The latter have total mass-to-light ratios increasing with the galaxy HST/WFC3 near-IR (F160W) luminosities. The measurement of the total enclosed mass within the Einstein radius is accurate to ∼5%, including the systematic uncertainties estimated from six distinct mass models. We emphasize that the use of multiple-image systems with spectroscopic redshifts and knowledge of cluster membership based on extensive spectroscopic information is key to constructing robust high-resolution mass maps. We also produce magnification maps over the central area that is covered with HST observations. We investigate the galaxy contribution, both in terms of total and stellar mass, to the total mass budget of the cluster. When compared with the outcomes of cosmological N-body simulations, our results point to a lack of massive subhalos in the inner regions of simulated clusters with total masses similar to that of MACS J0416.1–2403. Our findings of the location and shape of the cluster dark-matter halo density profiles and on the cluster substructures provide

  6. Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination

    Li, Weihua; Sankarasubramanian, A.

    2012-12-01

    Model errors are inevitable in any prediction exercise. One approach that is currently gaining attention in reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies on the premise that optimal weights could be derived for each model so that the developed multimodel predictions will result in improved predictions. A new dynamic approach (MM-1) to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with the optimal model combination scheme (MM-O) by employing them in predicting the streamflow generated from a known hydrologic model (abcd model or VIC model) with heteroscedastic error variance as well as from a hydrologic model that exhibits a different structure than that of the candidate models (i.e., the "abcd" model or the VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that as measurement errors increase, MM-1 assigns weights equally across all the models, whereas MM-O always assigns higher weights to the best-performing candidate model over the calibration period. Applying the multimodel algorithms for predicting streamflows over four different sites revealed that MM-1 performs
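
    To make the idea of model combination concrete, the toy example below derives a single static weight for two synthetic streamflow models by minimizing squared error over a calibration period; it mimics the MM-O scheme only in spirit and does not reproduce the dynamic, predictor-state-conditioned MM-1 weighting. All data are synthetic stand-ins.

```python
# Static two-model combination: choose weight w minimizing calibration MSE.
import numpy as np

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 50.0, size=120)                 # synthetic monthly flows
pred_abcd = obs + rng.normal(0, 15, size=obs.size)   # stand-in for the "abcd" model
pred_vic = obs + rng.normal(5, 10, size=obs.size)    # stand-in for the VIC model

weights = np.linspace(0.0, 1.0, 101)
mse = [np.mean((w * pred_abcd + (1 - w) * pred_vic - obs) ** 2) for w in weights]
w_opt = weights[int(np.argmin(mse))]
print(f"optimal weight on abcd-like model: {w_opt:.2f}")
```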

  7. Approaches for Reduced Order Modeling of Electrically Actuated von Karman Microplates

    Saghir, Shahid

    2016-07-25

    This article presents and compares different approaches to develop reduced order models for nonlinear von Karman rectangular microplates actuated by nonlinear electrostatic forces. The reduced-order models aim to investigate the static and dynamic behavior of the plate under small and large actuation forces. A fully clamped microplate is considered. Different types of basis functions are used in conjunction with the Galerkin method to discretize the governing equations. First, we investigate the convergence with the number of modes retained in the model. Then, for validation purposes, a comparison of the static results is made with the results calculated by a nonlinear finite element model. The linear eigenvalue problem for the plate under the electrostatic force is solved for a wide range of voltages up to pull-in. Results among the various reduced-order models are compared and are also validated against results of the finite-element model. Further, the reduced order models are employed to capture the forced dynamic response of the microplate under small and large vibration amplitudes. Comparisons of the different approaches are made for this case. Keywords: electrically actuated microplates, static analysis, dynamics of microplates, diaphragm vibration, large amplitude vibrations, nonlinear dynamics

  8. Strategies for reducing the climate noise in model simulations: ensemble runs versus a long continuous run

    Decremer, Damien; Chung, Chul E.; Räisänen, Petri

    2015-03-01

    Climate modelers often integrate the model with constant forcing over a long time period, and make an average over the period in order to reduce climate noise. If the time series is persistent, as opposed to rapidly varying, such an average does not reduce noise efficiently. In this case, ensemble runs, which ideally represent independent runs, can reduce noise more efficiently. We quantify the noise reduction gained by using ensemble runs instead of a long continuous run in constant-forcing simulations. We find that in terms of the amplitude of the noise, a continuous simulation of 30 years may be equivalent to as few as five 3-year long ensemble runs in a slab ocean-atmosphere coupled model and as few as two 3-year long ensemble runs in a fully coupled model. The outperformance of ensemble runs over a continuous run is strictly a function of the persistence of the time series. We find that persistence depends on model, location and variable, and that persistence in surface air temperature has robust spatial structures in coupled models. We demonstrate that lag-1 year autocorrelation represents persistence fairly well, but that using lag-1 through lag-5 year autocorrelations represents the persistence far more adequately. Furthermore, there is more persistence in coupled model output than in the output of a first-order autoregressive model with the same lag-1 autocorrelation.
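
    The persistence diagnostic discussed above can be checked with a few lines of code; the sketch below computes lag-1 through lag-5 autocorrelations for a synthetic AR(1) series standing in for an annual-mean model output. The AR(1) coefficient and series length are arbitrary assumptions.

```python
# Lag-1 to lag-5 autocorrelations of a persistent (AR(1)) surrogate time series.
import numpy as np

rng = np.random.default_rng(7)
phi, n_years = 0.6, 300
x = np.zeros(n_years)
for t in range(1, n_years):        # AR(1) surrogate for a persistent climate series
    x[t] = phi * x[t - 1] + rng.normal()

def autocorr(series, lag):
    a, b = series[:-lag], series[lag:]
    return np.corrcoef(a, b)[0, 1]

for lag in range(1, 6):
    print(f"lag-{lag} autocorrelation: {autocorr(x, lag):.2f}")
```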

  9. REDUCING PROCESS VARIABILITY BY USING DMAIC MODEL: A CASE STUDY IN BANGLADESH

    Ripon Kumar Chakrabortty

    2013-03-01

    Full Text Available Nowadays, many leading manufacturing industries have started to practice Six Sigma and Lean manufacturing concepts to boost their productivity as well as product quality. In this paper, the Six Sigma approach has been used to reduce the process variability of a food processing industry in Bangladesh. The DMAIC (Define, Measure, Analyze, Improve, and Control) model has been used to implement the Six Sigma philosophy. The five phases of the model have been carried out step by step. Different tools of Total Quality Management, Statistical Quality Control, and Lean Manufacturing, such as Quality Function Deployment, the P control chart, the fishbone diagram, the Analytic Hierarchy Process, and Pareto analysis, have been used in the different phases of the DMAIC model. Process variability has been reduced by identifying and addressing the root causes of defects. The ultimate goal of this study is to make the process lean and increase the level of sigma.

  10. Novel Framework for Reduced Order Modeling of Aero-engine Components

    Safi, Ali

    The present study focuses on the popular dynamic reduction methods used in the design of complex assemblies (millions of degrees of freedom) where numerous iterations are involved to achieve the final design. Aerospace manufacturers such as Rolls Royce and Pratt & Whitney are actively seeking techniques that reduce computational time while maintaining accuracy of the models. This involves modal analysis of components with complex geometries to determine the dynamic behavior due to non-linearity and complicated loading conditions. In such a case the sub-structuring and dynamic reduction techniques prove to be an efficient tool to reduce design cycle time. The components whose designs are finalized can be dynamically reduced to mass and stiffness matrices at the boundary nodes in the assembly. These matrices conserve the dynamics of the component in the assembly, and thus avoid repeated calculations during the analysis runs for design modification of other components. This thesis presents a novel framework for the modeling and meshing of any complex structure, in this case an aero-engine casing. In this study, the effect of meshing techniques on the run time is highlighted. The modal analysis is carried out using an extremely fine mesh to ensure all minor details in the structure are captured correctly in the Finite Element (FE) model. This is used as the reference model, to compare against the results of the reduced model. The study also shows the conditions/criteria under which dynamic reduction can be implemented effectively, proving the accuracy of the Craig-Bampton (C.B.) method and the limitations of static condensation. The study highlights the longer runtime needed to produce the reduced matrices of components compared to the overall runtime of the complete unreduced model. Once the components are reduced, however, the assembly runs are significantly faster. Hence the decision to use Component Mode Synthesis (CMS) is to be taken judiciously considering the number of

  11. Design and implementation of linear controllers for the active control of reduced models of thin-walled structures

    Ghareeb, Nader

    2013-01-01

    The main objectives of this work are twofold: 1.) to create reduced models of smart structures that are fully representative and 2.) to design different linear controllers and implement them into the active control of these reduced models. After a short introduction to the theory of piezoelectricity, the reduced model (super element model) is created starting from the finite element model. Damping properties are also calculated and added to the model. The relation between electrical and mecha...

  12. Non-reference Objective Quality Evaluation for Noise-Reduced Speech Using Overall Quality Estimation Model

    Yamada, Takeshi; Kasuya, Yuki; Shinohara, Yuki; KITAWAKI, Nobuhiko

    2010-01-01

    This paper describes non-reference objective quality evaluation for noise-reduced speech. First, a subjective test is conducted in accordance with ITU-T Rec. P.835 to obtain the speech quality, the noise quality, and the overall quality of noise-reduced speech. Based on the results, we then propose an overall quality estimation model. The unique point of the proposed model is that the estimation of the overall quality is done only using the previously estimated speech quality and noise qualit...

  13. Dynamic Modeling of the Human Coagulation Cascade Using Reduced Order Effective Kinetic Models

    Adithya Sagar

    2015-03-01

    Full Text Available In this study, we present a novel modeling approach which combines ordinary differential equation (ODE) modeling with logical rules to simulate an archetype biochemical network, the human coagulation cascade. The model consisted of five differential equations augmented with several logical rules describing regulatory connections between model components, and unmodeled interactions in the network. This formulation was more than an order of magnitude smaller than current coagulation models, because many of the mechanistic details of coagulation were encoded as logical rules. We estimated an ensemble of likely model parameters (N = 20) from in vitro extrinsic coagulation data sets, with and without inhibitors, by minimizing the residual between model simulations and experimental measurements using particle swarm optimization (PSO). Each parameter set in our ensemble corresponded to a unique particle in the PSO. We then validated the model ensemble using thrombin data sets that were not used during training. The ensemble predicted thrombin trajectories for conditions not used for model training, including thrombin generation for normal and hemophilic coagulation in the presence of platelets (a significant unmodeled component). We then used flux analysis to understand how the network operated in a variety of conditions, and global sensitivity analysis to identify which parameters controlled the performance of the network. Taken together, the hybrid approach produced a surprisingly predictive model given its small size, suggesting the proposed framework could also be used to dynamically model other biochemical networks, including intracellular metabolic networks, gene expression programs or potentially even cell free metabolic systems.
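
    The following toy example (not the authors' coagulation model) shows the general flavor of combining ODEs with logical rules: a downstream production term is switched on only when a trigger species crosses a hypothetical threshold, standing in for an unmodeled regulatory connection. All species, rates, and the threshold are made up for illustration.

```python
# Hybrid ODE + logical-rule sketch: the rule is encoded as a switch in the RHS.
import numpy as np
from scipy.integrate import solve_ivp

THRESHOLD = 0.5   # hypothetical activation threshold for the logical rule

def rhs(t, y):
    a, b = y
    rule_on = 1.0 if a > THRESHOLD else 0.0   # logical rule: fires once 'a' is high enough
    da = 0.8 - 0.5 * a                        # slow build-up of the trigger species
    db = 2.0 * rule_on - 0.3 * b              # downstream species produced only when rule fires
    return [da, db]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], max_step=0.05)
print(f"final state: a={sol.y[0, -1]:.2f}, b={sol.y[1, -1]:.2f}")
```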

  14. Modelling mitigation options to reduce diffuse nitrogen water pollution from agriculture.

    Bouraoui, Fayçal; Grizzetti, Bruna

    2014-01-15

    Agriculture is responsible for large scale water quality degradation and is estimated to contribute around 55% of the nitrogen entering the European Seas. The key policy instrument for protecting inland, transitional and coastal water resources is the Water Framework Directive (WFD). Reducing nutrient losses from agriculture is crucial to the successful implementation of the WFD. There are several mitigation measures that can be implemented to reduce nitrogen losses from agricultural areas to surface and ground waters. For the selection of appropriate measures, models are useful for quantifying the expected impacts and the associated costs. In this article we review some of the models used in Europe to assess the effectiveness of nitrogen mitigation measures, ranging from fertilizer management to the construction of riparian areas and wetlands. We highlight how the complexity of models is correlated with the type of scenarios that can be tested, with conceptual models mostly used to evaluate the impact of reduced fertilizer application, and physically-based models used to evaluate the timing and location of mitigation options and the response times. We underline the importance of considering the lag time between the implementation of measures and effects on water quality. Models can be effective tools for targeting mitigation measures (identifying critical areas and timing), for evaluating their cost effectiveness, and for taking into consideration pollution swapping and potential trade-offs between contrasting environmental objectives. Models are also useful for involving stakeholders during the development of catchment mitigation plans, increasing their acceptability. PMID:23998504

  15. Reduced Chern-Simons Quiver Theories and Cohomological 3-Algebra Models

    DeBellis, Joshua

    2013-01-01

    We study the BPS spectrum and vacuum moduli spaces in dimensional reductions of Chern-Simons-matter theories with N>=2 supersymmetry to zero dimensions. Our main example is a matrix model version of the ABJM theory which we relate explicitly to certain reduced 3-algebra models. We find the explicit maps from Chern-Simons quiver matrix models to dual IKKT matrix models. We address the problem of topologically twisting the ABJM matrix model, and along the way construct a new twist of the IKKT model. We construct a cohomological matrix model whose partition function localizes onto a moduli space specified by 3-algebra relations which live in the double of the conifold quiver. It computes an equivariant index enumerating framed BPS states with specified R-charges which can be expressed as a combinatorial sum over certain filtered pyramid partitions.

  16. Formulation of Japanese consensus-building model for HLW geological disposal site determination. 4. The influence of the accurate information on the decision making

    An investigation has been carried out into how accurate scientific information affects the perception of risk. To support this investigation, dialogue seminars have been held. Based upon the outcomes of these investigations, an attribution analysis was performed to identify the factors affecting risk perception and acceptance relevant to consensus-building for HLW geological disposal site determination. (author)

  17. Dynamic energy conservation model REDUCE. Extension with experience curves, energy efficiency indicators and user's guide

    The main objective of the energy conservation model REDUCE (Reduction of Energy Demand by Utilization of Conservation of Energy) is the evaluation of the effectiveness of economical, financial, institutional, and regulatory measures for improving the rational use of energy in end-use sectors. This report presents the results of additional model development activities, partly based on the first experiences in a previous project. Energy efficiency indicators have been added as an extra tool for output analysis in REDUCE. The methodology is described and some examples are given. The model has been extended with a method for modelling the effects of technical development on production costs, by means of an experience curve. Finally, the report provides a 'user's guide', describing in more detail the input data specification as well as all menus and buttons. 19 refs

  18. Stochastic reduced order models for uncertainty quantification of intergranular corrosion rates

    Highlights: •A reduced-order model for uncertainty quantification in corrosive systems is shown. •Uncertainty in current density due to randomness in electrode sites is determined. •The proposed model achieves convergence with far fewer samples than Monte-Carlo. •Computation of correlation between different electrochemical quantities is demonstrated. -- Abstract: We present a stochastic reduced order model (SROM) approach for quantifying uncertainty in systems undergoing corrosion. A SROM is a simple random element with a small number of samples that approximates the statistics of another target random element. The parameters of a SROM are selected through an optimization problem. SROMs can be used to propagate uncertainty through a mathematical model of a corroding system in the same way as in Monte Carlo methods. We use SROMs to estimate the statistics of corrosion current density, considering randomness in anode–cathode sizes. We compare the performance of SROMs against the more common Monte-Carlo approach
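
    A bare-bones illustration of the SROM idea, under assumed target statistics rather than the paper's corrosion data: for a small, fixed set of samples, probabilities are chosen by optimization so that the SROM matches the mean and variance of a target random variable (here a lognormal stand-in for anode-cathode size variability).

```python
# Toy SROM construction: optimize sample probabilities to match target moments.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

target = lognorm(s=0.4, scale=1.0)
samples = np.linspace(0.4, 2.5, 8)               # the SROM's small sample set
t_mean, t_var = target.mean(), target.var()

def mismatch(p):
    m = np.sum(p * samples)
    v = np.sum(p * (samples - m) ** 2)
    return (m - t_mean) ** 2 + (v - t_var) ** 2

p0 = np.full(samples.size, 1.0 / samples.size)
res = minimize(mismatch, p0, method="SLSQP",
               bounds=[(0.0, 1.0)] * samples.size,
               constraints={"type": "eq", "fun": lambda p: np.sum(p) - 1.0})
print(np.round(res.x, 3))                        # optimized SROM probabilities
```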

  19. On the Nonlinear Structural Analysis of Wind Turbine Blades using Reduced Degree-of-Freedom Models

    Holm-Jørgensen, Kristian; Larsen, Jesper Winther; Nielsen, Søren R.K.

    2008-01-01

    Wind turbine blades are increasing in magnitude without a proportional increase of stiffness for which reason geometrical and inertial nonlinearities become increasingly important. Often these effects are analysed using a nonlinear truncated expansion in undamped fixed base mode shapes of a blade......, modelling geometrical and inertial nonlinear couplings in the fundamental flap and edge direction. The purpose of this article is to examine the applicability of such a reduced-degree-of-freedom model in predicting the nonlinear response and stability of a blade by comparison to a full model based on a...... nonlinear co-rotating FE formulation. By use of the reduced-degree-of-freedom model it is shown that under strong resonance excitation of the fundamental flap or edge modes, significant energy is transferred to higher modes due to parametric or nonlinear coupling terms, which influence the response and...

  20. Methodology for Constructing Reduced-Order Power Block Performance Models for CSP Applications: Preprint

    Wagner, M.

    2010-10-01

    The inherent variability of the solar resource presents a unique challenge for CSP systems. Incident solar irradiation can fluctuate widely over a short time scale, but plant performance must be assessed for long time periods. As a result, annual simulations with hourly (or sub-hourly) timesteps are the norm in CSP analysis. A highly detailed power cycle model provides accuracy but tends to suffer from prohibitively long run-times; alternatively, simplified empirical models can run quickly but don't always provide enough information, accuracy, or flexibility for the modeler. The ideal model for feasibility-level analysis incorporates both the detail and accuracy of a first-principle model with the low computational load of a regression model. The work presented in this paper proposes a methodology for organizing and extracting information from the performance output of a detailed model, then using it to develop a flexible reduced-order regression model in a systematic and structured way. A similar but less generalized approach for characterizing power cycle performance and a reduced-order modeling methodology for CFD analysis of heat transfer from electronic devices have been presented. This paper builds on these publications and the non-dimensional approach originally described.
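
    In the spirit of the methodology described above, the sketch below fits a low-order polynomial response surface to samples from a "detailed" power-cycle model, here replaced by a made-up function of heat-transfer-fluid and ambient temperatures; the variable names, ranges, and coefficients are assumptions for illustration only.

```python
# Reduced-order regression of power-cycle output for fast annual simulation.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(11)
T_htf = rng.uniform(500.0, 600.0, 400)       # HTF inlet temperature [C] (assumed range)
T_amb = rng.uniform(0.0, 45.0, 400)          # ambient temperature [C]
X = np.column_stack([T_htf, T_amb])
# Stand-in for the detailed model's net electric output [MWe].
W_net = 0.11 * T_htf - 0.25 * T_amb - 1e-4 * (T_htf - 550.0) ** 2 + rng.normal(0, 0.2, 400)

rom = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rom.fit(X, W_net)
print(rom.predict([[565.0, 25.0]]))          # fast surrogate evaluation
```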

  1. Implementation of a Diabetes Educator Care Model to Reduce Paediatric Admission for Diabetic Ketoacidosis

    Asma Deeb; Hana Yousef; Layla Abdelrahman; Mary Tomy; Shaker Suliman; Salima Attia; Hana Al Suwaidi

    2016-01-01

    Introduction. Diabetic Ketoacidosis (DKA) is a serious complication that can be life-threatening. Management of DKA needs admission in a specialized center and imposes major constraints on hospital resources. Aim. We plan to study the impact of adapting a diabetes-educator care model on reducing the frequency of hospital admission of children and adolescents presenting with DKA. Method. We have proposed a model of care led by diabetes educators for children and adolescents with diabetes. The ...

  2. Reduced Effective Model for Condensation in Slender Tubes with Rotational Symmetry, Obtained by Generalized Dimensional Analysis

    Dziubek, Andrea

    2011-01-01

    Experimental results for condensation in compact heat exchangers show that the heat transfer due to condensation is significantly better compared to classical heat exchangers, especially when using R134a instead of water as the refrigerant. This suggests that surface tension plays a role. Using generalized dimensional analysis we derive reduced model equations and jump conditions for condensation in a vertical tube with cylindrical cross section. Based on this model we derive a single ordinar...

  3. Newly developed integrated model to reduce risks in the electricity market

    A new model which integrates hydro-scheduling and financial hedging has been developed in cooperation with Norsk Hydro. We believe the new tool will be useful for owners of hydropower plants that want to reduce risks in the power market. The model development started in 1997 and was financed by Norsk Hydro. As of 1998, the main financial contributor has been the Research Council of Norway through a project in the Strategic Institute Programme. (author)

  4. Fast procedure for reconstruction of full-atom protein models from reduced representations

    Rotkiewicz, Piotr; Skolnick, Jeffrey

    2008-01-01

    We introduce PULCHRA, a fast and robust method for the reconstruction of full-atom protein models starting from a reduced protein representation. The algorithm is particularly suitable as an intermediate step between coarse grained model-based structure prediction and applications requiring an all-atom structure, such as molecular dynamics, protein-ligand docking, structure-based function prediction, or assessment of quality of the predicted structure. The accuracy of the method was tested on...

  5. Modal testing and finite element modelling of a reduced-sized tyre for rolling contact investigation

    ZHANG, Yuan-Fang; Cesbron, Julien; BERENGIER, Michel; YIN, Hai Ping

    2015-01-01

    One of the main contributors to the generation of tyre/road noise is the vibrational mechanism. The understanding of the latter requires both numerical modelling of the tyre/road contact problem under rolling conditions and experimental validation. The use of a go-kart tyre presents advantages in comparison with a standard tyre due to its simpler structure for modelling and its reduced size that facilitates experimental studies in laboratory. Modal testing has first been performed on such a t...

  6. Caries risk assessment in school children using a reduced Cariogram model without saliva tests

    Twetman Svante; Isberg Per-Erik; Petersson Gunnel

    2010-01-01

    Abstract Background To investigate the caries predictive ability of a reduced Cariogram model without salivary tests in schoolchildren. Methods The study group consisted of 392 school children, 10-11 years of age, who volunteered after informed consent. A caries risk assessment was made at baseline with aid of the computer-based Cariogram model and expressed as "the chance of avoiding caries" and the children were divided into five risk groups. The caries increment (ΔDMFS) was extracted from ...

  7. Reduced animal use in efficacy testing in disease models with use of sequential experimental designs.

    Waterton JC, Middleton BJ, Pickford R, Allott CP, Checkley D, Keith RA.

    2000-01-01

    Although the use of animals in efficacy tests has declined substantially, there remains a small number of well-documented disease models which provide essential information about the efficacy of new compounds. Such models are typically used after extensive in vitro testing, to evaluate small numbers of compounds and to select the most promising agents for clinical trial in humans. The aim of this study was to reduce the number of animals required to achieve valid results, without compromising...

  8. A Reduced-Order Model of Transport Phenomena for Power Plant Simulation

    Paul Cizmas; Brian Richardson; Thomas Brenner; Raymond Fontenot

    2009-09-30

    A reduced-order model based on proper orthogonal decomposition (POD) has been developed to simulate transient two- and three-dimensional isothermal and non-isothermal flows in a fluidized bed. Reduced-order models of void fraction, gas and solids temperatures, granular energy, and z-direction gas and solids velocity have been added to the previous version of the code. These algorithms are presented and their implementation is discussed. Verification studies are presented for each algorithm. A number of methods to accelerate the computations performed by the reduced-order model are presented. The errors associated with each acceleration method are computed and discussed. Using a combination of acceleration methods, a two-dimensional isothermal simulation using the reduced-order model is shown to be 114 times faster than using the full-order model. In pursuing the objectives of the project and completing the tasks planned for this program, several unplanned and unforeseen results, methods and studies have been generated. These additional accomplishments are also presented and they include: (1) a study of the effect of snapshot sampling time on the computation of the POD basis functions, (2) an investigation of different strategies for generating the autocorrelation matrix used to find the POD basis functions, (3) the development and implementation of a bubble detection and tracking algorithm based on mathematical morphology, (4) a method for augmenting the proper orthogonal decomposition to better capture flows with discontinuities, such as bubbles, and (5) a mixed reduced-order/full-order model, called point-mode proper orthogonal decomposition, designed to avoid unphysical results due to approximation errors. The limitations of the proper orthogonal decomposition method in simulating transient flows with moving discontinuities, such as bubbling flows, are discussed and several methods are proposed to adapt the method for future use.
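
    Item (3) above refers to morphology-based bubble detection; the following small sketch (not the project's algorithm) thresholds a synthetic void-fraction field, cleans it with a morphological opening, and labels connected bubble regions with SciPy. The field, threshold, and structuring element are all illustrative assumptions.

```python
# Bubble detection via mathematical morphology on a void-fraction field.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
void_fraction = np.clip(rng.normal(0.45, 0.05, (64, 64)), 0, 1)
void_fraction[20:30, 15:25] = 0.85           # synthetic bubble
void_fraction[40:48, 40:52] = 0.80           # another synthetic bubble

bubbles = void_fraction > 0.7                # bubble = locally high void fraction (assumed cutoff)
bubbles = ndimage.binary_opening(bubbles, structure=np.ones((3, 3)))
labels, n_bubbles = ndimage.label(bubbles)

centroids = ndimage.center_of_mass(bubbles, labels, range(1, n_bubbles + 1))
print(f"{n_bubbles} bubbles detected at {np.round(centroids, 1)}")
```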

  9. A Study of the Equivalence of the BLUEs between a Partitioned Singular Linear Model and Its Reduced Singular Linear Models

    Bao Xue ZHANG; Bai Sen LIU; Chang Yu LU

    2004-01-01

    Consider the partitioned linear regression model A = (y, X1β1 + X2β2, σ²V) and its four reduced linear models, where y is an n × 1 observable random vector with E(y) = Xβ and dispersion matrix Var(y) = σ²V, where σ² is an unknown positive scalar, V is an n × n known symmetric nonnegative definite matrix, X = (X1 : X2) is an n × (p+q) known design matrix with rank(X) = r ≤ (p+q), and β = (β'1 : β'2)' with β1 and β2 being p × 1 and q × 1 vectors of unknown parameters, respectively. In this article the formulae for the differences between the best linear unbiased estimators of M2X1β1 under the model A and its best linear unbiased estimators under the reduced linear models of A are given, where M2 = I - X2X2+. Furthermore, the necessary and sufficient conditions for the equalities between the best linear unbiased estimators of M2X1β1 under the model A and those under its reduced linear models are established. Lastly, we also study the connections between the model A and its linear transformation model.

  10. Technical Note: Reducing the spin-up time of integrated surface water–groundwater models

    Ajami, H.

    2014-12-12

    One of the main challenges in the application of coupled or integrated hydrologic models is specifying a catchment\\'s initial conditions in terms of soil moisture and depth-to-water table (DTWT) distributions. One approach to reducing uncertainty in model initialization is to run the model recursively using either a single year or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of the spin-up procedure by using a combination of model simulations and an empirical DTWT function. The methodology is examined across two distinct catchments located in a temperate region of Denmark and a semi-arid region of Australia. Our results illustrate that the hybrid approach reduced the spin-up period required for an integrated groundwater–surface water–land surface model (ParFlow.CLM) by up to 50%. To generalize results to different climate and catchment conditions, we outline a methodology that is applicable to other coupled or integrated modeling frameworks when initialization from an equilibrium state is required.

  11. Mechanical disequilibria in two-phase flow models: approaches by relaxation and by a reduced model

    This thesis deals with hyperbolic models for the simulation of compressible two-phase flows, to find alternatives to the classical bi-fluid model. We first establish a hierarchy of two-phase flow models, obtained according to equilibrium hypotheses between the physical variables of each phase. The use of Chapman-Enskog expansions enables us to link the different existing models to each other. Moreover, models that take into account small physical disequilibria are obtained by means of expansions to first order. The second part of this thesis focuses on the simulation of flows featuring velocity disequilibrium and pressure equilibrium, in two different ways. First, a two-velocity two-pressure model is used, where non-instantaneous velocity and pressure relaxations are applied so that a balancing of these variables is obtained. A new one-velocity one-pressure dissipative model is then proposed, where second-order terms enable us to take into account disequilibria between the phase velocities. We develop a numerical method based on a fractional step approach for this model. (author)

  12. Who's afraid of reduced-rank parameterizations of multivariate models? Theory and example

    Gilbert, S.; Zemčík, Petr

    -, č. 223 (2004), s. 1-32. ISSN 1211-3298 R&D Projects: GA AV ČR KSK8002119 Institutional research plan: CEZ:AV0Z7085904 Keywords : reduced-rank parameterizations * multivariate models Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp223.pdf

  13. Who's afraid of reduced-rank parameterizations of multivariate models? Theory and example

    Gilbert, S.; Zemčík, Petr

    2006-01-01

    Roč. 97, č. 4 (2006), s. 925-945. ISSN 0047-259X Institutional research plan: CEZ:MSM0021620846 Keywords : multivariate model * coefficient matrix * reduced rank Subject RIV: AH - Economics Impact factor: 0.763, year: 2006 http://dx.doi.org/10.1016/j.jmva.2005.10.002

  14. Angra dos Reis nuclear power plant. Water intake. Hydraulic studies in reduced models

    This paper summarizes the first exploration stage of the reduced model of the cooling water intake at the Angra dos Reis nuclear power plant. The results of wave measurements during the analysis without protection works, both adjacent to and inside the water intake units, are presented. (C.G.C.)

  15. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cogliati, Joshua J. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Talbot, Paul W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rinaldi, Ivan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Dan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhao, Haihua [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs are required. An important aspect is that even though computational power is regularly growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution that is being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on the use of reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much shorter time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several RISMC types of analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
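
    As a hedged sketch of the surrogate-model idea (not the RAVEN implementation), the example below trains a Gaussian-process surrogate on a handful of runs of a toy "expensive code" and then evaluates it cheaply for many input samples; the toy function, input range, and kernel choice are assumptions.

```python
# Gaussian-process surrogate of an expensive simulation code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_code(x):
    # Stand-in for a long-running simulation: peak response vs. a scalar input.
    return 600.0 + 80.0 * np.sin(3.0 * x) + 20.0 * x

X_train = np.linspace(0.0, 2.0, 12).reshape(-1, 1)    # a dozen "simulation runs"
y_train = expensive_code(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

X_query = np.random.default_rng(2).uniform(0.0, 2.0, (10000, 1))  # cheap surrogate sweeps
mean, std = gp.predict(X_query, return_std=True)
print(f"max predicted response: {mean.max():.1f} (+/- {std[mean.argmax()]:.1f})")
```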

  16. Accurately measuring dynamic coefficient of friction in ultraform finishing

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
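
    Preston's equation mentioned above can be illustrated with a back-of-the-envelope calculation; the numbers below are placeholders rather than UFF process data, and the Preston coefficient is assumed to absorb the measured coefficient of friction.

```python
# Preston's equation, dz/dt = Kp * p * v, applied with illustrative values.
Kp = 4.0e-13      # Preston coefficient [m^2/N] (illustrative)
pressure = 2.0e4  # contact pressure [Pa]
velocity = 5.0    # belt surface speed [m/s]
dwell = 30.0      # dwell time over the spot [s]

removal_rate = Kp * pressure * velocity       # [m/s]
removal_depth = removal_rate * dwell          # [m]
print(f"removal rate: {removal_rate*1e9:.2f} nm/s, depth: {removal_depth*1e6:.2f} um")
```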

  17. Technical Note: Reducing the spin-up time of integrated surface water–groundwater models

    H. Ajami

    2014-06-01

    One of the main challenges in catchment scale application of coupled/integrated hydrologic models is specifying a catchment's initial conditions in terms of soil moisture and depth to water table (DTWT) distributions. One approach to reduce uncertainty in model initialization is to run the model recursively using a single or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of spin-up time for an integrated groundwater-surface water-land surface model (ParFlow.CLM) by using a combination of ParFlow.CLM simulations and an empirical DTWT function. The methodology is examined in two catchments located in the temperate and semi-arid regions of Denmark and Australia, respectively. Our results illustrate that the hybrid approach reduced the spin-up time required by ParFlow.CLM by up to 50%, and we outline a methodology that is applicable to other coupled/integrated modelling frameworks when initialization from equilibrium state is required.
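
    The recursive spin-up that the hybrid approach shortens can be sketched as a loop that re-runs one year of forcing until catchment storage stops changing between passes. The function names, convergence criterion and tolerance below are illustrative assumptions, not the ParFlow.CLM interface.

```python
def spin_up(model_state, run_one_year, storage_of, tol=1e-3, max_years=100):
    """Re-run a single year of forcing until the catchment equilibrates.

    `run_one_year` and `storage_of` are hypothetical wrappers around the
    hydrologic model; the relative change in total subsurface storage is
    used here as one common choice of equilibrium criterion.
    """
    previous = storage_of(model_state)
    for year in range(1, max_years + 1):
        model_state = run_one_year(model_state)
        current = storage_of(model_state)
        if abs(current - previous) / abs(previous) < tol:
            return model_state, year          # equilibrated
        previous = current
    return model_state, max_years             # not converged within budget
```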

  18. Technical Note: Reducing the spin-up time of integrated surface water–groundwater models

    Ajami, H.

    2014-06-26

    One of the main challenges in catchment scale application of coupled/integrated hydrologic models is specifying a catchment's initial conditions in terms of soil moisture and depth to water table (DTWT) distributions. One approach to reduce uncertainty in model initialization is to run the model recursively using a single or multiple years of forcing data until the system equilibrates with respect to state and diagnostic variables. However, such "spin-up" approaches often require many years of simulations, making them computationally intensive. In this study, a new hybrid approach was developed to reduce the computational burden of spin-up time for an integrated groundwater-surface water-land surface model (ParFlow.CLM) by using a combination of ParFlow.CLM simulations and an empirical DTWT function. The methodology is examined in two catchments located in the temperate and semi-arid regions of Denmark and Australia, respectively. Our results illustrate that the hybrid approach reduced the spin-up time required by ParFlow.CLM by up to 50%, and we outline a methodology that is applicable to other coupled/integrated modelling frameworks when initialization from equilibrium state is required.

  19. Strong and weak constraint variational assimilations for reduced order fluid flow modeling

    In this work we propose and evaluate two variational data assimilation techniques for the estimation of low order surrogate experimental dynamical models for fluid flows. Both methods are built from optimal control recipes and rely on proper orthogonal decomposition and a Galerkin projection of the Navier-Stokes equations. The techniques proposed differ in the control variables they involve. The first introduces a weak dynamical model, defined only up to an additional time-dependent uncertainty function, whereas the second handles a strong dynamical constraint in which the dynamical system’s coefficients constitute the control variables. The two choices correspond to different approximations of the relation between the reduced basis on which the motion field is expressed and the basis components that have been neglected in the reduced order model construction. The techniques have been assessed on numerical data and under real experimental conditions with noisy particle image velocimetry data.
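
    A minimal sketch of the proper orthogonal decomposition step that both techniques build on, assuming velocity snapshots are stacked as columns of a matrix; the snapshot data and truncation rank are placeholders. A Galerkin projection of the Navier-Stokes equations onto the retained modes then yields the low-order system whose coefficients the strong-constraint method treats as control variables.

```python
import numpy as np

def pod_basis(snapshots, rank):
    """Proper orthogonal decomposition of a snapshot matrix.

    `snapshots` holds one flattened velocity field per column; the leading
    left singular vectors give the energy-ranked spatial modes and the
    projection of the fluctuations onto them gives the temporal coefficients.
    """
    mean_flow = snapshots.mean(axis=1, keepdims=True)
    fluctuations = snapshots - mean_flow
    U, s, _ = np.linalg.svd(fluctuations, full_matrices=False)
    modes = U[:, :rank]                    # spatial POD modes
    coeffs = modes.T @ fluctuations        # temporal coefficients
    return mean_flow, modes, coeffs, s

# Toy snapshot matrix: 500 grid points, 40 time instants (assumed sizes).
rng = np.random.default_rng(1)
snapshots = rng.standard_normal((500, 40))
mean_flow, modes, coeffs, s = pod_basis(snapshots, rank=5)
print("captured energy fraction:", (s[:5] ** 2).sum() / (s ** 2).sum())
```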

  20. Reduced Order Model of a Spouted Fluidized Bed Utilizing Proper Orthogonal Decomposition

    Beck-Roth, Stephanie R.

    2011-07-01

    A reduced order model utilizing proper orthogonal decomposition is developed and presented for approximating gas and solids velocities as well as pressure, solids granular temperature and gas void fraction in multiphase incompressible fluidized beds. The methodology is then tested on data representing a flat-bottom spouted fluidized bed, and comparative results against the software Multiphase Flow with Interphase eXchanges (MFIX) are provided. The governing equations for the model development are based upon those implemented in the MFIX software. The three reduced order models explored are projective, extrapolative and interpolative. The first is an extension of the system solution beyond an original time sequence. The second is a numerical approximation to a new solution based on a small selected parameter deviation from an existing CFD data set. Finally, the interpolative methodology approximates a solution between two existing CFD data sets that differ in a single parameter.

  1. Variance components for survival of piglets at farrowing using a reduced animal model

    Brotherstone, Sue

    2006-06-01

    Farrowing survival is usually analysed as a trait of the sow, but this precludes estimation of any direct genetic effects associated with individual piglets. In order to estimate these effects, which are particularly important for sire lines, it is necessary to fit an animal model. However, this can be computationally very demanding. We show how direct and maternal genetic effects can be estimated with a simpler analysis based on the reduced animal model and we illustrate the method using farrowing survival information on 118 193 piglets in 10 314 litters. We achieve a 30% reduction in computing time and a 70% reduction in memory use, with no important loss of accuracy. This use of the reduced animal model is not only of interest for pig breeding but also for poultry and fish breeding where large full-sib families are performance tested.

  2. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model

    Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on to...

  3. Accurate Weather Forecasting for Radio Astronomy

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
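
    A minimal sketch of the layer-by-layer step described above, assuming per-layer absorption coefficients have already been derived from the forecast profiles; the layer values, thicknesses and temperatures below are placeholders, not output of Liebe's model.

```python
import numpy as np

def zenith_opacity_and_brightness(alpha, temperature, dz, t_background=2.73):
    """Sum layer opacities and integrate the downwelling radio brightness.

    alpha       : absorption coefficient in each layer [nepers/km]
    temperature : physical temperature of each layer [K]
    dz          : layer thickness [km]; layers ordered from the ground upward
    """
    tau_layers = alpha * dz
    tau_total = tau_layers.sum()
    # Opacity between each layer and the ground-based observer.
    tau_below = np.concatenate(([0.0], np.cumsum(tau_layers)))[:-1]
    # Plane-parallel radiative transfer: each layer emits and is attenuated by
    # everything beneath it; the cosmic background is attenuated by the whole
    # atmosphere.
    t_sky = np.sum(temperature * (1.0 - np.exp(-tau_layers)) * np.exp(-tau_below))
    t_sky += t_background * np.exp(-tau_total)
    return tau_total, t_sky

# Placeholder 60-layer profile (illustrative values only).
alpha = np.full(60, 2.0e-4)                     # nepers/km
temperature = np.linspace(288.0, 217.0, 60)     # K
tau, t_sky = zenith_opacity_and_brightness(alpha, temperature, dz=0.333)
print(f"zenith opacity = {tau:.4f} nepers, sky brightness = {t_sky:.2f} K")
```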

  4. Escitalopram reduces increased hippocampal cytogenesis in a genetic rat depression model

    Petersén, Asa; Wörtwein, Gitta; Gruber, Susanne H M;

    2008-01-01

    separation, (3) reduced by escitalopram treatment in maternally separated animals to the level found in non-separated animals. These results argue against the prevailing hypothesis that adult cytogenesis is reduced in depression and that the common mechanism underlying antidepressant treatments is to increase adult cytogenesis. The results also point to the importance of using a disease model and not healthy animals for testing effects of potential treatments for human depression and suggest other cellular mechanisms of action than those that had previously been proposed for escitalopram.

  5. Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models

    Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.

    2012-12-01

    The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production Model for California) and WEAP (Water Evaluation and Planning System), a climate-driven hydrological model. The integration of the models is performed using a step function approximation of water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. In order to do so, a modified version of SWAP called SWEAP was developed that has the Planning Area delimitations of WEAP, a Maximum Entropy Model to estimate evenly sized steps (tranches) of water-derived demand functions, and the translation of water tranches into crop land. In addition, a modified version of WEAP called ECONWEAP was created with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP vs. ECONWEAP, as well as an assessment of the approximation of tranches. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP.
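
    A minimal sketch of the step-function (tranche) approximation mentioned above, assuming a known downward-sloping demand curve for one planning area; the curve, units and tranche count are placeholders rather than SWAP output, and the resulting step values are what would be ranked against WEAP's priority scheme.

```python
import numpy as np

def demand_tranches(marginal_value, q_min, q_max, n_tranches):
    """Approximate a smooth water demand curve by equal-sized quantity steps.

    marginal_value : callable giving willingness to pay [$ per unit water]
                     at a given total delivered quantity
    Returns the tranche size and the average marginal value over each step.
    """
    edges = np.linspace(q_min, q_max, n_tranches + 1)
    tranche_size = edges[1] - edges[0]
    midpoints = 0.5 * (edges[:-1] + edges[1:])
    return tranche_size, marginal_value(midpoints)

# Hypothetical downward-sloping demand curve (placeholder functional form).
demand = lambda q: 120.0 * np.exp(-q / 400.0)
size, steps = demand_tranches(demand, q_min=0.0, q_max=1000.0, n_tranches=5)
print("tranche size:", size, "step values:", np.round(steps, 1))
```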

  6. Reduced model for combustion of a small biomass particle at high operating temperatures.

    Haseli, Y; van Oijen, J A; de Goey, L P H

    2013-03-01

    The aim of this work was to demonstrate a model for a spherical biomass particle combusting at high temperatures with a reduced number of variables. The model is based on the observation that combustion of a small particle includes three main phases: heating up, pyrolysis, and char conversion. It is assumed that pyrolysis begins as soon as the particle surface attains a pyrolysis temperature, yielding a char front that moves towards the center of the particle as time passes. The formulation of the heating-up and pyrolysis phases is based on an integral method, which allows the energy conservation to be described by an ordinary differential equation. The char combustion model follows the shrinking core approximation. Model validation is carried out by comparing the predictions with experiments on sawdust particles taken from the literature, and with computations of partial differential equation-based models. Satisfactory agreement is achieved between the predictions and experimental data. PMID:23376204
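
    As a rough illustration of the shrinking core idea used for the char conversion phase, the sketch below integrates a surface-reaction-limited core radius until burnout; all parameter values (particle size, char density, rate constant, oxygen concentration) are placeholders, not fitted sawdust data.

```python
def shrinking_core_burnout(r0, rho_char, k_s, c_o2, dt=1e-3):
    """Toy shrinking-core char conversion, surface-reaction-limited regime:
    dr/dt = -k_s * c_O2 / rho_char, integrated until the core is consumed."""
    r, t = r0, 0.0
    while r > 0.0:
        r -= k_s * c_o2 / rho_char * dt
        t += dt
    return t

burnout_time = shrinking_core_burnout(r0=2.5e-4,       # m, small particle
                                      rho_char=150.0,  # kg/m^3
                                      k_s=0.05,        # m/s, rate constant
                                      c_o2=0.06)       # kg/m^3, oxygen
print(f"char burnout time = {burnout_time:.1f} s")
```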

  7. Optimization of a Reduced Chemical Kinetic Model for HCCI Engine Simulations by Micro-Genetic Algorithm

    2006-01-01

    A reduced chemical kinetic model (44 species and 72 reactions) for the homogeneous charge compression ignition (HCCI) combustion of n-heptane was optimized to improve its autoignition predictions under different engine operating conditions. The seven kinetic parameters of the optimized model were determined by combining a micro-genetic algorithm optimization methodology with the SENKIN program of the CHEMKIN chemical kinetics software package. The optimization was performed within the range of equivalence ratios 0.2-1.2, initial temperatures 310-375 K and initial pressures 0.1-0.3 MPa. The engine simulations show that the optimized model agrees better with the detailed chemical kinetic model (544 species and 2 446 reactions) than the original model does.
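
    The micro-genetic algorithm referred to above works with a very small population, elitism and periodic restarts rather than mutation. A minimal generic sketch follows; the objective function is only a placeholder for the SENKIN-based mismatch between reduced-model and detailed-model ignition predictions, and all algorithm settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def micro_ga(objective, bounds, pop_size=5, generations=200, restart_frac=0.05):
    """Minimal micro-genetic algorithm: tiny population, elitism, uniform
    crossover, and a restart around the elite when the population converges."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    best, best_f = None, np.inf
    for _ in range(generations):
        fitness = np.array([objective(p) for p in pop])
        if fitness.min() < best_f:
            best_f, best = fitness.min(), pop[fitness.argmin()].copy()
        # Restart with fresh random individuals (plus the elite) on convergence.
        if np.ptp(pop, axis=0).max() < restart_frac * (hi - lo).max():
            pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
            pop[0] = best
            continue
        # Elitism plus uniform crossover of randomly paired parents.
        children = [best.copy()]
        while len(children) < pop_size:
            a, b = pop[rng.choice(pop_size, size=2, replace=False)]
            mask = rng.random(len(lo)) < 0.5
            children.append(np.where(mask, a, b))
        pop = np.array(children)
    return best, best_f

# Placeholder objective standing in for the ignition-delay mismatch.
objective = lambda p: float(np.sum((p - 0.3) ** 2))
bounds = np.tile([[0.0, 1.0]], (7, 1))        # seven kinetic parameters
params, error = micro_ga(objective, bounds)
print("best parameters:", np.round(params, 3), " error:", round(error, 5))
```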

  8. Reduced order models for thermal analysis: final report: LDRD Project No. 137807.

    Hogan, Roy E., Jr.; Gartling, David K.

    2010-09-01

    This LDRD Senior's Council Project is focused on the development, implementation and evaluation of Reduced Order Models (ROM) for application in the thermal analysis of complex engineering problems. Two basic approaches to developing a ROM for combined thermal conduction and enclosure radiation problems are considered. As a prerequisite to a ROM, a fully coupled solution method for conduction/radiation models is required; a parallel implementation is explored for this class of problems. High-fidelity models of large, complex systems are now used routinely to verify design and performance. However, there are applications where the high-fidelity model is too large to be used repetitively in a design mode. One such application is the design of a control system that oversees the functioning of the complex, high-fidelity model. Examples include control systems for manufacturing processes such as brazing and annealing furnaces as well as control systems for the thermal management of optical systems. A reduced order model (ROM) seeks to reduce the number of degrees of freedom needed to represent the overall behavior of the large system without a significant loss in accuracy. The reduction in the number of degrees of freedom of the ROM leads to immediate increases in computational efficiency and allows many design parameters and perturbations to be quickly and effectively evaluated. Reduced order models are routinely used in solid mechanics, where techniques such as modal analysis have reached a high state of refinement. Similar techniques have recently been applied to standard thermal conduction problems, though the general use of ROM for heat transfer is not yet widespread. One major difficulty with the development of ROM for general thermal analysis is the need to include the very nonlinear effects of enclosure radiation in many applications. Many ROM methods have considered only linear or mildly nonlinear problems. In the present study a reduced order model is

  9. Contribution to BWR stability analysis. Part II: Numerical approach using a reduced order model

    Highlights: • We study the onset of power oscillations using a reduced order model. • We provide formulae for decay ratios and frequencies near the stability boundary. • We found a non-normal operator associated with the dynamics of the regional mode. • We study some consequences of the meeting of non-normality with nonlinearity. • A comparison between experimental data and model predictions is done. - Abstract: Using the reduced order model and its related analysis done in Suárez-Ántola and Flores-Godoy (submitted for publication), we study some aspects of the onset of power oscillations using numerical methods and digital simulations. From the analytical results we illustrate the usefulness of asymptotic methods to describe the change in behavior of the decay ratio and frequency of oscillations near the stability boundary in the reactor’s parameter space. We study through a dynamical simulation a supercritical Hopf bifurcation in the global mode when the effect of the regional mode on the global mode is neglected. We found that the uncoupled and linearized dynamics of the regional mode is closely related to a non-normal operator. Some of the possible consequences of the non-normality are studied using digital techniques, reintroducing the effect of the regional mode on the global mode. A comparison between experimental data and predictions obtained from the present reduced order model is presented
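
    For context, the decay ratio that such stability analyses track is commonly defined from the dominant complex pole pair of the linearized dynamics, approaching one as the pole crosses the stability boundary. A minimal sketch, with a placeholder pole value rather than anything derived from the model above:

```python
import numpy as np

def decay_ratio_and_frequency(pole):
    """Decay ratio and oscillation frequency from a dominant pole sigma + i*omega.

    DR = exp(2*pi*sigma/omega) is the ratio of two successive oscillation
    peaks; DR -> 1 as sigma -> 0 (the stability boundary)."""
    sigma, omega = pole.real, abs(pole.imag)
    return np.exp(2.0 * np.pi * sigma / omega), omega / (2.0 * np.pi)

dr, freq = decay_ratio_and_frequency(-0.15 + 3.1j)   # placeholder pole [rad/s]
print(f"decay ratio = {dr:.3f}, frequency = {freq:.2f} Hz")
```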

  10. AGE CLASSIFICATIONS BASED ON SECOND ORDER IMAGE COMPRESSED AND FUZZY REDUCED GREY LEVEL (SICFRG) MODEL

    Jangala. Sasi Kiran

    2013-06-01

    One of the most fundamental issues in image classification and recognition is how to characterize images using derived features. Many texture classification and recognition problems in the literature require computation over the entire image set and over a large range of grey level values in order to achieve efficient and precise classification and recognition. This leads to considerable complexity in evaluating feature parameters. To address this, the present paper derives a Second Order image Compressed and Fuzzy Reduced Grey level (SICFRG) model, which reduces the image dimension and grey level range without any loss of significant feature information. The paper then derives GLCM features on the proposed SICFRG model for efficient age classification that assigns facial images to one of five groups. The SICFRG model of age classification is derived in three stages. In the first stage, each 5 x 5 matrix is compressed into a 2 x 2 second order sub-matrix without losing any significant attributes, primitives, or other local properties. In stage 2, fuzzy logic is applied to reduce the grey level range of the compressed model of the image. In stage 3, GLCM features are derived on the SICFRG model of the image. The experimental evidence on the FG-NET and Google aging databases clearly indicates the higher classification rate of the proposed method compared with other methods.
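
    A minimal sketch of the final stage (GLCM features on a reduced grey-level image); the fuzzy reduction is replaced here by a crude uniform quantization stand-in, and the random patch stands in for FG-NET data, so both are assumptions rather than the paper's actual pipeline.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def reduce_grey_levels(image, levels=8):
    """Crude stand-in for the fuzzy grey-level reduction: uniformly map the
    0-255 range onto a small number of levels."""
    return (image.astype(float) / 256.0 * levels).astype(np.uint8)

def glcm_features(image, levels=8):
    """Grey-level co-occurrence matrix features on a reduced-level image."""
    glcm = graycomatrix(image, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Placeholder 64 x 64 image patch in place of a real facial image.
patch = np.random.default_rng(3).integers(0, 256, size=(64, 64), dtype=np.uint8)
print(glcm_features(reduce_grey_levels(patch)))
```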

  11. Accurate thickness measurement of graphene

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  12. ASPEN: A fully kinetic, reduced-description particle-in-cell model for simulating parametric instabilities

    A fully kinetic, reduced-description particle-in-cell (RPIC) model is presented in which deviations from quasineutrality, electron and ion kinetic effects, and nonlinear interactions between low-frequency and high-frequency parametric instabilities are modeled correctly. The model is based on a reduced description where the electromagnetic field is represented by three separate temporal envelopes in order to model parametric instabilities with low-frequency and high-frequency daughter waves. Because temporal envelope approximations are invoked, the simulation can be performed on the electron time scale instead of the time scale of the light waves. The electrons and ions are represented by discrete finite-size particles, permitting electron and ion kinetic effects to be modeled properly. The Poisson equation is utilized to ensure that space-charge effects are included. The RPIC model is fully three dimensional and has been implemented in two dimensions on the Accelerated Strategic Computing Initiative (ASCI) parallel computer at Los Alamos National Laboratory, and the resulting simulation code has been named ASPEN. The authors believe this code is the first particle-in-cell code capable of simulating the interaction between low-frequency and high-frequency parametric instabilities in multiple dimensions. Test simulations of stimulated Raman scattering, stimulated Brillouin scattering, and Langmuir decay instability are presented

  13. Using boundary layer equilibrium to reduce uncertainties in transport models and CO2 flux inversions

    S. C. Biraud

    2011-04-01

    This paper reexamines evidence for previously hypothesized errors in atmospheric transport models and CO2 flux inversions by evaluating the diagnostics used to infer vertical mixing rates from observations. Several conventional mixing diagnostics are compared to analyzed mixing using data from the US Southern Great Plains Atmospheric Radiation Measurement Climate Research Facility, the CarbonTracker data assimilation system based on Transport Model version 5 (TM5), and atmospheric reanalyses. The results demonstrate that previous diagnostics based on boundary layer depth and vertical concentration gradients are unreliable indicators of vertical mixing. Vertical mixing rates are anti-correlated with boundary layer depth at some sites, diminishing in summer when the boundary layer is deepest. Vertical CO2 gradients between the boundary layer and free troposphere are strongly affected by seasonal surface fluxes and therefore do not accurately reflect vertical mixing rates. The finite timescale over which vertical tracer gradients relax toward equilibrium is proposed as an improved mixing diagnostic, which can be applied to observations and model simulations of CO2 or other conserved boundary layer tracers with surface sources and sinks. This diagnostic does not require dynamical variables from the transport models, and is independent of possible systematic biases in prior- and post-inversion seasonal surface fluxes. Results indicate that observations frequently cited as evidence for systematic biases in atmospheric transport models are insufficient to prove that such biases exist. Some previously hypothesized transport model biases, if found and corrected, could cause inverse estimates to further diverge from land-based estimates.
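
    The proposed relaxation-timescale diagnostic can be illustrated with a very simple estimator: fit an exponential decay to the boundary-layer-to-free-troposphere gradient time series. The synthetic data, units and fitting choice below are assumptions for illustration only.

```python
import numpy as np

def relaxation_timescale(times, gradient):
    """Estimate the e-folding time over which a vertical tracer gradient
    relaxes toward equilibrium, assuming roughly exponential decay, via a
    linear fit of ln|gradient| against time."""
    slope, _ = np.polyfit(times, np.log(np.abs(gradient)), 1)
    return -1.0 / slope

# Synthetic gradient time series with a 6-hour relaxation time (placeholder).
t = np.linspace(0.0, 24.0, 49)                                      # hours
rng = np.random.default_rng(5)
grad = 4.0 * np.exp(-t / 6.0) + 0.02 * rng.standard_normal(t.size)  # ppm
print(f"estimated relaxation timescale = {relaxation_timescale(t, grad):.1f} h")
```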

  14. Exact finite reduced density matrix and von Neumann entropy for the Calogero model

    The information content of continuous-variable quantum systems is usually studied using a number of well known approximation methods. The approximations are made to obtain the spectrum, eigenfunctions or the reduced density matrices that are essential to calculate the entropy-like quantities that quantify the information. Even in the few cases where the spectrum and eigenfunctions are exactly known, the entanglement spectrum (the spectrum of the reduced density matrices that characterize the problem) must be obtained in an approximate fashion. In this work, we obtain analytically a finite representation of the reduced density matrices of the ground state of the N-particle Calogero model for a discrete set of values of the interaction parameter. As a consequence, the exact entanglement spectrum and von Neumann entropy are worked out. (paper)
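
    Once a finite reduced density matrix is in hand, the entanglement spectrum and von Neumann entropy follow from its eigenvalues. A minimal numerical sketch, using a random positive matrix in place of the analytic Calogero result:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entanglement spectrum and entropy of a reduced density matrix:
    S = -Tr(rho ln rho) = -sum_i lambda_i ln lambda_i."""
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]      # drop numerical zeros
    return eigenvalues, float(-np.sum(eigenvalues * np.log(eigenvalues)))

# Placeholder: a random positive matrix normalized to unit trace.
rng = np.random.default_rng(7)
A = rng.standard_normal((6, 6))
rho = A @ A.T
rho /= np.trace(rho)
spectrum, entropy = von_neumann_entropy(rho)
print("entanglement spectrum:", np.round(spectrum, 4))
print("von Neumann entropy:", round(entropy, 4))
```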

  15. Forward Modeling of Reduced Power Spectra From Three-Dimensional $\mathbf{k}$-Space

    von Papen, Michael

    2015-01-01

    We present results from a numerical forward model to evaluate one-dimensional reduced power spectral densities (PSD) from arbitrary energy distributions in $\mathbf{k}$-space. In this model, we can separately calculate the diagonal elements of the spectral tensor for incompressible axisymmetric turbulence with vanishing helicity. Given a critically balanced turbulent cascade with $k_\| \sim k_\perp^\alpha$ and $\alpha<1$, we explore the implications on the reduced PSD as a function of frequency. The spectra are obtained under the assumption of Taylor's hypothesis. We further investigate the functional dependence of the spectral index $\kappa$ on the field-to-flow angle $\theta$ between plasma flow and background magnetic field from MHD to electron kinetic scales. We show that critically balanced turbulence asymptotically develops toward $\theta$-independent spectra with a slope corresponding to the perpendicular cascade. This occurs at a transition frequency $f_{2D}(L,\alpha,\theta)$, which is analytically ...

  16. REDUCING WARM BIAS OVER THE NORTH-EASTERN EUROPE IN A REGIONAL CLIMATE MODEL

    Güttler, Ivan

    2011-01-01

    A large warm bias in near-surface temperature during winter was detected over northeastern Europe in simulations with the RegCM4 regional climate model when compared to an observational dataset. Modifications to alleviate the warm bias included reductions of the low-level cloud cover fraction and of the minimum turbulent mixing in the stable planetary boundary layer. When implemented, these modifications reduced the warm bias by up to 50% and did not degrade, or substantially impact, the variables analyzed ou...

  17. Chronic antioxidant therapy reduces oxidative stress in a mouse model of Alzheimer’s disease

    Siedlak, Sandra L.; Casadesus, Gemma; Webber, Kate M; Pappolla, Miguel A.; Atwood, Craig S.; Smith, Mark A.; Perry, George

    2009-01-01

    Oxidative modifications are a hallmark of oxidative imbalance in the brains of individuals with Alzheimer’s, Parkinson’s and prion diseases and their respective animal models. While the causes of oxidative stress are relatively well-documented, the effects of chronically reducing oxidative stress on cognition, pathology and biochemistry require further clarification. To address this, young and aged control and amyloid-β protein precursor-over-expressing mice were fed a diet with added R-alpha...

  18. Reduced order modelling and numerical optimisation approach to reliability analysis of microsystems and power modules

    Rajaguru, Pushparajah

    2014-01-01

    The principal aim of this PhD program is the development of an optimisation and risk-based methodology for reliability and robustness predictions of packaged electronic components. Reliability-based design optimisation involves the integration of reduced order modelling, risk analysis and optimisation. The increasing cost of physical prototyping and extensive qualification testing for reliability assessment is making virtual qualification a very attractive alternative for the electronics indu...

  19. Manpower Consideration to Reduce Development Time for New Model in Automotive Industry

    N. M.Z.N. Mohamed

    2005-01-01

    A study of manpower considerations to reduce development time for a new model in the automotive industry is presented. The approach taken is to study the existing practice in car development and to suggest various ways to improve the use of manpower, such as early involvement and input from manufacturing personnel, a proper job scope structure, proper training of new staff so they can accomplish important tasks at the required time, and a clear definition of criteria for a Project Manager's appointment.

  20. Reduced hippocampal neurogenesis in the GR+/− genetic mouse model of depression

    Kronenberg, Golo; Kirste, Imke; Inta, Dragos; Chourbaji, Sabine; Heuser, Isabella; Endres, Matthias; Gass, Peter

    2009-01-01

    Glucocorticoid receptor (GR) heterozygous mice (GR+/− ) represent a valuable animal model for major depression. GR+/− mice show a depression-related phenotype characterized by increased learned helplessness on the behavioral level and neuroendocrine alterations with hypothalamo-pituitary-adrenal (HPA) axis overdrive characteristic of depression. Hippocampal brain-derived neurotrophic factor (BDNF) levels have also been shown to be reduced in GR+/− animals. Because adult hippocampal neurogenes...