WorldWideScience

Sample records for accurate reduced models

  1. Capturing dopaminergic modulation and bimodal membrane behaviour of striatal medium spiny neurons in accurate, reduced models

    Mark D Humphries

    2009-11-01

    Full Text Available Loss of dopamine from the striatum can both cause profound motor deficits, as in Parkinson's disease, and disrupt learning. Yet the effect of dopamine on striatal neurons remains a complex and controversial topic, and is in need of a comprehensive framework. We extend a reduced model of the striatal medium spiny neuron (MSN) to account for dopaminergic modulation of its intrinsic ion channels and synaptic inputs. We tune our D1 and D2 receptor MSN models using data from a recent large-scale compartmental model. The new models capture the input-output relationships for both current injection and spiking input with remarkable accuracy, despite the order-of-magnitude decrease in system size. They also capture the paired-pulse facilitation shown by MSNs. Our dopamine models predict that synaptic effects dominate intrinsic effects at all levels of D1 and D2 receptor activation. We analytically derive a full set of equilibrium points and their stability for the original and dopamine-modulated forms of the MSN model. We find that the stability types are not changed by dopamine activation, and our models predict that the MSN is never bistable. Nonetheless, the MSN models can produce a spontaneously bimodal membrane potential similar to that recently observed in vitro following application of NMDA agonists. We demonstrate that this bimodality is created by modelling the agonist effects as slow, irregular and massive jumps in NMDA conductance and, rather than being a form of bistability, is due to the voltage-dependent blockade of NMDA receptors. Our models also predict a more pronounced membrane potential bimodality following D1 receptor activation. This work thus establishes reduced yet accurate dopamine-modulated models of MSNs, suitable for use in large-scale models of the striatum. More importantly, these provide a tractable framework for further study of dopamine's effects on computation by individual neurons.

  2. Accurate Modeling of Advanced Reflectarrays

    Zhou, Min

    of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical Near-Field Antenna Test Facility, it was concluded that the three latter factors are particularly important...... to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  3. Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.

    Robinson, David

    2014-12-01

    A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all occupied valence orbitals and half of the virtual orbitals included, and for some molecules even fewer orbitals. Efficiency gains of between 15% and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited-state geometry optimization calculation. PMID:26583218

  4. Towards accurate modeling of moving contact lines

    Holmgren, Hanna

    2015-01-01

    The present thesis treats the numerical simulation of immiscible incompressible two-phase flows with moving contact lines. The conventional Navier–Stokes equations combined with a no-slip boundary condition lead to a non-integrable stress singularity at the contact line. The singularity in the model can be avoided by allowing the contact line to slip. Implementing slip conditions in an accurate way is not straightforward, and different regularization techniques exist where ad-hoc procedures ...

  5. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    Bonney, Matthew S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Brake, Matthew R.W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model, where the meta-model requires the least computation time by a significant margin. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error than exhaustive sampling for the majority of methods.
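
    One of the compared methods uses hyper-dual numbers to obtain exact first- and second-order sensitivities without finite-difference step-size error. A minimal sketch of the idea (the class and the cubic test response below are illustrative assumptions, not taken from the report):

```python
# Minimal hyper-dual number: x = f + f1*e1 + f2*e2 + f12*e1*e2,
# with e1^2 = e2^2 = (e1*e2)^2 = 0.  Evaluating g(HyperDual(x, 1, 1, 0))
# yields g(x), g'(x) (in both e1 and e2 parts) and g''(x) exactly.
class HyperDual:
    def __init__(self, f, f1=0.0, f2=0.0, f12=0.0):
        self.f, self.f1, self.f2, self.f12 = f, f1, f2, f12

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f + o.f, self.f1 + o.f1,
                         self.f2 + o.f2, self.f12 + o.f12)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(
            self.f * o.f,
            self.f * o.f1 + self.f1 * o.f,
            self.f * o.f2 + self.f2 * o.f,
            self.f * o.f12 + self.f12 * o.f
            + self.f1 * o.f2 + self.f2 * o.f1)
    __rmul__ = __mul__

def g(x):                      # illustrative response: g(x) = x**3
    return x * x * x

h = g(HyperDual(2.0, 1.0, 1.0, 0.0))
print(h.f, h.f1, h.f12)        # value 8.0, slope 12.0, curvature 12.0
```

    Unlike finite differences, no step size has to be chosen, which is why the hyper-dual approach appears among the sensitivity methods compared in the report.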

  6. Accurate sky background modelling for ESO facilities

    Full text: Ground-based measurements such as high-resolution spectroscopy are heavily influenced by several physical processes. Among others, line absorption/emission, airglow from OH molecules, and scattering of photons within the Earth's atmosphere make observations, in particular from facilities like the future European Extremely Large Telescope, a challenge. Additionally, emission from unresolved extrasolar objects, the zodiacal light, the Moon, and even thermal emission from the telescope and the instrument contribute significantly to the broad-band background over a wide wavelength range. In our talk we review these influences and give an overview of how they can be accurately modeled to increase the overall precision of spectroscopic and imaging measurements. (author)

  7. A new, accurate predictive model for incident hypertension

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  8. Spectropolarimetrically accurate magnetohydrostatic sunspot model for forward modelling in helioseismology

    Przybylski, D; Cally, P S

    2015-01-01

    We present a technique to construct a spectropolarimetrically accurate magneto-hydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion and absorption in the solar interior and photosphere with the sunspot embedded into it. With the $6173\mathrm{\AA}$ magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions on the solar disk, and analyse the influence of the non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions on the solar disk were simulated and characterised. An increase in acoustic power in the simulated observ...

  9. Reduced Order Podolsky Model

    Thibes, Ronaldo

    2016-01-01

    We perform the canonical and path integral quantizations of a lower-order derivative model describing Podolsky's generalized electrodynamics. The physical content of the model is an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at the classical and quantum levels. Concerning the dynamical time evolution, we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivative order permeating the equations of motion, Dirac brackets and effective action.

  10. On nonlinear reduced order modeling

    When applied to a model that receives n input parameters and predicts m output responses, a reduced order model estimates the variations in the m outputs of the original model resulting from variations in its n inputs. While direct execution of the forward model could provide these variations, reduced order modeling plays an indispensable role for most real-world complex models. This follows because the solutions of complex models are expensive in terms of required computational overhead, thus rendering their repeated execution computationally infeasible. To overcome this problem, reduced order modeling determines a relationship (often referred to as a surrogate model) between the input and output variations that is much cheaper to evaluate than the original model. While it is desirable to seek highly accurate surrogates, the computational overhead quickly becomes intractable, especially for high-dimensional models, n ≫ 10. In this manuscript, we demonstrate a novel reduced order modeling method for building a surrogate model that employs only 'local first-order' derivatives and a new tensor-free expansion to efficiently identify all the important features of the original model to reach a predetermined level of accuracy. This is achieved via a hybrid approach in which local first-order derivatives (i.e., gradients) of a pseudo response (a pseudo response represents a random linear combination of the original model's responses) are randomly sampled utilizing a tensor-free expansion around some reference point, with the resulting gradient information aggregated in a subspace (denoted the active subspace) of dimension much smaller than the dimension of the input parameter space. The active subspace is then sampled employing state-of-the-art global sampling techniques. The proposed method hybridizes the use of global sampling methods for uncertainty quantification and local variational methods for sensitivity analysis. In a similar manner to
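
    The gradient-aggregation step described above can be sketched as follows; the quadratic test function, dimensions and sample counts are illustrative assumptions, not taken from the manuscript:

```python
import numpy as np

# Sketch of an active-subspace construction: sample gradients of a
# (pseudo) response at random points, stack them, and take the dominant
# singular directions as the reduced input subspace.
rng = np.random.default_rng(0)
n = 20                                  # input-space dimension
w = rng.normal(size=n)
w /= np.linalg.norm(w)

def grad_f(x):                          # f(x) = (w.x)^2, so grad f = 2 (w.x) w
    return 2.0 * (w @ x) * w

X = rng.normal(size=(50, n))            # random sample points
G = np.array([grad_f(x) for x in X])    # 50 gradient samples
U, s, Vt = np.linalg.svd(G, full_matrices=False)
active_dir = Vt[0]                      # leading direction in input space

# All gradients of this f are parallel to w, so the one-dimensional
# active subspace recovered by the SVD should align with w.
print(abs(active_dir @ w))              # close to 1.0
```

    For a genuinely complex model the gradients would come from an adjoint or automatic-differentiation capability, and more than one singular direction would typically be retained.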

  11. Accurate Load Modeling Based on Analytic Hierarchy Process

    Zhenshu Wang

    2016-01-01

    Full Text Available Establishing an accurate load model is a critical problem in power system modeling, with significant implications for power system digital simulation and dynamic security analysis. The synthesis load model (SLM) considers the impact of the power distribution network and compensation capacitors, while the randomness of power load is more precisely described by the traction power system load model (TPSLM). On the basis of these two load models, a load modeling method that combines synthesis load with traction power load is proposed in this paper. The method uses the analytic hierarchy process (AHP) to combine the two load models: weight coefficients for the two models are calculated after formulating criteria and judgment matrices, and a synthesis model is then established from the weight coefficients. The effectiveness of the proposed method was examined through simulation. The results show that accurate load modeling based on AHP can effectively improve the accuracy of the load model and prove the validity of the method.
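
    The AHP weighting step can be sketched as follows: model weights are taken from the principal eigenvector of a pairwise judgment matrix. The 2x2 matrix below (SLM judged three times as important as TPSLM) is an illustrative example, not the paper's actual judgment data:

```python
import numpy as np

def ahp_weights(J):
    """Weights from a reciprocal pairwise judgment matrix J,
    via its principal (Perron) eigenvector, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(J)
    k = np.argmax(vals.real)             # principal eigenvalue
    w = np.abs(vecs[:, k].real)          # Perron vector is sign-definite
    return w / w.sum()

# Illustrative judgment: model 1 is 3x as important as model 2.
J = np.array([[1.0, 3.0],
              [1.0 / 3.0, 1.0]])
w = ahp_weights(J)
print(w)                                 # approximately [0.75, 0.25]
```

    The combined load model is then the weighted sum of the two component models' outputs; for larger judgment matrices a consistency ratio would normally be checked as well.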

  12. ACCURATE FORECAST AS AN EFFECTIVE WAY TO REDUCE THE ECONOMIC RISK OF AGRO-INDUSTRIAL COMPLEX

    Kymratova A. M.

    2014-11-01

    Full Text Available This article discusses ways of reducing financial, economic and social risks on the basis of an accurate prediction. We study natural time series of winter wheat yield and of minimum winter and winter-spring daily temperatures. A feature of this class of time series is that they do not obey a normal distribution and show no visible trend

  13. An accurate and simple quantum model for liquid water.

    Paesani, Francesco; Zhang, Wei; Case, David A; Cheatham, Thomas E; Voth, Gregory A

    2006-11-14

    The path-integral molecular dynamics and centroid molecular dynamics methods have been applied to investigate the behavior of liquid water at ambient conditions starting from a recently developed simple point charge/flexible (SPC/Fw) model. Several quantum structural, thermodynamic, and dynamical properties have been computed and compared to the corresponding classical values, as well as to the available experimental data. The path-integral molecular dynamics simulations show that the inclusion of quantum effects results in a less structured liquid with a reduced amount of hydrogen bonding in comparison to its classical analog. The nuclear quantization also leads to a smaller dielectric constant and a larger diffusion coefficient relative to the corresponding classical values. Collective and single molecule time correlation functions show a faster decay than their classical counterparts. Good agreement with the experimental measurements in the low-frequency region is obtained for the quantum infrared spectrum, which also shows a higher intensity and a redshift relative to its classical analog. A modification of the original parametrization of the SPC/Fw model is suggested and tested in order to construct an accurate quantum model, called q-SPC/Fw, for liquid water. The quantum results for several thermodynamic and dynamical properties computed with the new model are shown to be in significantly better agreement with the experimental data. Finally, a force-matching approach was applied to the q-SPC/Fw model to derive an effective quantum force field for liquid water in which the effects due to the nuclear quantization are explicitly distinguished from those due to the underlying molecular interactions. Thermodynamic and dynamical properties computed using standard classical simulations with this effective quantum potential are found in excellent agreement with those obtained from significantly more computationally demanding full centroid molecular dynamics simulations.

  14. Mouse models of human AML accurately predict chemotherapy response

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to co...

  15. An accurate RLGC circuit model for dual tapered TSV structure

    A fast RLGC circuit model with analytical expressions is proposed for the dual tapered through-silicon via (TSV) structure in three-dimensional integrated circuits, valid for different slope angles over a wide frequency range. By describing the electrical characteristics of the dual tapered TSV structure, the RLGC parameters are extracted based on a numerical integration method. The RLGC model includes metal resistance, metal inductance, substrate resistance, and outer inductance, with the skin effect and eddy effect taken into account. The proposed analytical model is verified to be nearly as accurate as the Q3D extractor but more efficient. (semiconductor integrated circuits)

  16. Robust Small Sample Accurate Inference in Moment Condition Models

    Serigne N. Lo; Elvezio Ronchetti

    2006-01-01

    Procedures based on the Generalized Method of Moments (GMM) (Hansen, 1982) are basic tools in modern econometrics. In most cases, the theory available for making inference with these procedures is based on first order asymptotic theory. It is well-known that the (first order) asymptotic distribution does not provide accurate p-values and confidence intervals in moderate to small samples. Moreover, in the presence of small deviations from the assumed model, p-values and confidence intervals ba...

  17. Bayesian calibration of power plant models for accurate performance prediction

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O'Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.

  18. On the importance of having accurate data for astrophysical modelling

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and I will show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for modelling molecular lines beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundance in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  19. Accurate method of modeling cluster scaling relations in modified gravity

    He, Jian-hua; Li, Baojiu

    2016-06-01

    We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the x-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.

  20. Accurate macroscale modelling of spatial dynamics in multiple dimensions

    Roberts, A. J.; Bunder, J. E.

    2011-01-01

    Developments in dynamical systems theory provide new support for the macroscale modelling of PDEs and other microscale systems such as Lattice Boltzmann, Monte Carlo or Molecular Dynamics simulators. By systematically resolving subgrid microscale dynamics, the dynamical systems approach constructs accurate closures of macroscale discretisations of the microscale system. Here we specifically explore reaction-diffusion problems in two spatial dimensions as a prototype of generic systems in multiple dimensions. Our approach unifies into one the modelling of systems by a type of finite elements, and the `equation free' macroscale modelling of microscale simulators efficiently executing only on small patches of the spatial domain. Centre manifold theory ensures that a closed model exists on the macroscale grid, is emergent, and is systematically approximated. Dividing space either into overlapping finite elements or into spatially separated small patches, the specially crafted inter-element/patch coupling als...

  1. Congestion Control in WMSNs by Reducing Congestion and Free Resources to Set Accurate Rates and Priority

    Akbar Majidi

    2014-08-01

    Full Text Available The main intention of this paper is to focus on a mechanism for reducing congestion in the network by freeing resources to set accurate rates and data priorities. If two nodes send their packets along the shortest path to the parent node in a congested area, the source node must prioritize its data and route lower-priority data through suitable detour nodes that are lightly loaded or inactive. The proposed algorithm is applied to the nodes near the base station (which convey more traffic) after the congestion detection mechanism has detected congestion. Results obtained from simulation tests with the NS-2 simulator demonstrate the novelty and validity of the proposed method, which performs better than the CCF, PCCP and DCCP protocols.

  2. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode. Second, this mathematical model was not able to accurately describe the change in the experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral doses.
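
    The modified "standard" model with two infected-cell populations (an eclipse phase transitioning into virus production) can be sketched as the following ODE system; the parameter values and initial conditions are illustrative placeholders, not fitted SIV values from the paper:

```python
import numpy as np

# T: target cells, I1: infected but not yet producing (eclipse phase),
# I2: productively infected cells, V: free virus.  I1 transitions to I2
# at rate k, mimicking the two infected-cell populations in the text.
beta, k, delta, p, c = 1e-7, 1.0, 0.5, 100.0, 3.0   # illustrative rates

def rhs(y):
    T, I1, I2, V = y
    return np.array([-beta * T * V,
                     beta * T * V - k * I1,
                     k * I1 - delta * I2,
                     p * I2 - c * V])

def rk4(y, dt, steps):
    """Classical 4th-order Runge-Kutta time stepping."""
    for _ in range(steps):
        k1 = rhs(y); k2 = rhs(y + dt / 2 * k1)
        k3 = rhs(y + dt / 2 * k2); k4 = rhs(y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

y0 = np.array([1e7, 0.0, 0.0, 1.0])     # small inoculum into a new host
y = rk4(y0, dt=0.01, steps=2000)        # integrate 20 "days"
print(y[3] > y0[3])                     # viral load has grown
```

    With these placeholder rates the basic reproduction number exceeds one, so the virus expands and target cells are depleted; a stochastic version of the same system would be needed to study the extinction probabilities discussed in the abstract.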

  3. Accurate Modeling of Buck Converters with Magnetic-Core Inductors

    Astorino, Antonio; Antonini, Giulio; Swaminathan, Madhavan

    2015-01-01

    In this paper, a modeling approach for buck converters with magnetic-core inductors is presented. Due to the high nonlinearity of magnetic materials, frequency-domain analysis of such circuits is not suitable for an accurate description of their behaviour. Hence, in this work, a time-domain model of buck converters with magnetic-core inductors in a Simulink environment is proposed. As an example, the presented approach is used to simulate an eight-phase buck converter. The simulation results show that an unexpected system behaviour in terms of current ripple amplitude needs the inductor core...

  4. Velocity potential formulations of highly accurate Boussinesq-type models

    Bingham, Harry B.; Madsen, Per A.; Fuhrman, David R.

    2009-01-01

    interest because it reduces the computational effort by approximately a factor of two and facilitates a coupling to other potential flow solvers. A new shoaling enhancement operator is introduced to derive new models (in both formulations) with a velocity profile which is always consistent with the...... satisfy a potential flow and/or conserve mass up to the order of truncation of the model. The performance of the new formulation is validated using computations of linear and nonlinear shoaling problems. The behaviour on a rapidly varying bathymetry is also checked using linear wave reflection from a...

  5. A Method to Build a Super Small but Practically Accurate Language Model for Handheld Devices

    WU GenQing (吴根清); ZHENG Fang (郑方)

    2003-01-01

    In this paper, an important question is raised: can a small language model be practically accurate enough? We analyze the purpose of a language model, the problems that a language model faces, and the factors that affect its performance. Finally, a novel method for language model compression is proposed, which makes a large language model usable for applications in handheld devices, such as mobiles, smart phones, personal digital assistants (PDAs), and handheld personal computers (HPCs). The proposed compression method has three aspects. First, the language model parameters are analyzed, and a criterion based on an importance measure of n-grams is used to determine which n-grams should be kept and which removed. Second, a piecewise linear warping method is proposed to compress the uni-gram count values in the full language model. Third, a rank-based quantization method is adopted to quantize the bi-gram probability values. Experiments show that with this compression method the language model can be reduced dramatically to only about 1 MB while performance barely decreases. This provides good evidence that a language model compressed by a well-designed compression technique is practically accurate enough, making language models usable in handheld devices.
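
    A rank-based quantization step of the kind named above can be sketched as follows; the codebook size and the random stand-in data are illustrative assumptions, since the paper's exact scheme and parameters aren't given here:

```python
import numpy as np

def rank_quantize(probs, levels=8):
    """Quantize values by rank: split the sorted values into `levels`
    equal-rank bins and replace each value by its bin's mean.
    Returns (quantized values, codebook of bin means)."""
    order = np.argsort(probs)
    bins = np.array_split(order, levels)          # equal-rank groups
    codebook = np.array([probs[b].mean() for b in bins])
    q = np.empty_like(probs)
    for code, b in zip(codebook, bins):
        q[b] = code
    return q, codebook

rng = np.random.default_rng(1)
p = rng.random(1000)                # stand-in for bi-gram probabilities
q, cb = rank_quantize(p, levels=8)
print(len(set(q.tolist())))         # at most 8 distinct stored values
```

    Each probability can then be stored as a small integer index into the codebook instead of a full float, which is where the memory saving comes from.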

  6. BWR stability using a reduced dynamical model

    BWR stability can be treated with reduced order dynamical models. When the parameters of the model come from experimental data, the predictions are accurate. In this work an alternative derivation for the void fraction equation is made, emphasizing the physical structure of the parameters. As the poles of the power/reactivity transfer function are related to the parameters, measuring the poles by other techniques such as noise analysis will yield the parameters, but the system of equations is non-linear. Simple parametric calculations of the decay ratio are performed, showing why BWRs become unstable when they are operated at low flow and high power. (Author)
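
    The link between the transfer-function poles and the decay ratio used in such stability analyses can be sketched as follows (the pole value is an illustrative example, not taken from this work):

```python
import math

def decay_ratio(pole):
    """Decay ratio of the oscillation associated with a dominant
    complex pole s = sigma + j*omega: the ratio of two successive
    oscillation peaks, DR = exp(2*pi*sigma/omega)."""
    sigma, omega = pole.real, abs(pole.imag)
    return math.exp(2.0 * math.pi * sigma / omega)

# A stable pole pair: the oscillation amplitude shrinks each cycle.
print(round(decay_ratio(complex(-0.5, 3.0)), 3))   # 0.351, i.e. DR < 1
```

    As the operating point moves toward low flow and high power, the dominant pole pair drifts toward the imaginary axis, the decay ratio approaches 1, and the reactor becomes unstable once it exceeds 1.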

  7. Fast and accurate prediction of numerical relativity waveforms from binary black hole mergers using surrogate models

    Blackman, Jonathan; Galley, Chad R; Szilagyi, Bela; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-01-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from $1$ to $10$ and durations corresponding to about $15$ orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were not used for the surrogate's training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second dependin...

  8. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models

    Blackman, Jonathan; Field, Scott E.; Galley, Chad R.; Szilágyi, Béla; Scheel, Mark A.; Tiglio, Manuel; Hemberger, Daniel A.

    2015-09-01

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic -2Yℓm waveform modes resolved by the NR code up to ℓ=8 . We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  9. Can Raters with Reduced Job Descriptive Information Provide Accurate Position Analysis Questionnaire (PAQ) Ratings?

    Friedman, Lee; Harvey, Robert J.

    1986-01-01

    Job-naive raters provided with job descriptive information made Position Analysis Questionnaire (PAQ) ratings which were validated against ratings of job analysts who were also job content experts. None of the reduced job descriptive information conditions enabled job-naive raters to obtain either acceptable levels of convergent validity with…

  10. BWR stability using a reduced dynamical model

    BWR stability can be treated with reduced-order dynamical models. When the parameters of the model come from experimental data, the predictions are accurate. In this work an alternative derivation of the void fraction equation is made, emphasizing the physical structure of the parameters. As the poles of the power/reactivity transfer function are related to the parameters, measuring the poles by other techniques such as noise analysis leads to the parameters, but the resulting system of equations is non-linear. Simple parametric calculations of the decay ratio are performed, showing why BWRs become unstable when they are operated at low flow and high power. (Author). 7 refs
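The decay-ratio idea in the two BWR records above has a compact numerical form: for a dominant oscillatory pole pair s = σ ± iω of the power/reactivity transfer function, consecutive oscillation peaks shrink by exp(2πσ/ω). A minimal sketch, with illustrative pole values that are not from the paper:

```python
import math

def decay_ratio(pole):
    """Decay ratio of a dominant oscillatory pole pair s = sigma +/- i*omega.

    DR = exp(2*pi*sigma/omega) is the ratio of successive oscillation peaks:
    DR < 1 means the oscillation decays (stable), DR >= 1 means instability.
    """
    sigma, omega = pole.real, abs(pole.imag)
    return math.exp(2.0 * math.pi * sigma / omega)

# illustrative poles: damping weakens (sigma -> 0) as the operating point
# moves toward low flow / high power
dr_damped = decay_ratio(complex(-0.5, 5.0))    # well damped
dr_marginal = decay_ratio(complex(-0.05, 5.0)) # close to the stability boundary
```

A pole pair measured by noise analysis can thus be translated directly into the decay ratio used as the stability indicator.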

  11. Coupling Efforts to the Accurate and Efficient Tsunami Modelling System

    Son, S.

    2015-12-01

    In the present study, we couple two different types of tsunami models, i.e., the nondispersive shallow water model of characteristic form (MOST ver. 4) and the dispersive Boussinesq model of non-characteristic form (Son et al. (2011)), in an attempt to improve modelling accuracy and efficiency. Since each model deals with a different type of primary variables, additional care in matching boundary conditions is required. Using an absorbing-generating boundary condition developed by Van Dongeren and Svendsen (1997), model coupling and integration is achieved. Characteristic variables (i.e., Riemann invariants) in MOST are converted to non-characteristic variables for the Boussinesq solver without any loss of physical consistency. The established modelling system has been validated on problems ranging from typical test cases to realistic tsunami events. Simulated results reveal good performance of the developed modelling system. Since the coupled modelling system offers flexibility during implementation, substantial gains in efficiency and accuracy are expected from focused application of the Boussinesq model within the overall domain of tsunami propagation.

  12. Fully Automated Generation of Accurate Digital Surface Models with Sub-Meter Resolution from Satellite Imagery

    Wohlfeil, J.; Hirschmüller, H.; Piltz, B.; Börner, A.; Suppa, M.

    2012-07-01

    Modern pixel-wise image matching algorithms like Semi-Global Matching (SGM) are able to compute high resolution digital surface models from airborne and spaceborne stereo imagery. Although image matching itself can be performed automatically, there are prerequisites, like high geometric accuracy, which are essential for ensuring the high quality of resulting surface models. Especially for line cameras, these prerequisites currently require laborious manual interaction using standard tools, which is a growing problem due to continually increasing demand for such surface models. The tedious work includes partly or fully manual selection of tie- and/or ground control points for ensuring the required accuracy of the relative orientation of images for stereo matching. It also includes masking of large water areas that seriously reduce the quality of the results. Furthermore, a good estimate of the depth range is required, since accurate estimates can seriously reduce the processing time for stereo matching. In this paper an approach is presented that allows performing all these steps fully automated. It includes very robust and precise tie point selection, enabling the accurate calculation of the images' relative orientation via bundle adjustment. It is also shown how water masking and elevation range estimation can be performed automatically on the basis of freely available SRTM data. Extensive tests with a large number of different satellite images from QuickBird and WorldView are presented as proof of the robustness and reliability of the proposed method.

  13. Reduced-Order Model Development for Airfoil Forced Response

    Ramana V. Grandhi

    2008-04-01

    Two new reduced-order models are developed to accurately and rapidly predict geometry-deviation effects on airfoil forced response. Both models have significant application to improved mistuning analysis. The first model integrates a principal component analysis approach to reduce the number of defining geometric parameters, semianalytic eigensensitivity analysis, and a first-order Taylor series approximation to allow rapid as-measured airfoil response analysis. The second model extends this approach and quantifies both random and bias errors between the reduced and full models. Adjusting for the bias significantly improves reduced-order model accuracy. The error model is developed from a regression analysis of the relationship between airfoil geometry parameters and reduced-order model error, leading to physics-based error quantification. Both models are demonstrated on an advanced fan airfoil's frequency, modal force, and forced response.
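The first step described above, reducing many measured geometry deviations to a few principal components, can be sketched with an SVD. The snapshot data below is synthetic and purely illustrative, not real airfoil measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "measured geometry deviations": 50 airfoils x 200 surface points,
# generated from 3 latent modes plus small noise (illustrative only)
modes = rng.standard_normal((3, 200))
deviations = rng.standard_normal((50, 3)) @ modes + 0.01 * rng.standard_normal((50, 200))

# principal component analysis via the SVD of the centered snapshot matrix
centered = deviations - deviations.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

# keep just enough modes to capture 99% of the geometric variance
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
reduced = centered @ Vt[:r].T   # each airfoil now described by r parameters
```

With three latent modes in the data, the 99% criterion recovers r = 3, so each airfoil's geometry is summarized by three numbers instead of 200.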

  14. A more accurate model of wetting transitions with liquid helium

    Up to now, the analysis of the liquid helium prewetting line on alkali metal substrates has been made using the simple model proposed by Saam et al. Some improvements to this model are considered within a mean-field, sharp-kink model. The temperature variations of the substrate-liquid interface energy and of the liquid density are considered, as well as a more realistic effective potential for the film-substrate interaction. A comparison is made with the experimental data on rubidium and cesium

  15. Visual texture accurate material appearance measurement, representation and modeling

    Haindl, Michal

    2013-01-01

    This book surveys the state of the art in multidimensional, physically-correct visual texture modeling. Features: reviews the entire process of texture synthesis, including material appearance representation, measurement, analysis, compression, modeling, editing, visualization, and perceptual evaluation; explains the derivation of the most common representations of visual texture, discussing their properties, advantages, and limitations; describes a range of techniques for the measurement of visual texture, including BRDF, SVBRDF, BTF and BSSRDF; investigates the visualization of textural info

  16. Accurate wind farm development and operation. Advanced wake modelling

    Brand, A.; Bot, E.; Ozdemir, H. [ECN Unit Wind Energy, P.O. Box 1, NL 1755 ZG Petten (Netherlands); Steinfeld, G.; Drueke, S.; Schmidt, M. [ForWind, Center for Wind Energy Research, Carl von Ossietzky Universitaet Oldenburg, D-26129 Oldenburg (Germany); Mittelmeier, N. [REpower Systems SE, D-22297 Hamburg (Germany)]

    2013-11-15

    The ability is demonstrated to calculate wind farm wakes on the basis of ambient conditions that were calculated with an atmospheric model. Specifically, comparisons are described between predicted and observed ambient conditions, and between power predictions from three wind farm wake models and power measurements, for a single and a double wake situation. The comparisons are based on performance indicators and test criteria, with the objective to determine the percentage of predictions that fall within a given range about the observed value. The Alpha Ventus site is considered, which consists of a wind farm with the same name and the met mast FINO1. Data from the 6 REpower wind turbines and the FINO1 met mast were employed. The atmospheric model WRF predicted the ambient conditions at the location and the measurement heights of the FINO1 mast. While the predictability of the wind speed and the wind direction is reasonable if sufficiently sized tolerances are employed, it is practically impossible to predict the ambient turbulence intensity and vertical shear. Three wind farm wake models predicted the individual turbine powers: FLaP-Jensen and FLaP-Ainslie from ForWind Oldenburg, and FarmFlow from ECN. The reliabilities of the FLaP-Ainslie and FarmFlow wake models are of equal order, and higher than that of FLaP-Jensen. Any difference between the predictions from these models is most clear in the double wake situation, where FarmFlow slightly outperforms FLaP-Ainslie.
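For reference, the Jensen (Park) formulation underlying FLaP-Jensen reduces to a one-line velocity deficit. This is the textbook top-hat form with a generic wake decay constant, not necessarily the exact FLaP implementation:

```python
def jensen_wake_deficit(ct, rotor_radius, distance, k=0.075):
    """Jensen (Park) top-hat wake model.

    Fractional velocity deficit at downstream distance x behind a turbine
    with thrust coefficient ct; k is the wake decay constant (0.075 is a
    common onshore default, purely illustrative here).
    """
    return (1.0 - (1.0 - ct) ** 0.5) / (1.0 + k * distance / rotor_radius) ** 2

# e.g. ct = 0.8, 40 m rotor radius, 10 radii downstream
deficit = jensen_wake_deficit(0.8, 40.0, 400.0)
```

The deficit decays quadratically with distance, which is why the simple Jensen model tends to differ most from eddy-viscosity models (Ainslie) in multiple-wake situations like the double wake case above.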

  17. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    Mead, Alexander; Heymans, Catherine; Joudaki, Shahab; Heavens, Alan

    2015-01-01

    We present an optimised variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically-motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of $\\Lambda$CDM and $w$CDM models the halo-model power is accurate to $\\simeq 5$ per cent for $k\\leq 10h\\,\\mathrm{Mpc}^{-1}$ and $z\\leq 2$. We compare our results with recent revisions of the popular HALOFIT model and show that our predictions are more accurate. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limi...

  18. The slow-scale linear noise approximation: an accurate, reduced stochastic description of biochemical networks under timescale separation conditions

    Thomas Philipp

    2012-05-01

    Background: It is well known that the deterministic dynamics of biochemical reaction networks can be more easily studied if timescale separation conditions are invoked (the quasi-steady-state assumption). In this case the deterministic dynamics of a large network of elementary reactions are well described by the dynamics of a smaller network of effective reactions. Each of the latter represents a group of elementary reactions in the large network and has associated with it an effective macroscopic rate law. A popular method to achieve model reduction in the presence of intrinsic noise consists of using the effective macroscopic rate laws to heuristically deduce effective probabilities for the effective reactions, which then enables simulation via the stochastic simulation algorithm (SSA). The validity of this heuristic SSA method is a priori doubtful because the reaction probabilities for the SSA have only been rigorously derived from microscopic physics arguments for elementary reactions. Results: We here obtain, by rigorous means and in closed form, a reduced linear Langevin equation description of the stochastic dynamics of monostable biochemical networks in conditions characterized by small intrinsic noise and timescale separation. The slow-scale linear noise approximation (ssLNA), as the new method is called, is used to calculate the intrinsic noise statistics of enzyme and gene networks. The results agree very well with SSA simulations of the non-reduced network of elementary reactions. In contrast, the conventional heuristic SSA is shown to overestimate the size of noise for Michaelis-Menten kinetics, considerably underestimate the size of noise for Hill-type kinetics, and in some cases even miss the prediction of noise-induced oscillations. Conclusions: A new general method, the ssLNA, is derived and shown to correctly describe the statistics of intrinsic noise about the macroscopic concentrations under timescale separation conditions.
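The comparison between SSA noise statistics and a linear-noise prediction can be illustrated on the simplest possible network, a linear birth-death process, for which the LNA is exact (the stationary distribution is Poisson, so variance equals mean). The parameters below are arbitrary and the event-based sampling is a deliberately crude estimator:

```python
import random
import statistics

def gillespie_birth_death(k_birth, k_death, t_end, seed=1):
    """Exact SSA for the network 0 -> X (rate k_birth), X -> 0 (rate k_death * n).

    For this linear network the linear noise approximation is exact:
    stationary variance = mean = k_birth / k_death. Sampling the copy number
    at reaction events is a crude, slightly biased estimator of the
    stationary statistics, good enough for illustration.
    """
    rng = random.Random(seed)
    n, t, samples = 0, 0.0, []
    while t < t_end:
        total = k_birth + k_death * n
        t += rng.expovariate(total)       # time to the next reaction
        if rng.random() < k_birth / total:
            n += 1                        # birth event
        else:
            n -= 1                        # death event
        if t > t_end / 2:                 # discard the initial transient
            samples.append(n)
    return statistics.mean(samples), statistics.variance(samples)

mean_n, var_n = gillespie_birth_death(10.0, 1.0, 2000.0)
```

For k_birth/k_death = 10 both statistics come out close to 10, matching the Poisson prediction; for nonlinear (Michaelis-Menten or Hill-type) propensities the heuristic reduced SSA and the ssLNA would no longer agree so conveniently, which is the paper's point.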

  19. Accurate models of collisions in glow discharge simulation

    Very detailed, self-consistent kinetic glow discharge simulations are used to examine the effect of various models of collisional processes. The effects of allowing anisotropy in elastic electron collisions with neutral atoms instead of using the momentum transfer cross-section, the effects of using an isotropic distribution in inelastic electron-atom collisions, and the effects of including a Coulomb electron-electron collision operator are all described. It is shown that changes in any of the collisional models, especially the second and third described above, can make a profound difference in the simulation results. This confirms that many discharge simulations have great sensitivity to the physical and numerical approximations used. The results reinforce the importance of using a kinetic theory approach with highly realistic models of various collisional processes

  20. Accurate Sliding-Mode Control System Modeling for Buck Converters

    Høyerby, Mikkel Christian Wendelboe; Andersen, Michael Andreas E.

    2007-01-01

    This paper shows that classical sliding mode theory fails to correctly predict the output impedance of the highly useful sliding mode PID compensated buck converter. The reason for this is identified as the assumption of the sliding variable being held at zero during sliding mode, effectively modeling the hysteretic comparator as an infinite gain. Correct prediction of output impedance is shown to be enabled by the use of a more elaborate, finite-gain model of the hysteretic comparator, which takes the effects of time delay and finite switching frequency into account. The demonstrated modeling approach also predicts the self-oscillating switching action of the sliding-mode control system correctly. Analytical findings are verified by simulation as well as experimentally in a 10-30V/3A buck converter.

  1. An accurate and efficient Lagrangian sub-grid model

    Mazzitelli, I M; Lanotte, A S

    2014-01-01

    A computationally efficient model is introduced to account for the sub-grid scale velocities of tracer particles dispersed in statistically homogeneous and isotropic turbulent flows. The model embeds the multi-scale nature of turbulent temporal and spatial correlations, which is essential to reproduce multi-particle dispersion. It is capable of describing the Lagrangian diffusion and dispersion of temporally and spatially correlated clouds of particles. Although the model neglects intermittency corrections, we show that pair and tetrad dispersion results compare well with Direct Numerical Simulations of statistically isotropic and homogeneous $3D$ turbulence. This is in agreement with recent observations that deviations from self-similar pair dispersion statistics are rare events.

  2. Accurate modelling of flow induced stresses in rigid colloidal aggregates

    Vanni, Marco

    2015-07-01

    A method has been developed to estimate the motion and the internal stresses induced by a fluid flow on a rigid aggregate. The approach couples Stokesian dynamics and structural mechanics in order to take into account accurately the effect of the complex geometry of the aggregates on hydrodynamic forces and the internal redistribution of stresses. The intrinsic error of the method, due to the low-order truncation of the multipole expansion of the Stokes solution, has been assessed by comparison with the analytical solution for the case of a doublet in a shear flow. In addition, it has been shown that the error becomes smaller as the number of primary particles in the aggregate increases and hence it is expected to be negligible for realistic reproductions of large aggregates. The evaluation of internal forces is performed by an adaptation of the matrix methods of structural mechanics to the geometric features of the aggregates and to the particular stress-strain relationship that occurs at intermonomer contacts. A preliminary investigation on the stress distribution in rigid aggregates and their mode of breakup has been performed by studying the response to an elongational flow of both realistic reproductions of colloidal aggregates (made of several hundreds monomers) and highly simplified structures. A very different behaviour has been evidenced between low-density aggregates with isostatic or weakly hyperstatic structures and compact aggregates with highly hyperstatic configuration. In low-density clusters breakup is caused directly by the failure of the most stressed intermonomer contact, which is typically located in the inner region of the aggregate and hence originates the birth of fragments of similar size. On the contrary, breakup of compact and highly cross-linked clusters is seldom caused by the failure of a single bond. When this happens, it proceeds through the removal of a tiny fragment from the external part of the structure. More commonly, however

  3. Double Layered Sheath in Accurate HV XLPE Cable Modeling

    Gudmundsdottir, Unnur Stella; Silva, J. De; Bak, Claus Leth;

    2010-01-01

    This paper discusses modelling of high voltage AC underground cables. For long cables, when crossbonding points are present, not only the coaxial mode of propagation is excited during transient phenomena, but also the intersheath mode. This causes inaccurate simulation results for high frequency...

  4. Parameterized reduced-order models using hyper-dual numbers.

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses, and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
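The derivative machinery named in this report, hyper-dual numbers, can be sketched in a few lines: a single function evaluation yields the value, first derivative, and second derivative exactly (no step-size error). The toy class below handles polynomial expressions only and is an illustration, not the report's implementation:

```python
class HyperDual:
    """Hyper-dual number a + b*eps1 + c*eps2 + d*eps1*eps2, with eps1^2 = eps2^2 = 0."""

    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __add__(self, other):
        o = other if isinstance(other, HyperDual) else HyperDual(other)
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)
    __radd__ = __add__

    def __mul__(self, other):
        o = other if isinstance(other, HyperDual) else HyperDual(other)
        # product rule falls out of eps1^2 = eps2^2 = 0
        return HyperDual(self.a * o.a,
                         self.a * o.b + self.b * o.a,
                         self.a * o.c + self.c * o.a,
                         self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)
    __rmul__ = __mul__

def value_first_second(f, x):
    """Evaluate f once at x + eps1 + eps2; read off f(x), f'(x), f''(x)."""
    h = f(HyperDual(x, 1.0, 1.0, 0.0))
    return h.a, h.b, h.d

# f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2 and f''(x) = 6x
v, d1, d2 = value_first_second(lambda x: x * x * x + 2.0 * x, 2.0)
```

At x = 2 this returns exactly (12, 14, 12), the value and both derivatives, which is what makes the approach attractive for computing parameterization sensitivities.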

  5. Relevance of accurate Monte Carlo modeling in nuclear medical imaging

    Zaidi, H

    1999-01-01

    Monte Carlo techniques have become popular in different areas of medical physics with the advent of powerful computing systems. In particular, they have been extensively applied to simulate processes involving random behavior and to quantify physical parameters that are difficult or even impossible to calculate by experimental measurements. Recent nuclear medical imaging innovations such as single-photon emission computed tomography (SPECT), positron emission tomography (PET), and multiple emission tomography (MET) are ideal for Monte Carlo modeling techniques because of the stochastic nature of radiation emission, transport and detection processes. Factors which have contributed to the wider use include improved models of radiation transport processes, the practicality of application with the development of acceleration schemes and the improved speed of computers. This paper presents the derivation and methodological basis for this approach and critically reviews their areas of application in nuclear imaging. An ...

  6. Compact and Accurate Turbocharger Modelling for Engine Control

    Sorenson, Spencer C; Hendricks, Elbert; Magnússon, Sigurjón;

    2005-01-01

    With the current trend towards engine downsizing, the use of turbochargers to obtain extra engine power has become common. A great difficulty in the use of turbochargers is in the modelling of the compressor map. In general this is done by inserting the compressor map directly into the engine ECU… turbochargers with radial compressors for either Spark Ignition (SI) or diesel engines…

  7. Accurate numerical solutions for elastic-plastic models

    The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated

  8. Nonlinear thermal reduced model for Microwave Circuit Analysis

    Chang, Christophe; Sommet, Raphael; Quéré, Raymond; Dueme, Ph.

    2004-01-01

    With the constant increase of transistor power density, electrothermal modeling is becoming a necessity for accurate prediction of device electrical performance. For this reason, this paper deals with a methodology to obtain a precise nonlinear thermal model based on Model Order Reduction of a three-dimensional thermal Finite Element (FE) description. This reduced thermal model is based on the Ritz vector approach, which ensures the steady-state solution in every case. An equi...

  9. Reduced cost mission design using surrogate models

    Feldhacker, Juliana D.; Jones, Brandon A.; Doostan, Alireza; Hampton, Jerrad

    2016-01-01

    This paper uses surrogate models to reduce the computational cost associated with spacecraft mission design in three-body dynamical systems. Sampling-based least squares regression is used to project the system response onto a set of orthogonal bases, providing a representation of the ΔV required for rendezvous as a reduced-order surrogate model. Models are presented for mid-field rendezvous of spacecraft in orbits in the Earth-Moon circular restricted three-body problem, including a halo orbit about the Earth-Moon L2 libration point (EML-2) and a distant retrograde orbit (DRO) about the Moon. In each case, the initial position of the spacecraft, the time of flight, and the separation between the chaser and the target vehicles are all considered as design inputs. The results show that sample sizes on the order of 10² are sufficient to produce accurate surrogates, with RMS errors reaching 0.2 m/s for the halo orbit and falling below 0.01 m/s for the DRO. A single function call to the resulting surrogate is up to two orders of magnitude faster than computing the same solution using full fidelity propagators. The expansion coefficients solved for in the surrogates are then used to conduct a global sensitivity analysis of the ΔV on each of the input parameters, which identifies the separation between the spacecraft as the primary contributor to the ΔV cost. Finally, the models are demonstrated to be useful for cheap evaluation of the cost function in constrained optimization problems seeking to minimize the ΔV required for rendezvous. These surrogate models show significant advantages for mission design in three-body systems, in terms of both computational cost and capabilities, over traditional Monte Carlo methods.
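The recipe above, fit an inexpensive polynomial surrogate to roughly 10² samples of an expensive model by least squares, then evaluate it cheaply, can be sketched in one dimension. The "expensive" function here is a cheap stand-in, not the paper's ΔV propagator, and the Chebyshev basis is one convenient choice of orthogonal bases:

```python
import numpy as np

rng = np.random.default_rng(0)
expensive = lambda x: np.exp(-x) * np.sin(3.0 * x)  # stand-in for a costly propagator

# sample ~10^2 design points and fit a degree-10 Chebyshev surrogate by least squares
x_train = rng.uniform(-1.0, 1.0, 100)
coeffs = np.polynomial.chebyshev.chebfit(x_train, expensive(x_train), deg=10)

# the surrogate is now a cheap function call; check its accuracy off-sample
x_test = np.linspace(-1.0, 1.0, 500)
surrogate = np.polynomial.chebyshev.chebval(x_test, coeffs)
rms = np.sqrt(np.mean((surrogate - expensive(x_test)) ** 2))
```

For a smooth response like this one, a hundred samples drive the off-sample RMS error far below the sampling noise, mirroring the paper's finding that O(10²) samples suffice.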

  10. An accurate halo model for fitting non-linear cosmological power spectra and baryonic feedback models

    Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.

    2015-12-01

    We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10h Mpc-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.

  11. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements.

    Ajay Seth

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual's anthropometry. We compared the model to "gold standard" bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2 mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models.
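The first two degrees of freedom, elevation and abduction on an ellipsoidal thoracic surface, amount to a two-angle parameterization of an ellipsoid. A hypothetical parameterization (not OpenSim's actual joint code, and with made-up axis conventions and radii) can be written as:

```python
import math

def ellipsoid_point(abduction, elevation, radii):
    """Map two joint angles (radians) to a point on an ellipsoidal surface.

    Hypothetical parameterization for illustration; the actual scapulothoracic
    joint in OpenSim uses its own axis conventions and customizable dimensions.
    """
    a, b, c = radii
    return (a * math.cos(elevation) * math.sin(abduction),
            b * math.sin(elevation),
            c * math.cos(elevation) * math.cos(abduction))

# every (abduction, elevation) pair lands exactly on the ellipsoid,
# which is how the joint constrains the scapula to the thoracic surface
x, y, z = ellipsoid_point(0.4, 0.7, (9.0, 14.0, 7.0))
residual = (x / 9.0) ** 2 + (y / 14.0) ** 2 + (z / 7.0) ** 2 - 1.0
```

Because the two angles can only produce points on the surface, the constraint is enforced by construction rather than by penalizing marker deviations, which is what reduces the space of possible joint movements.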

  12. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements.

    Seth, Ajay; Matias, Ricardo; Veloso, António P; Delp, Scott L

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual's anthropometry. We compared the model to "gold standard" bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2 mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761

  13. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979

  14. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. PMID:22658682
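
The octant-identifier step described above can be sketched as bit codes produced by recursive subdivision of a bounding cube; the densest octant then falls out of a simple count. The function names and the fixed unit-cube bounds are illustrative assumptions, and the sketch omits the MapReduce distribution (in Hadoop terms, the ID assignment would be the map phase and the counting the reduce phase):

```python
from collections import Counter

def octant_id(point, lo, hi, depth):
    """Assign an octant identifier to a 3D point by recursive subdivision.

    At each level the current cube is split into 8 octants; 3 bits
    (one per axis) select the child octant containing the point.
    """
    code = 0
    lo, hi = list(lo), list(hi)  # working copies of the cube bounds
    for _ in range(depth):
        bits = 0
        for axis in range(3):
            mid = 0.5 * (lo[axis] + hi[axis])
            bits <<= 1
            if point[axis] >= mid:
                bits |= 1
                lo[axis] = mid  # descend into the upper half
            else:
                hi[axis] = mid  # descend into the lower half
        code = (code << 3) | bits
    return code

def densest_octant(points, lo=(0.0, 0.0, 0.0), hi=(1.0, 1.0, 1.0), depth=3):
    """Cluster points by octant ID and return (id, count) of the densest octant."""
    counts = Counter(octant_id(p, lo, hi, depth) for p in points)
    return counts.most_common(1)[0]
```

The subdivision depth plays the role of a clustering resolution: deeper trees separate near-native geometries from decoys more finely, at the cost of smaller per-octant counts.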

  15. Accurate Modeling of the Polysilicon-Insulator-Well (PIW) Capacitor in CMOS Technologies

    JAMASB, Shahriar; MOOSAVİ, Roya

    2015-01-01

    A practical method enabling rapid development of an accurate device model for the PIW MOS capacitor is introduced. The simultaneous improvement in accuracy and development time can be achieved without having to perform extensive measurements on specialized test structures, by taking advantage of the MOS transistor model parameters routinely extracted in support of analog circuit design activities. This method affords accurate modeling of the voltage coefficient of capacitance over th...

  16. Reducing the Need for Accurate Stream Flow Forecasting for Water Supply Planning by Augmenting Reservoir Operations with Seawater Desalination and Wastewater Recycling

    Bhushan, R.; Ng, T. L.

    2014-12-01

    Accurate stream flow forecasts are critical for reservoir operations for water supply planning. As the world urban population increases, the demand for water in cities is also increasing, making accurate forecasts even more important. However, accurate forecasting of stream flows is difficult owing to short- and long-term weather variations. We propose to reduce this need for accurate stream flow forecasts by augmenting reservoir operations with seawater desalination and wastewater recycling. We develop a robust operating policy for the joint operation of the three sources. With the joint model, we tap into the unlimited reserve of seawater through desalination, and make use of local supplies of wastewater through recycling. However, both seawater desalination and wastewater recycling are energy intensive and relatively expensive. Reservoir water, on the other hand, is generally cheaper but is limited and variable in its availability, increasing the risk of water shortage during extreme climate events. We operate the joint system by optimizing it using a genetic algorithm to maximize water supply reliability and resilience while minimizing vulnerability, subject to a budget constraint and for a given stream flow forecast. To compute the total cost of the system, we take into account the pumping cost of transporting reservoir water to its final destination, and the capital and operating costs of desalinating seawater and recycling wastewater. We produce results for different hydro-climatic regions based on artificial stream flows we generate using a simple hydrological model and an autoregressive time series model. The artificial flows are generated from precipitation and temperature data from the Canadian Regional Climate Model for present and future scenarios. We observe that the joint operation is able to effectively minimize the negative effects of stream flow forecast uncertainty on system performance at an overall cost that is not significantly greater than the cost of a

  17. Generalized Reduced Order Model Generation Project

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  18. PconsD: ultra rapid, accurate model quality assessment for protein structure prediction

    Skwark, M. J.; Elofsson, A.

    2013-01-01

    Clustering methods are often needed for accurately assessing the quality of modeled protein structures. A recent blind evaluation of quality assessment methods in CASP10 showed that there is very little difference between many different methods as far as ranking models and selecting the best model are concerned. When comparing many models, the computational cost of the model comparison can become significant. Here, we present PconsD, a very fast, stream-computing method for distance-driven model qua...

  19. Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images

    Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.

    1999-01-01

    Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.

  20. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    Leng, Wei [Chinese Academy of Sciences; Ju, Lili [University of South Carolina; Gunzburger, Max [Florida State University; Price, Stephen [Los Alamos National Laboratory; Ringler, Todd [Los Alamos National Laboratory,

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.

  1. A Reducing Resistance to Change Model

    Daniela Braduţanu

    2015-01-01

    The aim of this scientific paper is to present an original reducing resistance to change model. After analyzing the existing literature, I have concluded that the resistance to change subject has gained popularity over the years, but there are not many models that could help managers implement an organizational change process more smoothly and, at the same time, reduce employees’ resistance effectively. The proposed model is very helpful for managers and change agents who are c...

  2. Bilinear reduced order approximate model of parabolic distributed solar collectors

    Elmetennani, Shahrazed

    2015-07-01

    This paper proposes a novel, low dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low dimensional bilinear state representation, enables the reproduction of the heat transfer dynamics along the collector tube for system analysis. Moreover, because it is presented as a reduced-order bilinear state-space model, the well-established control theory for this class of systems can be applied. The approximation efficiency has been proven by several simulation tests, which have been performed considering parameters of the Acurex field with real external working conditions. Model accuracy has been evaluated by comparison to the analytical solution of the hyperbolic distributed model and its semi-discretized approximation, highlighting the benefits of using the proposed numerical scheme. Furthermore, model sensitivity to the different parameters of the Gaussian interpolation has been studied.
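
A bilinear state representation of the kind described above has the generic form dx/dt = A x + (N x) u + B u. A minimal forward-Euler simulator for this model class (a generic sketch under that assumed form; it is not the collector model itself and omits the Gaussian interpolation that produces A, N and B):

```python
import numpy as np

def simulate_bilinear(A, N, B, x0, u, dt):
    """Forward-Euler simulation of a bilinear state-space model
        dx/dt = A x + (N x) u + B u
    with a scalar input sequence u. Returns the state trajectory,
    one row per time step (including the initial state).
    """
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for uk in u:
        # bilinear term (N x) * u is what distinguishes this class
        # from a plain linear state-space model
        x = x + dt * (A @ x + (N @ x) * uk + B * uk)
        traj.append(x.copy())
    return np.array(traj)
```

The bilinear term N x u is what lets the model capture input-dependent dynamics (e.g. a transport speed that scales with flow rate) while still admitting the dedicated control theory the abstract refers to.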

  3. Energy-accurate simulation models for evaluating the energy efficiency; Energieexakte Simulationsmodelle zur Bewertung der Energieeffizienz

    Blank, Frederic; Roth-Stielow, Joerg [Stuttgart Univ. (Germany). Inst. fuer Leistungselektronik und Elektrische Antriebe

    2011-07-01

    For the evaluation of the energy efficiency of electrical drive systems in start-stop operation, the amount of energy per cycle is used. This comparison variable, "energy", is determined by simulating the whole drive system with special simulation models. These models have to be energy-accurate in order to capture the significant losses. Two simulation models optimized for these simulations are presented: models of a permanent magnet synchronous motor and of a frequency inverter. The models are parameterized with measurements and the calculations are verified. Using these models, motion cycles can be simulated and the necessary energy per cycle can be determined. (orig.)

  4. In-Situ Residual Tracking in Reduced Order Modelling

    Joseph C. Slater

    2002-01-01

    Proper orthogonal decomposition (POD)-based reduced-order modelling is demonstrated to be a weighted residual technique similar to Galerkin's method. Estimates of the weighted residuals of neglected modes are used to determine the relative importance of neglected modes to the model. The cumulative effect of neglected modes can be used to estimate the error in the reduced-order model. Thus, once the snapshots have been obtained under prescribed training conditions, the need to perform full-order simulations for comparison is eliminated. This has the potential to allow the analyst to initiate further training when the reduced modes are no longer sufficient to accurately represent the predominant phenomenon of interest. The response of a fluid moving at Mach 1.2 above a panel to a forced localized oscillation of the panel, at and away from the training operating conditions, is used to demonstrate the evaluation method.
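
The snapshot machinery behind POD can be sketched with an SVD: the leading left singular vectors are the POD modes, and the discarded singular values give a cheap estimate of what truncation neglects. This is a generic POD sketch with hypothetical helper names; the paper's in-situ residual tracking is more refined than this simple energy ratio:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Compute the leading r POD modes of a snapshot matrix.

    snapshots: (n, m) array whose columns are state snapshots.
    Returns (modes, singular_values); modes is (n, r) with
    orthonormal columns.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

def residual_energy(s, r):
    """Fraction of snapshot 'energy' (squared singular values) neglected
    by truncating to r modes -- a crude proxy for the importance of the
    dropped modes."""
    e = s ** 2
    return e[r:].sum() / e.sum()
```

In use, one would watch a residual estimate like this during reduced-order simulation and trigger retraining (new snapshots) once it grows beyond a tolerance.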

  5. Development of an accurate cavitation coupled spray model for diesel engine simulation

    Highlights: • A new hybrid spray model was implemented into the KIVA4 CFD code. • A cavitation sub-model was coupled with the classic KHRT model. • The new model predicts better than classical spray models. • The new model predicts spray and combustion characteristics accurately. - Abstract: The combustion process in diesel engines is essentially controlled by the dynamics of the fuel spray. Thus, accurate modeling of the spray process is vital to accurately modeling the combustion process in diesel engines. In this work, a new hybrid spray model was developed by coupling a cavitation-induced spray sub-model to the KHRT spray model. This new model was implemented into the KIVA4 CFD code. The newly developed spray model was extensively validated against experimental data for non-vaporizing and vaporizing sprays obtained from a constant volume combustion chamber (CVCC), available in the literature. The results were compared on the basis of liquid length, spray penetration and spray images. The model was also validated against engine combustion characteristics data such as in-cylinder pressure and heat release rate. The new spray model captures both spray characteristics and combustion characteristics very well.

  6. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, UV(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing UV, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that UV accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model

  7. An accurate and efficient system model of iterative image reconstruction in high-resolution pinhole SPECT for small animal research

    Accurate modeling of the photon acquisition process in pinhole SPECT is essential for optimizing resolution. In this work, the authors develop an accurate system model in which the pinhole's finite aperture and depth-dependent geometric sensitivity are explicitly included. To achieve high-resolution pinhole SPECT, the voxel size is usually set in the sub-millimeter range, so the total number of image voxels increases accordingly. It is inevitable that a system matrix modeling a variety of favorable physical factors will become extremely sophisticated. An efficient implementation of such an accurate system model is proposed in this research. We first use geometric symmetries to reduce redundant entries in the matrix. Due to the sparseness of the matrix, only non-zero terms are stored. A novel center-to-radius recording rule is also developed to effectively describe the relation between a voxel and its related detectors at every projection angle. The proposed system matrix is also suitable for multi-threaded computing. Finally, the accuracy and effectiveness of the proposed system model are evaluated on a workstation equipped with two quad-core Intel Xeon processors.

  8. Mining tandem mass spectral data to develop a more accurate mass error model for peptide identification.

    Fu, Yan; Gao, Wen; He, Simin; Sun, Ruixiang; Zhou, Hu; Zeng, Rong

    2007-01-01

    The assumption on the mass error distribution of fragment ions plays a crucial role in peptide identification by tandem mass spectra. Previous mass error models are the simplistic uniform or normal distribution with empirically set parameter values. In this paper, we propose a more accurate mass error model, namely conditional normal model, and an iterative parameter learning algorithm. The new model is based on two important observations on the mass error distribution, i.e. the linearity between the mean of mass error and the ion mass, and the log-log linearity between the standard deviation of mass error and the peak intensity. To our knowledge, the latter quantitative relationship has never been reported before. Experimental results demonstrate the effectiveness of our approach in accurately quantifying the mass error distribution and the ability of the new model to improve the accuracy of peptide identification. PMID:17990507
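
The two empirical trends described above can be reproduced on synthetic data with two least-squares fits: the mean error against ion mass, and the log of the error spread against the log of peak intensity, estimated on intensity-binned residuals. This is a hypothetical fitting sketch, not the authors' iterative parameter-learning algorithm:

```python
import numpy as np

def fit_conditional_normal_trends(mass, intensity, mass_error, nbins=10):
    """Fit the two empirical trends behind a conditional-normal mass
    error model: (1) mean error linear in ion mass, and (2) log(std of
    error) linear in log(peak intensity). Hypothetical helper names.
    """
    # (1) linear trend of the mean error versus ion mass
    a, b = np.polyfit(mass, mass_error, 1)
    # remove the mean trend so the spread reflects noise only
    resid = mass_error - (a * mass + b)
    # (2) log-log trend of the spread versus intensity, on intensity bins
    order = np.argsort(intensity)
    log_i, log_s = [], []
    for idx in np.array_split(order, nbins):
        log_i.append(np.log(intensity[idx]).mean())
        log_s.append(np.log(resid[idx].std()))
    c, d = np.polyfit(log_i, log_s, 1)
    return (a, b), (c, d)
```

Given such fits, the error of a fragment peak with mass m and intensity I would be scored against a normal distribution with mean a·m + b and standard deviation exp(d)·I^c, instead of a single global tolerance.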

  9. Efficient and Accurate Log-Levy Approximations of Levy-Driven LIBOR Models

    Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David

    2012-01-01

    -driven LIBOR model and aim to develop accurate and efficient log-Lévy approximations for the dynamics of the rates. The approximations are based on the truncation of the drift term and on Picard approximation of suitable processes. Numerical experiments for forward-rate agreements, caps, swaptions and sticky...

  10. In-situ measurements of material thermal parameters for accurate LED lamp thermal modelling

    Vellvehi, M.; Perpina, X.; Jorda, X.; Werkhoven, R.J.; Kunen, J.M.G.; Jakovenko, J.; Bancken, P.; Bolt, P.J.

    2013-01-01

    This work deals with the extraction of key thermal parameters for accurate thermal modelling of LED lamps: air exchange coefficient around the lamp, emissivity and thermal conductivity of all lamp parts. As a case study, an 8W retrofit lamp is presented. To assess simulation results, temperature is

  11. Development of an Accurate Urban Modeling System Using CAD/GIS Data for Atmosphere Environmental Simulation

    Tomosato Takada; Kazuo Kashiyama

    2008-01-01

    This paper presents an urban modeling system using CAD/GIS data for atmospheric environmental simulation, such as wind flow and contaminant spread in urban areas. The CAD data is used for shape modeling of high-rise buildings and civil structures with complicated shapes, since such data is not included accurately in the 3D GIS data. An unstructured mesh based on tetrahedral elements is employed in order to express urban structures with complicated shapes accurately. It is difficult to assess the quality of the shape model and mesh with conventional visualization techniques. In this paper, stereoscopic visualization using virtual reality (VR) technology is employed for the verification of the quality of the shape model and mesh. The present system is applied to atmospheric environmental simulation in an urban area and is shown to be a useful planning and design tool for investigating atmospheric environmental problems.

  12. Accurate Monte Carlo modelling of the back compartments of SPECT cameras

    Today, new single photon emission computed tomography (SPECT) reconstruction techniques rely on accurate Monte Carlo (MC) simulations to optimize reconstructed images. However, existing MC scintillation camera models which usually include an accurate description of the collimator and crystal, lack correct implementation of the gamma camera's back compartments. In the case of dual isotope simultaneous acquisition (DISA), where backscattered photons from the highest energy isotope are detected in the imaging energy window of the second isotope, this approximation may induce simulation errors. Here, we investigate the influence of backscatter compartment modelling on the simulation accuracy of high-energy isotopes. Three models of a scintillation camera were simulated: a simple model (SM), composed only of a collimator and a NaI(Tl) crystal; an intermediate model (IM), adding a simplified description of the backscatter compartments to the previous model and a complete model (CM), accurately simulating the materials and geometries of the camera. The camera models were evaluated with point sources (67Ga, 99mTc, 111In, 123I, 131I and 18F) in air without a collimator, in air with a collimator and in water with a collimator. In the latter case, sensitivities and point-spread functions (PSFs) simulated in the photopeak window with the IM and CM are close to the measured values (error below 10.5%). In the backscatter energy window, however, the IM and CM overestimate the FWHM of the detected PSF by 52% and 23%, respectively, while the SM underestimates it by 34%. The backscatter peak fluence is also overestimated by 20% and 10% with the IM and CM, respectively, whereas it is underestimated by 60% with the SM. The results show that an accurate description of the backscatter compartments is required for SPECT simulations of high-energy isotopes (above 300 keV) when the backscatter energy window is of interest.

  13. Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation

    Marc Pierrot-Deseilligny

    2011-12-01

    The accurate 3D documentation of architecture and heritage is becoming very common and required in different application contexts. The potential of the image-based approach is nowadays very well known, but there is a lack of reliable, precise and flexible solutions, possibly open-source, which could be used for metric and accurate documentation or digital conservation and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed in order to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experiences, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).

  14. Reducing the invasiveness of modelling frameworks

    Donchyts, G.; Baart, F.

    2010-12-01

    There are several modelling frameworks available that allow environmental models to exchange data with other models. Many efforts have been made in past years promoting solutions aimed at integrating different numerical models with each other as well as at simplifying the way to set them up, enter the data, and run them. While the development of many modelling frameworks concentrated on the interoperability of different model engines, several standards were introduced, such as ESMF, OMS and OpenMI. One of the issues with applying modelling frameworks is invasiveness: the more the model has to know about the framework, the more intrusive the framework is. Another issue is that many environmental models are written in a procedural style in FORTRAN, which is one of the few languages that does not have a proper interface to other programming languages, whereas most modelling frameworks are written in object-oriented languages like Java/C# (the FORTRAN modelling framework ESMF is also object-oriented). In this research we show how the application of domain-driven, object-oriented development techniques to environmental models can reduce the invasiveness of modelling frameworks. Our approach is based on four steps: 1) application of OO techniques and reflection to the existing model to allow introspection; 2) programming-language interoperability between a model written in a procedural programming language and a modelling framework written in an object-oriented programming language; 3) domain mapping between the data types used by the model and the other components being integrated; 4) connecting models using a framework (wrapper). We compare coupling of an existing model as it was to the same model adapted using the four-step approach. We connect both versions of the model using two different integrated modelling frameworks. As an example model we use the coastal morphological model XBeach. By adapting this model it allows for

  15. An accurate model for numerical prediction of piezoelectric energy harvesting from fluid structure interaction problems

    Piezoelectric energy harvesting (PEH) from ambient energy sources, particularly vibrations, has attracted considerable interest throughout the last decade. Since fluid flow has a high energy density, it is one of the best candidates for PEH. Indeed, piezoelectric energy harvesting from fluid flow takes the form of a natural three-way coupling of the turbulent fluid flow, the electromechanical effect of the piezoelectric material and the electrical circuit. There are some experimental and numerical studies on piezoelectric energy harvesting from fluid flow in the literature. Nevertheless, an accurate model for predicting the characteristics of this three-way coupling has not yet been developed. In the present study, an accurate model of this triple coupling is developed and validated against experimental results. A new code based on this model is developed on the openFOAM platform.

  16. Particle Image Velocimetry Measurements in an Anatomically-Accurate Scaled Model of the Mammalian Nasal Cavity

    Rumple, Christopher; Krane, Michael; Richter, Joseph; Craven, Brent

    2013-11-01

    The mammalian nose is a multi-purpose organ that houses a convoluted airway labyrinth responsible for respiratory air conditioning, filtering of environmental contaminants, and chemical sensing. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of respiratory airflow and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture an anatomically-accurate transparent model for stereoscopic particle image velocimetry (SPIV) measurements. Challenges in the design and manufacture of an index-matched anatomical model are addressed. PIV measurements are presented, which are used to validate concurrent computational fluid dynamics (CFD) simulations of mammalian nasal airflow. Supported by the National Science Foundation.

  17. Bayesian reduced-order models for multiscale dynamical systems

    Koutsourelakis, P S

    2010-01-01

    While existing mathematical descriptions can accurately account for phenomena at microscopic scales (e.g. molecular dynamics), these are often high-dimensional, stochastic and their applicability over macroscopic time scales of physical interest is computationally infeasible or impractical. In complex systems, with limited physical insight on the coherent behavior of their constituents, the only available information is data obtained from simulations of the trajectories of huge numbers of degrees of freedom over microscopic time scales. This paper discusses a Bayesian approach to deriving probabilistic coarse-grained models that simultaneously address the problems of identifying appropriate reduced coordinates and the effective dynamics in this lower-dimensional representation. At the core of the models proposed lie simple, low-dimensional dynamical systems which serve as the building blocks of the global model. These approximate the latent, generating sources and parameterize the reduced-order dynamics. We d...

  18. Towards more accurate wind and solar power prediction by improving NWP model physics

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of the remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts; consequently, well-timed energy trading on the stock market and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two research projects in the field of renewable energy, namely ORKA and EWeLiNE. Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m above ground are used to estimate the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  19. The accurate and comprehensive model of thin fluid flows with inertia on curved substrates

    Roberts, A J; Li, Zhenquan

    1999-01-01

    Consider the 3D flow of a viscous Newtonian fluid upon a curved 2D substrate when the fluid film is thin as occurs in many draining, coating and biological flows. We derive a comprehensive model of the dynamics of the film, the model being expressed in terms of the film thickness and the average lateral velocity. Based upon centre manifold theory, we are assured that the model accurately includes the effects of the curvature of substrate, gravitational body force, fluid inertia and dissipatio...

  20. A Reduced High Frequency Transformer Model To Detect The Partial Discharge Locations

    El-Sayed M. El-Refaie

    2014-03-01

    Transformer modelling is the first step in improving partial discharge localization techniques. Different transformer models have been used for this purpose. This paper presents a reduced transformer model that can be used accurately for partial discharge localization. The model is investigated in the Alternative Transients Program (ATPDraw) for the partial discharge localization application. A comparison between different transformer models is studied; the results achieved with the reduced model demonstrate high efficiency.

  1. An improved model for reduced-order physiological fluid flows

    San, Omer; 10.1142/S0219519411004666

    2012-01-01

    An improved one-dimensional mathematical model based on Pulsed Flow Equations (PFE) is derived by integrating the axial component of the momentum equation over the transient Womersley velocity profile, providing a dynamic momentum equation whose coefficients are smoothly varying functions of the spatial variable. The resulting momentum equation along with the continuity equation and pressure-area relation form our reduced-order model for physiological fluid flows in one dimension, and are aimed at providing accurate and fast-to-compute global models for physiological systems represented as networks of quasi one-dimensional fluid flows. The consequent nonlinear coupled system of equations is solved by the Lax-Wendroff scheme and is then applied to an open model arterial network of the human vascular system containing the largest fifty-five arteries. The proposed model with functional coefficients is compared with current classical one-dimensional theories which assume steady state Hagen-Poiseuille velocity pro...
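
The Lax-Wendroff scheme named above can be illustrated on the simplest hyperbolic problem, scalar linear advection with periodic boundaries. This is a generic illustration of the scheme, not the coupled continuity/momentum system of the arterial-network model:

```python
import numpy as np

def lax_wendroff_step(u, c, dt, dx):
    """One Lax-Wendroff step for linear advection u_t + c u_x = 0 on a
    periodic grid; second-order accurate in both space and time."""
    nu = c * dt / dx  # Courant number; |nu| <= 1 is required for stability
    up = np.roll(u, -1)  # u_{j+1}
    um = np.roll(u, 1)   # u_{j-1}
    # central difference for the advection term plus the second-order
    # Lax-Wendroff correction (a diffusion-like term scaled by nu**2)
    return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)
```

After one full period of advection the profile should return to its initial shape, up to the scheme's small dispersive phase error; for the nonlinear arterial system, flux-form variants of the same two-step structure are used.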

  2. Protein Structure Idealization: How accurately is it possible to model protein structures with dihedral angles?

    Cui, Xuefeng; Li, Shuai Cheng; Bu, Dongbo; Alipanahi, Babak; Li, Ming

    2013-01-01

    Previous studies have shown that, on high resolution protein structure data, bond lengths and angles of the same type fit Gaussian distributions well, with small standard deviations. The mean values of these Gaussian distributions have been widely used as ideal bond lengths and angles in bioinformatics. However, we are not aware of any research evaluating how accurately protein structures can be modelled with dihedral angles and ideal bond lengths and angles. Here, we introduce the protein structur...

  3. Blasting Vibration Safety Criterion Analysis with Equivalent Elastic Boundary: Based on Accurate Loading Model

    Qingwen Li; Lan Qiao; Gautam Dasgupta; Siwei Ma; Liping Wang; Jianghui Dong

    2015-01-01

    In tunnel and underground space engineering, the blasting wave attenuates from a shock wave to a stress wave to an elastic seismic wave in the host rock. The host rock forms a crushed zone, a fractured zone, and an elastic seismic zone under the blasting loading and waves. In this paper, an accurate mathematical dynamic loading model was built, and the crushed and fractured zones were considered as the blasting vi...

  4. Accurate and interpretable nanoSAR models from genetic programming-based decision tree construction approaches.

    Oksel, Ceyda; Winkler, David A; Ma, Cai Y; Wilkins, Terry; Wang, Xue Z

    2016-09-01

    The number of engineered nanomaterials (ENMs) being exploited commercially is growing rapidly, due to the novel properties they exhibit. Clearly, it is important to understand and minimize any risks to health or the environment posed by the presence of ENMs. Data-driven models that decode the relationships between the biological activities of ENMs and their physicochemical characteristics provide an attractive means of maximizing the value of scarce and expensive experimental data. Although such structure-activity relationship (SAR) methods have become very useful tools for modelling nanotoxicity endpoints (nanoSAR), they have limited robustness and predictivity and, most importantly, interpretation of the models they generate is often very difficult. New computational modelling tools or new ways of using existing tools are required to model the relatively sparse and sometimes lower quality data on the biological effects of ENMs. The most commonly used SAR modelling methods work best with large datasets, are not particularly good at feature selection, can be relatively opaque to interpretation, and may not account for nonlinearity in the structure-property relationships. To overcome these limitations, we describe the application of a novel algorithm, a genetic programming-based decision tree construction tool (GPTree) to nanoSAR modelling. We demonstrate the use of GPTree in the construction of accurate and interpretable nanoSAR models by applying it to four diverse literature datasets. We describe the algorithm and compare model results across the four studies. We show that GPTree generates models with accuracies equivalent to or superior to those of prior modelling studies on the same datasets. GPTree is a robust, automatic method for generation of accurate nanoSAR models with important advantages that it works with small datasets, automatically selects descriptors, and provides significantly improved interpretability of models. PMID:26956430

  5. A rapid and accurate two-point ray tracing method in horizontally layered velocity model

    TIAN Yue; CHEN Xiao-fei

    2005-01-01

    A rapid and accurate method for two-point ray tracing in a horizontally layered velocity model is presented in this paper. Numerical experiments show that this method provides stable and rapid convergence with high accuracy, regardless of the 1-D velocity structure, takeoff angle and epicentral distance. This two-point ray tracing method is compared with the pseudobending technique and the method advanced by Kim and Baag (2002). It turns out that the method in this paper is much more efficient and accurate than the pseudobending technique, but is only applicable to 1-D velocity models. Kim's method is equivalent to ours for cases without large takeoff angles, but it fails to work when the takeoff angle is close to 90°. The method presented in this paper, on the other hand, is applicable to any takeoff angle with rapid and accurate convergence. Therefore, this method is a good choice for two-point ray tracing problems in horizontally layered velocity models and is efficient enough to be applied to a wide range of seismic problems.
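
In a horizontally layered model the forward problem is just Snell's law applied layer by layer, and one simple (if limited) way to do two-point tracing is to bisect on the ray parameter. This is not the authors' algorithm, and, like the bracket-based searches criticized above, it only handles down-going rays with takeoff angles below 90°; the layer velocities and thicknesses are illustrative.

```python
import math

def offset(p, v, h):
    """Horizontal distance of a down-going ray with ray parameter p
    through horizontal layers of velocities v and thicknesses h (Snell's law)."""
    return sum(hi * p * vi / math.sqrt(1.0 - (p * vi) ** 2) for vi, hi in zip(v, h))

def trace_two_point(v, h, x_target, tol=1e-10):
    """Find the ray parameter reaching epicentral distance x_target by bisection;
    offset(p) is monotonically increasing in p, so a bracket always narrows."""
    p_lo, p_hi = 0.0, 0.9999 / max(v)    # just below the critical ray parameter
    for _ in range(200):
        p = 0.5 * (p_lo + p_hi)
        if offset(p, v, h) < x_target:
            p_lo = p
        else:
            p_hi = p
        if p_hi - p_lo < tol:
            break
    return 0.5 * (p_lo + p_hi)

# three-layer crustal model (velocities in km/s, thicknesses in km); values illustrative
v = [3.0, 5.0, 6.5]
h = [2.0, 10.0, 20.0]
p = trace_two_point(v, h, x_target=15.0)
print(p, offset(p, v, h))                # offset at the recovered p is ~15.0
```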

  6. Accelerated gravitational-wave parameter estimation with reduced order modeling

    Canizares, Priscilla; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2014-01-01

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current parameter estimation approaches for such scenarios can lead to computationally intractable problems in practice. Therefore there is a pressing need for new, fast and accurate Bayesian inference techniques. In this letter we demonstrate that a reduced order modeling approach enables rapid parameter estimation studies. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of non-spinning binary neutron star inspirals can be sped up by a factor of 30 for the early advanced detectors' configurations. This speed-up will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which would otherwise take months to complete. Although thes...

  7. A Reducing Resistance to Change Model

    Daniela Braduţanu

    2015-10-01

    Full Text Available The aim of this paper is to present an original model for reducing resistance to change. After analyzing the existing literature, I have concluded that resistance to change has gained popularity as a research subject over the years, but there are few models that help managers implement an organizational change process more smoothly while effectively reducing employees' resistance. The proposed model is helpful for managers and change agents who are confronted with a high degree of resistance when trying to implement a new change, as well as for researchers. The key contribution of this paper is the observation that resistance is not necessarily bad; used appropriately, it can actually represent an asset that managers should draw on.

  8. An Accurate Thermoviscoelastic Rheological Model for Ethylene Vinyl Acetate Based on Fractional Calculus

    Marco Paggi

    2015-01-01

    Full Text Available The thermoviscoelastic rheological properties of ethylene vinyl acetate (EVA used to embed solar cells have to be accurately described to assess the deformation and the stress state of photovoltaic (PV modules and their durability. In the present work, considering the stress as dependent on a noninteger derivative of the strain, a two-parameter model is proposed to approximate the power-law relation between the relaxation modulus and time for a given temperature level. Experimental validation with EVA uniaxial relaxation data at different constant temperatures proves the great advantage of the proposed approach over classical rheological models based on exponential solutions.
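
A two-parameter fractional element of this kind (often called a Scott-Blair element, with stress proportional to a non-integer-order derivative of the strain) has a relaxation modulus that decays as a power law, E(t) = E_a t^(-a) / Gamma(1 - a). The sketch below, with made-up parameter values rather than EVA data, recovers (E_a, a) from relaxation data by a log-log least-squares fit.

```python
import math

def scott_blair_modulus(t, E_a, a):
    """Relaxation modulus of a fractional (Scott-Blair) element:
    E(t) = E_a * t**(-a) / Gamma(1 - a), for 0 < a < 1."""
    return E_a * t ** (-a) / math.gamma(1.0 - a)

def fit_power_law(ts, Es):
    """Recover (E_a, a) from relaxation data via a straight line in log-log space."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(E) for E in Es]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    a = -slope                                    # power-law exponent
    E_a = math.exp(ybar - slope * xbar) * math.gamma(1.0 - a)
    return E_a, a

# synthetic relaxation data from a fractional element (illustrative parameters)
E_true, a_true = 10.0, 0.3
ts = [10 ** k for k in range(-2, 4)]              # six decades of time
Es = [scott_blair_modulus(t, E_true, a_true) for t in ts]
print(fit_power_law(ts, Es))                      # recovers (10.0, 0.3)
```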

  9. Fast and accurate calculations for cumulative first-passage time distributions in Wiener diffusion models

    Blurton, Steven Paul; Kesselmeier, M.; Gondan, Matthias

    2012-01-01

    We propose an improved method for calculating the cumulative first-passage time distribution in Wiener diffusion models with two absorbing barriers. This distribution function is frequently used to describe responses and error probabilities in choice reaction time tasks. The present work extends related work on the density of first-passage times [Navarro, D.J., Fuss, I.G. (2009). Fast and accurate calculations for first-passage times in Wiener diffusion models. Journal of Mathematical Psychology, 53, 222-230]. Two representations exist for the distribution, both including infinite series. We...
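
For reference, the large-time series representation of the first-passage density (as given by Navarro and Fuss, 2009) is straightforward to evaluate; the distribution function discussed above is its time integral. A minimal sketch in one common parameterization (drift v, barrier separation a, relative start w = z/a), with a crude midpoint-rule integration and an arbitrary 50-term truncation:

```python
import math

def wfpt_density(t, v, a, w, k_max=50):
    """Large-time series for the first-passage density at the lower barrier of a
    Wiener diffusion process (Navarro & Fuss, 2009 representation)."""
    s = sum(k * math.exp(-k**2 * math.pi**2 * t / (2.0 * a**2)) * math.sin(k * math.pi * w)
            for k in range(1, k_max + 1))
    return (math.pi / a**2) * math.exp(-v * a * w - v**2 * t / 2.0) * s

# with zero drift and a central start, each barrier absorbs with probability 1/2;
# integrating the lower-barrier density over time should therefore give ~0.5
dt = 0.001
p_lower = sum(wfpt_density(dt * (i + 0.5), v=0.0, a=1.0, w=0.5) * dt
              for i in range(20000))
print(p_lower)                           # close to 0.5
```

The series converges quickly for large t but slowly near t = 0, which is exactly why a complementary small-time representation (and a rule for switching between them) matters in practice.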

  10. Accurate Analytic Results for the Steady State Distribution of the Eigen Model

    Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun

    2016-04-01

    The Eigen model of molecular evolution is popular in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have calculated analytic equations for the steady state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied for the case of small genome length N, as well as for cases where direct numerics cannot give an accurate result, e.g., the tail of the distribution.

  11. Accurate halo-model matter power spectra with dark energy, massive neutrinos and modified gravitational forces

    Mead, Alexander; Lombriser, Lucas; Peacock, John; Steele, Olivia; Winther, Hans

    2016-01-01

    We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead (2015b). We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo model method can predict the non-linear matter power spectrum measured from simulations of parameterised $w(a)$ dark energy models at the few per cent level for $k < 0.5\,h\,\mathrm{Mpc}^{-1}$. An updated version of our publicly available HMcode can be found at https://github.com/alexander-mead/HMcode

  12. Accurate halo-model matter power spectra with dark energy, massive neutrinos and modified gravitational forces

    Mead, A. J.; Heymans, C.; Lombriser, L.; Peacock, J. A.; Steele, O. I.; Winther, H. A.

    2016-06-01

    We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead et al. We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases, we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo-model method can predict the non-linear matter power spectrum measured from simulations of parametrized w(a) dark energy models at the few per cent level for k < 0.5 h Mpc-1. An updated version of our publicly available HMCODE can be found at https://github.com/alexander-mead/hmcode.

  13. Accurate corresponding point search using sphere-attribute-image for statistical bone model generation

    Statistical deformable model based two-dimensional/three-dimensional (2-D/3-D) registration is a promising method for estimating the position and shape of patient bone in the surgical space. Since its accuracy depends on the capacity of the statistical model, we propose a method for accurately generating a statistical bone model from a CT volume. Our method employs the sphere-attribute-image (SAI) representation and improves the accuracy of the corresponding point search in statistical model generation. First, target bone surfaces are extracted as SAIs from the CT volume. Then the textures of the SAIs are classified into regions using the maximally stable extremal regions (MSER) method. Next, corresponding regions are determined using normalized cross-correlation (NCC). Finally, corresponding points within each corresponding region are determined, again using NCC. We applied the method to femur bone models, and it performed well in the experiments. (author)

  14. Accurate and efficient prediction of fine-resolution hydrologic and carbon dynamic simulations from coarse-resolution models

    Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning

    2016-02-01

    Topography and biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with an O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from those obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and to enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing heterogeneities.
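
The reduced-basis idea underlying POD can be sketched in a few lines: collect solution snapshots, take an SVD, and represent new solutions by a handful of modal coefficients. The toy example below uses synthetic data and is not PODMM itself (which additionally learns a coarse-to-fine mapping); it only shows a 500-point field compressed to 3 POD coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "fine-resolution" snapshots that truly live on a low-dimensional manifold
n_points, n_snapshots, n_modes = 500, 40, 3
basis_true = rng.standard_normal((n_points, n_modes))
coeffs = rng.standard_normal((n_modes, n_snapshots))
snapshots = basis_true @ coeffs          # columns are training solutions

# proper orthogonal decomposition: left singular vectors are the POD modes
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
pod_modes = U[:, :n_modes]

# approximate a new solution by projecting onto the reduced basis
new_solution = basis_true @ rng.standard_normal(n_modes)
reduced_coords = pod_modes.T @ new_solution     # ROM state: 3 numbers instead of 500
reconstruction = pod_modes @ reduced_coords
print(np.linalg.norm(reconstruction - new_solution))  # ~0: solution lies in the span
```

Real fields are only approximately low-dimensional, so in practice the number of retained modes is chosen from the decay of the singular values `s`.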

  15. An accurate simulation model for single-photon avalanche diodes including important statistical effects

    An accurate and complete circuit simulation model for single-photon avalanche diodes (SPADs) is presented. The derived model is not only able to simulate the static DC and dynamic AC behaviors of an SPAD operating in Geiger-mode, but also can emulate the second breakdown and the forward bias behaviors. In particular, it considers important statistical effects, such as dark-counting and after-pulsing phenomena. The developed model is implemented using the Verilog-A description language and can be directly performed in commercial simulators such as Cadence Spectre. The Spectre simulation results give a very good agreement with the experimental results reported in the open literature. This model shows a high simulation accuracy and very fast simulation rate. (semiconductor devices)

  16. Improvement of a land surface model for accurate prediction of surface energy and water balances

    In order to predict energy and water balances between the biosphere and atmosphere accurately, sophisticated schemes for calculating evaporation and adsorption processes in the soil, and cloud (fog) water deposition on vegetation, were implemented in the one-dimensional atmosphere-soil-vegetation model including the CO2 exchange process (SOLVEG2). Performance tests in arid areas showed that these schemes have a significant effect on surface energy and water balances. The framework of the schemes incorporated in SOLVEG2 and instructions for running the model are documented. With further modifications to implement carbon exchanges between vegetation and soil, deposition of materials on the land surface, vegetation stress-growth dynamics, etc., the model is suited to evaluating the effects of environmental loads on ecosystems from atmospheric pollutants and radioactive substances under climate changes such as global warming and drought. (author)

  17. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    To capture the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In this method, the WNN is trained by GD starting from initial weights obtained by the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built with the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.

  18. Development of accurate contact force models for use with Discrete Element Method (DEM) modelling of bulk fruit handling processes

    Dintwa, Edward

    2006-01-01

    This thesis is primarily concerned with the development of accurate, simplified and validated contact force models for the discrete element modelling (DEM) of fruit bulk handling systems. The DEM is essentially a numerical technique to model a system of particles interacting with one another and with the system boundaries through collisions. The specific area of application envisaged is in postharvest agriculture, where DEM could be used in simulation of many unit operations with bulk fruit,...

  19. Reduced Complexity Channel Models for IMT-Advanced Evaluation

    Yu Zhang

    2009-01-01

    Full Text Available Accuracy and complexity are two crucial aspects of the applicability of a channel model for wideband multiple input multiple output (MIMO) systems. For a small number of antenna element pairs, correlation-based models have lower computational complexity, while geometry-based stochastic models (GBSMs) can model real radio propagation more accurately. This paper investigates several potential simplifications of the GBSM to reduce its complexity with minimal impact on accuracy. In addition, we develop a set of broadband metrics which enable a thorough investigation of the differences between the GBSMs and the simplified models. The impact on system-level simulation of the various random variables employed by the original GBSM is also studied. Both simulation results and a measurement campaign show that complexity can be reduced significantly with a negligible loss of accuracy in the proposed metrics. As an example, in the presented scenarios, the computational time can be reduced by up to 57% while keeping the relative deviation of the 5% outage capacity within 5%.

  20. Accurate tissue area measurements with considerably reduced radiation dose achieved by patient-specific CT scan parameters

    Brandberg, J.; Bergelin, E.; Sjostrom, L.;

    2008-01-01

    ... for muscle tissue. Image noise was quantified by standard deviation measurements. The area deviation was ... The radiation dose of the low-dose technique was reduced to 2-3% for diameters of 31-35 cm and to 7.5-50% for diameters of 36-47 cm ... as compared with the integral dose of the standard diagnostic technique. The CT numbers of muscle tissue remained unchanged with reduced radiation dose. Image noise was on average 20.9 HU (Hounsfield units) for subjects with diameters of 31-35 cm and 11.2 HU for subjects with diameters in the range of 36...

  1. Reduced order modeling of wall turbulence

    Moin, Parviz

    2015-11-01

    Modeling turbulent flow near a wall is a pacing item in computational fluid dynamics for aerospace applications and geophysical flows. Gradual progress has been made in statistical modeling of near wall turbulence using the Reynolds averaged equations of motion, an area of research where John Lumley has made numerous seminal contributions. More recently, Lumley and co-workers pioneered dynamical systems modeling of near wall turbulence, and demonstrated that the experimentally observed turbulence dynamics can be predicted using low dimensional dynamical systems. The discovery of the minimal flow unit provides further evidence that near wall turbulence is amenable to reduced order modeling. The underlying rationale for potential success in using low dimensional dynamical systems theory is that the Reynolds number is low in close proximity to the wall. Presumably for the same reason, low dimensional models are expected to be successful in modeling the laminar/turbulent transition region. This has been shown recently using dynamic mode decomposition. Furthermore, it is shown that the near wall flow structure and statistics in the late and non-linear transition region are strikingly similar to those in higher Reynolds number fully developed turbulence. In this presentation, I will argue that the accumulated evidence suggests that wall modeling for LES using low dimensional dynamical systems is a profitable avenue to pursue. The main challenge would be the numerical integration of such wall models in LES methodology.

  2. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available. PMID:24472756

  3. A complete and accurate surface-potential based large-signal model for compound semiconductor HEMTs

    A complete and accurate surface-potential-based large-signal model for compound semiconductor HEMTs is presented. A surface potential equation resembling the one used in conventional MOSFET models is obtained, so the analytic solutions from the traditional surface potential theory developed for MOSFET models are inherited. For the core model derivation, a novel method is used to apply the standard surface potential model of MOSFETs directly to HEMT modeling, without breaking the mathematical structure. The high-order derivatives of I-V/C-V remain continuous, making the model suitable for RF large-signal applications. Furthermore, the self-heating effects and the transconductance dispersion are also modelled. The model has been verified through comparison with measured DC IV curves, the Gummel symmetry test, CV curves, minimum noise figure, small-signal S-parameters up to 66 GHz and a single-tone input power sweep at 29 GHz for a 4 × 75 μm × 0.1 μm InGaAs/GaAs power pHEMT fabricated at a commercial foundry. (semiconductor devices)

  4. Reducing Spatial Data Complexity for Classification Models

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, an adaptive and real-time operational environment requires multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieve a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance.
PLDC leaves the reduced dataset with the soft accumulative class weights, allowing for efficient online updates, and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the
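
The Parzen Density Classifier mentioned above assigns a point to the class whose kernel-estimated density is largest. A minimal one-dimensional sketch with a Gaussian window; the bandwidth, labels and data values are made up for illustration and are not from the paper.

```python
import math

def parzen_class_density(x, samples, h):
    """Parzen-window estimate of a class-conditional density at point x,
    using a 1-D Gaussian kernel of bandwidth h."""
    norm = 1.0 / (len(samples) * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

def parzen_classify(x, classes, h=0.5):
    """Assign x to the class with the largest estimated density (equal priors)."""
    return max(classes, key=lambda c: parzen_class_density(x, classes[c], h))

# two well-separated 1-D classes
classes = {"A": [0.0, 0.2, -0.1, 0.3], "B": [5.0, 5.2, 4.9, 5.1]}
print(parzen_classify(0.1, classes))     # prints A
print(parzen_classify(5.0, classes))     # prints B
```

A condensation scheme like PLDC would replace each `samples` list with far fewer weighted points while trying to keep these density estimates (and hence the decisions) unchanged.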

  6. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    Stovgaard Kasper

    2010-08-01

    Full Text Available Abstract Background: Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS) is an established low resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can, for example, be used to construct a likelihood function, which is paramount for structure determination based on statistical inference. Results: We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy atom representations of amino acids. Such a method avoids the computationally costly iteration over all atoms. We estimated the form factors using generated data from a set of high quality protein structures. No ad hoc scaling or correction factors are applied in the calculation of the curves. Two coarse-grained representations of protein structure were investigated; two scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN. This resulted in a significant improvement in decoy recognition performance.
In conclusion, the presented method shows great promise for
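
The Debye formula at the heart of this approach sums sin(qr)/(qr) terms over all pairs of scattering bodies, which is why coarse-graining (fewer bodies per residue) pays off directly. A minimal sketch with made-up positions and constant unit form factors; real dummy-atom form factors are q-dependent.

```python
import math

def debye_intensity(q, positions, form_factors):
    """Scattering intensity at momentum transfer q via the Debye formula:
    I(q) = sum_ij f_i(q) f_j(q) * sin(q r_ij) / (q r_ij)."""
    total = 0.0
    for i, ri in enumerate(positions):
        for j, rj in enumerate(positions):
            qr = q * math.dist(ri, rj)
            sinc = 1.0 if qr == 0.0 else math.sin(qr) / qr
            total += form_factors[i] * form_factors[j] * sinc
    return total

# four scattering bodies with illustrative coordinates and unit form factors
positions = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 1.0, 0.0), (11.4, 1.0, 1.0)]
form_factors = [1.0, 1.0, 1.0, 1.0]
print(debye_intensity(0.0, positions, form_factors))   # forward scattering = (sum f)^2 = 16
print(debye_intensity(0.2, positions, form_factors))   # smaller: intensity falls off with q
```

The double loop is O(n^2) in the number of bodies, so halving the bodies per residue cuts the cost by roughly a factor of four.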

  7. A Biomechanical Model of the Scapulothoracic Joint to Accurately Capture Scapular Kinematics during Shoulder Movements

    Seth, Ajay; Matias, Ricardo; António P Veloso; Delp, Scott L.

    2016-01-01

    The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic join...

  8. Accurate Modeling of a Transverse Flux Permanent Magnet Generator Using 3D Finite Element Analysis

    Hosseini, Seyedmohsen; Moghani, Javad Shokrollahi; Jensen, Bogi Bech

    2011-01-01

    This paper presents an accurate modeling method that is applied to a single-sided outer-rotor transverse flux permanent magnet generator. The inductances and the induced electromotive force for a typical generator are calculated using the magnetostatic three-dimensional finite element method. A new method is then proposed that reveals the behavior of the generator under any load. Finally, torque calculations are carried out using three-dimensional finite element analyses. It is shown that although in the single-phase generator the cogging torque is very high, this can be improved significantly by combining three single-phase modules into a three-phase generator.

  9. Applying an accurate spherical model to gamma-ray burst afterglow observations

    Leventis, K.; van der Horst, A. J.; van Eerten, H. J.; Wijers, R. A. M. J.

    2013-05-01

    We present results of model fits to afterglow data sets of GRB 970508, GRB 980703 and GRB 070125, characterized by long and broad-band coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB 970508, for which we find a circumburst medium density n ∝ r⁻². We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the fraction of energy density carried by relativistic electrons and by the magnetic field. We also find that for the blast wave to be adiabatic, the fraction of electrons accelerated at the shock has to be smaller than 1. We present best-fitting parameters for the afterglows of all three bursts, including uncertainties in the parameters of GRB 970508, and compare the inferred values to those obtained by different authors.

  10. Particle Image Velocimetry Measurements in Anatomically-Accurate Models of the Mammalian Nasal Cavity

    Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.

    2012-11-01

    A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.

  11. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R⁻⁵ term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
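The idea of supplementing point charges with higher atomic moments can be sketched by truncating the expansion at the dipole term; the site positions, charges, and dipoles below are generic placeholders, and a full CAMM treatment continues to quadrupoles and beyond.

```python
import numpy as np

def camm_potential(r, sites, charges, dipoles):
    """Electrostatic potential at point r from atomic monopoles q_i plus
    atomic point dipoles mu_i -- the first two terms of a cumulative
    atomic multipole expansion (atomic units):
      V(r) = sum_i [ q_i / |d_i| + mu_i . d_i / |d_i|^3 ],  d_i = r - r_i."""
    d = r - sites                       # (N, 3) displacements from each atom
    dist = np.linalg.norm(d, axis=1)    # (N,) distances
    mono = (charges / dist).sum()                     # monopole contribution
    dip = ((dipoles * d).sum(axis=1) / dist**3).sum() # dipole contribution
    return float(mono + dip)
```

The dipole term decays as R⁻², so its inclusion is what pushes the convergence of the potential beyond what charges alone can provide.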

  12. Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm

    Huang, Yanhua; Gu, Lizhi

    2015-09-01

    Existing methods for modeling multi-spiral surface geometry include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. These methods have shortcomings, such as a large amount of calculation, complex processes, and visible errors, which have considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of the spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of the coupling point cluster with singular points removed, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and expressed digitally, and the surfaces are coupled through their multiple coupling point clusters in the Pro/E environment, realizing digitally accurate modeling of a spatially parallel coupling body with a multi-spiral surface. Smoothing and fairing are applied to the end-section area of a three-blade end-milling cutter using the principle of spatially parallel coupling with a multi-spiral surface, and the resulting model is machined in a four-axis machining center. The algorithm is verified and then applied effectively to the transition area among the multiple spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in solving the problem of considerable modeling errors in computer graphics and...

  13. Using the Neumann series expansion for assembling Reduced Order Models

    Nasisi S.

    2014-06-01

    Full Text Available An efficient method to remove the limitation in selecting the master degrees of freedom in a finite element model by means of model order reduction is presented. A major difficulty of the Guyan reduction and the IRS (Improved Reduced System) method is the need to appropriately select the master and slave degrees of freedom for the rate of convergence to be high. This study approaches the above limitation by using a particular arrangement of the rows and columns of the assembled matrices K and M, and by employing a combination of the IRS method and a variant of the analytical selection of masters presented in (Shah, V. N., Raymund, M., Analytical selection of masters for the reduced eigenvalue problem, International Journal for Numerical Methods in Engineering 18 (1), 1982) when the first, lowest frequencies are sought. One of the most significant characteristics of the approach is the use of the Neumann series expansion, which motivates this particular arrangement of the matrices' entries. The method shows a higher rate of convergence than the standard IRS and very accurate results for the lowest reduced frequencies. To show the effectiveness of the proposed method, two test structures and the human vocal tract model employed in (Vampola, T., Horacek, J., Svec, J. G., FE modeling of human vocal tract acoustics. Part I: Production of Czech vowels, Acta Acustica United with Acustica 94 (3), 2008) are presented.
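For context, the Guyan (static) condensation that IRS refines can be sketched as follows; the choice of the `masters` index set is exactly the step whose sensitivity the paper's row/column rearrangement addresses.

```python
import numpy as np

def guyan_reduce(K, M, masters):
    """Guyan (static) condensation of stiffness K and mass M onto the
    master DOFs. Partition into master (m) and slave (s) blocks, build
    the static transformation T = [I; -Kss^{-1} Ksm], then
    Kr = T' K T, Mr = T' M T. IRS adds inertial correction terms on top."""
    n = K.shape[0]
    slaves = [i for i in range(n) if i not in set(masters)]
    idx = list(masters) + slaves
    Kp = K[np.ix_(idx, idx)]          # permuted so masters come first
    Mp = M[np.ix_(idx, idx)]
    nm = len(masters)
    Ksm = Kp[nm:, :nm]
    Kss = Kp[nm:, nm:]
    T = np.vstack([np.eye(nm), -np.linalg.solve(Kss, Ksm)])
    return T.T @ Kp @ T, T.T @ Mp @ T
```

For a two-DOF spring chain with K = [[2, -1], [-1, 1]] and M = I, condensing onto the first DOF yields the 1x1 system Kr = 1, Mr = 2, whose frequency only approximates the full model's lowest mode; this gap is what the IRS correction and the Neumann-series-based arrangement aim to close.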

  14. A reduced model for shock and detonation waves. I. The inert case

    Stoltz, G.

    2006-01-01

    We present a model of mesoparticles, very much in the Dissipative Particle Dynamics spirit, in which a molecule is replaced by a particle with an internal thermodynamic degree of freedom (temperature or energy). The model is shown to give quantitatively accurate results for the simulation of shock waves in a crystalline polymer, and opens the way to a reduced model of detonation waves.

  15. Validation of an Accurate Three-Dimensional Helical Slow-Wave Circuit Model

    Kory, Carol L.

    1997-01-01

    The helical slow-wave circuit embodies a helical coil of rectangular tape supported in a metal barrel by dielectric support rods. Although the helix slow-wave circuit remains the mainstay of the traveling-wave tube (TWT) industry because of its exceptionally wide bandwidth, a full helical circuit, without significant dimensional approximations, has not been successfully modeled until now. Numerous attempts have been made to analyze the helical slow-wave circuit so that the performance could be accurately predicted without actually building it, but because of its complex geometry, many geometrical approximations became necessary, rendering the previous models inaccurate. In the course of this research it has been demonstrated that using the simulation code, MAFIA, the helical structure can be modeled with actual tape width and thickness, dielectric support rod geometry and materials. To demonstrate the accuracy of the MAFIA model, the cold-test parameters including dispersion, on-axis interaction impedance and attenuation have been calculated for several helical TWT slow-wave circuits with a variety of support rod geometries including rectangular and T-shaped rods, as well as various support rod materials including isotropic, anisotropic and partially metal coated dielectrics. Compared with experimentally measured results, the agreement is excellent. With the accuracy of the MAFIA helical model validated, the code was used to investigate several conventional geometric approximations in an attempt to obtain the most computationally efficient model. Several simplifications were made to a standard model including replacing the helical tape with filaments, and replacing rectangular support rods with shapes conforming to the cylindrical coordinate system with effective permittivity. The approximate models are compared with the standard model in terms of cold-test characteristics and computational time. The model was also used to determine the sensitivity of various...

  16. LogGPO: An accurate communication model for performance prediction of MPI programs

    CHEN WenGuang; ZHAI JiDong; ZHANG Jin; ZHENG WeiMin

    2009-01-01

    Message passing interface (MPI) is the de facto standard for writing parallel scientific applications on distributed memory systems. Performance prediction of MPI programs on current or future parallel systems can help to find system bottlenecks or optimize programs. To effectively analyze and predict the performance of a large and complex MPI program, an efficient and accurate communication model is highly needed. A series of communication models have been proposed, such as the LogP model family, which assume that the sending overhead, message transmission, and receiving overhead of a communication are not overlapped and that there is a maximum overlap degree between computation and communication. However, this assumption does not always hold for MPI programs, because either sending or receiving overhead introduced by MPI implementations can decrease the potential overlap for large messages. In this paper, we present a new communication model, named LogGPO, which captures the potential overlap of computation with communication in MPI programs. We design and implement a trace-driven simulator to verify the LogGPO model by predicting the performance of point-to-point communication and two real applications, CG and Sweep3D. The average prediction errors of the LogGPO model are 2.4% and 2.0% for these two applications respectively, while the average prediction errors of the LogGP model are 38.3% and 9.1% respectively.
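The baseline LogGP cost that LogGPO extends can be sketched as a simple closed form; the parameter values used below are illustrative, and the overlap correction that distinguishes LogGPO is deliberately omitted.

```python
def loggp_p2p_time(k, L, o, G):
    """LogGP estimate of one point-to-point message of k bytes:
    T = o_send + (k - 1) * G + L + o_recv
    (o: per-message CPU overhead at each end, G: gap per byte for long
    messages, L: network latency). The gap-per-message parameter g
    bounds back-to-back injections and matters only for sequences of
    messages. LogGPO additionally models how much of the overhead can
    overlap with computation, which this baseline formula ignores."""
    return o + (k - 1) * G + L + o
```

With L = 5 us, o = 1 us, G = 0.5 us/byte, a 1-byte message costs 7 us while a 1001-byte message costs 507 us, showing how G dominates for large messages; it is precisely in that regime that the non-overlapped-overhead assumption breaks down.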

  17. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement.

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-01-01

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions, and are difficult to compensate for directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedures for error sources but still promises accurate 3D measurement. It is realized by a mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on frequency analysis is proposed for absolute phase map retrieval for spatially isolated objects. The results demonstrate the validity and accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553

  18. Physical modeling of real-world slingshots for accurate speed predictions

    Yeats, Bob

    2016-01-01

    We discuss the physics and modeling of latex-rubber slingshots. The goal is to get accurate speed predictions in spite of the significant real-world difficulties of force drift, force hysteresis, rubber ageing, and the very non-linear, non-ideal force vs. pull distance curves of slingshot rubber bands. Slingshots are known to shoot faster under some circumstances when the bands are tapered rather than having constant width and stiffness. We give both qualitative understanding and numerical predictions of this effect. We consider two models. The first is based on conservation of energy and is easier to implement, but cannot determine the speeds along the rubber bands without making assumptions. The second treats the bands as a series of mass points, each pulled by the immediately adjacent mass points according to how much the rubber has been stretched on its two adjacent sides. This is a classic many-body F=ma problem, but convergence requires a particular numerical technique. It gives accurate p...

  19. Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques

    Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can change from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo methods for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
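The Gaussian-optics constraint that such a technique must enforce is the familiar beam-radius relation; a minimal sketch, assuming propagation in a uniform medium:

```python
import numpy as np

def beam_radius(z, w0, wavelength, n_medium=1.0):
    """1/e^2 radius of a Gaussian beam at distance z from its waist:
    w(z) = w0 * sqrt(1 + (z / zR)^2), with Rayleigh range
    zR = pi * w0^2 * n / lambda. A diffraction-limited focus bottoms
    out at w0, whereas a purely ray-based (particle) Monte Carlo beam
    converges to a point at the focal plane."""
    zR = np.pi * w0**2 * n_medium / wavelength
    return w0 * np.sqrt(1.0 + (z / zR) ** 2)
```

A photon-launching scheme that samples transverse positions from a Gaussian of radius w(z) at each depth, rather than aiming every ray at a geometric focal point, reproduces the diffraction-limited spot the abstract describes.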

  20. Simple and accurate modelling of the gravitational potential produced by thick and thin exponential disks

    Smith, Rory; Candlish, Graeme N; Fellhauer, Michael; Gibson, Bradley K

    2015-01-01

    We present accurate models of the gravitational potential produced by a radially exponential disk mass distribution. The models are produced by combining three separate Miyamoto-Nagai disks. Such models have been used previously to model the disk of the Milky Way, but here we extend this framework to allow its application to disks of any mass, scalelength, and a wide range of thickness from infinitely thin to near spherical (ellipticities from 0 to 0.9). The models have the advantage of simplicity of implementation, and we expect faster run speeds over a double exponential disk treatment. The potentials are fully analytical, and differentiable at all points. The mass distribution of our models deviates from the radial mass distribution of a pure exponential disk by <0.4% out to 4 disk scalelengths, and <1.9% out to 10 disk scalelengths. We tabulate fitting parameters which facilitate construction of exponential disks for any scalelength, and a wide range of disk thickness (a user-friendly, web-based int...
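A single Miyamoto-Nagai component, and the three-component sum the authors use, can be sketched as follows; the (M, a, b) triples must come from the paper's fitted tables, so any values passed in here are placeholders, not the published fits.

```python
import numpy as np

G_KPC = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def mn_potential(R, z, M, a, b):
    """Miyamoto-Nagai disk potential, fully analytic and differentiable:
    Phi(R, z) = -G M / sqrt(R^2 + (a + sqrt(z^2 + b^2))^2),
    with R, z in kpc, M in solar masses, a the radial and b the
    vertical scale parameter."""
    return -G_KPC * M / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)

def triple_mn_potential(R, z, components):
    """Sum of three MN disks approximating a radially exponential disk,
    as in the paper; components is a list of (M, a, b) triples."""
    return sum(mn_potential(R, z, M, a, b) for M, a, b in components)
```

Setting a = 0 recovers the spherical Plummer potential, which is the near-spherical end of the thickness range quoted in the abstract.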

  1. Accurate and efficient modeling of the detector response in small animal multi-head PET systems

    In fully three-dimensional PET imaging, iterative image reconstruction techniques usually outperform analytical algorithms in terms of image quality provided that an appropriate system model is used. In this study we concentrate on the calculation of an accurate system model for the YAP-(S)PET II small animal scanner, with the aim of obtaining fully resolution- and contrast-recovered images at low levels of image roughness. For this purpose we calculate the system model by decomposing it into a product of five matrices: (1) a detector response component obtained via Monte Carlo simulations, (2) a geometric component which describes the scanner geometry and which is calculated via a multi-ray method, (3) a detector normalization component derived from the acquisition of a planar source, (4) a photon attenuation component calculated from x-ray computed tomography data, and finally, (5) a positron range component is formally included. This system model factorization allows the optimization of each component in terms of computation time, storage requirements and accuracy. The main contribution of this work is a new, efficient way to calculate the detector response component for rotating, planar detectors, that consists of a GEANT4 based simulation of a subset of lines of flight (LOFs) for a single detector head whereas the missing LOFs are obtained by using intrinsic detector symmetries. Additionally, we introduce and analyze a probability threshold for matrix elements of the detector component to optimize the trade-off between the matrix size in terms of non-zero elements and the resulting quality of the reconstructed images. 
In order to evaluate our proposed system model we reconstructed various images of objects, acquired according to the NEMA NU 4-2008 standard, and we compared them to the images reconstructed with two other system models: a model that does not include any detector response component and a model that approximates analytically the depth of interaction...

  2. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish

    2016-04-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
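The two stages of the model can be sketched with a one-dimensional ERF and a binned empirical nonlinearity; this is an illustrative reconstruction under simplifying assumptions, not the authors' exact pipeline.

```python
import numpy as np

def ln_fit_predict(stimuli, spikes, n_bins=10):
    """Sketch of a linear-nonlinear model: the linear stage projects each
    stimulus (a vector of per-electrode amplitudes) onto the leading
    principal component of the spike-triggered ensemble, a 1-D electrical
    receptive field (ERF); the nonlinear stage is estimated empirically
    as P(spike | projection bin). Returns a spiking probability per
    stimulus."""
    sta = stimuli[spikes == 1]                    # spike-triggered ensemble
    sta_c = sta - sta.mean(axis=0)
    _, _, vt = np.linalg.svd(sta_c, full_matrices=False)
    erf = vt[0]                                   # leading principal component
    proj = stimuli @ erf                          # linear stage
    edges = np.linspace(proj.min(), proj.max(), n_bins + 1)
    idx = np.clip(np.digitize(proj, edges) - 1, 0, n_bins - 1)
    p_bin = np.array([spikes[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(n_bins)])    # nonlinear stage
    return p_bin[idx]
```

In the paper the subspace can have more than one dimension and a parametric nonlinearity is fitted; the binning here is merely the simplest non-parametric stand-in.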

  5. Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation

    Poddar, Banibrata; Giurgiutiu, Victor

    2016-04-01

    Lamb wave propagation is at the center of attention of researchers for structural health monitoring (SHM) of thin-walled structures. This is because Lamb wave modes are natural modes of wave propagation in these structures, travelling long distances without much attenuation, which brings the prospect of monitoring large structures with few sensors/actuators. However, damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. The problem is made more challenging by the confounding factors of statistical variation in the material and geometric properties, and it may also be ill-posed. Because of these complexities, direct solution of the damage detection and identification problem in SHM is impossible, so an indirect method using the solution of the "forward problem" is popular. This requires a fast forward-problem solver. Owing to the complexities of the forward problem of Lamb wave scattering from damage, researchers rely primarily on numerical techniques such as FEM and BEM, but these methods are too slow to be practical for structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate the scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately, to assist the inverse-problem solver.

  6. Development of accurate inelastic analysis models for materials constituting penetrations in reactor vessel

    Evaluation of the structural integrity of lower-head penetrations in reactor vessels is required for investigating severe-accident scenarios in nuclear power plants under loss of core-cooling capacity. Materials are exposed to temperatures much higher than those experienced in normal operation, and the capability to evaluate material behavior under such circumstances needs to be developed to attain reliable results. Inelastic deformation behavior changes significantly with temperature, and its consideration is of critical importance in developing inelastic constitutive models for application to such situations. A number of tensile tests have been performed on three materials constituting the lower-head penetrations, i.e. JIS SQV2A, SUS316 and NCF600, and the results were used to develop accurate inelastic constitutive models for these materials. The models, based on a combination of initial yield stress, hardening and softening characteristics, were found to successfully describe the deformation behavior of these materials over a wide range of temperatures, from room temperature to 1100°C, and strain rates covering three orders of magnitude. Ways to generalize the models to varying-temperature conditions have also been presented. (author)

  7. An accurate and efficient Lagrangian sub-grid model for multi-particle dispersion

    Toschi, Federico; Mazzitelli, Irene; Lanotte, Alessandra S.

    2014-11-01

    Many natural and industrial processes involve the dispersion of particles in turbulent flows. Despite recent theoretical progress in the understanding of particle dynamics in simple turbulent flows, complex geometries often call for numerical approaches based on Eulerian Large Eddy Simulation (LES). One important issue related to the Lagrangian integration of tracers in under-resolved velocity fields is the lack of spatial correlations at unresolved scales. Here we propose a computationally efficient Lagrangian model for the sub-grid velocity of tracers dispersed in statistically homogeneous and isotropic turbulent flows. The model incorporates the multi-scale nature of turbulent temporal and spatial correlations that is essential to correctly reproduce the dynamics of multi-particle dispersion. The new model is able to describe the Lagrangian temporal and spatial correlations in clouds of particles. In particular, we show that pair and tetrad dispersion compare well with results from Direct Numerical Simulations of statistically isotropic and homogeneous 3D turbulence. This model may offer an accurate and efficient way to describe multi-particle dispersion in under-resolved turbulent velocity fields such as those employed in Eulerian LES. This work is part of the research programmes FP112 of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). We acknowledge support from the EU COST Action MP0806.

  8. Do Ecological Niche Models Accurately Identify Climatic Determinants of Species Ranges?

    Searcy, Christopher A; Shaffer, H Bradley

    2016-04-01

    Defining species' niches is central to understanding their distributions and is thus fundamental to basic ecology and climate change projections. Ecological niche models (ENMs) are a key component of making accurate projections and include descriptions of the niche in terms of both response curves and rankings of variable importance. In this study, we evaluate Maxent's ranking of environmental variables based on their importance in delimiting species' range boundaries by asking whether these same variables also govern annual recruitment based on long-term demographic studies. We found that Maxent-based assessments of variable importance in setting range boundaries in the California tiger salamander (Ambystoma californiense; CTS) correlate very well with how important those variables are in governing ongoing recruitment of CTS at the population level. This strong correlation suggests that Maxent's ranking of variable importance captures biologically realistic assessments of factors governing population persistence. However, this result holds only when Maxent models are built using best-practice procedures and variables are ranked based on permutation importance. Our study highlights the need for building high-quality niche models and provides encouraging evidence that when such models are built, they can reflect important aspects of a species' ecology. PMID:27028071

  9. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter, constrained by the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. To that end, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is within the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data were acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for selecting the future horizon in the development of prediction algorithms for diseases with symptomatic crises. PMID:27260782

  10. A general pairwise interaction model provides an accurate description of in vivo transcription factor binding sites.

    Marc Santolini

Full Text Available The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIP-seq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and dominantly located between consecutive base pairs in the flanking region of the TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics.
The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting
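To make the additive-plus-pairwise energy concrete, here is a minimal sketch of the two scoring schemes described above. The field values `h` and the coupling `J` are invented toy numbers, not fitted parameters from the paper.

```python
import math

BASES = "ACGT"

def pwm_energy(seq, h):
    """PWM-like energy: additive over single positions."""
    return sum(h[i][b] for i, b in enumerate(seq))

def pim_energy(seq, h, J):
    """PIM energy: single-position terms plus pairwise couplings."""
    e = pwm_energy(seq, h)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            e += J[(i, j)].get((seq[i], seq[j]), 0.0)
    return e

# Toy 3-bp motif with made-up fields favouring the letters of "TAT".
L = 3
h = [{b: 0.0 for b in BASES} for _ in range(L)]
h[0]["T"] = -1.0
h[1]["A"] = -1.0
h[2]["T"] = -1.0
# One illustrative coupling between consecutive positions 1 and 2.
J = {(i, j): {} for i in range(L) for j in range(i + 1, L)}
J[(1, 2)][("A", "T")] = -0.5   # extra stabilisation for the AT dinucleotide

e_pwm = pwm_energy("TAT", h)   # -3.0: independent contributions only
e_pim = pim_energy("TAT", h, J)  # -3.5: same, plus the pair term
print(e_pwm, e_pim)
```

The pair term is what lets the PIM capture dinucleotide co-variation that a PWM, by construction, cannot.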

  11. SMARTIES: Spheroids Modelled Accurately with a Robust T-matrix Implementation for Electromagnetic Scattering

    Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.

    2016-03-01

SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with capabilities and ease of use comparable to Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics, including calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field calculations as well as near-fields, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements, which may be calculated more accurately than with competing codes.

  12. Spiral CT scanning plan to generate accurate FE models of the human femur

    In spiral computed tomography (CT), source rotation, patient translation, and data acquisition are conducted continuously. Settings of the detector collimation and the table increment affect the image quality in terms of spatial and contrast resolution. This study assessed and measured the efficacy of spiral CT in those applications where the accurate reconstruction of bone morphology is critical: custom-made prosthesis design or three-dimensional modelling of the mechanical behaviour of long bones. Results show that conventional CT grants the highest accuracy. Spiral CT with D=5 mm and P=1.5 in the regions where the morphology is more regular slightly degrades the image quality, but makes it possible to acquire, at comparable cost, a higher number of images, increasing the longitudinal resolution of the acquired data set. (author)

  13. An Efficient Hybrid DSMC/MD Algorithm for Accurate Modeling of Micro Gas Flows

    Liang, Tengfei

    2013-01-01

    Aiming at simulating micro gas flows with accurate boundary conditions, an efficient hybrid algorithm is developed by combining the molecular dynamics (MD) method with the direct simulation Monte Carlo (DSMC) method. The efficiency comes from the fact that the MD method is applied only within the gas-wall interaction layer, characterized by the cut-off distance of the gas-solid interaction potential, to resolve accurately the gas-wall interaction process, while the DSMC method is employed in the remaining portion of the flow field to efficiently simulate rarefied gas transport outside the gas-wall interaction layer. A unique feature of the present scheme is that the coupling between the two methods is realized by matching the molecular velocity distribution function at the DSMC/MD interface, hence there is no need for one-to-one mapping between an MD gas molecule and a DSMC simulation particle. Further improvement in efficiency is achieved by taking advantage of gas rarefaction inside the gas-wall interaction layer and by employing the "smart-wall model" proposed by Barisik et al. The developed hybrid algorithm is validated on two classical benchmarks, namely the 1-D Fourier thermal problem and the Couette shear flow problem. Both the accuracy and efficiency of the hybrid algorithm are discussed. As an application, the hybrid algorithm is employed to simulate the thermal transpiration coefficient in the free-molecule regime for a system with an atomically smooth surface. The result is utilized to validate the coefficients calculated from the pure DSMC simulation with the Maxwell and Cercignani-Lampis gas-wall interaction models. © 2014 Global-Science Press.

  14. A non-contact method based on the multiple signal classification algorithm to reduce the measurement time for accurate heart rate detection

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic direct-contact measurement system.
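As an illustration of the underlying estimator, the following sketch applies a textbook MUSIC pseudospectrum to a synthetic noisy sinusoid standing in for a cardiac signal. The sampling rate, subspace dimension, and signal model are assumptions for demonstration, not the authors' processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 20.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 30, 1 / fs)   # 30 s observation window
f_true = 1.2                   # 1.2 Hz, i.e. 72 beats per minute
x = np.sin(2 * np.pi * f_true * t) + 0.3 * rng.standard_normal(t.size)

m = 40                                        # correlation matrix order
X = np.lib.stride_tricks.sliding_window_view(x, m)
R = X.T @ X / X.shape[0]                      # sample autocorrelation matrix

vals, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
p = 2                                         # signal subspace dim: one real sinusoid
En = vecs[:, : m - p]                         # noise-subspace eigenvectors

# MUSIC pseudospectrum: 1 / || projection of steering vector onto noise subspace ||^2
freqs = np.linspace(0.5, 3.0, 501)
n = np.arange(m)
music = []
for f in freqs:
    a = np.exp(2j * np.pi * f / fs * n)       # steering vector at frequency f
    music.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
f_est = freqs[int(np.argmax(music))]
print(f"estimated rate: {f_est * 60:.1f} bpm")
```

The pseudospectrum peaks where the steering vector is nearly orthogonal to the noise subspace, which is what allows short observation windows compared to plain FFT-based estimation.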

  15. Discrete state model and accurate estimation of loop entropy of RNA secondary structures.

    Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie

    2008-03-28

    Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone, based on known RNA structures, for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers the excluded volume effect. It is general and can be applied to calculating the entropy of loops of longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurements. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study of the effect of asymmetric loop sizes suggests that the loop entropy of internal loops is largely determined by the total loop length, and is only marginally affected by the asymmetric sizes of the two loops. This finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html. PMID:18376982
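The idea of loop entropy as the logarithm of the number of self-avoiding conformations can be illustrated with a brute-force count of self-avoiding walks on a 2D square lattice. The paper's method (a discrete k-state backbone model with sequential Monte Carlo sampling) is far more sophisticated; this toy enumeration only conveys the principle and is feasible only for tiny lengths.

```python
import math

def count_saws(n, pos=(0, 0), visited=None):
    """Count n-step self-avoiding walks from the origin on the square lattice."""
    if visited is None:
        visited = {pos}
    if n == 0:
        return 1
    total = 0
    x, y = pos
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:
            total += count_saws(n - 1, nxt, visited | {nxt})
    return total

for n in (1, 2, 3, 4):
    w = count_saws(n)
    # Conformational entropy in units of k_B: S = ln(number of conformations)
    print(n, w, math.log(w))
```

Exhaustive enumeration grows exponentially with length, which is exactly why sequential Monte Carlo sampling is needed for loops of length up to 50.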

  16. A Reduced High Frequency Transformer Model To Detect The Partial Discharge Locations

    El-Sayed M. El-Refaie; El-Sayed H. Shehab El-Dein

    2014-01-01

    Transformer modeling is the first step in improving partial discharge localization techniques. Different transformer models have been used for this purpose. This paper presents a reduced transformer model that can be used accurately for partial discharge localization. The model is implemented in ATPDraw (the graphical preprocessor of the Alternative Transients Program, ATP) for the partial discharge localization application. A comparison between different transformer models is studied, the achieved results of...

  17. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    Kostelich Eric J

    2011-12-01

    Full Text Available Abstract Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck).
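For readers unfamiliar with ensemble data assimilation, the sketch below performs a schematic perturbed-observation ensemble Kalman update for a scalar "tumor density" state. It is a much simpler relative of the LETKF used in the paper, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
prior = rng.normal(2.0, 0.5, N)      # forecast ensemble (prior)
y, r = 2.6, 0.3**2                   # observation and its error variance

# Kalman gain from ensemble statistics (observation operator H = identity).
p = prior.var(ddof=1)
K = p / (p + r)

# Perturbed-observation update: each member assimilates a noisy copy of y.
obs = y + rng.normal(0.0, np.sqrt(r), N)
posterior = prior + K * (obs - prior)

# The posterior ensemble shifts toward the observation and tightens.
print(prior.mean(), posterior.mean(), posterior.var(ddof=1))
```

The update blends forecast and observation in proportion to their uncertainties; the LETKF does the analogous transform locally in space for high-dimensional state vectors.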

  18. SPARC: Mass Models for 175 Disk Galaxies with Spitzer Photometry and Accurate Rotation Curves

    Lelli, Federico; Schombert, James M

    2016-01-01

    We introduce SPARC (Spitzer Photometry & Accurate Rotation Curves): a sample of 175 nearby galaxies with new surface photometry at 3.6 um and high-quality rotation curves from previous HI/Halpha studies. SPARC spans a broad range of morphologies (S0 to Irr), luminosities (~5 dex), and surface brightnesses (~4 dex). We derive [3.6] surface photometry and study structural relations of stellar and gas disks. We find that both the stellar mass-HI mass relation and the stellar radius-HI radius relation have significant intrinsic scatter, while the HI mass-radius relation is extremely tight. We build detailed mass models and quantify the ratio of baryonic-to-observed velocity (Vbar/Vobs) for different characteristic radii and values of the stellar mass-to-light ratio (M/L) at [3.6]. Assuming M/L=0.5 Msun/Lsun (as suggested by stellar population models) we find that (i) the gas fraction linearly correlates with total luminosity, (ii) the transition from star-dominated to gas-dominated galaxies roughly correspond...
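The baryonic-to-observed velocity ratio described above can be sketched, under the simplifying assumption of a single stellar disk component, as Vbar^2 = Vgas^2 + (M/L) Vstar^2, with the M/L = 0.5 Msun/Lsun value quoted in the abstract. The sample velocities below are invented for illustration.

```python
import math

def v_baryonic(v_gas, v_star, ml=0.5):
    """Baryonic rotation velocity (km/s) from gas and stellar disk components,
    assuming a single stellar component scaled by the mass-to-light ratio ml."""
    return math.sqrt(v_gas**2 + ml * v_star**2)

# Hypothetical velocities at one radius (km/s), not SPARC data.
v_gas, v_star, v_obs = 40.0, 120.0, 110.0
v_bar = v_baryonic(v_gas, v_star)
print(f"Vbar = {v_bar:.1f} km/s, Vbar/Vobs = {v_bar / v_obs:.2f}")
```

A ratio Vbar/Vobs below unity at large radii is the usual signature of a dark matter contribution in such mass models.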

  19. An Approach to More Accurate Model Systems for Purple Acid Phosphatases (PAPs).

    Bernhardt, Paul V; Bosch, Simone; Comba, Peter; Gahan, Lawrence R; Hanson, Graeme R; Mereacre, Valeriu; Noble, Christopher J; Powell, Annie K; Schenk, Gerhard; Wadepohl, Hubert

    2015-08-01

    The active site of mammalian purple acid phosphatases (PAPs) contains a dinuclear iron site with two accessible oxidation states (Fe(III)2 and Fe(III)Fe(II)); the heterovalent form is the active one, involved in the regulation of phosphate and phosphorylated metabolite levels in a wide range of organisms. Therefore, catalytically competent model systems are believed to require two sites with different coordination geometries to stabilize the heterovalent active form and, in addition, hydrogen bond donors to enable fixation of the substrate and release of the product. Two ligands and their dinuclear iron complexes have been studied in detail. The solid-state structures and properties, studied by X-ray crystallography, magnetism, and Mössbauer spectroscopy, and the solution structural and electronic properties, investigated by mass spectrometry, electronic, nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and Mössbauer spectroscopies and electrochemistry, are discussed in detail in order to understand the structures and relative stabilities in solution. In particular, with one of the ligands, a heterovalent Fe(III)Fe(II) species has been produced by chemical oxidation of the Fe(II)2 precursor. The phosphatase reactivities of the complexes, in particular also of the heterovalent complex, are reported. These studies include pH-dependent as well as substrate-concentration-dependent experiments, leading to pH profiles, catalytic efficiencies, and turnover numbers, and indicate that the heterovalent diiron complex discussed here is an accurate PAP model system. PMID:26196255

  20. Blasting Vibration Safety Criterion Analysis with Equivalent Elastic Boundary: Based on Accurate Loading Model

    Qingwen Li

    2015-01-01

    Full Text Available In tunnel and underground space engineering, the blasting wave attenuates from a shock wave to a stress wave to an elastic seismic wave in the host rock. Correspondingly, the host rock forms a crushed zone, a fractured zone, and an elastic seismic zone under the blasting loading and waves. In this paper, an accurate mathematical dynamic loading model was built, and the crushed and fractured zones were treated as the blasting vibration source, thus deducting the part of the energy spent on cutting the host rock. This complicated dynamic problem of segmented differential blasting was thereby reduced to an equivalent elastic boundary problem by taking advantage of Saint-Venant's theorem. Finally, a 3D model in the finite element software FLAC3D, using the constitutive parameters, the uniformly distributed time-varying loading, and the cylindrical attenuation law, was employed to predict the velocity curves and effective tensile stress curves used to derive safety criterion formulas for the surrounding rock and tunnel liner, after verifying well against the in situ monitoring data.

  1. New possibilities of accurate particle characterisation by applying direct boundary models to analytical centrifugation

    Walter, Johannes; Thajudeen, Thaseem; Süß, Sebastian; Segets, Doris; Peukert, Wolfgang

    2015-04-01

    Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles.

  2. Quad-Band Bowtie Antenna Design for Wireless Communication System Using an Accurate Equivalent Circuit Model

    Mohammed Moulay

    2015-01-01

    Full Text Available A novel configuration of a quad-band bowtie antenna suitable for wireless applications is proposed, based on an accurate equivalent circuit model. The simple configuration and low-profile nature of the proposed antenna lead to easy multifrequency operation. The proposed antenna is designed to satisfy specific bandwidth specifications for current communication systems, including Bluetooth (frequency range 2.4–2.485 GHz), the Unlicensed National Information Infrastructure (U-NII) low band (frequency range 5.15–5.35 GHz), the U-NII mid band (frequency range 5.47–5.725 GHz), and mobile WiMAX (frequency range 3.3–3.6 GHz). To validate the proposed equivalent circuit model, the simulation results are compared with those obtained by the method of moments in the Momentum software, the finite integration technique of CST Microwave Studio, and the finite element method of the HFSS software. An excellent agreement is achieved for all the designed antennas. The analysis of the simulated results confirms the successful design of the quad-band bowtie antenna.

  3. Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft

    Bartels, Robert E.

    2013-01-01

    The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.
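The snapshot-based POD step described above can be sketched with a plain SVD. The synthetic "pressure snapshots" below are a low-rank field plus noise, not the CFD data of the paper, and the stochastic SVD variant is replaced here by the ordinary deterministic SVD.

```python
import numpy as np

rng = np.random.default_rng(2)
npts, nsnap = 500, 60
t = np.linspace(0, 1, nsnap)
x = np.linspace(0, 1, npts)[:, None]
# Two coherent structures plus small-amplitude noise.
snapshots = (np.sin(2 * np.pi * x) * np.cos(2 * np.pi * t)
             + 0.5 * np.sin(4 * np.pi * x) * np.sin(6 * np.pi * t)
             + 0.01 * rng.standard_normal((npts, nsnap)))

# POD: subtract the mean field, then SVD of the fluctuation snapshot matrix.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

r = 2                                # retain the two energetic POD modes
recon = mean + U[:, :r] * s[:r] @ Vt[:r]
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
energy = (s[:r] ** 2).sum() / (s ** 2).sum()
print(f"rank-{r} POD: relative error {err:.3f}, captured energy {energy:.3f}")
```

The columns of `U` are the POD modes; a reduced order model keeps only the leading few and evolves (or convolves, as in the gust model above) their coefficients instead of the full field.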

  4. ACCURATE UNIVERSAL MODELS FOR THE MASS ACCRETION HISTORIES AND CONCENTRATIONS OF DARK MATTER HALOS

    A large number of observations have constrained cosmological parameters and the initial density fluctuation spectrum to very high accuracy. However, cosmological parameters change with time and the power index of the power spectrum varies dramatically with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for halo structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the age of the universe when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass, when

  5. Inflation model building with an accurate measure of e-folding

    Chongchitnan, Sirichai

    2016-01-01

    We revisit the problem of measuring the number of e-foldings during inflation. It has become standard practice to take the logarithmic growth of the scale factor as a measure of the amount of inflation. However, this is only an approximation to the true amount of inflation required to solve the horizon and flatness problems. The aim of this work is to quantify the error in this approximation and show how it can be avoided. We present an alternative framework for inflation model building using the inverse Hubble radius, aH, as the key parameter. We show that in this formalism the correct number of e-foldings arises naturally as a measure of inflation. As an application, we present an interesting model in which the entire inflationary dynamics can be solved analytically and exactly, and which, in special cases, reduces to the familiar class of power-law models.

  6. Reduced order model of draft tube flow

    Swirling flow with compact coherent structures is a very good candidate for proper orthogonal decomposition (POD), i.e. for decomposition into eigenmodes, which are the cornerstones of the flow field. The present paper focuses on the POD of steady flows corresponding to different operating points of Francis turbine draft tube flow. A set of eigenmodes is built using a limited number of snapshots from computational simulations. The resulting reduced order model (ROM) describes the whole operating range of the draft tube. The ROM makes it possible to interpolate between the operating points, exploiting the knowledge about the significance of particular eigenmodes, and thus to reconstruct the velocity field at any operating point within the given range. A practical example, which employs axisymmetric simulations of the draft tube flow, illustrates the accuracy of the ROM in regions without vortex breakdown, together with the need for a higher resolution of the snapshot database close to locations of sudden flow changes (e.g. vortex breakdown). A ROM based on POD interpolation is a very suitable tool for gaining insight into the flow physics of draft tube flows (especially energy transfers between different operating points), for supplying data for subsequent stability analysis, or as an initialization database for advanced flow simulations.

  7. The accurate simulation of the tension test for stainless steel sheet: the plasticity model

    Full text: The overall aim of this research project is to achieve the accurate simulation of a hydroforming process chain, in this case the manufacturing of a metal bellow. The work is done in cooperation with the project group for numerical research at the computer centre of the University of Karlsruhe, which is responsible for the simulation itself, while the Institute for Metal Forming Technology (IFU) of the University of Stuttgart is responsible for the material modeling and the resulting differential equations that describe the material behavior. Hydroforming technology uses highly compressed fluid media (up to 4200 bar) to form the basic, mostly metallic material. One hydroforming field is tube hydroforming (THF), which uses tubes or extrusions as basic material. The forming conditions created by hydroforming are quite different from those originating in other processes such as deep drawing. That is why today's available simulation software is not always able to show satisfying results when a hydroforming process is simulated. The partners of this project try to solve this problem with the FDEM simulation software, developed by W. Schoenauer at the University of Karlsruhe, Germany. It was designed to solve systems of partial differential equations, which in this project are delivered by the IFU. The manufacturing of a metal bellow by hydroforming leads to tensile stress in the longitudinal and tangential directions and to bending loads due to the shifting and roll-forming process. Therefore, as a first step, the standardized tensile test is simulated. For plastic deformation a material model developed by D. Banabic is used. It describes the plastic behavior of orthotropic sheet metal. For elastic deformation Hooke's law for isotropic materials is used. In permanent iteration with the simulation, the material model used has to be checked for validity and modified if necessary. Refs. 3 (author)

  8. Bacteriophage Infection of Model Metal Reducing Bacteria

    Weber, K. A.; Bender, K. S.; Gandhi, K.; Coates, J. D.

    2008-12-01

    filtered through a 0.22 μm sterile nylon filter, stained with phosphotungstic acid (PTA), and examined using transmission electron microscopy (TEM). TEM revealed the presence of virus-like particles in the culture exposed to mitomycin C. Together these results suggest an active infection with a lysogenic bacteriophage in the model metal-reducing bacteria, Geobacter spp., which could affect metabolic physiology and subsequently metal reduction in environmental systems.

  9. Quantitative evaluation of gas entrainment by numerical simulation with accurate physics model

    In the design study on a large-scale sodium-cooled fast reactor (JSFR), the reactor vessel is made compact to reduce the construction costs and enhance economic competitiveness. However, such a reactor vessel induces higher coolant flows in the vessel and causes several thermal-hydraulics issues, e.g. the gas entrainment (GE) phenomenon. GE in the JSFR may occur at the cover gas-coolant interface in the vessel through a strong vortex at the interface. This type of GE has been studied experimentally, numerically, and theoretically; therefore, the onset condition of GE can be evaluated conservatively. However, to clarify the negative influences of GE on the JSFR, not only the onset condition of GE but also the entrained gas (bubble) flow rate has to be evaluated. As far as we know, studies on entrained gas flow rates are quite limited in both the experimental and numerical fields. In this study, the authors perform numerical simulations to investigate the entrained gas amount in a hollow vortex experiment (a cylindrical vessel experiment). To simulate interfacial deformations accurately, a high-precision numerical simulation algorithm for gas-liquid two-phase flows is employed. First, fine cells are applied to the region near the center of the vortex to reproduce the steep radial gradient of the circumferential velocity in this region. Then, the entrained gas flow rates are evaluated from the simulation results and compared to the experimental data. As a result, the numerical simulation gives a somewhat larger entrained gas flow rate than the experiment. However, both the numerical simulation and the experiment show entrained gas flow rates that are proportional to the outlet water velocity. In conclusion, it is confirmed that the developed numerical simulation algorithm can be applied to the quantitative evaluation of GE. (authors)

  10. Studies of accurate multi-component lattice Boltzmann models on benchmark cases required for engineering applications

    Otomo, Hiroshi; Li, Yong; Dressler, Marco; Staroselsky, Ilya; Zhang, Raoyang; Chen, Hudong

    2016-01-01

    We present recent developments in lattice Boltzmann modeling for multi-component flows, implemented on the platform of a general purpose, arbitrary geometry solver PowerFLOW. Presented benchmark cases demonstrate the method's accuracy and robustness necessary for handling real world engineering applications at practical resolution and computational cost. The key requirements for such approach are that the relevant physical properties and flow characteristics do not strongly depend on numerics. In particular, the strength of surface tension obtained using our new approach is independent of viscosity and resolution, while the spurious currents are significantly suppressed. Using a much improved surface wetting model, undesirable numerical artifacts including thin film and artificial droplet movement on inclined wall are significantly reduced.

  11. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases, or inaccuracy in the calculation of these modes, may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of a `family of secular functions' that we herein call `adaptive mode observers' is thus naturally introduced to implement this strategy; the underlying idea has been distinctly noted for the first time and may be generalized to other applications such as free oscillations, or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee both no loss and high precision of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of the `turning point', our algorithm is remarkably efficient as well as stable and accurate, and can be used as a powerful tool for widely related applications.
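The core of any secular-function root search, scanning a frequency or velocity axis for sign changes and bisecting each bracket, can be sketched as follows. The stand-in secular function below has known analytic roots and is not the generalized reflection/transmission formulation of the paper; it merely illustrates how a too-coarse scan could miss closely spaced roots (mode loss).

```python
import math

def find_roots(f, lo, hi, nscan, tol=1e-12):
    """Bisect every bracket [a, b] with f(a)*f(b) < 0 found on a uniform scan."""
    roots = []
    step = (hi - lo) / nscan
    a = lo
    for i in range(nscan):
        b = lo + (i + 1) * step
        fa, fb = f(a), f(b)
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0:
            x, y = a, b
            while y - x > tol:          # plain bisection within the bracket
                m = 0.5 * (x + y)
                if f(x) * f(m) <= 0:
                    y = m
                else:
                    x = m
            roots.append(0.5 * (x + y))
        a = b
    return roots

# Stand-in "secular function" with roots at c = 10 / (k * pi).
secular = lambda c: math.sin(10.0 / c)
roots = find_roots(secular, 1.0, 4.0, 2000)
print(roots)   # approximately 10/(3*pi), 10/(2*pi), 10/pi
```

An adaptive strategy, as in the paper, effectively refines the scan (or switches to a better-behaved observer function) wherever modes cluster, so that no bracket containing a root is skipped.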

  12. Precise and accurate assessment of uncertainties in model parameters from stellar interferometry. Application to stellar diameters

    Lachaume, Regis; Rabus, Markus; Jordan, Andres

    2015-08-01

    In stellar interferometry, the assumption that the observables can be seen as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offers a means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are correlated by construction, though. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if the uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and no generic implementation is available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or to make Gaussian assumptions. By repeatedly picking at random which interferograms and which calibrator stars to use, and which errors on their diameters to assume, and performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
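
    The bootstrap procedure described here is straightforward to sketch. The example below resamples a set of scalar observables with replacement and re-fits a toy model each time to build up a sampling of the parameter PDF; the data are synthetic (not PIONIER measurements) and the "model parameter" is simply the mean:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_parameter_pdf(observables, fit, n_boot=2000):
    """Resample the observables with replacement and re-fit the model
    each time; the fitted values sample the parameter PDF."""
    observables = np.asarray(observables)
    n = len(observables)
    out = np.empty(n_boot)
    for k in range(n_boot):
        draw = observables[rng.integers(0, n, size=n)]
        out[k] = fit(draw)
    return out

# toy data: noisy squared-visibility-like observables; the fitted
# "parameter" is just their mean, so fit = np.mean
obs = rng.normal(loc=0.8, scale=0.05, size=100)
pdf_samples = bootstrap_parameter_pdf(obs, np.mean)
lo, hi = np.percentile(pdf_samples, [16, 84])   # 68% credible interval
```

    In the paper's setting the resampling runs over interferograms and calibrators and the fit is a full model inversion, but the structure of the loop is the same.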

  13. Accurate modeling of cache replacement policies in a Data-Grid.

    Otoo, Ekow J.; Shoshani, Arie

    2003-01-23

    Caching techniques have been used to bridge the performance gap between levels of storage hierarchies in computing systems. In data-intensive applications that access large data files over a wide-area network environment, such as a data grid, a caching mechanism can significantly improve data access performance under appropriate workloads. In a data grid, it is envisioned that local disk storage resources retain, or cache, the data files being used by local applications. Under a workload of shared accesses with high locality of reference, the performance of caching techniques depends heavily on the replacement policy being used. A replacement policy effectively determines which set of objects must be evicted when space is needed. Unlike cache replacement policies in virtual memory paging or database buffering, developing an optimal replacement policy for data grids is complicated by the fact that the file objects being cached have varying sizes and varying transfer and processing costs that change with time. We present an accurate model for evaluating various replacement policies and propose a new replacement algorithm referred to as "Least Cost Beneficial based on K backward references" (LCB-K). Using this modeling technique, we compare LCB-K with various replacement policies such as Least Frequently Used (LFU), Least Recently Used (LRU), and Greedy Dual-Size (GDS), using synthetic and actual workloads of accesses to and from tertiary storage systems. The results obtained show that LCB-K and GDS are the most cost-effective cache replacement policies for storage resource management in data grids.
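
    Among the baseline policies compared, Greedy Dual-Size (GDS) is compact enough to sketch. The Python cache below assigns each object the priority H = L + cost/size and raises the inflation term L on every eviction, so objects age unless re-referenced; this is a simplified illustration of GDS, not the paper's LCB-K algorithm:

```python
import heapq

class GreedyDualSize:
    """Greedy Dual-Size (GDS) sketch: each cached object carries the
    priority H = L + cost/size; L rises to each evicted priority, so
    objects age unless re-referenced.  Not the paper's LCB-K policy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.L = 0.0
        self.entries = {}        # key -> (priority, size)
        self.heap = []           # (priority, key); stale entries skipped

    def access(self, key, size, cost):
        if key in self.entries:                  # hit: keep stored size
            _, size = self.entries[key]
        else:                                    # miss: evict until it fits
            while self.used + size > self.capacity and self.entries:
                p, victim = heapq.heappop(self.heap)
                if victim in self.entries and self.entries[victim][0] == p:
                    self.L = p                   # inflate the aging term
                    self.used -= self.entries.pop(victim)[1]
            self.used += size
        pri = self.L + cost / size
        self.entries[key] = (pri, size)
        heapq.heappush(self.heap, (pri, key))
```

    A hit re-pushes the object with a refreshed priority; the old heap entry becomes stale and is skipped lazily at eviction time.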

  14. Accurate Locally Conservative Discretizations for Modeling Multiphase Flow in Porous Media on General Hexahedra Grids

    Wheeler, M.F.

    2010-09-06

    For many years, various formulations have been considered for modeling single-phase flow on general hexahedra grids. These include the extended mixed finite element method and families of mimetic finite difference methods. In most of these schemes, either no rate of convergence of the algorithm has been demonstrated both theoretically and computationally, or a more complicated saddle-point system needs to be solved for an accurate solution. Here we describe a multipoint flux mixed finite element (MFMFE) method [5, 2, 3]. This method is motivated by the multipoint flux approximation (MPFA) method [1]. The MFMFE method is locally conservative with continuous flux approximations and is a cell-centered scheme for the pressure. Compared to the MPFA method, the MFMFE has a variational formulation, since it can be viewed as a mixed finite element method with special approximating spaces and quadrature rules. The framework allows handling of hexahedral grids with non-planar faces by applying trilinear mappings from physical elements to reference cubic elements. In addition, there are several multiscale and multiphysics extensions, such as the mortar mixed finite element method, which allows the treatment of non-matching grids [4]. Extensions to two-phase oil-water flow are considered. We reformulate the two-phase model in terms of total velocity, capillary velocity, water pressure, and water saturation, choosing water pressure and water saturation as primary variables. The total velocity is driven by the gradient of the water pressure and the total mobility. An iterative coupling scheme is employed for the coupled system; this scheme allows the treatment of different time scales for the water pressure and water saturation. In each time step, we first solve the pressure equation using the MFMFE method.

  15. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm3) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm3, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm3, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm, and 1

  16. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
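
    The volume-overlap metrics used above are standard and easy to reproduce. A minimal sketch of the Dice similarity coefficient and volume difference on binary voxel masks follows; the voxel volume of 1 mm³ is an assumption for illustration:

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice similarity coefficient in percent:
    DSC = 2 |A & B| / (|A| + |B|) * 100."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    inter = np.logical_and(seg, ref).sum()
    return 200.0 * inter / (seg.sum() + ref.sum())

def volume_difference(seg, ref, voxel_volume_mm3=1.0):
    """Absolute volume difference (mm^3), assuming a known voxel volume."""
    return abs(int(np.count_nonzero(seg)) - int(np.count_nonzero(ref))) * voxel_volume_mm3

# toy 2x2 masks with half overlap
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [1, 0]])
dsc = dice_coefficient(a, b)
```

    The surface-distance metrics (ASSD, RMSSSD, MSSD) additionally require extracting the mask boundaries and nearest-neighbour distances, which is omitted here.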

  17. A semi-implicit, second-order-accurate numerical model for multiphase underexpanded volcanic jets

    S. Carcano

    2013-11-01

    Full Text Available An improved version of the PDAC (Pyroclastic Dispersal Analysis Code; Esposti Ongaro et al., 2007) numerical model for the simulation of multiphase volcanic flows is presented and validated for the simulation of multiphase volcanic jets in supersonic regimes. The present version of PDAC includes second-order time and space discretizations and fully multidimensional advection discretizations in order to reduce numerical diffusion and enhance the accuracy of the original model. The model is tested on the problem of jet decompression in both two and three dimensions. For homogeneous jets, numerical results are consistent with experimental results at the laboratory scale (Lewis and Carlson, 1964). For nonequilibrium gas–particle jets, we consider monodisperse and bidisperse mixtures, and we quantify nonequilibrium effects in terms of the ratio between the particle relaxation time and a characteristic jet timescale. For coarse particles and low particle load, numerical simulations reproduce well both laboratory experiments and numerical simulations carried out with an Eulerian–Lagrangian model (Sommerfeld, 1993). At the volcanic scale, we consider steady-state conditions associated with the development of Vulcanian and sub-Plinian eruptions. For the finest particles produced in these regimes, we demonstrate that the solid phase is in mechanical and thermal equilibrium with the gas phase and that the jet decompression structure is well described by a pseudogas model (Ogden et al., 2008). Coarse particles, on the other hand, display significant nonequilibrium effects, which are associated with their larger relaxation time. Deviations from the equilibrium regime, with maximum velocity and temperature differences on the order of 150 m s−1 and 80 K across shock waves, occur especially during the rapid acceleration phases, and can substantially modify the jet dynamics with respect to the homogeneous case.

  18. Towards a More Accurate Solar Power Forecast By Improving NWP Model Physics

    Köhler, C.; Lee, D.; Steiner, A.; Ritter, B.

    2014-12-01

    The growing importance and continuing expansion of renewable energies raise new challenges for decision makers, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the uncertainties associated with the large share of weather-dependent power sources. Precise power forecasts enable well-timed energy trading on the stock market and help maintain electrical grid stability. The research project EWeLiNE is a collaboration of the German Weather Service (DWD), the Fraunhofer Institute (IWES) and three German transmission system operators (TSOs). Together, wind and photovoltaic (PV) power forecasts shall be improved by combining optimized NWP models and enhanced power forecast models. The work conducted focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. Not only the representation of the model's cloud characteristics, but also special events like Saharan dust over Germany and the solar eclipse in 2015, are treated and their effect on solar power accounted for. An overview of the EWeLiNE project and results of the ongoing research will be presented.

  19. Reduced form models of bond portfolios

    Matti Koivu; Teemu Pennanen

    2010-01-01

    We derive simple return models for several classes of bond portfolios. With only one or two risk factors our models are able to explain most of the return variations in portfolios of fixed rate government bonds, inflation linked government bonds and investment grade corporate bonds. The underlying risk factors have natural interpretations which make the models well suited for risk management and portfolio design.
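
    The claim that one or two risk factors explain most return variation is the kind of statement a principal-component decomposition makes precise. The sketch below builds synthetic portfolio returns from two hypothetical factors (a "level" and a "slope" of the yield curve, with loadings invented for illustration) and checks how much variance the first two principal components capture:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic daily returns for 8 bond portfolios driven by two
# hypothetical risk factors (a "level" and a "slope" of the curve)
T, n = 500, 8
level = rng.normal(0.0, 1.0, T)
slope = rng.normal(0.0, 0.5, T)
load_level = np.linspace(1.0, 2.0, n)        # invented loadings
load_slope = np.linspace(-1.0, 1.0, n)
noise = rng.normal(0.0, 0.05, (T, n))
returns = np.outer(level, load_level) + np.outer(slope, load_slope) + noise

# fraction of total variance captured by the leading principal components
X = returns - returns.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
```

    With factor structure like this, the first two components account for nearly all of the variance, mirroring the paper's one-or-two-factor finding.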

  20. PSI/TM-Coffee: a web server for fast and accurate multiple sequence alignments of regular and transmembrane proteins using homology extension on reduced databases.

    Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming

    2016-07-01

    The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. This server also gives the possibility to use transmembrane proteins (TMPs) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological prediction of TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. PMID:27106060

  1. Rapid nonlinear analysis for electrothermal microgripper using reduced order model based Krylov subspace

    Conventional numerical analysis methods cannot perform rapid system-level simulation of MEMS, especially when sensing and testing integrated circuits are included. Reduced-order models can simulate the behavioral characteristics of models spanning multiple physical energy domains, including nonlinear analysis. This paper sets up a reduced-order model of an electrothermal microgripper using the Krylov subspace projection method. The system matrices were assembled through finite element analysis using Ansys. We performed a structural-electro-thermal analysis of the microgripper finite element model and reduced the model order through a second-order Krylov subspace projection method based on the Arnoldi process. The simulation results from the reduced-order electrothermal model of the microgripper are accurate compared with the finite element analysis, while consuming far less computation time.
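
    The projection step can be sketched with a one-sided Arnoldi process on a generic first-order system E x' = -K x + b u. The matrices below are random and well-conditioned, not the microgripper's, and the paper uses a second-order Arnoldi variant; this is only the structural idea:

```python
import numpy as np

def arnoldi_basis(A, b, m):
    """Orthonormal basis V of the Krylov subspace
    span{b, A b, ..., A**(m-1) b}, built with the Arnoldi process."""
    n = len(b)
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = A @ V[:, j - 1]
        for _ in range(2):               # repeated Gram-Schmidt for stability
            for i in range(j):
                w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

# toy first-order system  E x' = -K x + b u  (generic matrices); project
# onto the Krylov subspace of K^-1 E started from K^-1 b
rng = np.random.default_rng(2)
n, m = 50, 6
K = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
E = np.eye(n)
b = np.ones(n)
V = arnoldi_basis(np.linalg.solve(K, E), np.linalg.solve(K, b), m)
K_r, E_r, b_r = V.T @ K @ V, V.T @ E @ V, V.T @ b    # reduced matrices
```

    Because the first Krylov vector is proportional to K⁻¹b, the reduced model reproduces the full model's steady state exactly (moment matching at s = 0).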

  2. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California 90024 (United States)

    2015-11-15

    Purpose: Significant dosimetric benefits have previously been demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy, and tested the feasibility of total delivery automation with the Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included the gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations, corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the measured distances to those predicted by the model. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and the couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate, optimized via an in-house noncoplanar radiotherapy platform, were converted into XML scripts for automated delivery, and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimensions of the 14 cm cubic phantom to within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  3. Causal transmission in reduced-form models

    Vassili Bazinas; Bent Nielsen

    2015-01-01

    We propose a method to explore the causal transmission of a catalyst variable through two endogenous variables of interest. The method is based on the reduced-form system formed from the conditional distribution of the two endogenous variables given the catalyst. The method combines elements from instrumental variable analysis and Cholesky decomposition of structural vector autoregressions. We give conditions for uniqueness of the causal transmission.
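
    The Cholesky step is the standard recursive identification device: the Cholesky factor of the reduced-form error covariance recovers a lower-triangular impact matrix. A minimal sketch on synthetic data, with an impact matrix B0 invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# structural shocks e and a lower-triangular impact matrix B0
# (invented); the recursive ordering means the first variable does
# not respond to the second shock on impact
B0 = np.array([[1.0, 0.0],
               [0.5, 1.0]])
e = rng.standard_normal((5000, 2))
u = e @ B0.T                            # reduced-form errors u = B0 e

# identification: the Cholesky factor of cov(u) recovers B0
Sigma = np.cov(u, rowvar=False)
B0_hat = np.linalg.cholesky(Sigma)
```

    In the paper's setting the reduced form is conditioned on the catalyst variable first, but the decomposition of the remaining error covariance works the same way.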

  4. Test of the standard model at low energy: accurate measurements of the branching rates of 62Ga; accurate measurements of the half-life of 38Ca

    Precise measurements of Fermi superallowed 0+ → 0+ β decays provide a powerful tool to study the weak interaction properties in the framework of the Standard Model (SM). Collectively, the comparative half-lives (ft) of these transitions allow a sensitive probe of the CVC (Conserved Vector Current) hypothesis and contribute to the most demanding test of the unitarity of the top row of the quark-mixing CKM matrix, by providing, so far, the most accurate determination of its dominant element (Vud). Until recently, an apparent departure from unity cast doubt on the validity of the minimal SM and thus stimulated a considerable effort to extend the study to other available Fermi emitters. 62Ga and 38Ca are among the key nuclei for achieving these precision tests and verifying the reliability of the corrections applied to the experimental ft-values. The 62Ga β-decay was investigated at the IGISOL separator, with an experimental setup composed of 3 EUROBALL Clover detectors for γ-ray detection. Very weak intensity (62Zn. The newly established analog branching ratio (B.RA = 99.893(24)%) was used to compute the universal Ft-value of 62Ga, which turned out to be in good agreement with the 12 well-known cases. Compatibility between the upper limit set here on the term (δIM) and the theoretical prediction suggests that the isospin-symmetry-breaking correction is indeed large for the heavy (A ≥ 62) β-emitters. The study of the 38Ca decay was performed at the CERN-ISOLDE facility. Injection of fluorine into the ion source, in order to chemically select the isotopes of interest, assisted by the REXTRAP Penning trap facility and a time-of-flight analysis, enabled us to eliminate efficiently the troublesome 38mK. For the first time, the 38Ca half-life has been measured with a highly purified radioactive sample. The preliminary result obtained, T1/2(38Ca) = 445.8(10) ms, improves the precision on the half-life as determined from previous measurements by a factor close to 10

  5. Reducing the Ising model to matchings

    Huber, Mark

    2009-01-01

    Canonical paths is one of the most powerful tools available to show that a Markov chain is rapidly mixing, thereby enabling approximate sampling from complex high-dimensional distributions. Two success stories for the canonical paths method are chains for drawing matchings in a graph, and a chain for a version of the Ising model called the subgraphs world. In this paper, it is shown that a subgraphs-world draw can be obtained by taking a draw from matchings on a graph that is linear in the size of the original graph. This provides a partial answer to why canonical paths works so well for both problems, as well as providing a new source of algorithms for the Ising model. For instance, this new reduction immediately yields a fully polynomial time approximation scheme for the Ising model on a bounded degree graph when the magnetization is bounded away from 0.
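
    The subgraphs-world chain and the reduction to matchings are beyond a short sketch, but the target model is easy to state in code. Below is a plain Metropolis chain (not the paper's method) for the ferromagnetic Ising model with an external field on a bounded-degree graph, here an 8-vertex cycle; chains of this kind are what canonical-path arguments certify as rapidly mixing:

```python
import math
import random

def ising_metropolis(adj, beta, h, n_steps, seed=0):
    """Metropolis chain for the ferromagnetic Ising model (J = 1) with
    external field h on a graph given as an adjacency list.  Returns
    the magnetization averaged over all steps."""
    rng = random.Random(seed)
    n = len(adj)
    spin = [1] * n
    mag_sum = 0
    for _ in range(n_steps):
        v = rng.randrange(n)
        # energy change of flipping spin v:  dE = 2 s_v (sum_nb s_u + h)
        dE = 2 * spin[v] * (sum(spin[u] for u in adj[v]) + h)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spin[v] = -spin[v]
        mag_sum += sum(spin)
    return mag_sum / (n_steps * n)

# 8-vertex cycle (degree 2), moderate temperature, positive field
cycle = [[(i - 1) % 8, (i + 1) % 8] for i in range(8)]
m = ising_metropolis(cycle, beta=0.5, h=0.5, n_steps=20000)
```

    With a positive field the average magnetization stays positive, consistent with the regime ("magnetization bounded away from 0") where the reduction yields an FPTAS.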

  6. A simple and accurate model for Love wave based sensors: Dispersion equation and mass sensitivity

    Jiansheng Liu

    2014-01-01

    The dispersion equation is an important tool for analyzing the propagation properties of acoustic waves in layered structures. For Love wave (LW) sensors, the dispersion equation with an isotropic-considered substrate is too rough to give accurate solutions, while the full dispersion equation with a piezoelectric-considered substrate is too complicated to yield simple and practical expressions for optimizing LW-based sensors. In this work, a dispersion equation is introduced for Love waves in a layered struct...

  7. Accurate SPICE Modeling of Poly-silicon Resistor in 40nm CMOS Technology Process for Analog Circuit Simulation

    Sun Lijie

    2015-01-01

    Full Text Available In this paper, a SPICE model of a poly-silicon resistor is accurately developed based on silicon data. To describe the non-linear R-V trend, a new correlation between temperature and voltage is found in the non-silicide poly-silicon resistor. A scalable model is developed from the temperature-dependent characteristics (TDC) and the temperature-dependent voltage characteristics (TDVC) extracted from the R-V data. Besides, the parasitic capacitance between poly and substrate is extracted from a real silicon structure, replacing conventional simulation data. The capacitance data are measured using the on-wafer charge-induced-injection error-free charge-based capacitance measurement (CIEF-CBCM) technique, which is driven by a non-overlapping clock generation circuit. All modeling test structures are designed and fabricated using a 40 nm CMOS technology process. The new SPICE model of the poly-silicon resistor matches silicon more accurately for analog circuit simulation.

  8. Small pores in soils: Is the physico-chemical environment accurately reflected in biogeochemical models ?

    Weber, Tobias K. D.; Riedel, Thomas

    2015-04-01

    Free water is a prerequisite to chemical reactions and biological activity in Earth's upper crust that are essential to life. The void volume between the solid compounds provides space for water, air, and organisms that thrive on the consumption of minerals and organic matter, thereby regulating soil carbon turnover. However, not all water in the pore space of soils and sediments is in its liquid state. This is a result of the adhesive forces which reduce the water activity in small pores and at charged mineral surfaces. This water has a lower tendency to react chemically in solution, as this additional binding energy lowers its activity. In this work, we estimated the amount of soil pore water that is thermodynamically different from a simple aqueous solution. The quantity of soil pore water with properties different from liquid water was found to increase systematically with increasing clay content. The significance of this is that grain size and surface area apparently affect the thermodynamic state of water. This implies that current methods of determining water content, traditionally based on bulk density or on gravimetric water content after drying at 105°C, overestimate the amount of free water in a soil, especially at higher clay contents. Our findings have consequences for biogeochemical processes in soils; e.g., nutrients may be contained in water which is not free, which could enhance preservation. From water activity measurements on a set of various soils with 0 to 100 wt-% clay, we can show that 5 to 130 mg H2O per g of soil can generally be considered unsuitable for microbial respiration. These results may therefore provide a unifying explanation for the grain-size dependency of organic matter preservation in sedimentary environments, and they call for a revised view of the biogeochemical environment in soils and sediments, which could allow a different type of process-oriented modelling.

  9. Fast and Accurate Icepak-PSpice Co-Simulation of IGBTs under Short-Circuit with an Advanced PSpice Model

    Wu, Rui; Iannuzzo, Francesco; Wang, Huai;

    2014-01-01

    A basic problem in the study of the IGBT short-circuit failure mechanism is to obtain a realistic temperature distribution inside the chip, which demands accurate electrical simulation to obtain the power loss distribution, as well as detailed IGBT geometry and material information. This paper describes an unprecedentedly fast and accurate approach to electro-thermal simulation of power IGBTs, suitable for simulating normal as well as abnormal conditions, based on an advanced physics-based PSpice model coupled in a closed loop with the ANSYS/Icepak FEM thermal simulator. Through this approach, significantly faster simulation speed with respect to conventional double-physics simulations, together with very accurate results, can be achieved. A case study is given which presents the detailed electrical and thermal simulation results of an IGBT module under short-circuit conditions. Furthermore, thermal maps in the case of...

  10. Surface electron density models for accurate ab initio molecular dynamics with electronic friction

    Novko, D.; Blanco-Rey, M.; Alducin, M.; Juaristi, J. I.

    2016-06-01

    Ab initio molecular dynamics with electronic friction (AIMDEF) is a valuable methodology for studying the interaction of atomic particles with metal surfaces. This method, in which the effect of low-energy electron-hole (e-h) pair excitations is treated within the local density friction approximation (LDFA) [Juaristi et al., Phys. Rev. Lett. 100, 116102 (2008), 10.1103/PhysRevLett.100.116102], can provide an accurate description of both e-h pair and phonon excitations. In practice, its application becomes complicated in situations of substantial surface-atom displacements, because the LDFA requires knowledge of the bare surface electron density at each integration step. In this work, we propose three different methods of calculating on-the-fly the electron density of the distorted surface, and we discuss their suitability under typical surface distortions. The investigated methods are used in AIMDEF simulations of three illustrative adsorption cases, namely, dissociated H2 on Pd(100), N on Ag(111), and N2 on Fe(110). Our AIMDEF calculations performed with the three approaches highlight the importance of going beyond the frozen surface density to accurately describe the energy released into e-h pair excitations in the case of large surface-atom displacements.

  11. Effective and accurate approach for modeling of commensurate–incommensurate transition in krypton monolayer on graphite

    The commensurate–incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs–Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for the determination of phase diagrams of 2D argon layers on a uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. The applicability of the methodology is demonstrated on the krypton–graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of the coexisting phases, and a method for the prediction of adsorption isotherms are considered, accounting for a compression of the graphite due to the krypton–carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas–solid and solid–solid systems.

  12. An accurate elasto-plastic frictional tangential force displacement model for granular-flow simulations: Displacement-driven formulation

    Zhang, Xiang; Vu-Quoc, Loc

    2007-07-01

    We present in this paper the displacement-driven version of a tangential force-displacement (TFD) model that accounts for both elastic and plastic deformations together with interfacial friction occurring in collisions of spherical particles. This elasto-plastic frictional TFD model, with its force-driven version presented in [L. Vu-Quoc, L. Lesburg, X. Zhang. An accurate tangential force-displacement model for granular-flow simulations: contacting spheres with plastic deformation, force-driven formulation, Journal of Computational Physics 196(1) (2004) 298-326], is consistent with the elasto-plastic frictional normal force-displacement (NFD) model presented in [L. Vu-Quoc, X. Zhang. An elasto-plastic contact force-displacement model in the normal direction: displacement-driven version, Proceedings of the Royal Society of London, Series A 455 (1991) 4013-4044]. Both the NFD model and the present TFD model are based on the concept of additive decomposition of the radius of contact area into an elastic part and a plastic part. The effect of permanent indentation after impact is represented by a correction to the radius of curvature. The effect of material softening due to plastic flow is represented by a correction to the elastic moduli. The proposed TFD model is accurate, and is validated against nonlinear finite element analyses involving plastic flows in both the loading and unloading conditions. The proposed consistent displacement-driven, elasto-plastic NFD and TFD models are designed for implementation in computer codes using the discrete-element method (DEM) for granular-flow simulations. The model is shown to be accurate and is validated against nonlinear elasto-plastic finite-element analysis.
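
    The elasto-plastic NFD/TFD models build on the classical Hertzian elastic contact as their baseline. A sketch of the purely elastic normal force (not the paper's elasto-plastic model) for two identical steel spheres, with assumed material constants:

```python
import numpy as np

def hertz_normal_force(delta, R_eff, E_eff):
    """Hertzian (purely elastic) normal contact force between spheres,
    F = (4/3) E* sqrt(R*) delta^(3/2) -- the baseline that an
    elasto-plastic NFD model corrects for plastic flow."""
    return (4.0 / 3.0) * E_eff * np.sqrt(R_eff) * np.asarray(delta) ** 1.5

# two identical steel spheres (assumed material constants)
R1 = R2 = 0.005                    # radii, m
E, nu = 210e9, 0.3                 # Young's modulus (Pa), Poisson ratio
R_eff = R1 * R2 / (R1 + R2)        # 1/R* = 1/R1 + 1/R2
E_eff = E / (2.0 * (1.0 - nu**2))  # 1/E* = (1-nu1^2)/E1 + (1-nu2^2)/E2
F = hertz_normal_force(1e-6, R_eff, E_eff)   # force at 1 micron overlap
```

    The delta^(3/2) law makes the contact stiffen with overlap; the cited NFD/TFD models then decompose the contact radius into elastic and plastic parts and correct the curvature and moduli accordingly.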

  13. On the fast convergence modeling and accurate calculation of PV output energy for operation and planning studies

    Highlights: • A comprehensive modeling framework for photovoltaic power plants is presented. • Parameters for various modules are obtained using weather and manufacturer’s data. • A fast and accurate algorithm calculates the five-parameter model of a PV module. • The output energy results are closer to measured data compared to SAM and RETScreen. • The overall plant model is recommended for simulation in optimal planning problems. - Abstract: Optimal planning of energy systems relies greatly upon the models used for system components. In this paper, a thorough modeling framework for photovoltaic (PV) power plants is developed for application to operation and planning studies. The model is a precise and flexible one that captures all the environmental and weather parameters that affect the performance of the PV module and inverter, the main components of a PV power plant. These parameters are surface radiation, ambient temperature and wind speed. The presented model can be used to estimate the plant’s output energy for any time period and operating condition. Using a simple iterative process, the presented method demonstrates fast and accurate convergence while using only the limited information provided by manufacturers. The results obtained by the model are verified against the results of the System Advisor Model (SAM) and RETScreen in various operational scenarios. Furthermore, comparison of the simulation results with a real power plant's outputs, and the accompanying statistical error analysis, confirm that our calculation procedure outperforms SAM and RETScreen, two modern and popular commercial PV simulation tools.
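
    The iterative five-parameter calculation referred to above can be illustrated with the standard single-diode equation, solved here by damped fixed-point iteration. This sketch is not the authors' algorithm, and the parameter values used below are hypothetical.

```python
import math

def pv_current(v, i_ph, i_0, r_s, r_sh, n, v_t=0.02585, tol=1e-10, max_iter=500):
    """Solve the implicit single-diode (five-parameter) equation
        I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    for the module current I at voltage V by damped fixed-point iteration.
    v_t is the thermal voltage (~0.02585 V at 25 C)."""
    i = i_ph                                  # initial guess: the photocurrent
    for _ in range(max_iter):
        v_d = v + i * r_s                     # voltage across the diode
        i_new = i_ph - i_0 * math.expm1(v_d / (n * v_t)) - v_d / r_sh
        if abs(i_new - i) < tol:
            return i_new
        i = 0.5 * (i + i_new)                 # damping keeps the iteration stable
    return i
```

    The 0.5 damping factor is a robustness choice: near the open-circuit voltage the undamped fixed-point map can oscillate.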

  14. Fast and Accurate Recurrent Neural Network Acoustic Models for Speech Recognition

    SAK, Haşim; Senior, Andrew; Rao, Kanishka; Beaufays, Françoise

    2015-01-01

    We have recently shown that deep Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform feed forward deep neural networks (DNNs) as acoustic models for speech recognition. More recently, we have shown that the performance of sequence trained context dependent (CD) hidden Markov model (HMM) acoustic models using such LSTM RNNs can be equaled by sequence trained phone models initialized with connectionist temporal classification (CTC). In this paper, we present techniques tha...

  15. Multi Sensor Data Integration for AN Accurate 3d Model Generation

    Chhatkuli, S.; Satoh, T.; Tachibana, K.

    2015-05-01

    The aim of this paper is to introduce a novel technique for data integration between two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and some non-complex terrain. However, the 3D model generated automatically from aerial imagery generally suffers from a lack of accuracy in deriving the 3D model of roads under bridges, details under tree canopy, isolated trees, etc. Moreover, it also suffers in many cases from undulated road surfaces, non-conforming building shapes, and loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each data set's weaknesses and helps to create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise-free and without unnecessary details.


  17. Credit Risk Modelling Under the Reduced Form Approach

    Călin Adrian Cantemir; Popovici Oana Cristina

    2012-01-01

    Credit risk is one of the most important aspects that need to be considered by financial institutions involved in credit granting. It is defined as the risk of loss arising from a borrower who does not make payments as promised. For modelling credit risk there are two main approaches: structural models and reduced form models. The purpose of this paper is to review the evolution of reduced form models from the pioneering days of Jarrow and Turnbull to the present.
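
    In reduced-form (intensity) models of the Jarrow–Turnbull type, default arrives at the first jump of a Poisson process. A minimal sketch, under the simplifying assumptions of a constant risk-neutral hazard rate, a constant short rate, and recovery paid at maturity:

```python
import math

def survival_probability(hazard, t):
    """P(default time > t) = exp(-lambda * t) for a constant intensity lambda."""
    return math.exp(-hazard * t)

def defaultable_zcb_price(r, hazard, recovery, t):
    """Risk-neutral price of a defaultable zero-coupon bond paying 1 at t:
    discount * (survival + recovery * default probability), with recovery
    paid at maturity -- a textbook simplification of the intensity approach."""
    q = survival_probability(hazard, t)
    return math.exp(-r * t) * (q + recovery * (1.0 - q))
```

    With zero recovery the price collapses to exp(-(r + lambda) * t): the hazard rate acts as a credit spread added to the discount rate.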

  18. Towards more accurate isoscapes: encouraging results from wine, water and marijuana data/model and model/model comparisons

    West, J. B.; Ehleringer, J. R.; Cerling, T.

    2006-12-01

    Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, then, has led to increased urgency in the scientific community to try to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterns over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and as such, if modeled correctly over Earth's surface, allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across

  19. An approach to estimating and extrapolating model error based on inverse problem methods: towards accurate numerical weather prediction

    Model error is one of the key factors restricting the accuracy of numerical weather prediction (NWP). Considering the continuous evolution of the atmosphere, the observed data (ignoring measurement error) can be viewed as a series of solutions of an accurate model governing the actual atmosphere. Model error is represented as an unknown term in the accurate model, so NWP can be considered an inverse problem of uncovering the unknown error term. The inverse problem models can absorb long periods of observed data to generate model error correction procedures. They thus remedy the deficiency of NWP schemes that employ only initial-time data. In this study we construct two inverse problem models to estimate and extrapolate the time-varying and space-varying model errors in both the historical and forecast periods, using recent observations and analogue phenomena of the atmosphere. Numerical experiments on Burgers' equation illustrate the substantial forecast improvement obtained with the inverse problem algorithms. The proposed inverse problem methods of suppressing NWP errors will be useful in future high accuracy applications of NWP. (geophysics, astronomy, and astrophysics)
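
    The Burgers' equation testbed mentioned above can be reproduced in a few lines. The scheme below (first-order upwind advection, central diffusion, periodic boundaries) is a generic illustrative solver, not the authors' inverse-problem algorithm, and the grid parameters are arbitrary.

```python
def burgers_step(u, dx, dt, nu=0.0):
    """One explicit step of Burgers' equation u_t + u*u_x = nu*u_xx on a
    periodic grid: first-order upwind advection, central diffusion."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        um, up = u[i - 1], u[(i + 1) % n]
        # upwind difference chosen by the local wind direction
        dudx = (u[i] - um) / dx if u[i] >= 0.0 else (up - u[i]) / dx
        new[i] = u[i] - dt * u[i] * dudx + nu * dt * (up - 2.0 * u[i] + um) / dx ** 2
    return new
```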

  20. Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3

    Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.

    2016-04-01

    Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.

  1. The Impact of Accurate Extinction Measurements for X-ray Spectral Models

    Smith, Randall K; Corrales, Lia

    2016-01-01

    Interstellar extinction includes both absorption and scattering of photons from interstellar gas and dust grains, and it has the effect of altering a source's spectrum and its total observed intensity. However, while multiple absorption models exist, there are no useful scattering models in standard X-ray spectrum fitting tools, such as XSPEC. Nonetheless, X-ray halos, created by scattering from dust grains, are detected around even moderately absorbed sources and the impact on an observed source spectrum can be significant, if modest, compared to direct absorption. By convolving the scattering cross section with dust models, we have created a spectral model as a function of energy, type of dust, and extraction region that can be used with models of direct absorption. This will ensure the extinction model is consistent and enable direct connections to be made between a source's X-ray spectral fits and its UV/optical extinction.

  2. GLOBAL THRESHOLD AND REGION-BASED ACTIVE CONTOUR MODEL FOR ACCURATE IMAGE SEGMENTATION

    Nuseiba M. Altarawneh; Suhuai Luo; Brian Regan; Changming Sun; Fucang Jia

    2014-01-01

    In this contribution, we develop a novel global threshold-based active contour model. This model deploys a new edge-stopping function to control the direction of the evolution and to stop the evolving contour at weak or blurred edges. An implementation of the model requires the use of selective binary and Gaussian filtering regularized level set (SBGFRLS) method. The method uses either a selective local or global segmentation property. It penalizes the level set function to force ...

  3. EXAMINING THE MOVEMENTS OF MOBILE NODES IN THE REAL WORLD TO PRODUCE ACCURATE MOBILITY MODELS

    TANWEER ALAM

    2010-09-01

    All communication occurs through a wireless medium in an ad hoc network. Ad hoc networks are dynamically created and maintained by the individual nodes comprising the network. The Random Waypoint Mobility Model includes pause times between changes in destination and speed. To produce a real-world environment within which an ad hoc network can be formed among a set of nodes, there is a need for the development of realistic, generic and comprehensive mobility models. In this paper, we examine the movements of entities in the real world and present the production of a mobility model in an ad hoc network.
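
    The Random Waypoint model described above is straightforward to implement: pick a uniform destination, move toward it at a uniformly drawn speed, pause on arrival, and repeat. A minimal sketch (the rectangular region, time step and parameter names are illustrative choices, not from the paper):

```python
import random
import math

def random_waypoint(steps, width, height, speed_range, pause_range, dt=1.0, seed=0):
    """Generate a Random Waypoint trace of (x, y) positions, one per time step."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, width), rng.uniform(0, height)
    dest = (rng.uniform(0, width), rng.uniform(0, height))
    speed = rng.uniform(*speed_range)
    pause = 0.0
    trace = []
    for _ in range(steps):
        trace.append((x, y))
        if pause > 0:                          # waiting at a waypoint
            pause -= dt
            continue
        dx, dy = dest[0] - x, dest[1] - y
        dist = math.hypot(dx, dy)
        if dist <= speed * dt:                 # arrive, pause, pick a new target
            x, y = dest
            pause = rng.uniform(*pause_range)
            dest = (rng.uniform(0, width), rng.uniform(0, height))
            speed = rng.uniform(*speed_range)
        else:                                  # move toward the destination
            x += speed * dt * dx / dist
            y += speed * dt * dy / dist
    return trace
```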

  4. Fault Tolerance for Industrial Actuators in Absence of Accurate Models and Hardware Redundancy

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik;

    2015-01-01

    This paper investigates Fault-Tolerant Control for closed-loop systems where only coarse models are available and there is lack of actuator and sensor redundancies. The problem is approached in the form of a typical servomotor in closed-loop. A linear model is extracted from input/output data...

  5. Accurate calculation of binding energies for molecular clusters - Assessment of different models

    Friedrich, Joachim; Fiedler, Benjamin

    2016-06-01

    In this work we test different strategies to compute high-level benchmark energies for medium-sized molecular clusters. We use the incremental scheme to obtain CCSD(T)/CBS energies for our test set and carefully validate the accuracy for binding energies by statistical measures. The local errors of the incremental scheme are small; the benchmark values are ΔE = -278.01 kJ/mol for (H2O)10, ΔE = -221.64 kJ/mol for (HF)10, ΔE = -45.63 kJ/mol for (CH4)10, ΔE = -19.52 kJ/mol for (H2)20 and ΔE = -7.38 kJ/mol for (H2)10. Furthermore we test state-of-the-art wave-function-based and DFT methods. Our benchmark data will be very useful for critical validations of new methods. We find focal-point methods for estimating CCSD(T)/CBS energies to be highly accurate and efficient. For foQ-i3CCSD(T)-MP2/TZ we get a mean error of 0.34 kJ/mol and a standard deviation of 0.39 kJ/mol.

  6. A reduced order model of a quadruped walking system

    Trot walking has recently been studied by several groups because of its stability and realizability. In the trot, diagonally opposed legs form pairs. While one pair of legs provides support, the other pair of legs swings forward in preparation for the next step. In this paper, we propose a reduced order model for the trot walking. The reduced order model is derived by using two dominant modes of the closed loop system in which the local feedback at each joint is implemented. It is shown by numerical examples that the obtained reduced order model can well approximate the original higher order model. (author)
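
    Reduction to two dominant modes, as described above, amounts to projecting the closed-loop system matrix onto its dominant eigenspace. A self-contained sketch using subspace (orthogonal) iteration; the 4-state diagonal example system in the usage below is hypothetical and symmetric for simplicity, not the quadruped model:

```python
import random

def matvec(a, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in a]

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def dominant_subspace(a, k, iters=200, seed=1):
    """Orthonormal basis for the k dominant eigenvectors of a symmetric
    matrix a, computed by subspace iteration with Gram-Schmidt."""
    rng = random.Random(seed)
    q = [[rng.gauss(0, 1) for _ in a] for _ in range(k)]
    for _ in range(iters):
        q = [matvec(a, v) for v in q]
        for i in range(k):                      # re-orthonormalise
            for j in range(i):
                c = dot(q[i], q[j])
                q[i] = [x - c * y for x, y in zip(q[i], q[j])]
            nrm = dot(q[i], q[i]) ** 0.5
            q[i] = [x / nrm for x in q[i]]
    return q

def reduce_model(a, q):
    """Reduced system matrix A_r = Q^T A Q (modal truncation)."""
    return [[dot(qi, matvec(a, qj)) for qj in q] for qi in q]
```

    Keeping only the dominant modes preserves the slow dynamics while discarding fast, quickly-decaying ones, which is the essence of the reduction described in the abstract.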

  7. Highly Accurate Tree Models Derived from Terrestrial Laser Scan Data: A Method Description

    Jan Hackenberg

    2014-05-01

    This paper presents a method for fitting cylinders into a point cloud derived from a terrestrial laser-scanned tree. Utilizing high-quality scan data as the input, the resulting models describe the branching structure of the tree and are capable of detecting branches with a diameter smaller than a centimeter. The cylinders are stored as a hierarchical tree-like data structure encapsulating parent-child neighbor relations and incorporating the tree’s direction of growth. This structure enables the efficient extraction of tree components, such as the stem or a single branch. The method was validated both by comparing the resulting cylinder models with ground truth data and by an analysis of the distances between the input point clouds and the models. The resulting tree models represented more than 99% of the input point cloud, with an average distance from the cylinder model to the point cloud within sub-millimeter accuracy. After validation, the method was applied to build two allometric models based on 24 tree point clouds as an example application. Computation terminated successfully within less than 30 min. For the model predicting the total above-ground volume, the coefficient of determination was 0.965, showing the high potential of terrestrial laser scanning for forest inventories.
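
    A core step of such cylinder fitting is a least-squares fit of a circular cross-section to points projected onto a plane. Below is a sketch of the standard algebraic (Kasa) circle fit, a common building block for this kind of pipeline; it is not the paper's full method.

```python
def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to 2D points.

    Uses the linear form x^2 + y^2 = 2*a*x + 2*b*y + c, where (a, b) is the
    centre and c = r^2 - a^2 - b^2, and solves the 3x3 normal equations."""
    m = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (2.0 * x, 2.0 * y, 1.0)
        z = x * x + y * y
        for i in range(3):
            for j in range(3):
                m[i][j] += row[i] * row[j]
            v[i] += row[i] * z
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for j in range(col, 3):
                m[r][j] -= f * m[col][j]
            v[r] -= f * v[col]
    p = [0.0] * 3
    for r in (2, 1, 0):                        # back substitution
        p[r] = (v[r] - sum(m[r][j] * p[j] for j in range(r + 1, 3))) / m[r][r]
    a, b, c = p
    return a, b, (c + a * a + b * b) ** 0.5    # centre x, centre y, radius
```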

  8. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Wicke, Jason; Dumas, Geneviève A

    2014-06-01

    Segment estimates of mass, center of mass and moment of inertia are required input parameters for analyzing the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of the models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
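
    The slice-based geometry described above lends itself to direct numerical integration. A simplified sketch for a free (non-adjoining) segment, using full elliptical slices and uniform density; the paper's model additionally uses sectioned ellipses and sex-specific, non-uniform density functions:

```python
import math

def segment_properties(slices, density=1.0):
    """Mass and centre of mass (along the long axis) of a limb segment
    modelled as a stack of elliptical slices.  Each slice is (a, b, dz):
    the two semi-axes (from frontal and sagittal photographs) and thickness."""
    mass = moment = z = 0.0
    for a, b, dz in slices:
        vol = math.pi * a * b * dz            # volume of an elliptical slab
        m = density * vol
        mass += m
        moment += m * (z + dz / 2.0)          # slice centroid at mid-thickness
        z += dz
    return mass, moment / mass                # total mass, COM position
```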

  9. Accurate Fabrication of Hydroxyapatite Bone Models with Porous Scaffold Structures by Using Stereolithography

    Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu, E-mail: c-maeda@jwri.osaka-u.ac.jp [Joining and Welding Research Institute, Osaka University, 11-1 Mihogaoka, Ibaraki City, Osaka 567-0047 (Japan)

    2011-05-15

    Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed using computer aided design. The scaffold models, composed of acrylic resin with hydroxyapatite particles at 45 vol.%, were fabricated using stereolithography, a computer aided manufacturing technique. After dewaxing and sintering heat treatment processes, ceramic scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. Using computer aided analysis, it was found that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulation and promote regeneration of new bone.


  11. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-01-01

    Exterior orientation parameter (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) used for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinearity equations for space resection are simplified into a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for simulated data but also for real data from Chang'E-1, compared to the existing space resection model. PMID:27077855


  13. HIGH ACCURATE LOW COMPLEX FACE DETECTION BASED ON KL TRANSFORM AND YCBCR GAUSSIAN MODEL

    Epuru Nithish Kumar

    2013-05-01

    This paper presents a skin color model for face detection based on a YCbCr Gaussian model and the KL transform. A simple Gaussian model and a region model of skin color are designed in both the KL color space and the YCbCr space according to clustering. Skin regions are segmented using an optimal threshold value obtained from an adaptive algorithm. The segmentation results are then used to delineate likely skin regions in the Gaussian-likelihood image. Different morphological operations are then used to eliminate noise from the binary image. In order to locate the face, the obtained regions are grouped with simple detection algorithms. The proposed algorithm works well for complex backgrounds and many faces.
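
    The Gaussian-likelihood step can be sketched directly in the (Cb, Cr) plane. The mean vector and covariance matrix below are illustrative placeholders for the values one would obtain by clustering skin samples, not the paper's trained parameters:

```python
import math

def skin_likelihood(cb, cr, mean=(117.4, 156.6),
                    cov=((160.1, 12.1), (12.1, 299.5))):
    """Gaussian skin-colour likelihood in the (Cb, Cr) plane:
    p = exp(-0.5 * d^T C^-1 d) with d = x - mean.  Mean and covariance are
    illustrative values of the kind obtained by clustering skin pixels."""
    dx, dy = cb - mean[0], cr - mean[1]
    (c11, c12), (c21, c22) = cov
    det = c11 * c22 - c12 * c21
    # quadratic form d^T C^-1 d via the explicit 2x2 inverse
    q = (c22 * dx * dx - (c12 + c21) * dx * dy + c11 * dy * dy) / det
    return math.exp(-0.5 * q)

def is_skin(cb, cr, threshold=0.3):
    """Threshold the likelihood to obtain the binary skin mask."""
    return skin_likelihood(cb, cr) >= threshold
```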

  14. Empirical approaches to more accurately predict benthic-pelagic coupling in biogeochemical ocean models

    Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus

    2016-04-01

    The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?

  15. Restricted Collapsed Draw: Accurate Sampling for Hierarchical Chinese Restaurant Process Hidden Markov Models

    Makino, Takaki; Takei, Shunsuke; Sato, Issei; Mochihashi, Daichi

    2011-01-01

    We propose a restricted collapsed draw (RCD) sampler, a general Markov chain Monte Carlo sampler of simultaneous draws from a hierarchical Chinese restaurant process (HCRP) with restriction. Models that require simultaneous draws from a hierarchical Dirichlet process with restriction, such as infinite hidden Markov models (iHMMs), could not easily enjoy the benefits of the HCRP due to combinatorial explosion in calculating the distributions of coupled draws. By constructing a proposal of se...

  16. Parameterized reduced order modeling of misaligned stacked disks rotor assemblies

    Ganine, Vladislav; Laxalde, Denis; Michalska, Hannah; Pierre, Christophe

    2011-01-01

    Light and flexible rotating parts of modern turbine engines operating at supercritical speeds necessitate application of more accurate but rather computationally expensive 3D FE modeling techniques. Stacked disks misalignment due to manufacturing variability in the geometry of individual components constitutes a particularly important aspect to be included in the analysis because of its impact on system dynamics. A new parametric model order reduction algorithm is presented to achieve this goal at affordable computational costs. It is shown that the disks misalignment leads to significant changes in nominal system properties that manifest themselves as additional blocks coupling neighboring spatial harmonics in Fourier space. Consequently, the misalignment effects can no longer be accurately modeled as equivalent forces applied to a nominal unperturbed system. The fact that the mode shapes become heavily distorted by extra harmonic content renders the nominal modal projection-based methods inaccurate and thus numerically ineffective in the context of repeated analysis of multiple misalignment realizations. The significant numerical bottleneck is removed by employing an orthogonal projection onto the subspace spanned by first few Fourier harmonic basis vectors. The projected highly sparse systems are shown to accurately approximate the specific misalignment effects, to be inexpensive to solve using direct sparse methods and easy to parameterize with a small set of measurable eccentricity and tilt angle parameters. Selected numerical examples on an industrial scale model are presented to illustrate the accuracy and efficiency of the algorithm implementation.
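
    The harmonic-projection idea can be illustrated on a toy cyclic system. For a perfectly tuned (circulant) matrix, the Galerkin projection onto DFT harmonic basis vectors is exactly diagonal; misalignment perturbations would introduce the off-diagonal blocks coupling neighbouring harmonics described above. The example matrix in the usage is a generic ring Laplacian, not an engine model:

```python
import cmath

def fourier_basis(n, harmonics):
    """Orthonormal DFT basis vectors e_h[j] = exp(2*pi*i*h*j/n) / sqrt(n)."""
    return [[cmath.exp(2j * cmath.pi * h * j / n) / n ** 0.5 for j in range(n)]
            for h in harmonics]

def project(a, basis):
    """Galerkin projection A_r = E^H A E onto the chosen harmonic subspace."""
    size = len(a)

    def matvec(v):
        return [sum(a[i][j] * v[j] for j in range(size)) for i in range(size)]

    av = [matvec(e) for e in basis]
    return [[sum(ei[k].conjugate() * avj[k] for k in range(size))
             for avj in av] for ei in basis]
```

    Solving in the reduced harmonic space is far cheaper than working with the full matrix, which is the computational point of the algorithm in the abstract.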

  17. Accurate modeling of a DOI capable small animal PET scanner using GATE

    In this work we developed a Monte Carlo (MC) model of the Sedecal Argus pre-clinical PET scanner, using GATE (Geant4 Application for Tomographic Emission). This is a dual-ring scanner which features DOI compensation by means of two layers of detector crystals (LYSO and GSO). The geometry of detectors and sources, pulse readout and selection of coincidence events were modeled with GATE, while a separate code was developed to emulate the processing of digitized data (for example, customized time windows and data flow saturation), perform the final binning of the lines of response and reproduce the data output format of the scanner's acquisition software. Validation of the model was performed by modeling several phantoms used in experimental measurements, in order to compare the results of the simulations. Spatial resolution, sensitivity, scatter fraction, count rates and NECR were tested. Moreover, the NEMA NU-4 phantom was modeled in order to check the image quality yielded by the model. Noise, contrast of cold and hot regions and recovery coefficient were calculated and compared using images of the NEMA phantom acquired with our scanner. The energy spectrum of coincidence events due to the small amount of 176Lu in LYSO crystals, which was suitably included in our model, was also compared with experimental measurements. Spatial resolution, sensitivity and scatter fraction showed agreement within 7%. Comparison of the count rate curves was satisfactory, with values agreeing within the uncertainties over the range of activities practically used in research scans. Analysis of the NEMA phantom images also showed good agreement between simulated and acquired data, within 9% for all the tested parameters. This work shows that basic MC modeling of this kind of system is possible using GATE as a base platform; extension through suitably written customized code allows for an adequate level of accuracy in the results. Our careful validation against experimental

  18. Modelling of Limestone Dissolution in Wet FGD Systems: The Importance of an Accurate Particle Size Distribution

    Kiil, Søren; Johnsson, Jan Erik; Dam-Johansen, Kim

    1999-01-01

    In wet flue gas desulphurisation (FGD) plants, the most common sorbent is limestone. Over the past 25 years, many attempts to model the transient dissolution of limestone particles in aqueous solutions have been performed, due to the importance for the development of reliable FGD simulation tools… Danish limestone types with very different particle size distributions (PSDs). All limestones were of a high purity. Model predictions were found to be qualitatively in good agreement with experimental data without any use of adjustable parameters. Deviations between measurements and simulations were… attributed primarily to the PSD measurements of the limestone particles, which were used as model inputs. The PSDs, measured using a laser diffraction-based Malvern analyser, were probably not representative of the limestone samples because agglomeration phenomena took place when the particles were…
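
    Why the PSD matters can be seen with even the crudest dissolution model: a shrinking-particle law applied particle-by-particle across the measured size distribution. This sketch is purely illustrative, and far simpler than the transient dissolution model of the paper; the rate law and numbers are assumptions.

```python
def undissolved_fraction(radii, counts, k, t):
    """Mass fraction remaining at time t for a shrinking-particle model
    dr/dt = -k (surface-reaction controlled), applied to a discrete PSD.
    radii: particle radii; counts: number of particles in each size class."""
    m0 = sum(c * r ** 3 for r, c in zip(radii, counts))        # initial mass ~ r^3
    m = sum(c * max(r - k * t, 0.0) ** 3 for r, c in zip(radii, counts))
    return m / m0
```

    Two PSDs with the same mean size but different spread dissolve at visibly different rates, which is the sensitivity the abstract points to.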

  19. An accurate, fast and stable material model for shape memory alloys

    Shape memory alloys possess several features that make them interesting for industrial applications. However, due to their complex and thermo-mechanically coupled behavior, direct use of shape memory alloys in engineering construction is problematic. There is thus a demand for tools to achieve realistic, predictive simulations that are numerically robust when computing complex, coupled load states, are fast enough to calculate geometries of industrial interest, and yield realistic and reliable results without the use of fitting curves. In this paper a new and numerically fast material model for shape memory alloys is presented. It is based solely on energetic quantities, which thus creates a quite universal approach. In the beginning, a short derivation is given before it is demonstrated how this model can be easily calibrated by means of tension tests. Then, several examples of engineering applications under mechanical and thermal loads are presented to demonstrate the numerical stability and high computation speed of the model. (paper)

  20. Toward Accurate Modeling of the Effect of Ion-Pair Formation on Solute Redox Potential.

    Qu, Xiaohui; Persson, Kristin A

    2016-09-13

    A scheme to model the dependence of a solute redox potential on the supporting electrolyte is proposed, and the results are compared to experimental observations and other reported theoretical models. An improved agreement with experiment is exhibited if the effect of the supporting electrolyte on the redox potential is modeled through a concentration change induced via ion-pair formation with the salt, rather than by only considering the direct impact on the redox potential of the solute itself. To exemplify the approach, the scheme is applied to the concentration-dependent redox potential of select molecules proposed for nonaqueous flow batteries. However, the methodology is general and enables rational computational electrolyte design through tuning of the operating window of electrochemical systems by shifting the redox potential of their solutes, potentially including both salts and redox-active molecules. PMID:27500744
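
The mechanism described above — ion pairing lowers the free concentration of one redox partner, which shifts the Nernst potential — can be illustrated with a minimal sketch. The 1:1 pairing equilibrium, the equilibrium constant `k_pair`, and the salt concentration are hypothetical illustration values, not taken from the article:

```python
import math

R, T, F = 8.314, 298.15, 96485.0  # gas constant J/(mol K), temperature K, Faraday constant C/mol

def free_fraction(k_pair, salt_conc):
    """Fraction of the redox-active solute left unpaired, assuming a
    simple 1:1 ion-pair equilibrium A + S <=> AS with the salt S in
    large excess (hypothetical dilute-solution treatment)."""
    return 1.0 / (1.0 + k_pair * salt_conc)

def redox_shift(k_pair, salt_conc, n=1):
    """Nernstian shift (in volts) of the solute redox potential caused
    by the lowered free concentration of one redox partner."""
    return (R * T / (n * F)) * math.log(free_fraction(k_pair, salt_conc))
```

With no pairing (`k_pair = 0`) the shift vanishes; stronger pairing pulls the potential further in one direction, which is the qualitative behavior the scheme exploits.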

  1. High-order accurate finite-volume formulations for the pressure gradient force in layered ocean models

    Engwirda, Darren; Marshall, John

    2016-01-01

    The development of a set of high-order accurate finite-volume formulations for evaluation of the pressure gradient force in layered ocean models is described. A pair of new schemes are presented, both based on an integration of the contact pressure force about the perimeter of an associated momentum control-volume. The two proposed methods differ in their choice of control-volume geometries. High-order accurate numerical integration techniques are employed in both schemes to account for non-linearities in the underlying equation-of-state definitions and thermodynamic profiles, and details of an associated vertical interpolation and quadrature scheme are discussed in detail. Numerical experiments are used to confirm the consistency of the two formulations, and it is demonstrated that the new methods maintain hydrostatic and thermobaric equilibrium in the presence of strongly-sloping layer-wise geometry, non-linear equation-of-state definitions and non-uniform vertical stratification profiles. Additionally, one...

  2. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036, Spain and Servei de Física Mèdica i Protecció Radiològica, Institut Català d’Oncologia, L’Hospitalet de Llobregat 08907 (Spain); Roé, Nuria [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036 (Spain); Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Complexo Hospitalario Universitario de Santiago de Compostela 15706, Spain and Grupo de Imagen Molecular, Instituto de Investigacións Sanitarias de Santiago de Compostela (IDIS), Galicia 15782 (Spain); Falcon, Carles; Ros, Domènec [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036, Spain and CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Pavía, Javier [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 080836 (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); and Servei de Medicina Nuclear, Hospital Clínic, Barcelona 08036 (Spain)

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. 
Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
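
The record's third PSF model composes the geometric aperture response with the measured intrinsic detector response, which was fitted with an asymmetric Gaussian. A minimal numerical sketch of that composition (grid, widths, and normalization are illustrative assumptions, not the study's actual calibration):

```python
import numpy as np

def asym_gaussian(x, x0, sigma_left, sigma_right, amp=1.0):
    """Asymmetric Gaussian: different widths on either side of the
    peak, the shape used to fit the intrinsic detector response."""
    sigma = np.where(x < x0, sigma_left, sigma_right)
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2)

def psf_with_detector(geometric_psf, x, x0, s_left, s_right):
    """Sketch of the most complete PSF model: the geometric aperture
    response convolved with the intrinsic detector response."""
    intrinsic = asym_gaussian(x, x0, s_left, s_right)
    intrinsic /= intrinsic.sum()  # normalise to unit area
    return np.convolve(geometric_psf, intrinsic, mode="same")
```

Convolving with a normalized intrinsic response broadens the PSF without changing its total weight, which is why including it mainly affects resolution and contrast rather than overall counts.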

  3. Accurate reduction of a model of circadian rhythms by delayed quasi steady state assumptions

    Vejchodský, Tomáš

    2014-01-01

    Vol. 139, No. 4 (2014), pp. 577-585. ISSN 0862-7959. Other grants: European Commission (XE) StochDetBioModel (328008). Institutional support: RVO:67985840. Keywords: biochemical networks * gene regulatory networks * oscillating systems * periodic solution. Subject RIV: BA - General Mathematics. http://hdl.handle.net/10338.dmlcz/144135

  4. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what

  5. Analysis of computational models for an accurate study of electronic excitations in GFP

    Schwabe, Tobias; Beerepoot, Maarten; Olsen, Jógvan Magnus Haugaard; Kongsted, Jacob

    2015-01-01

    Using the chromophore of the green fluorescent protein (GFP), the performance of a hybrid RI-CC2 / polarizable embedding (PE) model is tested against a quantum chemical cluster approach. Moreover, the effect of the rest of the protein environment is studied by systematically increasing the size of...

  6. A fast and accurate SystemC-AMS model for PLL

    Ma, K.; Leuken, R. van; Vidojkovic, M.; Romme, J.; Rampu, S.; Pflug, H.; Huang, L.; Dolmans, G.

    2011-01-01

    PLLs have become an important part of electrical systems. When designing a PLL, an efficient and reliable simulation platform for system evaluation is needed. However, the closed loop simulation of a PLL is time consuming. To address this problem, in this paper, a new PLL model containing both digit

  7. A Framework for Accurate Geospatial Modeling of Recharge and Discharge Maps using Image Ranking and Machine Learning

    Yahja, A.; Kim, C.; Lin, Y.; Bajcsy, P.

    2008-12-01

    This paper addresses the problem of accurate estimation of geospatial models from a set of groundwater recharge & discharge (R&D) maps and from auxiliary remote sensing and terrestrial raster measurements. The motivation for our work is driven by the cost of field measurements, and by the limitations of currently available physics-based modeling techniques that do not include all relevant variables and allow accurate predictions only at coarse spatial scales. The goal is to improve our understanding of the underlying physical phenomena and increase the accuracy of geospatial models--with a combination of remote sensing, field measurements and physics-based modeling. Our approach is to process a set of R&D maps generated from interpolated sparse field measurements using existing physics-based models, and identify the R&D map that would be the most suitable for extracting a set of rules between the auxiliary variables of interest and the R&D map labels. We implemented this approach by ranking R&D maps using information entropy and mutual information criteria, and then by deriving a set of rules using a machine learning technique, such as the decision tree method. The novelty of our work is in developing a general framework for building geospatial models with the ultimate goal of minimizing cost and maximizing model accuracy. The framework is demonstrated for groundwater R&D rate models but could be applied to other similar studies, for instance, to understanding hypoxia based on physics-based models and remotely sensed variables. Furthermore, our key contribution is in designing a ranking method for R&D maps that allows us to analyze multiple plausible R&D maps with different numbers of zones, which was not possible in our earlier prototype of the framework called Spatial Pattern to Learn. We will present experimental results using example R&D and other maps from an area in Wisconsin.
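
The ranking step described above scores each candidate R&D map by how much information it shares with an auxiliary variable. A self-contained sketch of that information-theoretic scoring for discrete labels (the label encoding is an assumption; the study's actual criteria and decision-tree stage are not reproduced here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a discrete label sequence."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def mutual_info(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for two aligned label sequences."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def rank_maps(candidate_maps, auxiliary):
    """Rank candidate R&D label maps by how much information each
    shares with an auxiliary raster variable (higher is better)."""
    return sorted(candidate_maps, key=lambda m: mutual_info(m, auxiliary), reverse=True)
```

A map whose zone labels track the auxiliary variable gets a high mutual-information score and would be the one passed on to rule extraction.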

  8. An accurate two-phase approximate solution to the acute viral infection model

    Perelson, Alan S [Los Alamos National Laboratory

    2009-01-01

    During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate its accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase, and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent to which each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and in investigating host and virus heterogeneities.
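
The target cell limited model that the approximation is built on can be integrated numerically to show the exponential rise and fall on a log scale. This is a forward-Euler sketch; the parameter values are illustrative, of the magnitude commonly reported for influenza A, and are not taken from this paper:

```python
def simulate_infection(beta=2.7e-5, delta=4.0, p=1.2e-2, c=3.0,
                       T0=4e8, I0=0.0, V0=0.75, days=10.0, dt=1e-3):
    """Forward-Euler integration of the target cell limited model:
        dT/dt = -beta*T*V      (target cells T infected by virus V)
        dI/dt =  beta*T*V - delta*I   (infected cells I die at rate delta)
        dV/dt =  p*I - c*V     (virus produced at rate p, cleared at rate c)
    Returns the time grid and the viral load trajectory."""
    T, I, V = T0, I0, V0
    times, virus = [], []
    steps = int(days / dt)
    for k in range(steps):
        times.append(k * dt)
        virus.append(V)
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dT * dt
        I += dI * dt
        V += dV * dt
    return times, virus
```

The trajectory rises exponentially until target cells are depleted, peaks, and then decays exponentially, which is exactly the two-phase structure the approximate solution captures.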

  9. Accurate Modeling of The Siemens S7 SCADA Protocol For Intrusion Detection And Digital Forensic

    Amit Kleinmann

    2014-09-01

    The Siemens S7 protocol is commonly used in SCADA systems for communications between a Human Machine Interface (HMI) and the Programmable Logic Controllers (PLCs). This paper presents a model-based Intrusion Detection System (IDS) designed for S7 networks. The approach is based on the key observation that S7 traffic to and from a specific PLC is highly periodic; as a result, each HMI-PLC channel can be modeled using its own unique Deterministic Finite Automaton (DFA). The resulting DFA-based IDS is very sensitive and is able to flag anomalies such as a message appearing out of its position in the normal sequence or a message referring to a single unexpected bit. The intrusion detection approach was evaluated on traffic from two production systems. Despite its high sensitivity, the system had a very low false positive rate - over 99.82% of the traffic was identified as normal.
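
The per-channel DFA idea can be sketched in a few lines: traffic on one HMI-PLC channel is assumed to repeat a fixed cyclic message pattern, and anything that breaks the cycle is flagged. This is a toy model only — the real IDS learns the automaton from observed traffic, whereas here the pattern and the message names are hypothetical:

```python
class ChannelDFA:
    """Minimal sketch of a per-channel DFA anomaly detector: the
    expected traffic is a fixed cyclic sequence of messages."""

    def __init__(self, pattern):
        self.pattern = list(pattern)
        self.state = 0  # index of the message expected next in the cycle

    def observe(self, msg):
        """Return True if msg is the expected next message; on an
        anomaly, return False and resynchronise to the cycle start."""
        if msg == self.pattern[self.state]:
            self.state = (self.state + 1) % len(self.pattern)
            return True
        self.state = 0
        return False
```

Because normal S7 traffic is highly periodic, even a single out-of-place message falls off the automaton's expected transition and is reported.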

  10. Accurate Simulation of 802.11 Indoor Links: A "Bursty" Channel Model Based on Real Measurements

    Agüero Ramón

    2010-01-01

    We propose a novel channel model to be used for simulating indoor wireless propagation environments. An extensive measurement campaign was carried out to assess the performance of different transport protocols over 802.11 links. This enabled us to better adjust our approach, which is based on an autoregressive filter. One of the main advantages of this proposal lies in its ability to reflect the "bursty" behavior which characterizes indoor wireless scenarios and has a great impact on the behavior of upper-layer protocols. We compare this channel model, integrated within the Network Simulator (ns-2) platform, with other traditional approaches, showing that it is able to better reflect the real behavior which was empirically assessed.
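
The autoregressive-filter idea can be sketched as follows: correlated noise from an AR(1) filter is thresholded into loss/no-loss decisions, so losses cluster into bursts instead of occurring independently. The coefficient, threshold, and seed below are illustrative assumptions, not the calibrated values from the measurement campaign:

```python
import random

def bursty_losses(n, a=0.95, threshold=1.2, seed=1):
    """Generate n loss/no-loss decisions from AR(1)-filtered Gaussian
    noise: x_t = a*x_{t-1} + sqrt(1-a^2)*eps_t (unit stationary
    variance), packet lost when x_t exceeds the threshold.
    A coefficient a near 1 makes losses cluster into bursts."""
    random.seed(seed)
    x, losses = 0.0, []
    for _ in range(n):
        x = a * x + random.gauss(0.0, 1.0) * (1.0 - a * a) ** 0.5
        losses.append(x > threshold)
    return losses
```

The burstiness shows up as a loss probability after a loss that is far higher than the overall loss rate — the property that independent-loss models miss and that matters for upper-layer protocols.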

  11. Features of creation of highly accurate models of triumphal pylons for archaeological reconstruction

    Grishkanich, A. S.; Sidorov, I. S.; Redka, D. N.

    2015-12-01

    A measuring procedure for determining the geometric characteristics of objects in space and a geodetic survey of the objects on the ground are described. In the course of the work, data were obtained on the relative positioning of the pylons in space; deviations from verticality were found. Compared with traditional surveying, this testing method is preferable because it yields, in a semi-automated mode, a high-accuracy CAD model of the object for subsequent analysis, which is more economically advantageous.

  12. Morphometric analysis of Russian Plain's small lakes on the base of accurate digital bathymetric models

    Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana

    2016-04-01

    Lake morphometry refers to the physical factors (shape, size, structure, etc.) that determine the lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters, as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. In recent years, lake bathymetric surveys have been carried out using an echo sounder with high bottom-depth resolution and GPS coordinate determination. A few digital bathymetric models have been created with a 10*10 m spatial grid for some small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of depths and slopes of small lakes, as well as the advantages of digital models over traditional methods.

  13. Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum

    Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.

    2013-02-01

    Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that effort, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi, scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and some other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made by using a structured light scanner consisting of two machine vision cameras that are used for the determination of the geometry of the object, a high resolution camera for the recording of the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and laborious procedure which includes the collection of geometric data, the creation of the surface, the noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the provision of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial and in-house software made for the automation of various steps of the procedure was used. The results derived from the above procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time-consuming, while the use of various software packages presumes the services of a specialist.

  14. The Reduced RUM as a Logit Model: Parameterization and Constraints.

    Chiu, Chia-Yi; Köhn, Hans-Friedrich

    2016-06-01

    Cognitive diagnosis models (CDMs) for educational assessment are constrained latent class models. Examinees are assigned to classes of intellectual proficiency defined in terms of cognitive skills called attributes, which an examinee may or may not have mastered. The Reduced Reparameterized Unified Model (Reduced RUM) has received considerable attention among psychometricians. Markov Chain Monte Carlo (MCMC) or Expectation Maximization (EM) are typically used for estimating the Reduced RUM. Commercial implementations of the EM algorithm are available in the latent class analysis (LCA) routines of Latent GOLD and Mplus, for example. Fitting the Reduced RUM with an LCA routine requires that it be reparameterized as a logit model, with constraints imposed on the parameters. For models involving two attributes, these have been worked out. However, for models involving more than two attributes, the parameterization and the constraints are nontrivial and currently unknown. In this article, the general parameterization of the Reduced RUM as a logit model involving any number of attributes and the associated parameter constraints are derived. As a practical illustration, the LCA routine in Mplus is used for fitting the Reduced RUM to two synthetic data sets and to a real-world data set; for comparison, the results obtained by using the MCMC implementation in OpenBUGS are also provided. PMID:25838247
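
For orientation, the item response function being reparameterized can be written in standard notation (schematic, not quoted from this article):

```latex
% Reduced RUM item response function: pi*_j is the probability of a
% correct response for an examinee who has mastered all attributes
% required by item j (q_{jk} = 1), and 0 < r*_{jk} < 1 is the penalty
% for lacking required attribute k.
P(X_j = 1 \mid \boldsymbol{\alpha})
  \;=\; \pi_j^{*} \prod_{k=1}^{K} \left( r_{jk}^{*} \right)^{\,q_{jk}\,(1-\alpha_k)}
```

Because this probability is multiplicative, its logit is not additive in the attributes; the equivalent logit/LCA parameterization therefore needs main effects plus interaction terms, schematically $\operatorname{logit} P(X_j=1\mid\boldsymbol{\alpha}) = \beta_{j0} + \sum_k \beta_{jk}\alpha_k + \sum_{k<k'}\beta_{jkk'}\alpha_k\alpha_{k'} + \cdots$, with constraints on the $\beta$ coefficients (the subject of the article) ensuring consistency with the product form above.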

  15. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for application to problems of a dynamic nature. The recurrent neural network method [1] is applied to construct a reduced-order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis.
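
The static RBF building block — a weighted sum of Gaussian basis functions with weights fitted by least squares — can be sketched as below; in the recurrent variant, delayed outputs are fed back as inputs so the network becomes time-dependent. This is a generic one-dimensional sketch under assumed centers, width, and ridge term, not the study's aero-elastic ROM:

```python
import numpy as np

def rbf_design(x, centers, width):
    """Design matrix of Gaussian radial basis functions."""
    d2 = (x[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf(x, y, centers, width, ridge=1e-8):
    """Least-squares 'training' of the output weights, with a small
    ridge term for numerical stability."""
    phi = rbf_design(x, centers, width)
    gram = phi.T @ phi + ridge * np.eye(len(centers))
    return np.linalg.solve(gram, phi.T @ y)

def predict_rbf(x, centers, width, weights):
    """Evaluate the trained RBF network at new inputs."""
    return rbf_design(x, centers, width) @ weights
```

Once trained on snapshots of the high-fidelity simulation, evaluating the network is just a matrix-vector product — the source of the ROM's large speedup over a full-order computation.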

  16. An accurate higher order displacement model with shear and normal deformations effects for functionally graded plates

    Jha, D.K., E-mail: dkjha@barc.gov.in [Civil Engineering Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India); Kant, Tarun [Department of Civil Engineering, Indian Institute of Technology Bombay, Powai, Mumbai 400 076 (India); Srinivas, K. [Civil Engineering Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India); Singh, R.K. [Reactor Safety Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India)

    2013-12-15

    Highlights: • We model through-thickness variation of material properties in functionally graded (FG) plates. • Effect of material grading index on deformations, stresses and natural frequency of FG plates is studied. • Effect of higher order terms in displacement models is studied for plate statics. • The benchmark solutions for the static analysis and free vibration of thick FG plates are presented. -- Abstract: Functionally graded materials (FGMs) are the potential candidates under consideration for designing the first wall of fusion reactors with a view to make best use of potential properties of available materials under severe thermo-mechanical loading conditions. A higher order shear and normal deformations plate theory is employed for stress and free vibration analyses of functionally graded (FG) elastic, rectangular, and simply (diaphragm) supported plates. Although FGMs are highly heterogeneous in nature, they are generally idealized as continua with mechanical properties changing smoothly with respect to spatial coordinates. The material properties of FG plates are assumed here to vary through the thickness of the plate in a continuous manner. Young's moduli and material densities are considered to be varying continuously in the thickness direction according to the volume fraction of constituents, which are mathematically modeled here as exponential and power law functions. The effects of variation of material properties in terms of the material gradation index on deformations, stresses and natural frequency of FG plates are investigated. The accuracy of the present numerical solutions has been established with respect to exact three-dimensional (3D) elasticity solutions and the other models’ solutions available in the literature.
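
The power-law gradation mentioned above has a standard form: the ceramic volume fraction varies as a power of the through-thickness coordinate, and any effective property follows by the rule of mixtures. A sketch of that common convention (the specific property values below are illustrative, not the paper's):

```python
def power_law_property(z, h, p_metal, p_ceramic, n):
    """Through-thickness property P(z) of an FG plate, z in [-h/2, h/2]:
    ceramic volume fraction Vc = (z/h + 1/2)**n (power-law convention),
    effective property by the rule of mixtures.
    z = -h/2 gives the pure metal face, z = +h/2 the pure ceramic face."""
    vc = (z / h + 0.5) ** n
    return p_metal + (p_ceramic - p_metal) * vc
```

For example, with an aluminium-like modulus of 70 GPa and an alumina-like modulus of 380 GPa, a gradation index n = 2 biases the plate toward the metal-rich side, which is how the gradation index controls stiffness, stresses, and natural frequencies.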

  17. Generation of Accurate Lateral Boundary Conditions for a Surface-Water Groundwater Interaction Model

    Khambhammettu, P.; Tsou, M.; Panday, S. M.; Kool, J.; Wei, X.

    2010-12-01

    The 106 mile long Peace River in Florida flows south from Lakeland to Charlotte Harbor and has a drainage basin of approximately 2,350 square miles. A long-term decline in stream flows and groundwater potentiometric levels has been observed in the region. Long-term trends in rainfall, along with effects of land use changes on runoff, surface-water storage, recharge and evapotranspiration patterns, and increased groundwater and surface-water withdrawals have contributed to this decline. The South West Florida Water Management District (SWFWMD) has funded the development of the Peace River Integrated Model (PRIM) to assess the effects of land use, water use, and climatic changes on stream flows and to evaluate the effectiveness of various management alternatives for restoring stream flows. The PRIM was developed using MODHMS, a fully integrated surface-water groundwater flow and transport simulator developed by HydroGeoLogic, Inc. The development of the lateral boundary conditions (groundwater inflow and outflow) for the PRIM in both historical and predictive contexts is discussed in this presentation. Monthly-varying specified heads were used to define the lateral boundary conditions for the PRIM. These head values were derived from the coarser Southern District Groundwater Model (SDM). However, there were discrepancies between the simulated SDM heads and measured heads: the likely causes being spatial (use of a coarser grid) and temporal (monthly average pumping rates and recharge rates) approximations in the regional SDM. Finer re-calibration of the SDM was not feasible, therefore, an innovative approach was adopted to remove the discrepancies. In this approach, point discrepancies/residuals between the observed and simulated heads were kriged with an appropriate variogram to generate a residual surface. This surface was then added to the simulated head surface of the SDM to generate a corrected head surface. This approach preserves the trends associated with
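
The residual-correction idea described above — interpolate the point discrepancies between observed and simulated heads into a surface, then add that surface to the simulated heads — can be sketched with inverse-distance weighting as a simple stand-in for kriging (the study used kriging with a fitted variogram; IDW, the power parameter, and the coordinates below are illustrative assumptions):

```python
def idw_surface(obs_pts, residuals, grid_pts, power=2.0):
    """Spread point residuals (observed - simulated head) over grid
    points by inverse-distance weighting; a crude stand-in for the
    kriged residual surface."""
    out = []
    for gx, gy in grid_pts:
        num = den = 0.0
        for (x, y), r in zip(obs_pts, residuals):
            d2 = (gx - x) ** 2 + (gy - y) ** 2
            if d2 == 0.0:          # exactly at an observation point
                num, den = r, 1.0
                break
            w = d2 ** (-power / 2.0)
            num += w * r
            den += w
        out.append(num / den)
    return out

def corrected_heads(simulated, residual_surface):
    """Corrected head = regional-model head + interpolated residual."""
    return [s + r for s, r in zip(simulated, residual_surface)]
```

Because the correction honors the observed residuals at the measurement points while varying smoothly between them, the regional model's trends are preserved, which is the stated goal of the approach.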

  18. A simple and accurate numerical network flow model for bionic micro heat exchangers

    Pieper, M.; Klein, P. [Fraunhofer Institute (ITWM), Kaiserslautern (Germany)

    2011-05-15

    Heat exchangers are often associated with drawbacks like a large pressure drop or a non-uniform flow distribution. Recent research shows that bionic structures can provide possible improvements. We considered a set of such structures that were designed with M. Hermann's FracTherm {sup registered} algorithm. In order to optimize and compare them with conventional heat exchangers, we developed a numerical method to determine their performance. We simulated the flow in the heat exchanger applying a network model and coupled these results with a finite volume method to determine the heat distribution in the heat exchanger. (orig.)

  19. Reduced Lorenz models for anomalous transport and profile resilience

    Rypdal, K.; Garcia, Odd Erik

    2007-01-01

    resilience of the profile. Particular emphasis is put on the diffusionless limit, where these equations reduce to a simple dynamical system depending only on one single forcing parameter. This model is studied numerically, stressing experimentally observable signatures, and some of the perils of dimension-reducing...

  20. Considering mask pellicle effect for more accurate OPC model at 45nm technology node

    Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo

    2008-11-01

    The 45nm technology node is the first generation to use immersion microlithography, and the new lithography tools cause many optical effects, which could be ignored at the 90nm and 65nm nodes, to have a significant impact on the pattern transfer process from design to silicon. Among these effects, one that needs attention is the impact of the mask pellicle on critical dimension variation. With hyper-NA lithography tools, the assumption that light passes through the mask pellicle vertically is no longer a good approximation, and the image blurring induced by the mask pellicle should be taken into account in computational microlithography. In this work, we investigate how the mask pellicle impacts the accuracy of the OPC model. We show that, given the extremely tight critical dimension control specifications of the 45nm node, including the mask pellicle effect in the OPC model becomes necessary.

  1. Accurate modeling of SiPM detectors coupled to FE electronics for timing performance analysis

    Ciciriello, F.; Corsi, F.; Licciulli, F.; Marzocca, C. [DEE-Politecnico di Bari, Via Orabona 4, I-70125 Bari (Italy); Matarrese, G., E-mail: matarrese@deemail.poliba.it [DEE-Politecnico di Bari, Via Orabona 4, I-70125 Bari (Italy); Del Guerra, A.; Bisogni, M.G. [Department of Physics, University of Pisa, Largo Bruno Pontecorvo 3, I-56127 Pisa (Italy)

    2013-08-01

    It has already been shown how the shape of the current pulse produced by a SiPM in response to an incident photon is appreciably affected by the characteristics of the front-end electronics (FEE) used to read out the detector. When the application requires approaching the best theoretical time performance of the detection system, the influence of all the parasitics associated with the SiPM-FEE coupling can play a relevant role and must be adequately modeled. In particular, it has been reported that the shape of the current pulse is affected by the parasitic inductance of the wiring connection between the SiPM and the FEE. In this contribution, we extend the validity of a previously presented SiPM model to account for the wiring inductance. Various combinations of the main performance parameters of the FEE (input resistance and bandwidth) have been simulated in order to evaluate their influence on the time accuracy of the detection system, when the time pick-off of each single event is extracted by means of a leading edge discriminator (LED) technique.

  2. Combined model of non-conformal layer growth for accurate optical simulation of thin-film silicon solar cells

    Sever, M.; Lipovsek, B.; Krc, J.; Campa, A.; Topic, M. [University of Ljubljana, Faculty of Electrical Engineering Trzaska cesta 25, Ljubljana 1000 (Slovenia); Sanchez Plaza, G. [Technical University of Valencia, Valencia Nanophotonics Technology Center (NTC) Valencia 46022 (Spain); Haug, F.J. [Ecole Polytechnique Federale de Lausanne EPFL, Institute of Microengineering IMT, Photovoltaics and Thin-Film Electronics Laboratory, Neuchatel 2000 (Switzerland); Duchamp, M. [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons Institute for Microstructure Research, Research Centre Juelich, Juelich D-52425 (Germany); Soppe, W. [ECN-Solliance, High Tech Campus 5, Eindhoven 5656 AE (Netherlands)

    2013-12-15

    In thin-film silicon solar cells, textured interfaces are introduced, leading to improved antireflection and light-trapping capabilities of the devices. Thin layers are deposited on surface-textured substrates or superstrates, and the texture is translated to internal interfaces. For accurate optical modelling of thin-film silicon solar cells it is important to define and include the morphology of the textured interfaces as realistically as possible. In this paper we present a model of thin-layer growth on textured surfaces which combines two growth principles: conformal and isotropic. With the model we can predict the morphology of subsequent internal interfaces in thin-film silicon solar cells based on the known morphology of the substrate or superstrate. Calibration of the model for different materials grown under certain conditions is done on various cross-sectional scanning electron microscopy images of realistic devices. Advantages over existing growth-modelling approaches are demonstrated - one of them is the ability of the model to predict and avoid textures with a high probability of forming defective regions inside the Si absorber layers. The developed model of layer growth is used in rigorous 3-D optical simulations employing the COMSOL simulator. A sinusoidal texture of the substrate is optimised for the case of a micromorph silicon solar cell. More than a 50% increase in the short-circuit current density of the bottom cell with respect to the flat case is predicted, assuming defect-free absorber layers. The developed approach enables accurate prediction and powerful design of current-matched top and bottom cells.
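
The two growth principles being combined can be illustrated on a one-dimensional height profile: conformal growth shifts the surface up by the layer thickness, while isotropic growth is a morphological dilation with a disc of that radius. This is only a 1-D sketch under an assumed blend weight; the paper's model works on real 2-D surfaces and is calibrated against SEM cross-sections:

```python
import math

def grow(surface, t, w, dx=1.0):
    """One growth step on a 1-D height profile (list of heights,
    horizontal spacing dx): blend of conformal growth (uniform
    vertical offset t) and isotropic growth (dilation with a disc
    of radius t), with blend weight w in [0, 1] toward conformal."""
    n = len(surface)
    conformal = [h + t for h in surface]
    isotropic = []
    r = int(t // dx)
    for i in range(n):
        best = -1e18
        for j in range(max(0, i - r), min(n, i + r + 1)):
            dxx = (i - j) * dx
            best = max(best, surface[j] + math.sqrt(max(t * t - dxx * dxx, 0.0)))
        isotropic.append(best)
    return [w * c + (1.0 - w) * s for c, s in zip(conformal, isotropic)]
```

On a flat surface the two principles coincide; near a step, the isotropic component rounds off the feature, which is how the blended model reproduces the smoothing of texture observed as layers grow.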

  3. Extrapolation of Urn Models via Poissonization: Accurate Measurements of the Microbial Unknown

    Lladser, Manuel; Reeder, Jens; 10.1371/journal.pone.0021105

    2011-01-01

    The availability of high-throughput parallel methods for sequencing microbial communities is increasing our knowledge of the microbial world at an unprecedented rate. Though most attention has focused on determining lower bounds on the alpha-diversity, i.e., the total number of different species present in the environment, tight bounds on this quantity may be highly uncertain because a small fraction of the environment could be composed of a vast number of different species. To better assess what remains unknown, we propose instead to predict the fraction of the environment that belongs to unsampled classes. Modeling samples as draws with replacement of colored balls from an urn with an unknown composition, and under the sole assumption that there are still undiscovered species, we show that conditionally unbiased predictors and exact prediction intervals (of constant length in logarithmic scale) are possible for the fraction of the environment that belongs to unsampled classes. Our predictions are based on a P...
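
The quantity being predicted — the probability mass of unseen species in the urn — has a classic point estimate, the Good-Turing coverage estimate, which illustrates the setup even though it is not the conditionally unbiased predictor derived in the paper:

```python
from collections import Counter

def unseen_fraction_gt(sample):
    """Good-Turing estimate of the probability mass of unseen species:
    (number of species observed exactly once) / (sample size).
    Intuition: singletons are the species we only barely discovered,
    so their share approximates the share still undiscovered."""
    counts = Counter(sample)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(sample)
```

A sample full of singletons signals a largely unexplored community, while a sample of repeated species suggests near-complete coverage — the same intuition the urn-model predictors formalize.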

  4. Accurate programmable electrocardiogram generator using a dynamical model implemented on a microcontroller

    Chien Chang, Jia-Ren; Tai, Cheng-Chi

    2006-07-01

    This article reports on the design and development of a complete, programmable electrocardiogram (ECG) generator, which can be used for the testing, calibration and maintenance of electrocardiograph equipment. A modified mathematical model, developed from the three coupled ordinary differential equations of McSharry et al. [IEEE Trans. Biomed. Eng. 50, 289, (2003)], was used to locate precisely the positions of the onset, termination, angle, and duration of individual components in an ECG. Generator facilities are provided so the user can adjust the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave settings. The heart rate can be adjusted in increments of 1 BPM (beats per minute), from 20 to 176 BPM, while the amplitude of the ECG signal can be set from 0.1 to 400 mV with a 0.1 mV resolution. Experimental results show that the proposed concept and the resulting system are feasible.
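
    The three coupled ODEs of McSharry et al. can be sketched as follows. The PQRST angles, amplitudes and widths below are the illustrative defaults from the original dynamical model, not the tuned settings of the proposed generator, and the simple Euler integrator and function name are choices made here for a minimal sketch.

```python
import numpy as np

# PQRST event angles, amplitudes and widths from McSharry et al. (2003);
# treat these as illustrative defaults, not the generator's settings.
THETA = np.array([-np.pi / 3, -np.pi / 12, 0.0, np.pi / 12, np.pi / 2])
A     = np.array([1.2, -5.0, 30.0, -7.5, 0.75])
B     = np.array([0.25, 0.1, 0.1, 0.1, 0.4])

def ecg_waveform(heart_rate=60.0, fs=512.0, duration=10.0, z0=0.0):
    """Integrate the three coupled ODEs with a simple Euler scheme."""
    omega = 2.0 * np.pi * heart_rate / 60.0   # angular heart rate (rad/s)
    dt = 1.0 / fs
    n = int(duration * fs)
    x, y, z = 1.0, 0.0, 0.0
    out = np.empty(n)
    for k in range(n):
        alpha = 1.0 - np.hypot(x, y)          # pulls (x, y) to the unit circle
        theta = np.arctan2(y, x)
        # angular distances to the PQRST events, wrapped to [-pi, pi)
        dtheta = np.mod(theta - THETA + np.pi, 2.0 * np.pi) - np.pi
        dx = alpha * x - omega * y
        dy = alpha * y + omega * x
        dz = -np.sum(A * dtheta * np.exp(-dtheta**2 / (2.0 * B**2))) - (z - z0)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[k] = z
    return out

ecg = ecg_waveform()   # 10 s of synthetic ECG at 512 Hz
```

    Adjusting the event angles and widths moves the onset, termination and duration of the individual waves, which is exactly the handle the generator exposes to the user.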

  5. How to build accurate macroscopic models of actinide ions in aqueous solvents?

    Classical molecular dynamics (MD) simulations based on parameterized force fields allow one to simulate large molecular systems over significantly long simulation times (usually at the ns scale and above). Hence, they provide statistically relevant sampled data sets, which may then be post-processed to estimate specific properties. However, the study of ligand coordination dynamics around heavy ions requires the use of sophisticated force fields accounting in particular for polarization phenomena, as well as for the charge-transfer effects affecting ion/ligand interactions, which are shown to be significant in several heavy-element systems. Our current efforts focus on the development of force-field models for radionuclides, with the intention of pushing as far as possible the accuracy of all competing interactions between the various elements present in solution, that is, the metal, the ligands, the solvent, and the counter-ions.

  6. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
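
    The dispersion coefficients in question follow from Casimir-Polder-type integrals of products of dynamic multipole polarizabilities over imaginary frequency. As a minimal illustration (using a one-oscillator model polarizability chosen here for simplicity, not the model polarizability of the paper), the sketch below integrates the dipole-dipole C6 numerically and checks it against the closed-form London-type result for the same model.

```python
import numpy as np

def c6_casimir_polder(alpha_a, omega_a, alpha_b, omega_b):
    """C6 = (3/pi) * Int_0^inf alpha_A(iw) alpha_B(iw) dw, with
    one-oscillator model polarizabilities
    alpha(iw) = alpha0 / (1 + (w/w0)^2), all in atomic units."""
    w = np.linspace(0.0, 100.0, 200001)
    f = (alpha_a / (1.0 + (w / omega_a) ** 2)
         * alpha_b / (1.0 + (w / omega_b) ** 2))
    dw = w[1] - w[0]
    # trapezoidal rule; the integrand decays as 1/w^4, so truncation
    # at w = 100 a.u. is negligible for these parameters
    return 3.0 / np.pi * dw * (f.sum() - 0.5 * (f[0] + f[-1]))

# hypothetical static polarizabilities and oscillator frequencies (a.u.)
c6_num = c6_casimir_polder(10.0, 0.5, 5.0, 0.8)
# closed-form London-type result for the same model polarizabilities
c6_ref = 1.5 * 10.0 * 5.0 * (0.5 * 0.8) / (0.5 + 0.8)
```

    The higher-order coefficients C8 and C10 involve the same kind of integral with quadrupole and octupole polarizabilities in place of the dipole one, which is where an accurate model multipole polarizability becomes essential.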

  7. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    Tao, Jianmin, E-mail: jianmin.tao@temple.edu [Department of Physics, Temple University, Philadelphia, Pennsylvania 19122 (United States); Rappe, Andrew M. [Department of Chemistry, University of Pennsylvania, Philadelphia, Pennsylvania 19104-6323 (United States)

    2016-01-21

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.

  8. A fuzzy-logic-based approach to accurate modeling of a double gate MOSFET for nanoelectronic circuit design

    The double gate (DG) silicon MOSFET with an extremely short-channel length has the appropriate features to constitute the devices for nanoscale circuit design. To develop a physical model for extremely scaled DG MOSFETs, the drain current in the channel must be accurately determined under the application of drain and gate voltages. However, modeling the transport mechanism for the nanoscale structures requires the use of overkill methods and models in terms of their complexity and computation time (self-consistent, quantum computations, ...). Therefore, new methods and techniques are required to overcome these constraints. In this paper, a new approach based on the fuzzy logic computation is proposed to investigate nanoscale DG MOSFETs. The proposed approach has been implemented in a device simulator to show the impact of the proposed approach on the nanoelectronic circuit design. The approach is general and thus is suitable for any type of nanoscale structure investigation problems in the nanotechnology industry. (semiconductor devices)
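
    A minimal sketch of the fuzzy-logic idea is a zero-order Sugeno system with triangular membership functions mapping a normalized input to a normalized output. The membership breakpoints and rule consequents below are invented for illustration and are not the paper's trained system.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def drain_current(vgs):
    """Zero-order Sugeno fuzzy estimate of a normalized drain current
    from a normalized gate voltage in [0, 1] (illustrative rules only)."""
    mu = np.array([tri(vgs, -0.5, 0.0, 0.5),   # "low" gate voltage
                   tri(vgs,  0.0, 0.5, 1.0),   # "medium"
                   tri(vgs,  0.5, 1.0, 1.5)])  # "high"
    levels = np.array([0.01, 0.3, 1.0])        # rule consequents
    return float(mu @ levels / mu.sum())       # weighted-average defuzzification

i_low = drain_current(0.2)
i_high = drain_current(0.9)
```

    The appeal for device modeling is that the rule base can be fit to a handful of expensive self-consistent quantum simulations and then evaluated at negligible cost inside a circuit simulator.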

  9. A fuzzy-logic-based approach to accurate modeling of a double gate MOSFET for nanoelectronic circuit design

    F. Djeffal; A. Ferdi; M. Chahdi

    2012-01-01

    The double gate (DG) silicon MOSFET with an extremely short-channel length has the appropriate features to constitute the devices for nanoscale circuit design. To develop a physical model for extremely scaled DG MOSFETs, the drain current in the channel must be accurately determined under the application of drain and gate voltages. However, modeling the transport mechanism for the nanoscale structures requires the use of overkill methods and models in terms of their complexity and computation time (self-consistent, quantum computations, ...). Therefore, new methods and techniques are required to overcome these constraints. In this paper, a new approach based on the fuzzy logic computation is proposed to investigate nanoscale DG MOSFETs. The proposed approach has been implemented in a device simulator to show the impact of the proposed approach on the nanoelectronic circuit design. The approach is general and thus is suitable for any type of nanoscale structure investigation problems in the nanotechnology industry.

  10. The human skin/chick chorioallantoic membrane model accurately predicts the potency of cosmetic allergens.

    Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S

    2009-04-01

    The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin. PMID:19054059

  11. Wind-tunnel tests and modeling indicate that aerial dispersant delivery operations are highly accurate

    Hoffman, C.; Fritz, B. [United States Dept. of Agriculture, College Station, TX (United States); Nedwed, T. [ExxonMobil Upstream Research Co., Houston, TX (United States); Coolbaugh, T. [ExxonMobil Research and Engineering Co., Fairfax, VA (United States); Huber, C.A. [CAH Inc., Williamsburg, VA (United States)

    2009-07-01

    Oil dispersants are used to accelerate the dispersion of floating oil slicks. This study was conducted to select application equipment that will help to optimize the application of oil dispersants from aircraft. Oil spill responders have a broad range of oil dispersants at their disposal because the physical and chemical interaction between the oil and dispersant is critical to successful mitigation. In order to make efficient use of dispersants, it is important to evaluate how each one atomizes once released from an aircraft. The specific goal of this study was to evaluate current spray nozzles used to spray oil dispersants from aircraft. The United States Department of Agriculture's high-speed wind tunnel facility in College Station, Texas was used to determine droplet size distributions generated by dispersant delivery nozzles at wind speeds similar to those used in aerial dispersant applications. Droplet distribution was quantified using a laser particle size analyzer. Wind-tunnel tests were conducted using water, Corexit 9500 and 9527 as well as a new dispersant gel being developed by ExxonMobil. The measured drop-size distributions were then used in an agricultural spray model to predict the delivery efficiency and swath width of dispersant delivered at flight speeds and altitudes commonly used for dispersant application. It was concluded that current practices for aerial application of dispersants lead to very efficient application. 19 refs., 5 tabs., 10 figs.

  13. A Comparison of Digital Elevation Models to Accurately Predict Stream Locations

    Trowbridge, Spencer

    Three separate digital elevation models (DEMs) were compared in their ability to predict stream locations. The first DEM, from the Shuttle Radar Topography Mission, had a resolution of 90 meters; the second DEM, from the National Elevation Dataset, had a resolution of 30 meters; and the third DEM was created from Light Detection and Ranging (LiDAR) data and had a resolution of 4.34 meters. Ultimately, stream locations were created from these DEMs and compared to the National Hydrography Dataset (NHD) and stream channels traced from aerial photographs. Each bank of the named streams of the Papillion Creek Watershed was traced and samples were obtained that represent error in the placement of the derived stream locations. Measurements were taken from the centerline of the traced stream channels to where orthogonal transects intersected with the derived stream channel of the DEMs and the streams of the NHD. This study found that DEMs with differing resolutions will delineate stream channels differently and that, without human assistance in processing elevation data, the finest resolution DEM was not the best at reproducing stream locations.

  14. GPS satellite and receiver instrumental biases estimation using least squares method for accurate ionosphere modelling

    G Sasibhushana Rao

    2007-10-01

    The positional accuracy of the Global Positioning System (GPS) is limited by several error sources, of which the ionosphere is the major one. By augmenting the GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits the positional accuracy is instrumental biases. Calibration of these biases is particularly important in achieving CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the TEC using a 4th-order polynomial. The algorithm uses values corresponding to a single station over a one-month period and the results confirm its validity. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 nsec. The observed mean bias errors are of the order of −3.638 nsec and −4.71 nsec for satellites 1 and 31, respectively. The results are found to be consistent over the period.
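
    The joint estimation of polynomial TEC coefficients and an instrumental bias reduces to linear least squares once a mapping function relates vertical to slant TEC. The sketch below is a deliberately simplified single-station, single-bias setup with a hypothetical 1/sin(elevation) mapping function, not the paper's algorithm; it recovers known synthetic coefficients and bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "truth": vertical TEC as a 4th-order polynomial of scaled
# local time t, plus one combined satellite-plus-receiver bias (TECU)
c_true = np.array([12.0, 4.0, -1.5, 0.3, -0.02])
bias_true = 2.4

t = rng.uniform(-1.0, 1.0, 500)                     # scaled local time
elev = rng.uniform(np.radians(15), np.radians(85), 500)
mf = 1.0 / np.sin(elev)                             # simple slant factor

vtec = sum(c * t**i for i, c in enumerate(c_true))
slant = mf * vtec + bias_true + 0.05 * rng.standard_normal(500)

# joint least squares: columns mf * t^i for the polynomial, 1 for the bias;
# the varying mapping function is what separates the bias from c0
A = np.column_stack([mf * t**i for i in range(5)] + [np.ones_like(t)])
sol, *_ = np.linalg.lstsq(A, slant, rcond=None)
c_est, bias_est = sol[:5], sol[5]
```

    Pooling a month of data, as the paper does, plays the same role as the 500 synthetic observations here: it averages down the measurement noise so the bias can be resolved to a fraction of a TECU.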

  15. Reduced order modeling of some fluid flows of industrial interest

    Alonso, D; Terragni, F; Velazquez, A; Vega, J M, E-mail: josemanuel.vega@upm.es [E.T.S.I. Aeronauticos, Universidad Politecnica de Madrid, 28040 Madrid (Spain)

    2012-06-01

    Some basic ideas are presented for the construction of robust, computationally efficient reduced order models amenable to be used in industrial environments, combined with somewhat rough computational fluid dynamics solvers. These ideas result from a critical review of the basic principles of proper orthogonal decomposition-based reduced order modeling of both steady and unsteady fluid flows. In particular, the extent to which some artifacts of the computational fluid dynamics solvers can be ignored is addressed, which opens up the possibility of obtaining quite flexible reduced order models. The methods are illustrated with the steady aerodynamic flow around a horizontal tail plane of a commercial aircraft in transonic conditions, and the unsteady lid-driven cavity problem. In both cases, the approximations are fairly good, thus reducing the computational cost by a significant factor. (review)
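
    The core of proper orthogonal decomposition-based reduced order modelling can be sketched in a few lines: assemble snapshots, take the SVD, truncate where the singular values collapse, and work with the reduced coordinates. The synthetic snapshot data below stand in for CFD solver output and are not from the test cases in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# snapshot matrix: each column is one "flow field" sample; here a
# synthetic field built from 3 coherent modes plus small-scale noise,
# the latter playing the role of solver artifacts
x = np.linspace(0.0, 1.0, 200)
modes = np.stack([np.sin(np.pi * k * x) for k in (1, 2, 3)])   # (3, 200)
coeffs = rng.standard_normal((50, 3))                          # 50 snapshots
snapshots = coeffs @ modes + 1e-3 * rng.standard_normal((50, 200))
S = snapshots.T                                                # (200, 50)

# POD basis from the thin SVD of the snapshot matrix
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

# truncate where the singular values collapse to the artifact level
r = 3
Ur = U[:, :r]

# reduced-order representation: r coefficients instead of 200 values
a = Ur.T @ S                 # project snapshots onto the POD basis
S_rom = Ur @ a               # reconstruct from the reduced coordinates
rel_err = np.linalg.norm(S - S_rom) / np.linalg.norm(S)
```

    The gap between the retained and discarded singular values is what makes it safe to ignore some solver artifacts: they live in the discarded subspace and barely affect the reconstruction error.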

  17. What makes an accurate and reliable subject-specific finite element model? A case study of an elephant femur.

    Panagiotopoulou, O; Wilshin, S D; Rayfield, E J; Shefelbine, S J; Hutchinson, J R

    2012-02-01

    Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form-function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reliable model. We detail how choices of element size, type and material properties in FEA influence the results of simulations. We also present an empirical model for estimating heterogeneous material properties throughout an elephant femur (but of broad applicability to FEA). We then use an ex vivo experimental validation test of a cadaveric femur to check our FEA results and find that the heterogeneous model matches the experimental results extremely well, and far better than the homogeneous model. We emphasize how considering heterogeneous material properties in FEA may be critical, so this should become standard practice in comparative FEA studies along with convergence analyses, consideration of element size, type and experimental validation. These steps may be required to obtain accurate models and derive reliable conclusions from them. PMID:21752810

  18. Accurate Modeling of the Cubic and Antiferrodistortive Phases of SrTiO3 with Screened Hybrid Density Functional Theory

    El-Mellouhi, Fadwa; Lucero, Melissa J; Scuseria, Gustavo E

    2011-01-01

    We have calculated the properties of SrTiO3 (STO) using a wide array of density functionals, ranging from standard semi-local functionals to modern range-separated hybrids, combined with several basis sets of varying size/quality. We show how these combinations' predictive ability varies significantly, both for STO's cubic and antiferrodistortive (AFD) phases, with the greatest variation in functional/basis set efficacy seen in modeling the AFD phase. The screened hybrid functionals we utilized predict the structural properties of both phases in very good agreement with experiment, especially if used with large (but still computationally tractable) basis sets. The most accurate results presented in this study, namely those from HSE06/modified-def2-TZVP, stand as the most accurate modeling of STO to date when compared to the literature; these results agree well with experimental structural and electronic properties as well as providing insight into the band structure alteration during the phase transition.

  19. Accurate prediction of interference minima in linear molecular harmonic spectra by a modified two-center model

    Xin, Cui; Di-Yu, Zhang; Gao, Chen; Ji-Gen, Chen; Si-Liang, Zeng; Fu-Ming, Guo; Yu-Jun, Yang

    2016-03-01

    We demonstrate that the interference minima in the linear molecular harmonic spectra can be accurately predicted by a modified two-center model. Based on systematically investigating the interference minima in the linear molecular harmonic spectra by the strong-field approximation (SFA), it is found that the locations of the harmonic minima are related not only to the nuclear distance between the two main atoms contributing to the harmonic generation, but also to the symmetry of the molecular orbital. Therefore, we modify the initial phase difference between the double wave sources in the two-center model, and predict the harmonic minimum positions consistent with those simulated by SFA. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant Nos. 11274001, 11274141, 11304116, 11247024, and 11034003), and the Jilin Provincial Research Foundation for Basic Research, China (Grant Nos. 20130101012JC and 20140101168JC).

  20. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method has been practically applied in our driverless car. PMID:26927108
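
    The prediction-and-gating idea, fit a simple autoregressive model to recent navigation data, predict the next fix, and reject measurements that violate the consensus threshold, can be sketched as follows. This is a one-dimensional AR toy with invented window and gate parameters, far simpler than the paper's ARMA model bank and occupancy-grid constraints.

```python
import numpy as np

def ar_predict(hist, order=2):
    """One-step-ahead prediction from an AR(order) model whose
    coefficients are fit to `hist` by least squares."""
    n = len(hist)
    Y = hist[order:]
    X = np.column_stack([hist[order - 1 - i:n - 1 - i] for i in range(order)])
    a, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return hist[::-1][:order] @ a        # newest sample first

def filter_track(z, order=2, window=20, gate=0.5):
    """Replace fixes that deviate from the AR prediction by more than
    `gate` (playing the role of the grid size, in metres)."""
    x = np.asarray(z, dtype=float).copy()
    for k in range(window, len(x)):
        pred = ar_predict(x[k - window:k], order)
        if abs(x[k] - pred) > gate:      # consensus check fails: outlier
            x[k] = pred
    return x

t = np.arange(200)
truth = 0.1 * t                          # smooth eastward motion (m)
meas = truth.copy()
meas[[60, 61, 140]] += 4.0               # simulated multipath jumps
clean = filter_track(meas)
```

    Because the AR prediction substitutes for rejected fixes, isolated multipath jumps are removed without introducing the accumulating drift that pure dead reckoning would suffer.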

  1. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    Shiyao Wang

    2016-02-01

    Full Text Available A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method has been practically applied in our driverless car.

  2. A method for accurate modelling of the crystal response function at a crystal sub-level applied to PET reconstruction

    Stute, S.; Benoit, D.; Martineau, A.; Rehfeld, N. S.; Buvat, I.

    2011-02-01

    Positron emission tomography (PET) images suffer from low spatial resolution and signal-to-noise ratio. Accurate modelling of the effects affecting resolution within iterative reconstruction algorithms can improve the trade-off between spatial resolution and signal-to-noise ratio in PET images. In this work, we present an original approach for modelling the resolution loss introduced by physical interactions between and within the crystals of the tomograph and we investigate the impact of such modelling on the quality of the reconstructed images. The proposed model includes two components: modelling of the inter-crystal scattering and penetration (interC) and modelling of the intra-crystal count distribution (intraC). The parameters of the model were obtained using a Monte Carlo simulation of the Philips GEMINI GXL response. Modelling was applied to the raw line-of-response geometric histograms along the four dimensions and introduced in an iterative reconstruction algorithm. The impact of modelling interC, intraC or combined interC and intraC on spatial resolution, contrast recovery and noise was studied using simulated phantoms. The feasibility of modelling interC and intraC in two clinical 18F-NaF scans was also studied. Measurements on Monte Carlo simulated data showed that, without any crystal interaction modelling, the radial spatial resolution in air varied from 5.3 mm FWHM at the centre of the field-of-view (FOV) to 10 mm at 266 mm from the centre. Resolution was improved with interC modelling (from 4.4 mm in the centre to 9.6 mm at the edge), or with intraC modelling only (from 4.8 mm in the centre to 4.3 mm at the edge), and it became stationary across the FOV (4.2 mm FWHM) when combining interC and intraC modelling. This improvement in resolution yielded significant contrast enhancement, e.g. 
from 65 to 76% and 55.5 to 68% for a 6.35 mm radius sphere with a 3.5 sphere-to-background activity ratio at 55 and 215 mm from the centre of the FOV, respectively

  4. Fast and accurate Monte Carlo modeling of a kilovoltage X-ray therapy unit using a photon-source approximation for treatment planning in complex media

    B Zeinali-Rafsanjani

    2015-01-01

    Full Text Available To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies including chest-wall irradiation using kilovoltage beam.
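
    The HVL validation rests on a simple attenuation picture: for a polyenergetic beam, the transmitted fraction is a weighted sum of exponentials, and beam hardening makes the second HVL larger than the first, as in the measured 3.8 and 10.3 mm Al values. The sketch below uses an invented two-component spectrum to show the effect; the weights and attenuation coefficients are illustrative, not fitted to the 120 kVp beam.

```python
import numpy as np

# two-component spectral model: a "soft" and a "hard" part of the beam,
# with hypothetical linear attenuation coefficients in Al (1/mm)
w = np.array([0.6, 0.4])        # relative weights
mu = np.array([0.40, 0.05])     # per-component attenuation

x = np.linspace(0.0, 60.0, 60001)                     # Al thickness (mm)
T = (w[:, None] * np.exp(-mu[:, None] * x)).sum(axis=0)

def thickness_at(frac):
    """Al thickness where the transmission drops to `frac`
    (T is monotone decreasing, so reverse it for interpolation)."""
    return float(np.interp(frac, T[::-1], x[::-1]))

hvl1 = thickness_at(0.5)                  # first half-value layer
hvl2 = thickness_at(0.25) - hvl1          # second half-value layer
```

    The soft component is filtered out first, so the beam penetrating beyond the first HVL is harder and needs more aluminium to halve again, i.e. hvl2 > hvl1, just as observed experimentally.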

  5. Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests.

    Gao, Yaozong; Shao, Yeqin; Lian, Jun; Wang, Andrew Z; Chen, Ronald C; Shen, Dinggang

    2016-06-01

    Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to low tissue contrast of CT images, as well as large variations of shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape prior can be easily incorporated to regularize the segmentation. Nonetheless, the sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a non-local external force for each vertex of deformable model, thus overcoming the initialization problem suffered by the traditional deformable models. To learn a reliable displacement regressor, two strategies are particularly proposed. 1) A multi-task random forest is proposed to learn the displacement regressor jointly with the organ classifier; 2) an auto-context model is used to iteratively enforce structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification or regression based methods, and also several other existing methods in CT pelvic organ segmentation. PMID:26800531

  6. SU-E-T-475: An Accurate Linear Model of Tomotherapy MLC-Detector System for Patient Specific Delivery QA

    Purpose: An accurate leaf fluence model can be used in applications such as patient specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is known that the total fluence is not a linear combination of individual leaf fluences due to leakage-transmission, tongue-and-groove, and source occlusion effects. Here we propose a method to model the nonlinear effects as linear terms, thus making the MLC-detector system a linear system. Methods: A leaf pattern basis (LPB) consisting of no-leaf-open, single-leaf-open, double-leaf-open and triple-leaf-open patterns is chosen to represent the linear and major nonlinear effects of leaf fluence as a linear system. An arbitrary leaf pattern can be expressed as (or decomposed into) a linear combination of the LPB, either pulse by pulse or weighted by dwell time. The exit detector responses to the LPB are obtained by processing returned detector signals resulting from the predefined leaf patterns for each jaw setting. Through forward transformation, the detector signal can be predicted given a delivery plan. An equivalent leaf open time (LOT) sinogram containing output variation information can also be inversely calculated from the measured detector signals. Twelve patient plans were delivered in air. The equivalent LOT sinograms were compared with their planned sinograms. Results: The whole calibration process was done in 20 minutes. For two randomly generated leaf patterns, 98.5% of the active channels showed differences within 0.5% of the local maximum between the predicted and measured signals. Averaged over the twelve plans, 90% of LOT errors were within ±10 ms. The LOT systematic error increases and shows an oscillating pattern when LOT is shorter than 50 ms. Conclusion: The LPB method models the MLC-detector response accurately, which improves patient specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is sensitive enough to detect systematic LOT errors as small as 10 ms.
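The forward/inverse transformation around the leaf pattern basis can be sketched numerically. In this hypothetical toy version, the calibrated detector responses to the basis patterns form the rows of a matrix; the forward transform is a matrix product, and the inverse (recovering equivalent leaf-open-time weights from a measured signal) is a least-squares solve. The basis size, channel count, and signals are all made up.

```python
import numpy as np

rng = np.random.default_rng(1)

n_basis = 12      # no-leaf-open + single/double/triple-leaf-open patterns
n_channels = 64   # exit-detector channels

# Calibration: detector response to each basis pattern (one row per pattern).
B = rng.random((n_basis, n_channels))

# A delivery expressed as LPB coefficients (e.g. dwell-time weights).
coeffs_true = rng.random(n_basis)

# Forward transform: predicted detector signal for the plan.
signal = coeffs_true @ B

# Inverse: recover the equivalent coefficients (hence leaf-open times) from a
# measured signal by least squares.
coeffs_est, *_ = np.linalg.lstsq(B.T, signal, rcond=None)
assert np.allclose(coeffs_est, coeffs_true)
```

The point of the LPB is that nonlinearities (leakage, tongue-and-groove, occlusion) are absorbed into the measured basis responses, so the remaining system really is linear and invertible in this way.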

  7. On Modeling CPU Utilization of MapReduce Applications

    Rizvandi, Nikzad Babaii; Zomaya, Albert Y

    2012-01-01

    In this paper, we present an approach to predict the total CPU utilization, in terms of CPU clock ticks, of applications running on the MapReduce framework. Our approach has two key phases: profiling and modeling. In the profiling phase, an application is run several times with different sets of MapReduce configuration parameters to profile the total CPU clock ticks of the application on a given platform. In the modeling phase, multiple linear regression is used to map the sets of MapReduce configuration parameters (number of Mappers, number of Reducers, size of the file system (HDFS) and the size of the input file) to the total CPU clock ticks of the application. This derived model can be used to predict the total CPU requirements of the same application when using the MapReduce framework on the same platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. Three standard applications (WordCount, Exim Mainlog parsing and Terasort) are used to evaluate our modeling technique on pseu...
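The modeling phase amounts to an ordinary multiple linear regression from the four configuration parameters to profiled CPU ticks. A minimal sketch, with entirely invented profiling numbers:

```python
import numpy as np

# Columns: #mappers, #reducers, file-system size (MB), input size (MB).
# Each row is one profiling run; the values below are hypothetical.
X = np.array([
    [2,  1,  64,  512],
    [4,  2,  64, 1024],
    [8,  2, 128, 2048],
    [8,  4, 128, 2048],
    [16, 4, 256, 4096],
    [16, 8, 256, 4096],
], dtype=float)
# Profiled total CPU clock ticks (arbitrary units) for each run.
y = np.array([1.1e9, 2.0e9, 4.2e9, 3.9e9, 8.3e9, 7.8e9])

# Add an intercept column and solve the least-squares problem.
A = np.hstack([np.ones((X.shape[0], 1)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict CPU ticks for an unseen configuration of the same application.
x_new = np.array([1.0, 8, 4, 256, 2048])
predicted_ticks = x_new @ beta
```

Once fitted per application and platform, the model replaces manual benchmarking of every new configuration with a single dot product.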

  8. Use of a clay modeling task to reduce chocolate craving.

    Andrade, Jackie; Pears, Sally; May, Jon; Kavanagh, David J

    2012-06-01

    Elaborated Intrusion theory (EI theory; Kavanagh, Andrade, & May, 2005) posits two main cognitive components in craving: associative processes that lead to intrusive thoughts about the craved substance or activity, and elaborative processes supporting mental imagery of the substance or activity. We used a novel visuospatial task to test the hypothesis that visual imagery plays a key role in craving. Experiment 1 showed that spending 10 min constructing shapes from modeling clay (plasticine) reduced participants' craving for chocolate compared with spending 10 min 'letting your mind wander'. Increasing the load on verbal working memory using a mental arithmetic task (counting backwards by threes) did not reduce craving further. Experiment 2 compared effects on craving of a simpler verbal task (counting by ones) and clay modeling. Clay modeling reduced overall craving strength and strength of craving imagery, and reduced the frequency of thoughts about chocolate. The results are consistent with EI theory, showing that craving is reduced by loading the visuospatial sketchpad of working memory but not by loading the phonological loop. Clay modeling might be a useful self-help tool to help manage craving for chocolate, snacks and other foods. PMID:22369958

  9. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communication across the domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains of two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins whose domains occur in more than one protein architectural context. Using the predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with the correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. PMID:24375512
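The classification step can be sketched as a per-residue naive Bayes model over evolutionary features. This is a hypothetical illustration with synthetic data; the real feature set (e.g. conservation differences between single- and multidomain homologues) and thresholds come from the paper, not from this sketch.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)

# Synthetic per-residue features and interface labels for one protein.
n_residues, n_features = 300, 6
X = rng.normal(size=(n_residues, n_features))
y = rng.integers(0, 2, size=n_residues)  # 1 = interfacial residue

clf = GaussianNB().fit(X, y)

# Residues predicted as interfacial can then serve as distance constraints
# for rigid-body docking of the two domains.
proba = clf.predict_proba(X)[:, 1]
predicted_interface = np.flatnonzero(proba > 0.5)
```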

  10. Comment on "Accurate analytic model potentials for D2 and H2 based on the perturbed-Morse-oscillator model"

    Huffaker and Cohen (Ref. 1) claim that the perturbed-Morse-oscillator (PMO) model for the potential energy function of hydrogen gives highly accurate results, surpassing those of the RKR potential. A more efficient approach to formulating analytical functions based on the PMO model is given, and some defects of the PMO model are discussed
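For context, a PMO-style potential augments the Morse form with a power series in the Morse variable y = 1 − exp(−a(r − rₑ)). The sketch below uses illustrative parameter values, not the fitted H₂/D₂ constants from the papers under discussion:

```python
import numpy as np

def pmo_potential(r, De=4.75, a=1.94, re=0.74, b=(0.0, 0.05)):
    """Perturbed-Morse-oscillator form: V(r) = De * (y**2 + sum_n b[n] * y**(n+3)),
    with Morse variable y = 1 - exp(-a * (r - re)). Parameters are illustrative."""
    y = 1.0 - np.exp(-a * (r - re))
    v = y**2
    for n, bn in enumerate(b):
        v = v + bn * y**(n + 3)
    return De * v

# Evaluate the potential curve; the minimum sits at r = re, where V = 0.
r = np.linspace(0.4, 4.0, 200)
V = pmo_potential(r)
```

The b coefficients are the "perturbation" of the plain Morse oscillator; fitting them is where the accuracy claims, and the defects discussed in this comment, arise.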