WorldWideScience

Sample records for models provide accurate

  1. Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters

    International Nuclear Information System (INIS)

    Zagni, F.; Cicoria, G.; Lucconi, G.; Infantino, A.; Lodi, F.; Marengo, M.

    2014-01-01

    Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and in the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed the Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit. More precisely the “PENELOPE” EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken in order to accurately reproduce the geometrical details of the gas chamber and the activity sources, each of which is different in shape and enclosed in a unique container. Both relative calibration factors and ionization current obtained with simulations were compared against experimental measurements; further tests were carried out, such as the comparison of the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with the discrepancies lower than 4% for all the tested parameters. This shows that an accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The obtained Monte Carlo model establishes a powerful tool for first instance determination of new calibration factors for non-standard radionuclides, for custom containers, when a reference source is not available. Moreover, the model provides an experimental setup for further research and optimization with regards to materials and geometrical details of the measuring setup, such as the ionization chamber itself or the containers configuration. - Highlights: • We developed a Monte Carlo model of a radionuclide activity meter using Geant4. • The model was validated using several
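
    A minimal sketch, in Python, of the kind of validation comparison described above: relative calibration factors from the simulation are compared against measured ones and the percent discrepancy is checked against the reported 4% bound. The nuclides and values below are placeholders, not the paper's data.

      # Hypothetical relative calibration factors, normalized to a reference nuclide.
      simulated = {"Tc-99m": 1.00, "F-18": 1.32, "I-131": 1.18}
      measured  = {"Tc-99m": 1.00, "F-18": 1.29, "I-131": 1.21}

      for nuclide in simulated:
          discrepancy = 100.0 * (simulated[nuclide] - measured[nuclide]) / measured[nuclide]
          print(f"{nuclide}: {discrepancy:+.1f}%  (reported agreement: within 4%)")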

  2. Accurate market price formation model with both supply-demand and trend-following for global food prices providing policy recommendations.

    Science.gov (United States)

    Lagi, Marco; Bar-Yam, Yavni; Bertrand, Karla Z; Bar-Yam, Yaneer

    2015-11-10

    Recent increases in basic food prices are severely affecting vulnerable populations worldwide. Proposed causes such as shortages of grain due to adverse weather, increasing meat consumption in China and India, conversion of corn to ethanol in the United States, and investor speculation on commodity markets lead to widely differing implications for policy. A lack of clarity about which factors are responsible reinforces policy inaction. Here, for the first time to our knowledge, we construct a dynamic model that quantitatively agrees with food prices. The results show that the dominant causes of price increases are investor speculation and ethanol conversion. Models that just treat supply and demand are not consistent with the actual price dynamics. The two sharp peaks in 2007/2008 and 2010/2011 are specifically due to investor speculation, whereas an underlying upward trend is due to increasing demand from ethanol conversion. The model includes investor trend following as well as shifting between commodities, equities, and bonds to take advantage of increased expected returns. Claims that speculators cannot influence grain prices are shown to be invalid by direct analysis of price-setting practices of granaries. Both causes of price increase, speculative investment and ethanol conversion, are promoted by recent regulatory changes-deregulation of the commodity markets, and policies promoting the conversion of corn to ethanol. Rapid action is needed to reduce the impacts of the price increases on global hunger.
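
    A toy sketch of the two-component price dynamic the abstract describes: an equilibrium-restoring supply-demand term plus a trend-following (speculative) term. The update rule and all parameter values below are illustrative assumptions, not the calibrated model of Lagi et al.

      import numpy as np

      def simulate_price(n_steps=500, p_eq=1.0, k_sd=0.05, k_tf=0.4, dt=1.0, seed=0):
          """Toy price series: supply-demand restoring force plus trend following."""
          rng = np.random.default_rng(seed)
          p = np.empty(n_steps)
          p[0], p[1] = p_eq, p_eq
          for t in range(1, n_steps - 1):
              restoring = -k_sd * (p[t] - p_eq)        # supply-demand pulls price toward equilibrium
              trend = k_tf * (p[t] - p[t - 1])         # speculators follow the recent price trend
              noise = 0.01 * rng.standard_normal()
              p[t + 1] = p[t] + dt * (restoring + trend) + noise
          return p

      prices = simulate_price()
      print(prices[-5:])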

  3. Fishing site mapping using local knowledge provides accurate and ...

    African Journals Online (AJOL)

    Accurate fishing ground maps are necessary for fisheries monitoring. In the Velondriake locally managed marine area (LMMA) we observed that the nomenclature of shared fishing sites (FS) is village dependent. Additionally, the level of illiteracy makes data collection more complicated, leading to data collectors improvising ...

  4. Anatomically accurate, finite model eye for optical modeling.

    Science.gov (United States)

    Liou, H L; Brennan, N A

    1997-08-01

    There is a need for a schematic eye that models vision accurately under various conditions such as refractive surgical procedures, contact lens and spectacle wear, and near vision. Here we propose a new model eye close to anatomical, biometric, and optical realities. This is a finite model with four aspheric refracting surfaces and a gradient-index lens. It has an equivalent power of 60.35 D and an axial length of 23.95 mm. The new model eye provides spherical aberration values within the limits of empirical results and predicts chromatic aberration for wavelengths between 380 and 750 nm. It provides a model for calculating optical transfer functions and predicting optical performance of the eye.

  5. Accurate modeling of parallel scientific computations

    Science.gov (United States)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
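
    A minimal illustration of the kind of performance model involved: for a one-dimensional partition, the predicted time per synchronized step is the maximum over processors of local work plus neighbour communication. The cost constants are placeholder assumptions, not values from the paper.

      # Predicted step time for a 1-D grid partition: the slowest processor gates each step.
      def predict_step_time(partition_sizes, t_cell=1e-6, t_msg=5e-5):
          times = []
          for i, n_cells in enumerate(partition_sizes):
              n_neighbours = (i > 0) + (i < len(partition_sizes) - 1)
              times.append(n_cells * t_cell + n_neighbours * t_msg)
          return max(times)

      print(predict_step_time([2500, 2500, 2500, 2500]))   # balanced partition
      print(predict_step_time([4000, 2000, 2000, 2000]))   # imbalanced partition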

  6. Estimating the Value of New Technologies That Provide More Accurate Drug Adherence Information to Providers for Their Patients with Schizophrenia.

    Science.gov (United States)

    Shafrin, Jason; Schwartz, Taylor T; Lakdawalla, Darius N; Forma, Felicia M

    2016-11-01

    Nonadherence to antipsychotic medication among patients with schizophrenia results in poor symptom management and increased health care and other costs. Despite its health impact, medication adherence remains difficult to accurately assess. New technologies offer the possibility of real-time patient monitoring data on adherence, which may in turn improve clinical decision making. However, the economic benefit of accurate patient drug adherence information (PDAI) has yet to be evaluated. To quantify how more accurate PDAI can generate value to payers by improving health care provider decision making in the treatment of patients with schizophrenia. A 3-step decision tree modeling framework was used to measure the effect of PDAI on annual costs (2016 U.S. dollars) for patients with schizophrenia who initiated therapy with an atypical antipsychotic. The first step classified patients using 3 attributes: adherence to antipsychotic medication, medication tolerance, and response to therapy conditional on medication adherence. The prevalence of each characteristic was determined from claims database analysis and literature reviews. The second step modeled the effect of PDAI on provider treatment decisions based on health care providers' survey responses to schizophrenia case vignettes. In the survey, providers were randomized to vignettes with access to PDAI and with no access. In the third step, the economic implications of alternative provider decisions were identified from published peer-reviewed studies. The simulation model calculated the total economic value of PDAI as the difference between expected annual patient total cost corresponding to provider decisions made with or without PDAI. In claims data, 75.3% of patients with schizophrenia were found to be nonadherent to their antipsychotic medications. Review of the literature revealed that 7% of patients cannot tolerate medication, and 72.9% would respond to antipsychotic medication if adherent. Survey responses by
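
    A toy version of the value calculation sketched above: the value of better adherence information is the difference in expected annual cost between provider decisions made without and with it. The 75.3% nonadherence rate is taken from the abstract; the detection probabilities and cost figures are placeholder assumptions.

      # Expected annual cost when nonadherence is detected with probability p_detect.
      def expected_cost(p_nonadherent, p_detect, cost_managed, cost_unmanaged):
          nonadherent = p_nonadherent * (p_detect * cost_managed + (1 - p_detect) * cost_unmanaged)
          adherent = (1 - p_nonadherent) * cost_managed
          return nonadherent + adherent

      cost_without_pdai = expected_cost(0.753, p_detect=0.4, cost_managed=20000, cost_unmanaged=35000)
      cost_with_pdai = expected_cost(0.753, p_detect=0.9, cost_managed=20000, cost_unmanaged=35000)
      print("value of PDAI per patient-year:", cost_without_pdai - cost_with_pdai)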

  7. Accurate diode behavioral model with reverse recovery

    Science.gov (United States)

    Banáš, Stanislav; Divín, Jan; Dobeš, Josef; Paňko, Václav

    2018-01-01

    This paper deals with the comprehensive behavioral model of p-n junction diode containing reverse recovery effect, applicable to all standard SPICE simulators supporting Verilog-A language. The model has been successfully used in several production designs, which require its full complexity, robustness and set of tuning parameters comparable with standard compact SPICE diode model. The model is like standard compact model scalable with area and temperature and can be used as a stand-alone diode or as a part of more complex device macro-model, e.g. LDMOS, JFET, bipolar transistor. The paper briefly presents the state of the art followed by the chapter describing the model development and achieved solutions. During precise model verification some of them were found non-robust or poorly converging and replaced by more robust solutions, demonstrated in the paper. The measurement results of different technologies and different devices compared with a simulation using the new behavioral model are presented as the model validation. The comparison of model validation in time and frequency domains demonstrates that the implemented reverse recovery effect with correctly extracted parameters improves the model simulation results not only in switching from ON to OFF state, which is often published, but also its impedance/admittance frequency dependency in GHz range. Finally the model parameter extraction and the comparison with SPICE compact models containing reverse recovery effect is presented.

  8. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    Science.gov (United States)

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.

  9. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
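
    A minimal sketch of the kind of eclipse-phase extension described above: the "standard" target-cell model with an extra compartment E of infected cells that are not yet producing virus. All parameter values are illustrative placeholders, not fitted SIV parameters.

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y, lam=1e4, d=0.01, beta=2e-7, k=1.0, delta=0.5, p=500.0, c=10.0):
          T, E, I, V = y
          dT = lam - d * T - beta * T * V      # target cells
          dE = beta * T * V - k * E            # infected, not yet producing virus
          dI = k * E - delta * I               # productively infected cells
          dV = p * I - c * V                   # free virus
          return [dT, dE, dI, dV]

      sol = solve_ivp(rhs, (0.0, 30.0), [1e6, 0.0, 0.0, 1.0])
      print(sol.y[3, -1])   # viral load at day 30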

  10. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures....

  11. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
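
    A toy illustration of the Bayesian rating step described above: candidate holdup configurations are weighted by how well their predicted detector readings match the measurements, assuming independent Gaussian measurement noise. The readings and noise level below are placeholders.

      import numpy as np

      def posterior_weights(predicted, measured, sigma, prior=None):
          predicted = np.asarray(predicted, dtype=float)   # shape (n_candidates, n_detectors)
          measured = np.asarray(measured, dtype=float)
          n = predicted.shape[0]
          prior = np.full(n, 1.0 / n) if prior is None else np.asarray(prior, dtype=float)
          log_like = -0.5 * np.sum(((predicted - measured) / sigma) ** 2, axis=1)
          w = prior * np.exp(log_like - log_like.max())
          return w / w.sum()                               # Bayes' theorem, normalized

      print(posterior_weights([[10.0, 5.0], [12.0, 6.0], [8.0, 4.0]], [11.0, 5.5], sigma=1.0))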

  12. An Accurate and Dynamic Computer Graphics Muscle Model

    Science.gov (United States)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  13. Cost Calculation Model for Logistics Service Providers

    Directory of Open Access Journals (Sweden)

    Zoltán Bokor

    2012-11-01

    The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within the logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in case of complex and heterogeneous logistics service structures. So this paper intends to explore the ways of improving the cost calculation regimes of logistics service providers and show how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested by using estimated input data. Based on the theoretical findings and the experiences of the pilot project it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which enhances the effectiveness of logistics planning and controlling significantly.

  14. Activity assays and immunoassays for plasma Renin and prorenin: information provided and precautions necessary for accurate measurement

    DEFF Research Database (Denmark)

    Campbell, Duncan J; Nussberger, Juerg; Stowasser, Michael

    2009-01-01

    … into focus the differences in information provided by activity assays and immunoassays for renin and prorenin measurement and has drawn attention to the need for precautions to ensure their accurate measurement. CONTENT: Renin activity assays and immunoassays provide related but different information … provided by these assays and of the precautions necessary to ensure their accuracy.

  15. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  16. Accurate modeling and evaluation of microstructures in complex materials

    Science.gov (United States)

    Tahmasebi, Pejman

    2018-02-01

    Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always plausible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on a successive calculating of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. A distance transform function for the Is with highly connected microstructure and long-range features is considered which results in a new I that is more informative. Reproduction of the I is also considered through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is and the similarities are quantified using various correlation functions.

  17. Compact and Accurate Turbocharger Modelling for Engine Control

    DEFF Research Database (Denmark)

    Sorenson, Spencer C; Hendricks, Elbert; Magnússon, Sigurjón

    2005-01-01

    With the current trend towards engine downsizing, the use of turbochargers to obtain extra engine power has become common. A great difficulty in the use of turbochargers is in the modelling of the compressor map. In general this is done by inserting the compressor map directly into the engine ECU (Engine Control Unit) as a table. This method uses a great deal of memory space and often requires on-line interpolation and thus a large amount of CPU time. In this paper a more compact, accurate and rapid method of dealing with the compressor modelling problem is presented and is applicable to all turbochargers with radial compressors for either Spark Ignition (SI) or diesel engines...

  18. Accurate Modeling of Buck Converters with Magnetic-Core Inductors

    DEFF Research Database (Denmark)

    Astorino, Antonio; Antonini, Giulio; Swaminathan, Madhavan

    2015-01-01

    In this paper, a modeling approach for buck converters with magnetic-core inductors is presented. Due to the high nonlinearity of magnetic materials, the frequency domain analysis of such circuits is not suitable for an accurate description of their behaviour. Hence, in this work, a time-domain model of buck converters with magnetic-core inductors in a Simulink® environment is proposed. As an example, the presented approach is used to simulate an eight-phase buck converter. The simulation results show that an unexpected system behaviour in terms of current ripple amplitude needs the inductor core...

  19. Can Raters with Reduced Job Descriptive Information Provide Accurate Position Analysis Questionnaire (PAQ) Ratings?

    Science.gov (United States)

    Friedman, Lee; Harvey, Robert J.

    1986-01-01

    Job-naive raters provided with job descriptive information made Position Analysis Questionnaire (PAQ) ratings which were validated against ratings of job analysts who were also job content experts. None of the reduced job descriptive information conditions enabled job-naive raters to obtain either acceptable levels of convergent validity with…

  20. Chewing simulation with a physically accurate deformable model.

    Science.gov (United States)

    Pascale, Andra Maria; Ruge, Sebastian; Hauth, Steffen; Kordaß, Bernd; Linsen, Lars

    2015-01-01

    Nowadays, CAD/CAM software is being used to compute the optimal shape and position of a new tooth model meant for a patient. With this possible future application in mind, we present in this article an independent and stand-alone interactive application that simulates the human chewing process and the deformation it produces in the food substrate. Chewing motion sensors are used to produce an accurate representation of the jaw movement. The substrate is represented by a deformable elastic model based on the linear finite element method, which preserves physical accuracy. Collision detection based on spatial partitioning is used to calculate the forces that are acting on the deformable model. Based on the calculated information, geometry elements are added to the scene to enhance the information available for the user. The goal of the simulation is to present a complete scene to the dentist, highlighting the points where the teeth came into contact with the substrate and giving information about how much force acted at these points, which therefore makes it possible to indicate whether the tooth is being used incorrectly in the mastication process. Real-time interactivity is desired and achieved within limits, depending on the complexity of the employed geometric models. The presented simulation is a first step towards the overall project goal of interactively optimizing tooth position and shape under the investigation of a virtual chewing process using real patient data (Fig 1).

  1. A new, accurate predictive model for incident hypertension.

    Science.gov (United States)

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-11-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based study, that is the Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension and compared the predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations [area under the receiver operating characteristic (AUC 0.76)]. The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension, which included innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.
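
    The validation metric quoted above (AUC) can be computed directly from predicted risks and observed outcomes; a small sketch with placeholder values follows.

      from sklearn.metrics import roc_auc_score

      # Placeholder 5-year predictions and observed incident hypertension (1 = developed hypertension).
      y_observed = [0, 0, 1, 0, 1, 1, 0, 1]
      y_predicted = [0.1, 0.3, 0.8, 0.2, 0.6, 0.9, 0.4, 0.7]
      print(roc_auc_score(y_observed, y_predicted))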

  2. Accurate modeling of defects in graphene transport calculations

    Science.gov (United States)

    Linhart, Lukas; Burgdörfer, Joachim; Libisch, Florian

    2018-01-01

    We present an approach for embedding defect structures modeled by density functional theory into large-scale tight-binding simulations. We extract local tight-binding parameters for the vicinity of the defect site using Wannier functions. In the transition region between the bulk lattice and the defect the tight-binding parameters are continuously adjusted to approach the bulk limit far away from the defect. This embedding approach allows for an accurate high-level treatment of the defect orbitals using as many as ten nearest neighbors while keeping a small number of nearest neighbors in the bulk to render the overall computational cost reasonable. As an example of our approach, we consider an extended graphene lattice decorated with Stone-Wales defects, flower defects, double vacancies, or silicon substitutes. We predict distinct scattering patterns mirroring the defect symmetries and magnitude that should be experimentally accessible.

  3. An Accurate Model of Mercury's Spin-Orbit Motion

    Science.gov (United States)

    Rambaux, Nicolas; Bois, Eric

    2005-01-01

    Our work deals with the physical and dynamical causes that induce librations around an equilibrium state defined by the 3:2 spin-orbit resonance of Mercury. In order to integrate the spin-orbit motion of Mercury we have used our gravitational model of the Solar System including the Moon's spin-orbit motion. This model, called SONYR (acronym of Spin-Orbit N-bodY Relativistic), was previously built by Bois, Journet and Vokrouhlicky in accordance with the requirements of the Lunar Laser Ranging observational accuracy. Using the model, we have identified and evaluated the main perturbations acting on the spin-orbit motion of Mercury such as the planetary interactions and the dynamical figure of the planet. Moreover, the complete rotation of Mercury exhibits two proper frequencies, namely 15.825 and 1089 years, and one secular variation of 271043 years. Besides, we have computed in the Hermean librations the impact of the variation of the greatest principal moment of inertia C/MR2 on the obliquity and on the libration in longitude (1.4 and 0.4 milliarcseconds, respectively, for an increase of 1% on the C/MR2 value). We think that these accurate relations are also significant and useful in the context of the two upcoming missions BepiColombo and MESSENGER.

  4. Measuring physical inactivity: do current measures provide an accurate view of "sedentary" video game time?

    Science.gov (United States)

    Fullerton, Simon; Taylor, Anne W; Dal Grande, Eleonora; Berry, Narelle

    2014-01-01

    Measures of screen time are often used to assess sedentary behaviour. Participation in activity-based video games (exergames) can contribute to estimates of screen time, as current practices of measuring it do not consider the growing evidence that playing exergames can provide light to moderate levels of physical activity. This study aimed to determine what proportion of time spent playing video games was actually spent playing exergames. Data were collected via a cross-sectional telephone survey in South Australia. Participants aged 18 years and above (n = 2026) were asked about their video game habits, as well as demographic and socioeconomic factors. In cases where children were in the household, the video game habits of a randomly selected child were also questioned. Overall, 31.3% of adults and 79.9% of children spend at least some time playing video games. Of these, 24.1% of adults and 42.1% of children play exergames, with these types of games accounting for a third of all time that adults spend playing video games and nearly 20% of children's video game time. A substantial proportion of time that would usually be classified as "sedentary" may actually be spent participating in light to moderate physical activity.

  5. Daily FOUR score assessment provides accurate prognosis of long-term outcome in out-of-hospital cardiac arrest.

    Science.gov (United States)

    Weiss, N; Venot, M; Verdonk, F; Chardon, A; Le Guennec, L; Llerena, M C; Raimbourg, Q; Taldir, G; Luque, Y; Fagon, J-Y; Guerot, E; Diehl, J-L

    2015-05-01

    The accurate prediction of outcome after out-of-hospital cardiac arrest (OHCA) is of major importance. The recently described Full Outline of UnResponsiveness (FOUR) is well adapted to mechanically ventilated patients and does not depend on verbal response. To evaluate the ability of FOUR assessed by intensivists to accurately predict outcome in OHCA. We prospectively identified patients admitted for OHCA with a Glasgow Coma Scale below 8. Neurological assessment was performed daily. Outcome was evaluated at 6 months using Glasgow-Pittsburgh Cerebral Performance Categories (GP-CPC). Eighty-five patients were included. At 6 months, 19 patients (22%) had a favorable outcome, GP-CPC 1-2, and 66 (78%) had an unfavorable outcome, GP-CPC 3-5. Compared to both brainstem responses at day 3 and evolution of Glasgow Coma Scale, evolution of FOUR score over the three first days was able to predict unfavorable outcome more precisely. Thus, absence of improvement or worsening from day 1 to day 3 of FOUR had 0.88 (0.79-0.97) specificity, 0.71 (0.66-0.76) sensitivity, 0.94 (0.84-1.00) PPV and 0.54 (0.49-0.59) NPV to predict unfavorable outcome. Similarly, the brainstem response of FOUR score at 0 evaluated at day 3 had 0.94 (0.89-0.99) specificity, 0.60 (0.50-0.70) sensitivity, 0.96 (0.92-1.00) PPV and 0.47 (0.37-0.57) NPV to predict unfavorable outcome. The absence of improvement or worsening from day 1 to day 3 of FOUR evaluated by intensivists provides an accurate prognosis of poor neurological outcome in OHCA. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
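
    The prognostic statistics quoted above follow from a two-by-two table of predicted versus observed outcome. The sketch below uses illustrative counts roughly consistent with the reported group sizes (66 unfavorable, 19 favorable outcomes), not the study's raw data.

      def test_metrics(tp, fp, fn, tn):
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }

      # tp = unfavorable outcomes flagged by the FOUR criterion, tn = favorable outcomes not flagged.
      print(test_metrics(tp=47, fp=2, fn=19, tn=17))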

  6. New process model proves accurate in tests on catalytic reformer

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar-Rodriguez, E.; Ancheyta-Juarez, J. (Inst. Mexicano del Petroleo, Mexico City (Mexico))

    1994-07-25

    A mathematical model has been devised to represent the process that takes place in a fixed-bed, tubular, adiabatic catalytic reforming reactor. Since its development, the model has been applied to the simulation of a commercial semiregenerative reformer. The development of mass and energy balances for this reformer led to a model that predicts both concentration and temperature profiles along the reactor. A comparison of the model's results with experimental data illustrates its accuracy at predicting product profiles. Simple steps show how the model can be applied to simulate any fixed-bed catalytic reformer.

  7. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
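
    A schematic sketch of the averaging step described above: two clusterings each assign every individual a probability of belonging to the affected phenotype, and the combined phenotype weights them by posterior model probabilities. The weights and probabilities below are placeholders; in practice the weights would come from the models' marginal likelihoods or an approximation to them.

      import numpy as np

      def average_phenotype(prob_model_a, prob_model_b, weight_a=0.6, weight_b=0.4):
          """Model-averaged probability that each individual carries the affected phenotype."""
          return weight_a * np.asarray(prob_model_a) + weight_b * np.asarray(prob_model_b)

      print(average_phenotype([0.9, 0.2, 0.7], [0.8, 0.4, 0.5]))   # latent class vs. grade of membership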

  8. Accurate wind farm development and operation. Advanced wake modelling

    Energy Technology Data Exchange (ETDEWEB)

    Brand, A.; Bot, E.; Ozdemir, H. [ECN Unit Wind Energy, P.O. Box 1, NL 1755 ZG Petten (Netherlands)]; Steinfeld, G.; Drueke, S.; Schmidt, M. [ForWind, Center for Wind Energy Research, Carl von Ossietzky Universitaet Oldenburg, D-26129 Oldenburg (Germany)]; Mittelmeier, N. [REpower Systems SE, D-22297 Hamburg (Germany)]

    2013-11-15

    The ability is demonstrated to calculate wind farm wakes on the basis of ambient conditions that were calculated with an atmospheric model. Specifically, comparisons are described between predicted and observed ambient conditions, and between power predictions from three wind farm wake models and power measurements, for a single and a double wake situation. The comparisons are based on performance indicators and test criteria, with the objective to determine the percentage of predictions that fall within a given range about the observed value. The Alpha Ventus site is considered, which consists of a wind farm with the same name and the met mast FINO1. Data from the 6 REpower wind turbines and the FINO1 met mast were employed. The atmospheric model WRF predicted the ambient conditions at the location and the measurement heights of the FINO1 mast. While the wind speed and the wind direction can be predicted reasonably well if sufficiently sized tolerances are employed, it is practically impossible to predict the ambient turbulence intensity and vertical shear. Three wind farm wake models predicted the individual turbine powers: FLaP-Jensen and FLaP-Ainslie from ForWind Oldenburg, and FarmFlow from ECN. The reliabilities of the FLaP-Ainslie and the FarmFlow wind farm wake models are of equal order, and higher than that of FLaP-Jensen. Any difference between the predictions from these models is most clear in the double wake situation. Here FarmFlow slightly outperforms FLaP-Ainslie.

  9. Velocity potential formulations of highly accurate Boussinesq-type models

    DEFF Research Database (Denmark)

    Bingham, Harry B.; Madsen, Per A.; Fuhrman, David R.

    2009-01-01

    … structures. Coast. Eng. 53, 929-945) are re-derived in a more general framework which establishes the correct relationship between the model in a velocity formulation and a velocity potential formulation. Although most work with this model has used the velocity formulation, the potential formulation is of interest because it reduces the computational effort by approximately a factor of two and facilitates a coupling to other potential flow solvers. A new shoaling enhancement operator is introduced to derive new models (in both formulations) with a velocity profile which is always consistent with the kinematic bottom boundary condition. The true behaviour of the velocity potential formulation with respect to linear shoaling is given for the first time, correcting errors made by Jamois et al. (Jamois, E., Fuhrman, D.R., Bingham, H.B., Molin, B., 2006. Wave-structure interactions and nonlinear wave …

  10. Double Layered Sheath in Accurate HV XLPE Cable Modeling

    DEFF Research Database (Denmark)

    Gudmundsdottir, Unnur Stella; Silva, J. De; Bak, Claus Leth

    2010-01-01

    This paper discusses modelling of high voltage AC underground cables. For long cables, when crossbonding points are present, not only the coaxial mode of propagation is excited during transient phenomena, but also the intersheath mode. This causes inaccurate simulation results for high frequency ...

  11. Accurate Simulation of Transient Landscape Evolution by Eliminating Numerical Diffusion: The TTLEM 1.0 Model

    Science.gov (United States)

    Govers, G.; Campforts, B.; Schwanghart, W.

    2016-12-01

    Landscape evolution models (LEM) allow studying the earth surface response to a changing climatic and tectonic forcing. While much effort has been devoted to the development of LEMs that simulate a wide range of processes, the numerical accuracy of these models has received much less attention. Most LEMs use first order accurate numerical methods that suffer from substantial numerical diffusion. Numerical diffusion particularly affects the solution of the advection equation and thus the simulation of retreating landforms such as cliffs and river knickpoints with potential unquantified consequences for the integrated response of the simulated landscape. Here we present TTLEM, a spatially explicit, raster based LEM for the study of fluvially eroding landscapes in TopoToolbox 2. TTLEM prevents numerical diffusion by implementing a higher order flux limiting total volume method that is total variation diminishing (TVD-TVM) and solves the partial differential equations of river incision and tectonic displacement. We show that the choice of the TVD-TVM to simulate river incision significantly influences the evolution of simulated landscapes and the spatial and temporal variability of catchment wide erosion rates. Furthermore, a 2D TVD-TVM accurately simulates the evolution of landscapes affected by lateral tectonic displacement, a process whose simulation is hitherto largely limited to LEMs with flexible spatial discretization. By providing accurate numerical schemes on rectangular grids, TTLEM is a widely accessible LEM that is compatible with GIS analysis functions from the TopoToolbox interface. The model code can be downloaded at: https://github.com/wschwanghart/topotoolbox
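
    A minimal sketch of a flux-limited, total-variation-diminishing advection step of the kind TTLEM uses to let knickpoints retreat without numerical diffusion. This is a generic one-dimensional minmod-limited scheme for illustration, not the TTLEM code itself.

      import numpy as np

      def tvd_step(u, a, dx, dt):
          """One explicit step of u_t + a*u_x = 0 with a > 0 and Courant number a*dt/dx <= 1."""
          c = a * dt / dx
          du = np.diff(u)                                   # u[i+1] - u[i] at each cell face
          upstream = np.concatenate(([0.0], du[:-1]))       # u[i] - u[i-1] at the same face
          r = np.divide(upstream, du, out=np.zeros_like(du), where=du != 0)
          phi = np.maximum(0.0, np.minimum(1.0, r))         # minmod flux limiter
          flux = a * u[:-1] + 0.5 * a * (1.0 - c) * phi * du
          unew = u.copy()
          unew[1:-1] -= (dt / dx) * (flux[1:] - flux[:-1])
          return unew

      x = np.linspace(0.0, 1.0, 200)
      z = np.where(x < 0.3, 1.0, 0.0)                       # a knickpoint-like step profile
      for _ in range(100):
          z = tvd_step(z, a=1.0, dx=x[1] - x[0], dt=0.002)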

  12. Artificial Intelligence for Constructing Accurate, Low-Cost Models and

    Science.gov (United States)

    2005-01-01

    models are a virtual representation of a robotic car constructed from the Lego™ MindStorms® Robotics Invention System. The baseline vehicle is shown in … sampled using a Lego® RCX computer coupled with a high accuracy voltage measuring interface from Lego Dacta®. Data were captured using LabVIEW® Software from …

  13. Can phenological models predict tree phenology accurately in the future? The unrevealed hurdle of endodormancy break.

    Science.gov (United States)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2016-10-01

    The onset of the growing season of trees has advanced by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species equatorward range limits, leading to a delay in, or even the impossibility of, flowering or setting new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing endodormancy break dates for the model parameterization results in much more accurate prediction of the latter, albeit with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore the strongest after 2050 in the southernmost regions. Our results call for the urgent need of massive measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. © 2016 John Wiley & Sons Ltd.

  14. Innovative technologies to accurately model waves and moored ship motions

    CSIR Research Space (South Africa)

    van der Molen, W

    2010-09-01

    … swells that could excite low-frequency ship motions. The paddles are driven by signal-generation software capable of creating short-crested waves with set-down compensation to simulate second-order boundary conditions, thereby forming the theoretical … mass and weight distribution. The vertical placement of blocks is calibrated such that the centre of gravity is at the correct height, while the horizontal placement is chosen such that the moments of inertia for pitch and roll are correct. The model …

  15. Accurate Antenna Models in Ground Penetrating Radar Diffraction Tomography

    DEFF Research Database (Denmark)

    Meincke, Peter; Kim, Oleksiy S.

    2002-01-01

    Linear inversion schemes based on the concept of diffraction tomography have proven successful for ground penetrating radar (GPR) imaging. In many GPR surveys, the antennas of the GPR are located close to the air-soil interface and, therefore, it is important to incorporate the presence of this interface … to investigate the validity of this model. We extend that formulation to hold for arbitrary antennas. For simplicity, the 2.5D case is considered, that is, it is assumed that the scattering object in the soil is invariant in one direction, which, for instance, is the case for a pipe. The arbitrary antennas...

  16. Fluorescence imaging in vivo: raster scanned point-source imaging provides more accurate quantification than broad beam geometries.

    Science.gov (United States)

    Pogue, Brian W; Gibbs, Summer L; Chen, Bin; Savellano, Mark

    2004-02-01

    Two fluorescence imaging systems were compared for their ability to quantify mean fluorescence intensity from surface-weighted imaging of tissue. A broad beam CCD camera system was compared to a point sampling system that raster scans to create the image. The effects of absorption and scattering in the background tissue volume were shown to be similar in their effect upon the signal, but the effect of the three-dimensional shape of the tissue was shown to be a significant distortion upon the signal. Spherical phantoms with Intralipid and blood for absorber and scatterer were used with a fixed concentration of aluminum phthalocyanine fluorophore to illustrate that the mean intensity observed with the broad beam system increased with size, while the mean intensity observed with the raster scanned system was not as significantly affected. Similar results were observed in vivo with mice injected with the fluorophore and imaged multiple times to observe the pharmacokinetics of the drug. The fluorescence in the tumor observed with the broad beam system was higher than that observed with the raster scanned system. Based upon the phantom and animal observations in this study, it should be concluded that using broad beam fluorescence imaging systems to quantify fluorescence in vivo may be problematic when comparing tissues with different three dimensional characteristics. In particular, the ratio of fluorescence from tumor to normal tissue can yield inaccurate results when the tumor is large. However, similar measurements with a narrow beam system that is raster scanned to create the images are not as significantly affected by the three dimensional shape of the tissue. Raster scanned imaging appears to provide a more uniform and accurate way to quantify fluorescence signals from distributed tissues in vivo.

  17. Microbiome Data Accurately Predicts the Postmortem Interval Using Random Forest Regression Models

    Directory of Open Access Journals (Sweden)

    Aeriel Belk

    2018-02-01

    Death investigations often include an effort to establish the postmortem interval (PMI) in cases in which the time of death is uncertain. The postmortem interval can lead to the identification of the deceased and the validation of witness statements and suspect alibis. Recent research has demonstrated that microbes provide an accurate clock that starts at death and relies on ecological change in the microbial communities that normally inhabit a body and its surrounding environment. Here, we explore how to build the most robust Random Forest regression models for prediction of PMI by testing models built on different sample types (gravesoil, skin of the torso, skin of the head), gene markers (16S ribosomal RNA (rRNA), 18S rRNA, internal transcribed spacer regions (ITS)), and taxonomic levels (sequence variants, species, genus, etc.). We also tested whether particular suites of indicator microbes were informative across different datasets. Generally, results indicate that the most accurate models for predicting PMI were built using gravesoil and skin data using the 16S rRNA genetic marker at the taxonomic level of phyla. Additionally, several phyla consistently contributed highly to model accuracy and may be candidate indicators of PMI.
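
    A small sketch of the modeling approach described above using scikit-learn: a Random Forest regressor trained on taxon abundances to predict the PMI, evaluated by cross-validation. The data below are synthetic placeholders standing in for phylum-level relative abundances.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.random((60, 20))            # 60 samples x 20 taxon abundances (synthetic)
      y = rng.uniform(0, 25, size=60)     # postmortem interval in days (synthetic)

      model = RandomForestRegressor(n_estimators=500, random_state=0)
      scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
      print("cross-validated MAE (days):", -scores.mean())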

  18. Accurate analytical modeling of junctionless DG-MOSFET by Green's function approach

    Science.gov (United States)

    Nandi, Ashutosh; Pandey, Nilesh

    2017-11-01

    An accurate analytical model of Junctionless double gate MOSFET (JL-DG-MOSFET) in the subthreshold regime of operation is developed in this work using Green's function approach. The approach considers 2-D mixed boundary conditions and multi-zone techniques to provide an exact analytical solution to 2-D Poisson's equation. The Fourier coefficients are calculated correctly to derive the potential equations that are further used to model the channel current and subthreshold slope of the device. The threshold voltage roll-off is computed from parallel shifts of Ids-Vgs curves between the long channel and short-channel devices. It is observed that the Green's function approach of solving 2-D Poisson's equation in both oxide and silicon region can accurately predict channel potential, subthreshold current (Isub), threshold voltage (Vt) roll-off and subthreshold slope (SS) of both long & short channel devices designed with different doping concentrations and higher as well as lower tsi/tox ratio. All the analytical model results are verified through comparisons with TCAD Sentaurus simulation results. It is observed that the model matches quite well with TCAD device simulations.

  19. BEYOND ELLIPSE(S): ACCURATELY MODELING THE ISOPHOTAL STRUCTURE OF GALAXIES WITH ISOFIT AND CMODEL

    Energy Technology Data Exchange (ETDEWEB)

    Ciambur, B. C., E-mail: bciambur@swin.edu.au [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Hawthorn, VIC 3122 (Australia)

    2015-09-10

    This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit,” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen to be representative case-studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxyness/diskyness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of over-lapping sources such as globular clusters and the optical counterparts of X-ray sources.

  20. An automatic and accurate method of full heart segmentation from CT image based on linear gradient model

    Science.gov (United States)

    Yang, Zili

    2017-07-01

    Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most of the existing methods for full heart segmentation treat the heart as a whole part and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on linear gradient model to segment the whole heart from the CT images automatically and accurately. Twelve cases were tested in order to test this method and accurate segmentation results were achieved and identified by clinical experts. The results can provide reliable clinical support.

  1. A Study of the DeLone & McLean Information System Success Model among Users of the Accurate Accounting Information System in Sukabumi City

    OpenAIRE

    Hudin, Jamal Maulana; Riana, Dwiza

    2016-01-01

    The Accurate accounting information system is one of the accounting information systems used in six companies in the city of Sukabumi. The DeLone and McLean information system success model is a suitable model to measure the success of the application of information systems in an organization or company. This study analyzes the factors of the DeLone & McLean information system success model among users of the Accurate accounting information system in six companies in the city of Sukabumi. ...

  2. Development of an accurate 3D Monte Carlo broadband atmospheric radiative transfer model

    Science.gov (United States)

    Jones, Alexandra L.

    Radiation is the ultimate source of energy that drives our weather and climate. It is also the fundamental quantity detected by satellite sensors from which earth's properties are inferred. Radiative energy from the sun and emitted from the earth and atmosphere is redistributed by clouds in one of their most important roles in the atmosphere. Without accurately representing these interactions we greatly decrease our ability to successfully predict climate change, weather patterns, and to observe our environment from space. The remote sensing algorithms and dynamic models used to study and observe earth's atmosphere all parameterize radiative transfer with approximations that reduce or neglect horizontal variation of the radiation field, even in the presence of clouds. Despite having complete knowledge of the underlying physics at work, these approximations persist due to perceived computational expense. In the current context of high resolution modeling and remote sensing observations of clouds, from shallow cumulus to deep convective clouds, and given our ever advancing technological capabilities, these approximations have been exposed as inappropriate in many situations. This presents a need for accurate 3D spectral and broadband radiative transfer models to provide bounds on the interactions between clouds and radiation to judge the accuracy of similar but less expensive models and to aid in new parameterizations that take into account 3D effects when coupled to dynamic models of the atmosphere. Developing such a state of the art model based on the open source, object-oriented framework of the I3RC Monte Carlo Community Radiative Transfer ("IMC-original") Model is the task at hand. It has involved incorporating (1) thermal emission sources of radiation ("IMC+emission model"), allowing it to address remote sensing problems involving scattering of light emitted at earthly temperatures as well as spectral cooling rates, (2) spectral integration across an arbitrary

  3. Improving DOE-2's RESYS routine: User defined functions to provide more accurate part load energy use and humidity predictions

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, Hugh I.; Parker, Danny; Huang, Yu J.

    2000-08-04

In hourly energy simulations, it is important to properly predict the performance of air conditioning systems over a range of full and part load operating conditions. An important component of these calculations is to properly consider the performance of the cycling air conditioner and how it interacts with the building. This paper presents improved approaches to properly account for the part load performance of residential and light commercial air conditioning systems in DOE-2. First, more accurate correlations are given to predict the degradation of system efficiency at part load conditions. In addition, a user-defined function for RESYS is developed that provides improved predictions of air conditioner sensible and latent capacity at part load conditions. The user function also provides more accurate predictions of space humidity by adding "lumped" moisture capacitance into the calculations. The improved cooling coil model and the addition of moisture capacitance predict humidity swings that are more representative of the performance observed in real buildings.
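For orientation, the sketch below shows the classic linear part-load-factor correlation that hourly simulation tools of this kind commonly start from; it is not the improved correlation developed in the paper, and the function names, the degradation coefficient (Cd = 0.25) and the example numbers are illustrative assumptions only.

```python
# Hypothetical sketch of a simple part-load degradation correlation of the kind
# the paper improves on; the classic linear form is
#   PLF = 1 - Cd * (1 - PLR)
# where PLR is the part-load ratio and Cd a degradation coefficient.

def part_load_factor(plr: float, cd: float = 0.25) -> float:
    """Efficiency multiplier for a cycling air conditioner at part load."""
    plr = min(max(plr, 0.0), 1.0)           # clamp to the physical range [0, 1]
    return 1.0 - cd * (1.0 - plr)

def part_load_power(load: float, capacity: float, rated_power: float) -> float:
    """Electric power draw when meeting `load` with a unit of `capacity`."""
    plr = load / capacity
    plf = part_load_factor(plr)
    return rated_power * plr / plf           # efficiency degrades because PLF < 1

# Example: a 10 kW unit meeting a 3 kW load draws more than 30% of rated power.
print(round(part_load_power(load=3.0, capacity=10.0, rated_power=3.5), 2))
```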

  4. Fractional Order Modeling of Atmospheric Turbulence - A More Accurate Modeling Methodology for Aero Vehicles

    Science.gov (United States)

    Kopasakis, George

    2014-01-01

The presentation covers a recently developed methodology to model atmospheric turbulence as disturbances for aero vehicle gust loads and for controls development such as flutter and inlet shock position. The approach models atmospheric turbulence in its natural fractional-order form, which provides more accuracy than traditional methods like the Dryden model, especially for high-speed vehicles. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles. This is followed by the motivation and the methodology utilized to develop the atmospheric turbulence fractional order modeling approach. Some examples covering the application of this method are also provided, followed by concluding remarks.

  5. An efficient and accurate representation of complex oceanic and biospheric models of anthropogenic carbon uptake

    Science.gov (United States)

    Joos, Fortunat; Bruno, Michele; Fink, Roger; Siegenthaler, Ulrich; Stocker, Thomas F.; Le Quéré, Corinne; Sarmiento, Jorge L.

    1996-07-01

Establishing the link between atmospheric CO2 concentration and anthropogenic carbon emissions requires the development of complex carbon cycle models of the primary sinks, the ocean and terrestrial biosphere. Once such models have been developed, the potential exists to use pulse response functions to characterize their behaviour. However, the application of response functions based on a pulse increase in atmospheric CO2 to characterize oceanic uptake, the conventional technique, does not yield a very accurate result due to nonlinearities in the aquatic carbon chemistry. Here, we propose the use of an ocean mixed-layer pulse response function that characterizes the surface to deep ocean mixing in combination with a separate equation describing air-sea exchange. The use of a mixed-layer pulse response function avoids the problem arising from the nonlinearities of the carbon chemistry and therefore gives more accurate results. The response function is also valid for tracers other than carbon. We found that tracer uptake of the HILDA and Box-Diffusion model can be represented exactly by the new method. For the Princeton 3-D model, we find that the agreement between the complete model and its pulse substitute is better than 4% for the cumulative uptake of anthropogenic carbon for the period 1765-2300 applying the IPCC stabilization scenarios S450 and S750, and better than 2% for the simulated inventory and surface concentration of bomb-produced radiocarbon. By contrast, the use of atmospheric response functions gives deviations up to 73% for the cumulative CO2 uptake as calculated with the Princeton 3-D model. We introduce the use of a decay response function for calculating the potential carbon storage on land as a substitute for terrestrial biosphere models that describe the overturning of assimilated carbon. This, in combination with an equation describing the net primary productivity, permits us to exactly characterize simple biosphere models. As the time scales of
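A minimal sketch of the substitute-model idea, under strong simplifying assumptions: deep-ocean mixing is summarized by a toy mixed-layer pulse response r(t) while air-sea exchange stays an explicit separate equation. The response function, exchange rate and atmospheric forcing below are placeholders, not the HILDA, Box-Diffusion or Princeton 3-D values, and the carbonate-chemistry conversion from mixed-layer carbon to surface pCO2 is skipped.

```python
import numpy as np

# Toy mixed-layer pulse-response substitute model: the surface perturbation is
# the convolution of past air-sea fluxes with r(t); the flux itself is computed
# each step from a separate air-sea exchange equation. All numbers are
# illustrative placeholders.

dt = 1.0                                                 # time step (yr)
t = np.arange(0.0, 500.0, dt)
r = 0.3 * np.exp(-t / 2.0) + 0.7 * np.exp(-t / 150.0)    # toy mixed-layer response
k_gas = 0.1                                              # air-sea exchange rate (1/yr)
atm = np.linspace(0.0, 280.0, t.size)                    # toy atmospheric perturbation

fluxes, surface = [], 0.0
for i in range(t.size):
    fluxes.append(k_gas * (atm[i] - surface))            # air-sea exchange equation
    # surface perturbation = convolution of past fluxes with the pulse response
    surface = float(np.sum(np.array(fluxes) * r[:i + 1][::-1]) * dt)

print(f"cumulative toy uptake: {sum(fluxes) * dt:.1f} (arbitrary units)")
```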

  6. A different interpretation of Einstein's viscosity equation provides accurate representations of the behavior of hydrophilic solutes to high concentrations.

    Science.gov (United States)

    Zavitsas, Andreas A

    2012-08-23

Viscosities of aqueous solutions of many highly soluble hydrophilic solutes with hydroxyl and amino groups are examined with a focus on improving the concentration range over which Einstein's relationship between solution viscosity and solute volume, V, is applicable accurately. V is the hydrodynamic effective volume of the solute, including any water strongly bound to it and acting as a single entity with it. The widespread practice is to relate the relative viscosity of solution to solvent, η/η₀, to V/Vtot, where Vtot is the total volume of the solution. For solutions that are not infinitely dilute, it is shown that the volume ratio must be expressed as V/V₀, where V₀ = Vtot − V. V₀ is the volume of water not bound to the solute, the "free" water solvent. At infinite dilution, V/V₀ = V/Vtot. For the solutions examined, the proportionality constant between the relative viscosity and volume ratio is shown to be 2.9, rather than the 2.5 commonly used. To understand the phenomena relating to viscosity, the hydrodynamic effective volume of water is important. It is estimated to be between 54 and 85 cm³. With the above interpretations of Einstein's equation, which are consistent with his stated reasoning, the relation between the viscosity and volume ratio remains accurate to much higher concentrations than those attainable with any of the other relations examined that express the volume ratio as V/Vtot.
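The reinterpreted relation is simple enough to state directly; the sketch below evaluates it with illustrative volumes (the coefficient 2.9 and the ratio V/V₀ follow the abstract, everything else is an assumption for the example).

```python
# Minimal sketch of the reinterpreted Einstein relation described above:
#   eta / eta0 = 1 + 2.9 * V / V0,   with V0 = Vtot - V
# V is the hydrodynamic effective solute volume (including bound water) and
# V0 the volume of "free" solvent water. The volumes below are illustrative.

def relative_viscosity(v_solute: float, v_total: float, coeff: float = 2.9) -> float:
    v_free = v_total - v_solute              # free-water volume V0
    return 1.0 + coeff * (v_solute / v_free)

# At infinite dilution V/V0 -> V/Vtot, recovering the usual form.
print(round(relative_viscosity(v_solute=10.0, v_total=100.0), 3))   # ~1.322
```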

  7. Integrative structural annotation of de novo RNA-Seq provides an accurate reference gene set of the enormous genome of the onion (Allium cepa L.).

    Science.gov (United States)

    Kim, Seungill; Kim, Myung-Shin; Kim, Yong-Min; Yeom, Seon-In; Cheong, Kyeongchae; Kim, Ki-Tae; Jeon, Jongbum; Kim, Sunggil; Kim, Do-Sun; Sohn, Seong-Han; Lee, Yong-Hwan; Choi, Doil

    2015-02-01

    The onion (Allium cepa L.) is one of the most widely cultivated and consumed vegetable crops in the world. Although a considerable amount of onion transcriptome data has been deposited into public databases, the sequences of the protein-coding genes are not accurate enough to be used, owing to non-coding sequences intermixed with the coding sequences. We generated a high-quality, annotated onion transcriptome from de novo sequence assembly and intensive structural annotation using the integrated structural gene annotation pipeline (ISGAP), which identified 54,165 protein-coding genes among 165,179 assembled transcripts totalling 203.0 Mb by eliminating the intron sequences. ISGAP performed reliable annotation, recognizing accurate gene structures based on reference proteins, and ab initio gene models of the assembled transcripts. Integrative functional annotation and gene-based SNP analysis revealed a whole biological repertoire of genes and transcriptomic variation in the onion. The method developed in this study provides a powerful tool for the construction of reference gene sets for organisms based solely on de novo transcriptome data. Furthermore, the reference genes and their variation described here for the onion represent essential tools for molecular breeding and gene cloning in Allium spp. © The Author 2014. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  8. Modeling Site Heterogeneity with Posterior Mean Site Frequency Profiles Accelerates Accurate Phylogenomic Estimation.

    Science.gov (United States)

    Wang, Huai-Chun; Minh, Bui Quang; Susko, Edward; Roger, Andrew J

    2018-03-01

Proteins have distinct structural and functional constraints at different sites that lead to site-specific preferences for particular amino acid residues as the sequences evolve. Heterogeneity in the amino acid substitution process between sites is not modeled by commonly used empirical amino acid exchange matrices. Such model misspecification can lead to artefacts in phylogenetic estimation such as long-branch attraction. Although sophisticated site-heterogeneous mixture models have been developed to address this problem in both Bayesian and maximum likelihood (ML) frameworks, their formidable computational time and memory usage severely limits their use in large phylogenomic analyses. Here we propose a posterior mean site frequency (PMSF) method as a rapid and efficient approximation to full empirical profile mixture models for ML analysis. The PMSF approach assigns a conditional mean amino acid frequency profile to each site calculated based on a mixture model fitted to the data using a preliminary guide tree. These PMSF profiles can then be used for in-depth tree-searching in place of the full mixture model. Compared with widely used empirical mixture models with k classes, our implementation of PMSF in IQ-TREE (http://www.iqtree.org) speeds up the computation by approximately k/1.5-fold and requires a small fraction of the RAM. Furthermore, this speedup allows, for the first time, full nonparametric bootstrap analyses to be conducted under complex site-heterogeneous models on large concatenated data matrices. Our simulations and empirical data analyses demonstrate that PMSF can effectively ameliorate long-branch attraction artefacts. In some empirical and simulation settings PMSF provided more accurate estimates of phylogenies than the mixture models from which they derive.
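A minimal sketch of the posterior-mean profile computation as described above: each site receives a weighted average of the class frequency profiles, weighted by the posterior probability of each class at that site. The per-site class likelihoods would normally come from the guide-tree fit; here they are random placeholders, and the array sizes are arbitrary.

```python
import numpy as np

# Hedged sketch of the PMSF idea: one fixed amino-acid frequency profile per
# alignment site, equal to the posterior mean over the mixture classes.
K, n_states, n_sites = 4, 20, 100
rng = np.random.default_rng(0)
class_profiles = rng.dirichlet(np.ones(n_states), size=K)   # K amino-acid profiles
class_weights = np.full(K, 1.0 / K)                          # mixture weights
site_likelihoods = rng.random((n_sites, K))                  # placeholder L(site | class)

post = class_weights * site_likelihoods                      # unnormalized posteriors
post /= post.sum(axis=1, keepdims=True)                      # P(class | site)
pmsf_profiles = post @ class_profiles                        # one profile per site

print(pmsf_profiles.shape)   # (100, 20): fixed per-site profiles for fast ML search
```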

  9. An accurate analytical solution of a zero-dimensional greenhouse model for global warming

    International Nuclear Information System (INIS)

    Foong, S K

    2006-01-01

In introducing the complex subject of global warming, books and papers usually use the zero-dimensional greenhouse model. When the ratio of the infrared radiation energy of the Earth's surface that is lost to outer space to the non-reflected average solar radiation energy is small, the model admits an accurate approximate analytical solution (the resulting energy balance equation of the model is a quartic equation that can be solved analytically) and thus provides an alternative solution and instructional strategy. A search through the literature fails to find an analytical solution, suggesting that the solution may be new. In this paper, we review the model, derive the approximation and obtain its solution. The dependence of the temperature of the surface of the Earth and the temperature of the atmosphere on seven parameters is made explicit. A simple and convenient formula for global warming (or cooling) in terms of the percentage change of the parameters is derived. The dependence of the surface temperature on the parameters is illustrated by several representative graphs
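For context, the sketch below solves a textbook one-layer version of the zero-dimensional greenhouse balance numerically; it is not necessarily the exact formulation or parameter set used in the paper, and the infrared absorptivity value is an illustrative assumption.

```python
# Hedged sketch of a textbook zero-dimensional (one-layer) greenhouse balance.
# With an atmosphere absorbing a fraction eps of surface infrared, the balance
#   sigma * Ts**4 * (1 - eps / 2) = (1 - albedo) * S / 4
# is the kind of quartic relation the paper treats analytically.

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0             # solar constant, W m^-2
albedo = 0.30          # planetary albedo
eps = 0.78             # illustrative infrared absorptivity of the atmosphere

absorbed = (1.0 - albedo) * S / 4.0
Ts = (absorbed / (SIGMA * (1.0 - eps / 2.0))) ** 0.25   # surface temperature
Ta = Ts / 2.0 ** 0.25                                   # one-layer atmosphere temperature

print(round(Ts, 1), round(Ta, 1))   # roughly 288 K and 242 K
```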

  10. Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware

    Directory of Open Access Journals (Sweden)

    Glenn Daneels

    2018-02-01

The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks.

  11. Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-BandOpenMote Hardware.

    Science.gov (United States)

    Daneels, Glenn; Municio, Esteban; Van de Velde, Bruno; Ergeerts, Glenn; Weyn, Maarten; Latré, Steven; Famaey, Jeroen

    2018-02-02

    The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks.
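A minimal sketch of the state-based accounting such a model performs: total slot energy is the sum over CPU and radio states of time in state times current draw times supply voltage. The state list, durations and currents below are placeholders, not the measured OpenMote values.

```python
# Hedged sketch of a state-based TSCH energy model: energy per time slot is the
# sum of (time in state) x (current draw) x (supply voltage) over all states.
# All durations and currents are illustrative placeholders.

VOLTAGE = 3.0   # assumed supply voltage (V)

# state name: (time per slot in seconds, current in amperes)
states = {
    "cpu_active":  (0.001, 0.010),
    "tx_data":     (0.004, 0.024),
    "rx_ack":      (0.002, 0.020),
    "idle_listen": (0.003, 0.020),
    "sleep":       (0.006, 0.0000013),
}

def slot_energy(slot_states):
    """Energy in joules consumed during one time slot."""
    return sum(t * i * VOLTAGE for t, i in slot_states.values())

print(f"{slot_energy(states) * 1e6:.1f} uJ per slot")
```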

  12. Production of Accurate Skeletal Models of Domestic Animals Using Three-Dimensional Scanning and Printing Technology

    Science.gov (United States)

    Li, Fangzheng; Liu, Chunying; Song, Xuexiong; Huan, Yanjun; Gao, Shansong; Jiang, Zhongling

    2018-01-01

    Access to adequate anatomical specimens can be an important aspect in learning the anatomy of domestic animals. In this study, the authors utilized a structured light scanner and fused deposition modeling (FDM) printer to produce highly accurate animal skeletal models. First, various components of the bovine skeleton, including the femur, the…

  13. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Directory of Open Access Journals (Sweden)

    Tweya Hannock

    2012-07-01

Background: Routine monitoring of patients on antiretroviral therapy (ART) is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records as compared to estimates of retention based on standardized paper- or electronic-based cohort reports. Methods: Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter with intent to estimate the number of patients retained on ART. Information on the time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results: Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART when comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among

  14. Random generalized linear model: a highly accurate and interpretable ensemble predictor.

    Science.gov (United States)

    Song, Lin; Langfelder, Peter; Horvath, Steve

    2013-01-16

Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable, especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have received little attention in the literature. Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a "thinned" ensemble predictor (involving few features) that retains excellent predictive accuracy. RGLM is a state-of-the-art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM.
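The sketch below illustrates an RGLM-style ensemble in Python (the authors' implementation is the randomGLM R package, not this): bagging plus a random feature subspace plus forward selection of a GLM per bag, with predictions averaged across bags. The dataset, bag count and subspace size are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# Hedged sketch of a bagged, forward-selected GLM ensemble (RGLM-style).
X, y = make_classification(n_samples=300, n_features=40, n_informative=5, random_state=0)
rng = np.random.default_rng(0)
n_bags, subspace = 20, 15
models = []

for _ in range(n_bags):
    rows = rng.integers(0, len(y), len(y))                   # bootstrap sample
    cols = rng.choice(X.shape[1], subspace, replace=False)   # random feature subspace
    glm = LogisticRegression(max_iter=1000)
    sel = SequentialFeatureSelector(glm, n_features_to_select=5, direction="forward")
    sel.fit(X[np.ix_(rows, cols)], y[rows])                  # forward variable selection
    glm.fit(sel.transform(X[np.ix_(rows, cols)]), y[rows])
    models.append((cols, sel, glm))

# Ensemble prediction: average the per-bag predicted probabilities.
proba = np.mean([m.predict_proba(s.transform(X[:, c]))[:, 1] for c, s, m in models], axis=0)
print(f"training accuracy: {np.mean((proba > 0.5) == y):.2f}")
```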

  15. Measuring Physical Inactivity: Do Current Measures Provide an Accurate View of “Sedentary” Video Game Time?

    Directory of Open Access Journals (Sweden)

    Simon Fullerton

    2014-01-01

Background. Measures of screen time are often used to assess sedentary behaviour. Participation in activity-based video games (exergames) can contribute to estimates of screen time, as current practices of measuring it do not consider the growing evidence that playing exergames can provide light to moderate levels of physical activity. This study aimed to determine what proportion of time spent playing video games was actually spent playing exergames. Methods. Data were collected via a cross-sectional telephone survey in South Australia. Participants aged 18 years and above (n=2026) were asked about their video game habits, as well as demographic and socioeconomic factors. In cases where children were in the household, the video game habits of a randomly selected child were also questioned. Results. Overall, 31.3% of adults and 79.9% of children spend at least some time playing video games. Of these, 24.1% of adults and 42.1% of children play exergames, with these types of games accounting for a third of all time that adults spend playing video games and nearly 20% of children’s video game time. Conclusions. A substantial proportion of time that would usually be classified as “sedentary” may actually be spent participating in light to moderate physical activity.

  16. Accurate mitochondrial DNA sequencing using off-target reads provides a single test to identify pathogenic point mutations.

    Science.gov (United States)

    Griffin, Helen R; Pyle, Angela; Blakely, Emma L; Alston, Charlotte L; Duff, Jennifer; Hudson, Gavin; Horvath, Rita; Wilson, Ian J; Santibanez-Koref, Mauro; Taylor, Robert W; Chinnery, Patrick F

    2014-12-01

Mitochondrial disorders are a common cause of inherited metabolic disease and can be due to mutations affecting mitochondrial DNA or nuclear DNA. The current diagnostic approach involves the targeted resequencing of mitochondrial DNA and candidate nuclear genes, usually proceeds step by step, and is time consuming and costly. Recent evidence suggests that variations in mitochondrial DNA sequence can be obtained from whole-exome sequence data, raising the possibility of a comprehensive single diagnostic test to detect pathogenic point mutations. We compared the mitochondrial DNA sequence derived from off-target exome reads with conventional mitochondrial DNA Sanger sequencing in 46 subjects. Mitochondrial DNA sequences can be reliably obtained using three different whole-exome sequence capture kits. Coverage correlates with the relative amount of mitochondrial DNA in the original genomic DNA sample, heteroplasmy levels can be determined using variant and total read depths, and, provided there is a minimum read depth of 20-fold, rare sequencing errors occur at a rate similar to that observed with conventional Sanger sequencing. This offers the prospect of using whole-exome sequence in a diagnostic setting to screen not only all protein coding nuclear genes but also all mitochondrial DNA genes for pathogenic mutations. Off-target mitochondrial DNA reads can also be used to assess quality control and maternal ancestry, inform on ethnic origin, and allow genetic disease association studies not previously anticipated with existing whole-exome data sets.
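A minimal sketch of the heteroplasmy estimate described above, assuming variant and total read depths at a position are already available and applying the 20-fold coverage threshold; the function and variable names are illustrative.

```python
# Hedged sketch: heteroplasmy as the fraction of reads carrying the variant
# allele, reported only where total read depth reaches the 20-fold threshold.

def heteroplasmy(variant_depth: int, total_depth: int, min_depth: int = 20):
    """Return the heteroplasmy fraction, or None if coverage is insufficient."""
    if total_depth < min_depth:
        return None
    return variant_depth / total_depth

print(heteroplasmy(variant_depth=150, total_depth=500))   # 0.3
```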

  17. Accurate Modeling of a Transverse Flux Permanent Magnet Generator Using 3D Finite Element Analysis

    DEFF Research Database (Denmark)

    Hosseini, Seyedmohsen; Moghani, Javad Shokrollahi; Jensen, Bogi Bech

    2011-01-01

    This paper presents an accurate modeling method that is applied to a single-sided outer-rotor transverse flux permanent magnet generator. The inductances and the induced electromotive force for a typical generator are calculated using the magnetostatic three-dimensional finite element method. A n...

  18. In-situ measurements of material thermal parameters for accurate LED lamp thermal modelling

    NARCIS (Netherlands)

    Vellvehi, M.; Perpina, X.; Jorda, X.; Werkhoven, R.J.; Kunen, J.M.G.; Jakovenko, J.; Bancken, P.; Bolt, P.J.

    2013-01-01

    This work deals with the extraction of key thermal parameters for accurate thermal modelling of LED lamps: air exchange coefficient around the lamp, emissivity and thermal conductivity of all lamp parts. As a case study, an 8W retrofit lamp is presented. To assess simulation results, temperature is

  19. ACCURATE UNIVERSAL MODELS FOR THE MASS ACCRETION HISTORIES AND CONCENTRATIONS OF DARK MATTER HALOS

    International Nuclear Information System (INIS)

    Zhao, D. H.; Jing, Y. P.; Mo, H. J.; Boerner, G.

    2009-01-01

A large body of observations has constrained cosmological parameters and the initial density fluctuation spectrum to a very high accuracy. However, cosmological parameters change with time and the power index of the power spectrum dramatically varies with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the universe age when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass, when

  20. Accurate temperature model for absorptance determination of optical components with laser calorimetry.

    Science.gov (United States)

    Wang, Yanru; Li, Bincheng

    2011-03-20

    In the international standard (International Organization for Standardization 11551) for measuring the absorptance of optical components (i.e., laser calorimetry), the absorptance is obtained by fitting the temporal behavior of laser irradiation-induced temperature rise to a homogeneous temperature model in which the infinite thermal conductivity of the sample is assumed. In this paper, an accurate temperature model, in which both the finite thermal conductivity and size of the sample are taken into account, is developed to fit the experimental temperature data for a more precise determination of the absorptance. The difference and repeatability of the results fitted with the two theoretical models for the same experimental data are compared. The optimum detection position when the homogeneous model is employed in the data-fitting procedure is also analyzed with the accurate temperature model. The results show that the optimum detection location optimized for a wide thermal conductivity range of 0.2-50W/m·K moves toward the center of the sample as the sample thickness increases and deviates from the center as the radius and irradiation time increase. However, if the detection position is optimized for an individual sample with known sample size and thermal conductivity by applying the accurate temperature model, the influence of the finite thermal conductivity and sample size on the absorptance determination can be fully compensated for by fitting the temperature data recorded at the optimum detection position to the homogeneous temperature model.

  1. A new model for the accurate calculation of natural gas viscosity

    OpenAIRE

    Xiaohong Yang; Shunxi Zhang; Weiling Zhu

    2017-01-01

Viscosity of natural gas is a basic and important parameter, of theoretical and practical significance in the domain of natural gas recovery, transmission and processing. In order to obtain accurate viscosity data efficiently and at low cost, a new model and its corresponding functional relation are derived on the basis of the relationship among viscosity, temperature and density given by the kinetic theory of gases. After the model parameters were optimized using a lot of experimental ...

  2. Accurate Cure Modeling for Isothermal Processing of Fast Curing Epoxy Resins

    Directory of Open Access Journals (Sweden)

    Alexander Bernath

    2016-11-01

In this work a holistic approach for the characterization and mathematical modeling of the reaction kinetics of a fast epoxy resin is shown. Major composite manufacturing processes like resin transfer molding involve isothermal curing at temperatures far below the ultimate glass transition temperature. Hence, premature vitrification occurs during curing and consequently has to be taken into account by the kinetic model. In order to show the benefit of using a complex kinetic model, the Kamal-Malkin kinetic model is compared to the Grindling kinetic model in terms of prediction quality for isothermal processing. Of the selected models, only the Grindling kinetic model is capable of taking vitrification into account. Non-isothermal, isothermal and combined differential scanning calorimetry (DSC) measurements are conducted and processed for subsequent use in model parametrization. In order to demonstrate which DSC measurements are vital for proper cure modeling, both models are fitted to varying sets of measurements. Special attention is given to the evaluation of isothermal DSC measurements, which are subject to deviations arising from unrecorded cross-linking prior to the beginning of the measurement as well as from physical aging effects. It is found that isothermal measurements are vital for accurate modeling of isothermal cure and cannot be neglected. Accurate cure predictions are achieved using the Grindling kinetic model.
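For readers unfamiliar with the model families being compared, the sketch below integrates a generic autocatalytic cure law of the Kamal type for an isothermal hold; the rate constants and exponents are placeholders rather than a fit to this epoxy system, and the vitrification handling that distinguishes the Grindling model is deliberately omitted.

```python
import numpy as np

# Hedged sketch of an autocatalytic cure-kinetics law of the Kamal type,
#   d(alpha)/dt = (k1 + k2 * alpha**m) * (1 - alpha)**n,
# integrated with explicit Euler steps over an isothermal hold. All parameter
# values are illustrative placeholders at a single temperature.

def cure_rate(alpha, k1, k2, m, n):
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

k1, k2, m, n = 1e-4, 5e-3, 1.2, 1.8     # placeholder rate constants and exponents
dt, t_end = 1.0, 3600.0                 # 1 s steps over a one-hour isothermal hold
alpha = 0.0
for _ in np.arange(0.0, t_end, dt):
    alpha = min(alpha + cure_rate(alpha, k1, k2, m, n) * dt, 1.0)

print(f"degree of cure after 1 h: {alpha:.2f}")
```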

  3. Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System

    Energy Technology Data Exchange (ETDEWEB)

    Goldhaber, Steve [National Center for Atmospheric Research, Boulder, CO (United States); Holland, Marika [National Center for Atmospheric Research, Boulder, CO (United States)

    2017-09-05

    The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.

  4. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    OpenAIRE

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O

    2013-01-01

eLife digest: Our bodies, especially our skin, our saliva, the lining of our mouth and our gastrointestinal tract, are home to a diverse collection of bacteria and other microorganisms called the microbiome. While the roles played by many of these microorganisms have yet to be identified, it is known that they contribute to the health and wellbeing of their host by metabolizing indigestible compounds, producing essential vitamins, and preventing the growth of harmful bacteria. They are important...

  5. Simple, fast and accurate two-diode model for photovoltaic modules

    Energy Technology Data Exchange (ETDEWEB)

    Ishaque, Kashif; Salam, Zainal; Taheri, Hamed [Faculty of Electrical Engineering, Universiti Teknologi Malaysia, UTM 81310, Skudai, Johor Bahru (Malaysia)

    2011-02-15

This paper proposes an improved modeling approach for the two-diode model of the photovoltaic (PV) module. The main contribution of this work is the simplification of the current equation, in which only four parameters are required, compared to six or more in the previously developed two-diode models. Furthermore, the values of the series and parallel resistances are computed using a simple and fast iterative method. To validate the accuracy of the proposed model, six PV modules of different types (multi-crystalline, mono-crystalline and thin-film) from various manufacturers are tested. The performance of the model is evaluated against the popular single-diode models. It is found that the proposed model is superior when subjected to irradiance and temperature variations. In particular the model matches very accurately for all important points of the I-V curves, i.e. the peak power, short-circuit current and open circuit voltage. The modeling method is useful for PV power converter designers and circuit simulator developers who require a simple, fast yet accurate model for the PV module. (author)
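For reference, the sketch below evaluates the textbook two-diode cell equation (not the paper's simplified four-parameter variant); because the current appears on both sides, each I-V point is obtained with a scalar root solve, and all parameter values are placeholders.

```python
import numpy as np
from scipy.optimize import brentq

# Hedged sketch of the general two-diode PV cell equation:
#   I = Ipv - I01*(exp((V+I*Rs)/(a1*Vt)) - 1)
#            - I02*(exp((V+I*Rs)/(a2*Vt)) - 1) - (V+I*Rs)/Rp
# Parameters below are illustrative, not fitted module data.

q, k, T = 1.602e-19, 1.381e-23, 298.15
Vt = k * T / q                          # thermal voltage, about 25.7 mV
Ipv, I01, I02 = 8.0, 1e-10, 1e-6        # photocurrent and diode saturation currents
a1, a2, Rs, Rp = 1.0, 2.0, 0.1, 300.0   # ideality factors and resistances

def residual(I, V):
    Vd = V + I * Rs
    return (Ipv - I01 * np.expm1(Vd / (a1 * Vt))
                - I02 * np.expm1(Vd / (a2 * Vt)) - Vd / Rp - I)

def cell_current(V):
    # the implicit equation is solved by bracketing the root in [-1, Ipv + 1]
    return brentq(residual, -1.0, Ipv + 1.0, args=(V,))

for V in (0.0, 0.3, 0.5, 0.6):
    print(f"V = {V:.2f} V  ->  I = {cell_current(V):.3f} A")
```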

  6. An accurate model for numerical prediction of piezoelectric energy harvesting from fluid structure interaction problems

    International Nuclear Information System (INIS)

    Amini, Y; Emdad, H; Farid, M

    2014-01-01

Piezoelectric energy harvesting (PEH) from ambient energy sources, particularly vibrations, has attracted considerable interest throughout the last decade. Since fluid flow has a high energy density, it is one of the best candidates for PEH. Indeed, piezoelectric energy harvesting from fluid flow takes the form of a natural three-way coupling of the turbulent fluid flow, the electromechanical effect of the piezoelectric material and the electrical circuit. There are some experimental and numerical studies of piezoelectric energy harvesting from fluid flow in the literature. Nevertheless, accurate modeling for predicting the characteristics of this three-way coupling has not yet been developed. In the present study, accurate modeling of this triple coupling is developed and validated by experimental results. A new code based on this modeling is developed on the OpenFOAM platform. (paper)

  7. Modeling of Non-Gravitational Forces for Precise and Accurate Orbit Determination

    Science.gov (United States)

    Hackel, Stefan; Gisinger, Christoph; Steigenberger, Peter; Balss, Ulrich; Montenbruck, Oliver; Eineder, Michael

    2014-05-01

Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The precise reconstruction of the satellite's trajectory is based on Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency Integrated Geodetic and Occultation Receiver (IGOR) onboard the spacecraft. The increasing demand for precise radar products relies on validation methods, which require precise and accurate orbit products. An analysis of the orbit quality by means of internal and external validation methods on long and short timescales shows systematics that reflect deficits in the employed force models. Following a proper analysis of these deficits, possible solution strategies are highlighted in the presentation. The employed Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for gravitational and non-gravitational forces. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). The satellite TerraSAR-X flies on a dusk-dawn orbit at an altitude of approximately 510 km above ground. Due to this orbit geometry, the Sun almost constantly illuminates the satellite, which causes strong across-track accelerations on the plane perpendicular to the solar rays. The indirect effect of the solar radiation is called Earth Radiation Pressure (ERP). This force depends on the sunlight reflected by the illuminated Earth surface (visible spectrum) and on the emission of the Earth body in the infrared spectrum. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed. The scope of

  8. Accurate path integration in continuous attractor network models of grid cells.

    Directory of Open Access Journals (Sweden)

    Yoram Burak

    2009-02-01

Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.

  9. Accurate model-based segmentation of gynecologic brachytherapy catheter collections in MRI-images.

    Science.gov (United States)

    Mastmeyer, Andre; Pernelle, Guillaume; Ma, Ruibin; Barber, Lauren; Kapur, Tina

    2017-12-01

Gynecological cancers, including cervical, ovarian, vaginal and vulvar cancers, cause more than 20,000 deaths annually in the US alone. In many countries, including the US, external-beam radiotherapy followed by high dose rate brachytherapy is the standard-of-care. The superior ability of MR to visualize soft tissue has led to an increase in its usage in planning and delivering brachytherapy treatment. A technical challenge associated with the use of MRI imaging for brachytherapy, in contrast to that of CT imaging, is the visualization of catheters that are used to place radiation sources into cancerous tissue. We describe here a precise, accurate method for achieving catheter segmentation and visualization. The algorithm, with the assistance of manually provided tip locations, performs segmentation using image features, and is guided by a catheter-specific, estimated mechanical model. A final quality control step removes outliers or conflicting catheter trajectories. The mean Hausdorff error on a 54-patient, 760-catheter reference database was 1.49 mm; 51 of the outliers deviated more than two catheter widths (3.4 mm) from the gold standard, corresponding to catheter identification accuracy of 93% in a Syed-Neblett template. In a multi-user simulation experiment for evaluating RMS precision by simulating varying manually provided superior tip positions, 3σ maximum errors were 2.44 mm. The average segmentation time for a single catheter was 3 s on a standard PC. The segmentation time, accuracy and precision are promising indicators of the value of this method for clinical translation of MR-guidance in gynecologic brachytherapy and other catheter-based interventional procedures. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. New models for energy beam machining enable accurate generation of free forms.

    Science.gov (United States)

    Axinte, Dragos; Billingham, John; Bilbao Guillerna, Aitor

    2017-09-01

We demonstrate that, despite differences in their nature, many energy beam controlled-depth machining processes (for example, waterjet, pulsed laser, focused ion beam) can be modeled using the same mathematical framework: a partial differential evolution equation that requires only simple calibrations to capture the physics of each process. The inverse problem can be solved efficiently through the numerical solution of the adjoint problem and leads to beam paths that generate prescribed three-dimensional features with minimal error. The viability of this modeling approach has been demonstrated by generating accurate free-form surfaces using three processes that operate at very different length scales and with different physical principles for material removal: waterjet, pulsed laser, and focused ion beam machining. Our approach can be used to accurately machine materials that are hard to process by other means for scalable applications in a wide variety of industries.
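As a purely illustrative companion to the description above, the sketch below steps a generic etch-rate law forward in time for a moving beam with a Gaussian footprint; it is a toy forward model only, not the calibrated evolution equation or the adjoint-based inverse solution of the paper, and all constants are assumed.

```python
import numpy as np

# Toy forward problem: machined depth z(x, t) grows under a moving beam,
#   dz/dt = k_etch * B(x - x_beam(t)),   B = Gaussian footprint.
# Explicit time stepping; all values are illustrative placeholders (mm, s).

nx, dx, dt = 400, 5e-3, 1e-3          # 2 mm long profile, 1 ms steps
x = np.arange(nx) * dx
depth = np.zeros(nx)

k_etch, sigma = 0.8, 0.05             # removal-rate constant and beam radius
path = np.linspace(0.5, 1.5, 2000)    # beam centre positions over time (mm)

for xc in path:
    footprint = np.exp(-((x - xc) ** 2) / (2 * sigma**2))
    depth += k_etch * footprint * dt

print(f"maximum machined depth: {depth.max():.3f} mm")
```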

  11. Combined model-based segmentation and elastic registration for accurate quantification of the aortic arch.

    Science.gov (United States)

    Biesdorf, Andreas; Rohr, Karl; von Tengg-Kobligk, Hendrik; Wörz, Stefan

    2010-01-01

    Accurate quantification of the morphology of vessels is important for diagnosis and treatment of cardiovascular diseases. We introduce a new approach for the quantification of the aortic arch morphology that combines 3D model-based segmentation with elastic image registration. The performance of the approach has been evaluated using 3D synthetic images and clinically relevant 3D CTA images including pathologies. We also performed a comparison with a previous approach.

  12. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers.

    Science.gov (United States)

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-10-29

Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. Calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and the Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found: the normalized spring constant depends only on the Poisson's ratio, the normalized dimensions and the normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.

  13. Fast and accurate Bayesian model criticism and conflict diagnostics using R-INLA

    KAUST Repository

    Ferkingstad, Egil

    2017-10-16

    Bayesian hierarchical models are increasingly popular for realistic modelling and analysis of complex data. This trend is accompanied by the need for flexible, general and computationally efficient methods for model criticism and conflict detection. Usually, a Bayesian hierarchical model incorporates a grouping of the individual data points, as, for example, with individuals in repeated measurement data. In such cases, the following question arises: Are any of the groups “outliers,” or in conflict with the remaining groups? Existing general approaches aiming to answer such questions tend to be extremely computationally demanding when model fitting is based on Markov chain Monte Carlo. We show how group-level model criticism and conflict detection can be carried out quickly and accurately through integrated nested Laplace approximations (INLA). The new method is implemented as a part of the open-source R-INLA package for Bayesian computing (http://r-inla.org).

  14. Towards more accurate wind and solar power prediction by improving NWP model physics

    Science.gov (United States)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA and EWeLiNE. Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m above ground are used to estimate the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  15. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    DEFF Research Database (Denmark)

    Stovgaard, Kasper; Andreetta, Christian; Ferkinghoff-Borg, Jesper

    2010-01-01

scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN...

  16. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  17. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model.

    Science.gov (United States)

    Gan, Yangzhou; Xia, Zeyang; Xiong, Jing; Zhao, Qunfei; Hu, Ying; Zhang, Jianwei

    2015-01-01

    A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm(3)) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm(3), 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm(3), 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm, and 1.06 ± 0.40 mm

  18. Accurate anisotropic material modelling using only tensile tests for hot and cold forming

    Science.gov (United States)

    Abspoel, M.; Scholting, M. E.; Lansbergen, M.; Neelis, B. M.

    2017-09-01

Accurate material data for simulations require a lot of effort. Advanced yield loci require many different kinds of tests, and a Forming Limit Curve (FLC) needs a large number of samples. Many people use simple material models to reduce the testing effort; however, some models are either not accurate enough (e.g. Hill’48) or do not describe new types of materials (e.g. Keeler). Advanced yield loci describe anisotropic material behaviour accurately, but are not widely adopted because the specialized tests and the data post-processing are a hurdle for many. To overcome these issues, correlations between the advanced yield locus points (biaxial, plane strain and shear) and mechanical properties have been investigated. This resulted in accurate prediction of the advanced stress points using only Rm, Ag and r-values in three directions, from which a Vegter yield locus can be constructed with little effort. FLCs can be predicted with the equations of Abspoel & Scholting depending on total elongation A80, r-value and thickness. Both predictive methods were initially developed for steel, aluminium and stainless steel (BCC and FCC materials). The validity of the predicted Vegter yield locus is investigated with simulations and measurements on both hot- and cold-formed parts and compared with Hill’48. An adapted specimen geometry, ensuring a homogeneous temperature distribution in the Gleeble hot tensile test, was used to measure the mechanical properties needed to predict a hot Vegter yield locus. Since testing of stress states other than uniaxial is very challenging for hot material, the prediction of the yield locus adds a lot of value. For the hot FLC, an A80 sample with a homogeneous temperature distribution is needed, which, due to size limitations, is not possible in the Gleeble tensile tester. Heating the sample in an industrial-type furnace and tensile testing it in a dedicated device is a good alternative to determine the necessary parameters for the FLC

  19. Improvement of a land surface model for accurate prediction of surface energy and water balances

    International Nuclear Information System (INIS)

    Katata, Genki

    2009-02-01

In order to predict energy and water balances between the biosphere and atmosphere accurately, sophisticated schemes to calculate evaporation and adsorption processes in the soil, as well as cloud (fog) water deposition on vegetation, were implemented in the one-dimensional atmosphere-soil-vegetation model including the CO2 exchange process (SOLVEG2). Performance tests in arid areas showed that the above schemes have a significant effect on surface energy and water balances. The framework of the above schemes incorporated in SOLVEG2 and instructions for running the model are documented. With further modifications of the model to implement the carbon exchanges between vegetation and soil, deposition processes of materials on the land surface, vegetation stress-growth-dynamics, etc., the model is suited to evaluating the effect of environmental loads on ecosystems from atmospheric pollutants and radioactive substances under climate changes such as global warming and drought. (author)

  20. A Multiscale Red Blood Cell Model with Accurate Mechanics, Rheology, and Dynamics

    Science.gov (United States)

    Fedosov, Dmitry A.; Caswell, Bruce; Karniadakis, George Em

    2010-01-01

    Abstract Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. PMID:20483330

  1. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    Directory of Open Access Journals (Sweden)

    Stovgaard Kasper

    2010-08-01

    Full Text Available Abstract Background Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS) is an established low resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can for example be used to construct a likelihood function, which is paramount for structure determination based on statistical inference. Results We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy atom representations of amino acids. Such a method avoids the computationally costly iteration over all atoms. We estimated the form factors using generated data from a set of high quality protein structures. No ad hoc scaling or correction factors are applied in the calculation of the curves. Two coarse-grained representations of protein structure were investigated; two scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN. This resulted in a significant improvement in the decoy recognition performance. In conclusion, the presented method shows great promise for
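
    The Debye sum at the heart of this approach is compact enough to sketch directly. The snippet below evaluates I(q) = Σᵢ Σⱼ fᵢ fⱼ sin(q·rᵢⱼ)/(q·rᵢⱼ) for a set of coarse-grained beads; the constant form factors are placeholders for the fitted, q-dependent dummy-atom form factors estimated in the paper.

    ```python
    import numpy as np

    def debye_saxs(coords, form_factors, q_values):
        """SAXS intensity from the Debye formula for coarse-grained scattering bodies.
        coords: (N, 3) bead positions; form_factors: (N,) amplitudes (assumed
        q-independent here for simplicity); q_values: (M,) scattering vector moduli."""
        r_ij = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)  # (N, N)
        ff_ij = form_factors[:, None] * form_factors[None, :]                    # (N, N)
        intensity = np.empty(len(q_values))
        for k, q in enumerate(q_values):
            # sin(qr)/(qr); np.sinc handles the r = 0 diagonal (value 1).
            intensity[k] = np.sum(ff_ij * np.sinc(q * r_ij / np.pi))
        return intensity
    ```

    Representing each residue by two such beads, as the study recommends, only changes how `coords` and `form_factors` are built; the Debye sum itself is unchanged.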

  2. Inference Under a Wright-Fisher Model Using an Accurate Beta Approximation

    DEFF Research Database (Denmark)

    Tataru, Paula; Bataillon, Thomas; Hobolth, Asger

    2015-01-01

    frequencies and the influence of evolutionary pressures, such as mutation and selection. Despite its simple mathematical formulation, exact results for the distribution of allele frequency (DAF) as a function of time are not available in closed analytic form. Existing approximations build...... on the computationally intensive diffusion limit, or rely on matching moments of the DAF. One of the moment-based approximations relies on the beta distribution, which can accurately describe the DAF when the allele frequency is not close to the boundaries (zero and one). Nonetheless, under a Wright-Fisher model...

  3. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    DEFF Research Database (Denmark)

    Stovgaard, Kasper; Andreetta, Christian; Ferkinghoff-Borg, Jesper

    2010-01-01

    Background: Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS......) is an established low resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can for example be used to construct a likelihood function...

  4. FULLY AUTOMATED GENERATION OF ACCURATE DIGITAL SURFACE MODELS WITH SUB-METER RESOLUTION FROM SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    J. Wohlfeil

    2012-07-01

    Full Text Available Modern pixel-wise image matching algorithms like Semi-Global Matching (SGM) are able to compute high resolution digital surface models from airborne and spaceborne stereo imagery. Although image matching itself can be performed automatically, there are prerequisites, like high geometric accuracy, which are essential for ensuring the high quality of resulting surface models. Especially for line cameras, these prerequisites currently require laborious manual interaction using standard tools, which is a growing problem due to continually increasing demand for such surface models. The tedious work includes partly or fully manual selection of tie- and/or ground control points for ensuring the required accuracy of the relative orientation of images for stereo matching. It also includes masking of large water areas that seriously reduce the quality of the results. Furthermore, a good estimate of the depth range is required, since accurate estimates can seriously reduce the processing time for stereo matching. In this paper an approach is presented that allows all of these steps to be performed fully automatically. It includes very robust and precise tie point selection, enabling the accurate calculation of the images’ relative orientation via bundle adjustment. It is also shown how water masking and elevation range estimation can be performed automatically on the basis of freely available SRTM data. Extensive tests with a large number of different satellite images from QuickBird and WorldView are presented as proof of the robustness and reliability of the proposed method.

  5. Particle Image Velocimetry Measurements in Anatomically-Accurate Models of the Mammalian Nasal Cavity

    Science.gov (United States)

    Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.

    2012-11-01

    A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.

  6. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    Science.gov (United States)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R⁻⁵ term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.

  7. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

    Science.gov (United States)

    Xu, Yue; Zhao, Tingchen; Li, Ding

    2018-01-01

    An accurate behavioral model is presented to simulate the important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, so this behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good agreement with the test data, validating the high simulation accuracy.

  8. A simple but accurate procedure for solving the five-parameter model

    International Nuclear Information System (INIS)

    Mares, Oana; Paulescu, Marius; Badescu, Viorel

    2015-01-01

    Highlights: • A new procedure for extracting the parameters of the one-diode model is proposed. • Only the basic information listed in the datasheet of PV modules is required. • Results demonstrate a simple, robust and accurate procedure. - Abstract: The current–voltage characteristic of a photovoltaic module is typically evaluated by using a model based on the solar cell equivalent circuit. The complexity of the procedure applied for extracting the model parameters depends on the data available in the manufacturer’s datasheet. Since the datasheet is not detailed enough, simplified models have to be used in many cases. This paper proposes a new procedure for extracting the parameters of the one-diode model in standard test conditions, using only the basic data listed by all manufacturers in the datasheet (short circuit current, open circuit voltage and maximum power point). The procedure is validated by using manufacturers’ data for six commercial crystalline silicon photovoltaic modules. Comparing the computed and measured current–voltage characteristics, the determination coefficient is in the range 0.976–0.998. Thus, the proposed procedure represents a feasible tool for solving the five-parameter model applied to crystalline silicon photovoltaic modules. The procedure is described in detail, to guide potential users to derive similar models for other types of photovoltaic modules.
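
    Once the five parameters have been extracted, the implicit single-diode equation still has to be solved numerically to reproduce the current–voltage characteristic. The sketch below does this with a bracketing root finder; the parameter values and bracket choices are illustrative and are not taken from the paper's extraction procedure.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def iv_curve(v_points, i_ph, i_0, n, r_s, r_sh, n_cells=60, temp_k=298.15):
        """Five-parameter single-diode model,
            I = I_ph - I_0*(exp((V + I*R_s)/(n*N_s*V_t)) - 1) - (V + I*R_s)/R_sh,
        solved implicitly for I at each voltage. Brackets assume typical module values."""
        v_t = 1.380649e-23 * temp_k / 1.602176634e-19  # thermal voltage kT/q
        def residual(i, v):
            return (i_ph - i_0 * (np.exp((v + i * r_s) / (n * n_cells * v_t)) - 1.0)
                    - (v + i * r_s) / r_sh - i)
        return np.array([brentq(residual, -0.5, i_ph + 0.5, args=(v,)) for v in v_points])
    ```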

  9. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    Directory of Open Access Journals (Sweden)

    Suzhi Xiao

    2016-04-01

    Full Text Available In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.

  10. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement.

    Science.gov (United States)

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-04-28

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.

  11. Inference Under a Wright-Fisher Model Using an Accurate Beta Approximation

    DEFF Research Database (Denmark)

    Tataru, Paula; Bataillon, Thomas; Hobolth, Asger

    2015-01-01

    on the computationally intensive diffusion limit, or rely on matching moments of the DAF. One of the moment-based approximations relies on the beta distribution, which can accurately describe the DAF when the allele frequency is not close to the boundaries (zero and one). Nonetheless, under a Wright-Fisher model......, the probability of being on the boundary can be positive, corresponding to the allele being either lost or fixed. Here, we introduce the beta with spikes, an extension of the beta approximation, which explicitly models the loss and fixation probabilities as two spikes at the boundaries. We show that the addition...... of spikes greatly improves the quality of the approximation. We additionally illustrate, using both simulated and real data, how the beta with spikes can be used for inference of divergence times between populations, with comparable performance to an existing state-of-the-art method....
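
    A brute-force way to see what the beta-with-spikes captures is to simulate the Wright-Fisher process directly and match moments, with the loss and fixation probabilities taken as the two boundary spikes. The sketch below does exactly that; it is a simulation stand-in, not the analytical moment recursion developed in the paper.

    ```python
    import numpy as np

    def beta_with_spikes(n_pop, p0, generations, n_rep=100_000, seed=None):
        """Return (loss spike, fixation spike, alpha, beta) for the allele frequency
        distribution after `generations` of binomial Wright-Fisher resampling.
        The interior of the distribution is summarized by a moment-matched beta."""
        rng = np.random.default_rng(seed)
        freq = np.full(n_rep, p0)
        for _ in range(generations):
            freq = rng.binomial(n_pop, freq) / n_pop
        lost, fixed = np.mean(freq == 0.0), np.mean(freq == 1.0)
        interior = freq[(freq > 0.0) & (freq < 1.0)]
        m, v = interior.mean(), interior.var()
        common = m * (1.0 - m) / v - 1.0          # method-of-moments factor
        return lost, fixed, common * m, common * (1.0 - m)
    ```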

  12. 2016 KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David Bradley [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Waters, Jiajia [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-25

    Los Alamos National Laboratory and its collaborators are facilitating engine modeling by improving accuracy and robustness of the modeling, and improving the robustness of software. We also continue to improve the physical modeling methods. We are developing and implementing new mathematical algorithms, those that represent the physics within an engine. We provide software that others may use directly or that they may alter with various models e.g., sophisticated chemical kinetics, different turbulent closure methods or other fuel injection and spray systems.

  13. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    Science.gov (United States)

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  14. Accurate geometry scalable complementary metal oxide semiconductor modelling of low-power 90 nm amplifier circuits

    Directory of Open Access Journals (Sweden)

    Apratim Roy

    2014-05-01

    Full Text Available This paper proposes a technique to accurately estimate the radio frequency behaviour of low-power 90 nm amplifier circuits with geometry scalable discrete complementary metal oxide semiconductor (CMOS) modelling. Rather than characterising individual elements, the scheme is able to predict gain, noise and reflection loss of low-noise amplifier (LNA) architectures made with bias, active and passive components. It reduces the number of model parameters by formulating dependent functions in symmetric distributed modelling and shows that simple fitting factors can account for extraneous (interconnect) effects in the LNA structure. Equivalent-circuit model equations based on physical structure and describing layout parasitics are developed for major amplifier elements like the metal–insulator–metal (MIM) capacitor, spiral symmetric inductor, polysilicon (PS) resistor and bulk RF transistor. The models are geometry scalable with respect to feature dimensions, i.e. MIM/PS width and length, outer-dimension/turns of the planar inductor and channel-width/fingers of the active device. Results obtained with the CMOS models are compared against measured literature data for two 1.2 V amplifier circuits, where prediction accuracy for RF parameters (S21, noise figure, S11, S22) lies within the range of 92–99%.

  15. Comprehensive Care For Joint Replacement Model - Provider Data

    Data.gov (United States)

    U.S. Department of Health & Human Services — Comprehensive Care for Joint Replacement Model - provider data. This data set includes provider data for two quality measures tracked during an episode of care:...

  16. Accurate representation of geostrophic and hydrostatic balance in unstructured mesh finite element ocean modelling

    Science.gov (United States)

    Maddison, J. R.; Marshall, D. P.; Pain, C. C.; Piggott, M. D.

    Accurate representation of geostrophic and hydrostatic balance is an essential requirement for numerical modelling of geophysical flows. Potentially, unstructured mesh numerical methods offer significant benefits over conventional structured meshes, including the ability to conform to arbitrary bounding topography in a natural manner and the ability to apply dynamic mesh adaptivity. However, there is a need to develop robust schemes with accurate representation of physical balance on arbitrary unstructured meshes. We discuss the origin of physical balance errors in a finite element discretisation of the Navier-Stokes equations using the fractional timestep pressure projection method. By considering the Helmholtz decomposition of forcing terms in the momentum equation, it is shown that the components of the buoyancy and Coriolis accelerations that project onto the non-divergent velocity tendency are the small residuals between two terms of comparable magnitude. Hence there is a potential for significant injection of imbalance by a numerical method that does not compute these residuals accurately. This observation is used to motivate a balanced pressure decomposition method whereby an additional "balanced pressure" field, associated with buoyancy and Coriolis accelerations, is solved for at increased accuracy and used to precondition the solution for the dynamical pressure. The utility of this approach is quantified in a fully non-linear system in exact geostrophic balance. The approach is further tested via quantitative comparison of unstructured mesh simulations of the thermally driven rotating annulus against laboratory data. Using a piecewise linear discretisation for velocity and pressure (a stabilised P1P1 discretisation), it is demonstrated that the balanced pressure decomposition method is required for a physically realistic representation of the system.

  17. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    Science.gov (United States)

    Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish

    2016-04-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy.

  18. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    Directory of Open Access Journals (Sweden)

    Matias I Maturana

    2016-04-01

    Full Text Available Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy.
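
    The linear-nonlinear pipeline described in this record is straightforward to prototype: project the stimulation patterns onto a PCA subspace (the electrical receptive field) and fit a nonlinearity that maps the projection to spiking probability. In the sketch below a logistic nonlinearity is assumed purely for illustration; it is not necessarily the form fitted by the authors.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    def fit_linear_nonlinear(stimuli, spikes, n_components=3):
        """stimuli: (trials, electrodes) current amplitudes; spikes: (trials,) 0/1 labels."""
        pca = PCA(n_components=n_components).fit(stimuli)
        projected = pca.transform(stimuli)             # low-dimensional subspace
        nonlinearity = LogisticRegression().fit(projected, spikes)
        erf_axes = pca.components_                     # electrical receptive field axes
        def spike_probability(new_stimuli):
            return nonlinearity.predict_proba(pca.transform(new_stimuli))[:, 1]
        return erf_axes, spike_probability
    ```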

  19. A new model for the accurate calculation of natural gas viscosity

    Directory of Open Access Journals (Sweden)

    Xiaohong Yang

    2017-03-01

    Full Text Available Viscosity of natural gas is a basic and important parameter, of theoretical and practical significance in the domain of natural gas recovery, transmission and processing. In order to obtain the accurate viscosity data efficiently at a low cost, a new model and its corresponding functional relation are derived on the basis of the relationship among viscosity, temperature and density derived from the kinetic theory of gases. After the model parameters were optimized using a lot of experimental data, the diagram showing the variation of viscosity along with temperature and density is prepared, showing that: ① the gas viscosity increases with the increase of density as well as the increase of temperature in the low density region; ② the gas viscosity increases with the decrease of temperature in high density region. With this new model, the viscosity of 9 natural gas samples was calculated precisely. The average relative deviation between these calculated values and 1539 experimental data measured at 250–450 K and 0.10–140.0 MPa is less than 1.9%. Compared with the 793 experimental data with a measurement error less than 0.5%, the maximum relative deviation is less than 0.98%. It is concluded that this new model is more advantageous than the previous 8 models in terms of simplicity, accuracy, fast calculation, and direct applicability to the CO2 bearing gas samples.

  20. Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model

    International Nuclear Information System (INIS)

    Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.

    2007-01-01

    Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem

  1. Three-dimensional printing of anatomically accurate, patient specific intracranial aneurysm models.

    Science.gov (United States)

    Anderson, Jeff R; Thompson, Walker L; Alkattan, Abdulaziz K; Diaz, Orlando; Klucznik, Richard; Zhang, Yi J; Britz, Gavin W; Grossman, Robert G; Karmonik, Christof

    2016-05-01

    To develop and validate a method for creating realistic, patient specific replicas of cerebral aneurysms by means of fused deposition modeling. The luminal boundaries of 10 cerebral aneurysms, together with adjacent proximal and distal sections of the parent artery, were segmented based on DSA images, and corresponding virtual three-dimensional (3D) surface reconstructions were created. From these, polylactic acid and MakerBot Flexible Filament replicas of each aneurysm were created by means of fused deposition modeling. The accuracy of the replicas was assessed by quantifying statistical significance in the variations of their inner dimensions relative to 3D DSA images. Feasibility for using these replicas as flow phantoms in combination with phase contrast MRI was demonstrated. 3D printed aneurysm models were created for all 10 subjects. Good agreement was seen between the models and the source anatomy. Aneurysm diameter measurements of the printed models and source images correlated well (r=0.999; p<0.001). 3D printed models could be imaged with flow via MRI. The 3D printed aneurysm models presented were accurate and were able to be produced in-house. These models can be used for previously cited applications, but their anatomical accuracy also enables their use as MRI flow phantoms for comparison with ongoing studies of computational fluid dynamics. Proof of principle imaging experiments confirm MRI flow phantom utility. Published by the BMJ Publishing Group Limited.

  2. Effective and accurate approach for modeling of commensurate–incommensurate transition in krypton monolayer on graphite

    International Nuclear Information System (INIS)

    Ustinov, E. A.

    2014-01-01

    The commensurate–incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs–Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for the determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton–graphite system. Analysis of the phase diagram of the krypton molecular layer, thermodynamic functions of coexisting phases, and a method of prediction of adsorption isotherms are considered, accounting for compression of the graphite due to the krypton–carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas–solid and solid–solid system

  3. Spiral CT scanning plan to generate accurate FE models of the human femur

    International Nuclear Information System (INIS)

    Zannoni, C.; Testi, D.; Capello, A.

    1999-01-01

    In spiral computed tomography (CT), source rotation, patient translation, and data acquisition are continuously conducted. Settings of the detector collimation and the table increment affect the image quality in terms of spatial and contrast resolution. This study assessed and measured the efficacy of spiral CT in those applications where the accurate reconstruction of bone morphology is critical: custom-made prosthesis design or three-dimensional modelling of the mechanical behaviour of long bones. Results show that conventional CT grants the highest accuracy. Spiral CT with D=5 mm and P=1.5 in the regions where the morphology is more regular slightly degrades the image quality, but allows a higher number of images to be acquired at comparable cost, increasing the longitudinal resolution of the acquired data set. (author)

  4. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    International Nuclear Information System (INIS)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-01-01

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, and semi-local and hybrid exchange-correlation functionals within density functional theory as the two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.
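
    The record's co-kriging scheme couples the two fidelities inside a single Gaussian process. A simpler stand-in that conveys the same idea is delta learning: train a Gaussian process on the difference between hybrid-functional and semi-local bandgaps, then add the predicted correction to the cheap values. The sketch below uses that simplification; it is not the co-kriging formulation of the paper, and all names are illustrative.

    ```python
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def delta_learning_bandgaps(features, gap_low, gap_high_subset, high_idx):
        """features: (N, d) compound descriptors; gap_low: (N,) low-fidelity gaps;
        gap_high_subset: high-fidelity gaps available only for rows in high_idx.
        Returns corrected gaps for all N compounds plus a predictive uncertainty."""
        delta = gap_high_subset - gap_low[high_idx]
        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gp.fit(features[high_idx], delta)
        correction, std = gp.predict(features, return_std=True)
        return gap_low + correction, std
    ```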

  5. Accurate Treatment of Collisions and Water-Delivery in Models of Terrestrial Planet Formation

    Science.gov (United States)

    Haghighipour, Nader; Maindl, Thomas; Schaefer, Christoph

    2017-10-01

    It is widely accepted that collisions among solid bodies, initiated by their interactions with planetary embryos, are the key process in the formation of terrestrial planets and the transport of volatiles and chemical compounds to their accretion zones. Unfortunately, due to computational complexities, these collisions are often treated in a rudimentary way. Impacts are considered to be perfectly inelastic and volatiles are considered to be fully transferred from one object to the other. This perfect-merging assumption has profound effects on the mass and composition of final planetary bodies as it grossly overestimates the masses of these objects and the amounts of volatiles and chemical elements transferred to them. It also entirely neglects collisional loss of volatiles (e.g., water) and draws an unrealistic connection between these properties and the chemical structure of the protoplanetary disk (i.e., the location of their original carriers). We have developed a new and comprehensive methodology to simulate the growth of embryos into planetary bodies, in which we use a combination of SPH and N-body codes to accurately model collisions as well as the transport/transfer of chemical compounds. Our methodology accounts for the loss of volatiles (e.g., ice sublimation) during the orbital evolution of their carriers and accurately tracks their transfer from one body to another. Results of our simulations show that traditional N-body modeling of terrestrial planet formation overestimates the masses and water contents of the final planets by over 60%, implying not only that the amounts of water they suggest are far from realistic, but also that small planets such as Mars can form in these simulations when collisions are treated properly. We will present details of our methodology and discuss its implications for terrestrial planet formation and water delivery to Earth.

  6. An Efficient Hybrid DSMC/MD Algorithm for Accurate Modeling of Micro Gas Flows

    KAUST Repository

    Liang, Tengfei

    2013-01-01

    Aiming at simulating micro gas flows with accurate boundary conditions, an efficient hybrid algorithm is developed by combining the molecular dynamics (MD) method with the direct simulation Monte Carlo (DSMC) method. The efficiency comes from the fact that the MD method is applied only within the gas-wall interaction layer, characterized by the cut-off distance of the gas-solid interaction potential, to resolve accurately the gas-wall interaction process, while the DSMC method is employed in the remaining portion of the flow field to efficiently simulate rarefied gas transport outside the gas-wall interaction layer. A unique feature of the present scheme is that the coupling between the two methods is realized by matching the molecular velocity distribution function at the DSMC/MD interface, hence there is no need for one-to-one mapping between an MD gas molecule and a DSMC simulation particle. Further improvement in efficiency is achieved by taking advantage of gas rarefaction inside the gas-wall interaction layer and by employing the "smart-wall model" proposed by Barisik et al. The developed hybrid algorithm is validated on two classical benchmarks, namely the 1-D Fourier thermal problem and the Couette shear flow problem. Both the accuracy and efficiency of the hybrid algorithm are discussed. As an application, the hybrid algorithm is employed to simulate the thermal transpiration coefficient in the free-molecule regime for a system with an atomically smooth surface. The result is utilized to validate the coefficients calculated from the pure DSMC simulation with Maxwell and Cercignani-Lampis gas-wall interaction models. © 2014 Global-Science Press.

  7. Customer-Provider Strategic Alignment: A Maturity Model

    Science.gov (United States)

    Luftman, Jerry; Brown, Carol V.; Balaji, S.

    This chapter presents a new model for assessing the maturity of a customer-provider relationship from a collaborative service delivery perspective: the Customer-Provider Strategic Alignment Maturity (CPSAM) Model. This model builds on recent research for effectively managing the customer-provider relationship in IT service outsourcing contexts and a validated model for assessing alignment across internal IT service units and their business customers within the same organization. After reviewing relevant literature by service science and information systems researchers, the six overarching components of the maturity model are presented: value measurements, governance, partnership, communications, human resources and skills, and scope and architecture. A key assumption of the model is that all of the components need to be addressed to assess and improve customer-provider alignment. Examples of specific metrics for measuring the maturity level of each component over the five levels of maturity are also presented.

  8. A risk assessment model for selecting cloud service providers

    OpenAIRE

    Cayirci, Erdal; Garaga, Alexandr; Santana de Oliveira, Anderson; Roudier, Yves

    2016-01-01

    The Cloud Adoption Risk Assessment Model is designed to help cloud customers in assessing the risks that they face by selecting a specific cloud service provider. It evaluates background information obtained from cloud customers and cloud service providers to analyze various risk scenarios. This facilitates decision making in selecting the cloud service provider with the most preferable risk profile based on aggregated risks to security, privacy, and service delivery. Based on this model we ...

  9. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    Science.gov (United States)

    Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi

    2017-05-01

    To realize the high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems for the stator, mover and corner coil are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque formulas of the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, a compound Simpson numerical integration method is proposed in this paper to solve the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be applied in practice.

  10. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    Directory of Open Access Journals (Sweden)

    Baoquan Kou

    2017-05-01

    Full Text Available To realize the high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems for the stator, mover and corner coil are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque formulas of the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, a compound Simpson numerical integration method is proposed in this paper to solve the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be applied in practice.
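
    Composite Simpson integration, which both versions of this record rely on for the corner-coil segment, is only a few lines of code. In the sketch below the integrand is a placeholder harmonic profile, not the actual Lorentz force density of the motor model.

    ```python
    import numpy as np

    def composite_simpson(f, a, b, n_panels=64):
        """Composite Simpson's rule on [a, b] with 2*n_panels sub-intervals."""
        n = 2 * n_panels                   # Simpson's rule needs an even interval count
        x = np.linspace(a, b, n + 1)
        y = f(x)
        h = (b - a) / n
        return h / 3.0 * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-1:2].sum())

    # Placeholder corner-segment integrand (illustrative only, not the motor geometry):
    force_component = composite_simpson(lambda t: np.cos(t) * np.sin(t), 0.0, np.pi / 2)
    ```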

  11. Development of a Fast and Accurate PCRTM Radiative Transfer Model in the Solar Spectral Region

    Science.gov (United States)

    Liu, Xu; Yang, Qiguang; Li, Hui; Jin, Zhonghai; Wu, Wan; Kizer, Susan; Zhou, Daniel K.; Yang, Ping

    2016-01-01

    A fast and accurate principal component-based radiative transfer model in the solar spectral region (PCRTM-SOLAR) has been developed. The algorithm is capable of simulating reflected solar spectra in both clear sky and cloudy atmospheric conditions. Multiple scattering of the solar beam by multilayer clouds and aerosols is calculated using a discrete ordinate radiative transfer scheme. The PCRTM-SOLAR model can be trained to simulate top-of-atmosphere radiance or reflectance spectra with spectral resolution ranging from 1 cm⁻¹ to a few nanometers. Broadband radiances or reflectances can also be calculated if desired. The current version of PCRTM-SOLAR covers a spectral range from 300 to 2500 nm. The model is valid for solar zenith angles ranging from 0 to 80 deg, instrument view zenith angles ranging from 0 to 70 deg, and relative azimuthal angles ranging from 0 to 360 deg. Depending on the number of spectral channels, the speed of the current version of PCRTM-SOLAR is a few hundred to over one thousand times faster than the medium-speed correlated-k option of MODTRAN5. The absolute RMS error in channel radiance is smaller than 10⁻³ mW/cm²/sr/cm⁻¹ and the relative error is typically less than 0.2%.

  12. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    Science.gov (United States)

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

    With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Often function involves communications across the domain interfaces, and the knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains in at least two diverse domain architectures (single and multidomain), we predict the interface residues corresponding to domains from the two-domain proteins. We also use information from the three-dimensional structures of individual domains of two-domain proteins to train a naïve Bayes classifier model to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to the domain-domain interfaces. This method is specific to multidomain proteins which contain domains in more than one protein architectural context. Using predicted residues to constrain domain-domain interaction, rigid-body docking was able to provide us with accurate full-length protein structures with correct orientation of domains. We believe that these results can be of considerable interest toward rational protein and interaction design, apart from providing us with valuable information on the nature of interactions. © 2013 Wiley Periodicals, Inc.
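
    A minimal version of the classification step, assuming per-residue evolutionary features have already been computed, can be written with an off-the-shelf naïve Bayes implementation. The feature layout and cross-validation choice below are assumptions for illustration, not the study's exact protocol.

    ```python
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    def train_interface_classifier(residue_features, is_interface):
        """residue_features: (n_residues, n_features) evolutionary descriptors, e.g.
        conservation in single- vs. multi-domain contexts (placeholder features);
        is_interface: (n_residues,) binary labels from known domain-domain interfaces."""
        clf = GaussianNB()
        cv_accuracy = cross_val_score(clf, residue_features, is_interface, cv=5).mean()
        clf.fit(residue_features, is_interface)
        return clf, cv_accuracy
    ```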

  13. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    Science.gov (United States)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of

  14. Generating Converged Accurate Free Energy Surfaces for Chemical Reactions with a Force-Matched Semiempirical Model.

    Science.gov (United States)

    Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir

    2018-03-22

    We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
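
    Force matching itself reduces to a least-squares fit of model forces to reference forces collected from short ab initio trajectories. The sketch below shows that objective in its generic form; the model-force callable and its parameters are placeholders, not an actual DFTB repulsive potential.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def force_matching_fit(param0, model_forces, configs, reference_forces):
        """model_forces(params, config) -> (n_atoms, 3) forces for one configuration;
        configs / reference_forces: matching lists of configurations and DFT forces."""
        def residuals(params):
            diffs = [model_forces(params, cfg) - f_ref
                     for cfg, f_ref in zip(configs, reference_forces)]
            return np.concatenate([d.ravel() for d in diffs])
        return least_squares(residuals, param0).x
    ```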

  15. Bisection method for accurate modeling and simulation of fouling in hollow fiber membrane system.

    Science.gov (United States)

    Liang, Shuang; Zhao, Yubo; Zhang, Jian; Song, Lianfa

    2017-06-01

    Accurate description and modeling of fouling on hollow fibers imposes a serious challenge to more effective fouling mitigation and performance optimization of the membrane system. Although the governing equations for membrane fouling can be constructed based on the known theories from membrane filtration and fluid dynamics, they are unsolvable analytically due to the complex spatially and temporally varying nature of fouling on hollow fibers. The currently available numerical solutions for the governing equations are either unreliable or inconvenient to use because of the use of unfounded assumptions or cumbersome calculation methods. This work presented for the first time a rigorous numerical procedure to solve the governing equations for fouling development on hollow fibers. A critical step to achieve this goal is the use of the bisection method to determine the transmembrane pressure at the dead end of the fibers. With this procedure, fouling behavior in the hollow fiber membrane system under a given condition can be simulated within a second. The model simulations were well calibrated and verified with published experimental data from the literature. Also presented in the paper are simulations of the performance of the hollow fiber membrane system under various operating conditions.
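
    The bisection step singled out in this abstract is generic enough to sketch: the dead-end transmembrane pressure is adjusted until a shooting function, which would integrate the fiber flow equations to the outlet, matches the known outlet condition. The shooting function below is a placeholder argument; only the bisection logic is shown.

    ```python
    def solve_dead_end_pressure(residual, p_low, p_high, tol=1e-6, max_iter=100):
        """Bisection on the dead-end pressure. `residual(p)` must return the mismatch
        with the outlet condition and change sign between p_low and p_high."""
        f_low = residual(p_low)
        for _ in range(max_iter):
            p_mid = 0.5 * (p_low + p_high)
            f_mid = residual(p_mid)
            if abs(f_mid) < tol or (p_high - p_low) < tol:
                return p_mid
            if f_low * f_mid < 0.0:     # root lies in the lower half of the bracket
                p_high = p_mid
            else:                       # root lies in the upper half of the bracket
                p_low, f_low = p_mid, f_mid
        return 0.5 * (p_low + p_high)
    ```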

  16. Linearized Modeling Methods of AC-DC Converters for an Accurate Frequency Response

    DEFF Research Database (Denmark)

    Kwon, Jun Bum; Wang, Xiongfei; Blaabjerg, Frede

    2017-01-01

    difficulties in modeling are discussed. A comparison of these methods is presented. Simulation results show that the harmonic state-space modeling method provides an efficient way to analyze both steady-state frequency coupling and dynamic harmonic interactions in power-electronic-based power systems.......Wideband harmonics and resonances are challenging the stability and power quality of emerging power-electronic-based power systems, and therefore, harmonic modeling and analysis of power converters are becoming even more important. However, the complex interactions on both ac and dc sides...

  17. Accurate estimate of the relic density and the kinetic decoupling in nonthermal dark matter models

    International Nuclear Information System (INIS)

    Arcadi, Giorgio; Ullio, Piero

    2011-01-01

    Nonthermal dark matter generation is an appealing alternative to the standard paradigm of thermal WIMP dark matter. We reconsider nonthermal production mechanisms in a systematic way, and develop a numerical code for accurate computations of the dark matter relic density. We discuss, in particular, scenarios with long-lived massive states decaying into dark matter particles, appearing naturally in several beyond the standard model theories, such as supergravity and superstring frameworks. Since nonthermal production favors dark matter candidates with large pair annihilation rates, we analyze the possible connection with the anomalies detected in the lepton cosmic-ray flux by Pamela and Fermi. Concentrating on supersymmetric models, we consider the effect of these nonstandard cosmologies in selecting a preferred mass scale for the lightest supersymmetric particle as a dark matter candidate, and the consequent impact on the interpretation of new physics discovered or excluded at the LHC. Finally, we examine a rather predictive model, the G2-MSSM, investigating some of the standard assumptions usually implemented in the solution of the Boltzmann equation for the dark matter component, including coannihilations. We question the hypothesis that kinetic equilibrium holds along the whole phase of dark matter generation, and the validity of the factorization usually implemented to rewrite the system of a coupled Boltzmann equation for each coannihilating species as a single equation for the sum of all the number densities. As a byproduct we develop here a formalism to compute the kinetic decoupling temperature in case of coannihilating particles, which can also be applied to other particle physics frameworks, and also to standard thermal relics within a standard cosmology.

  18. Accurate Characterization of Ion Transport Properties in Binary Symmetric Electrolytes Using In Situ NMR Imaging and Inverse Modeling.

    Science.gov (United States)

    Sethurajan, Athinthra Krishnaswamy; Krachkovskiy, Sergey A; Halalay, Ion C; Goward, Gillian R; Protas, Bartosz

    2015-09-17

    We used NMR imaging (MRI) combined with data analysis based on inverse modeling of the mass transport problem to determine ionic diffusion coefficients and transference numbers in electrolyte solutions of interest for Li-ion batteries. Sensitivity analyses have shown that accurate estimates of these parameters (as a function of concentration) are critical to the reliability of the predictions provided by models of porous electrodes. The inverse modeling (IM) solution was generated with an extension of the Planck-Nernst model for the transport of ionic species in electrolyte solutions. Concentration-dependent diffusion coefficients and transference numbers were derived using concentration profiles obtained from in situ ¹⁹F MRI measurements. Material properties were reconstructed under minimal assumptions using methods of variational optimization to minimize the least-squares deviation between experimental and simulated concentration values, with the uncertainty of the reconstructions quantified using a Monte Carlo analysis. The diffusion coefficients obtained by pulsed field gradient NMR (PFG-NMR) fall within the 95% confidence bounds for the diffusion coefficient values obtained by the MRI+IM method. The MRI+IM method also yields the concentration dependence of the Li⁺ transference number in agreement with trends obtained by electrochemical methods for similar systems and with predictions of theoretical models for concentrated electrolyte solutions, in marked contrast to the salt concentration dependence of transport numbers determined from PFG-NMR data.
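
    The inverse-modeling idea can be illustrated with a toy one-dimensional version: solve a nonlinear diffusion equation forward with a parameterized, concentration-dependent diffusivity and adjust the parameters until the simulated profile matches the measured one in the least-squares sense. The linear D(c) parameterization and the explicit time stepping below are simplifying assumptions, not the scheme used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def simulate_profile(d_params, c_initial, dx, dt, n_steps):
        """Explicit finite differences for dc/dt = d/dx( D(c) dc/dx ) with
        D(c) = d0 + d1*c (illustrative); boundaries held at their initial values.
        The explicit scheme requires a suitably small dt for stability."""
        d0, d1 = d_params
        c = c_initial.astype(float).copy()
        for _ in range(n_steps):
            d_face = d0 + d1 * 0.5 * (c[1:] + c[:-1])   # diffusivity at cell faces
            flux = -d_face * np.diff(c) / dx
            c[1:-1] -= dt / dx * np.diff(flux)
        return c

    def fit_diffusivity(c_initial, c_measured, dx, dt, n_steps):
        residual = lambda p: simulate_profile(p, c_initial, dx, dt, n_steps) - c_measured
        return least_squares(residual, x0=[1e-10, 0.0]).x
    ```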

  19. Importance of Housekeeping gene selection for accurate RT-qPCR in a wound healing model

    Science.gov (United States)

    Turabelidze, Anna; Guo, Shujuan; DiPietro, Luisa A.

    2010-01-01

    Studies in the field of wound healing have utilized a variety of different housekeeping genes for RT-qPCR analysis. However, nearly all of these studies assume that the selected normalization gene is stably expressed throughout the course of the repair process. The purpose of our current investigation was to identify the most stable housekeeping genes for studying gene expression in mouse wound healing using RT-qPCR. To identify which housekeeping genes are optimal for studying gene expression in wound healing, we examined all articles published in Wound Repair and Regeneration that cited RT-qPCR during the period of Jan/Feb 2008 until July/August 2009. We determined that ACTIN, GAPDH, 18S and β2M were the most frequently used housekeeping genes in human, mouse, and pig studies. We also investigated nine commonly used housekeeping genes that are not generally used in wound healing models: GUS, TBP, RPLP2, ATP5B, SDHA, UBC, CANX, CYC1, and YWHAZ. We observed that wounded and unwounded tissues have contrasting housekeeping gene expression stability. The results demonstrate that commonly used housekeeping genes must be validated as accurate normalizing genes for each individual experimental condition. PMID:20731795
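
    A minimal illustration of checking normalizer stability (not the analysis pipeline used in the study, which is not specified in this abstract) is to rank candidate housekeeping genes by the spread of their Cq values across wounded and unwounded samples. The Cq numbers below are invented for the example; dedicated tools such as geNorm or NormFinder apply more refined pairwise stability measures.

```python
# Illustrative sketch only: rank candidate housekeeping genes by a simple
# stability measure (standard deviation of Cq across wounded and unwounded
# samples). The Cq values below are made up for the example.
import numpy as np

cq = {                      # rows: replicate samples (wounded + unwounded)
    "GAPDH": [18.2, 18.9, 20.1, 17.8, 19.5],
    "ACTIN": [16.5, 16.7, 16.4, 16.9, 16.6],
    "RPLP2": [21.0, 21.2, 20.9, 21.1, 21.3],
}

stability = {gene: np.std(vals, ddof=1) for gene, vals in cq.items()}
for gene, sd in sorted(stability.items(), key=lambda kv: kv[1]):
    print(f"{gene}: SD(Cq) = {sd:.2f}")   # smaller SD = more stable normalizer
```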

  20. An evolutionary model-based algorithm for accurate phylogenetic breakpoint mapping and subtype prediction in HIV-1.

    Directory of Open Access Journals (Sweden)

    Sergei L Kosakovsky Pond

    2009-11-01

    Full Text Available Genetically diverse pathogens (such as Human Immunodeficiency virus type 1, HIV-1) are frequently stratified into phylogenetically or immunologically defined subtypes for classification purposes. Computational identification of such subtypes is helpful in surveillance, epidemiological analysis and detection of novel variants, e.g., circulating recombinant forms in HIV-1. A number of conceptually and technically different techniques have been proposed for determining the subtype of a query sequence, but there is not a universally optimal approach. We present a model-based phylogenetic method for automatically subtyping an HIV-1 (or other viral or bacterial) sequence, mapping the location of breakpoints and assigning parental sequences in recombinant strains as well as computing confidence levels for the inferred quantities. Our Subtype Classification Using Evolutionary ALgorithms (SCUEAL) procedure is shown to perform very well in a variety of simulation scenarios, runs in parallel when multiple sequences are being screened, and matches or exceeds the performance of existing approaches on typical empirical cases. We applied SCUEAL to all available polymerase (pol) sequences from two large databases, the Stanford Drug Resistance database and the UK HIV Drug Resistance Database. Comparing with subtypes which had previously been assigned revealed that a minor but substantial (approximately 5%) fraction of pure subtype sequences may in fact be within- or inter-subtype recombinants. A free implementation of SCUEAL is provided as a module for the HyPhy package and the Datamonkey web server. Our method is especially useful when an accurate automatic classification of an unknown strain is desired, and is positioned to complement and extend faster but less accurate methods. Given the increasingly frequent use of HIV subtype information in studies focusing on the effect of subtype on treatment, clinical outcome, pathogenicity and vaccine design, the importance

  1. An accurate European option pricing model under Fractional Stable Process based on Feynman Path Integral

    Science.gov (United States)

    Ma, Chao; Ma, Qinghua; Yao, Haixiang; Hou, Tiancheng

    2018-03-01

    In this paper, we propose to use the Fractional Stable Process (FSP) for option pricing. The FSP is one of the few candidates to directly model a number of desired empirical properties of asset price risk neutral dynamics. However, pricing the vanilla European option under FSP is difficult and problematic. In this paper, building upon Feynman Path Integral inspired techniques, we present a novel computational model for option pricing, i.e. the Fractional Stable Process Path Integral (FSPPI) model under a general fractional stable distribution that tackles this problem. Numerical and empirical experiments show that the proposed pricing model provides a correction of the Black-Scholes pricing error - overpricing long term options, underpricing short term options; overpricing out-of-the-money options, underpricing in-the-money options - without any additional structures such as stochastic volatility and a jump process.
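
    For reference, the classical Black-Scholes European call price against which the FSPPI correction is framed is the standard result (not specific to this paper), with N(·) the standard normal CDF, r the risk-free rate and σ the volatility:

```latex
C(S_0, K, T) = S_0\,N(d_1) - K e^{-rT} N(d_2), \qquad
d_{1,2} = \frac{\ln(S_0/K) + \left(r \pm \tfrac{1}{2}\sigma^{2}\right)T}{\sigma\sqrt{T}}
```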

  2. An accurate modelling of the two-diode model of PV module using a hybrid solution based on differential evolution

    International Nuclear Information System (INIS)

    Chin, Vun Jack; Salam, Zainal; Ishaque, Kashif

    2016-01-01

    Highlights: • An accurate computational method for the two-diode model of PV module is proposed. • The hybrid method employs analytical equations and Differential Evolution (DE). • I_PV, I_o1, and R_p are computed analytically, while a_1, a_2, I_o2 and R_s are optimized. • This allows the model parameters to be computed without using costly assumptions. - Abstract: This paper proposes an accurate computational technique for the two-diode model of PV module. Unlike previous methods, it does not rely on assumptions that cause the accuracy to be compromised. The key to this improvement is the implementation of a hybrid solution, i.e. by incorporating the analytical method with the differential evolution (DE) optimization technique. Three parameters, i.e. I_PV, I_o1, and R_p are computed analytically, while the remaining, a_1, a_2, I_o2 and R_s are optimized using the DE. To validate its accuracy, the proposed method is tested on three PV modules of different technologies: mono-crystalline, poly-crystalline and thin film. Furthermore, its performance is evaluated against two popular computational methods for the two-diode model. The proposed method is found to exhibit superior accuracy for the variation in irradiance and temperature for all module types. In particular, the improvement in accuracy is evident at low irradiance conditions; the root-mean-square error is one order of magnitude lower than that of the other methods. In addition, the values of the model parameters are consistent with the physics of PV cell. It is envisaged that the method can be very useful for PV simulation, in which accuracy of the model is of prime concern.
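
    A minimal sketch of the hybrid idea using SciPy's differential evolution is given below. All parameter values are hypothetical; for brevity I_PV, I_o1 and R_p are held fixed rather than recomputed analytically for each candidate solution as in the actual method, and the objective evaluates the residual of the implicit two-diode equation at the measured points, a common shortcut rather than the paper's exact procedure.

```python
# Minimal sketch (hypothetical parameter values, not the paper's implementation):
# fit the two-diode PV module model with scipy's differential evolution.
import numpy as np
from scipy.optimize import brentq, differential_evolution

k, q, T, Ns = 1.380649e-23, 1.602176634e-19, 298.15, 36
Vt = Ns * k * T / q                      # module thermal voltage [V]

I_PV, I_o1, R_p = 8.21, 1e-9, 300.0      # "analytical" parameters, held fixed here
true = dict(a1=1.0, a2=1.8, I_o2=5e-8, R_s=0.25)

def model_current(V, I, a1, a2, I_o2, R_s):
    """Right-hand side of the implicit two-diode equation evaluated at (V, I)."""
    Vd = V + I * R_s
    return (I_PV
            - I_o1 * (np.exp(Vd / (a1 * Vt)) - 1.0)
            - I_o2 * (np.exp(Vd / (a2 * Vt)) - 1.0)
            - Vd / R_p)

# Synthetic I-V curve standing in for measured/datasheet points.
V_data = np.linspace(0.0, 21.0, 40)
I_data = np.array([brentq(lambda I: model_current(V, I, **true) - I, -5.0, 15.0)
                   for V in V_data])

def rmse(p):
    a1, a2, I_o2, R_s = p
    # Residual of the implicit equation at the measured points (common shortcut).
    return np.sqrt(np.mean((I_data - model_current(V_data, I_data, a1, a2, I_o2, R_s)) ** 2))

bounds = [(0.8, 1.5), (1.2, 2.5), (1e-10, 1e-6), (0.05, 1.0)]
result = differential_evolution(rmse, bounds, seed=1, tol=1e-10)
print("a1, a2, I_o2, R_s =", result.x, " RMSE =", result.fun)
```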

  3. Accurate 3D modeling of Cable in Conduit Conductor type superconductors by X-ray microtomography

    Energy Technology Data Exchange (ETDEWEB)

    Tiseanu, Ion, E-mail: tiseanu@infim.ro [National Institute for Laser, Plasma and Radiation Physics (INFLPR), Bucharest-Magurele (Romania); Zani, Louis [CEA/Cadarache – Institut de Recherche sur la Fusion Magnetique, St Paul-lez-Durance Cedex (France); Tiseanu, Catalin-Stefan [University of Bucharest, Faculty of Mathematics and Computer Science (Romania); Craciunescu, Teddy; Dobrea, Cosmin [National Institute for Laser, Plasma and Radiation Physics (INFLPR), Bucharest-Magurele (Romania)

    2015-10-15

    Graphical abstract: - Highlights: • Quality control monitoring of Cable in Conduit Conductor (CICC) by X-ray tomography. • High resolution (≈40 μm) X-ray tomography images of CICC section up to 300 mm long. • Assignment of vast majority of strand trajectories over relevant section of CICC. • Non-invasive accurate measurements of local void fraction statistics. - Abstract: Operation and data acquisition of an X-ray microtomography developed at INFLPR are optimized to produce stacks of 2-D high-resolution tomographic sections of Cable in Conduit Conductor (CICC) type superconductors demanded in major fusion projects. High-resolution images for CICC samples (486 NbTi&Cu strands of 0.81 mm diameter, jacketed in rectangular stainless steel pipes of 22 × 26 mm{sup 2}) are obtained by a combination of high energy/intensity and small focus spot X-ray source and high resolution/efficiency detector array. The stack of reconstructed slices is then used for quantitative analysis consisting of accurate strand positioning, determination of the local and global void fraction and 3D strand trajectory assignment for relevant fragments of cable (∼300 mm). The strand positioning algorithm is based on the application of Gabor Annular filtering followed by local maxima detection. The local void fraction is extensively mapped by employing local segmentation methods at a space resolution of about 50 sub-cells sized to be relevant to the triplet of triplet twisting pattern. For the strand trajectory assignment part we developed a global algorithm of the linear programming type which provides the vast majority of correct strand trajectories for most practical applications. For carefully manufactured benchmark CICC samples over 99% of the trajectories are correctly assigned. For production samples the efficiency of the algorithm is around 90%. Trajectory assignment of a high proportion of the strands is a crucial factor for the derivation of statistical properties of the cable

  4. A simple and accurate model for Love wave based sensors: Dispersion equation and mass sensitivity

    Directory of Open Access Journals (Sweden)

    Jiansheng Liu

    2014-07-01

    Full Text Available The dispersion equation is an important tool for analyzing propagation properties of acoustic waves in layered structures. For Love wave (LW) sensors, the dispersion equation with an isotropic-considered substrate is too rough to get accurate solutions; the full dispersion equation with a piezoelectric-considered substrate is too complicated to get simple and practical expressions for optimizing LW-based sensors. In this work, a dispersion equation is introduced for Love waves in a layered structure with an anisotropic-considered substrate and an isotropic guiding layer; an intuitive expression for mass sensitivity is also derived based on the dispersion equation. The new equations are in simple forms similar to the previously reported simplified model with an isotropic substrate. By introducing the Maxwell-Weichert model, these equations are also applicable to the LW device incorporating a viscoelastic guiding layer; the mass velocity sensitivity and the mass propagation loss sensitivity are obtained from the real part and the imaginary part of the complex mass sensitivity, respectively. With Love waves in an elastic SiO2 layer on an ST-90°X quartz structure, for example, comparisons are carried out between the velocities and normalized sensitivities calculated by using different dispersion equations and corresponding mass sensitivities. Numerical results of the method presented in this work are very close to those of the method with a piezoelectric-considered substrate. Another numerical calculation is carried out for the case of a LW sensor with a viscoelastic guiding layer. If the viscosity of the layer is not too large, the effect on the real part of the velocity and the mass velocity sensitivity is relatively small; the propagation loss and the mass loss sensitivity are proportional to the viscosity of the guiding layer.
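
    For comparison, the classic simplified dispersion relation for a Love wave in an isotropic elastic layer (thickness h, shear velocity β₁, rigidity μ₁) over an isotropic substrate (β₂, μ₂) is the textbook form below, not the anisotropic equation derived in the paper; c is the phase velocity and k = ω/c the wavenumber, with β₁ < c < β₂:

```latex
\tan\!\left(k h \sqrt{\frac{c^{2}}{\beta_1^{2}} - 1}\right)
= \frac{\mu_2 \sqrt{1 - \dfrac{c^{2}}{\beta_2^{2}}}}{\mu_1 \sqrt{\dfrac{c^{2}}{\beta_1^{2}} - 1}},
\qquad \beta_1 < c < \beta_2
```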

  5. Stable, accurate and efficient computation of normal modes for horizontal stratified models

    Science.gov (United States)

    Wu, Bo; Chen, Xiaofei

    2016-08-01

    We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of `family of secular functions' that we herein call `adaptive mode observers' is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee, at the same time, no loss of any physically existent modes and high precision, without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up calculations using a smaller number of layers aided by the concept of `turning point', our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.
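
    The root-searching step can be pictured with a generic bracketing scan; this is not the paper's adaptive mode-observer scheme, and the secular function below is only a toy stand-in.

```python
# Generic root-bracketing sketch: scan a secular function F(c) over phase
# velocity, locate sign changes, and refine each bracketed root.
import numpy as np
from scipy.optimize import brentq

def secular(c):
    # Stand-in secular function whose zeros play the role of modal velocities.
    return np.cos(3.0 * c) - 0.2 * c

c_grid = np.linspace(0.5, 4.5, 400)          # fine scan to avoid missing roots
F = np.array([secular(c) for c in c_grid])

roots = []
for i in range(len(c_grid) - 1):
    if F[i] == 0.0:
        roots.append(c_grid[i])
    elif F[i] * F[i + 1] < 0.0:              # sign change brackets a root
        roots.append(brentq(secular, c_grid[i], c_grid[i + 1]))

print("modal roots found:", [round(r, 4) for r in roots])
```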

  6. Bring Your Own Device - Providing Reliable Model of Data Access

    Directory of Open Access Journals (Sweden)

    Stąpór Paweł

    2016-10-01

    Full Text Available The article presents a model of Bring Your Own Device (BYOD) as a network model which provides the user with reliable access to network resources. BYOD is a dynamically developing model which can be applied in many areas. A research network was launched in order to carry out the test, in which the Work Folders service was used as the BYOD service. This service allows the user to synchronize files between the device and the server. Access to the network is provided over wireless communication using the 802.11n standard. The obtained results are shown and analyzed in this article.

  7. Accurate Locally Conservative Discretizations for Modeling Multiphase Flow in Porous Media on General Hexahedra Grids

    KAUST Repository

    Wheeler, M.F.

    2010-09-06

    For many years there have been formulations considered for modeling single phase flow on general hexahedra grids. These include the extended mixed finite element method, and families of mimetic finite difference methods. In most of these schemes either no rate of convergence of the algorithm has been demonstrated both theoretically and computationally, or a more complicated saddle point system needs to be solved for an accurate solution. Here we describe a multipoint flux mixed finite element (MFMFE) method [5, 2, 3]. This method is motivated from the multipoint flux approximation (MPFA) method [1]. The MFMFE method is locally conservative with continuous flux approximations and is a cell-centered scheme for the pressure. Compared to the MPFA method, the MFMFE has a variational formulation, since it can be viewed as a mixed finite element with special approximating spaces and quadrature rules. The framework allows handling of hexahedral grids with non-planar faces by applying trilinear mappings from physical elements to reference cubic elements. In addition, there are several multiscale and multiphysics extensions such as the mortar mixed finite element method that allows the treatment of non-matching grids [4]. Extensions to the two-phase oil-water flow are considered. We reformulate the two-phase model in terms of total velocity, capillary velocity, water pressure, and water saturation. We choose water pressure and water saturation as primary variables. The total velocity is driven by the gradient of the water pressure and total mobility. An iterative coupling scheme is employed for the coupled system. This scheme allows treatment of different time scales for the water pressure and water saturation. In each time step, we first solve the pressure equation using the MFMFE method.

  8. Enhancement of a Turbulence Sub-Model for More Accurate Predictions of Vertical Stratifications in 3D Coastal and Estuarine Modeling

    Directory of Open Access Journals (Sweden)

    Wenrui Huang

    2010-03-01

    Full Text Available This paper presents an improvement of Mellor and Yamada's 2nd-order turbulence model in the Princeton Ocean Model (POM) for better predictions of vertical stratifications of salinity in estuaries. The model was evaluated in the strongly stratified estuary, Apalachicola River, Florida, USA. The three-dimensional hydrodynamic model was applied to study the stratified flow and salinity intrusion in the estuary in response to tide, wind, and buoyancy forces. Model tests indicate that model predictions overestimate the stratification when using the default turbulent parameters. Analytic studies of density-induced and wind-induced flows indicate that accurate estimation of vertical eddy viscosity plays an important role in describing vertical profiles. Initial model revision experiments show that the traditional approach of modifying empirical constants in the turbulence model leads to numerical instability. In order to improve the performance of the turbulence model while maintaining numerical stability, a stratification factor was introduced to allow adjustment of the vertical turbulent eddy viscosity and diffusivity. Sensitivity studies indicate that the stratification factor, ranging from 1.0 to 1.2, does not cause numerical instability in Apalachicola River. Model simulations show that increasing the turbulent eddy viscosity by a stratification factor of 1.12 results in an optimal agreement between model predictions and observations in the case study presented in this study. Using the proposed stratification factor provides a useful way for coastal modelers to improve the turbulence model performance in predicting vertical turbulent mixing in stratified estuaries and coastal waters.

  9. Can segmental model reductions quantify whole-body balance accurately during dynamic activities?

    Science.gov (United States)

    Jamkrajang, Parunchaya; Robinson, Mark A; Limroongreungrat, Weerawat; Vanrenterghem, Jos

    2017-07-01

    When investigating whole-body balance in dynamic tasks, adequately tracking the whole-body centre of mass (CoM) or derivatives such as the extrapolated centre of mass (XCoM) can be crucial but add considerable measurement efforts. The aim of this study was to investigate whether reduced kinematic models can still provide adequate CoM and XCoM representations during dynamic sporting tasks. Seventeen healthy recreationally active subjects (14 males and 3 females; age, 24.9±3.2 years; height, 177.3±6.9 cm; body mass, 72.6±7.0 kg) participated in this study. Participants completed three dynamic movements: jumping, kicking, and overarm throwing. Marker-based kinematic data were collected with 10 optoelectronic cameras at 250 Hz (Oqus Qualisys, Gothenburg, Sweden). The differences between (X)CoM from a full-body model (gold standard) and (X)CoM representations based on six selected model reductions were evaluated using a Bland-Altman approach. A threshold difference was set at ±2 cm to help the reader interpret which model can still provide an acceptable (X)CoM representation. Antero-posterior and medio-lateral displacement profiles of the CoM representation based on lower limbs, trunk and upper limbs showed strong agreement, slightly reduced for lower limbs and trunk only. Representations based on lower limbs only showed less strong agreement, particularly for XCoM in kicking. Overall, our results provide justification of the use of certain model reductions for specific needs, saving measurement effort whilst limiting the error of tracking (X)CoM trajectories in the context of whole-body balance investigation. Copyright © 2017 Elsevier B.V. All rights reserved.
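
    The agreement analysis reduces to a Bland-Altman computation of bias and 95% limits of agreement between the full-body and reduced-model (X)CoM traces, checked against the ±2 cm threshold. A sketch on made-up traces:

```python
# Sketch of a Bland-Altman comparison on synthetic data: bias and 95% limits
# of agreement between a full-body CoM trace and a reduced-model trace.
import numpy as np

rng = np.random.default_rng(0)
com_full = np.cumsum(rng.normal(0, 0.3, 500)) / 100.0          # [m], toy trace
com_reduced = com_full + rng.normal(0.005, 0.008, 500)          # [m], toy trace

diff = com_reduced - com_full
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias*100:.2f} cm, limits of agreement = ±{loa*100:.2f} cm")
print("within ±2 cm threshold:", bias + loa < 0.02 and bias - loa > -0.02)
```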

  10. Importance of Accurate Computation of Secondary Electron Emission for Modeling Spacecraft Charging

    OpenAIRE

    Clerc, Sebastien; Dennison, JR

    2005-01-01

    The secondary electron yield is a critical process in establishing the charge balance in spacecraft charging and the subsequent determination of the equilibrium potential. Spacecraft charging codes use a parameterized expression for the secondary electron yield δ(Eo) as a function of incident electron energy, Eo. A critical step in accurately characterizing a particular spacecraft material is establishing the most efficient and accurate way to determine the fitting parameters in terms of the ...

  11. Numerical Computation of a Continuous-thrust State Transition Matrix Incorporating Accurate Hardware and Ephemeris Models

    Science.gov (United States)

    Ellison, Donald; Conway, Bruce; Englander, Jacob

    2015-01-01

    A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed form expressions for these derivatives may be provided. If a high fidelity dynamics model, which might include perturbing forces such as the gravitational effect from multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed and a method for the computation of the time of flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
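
    A reduced sketch of propagating a state transition matrix alongside the state with an eighth-order Dormand-Prince integrator is shown below; it uses two-body dynamics only as a stand-in for the high-fidelity hardware and ephemeris models discussed above, and the initial orbit is arbitrary.

```python
# Sketch: integrate the state and its 6x6 state transition matrix together
# with SciPy's DOP853 (eighth-order Dormand-Prince pair), two-body dynamics only.
import numpy as np
from scipy.integrate import solve_ivp

mu = 398600.4418  # km^3/s^2, Earth

def dynamics(t, y):
    r, v = y[:3], y[3:6]
    Phi = y[6:].reshape(6, 6)
    rn = np.linalg.norm(r)
    a = -mu * r / rn**3
    # Gravity gradient for the variational (STM) equations.
    G = mu * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)
    A = np.zeros((6, 6))
    A[:3, 3:] = np.eye(3)
    A[3:, :3] = G
    dPhi = A @ Phi
    return np.concatenate([v, a, dPhi.ravel()])

y0 = np.concatenate([[7000.0, 0.0, 0.0], [0.0, 7.546, 0.0], np.eye(6).ravel()])
sol = solve_ivp(dynamics, (0.0, 3600.0), y0, method="DOP853", rtol=1e-10, atol=1e-12)
Phi_final = sol.y[6:, -1].reshape(6, 6)
print("STM after one hour:\n", np.round(Phi_final, 4))
```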

  12. SPARC: MASS MODELS FOR 175 DISK GALAXIES WITH SPITZER PHOTOMETRY AND ACCURATE ROTATION CURVES

    Energy Technology Data Exchange (ETDEWEB)

    Lelli, Federico; McGaugh, Stacy S. [Department of Astronomy, Case Western Reserve University, Cleveland, OH 44106 (United States); Schombert, James M., E-mail: federico.lelli@case.edu [Department of Physics, University of Oregon, Eugene, OR 97403 (United States)

    2016-12-01

    We introduce SPARC ( Spitzer Photometry and Accurate Rotation Curves): a sample of 175 nearby galaxies with new surface photometry at 3.6  μ m and high-quality rotation curves from previous H i/H α studies. SPARC spans a broad range of morphologies (S0 to Irr), luminosities (∼5 dex), and surface brightnesses (∼4 dex). We derive [3.6] surface photometry and study structural relations of stellar and gas disks. We find that both the stellar mass–H i mass relation and the stellar radius–H i radius relation have significant intrinsic scatter, while the H i   mass–radius relation is extremely tight. We build detailed mass models and quantify the ratio of baryonic to observed velocity ( V {sub bar}/ V {sub obs}) for different characteristic radii and values of the stellar mass-to-light ratio (ϒ{sub ⋆}) at [3.6]. Assuming ϒ{sub ⋆} ≃ 0.5 M {sub ⊙}/ L {sub ⊙} (as suggested by stellar population models), we find that (i) the gas fraction linearly correlates with total luminosity; (ii) the transition from star-dominated to gas-dominated galaxies roughly corresponds to the transition from spiral galaxies to dwarf irregulars, in line with density wave theory; and (iii)  V {sub bar}/ V {sub obs} varies with luminosity and surface brightness: high-mass, high-surface-brightness galaxies are nearly maximal, while low-mass, low-surface-brightness galaxies are submaximal. These basic properties are lost for low values of ϒ{sub ⋆} ≃ 0.2 M {sub ⊙}/ L {sub ⊙} as suggested by the DiskMass survey. The mean maximum-disk limit in bright galaxies is ϒ{sub ⋆} ≃ 0.7 M {sub ⊙}/ L {sub ⊙} at [3.6]. The SPARC data are publicly available and represent an ideal test bed for models of galaxy formation.
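
    The baryonic rotation velocity entering V_bar/V_obs is commonly constructed from the gas, disk and bulge components as below (standard practice for SPARC-type mass models, stated here from general usage rather than quoted from the paper); the signed form allows regions where a component's contribution is effectively negative, such as central gas depressions:

```latex
V_{\mathrm{bar}}^{2}(R) \;=\; \left|V_{\mathrm{gas}}\right| V_{\mathrm{gas}}
\;+\; \Upsilon_{\star}\left|V_{\mathrm{disk}}\right| V_{\mathrm{disk}}
\;+\; \Upsilon_{\mathrm{bul}}\left|V_{\mathrm{bul}}\right| V_{\mathrm{bul}}
```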

  13. An Interpretable Machine Learning Model for Accurate Prediction of Sepsis in the ICU.

    Science.gov (United States)

    Nemati, Shamim; Holder, Andre; Razmi, Fereshteh; Stanley, Matthew D; Clifford, Gari D; Buchman, Timothy G

    2018-04-01

    Sepsis is among the leading causes of morbidity, mortality, and cost overruns in critically ill patients. Early intervention with antibiotics improves survival in septic patients. However, no clinically validated system exists for real-time prediction of sepsis onset. We aimed to develop and validate an Artificial Intelligence Sepsis Expert algorithm for early prediction of sepsis. Observational cohort study. Academic medical center from January 2013 to December 2015. Over 31,000 admissions to the ICUs at two Emory University hospitals (development cohort), in addition to over 52,000 ICU patients from the publicly available Medical Information Mart for Intensive Care-III ICU database (validation cohort). Patients who met the Third International Consensus Definitions for Sepsis (Sepsis-3) prior to or within 4 hours of their ICU admission were excluded, resulting in roughly 27,000 and 42,000 patients within our development and validation cohorts, respectively. None. High-resolution vital signs time series and electronic medical record data were extracted. A set of 65 features (variables) were calculated on an hourly basis and passed to the Artificial Intelligence Sepsis Expert algorithm to predict onset of sepsis in the following T hours (where T = 12, 8, 6, or 4). Artificial Intelligence Sepsis Expert was used to predict onset of sepsis in the following T hours and to produce a list of the most significant contributing factors. For the 12-, 8-, 6-, and 4-hour ahead prediction of sepsis, Artificial Intelligence Sepsis Expert achieved an area under the receiver operating characteristic curve in the range of 0.83-0.85. Performance of the Artificial Intelligence Sepsis Expert on the development and validation cohorts was indistinguishable. Using data available in the ICU in real-time, Artificial Intelligence Sepsis Expert can accurately predict the onset of sepsis in an ICU patient 4-12 hours prior to clinical recognition. A prospective study is necessary to determine the

  14. An accurate description of Aspergillus niger organic acid batch fermentation through dynamic metabolic modelling.

    Science.gov (United States)

    Upton, Daniel J; McQueen-Mason, Simon J; Wood, A Jamie

    2017-01-01

    Aspergillus niger fermentation has provided the chief source of industrial citric acid for over 50 years. Traditional strain development of this organism was achieved through random mutagenesis, but advances in genomics have enabled the development of genome-scale metabolic modelling that can be used to make predictive improvements in fermentation performance. The parent citric acid-producing strain of A. niger , ATCC 1015, has been described previously by a genome-scale metabolic model that encapsulates its response to ambient pH. Here, we report the development of a novel double optimisation modelling approach that generates time-dependent citric acid fermentation using dynamic flux balance analysis. The output from this model shows a good match with empirical fermentation data. Our studies suggest that citric acid production commences upon a switch to phosphate-limited growth and this is validated by fitting to empirical data, which confirms the diauxic growth behaviour and the role of phosphate storage as polyphosphate. The calibrated time-course model reflects observed metabolic events and generates reliable in silico data for industrially relevant fermentative time series, and for the behaviour of engineered strains suggesting that our approach can be used as a powerful tool for predictive metabolic engineering.
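
    The structure of a dynamic FBA loop can be sketched with a deliberately tiny linear program; the toy "network" below switches its objective from growth to citrate secretion once a phosphate pool is exhausted, mimicking the phosphate-limited switch described above. All numbers are invented, and the real study uses a genome-scale A. niger model with a double-optimisation formulation.

```python
# Toy dynamic FBA loop (illustration only): a two-reaction "model" whose
# objective switches from growth to citrate secretion when phosphate runs out.
import numpy as np
from scipy.optimize import linprog

y_mu, y_cit = 10.0, 6.0             # substrate demand per unit growth / citrate flux
v_upt_max, Km = 5.0, 0.5            # Michaelis-Menten uptake kinetics (made up)
X, S, P, C = 0.01, 50.0, 0.2, 0.0   # biomass, substrate, phosphate, citrate
dt, t_end = 0.1, 48.0

for step in range(int(t_end / dt)):
    uptake_cap = v_upt_max * S / (Km + S)
    growing = P > 1e-6
    # variables: [v_mu, v_cit, v_s]; carbon balance y_mu*v_mu + y_cit*v_cit = v_s
    c = [-1.0, 0.0, 0.0] if growing else [0.0, -1.0, 0.0]
    bounds = [(0, None) if growing else (0, 0), (0, None), (0, uptake_cap)]
    res = linprog(c, A_eq=[[y_mu, y_cit, -1.0]], b_eq=[0.0],
                  bounds=bounds, method="highs")
    v_mu, v_cit, v_s = res.x
    X += v_mu * X * dt
    S = max(S - v_s * X * dt, 0.0)
    P = max(P - 0.5 * v_mu * X * dt, 0.0)   # phosphate drawn only by growth
    C += v_cit * X * dt

print(f"final biomass {X:.2f}, residual substrate {S:.2f}, citrate {C:.2f}")
```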

  15. Levels of Interaction Provided by Online Distance Education Models

    Science.gov (United States)

    Alhih, Mohammed; Ossiannilsson, Ebba; Berigel, Muhammet

    2017-01-01

    Interaction plays a significant role in fostering usability and quality in online education. It is one of the quality standards that reveal evidence of practice in online distance education models. This research study aims to evaluate the levels of interaction in the practices of distance education centres. It is aimed to provide online distance…

  16. Short communication: Genetic lag represents commercial herd genetic merit more accurately than the 4-path selection model.

    Science.gov (United States)

    Dechow, C D; Rogers, G W

    2018-05-01

    Expectation of genetic merit in commercial dairy herds is routinely estimated using a 4-path genetic selection model that was derived for a closed population, but commercial herds using artificial insemination sires are not closed. The 4-path model also predicts a higher rate of genetic progress in elite herds that provide artificial insemination sires than in commercial herds that use such sires, which counters other theoretical assumptions and observations of realized genetic responses. The aim of this work is to clarify whether genetic merit in commercial herds is more accurately reflected under the assumptions of the 4-path genetic response formula or by a genetic lag formula. We demonstrate by tracing the transmission of genetic merit from parents to offspring that the rate of genetic progress in commercial dairy farms is expected to be the same as that in the genetic nucleus. The lag in genetic merit between the nucleus and commercial farms is a function of sire and dam generation interval, the rate of genetic progress in elite artificial insemination herds, and genetic merit of sires and dams. To predict how strategies such as the use of young versus daughter-proven sires, culling heifers following genomic testing, or selective use of sexed semen will alter genetic merit in commercial herds, genetic merit expectations for commercial herds should be modeled using genetic lag expectations. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
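
    One textbook steady-state form of the lag, consistent with the quantities listed above, is sketched below as an illustration rather than the paper's full expression (which also involves the merit of the sires and dams actually used). It assumes commercial cows are bred to nucleus sires while replacement females come from commercial dams:

```latex
M_c(t) = \tfrac{1}{2}\left[M_n(t - L_s) + M_c(t - L_d)\right]
\;\;\Longrightarrow\;\;
\underbrace{M_n(t) - M_c(t)}_{\text{steady-state lag}} = \Delta G\,(L_s + L_d)
```

    Here M_n and M_c are nucleus and commercial mean merit, ΔG the annual rate of genetic gain in the nucleus, and L_s, L_d the sire and dam generation intervals; substituting M_c(t) = M_n(t) − D and M_n(t − L) = M_n(t) − ΔG·L gives D = ΔG(L_s + L_d), i.e. both tiers progress at the same rate but the commercial tier trails by a constant lag.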

  17. High Fidelity Non-Gravitational Force Models for Precise and Accurate Orbit Determination of TerraSAR-X

    Science.gov (United States)

    Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. Basically, the precise reconstruction of the satellite’s trajectory is based on the Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a proper analysis of the orbit quality, systematics in the orbit products have been identified, which reflect deficits in the non-gravitational force models. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun. Therefore, the direct SRP has an effect on the lateral stability of the determined orbit. The indirect effect of the solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight, which is reflected by the illuminated Earth surface in the visible, and the emission of the Earth body in the infrared spectra. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.

  18. Advancements and challenges in generating accurate animal models of gestational diabetes mellitus

    Science.gov (United States)

    Pasek, Raymond C.

    2013-01-01

    The maintenance of glucose homeostasis during pregnancy is critical to the health and well-being of both the mother and the developing fetus. Strikingly, approximately 7% of human pregnancies are characterized by insufficient insulin production or signaling, resulting in gestational diabetes mellitus (GDM). In addition to the acute health concerns of hyperglycemia, women diagnosed with GDM during pregnancy have an increased incidence of complications during pregnancy as well as an increased risk of developing type 2 diabetes (T2D) later in life. Furthermore, children born to mothers diagnosed with GDM have increased incidence of perinatal complications, including hypoglycemia, respiratory distress syndrome, and macrosomia, as well as an increased risk of being obese or developing T2D as adults. No single environmental or genetic factor is solely responsible for the disease; instead, a variety of risk factors, including weight, ethnicity, genetics, and family history, contribute to the likelihood of developing GDM, making the generation of animal models that fully recapitulate the disease difficult. Here, we discuss and critique the various animal models that have been generated to better understand the etiology of diabetes during pregnancy and its physiological impacts on both the mother and the fetus. Strategies utilized are diverse in nature and include the use of surgical manipulation, pharmacological treatment, nutritional manipulation, and genetic approaches in a variety of animal models. Continued development of animal models of GDM is essential for understanding the consequences of this disease as well as providing insights into potential treatments and preventative measures. PMID:24085033

  19. Neonatal tolerance induction enables accurate evaluation of gene therapy for MPS I in a canine model.

    Science.gov (United States)

    Hinderer, Christian; Bell, Peter; Louboutin, Jean-Pierre; Katz, Nathan; Zhu, Yanqing; Lin, Gloria; Choa, Ruth; Bagel, Jessica; O'Donnell, Patricia; Fitzgerald, Caitlin A; Langan, Therese; Wang, Ping; Casal, Margret L; Haskins, Mark E; Wilson, James M

    2016-09-01

    High fidelity animal models of human disease are essential for preclinical evaluation of novel gene and protein therapeutics. However, these studies can be complicated by exaggerated immune responses against the human transgene. Here we demonstrate that dogs with a genetic deficiency of the enzyme α-l-iduronidase (IDUA), a model of the lysosomal storage disease mucopolysaccharidosis type I (MPS I), can be rendered immunologically tolerant to human IDUA through neonatal exposure to the enzyme. Using MPS I dogs tolerized to human IDUA as neonates, we evaluated intrathecal delivery of an adeno-associated virus serotype 9 vector expressing human IDUA as a therapy for the central nervous system manifestations of MPS I. These studies established the efficacy of the human vector in the canine model, and allowed for estimation of the minimum effective dose, providing key information for the design of first-in-human trials. This approach can facilitate evaluation of human therapeutics in relevant animal models, and may also have clinical applications for the prevention of immune responses to gene and protein replacement therapies. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Towards Relaxing the Spherical Solar Radiation Pressure Model for Accurate Orbit Predictions

    Science.gov (United States)

    Lachut, M.; Bennett, J.

    2016-09-01

    The well-known cannonball model has been used ubiquitously to capture the effects of atmospheric drag and solar radiation pressure on satellites and/or space debris for decades. While it lends itself naturally to spherical objects, its validity in the case of non-spherical objects has been debated heavily for years throughout the space situational awareness community. One of the leading motivations to improve orbit predictions by relaxing the spherical assumption is the ongoing demand for more robust and reliable conjunction assessments. In this study, we explore the orbit propagation of a flat plate in a near-GEO orbit under the influence of solar radiation pressure, using a Lambertian BRDF model. Consequently, this approach will account for the spin rate and orientation of the object, which are typically determined in practice using a light curve analysis. Here, simulations will be performed which systematically reduce the spin rate to demonstrate the point at which the spherical model no longer describes the orbital elements of the spinning plate. Further understanding of this threshold would provide insight into when a higher fidelity model should be used, thus resulting in improved orbit propagations. Therefore, the work presented here is of particular interest to organizations and researchers that maintain their own catalog, and/or perform conjunction analyses.
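
    The cannonball SRP acceleration referred to above is commonly written as follows (standard form, with C_R the radiation pressure coefficient, A/m the area-to-mass ratio, r the heliocentric distance, and ŝ the Sun-to-object unit vector):

```latex
\mathbf{a}_{\mathrm{SRP}} = C_R \,\frac{A}{m}\, P_{\odot}
\left(\frac{1\,\mathrm{AU}}{r}\right)^{2} \hat{\mathbf{s}},
\qquad P_{\odot} = \frac{\Phi_{\odot}}{c} \approx 4.56\times10^{-6}\ \mathrm{N\,m^{-2}}
```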

  1. An accurate coarse-grained model for chitosan polysaccharides in aqueous solution.

    Directory of Open Access Journals (Sweden)

    Levan Tsereteli

    Full Text Available Computational models can provide detailed information about molecular conformations and interactions in solution, which is currently inaccessible by other means in many cases. Here we describe an efficient and precise coarse-grained model for long polysaccharides in aqueous solution under different physico-chemical conditions such as pH and ionic strength. The model is carefully constructed based on all-atom simulations of small saccharides and metadynamics sampling of the dihedral angles in the glycosidic links, which represent the most flexible degrees of freedom of the polysaccharides. The model is validated against experimental data for Chitosan molecules in solution with various degrees of deacetylation, and is shown to closely reproduce the available experimental data. For long polymers, subtle differences of the free energy maps of the glycosidic links are found to significantly affect the measurable polymer properties. Therefore, for titratable monomers the free energy maps of the corresponding links are updated according to the current charge of the monomers. We then characterize the microscopic and mesoscopic structural properties of large chitosan polysaccharides in solution for a wide range of solvent pH and ionic strength, and investigate the effect of polymer length and degree and pattern of deacetylation on the polymer properties.

  2. Morphometric analysis of Russian Plain's small lakes on the base of accurate digital bathymetric models

    Science.gov (United States)

    Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana

    2016-04-01

    Lake morphometry refers to the physical factors (shape, size, structure, etc.) that determine the lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters, as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota are also dependent on detailed knowledge of the morphometry and flow characteristics. In recent years, lake bathymetric surveys were carried out using an echo sounder with high bottom depth resolution and GPS coordinate determination. A few digital bathymetric models have been created with a 10 × 10 m spatial grid for some small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of depths and slopes of small lakes, as well as the advantages of digital models over traditional methods.

  3. An accurate locally active memristor model for S-type negative differential resistance in NbO{sub x}

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Gary A.; Musunuru, Srinitya; Zhang, Jiaming; Lee, James; Hsieh, Cheng-Chih; Jackson, Warren; Jeon, Yoocharn; Henze, Dick; Li, Zhiyong; Stanley Williams, R. [Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, California 94304 (United States); Vandenberghe, Ken [PTD-PPS, Hewlett-Packard Company, 1070 NE Circle Boulevard, Corvallis, Oregon 97330 (United States)

    2016-01-11

    A number of important commercial applications would benefit from the introduction of easily manufactured devices that exhibit current-controlled, or “S-type,” negative differential resistance (NDR). A leading example is emerging non-volatile memory based on crossbar array architectures. Due to the inherently linear current vs. voltage characteristics of candidate non-volatile memristor memory elements, individual memory cells in these crossbar arrays can be addressed only if a highly non-linear circuit element, termed a “selector,” is incorporated in the cell. Selectors based on a layer of niobium oxide sandwiched between two electrodes have been investigated by a number of groups because the NDR they exhibit provides a promisingly large non-linearity. We have developed a highly accurate compact dynamical model for their electrical conduction that shows that the NDR in these devices results from a thermal feedback mechanism. A series of electrothermal measurements and numerical simulations corroborate this model. These results reveal that the leakage currents can be minimized by thermally isolating the selector or by incorporating materials with larger activation energies for electron motion.
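
    The thermal-feedback mechanism described above can be illustrated with a generic steady-state calculation: a thermally activated conductance balanced against Newtonian cooling yields a branch where voltage falls as current rises. The parameter values below are generic placeholders, not the paper's calibrated compact model.

```python
# Illustrative sketch of thermal-feedback S-type NDR (generic parameters):
# steady state of G(T) = G0*exp(-Ea/kT) with Newtonian cooling to ambient.
import numpy as np

kB = 8.617e-5                 # eV/K
Ea, G0 = 0.25, 1.0e-2         # activation energy [eV], conductance prefactor [S]
T_amb, R_th = 300.0, 1.0e5    # ambient temperature [K], thermal resistance [K/W]

# Parameterize the quasi-static curve by device temperature T:
T = np.linspace(T_amb + 1.0, 900.0, 200)
P = (T - T_amb) / R_th             # dissipated power balancing heat loss [W]
G = G0 * np.exp(-Ea / (kB * T))    # thermally activated conductance [S]
V = np.sqrt(P / G)                 # since P = V^2 * G
I = G * V

# A negative dV/dI section signals the current-controlled (S-type) NDR branch.
dV_dI = np.gradient(V, I)
print("S-type NDR present:", bool(np.any(dV_dI < 0)))
print("turnover voltage ~ %.2f V at %.2e A" % (V[np.argmax(V)], I[np.argmax(V)]))
```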

  4. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    International Nuclear Information System (INIS)

    Gray, Alan; Harlen, Oliver G.; Harris, Sarah A.; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J.; Pearson, Arwen R.; Read, Daniel J.; Richardson, Robin A.

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational

  5. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  6. An accurate locally active memristor model for S-type negative differential resistance in NbOx

    International Nuclear Information System (INIS)

    Gibson, Gary A.; Musunuru, Srinitya; Zhang, Jiaming; Lee, James; Hsieh, Cheng-Chih; Jackson, Warren; Jeon, Yoocharn; Henze, Dick; Li, Zhiyong; Stanley Williams, R.; Vandenberghe, Ken

    2016-01-01

    A number of important commercial applications would benefit from the introduction of easily manufactured devices that exhibit current-controlled, or “S-type,” negative differential resistance (NDR). A leading example is emerging non-volatile memory based on crossbar array architectures. Due to the inherently linear current vs. voltage characteristics of candidate non-volatile memristor memory elements, individual memory cells in these crossbar arrays can be addressed only if a highly non-linear circuit element, termed a “selector,” is incorporated in the cell. Selectors based on a layer of niobium oxide sandwiched between two electrodes have been investigated by a number of groups because the NDR they exhibit provides a promisingly large non-linearity. We have developed a highly accurate compact dynamical model for their electrical conduction that shows that the NDR in these devices results from a thermal feedback mechanism. A series of electrothermal measurements and numerical simulations corroborate this model. These results reveal that the leakage currents can be minimized by thermally isolating the selector or by incorporating materials with larger activation energies for electron motion

  7. Model of Providing Assistive Technologies in Special Education Schools.

    Science.gov (United States)

    Lersilp, Suchitporn; Putthinoi, Supawadee; Chakpitak, Nopasit

    2015-05-14

    Most students diagnosed with disabilities in Thai special education schools received assistive technologies, but this did not guarantee the greatest benefits. The purpose of this study was to survey the provision, use and needs of assistive technologies, as well as the perspectives of key informants regarding a model of providing them in special education schools. The participants were selected by the purposive sampling method, and they comprised 120 students with visual, physical, hearing or intellectual disabilities from four special education schools in Chiang Mai, Thailand; and 24 key informants such as parents or caregivers, teachers, school principals and school therapists. The instruments consisted of an assistive technology checklist and a semi-structured interview. Results showed that a category of assistive technologies was provided for students with disabilities, with the highest being "services", followed by "media" and then "facilities". Furthermore, mostly students with physical disabilities were provided with assistive technologies, but those with visual disabilities needed it more. Finally, the model of providing assistive technologies was composed of 5 components: Collaboration; Holistic perspective; Independent management of schools; Learning systems and a production manual for users; and Development of an assistive technology center, driven by 3 major sources such as Government and Private organizations, and Schools.

  8. Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?

    Science.gov (United States)

    Ramarohetra, J.; Sultan, B.

    2012-04-01

    Agriculture is considered as the most climate dependant human activity. In West Africa and especially in the sudano-sahelian zone, rain-fed agriculture - that represents 93% of cultivated areas and is the means of support of 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - that estimate crop yield from climate information (e.g rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analysis to quantify the impact of different strategies implementation - crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecast - on yields, (ii) for early warning systems and to (iii) assess future food security. Yet, the successful application of these models depends on the accuracy of their climatic drivers. In the sudano-sahelian zone , the quality of precipitation estimations is then a key factor to understand and anticipate climate impacts on agriculture via crop modelling and yield estimations. Different kinds of precipitation estimations can be used. Ground measurements have long-time series but an insufficient network density, a large proportion of missing values, delay in reporting time, and they have limited availability. An answer to these shortcomings may lie in the field of remote sensing that provides satellite-based precipitation estimations. However, satellite-based rainfall estimates (SRFE) are not a direct measurement but rather an estimation of precipitation. Used as an input for crop models, it determines the performance of the simulated yield, hence SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight satellite-based rainfall daily products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and

  9. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    Science.gov (United States)

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions for the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care have been critically reviewed. Furthermore, the most well-known machine learning methods have been explained, and the confusion between a statistical approach and machine learning has been clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  10. Small pores in soils: Is the physico-chemical environment accurately reflected in biogeochemical models ?

    Science.gov (United States)

    Weber, Tobias K. D.; Riedel, Thomas

    2015-04-01

    Free water is a prerequisite for the chemical reactions and biological activity in the Earth's upper crust that are essential to life. The void volume between the solid compounds provides space for water, air, and organisms that thrive on the consumption of minerals and organic matter, thereby regulating soil carbon turnover. However, not all water in the pore space in soils and sediments is in its liquid state. This is a result of adhesive forces, which reduce the water activity in small pores and at charged mineral surfaces. This water has a lower tendency to react chemically in solution, as the additional binding energy lowers its activity. In this work, we estimated the amount of soil pore water that is thermodynamically different from a simple aqueous solution. The quantity of soil pore water with properties different from liquid water was found to systematically increase with increasing clay content. The significance of this is that the grain size and surface area apparently affect the thermodynamic state of water. This implies that current methods to determine water content, traditionally based on bulk density or gravimetric water content after drying at 105°C, overestimate the amount of free water in a soil, especially at higher clay content. Our findings have consequences for biogeochemical processes in soils, e.g. nutrients may be contained in water that is not free, which could enhance their preservation. From water activity measurements on a set of various soils with 0 to 100 wt-% clay, we can show that 5 to 130 mg H2O per g of soil can generally be considered as unsuitable for microbial respiration. These results may therefore provide a unifying explanation for the grain size dependency of organic matter preservation in sedimentary environments and call for a revised view on the biogeochemical environment in soils and sediments. This could allow a different type of process-oriented modelling.

  11. Surgeon-performed touch preparation of breast core needle biopsies may provide accurate same-day diagnosis and expedite treatment planning.

    Science.gov (United States)

    Gadgil, Pranjali V; Korourian, Soheila; Malak, Sharp; Ochoa, Daniela; Lipschitz, Riley; Henry-Tillman, Ronda; Suzanne Klimberg, V

    2014-04-01

    We aimed to determine the accuracy of surgeon-performed touch-preparation cytology (TPC) of breast core-needle biopsies (CNB) and the ability to use TPC results to initiate treatment planning at the same patient visit. A single-institution retrospective review of TPC results of ultrasound-guided breast CNB was performed. All TPC slides were prepared by surgeons performing the biopsy and interpreted by the pathologist. TPC results were reported as positive/suspicious, atypical, negative/benign, or deferred; these were compared with final pathology of cores to calculate accuracy. Treatment planning was noted as having taken place if the patient had requisition of advanced imaging, referrals, or surgical planning undertaken during the same visit. Four hundred forty-seven CNB specimens with corresponding TPC were evaluated from 434 patient visits, and 203 samples (45.4 %) were malignant on final pathology. When the deferred, atypical, and benign results were considered negative and positive/suspicious results were considered positive, sensitivity and specificity were 83.7 % (77.9-88.5 %) and 98.4 % (95.9-99.6 %), respectively; positive and negative predictive values were 97.7 % (94.2-99.4 %) and 87.9 % (83.4-91.5 %), respectively. In practice, patients with atypical or deferred results were asked to await final pathology. An accurate same-day diagnosis (TPC positive/suspicious) was hence feasible in 83.7 % (170 of 203) of malignant and 79.5 % (194 of 244) of benign cases (TPC negative). Of patients who had a same-day diagnosis of a new malignancy, 77.3 % had treatment planning initiated at the same visit. Surgeon-performed TPC of breast CNB is an accurate method of same-day diagnosis that allows treatment planning to be initiated at the same visit and may serve to expedite patient care.
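
    The reported accuracy figures follow from a standard 2×2 analysis. The sketch below uses counts consistent with the percentages quoted above (TP = 170, FN = 33, TN = 240, FP = 4, treating atypical, deferred and benign TPC results as negative) and exact Clopper-Pearson 95% confidence intervals; the paper's own interval method is not stated in the abstract.

```python
# Sketch: diagnostic accuracy of a binary test from a 2x2 table, with exact
# (Clopper-Pearson) 95% confidence intervals.
from scipy.stats import beta

def cp_interval(k, n, alpha=0.05):
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

TP, FN, TN, FP = 170, 33, 240, 4      # counts consistent with the abstract
metrics = {
    "sensitivity": (TP, TP + FN),
    "specificity": (TN, TN + FP),
    "PPV": (TP, TP + FP),
    "NPV": (TN, TN + FN),
}
for name, (k, n) in metrics.items():
    lo, hi = cp_interval(k, n)
    print(f"{name}: {k/n:.1%} (95% CI {lo:.1%} - {hi:.1%})")
```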

  12. Optimally Accurate Second-Order Time-Domain Finite-Difference Scheme for Acoustic, Electromagnetic, and Elastic Wave Modeling

    Directory of Open Access Journals (Sweden)

    C. Bommaraju

    2005-01-01

    Full Text Available Numerical methods are extremely useful in solving real-life problems with complex materials and geometries. However, numerical methods in the time domain suffer from artificial numerical dispersion. Standard numerical techniques which are second-order in space and time, like the conventional Finite Difference 3-point (FD3) method, the Finite-Difference Time-Domain (FDTD) method, and the Finite Integration Technique (FIT), provide estimates of the error of discretized numerical operators rather than the error of the numerical solutions computed using these operators. Here, optimally accurate time-domain FD operators which are second-order in time as well as in space are derived. Optimal accuracy means the greatest attainable accuracy for a particular type of scheme, e.g., second-order FD, for some particular grid spacing. The modified operators lead to an implicit scheme. Using the first-order Born approximation, this implicit scheme is transformed into a two-step explicit scheme, namely a predictor-corrector scheme. The stability condition (maximum time step for a given spatial grid interval) for the various modified schemes is roughly equal to that for the corresponding conventional scheme. The modified FD scheme (FDM) reduces numerical dispersion by almost a factor of 40 in the 1-D case, compared to the FD3, FDTD, and FIT. The CPU time for the FDM scheme is twice that required by the FD3 method. The simulated synthetic data for a 2-D P-SV (elastodynamics) problem computed using the modified scheme are 30 times more accurate than synthetics computed using a conventional scheme, at a cost of only 3.5 times as much CPU time. The FDM is of particular interest in the modeling of large-scale wave propagation and scattering problems (spatial dimensions of one thousand wavelengths or more, or observation time intervals that are very long compared to the reference time step), for instance, in ultrasonic antenna and synthetic scattering data modeling for Non
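
    The baseline that the modified operators are compared against is a conventional second-order explicit finite-difference update. The sketch below illustrates that baseline for the 1-D scalar wave equation with a CFL-limited time step; the optimally accurate (modified) operators themselves are not reproduced here, and all values are illustrative.

```python
# Illustrative sketch only: a conventional second-order explicit update for the
# 1-D acoustic wave equation u_tt = c^2 u_xx (the "FD3"-style baseline).
import numpy as np

c, L, nx, t_end = 1.0, 1.0, 201, 0.5
dx = L / (nx - 1)
dt = 0.9 * dx / c                      # CFL-limited time step (stability condition)
x = np.linspace(0.0, L, nx)

u_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # initial pulse
u = u_prev.copy()                          # zero initial velocity (first-order start)
r2 = (c * dt / dx) ** 2

t = 0.0
while t < t_end:
    u_next = np.zeros_like(u)
    u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_prev, u = u, u_next                  # fixed (Dirichlet) ends remain zero
    t += dt

print("max |u| at t =", round(t, 3), ":", float(np.abs(u).max()))
```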

  13. A logical model provides insights into T cell receptor signaling.

    Directory of Open Access Journals (Sweden)

    Julio Saez-Rodriguez

    2007-08-01

    Full Text Available Cellular decisions are determined by complex molecular interaction networks. Large-scale signaling networks are currently being reconstructed, but the kinetic parameters and quantitative data that would allow for dynamic modeling are still scarce. Therefore, computational studies based upon the structure of these networks are of great interest. Here, a methodology relying on a logical formalism is applied to the functional analysis of the complex signaling network governing the activation of T cells via the T cell receptor, the CD4/CD8 co-receptors, and the accessory signaling receptor CD28. Our large-scale Boolean model, which comprises 94 nodes and 123 interactions and is based upon well-established qualitative knowledge from primary T cells, reveals important structural features (e.g., feedback loops and network-wide dependencies and recapitulates the global behavior of this network for an array of published data on T cell activation in wild-type and knock-out conditions. More importantly, the model predicted unexpected signaling events after antibody-mediated perturbation of CD28 and after genetic knockout of the kinase Fyn that were subsequently experimentally validated. Finally, we show that the logical model reveals key elements and potential failure modes in network functioning and provides candidates for missing links. In summary, our large-scale logical model for T cell activation proved to be a promising in silico tool, and it inspires immunologists to ask new questions. We think that it holds valuable potential in foreseeing the effects of drugs and network modifications.
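
    The logical formalism amounts to repeatedly applying Boolean update rules until the network settles into an attractor. The toy sketch below uses three invented nodes purely to show the mechanics; it is not part of the 94-node T cell model.

```python
# Toy illustration of a synchronous Boolean (logical) network update.
# Nodes and rules are invented for the example only.

rules = {
    "TCR":  lambda s: s["TCR"],                 # input node, held constant
    "CD28": lambda s: s["CD28"],                # input node, held constant
    "ERK":  lambda s: s["TCR"] and s["CD28"],   # hypothetical downstream readout
}

def step(state):
    return {node: bool(rule(state)) for node, rule in rules.items()}

state = {"TCR": True, "CD28": True, "ERK": False}
seen = []
while state not in seen:                        # iterate until a repeated state (attractor)
    seen.append(state)
    state = step(state)
print("attractor state:", state)
```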

  14. National Water Model: Providing the Nation with Actionable Water Intelligence

    Science.gov (United States)

    Aggett, G. R.; Bates, B.

    2017-12-01

    The National Water Model (NWM) provides national, street-level detail of water movement through time and space. Operating hourly, this flood of information offers enormous benefits in the form of water resource management, natural disaster preparedness, and the protection of life and property. The Geo-Intelligence Division at the NOAA National Water Center supplies forecasters and decision-makers with timely, actionable water intelligence through the processing of billions of NWM data points every hour. These datasets include current streamflow estimates, short and medium range streamflow forecasts, and many other ancillary datasets. The sheer amount of NWM data produced yields a dataset too large to allow for direct human comprehension. As such, it is necessary to undergo model data post-processing, filtering, and data ingestion by visualization web apps that make use of cartographic techniques to bring attention to the areas of highest urgency. This poster illustrates NWM output post-processing and cartographic visualization techniques being developed and employed by the Geo-Intelligence Division at the NOAA National Water Center to provide national actionable water intelligence.

  15. 3D Vision Provides Shorter Operative Time and More Accurate Intraoperative Surgical Performance in Laparoscopic Hiatal Hernia Repair Compared With 2D Vision.

    Science.gov (United States)

    Leon, Piera; Rivellini, Roberta; Giudici, Fabiola; Sciuto, Antonio; Pirozzi, Felice; Corcione, Francesco

    2017-04-01

    The aim of this study is to evaluate if 3-dimensional high-definition (3D) vision in laparoscopy can prompt advantages over conventional 2D high-definition vision in hiatal hernia (HH) repair. Between September 2012 and September 2015, we randomized 36 patients affected by symptomatic HH to undergo surgery; 17 patients underwent 2D laparoscopic HH repair, whereas 19 patients underwent the same operation in 3D vision. No conversion to open surgery occurred. Overall operative time was significantly reduced in the 3D laparoscopic group compared with the 2D one (69.9 vs 90.1 minutes, P = .006). Operative time to perform laparoscopic crura closure did not differ significantly between the 2 groups. We observed a tendency toward faster crura closure in the 3D group in the subgroup of patients with mesh positioning (7.5 vs 8.9 minutes, P = .09). Nissen fundoplication was faster in the 3D group without mesh positioning (P = .07). 3D vision in laparoscopic HH repair aids the surgeon's visualization and seems to lead to a reduction in operative time. Advantages can result from the enhanced spatial perception of narrow spaces. Shorter operative time and more accurate surgery translate into benefits for patients and cost savings, compensating for the high costs of the 3D technology. However, more data from larger series are needed to firmly assess the advantages of 3D over 2D vision in laparoscopic HH repair.

  16. Minimum required number of specimen records to develop accurate species distribution models

    NARCIS (Netherlands)

    Proosdij, van A.S.J.; Sosef, M.S.M.; Wieringa, J.J.; Raes, N.

    2016-01-01

    Species distribution models (SDMs) are widely used to predict the occurrence of species. Because SDMs generally use presence-only data, validation of the predicted distribution and assessing model accuracy is challenging. Model performance depends on both sample size and species’ prevalence, being

  17. Minimum required number of specimen records to develop accurate species distribution models

    NARCIS (Netherlands)

    Proosdij, van A.S.J.; Sosef, M.S.M.; Wieringa, Jan; Raes, N.

    2015-01-01

    Species Distribution Models (SDMs) are widely used to predict the occurrence of species. Because SDMs generally use presence-only data, validation of the predicted distribution and assessing model accuracy is challenging. Model performance depends on both sample size and species’ prevalence, being

  18. 3D Printing of Intracranial Aneurysms Using Fused Deposition Modeling Offers Highly Accurate Replications.

    Science.gov (United States)

    Frölich, A M J; Spallek, J; Brehmer, L; Buhk, J-H; Krause, D; Fiehler, J; Kemmling, A

    2016-01-01

    As part of a multicenter cooperation (Aneurysm-Like Synthetic bodies for Testing Endovascular devices in 3D Reality) with focus on the implementation of additive manufacturing in neuroradiologic practice, we systematically assessed the technical feasibility and accuracy of several additive manufacturing techniques. We evaluated the method of fused deposition modeling for the production of aneurysm models replicating patient-specific anatomy. 3D rotational angiographic data from 10 aneurysms were processed to obtain volumetric models suitable for fused deposition modeling. A hollow aneurysm model with connectors for silicone tubes was fabricated by using acrylonitrile butadiene styrene. Support material was dissolved, and surfaces were finished by using NanoSeal. The resulting models were filled with iodinated contrast media. 3D rotational angiography of the models was acquired, and aneurysm geometry was compared with the original patient data. Reproduction of hollow aneurysm models was technically feasible in 8 of 10 cases, with aneurysm sizes ranging from 41 to 2928 mm³ (aneurysm diameter, 3-19 mm). A high level of anatomic accuracy was observed, with a mean Dice index of 93.6% ± 2.4%. Obstructions were encountered in small vessel segments. Fused deposition modeling is a promising technique, which allows rapid and precise replication of cerebral aneurysms. The porosity of the models can be overcome by surface finishing. Models produced with fused deposition modeling may serve as educational and research tools and could be used to individualize treatment planning. © 2016 by American Journal of Neuroradiology.
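
    The anatomic accuracy figure is a Dice overlap between the original and replicated aneurysm geometries. A minimal sketch of the metric on synthetic binary volumes is given below.

```python
# Sketch of the Dice overlap metric between two binary segmentations.
# The arrays here are synthetic placeholders, not aneurysm data.
import numpy as np

def dice_index(a, b):
    """Dice = 2|A∩B| / (|A| + |B|) for boolean arrays a and b."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((50, 50, 50), dtype=bool); a[10:40, 10:40, 10:40] = True
b = np.zeros_like(a);                   b[12:42, 10:40, 10:40] = True
print(f"Dice = {dice_index(a, b):.3f}")
```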

  19. Accurate gradually varied flow model for water surface profile in circular channels

    Directory of Open Access Journals (Sweden)

    Ali R. Vatankhah

    2013-12-01

    Full Text Available The paper presents an accurate approximation of the Froude number (F) for circular channels, which is part of the gradually varied flow (GVF) equation. The proposed approximation is developed using an optimization technique to minimize the relative error between the exact and estimated values, resulting in a maximum error of 0.6%, compared with 14% for the existing approximate method. The approximate F is used in the governing GVF equation to develop an exact analytical solution of this equation using the concept of simplest partial fractions. A comparison of the proposed and approximate solutions for backwater length shows that the error of the existing approximate solution could reach up to 30% for large normal flow depths.
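
    For reference, the exact Froude number being approximated follows from standard circular-channel geometry, F² = Q²T/(gA³), with flow area and top width expressed through the wetted central angle. The sketch below computes it for arbitrary example values.

```python
# Sketch of the exact Froude number for a partially full circular channel.
# Discharge and depth below are arbitrary example values.
import math

def froude_circular(Q, D, y, g=9.81):
    """Exact F = sqrt(Q^2 T / (g A^3)) for flow depth y in a pipe of diameter D."""
    theta = 2.0 * math.acos(1.0 - 2.0 * y / D)      # central angle of the wetted segment
    A = (D ** 2 / 8.0) * (theta - math.sin(theta))  # flow area
    T = D * math.sin(theta / 2.0)                   # free-surface (top) width
    return math.sqrt(Q ** 2 * T / (g * A ** 3))

print(f"F = {froude_circular(Q=0.5, D=1.0, y=0.4):.3f}")
```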

  20. Governance, Government, and the Search for New Provider Models

    Directory of Open Access Journals (Sweden)

    Richard B. Saltman

    2016-01-01

    Full Text Available A central problem in designing effective models of provider governance in health systems has been to ensure an appropriate balance between the concerns of public sector and/or government decision-makers, on the one hand, and of non-governmental health services actors in civil society and private life, on the other. In tax-funded European health systems up to the 1980s, the state and other public sector decision-makers played a dominant role over health service provision, typically operating hospitals through national or regional governments on a command-and-control basis. In a number of countries, however, this state role has started to change, with governments first stepping out of direct service provision and now de facto pushed to focus more on steering provider organizations rather than on direct public management. In this new approach to provider governance, the state has pulled back into a regulatory role that introduces market-like incentives and management structures, which then apply to both public and private sector providers alike. This article examines some of the main operational complexities in implementing this new governance strategy, specifically from a service provision (as opposed to a mostly financing or even regulatory) perspective. After briefly reviewing some of the key theoretical dilemmas, the paper presents two case studies where this new approach was put into practice: primary care in Sweden and hospitals in Spain. The article concludes that good governance today needs to reflect practical operational realities if it is to have the desired effect on health sector reform outcomes.

  1. Genomic inference accurately predicts the timing and severity of a recent bottleneck in a non-model insect population

    Science.gov (United States)

    McCoy, Rajiv C.; Garud, Nandita R.; Kelley, Joanna L.; Boggs, Carol L.; Petrov, Dmitri A.

    2015-01-01

    The analysis of molecular data from natural populations has allowed researchers to answer diverse ecological questions that were previously intractable. In particular, ecologists are often interested in the demographic history of populations, information that is rarely available from historical records. Methods have been developed to infer demographic parameters from genomic data, but it is not well understood how inferred parameters compare to true population history or depend on aspects of experimental design. Here we present and evaluate a method of SNP discovery using RNA-sequencing and demographic inference using the program δaδi, which uses a diffusion approximation to the allele frequency spectrum to fit demographic models. We test these methods in a population of the checkerspot butterfly Euphydryas gillettii. This population was intentionally introduced to Gothic, Colorado in 1977 and has since experienced extreme fluctuations including bottlenecks of fewer than 25 adults, as documented by nearly annual field surveys. Using RNA-sequencing of eight individuals from Colorado and eight individuals from a native population in Wyoming, we generate the first genomic resources for this system. While demographic inference is commonly used to examine ancient demography, our study demonstrates that our inexpensive, all-in-one approach to marker discovery and genotyping provides sufficient data to accurately infer the timing of a recent bottleneck. This demographic scenario is relevant for many species of conservation concern, few of which have sequenced genomes. Our results are remarkably insensitive to sample size or number of genomic markers, which has important implications for applying this method to other non-model systems. PMID:24237665

  2. Proposition of a multicriteria model to select logistics services providers

    Directory of Open Access Journals (Sweden)

    Miriam Catarina Soares Aharonovitz

    2014-06-01

    Full Text Available This study aims to propose a multicriteria model to select logistics service providers through the development of a decision tree. The methodology consists of a survey, which resulted in a sample of 181 responses. The sample was analyzed using statistical methods, among them descriptive statistics, multivariate analysis, analysis of variance, and parametric tests to compare means. Based on these results, it was possible to obtain the decision tree and information to support the multicriteria analysis. The AHP (Analytic Hierarchy Process) was applied to determine the influence of the data and thus ensure better consistency in the analysis. The decision tree categorizes the criteria according to the decision levels (strategic, tactical, and operational). Furthermore, it allows a generic evaluation of the importance of each criterion in the supplier selection process from the point of view of logistics services contractors.
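
    The AHP step reduces to extracting the principal eigenvector of a pairwise comparison matrix and checking its consistency ratio. The sketch below uses an invented 3x3 comparison matrix purely for illustration.

```python
# Sketch of the AHP weighting step: principal-eigenvector priorities plus a
# consistency check. The 3x3 matrix (e.g., cost vs. quality vs. delivery
# reliability) is an invented example, not data from the study.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)       # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
print("weights:", np.round(weights, 3), " CR =", round(ci / ri, 3))  # CR < 0.1 is acceptable
```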

  3. Full-waveform modeling of Zero-Offset Electromagnetic Induction for Accurate Characterization of Subsurface Electrical Properties

    Science.gov (United States)

    Moghadas, D.; André, F.; Vereecken, H.; Lambot, S.

    2009-04-01

    Water is a vital resource for human needs, agriculture, sanitation, and industrial supply. The knowledge of soil water dynamics and solute transport is essential in agricultural and environmental engineering, as it controls plant growth, hydrological processes, and the contamination of surface and subsurface water. Increased irrigation efficiency also plays an important role in water conservation, reducing drainage and mitigating some of the water pollution and soil salinity. Geophysical methods are effective techniques for monitoring the vadose zone. In particular, electromagnetic induction (EMI) can provide, in a non-invasive way, important information about the soil electrical properties at the field scale, which are mainly correlated to important variables such as soil water content, salinity, and texture. EMI is based on the radiation of a VLF EM wave into the soil. Depending on its electrical conductivity, Foucault currents are generated and produce a secondary EM field which is then recorded by the EMI system. Advanced techniques for EMI data interpretation resort to inverse modeling. Yet, a major gap in current knowledge is the limited accuracy of the forward model used for describing the EMI-subsurface system, which usually relies on strongly simplifying assumptions. We present a new low-frequency EMI method based on Vector Network Analyzer (VNA) technology and advanced forward modeling using a linear system of complex transfer functions for describing the EMI loop antenna and a three-dimensional solution of Maxwell's equations for wave propagation in multilayered media. The VNA permits simple, internationally standardized calibration of the EMI system. We derived a Green's function for the zero-offset, off-ground horizontal loop antenna and also proposed an optimal integration path for faster evaluation of the spatial-domain Green's function from its spectral counterpart. This new integration path shows fewer oscillations compared with the real path and permits to avoid the

  4. THE IMPACT OF ACCURATE EXTINCTION MEASUREMENTS FOR X-RAY SPECTRAL MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Randall K. [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Valencic, Lynne A. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Corrales, Lia, E-mail: lynne.a.valencic@nasa.gov [MIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Avenue, 37-241, Cambridge, MA 02139 (United States)

    2016-02-20

    Interstellar extinction includes both absorption and scattering of photons from interstellar gas and dust grains, and it has the effect of altering a source's spectrum and its total observed intensity. However, while multiple absorption models exist, there are no useful scattering models in standard X-ray spectrum fitting tools, such as XSPEC. Nonetheless, X-ray halos, created by scattering from dust grains, are detected around even moderately absorbed sources, and the impact on an observed source spectrum can be significant, if modest, compared to direct absorption. By convolving the scattering cross section with dust models, we have created a spectral model as a function of energy, type of dust, and extraction region that can be used with models of direct absorption. This will ensure that the extinction model is consistent and enable direct connections to be made between a source's X-ray spectral fits and its UV/optical extinction.

  5. Efficient and Accurate Log-Levy Approximations of Levy-Driven LIBOR Models

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David

    2012-01-01

    The LIBOR market model is very popular for pricing interest rate derivatives but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term grows exponentially fast (as a function of the tenor length). We consider a Lévy-driven ...

  6. Fault Tolerance for Industrial Actuators in Absence of Accurate Models and Hardware Redundancy

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik

    2015-01-01

    This paper investigates Fault-Tolerant Control for closed-loop systems where only coarse models are available and there is lack of actuator and sensor redundancies. The problem is approached in the form of a typical servomotor in closed-loop. A linear model is extracted from input/output data to ...

  7. Efficient and Accurate Log-Levy Approximations of Levy-Driven LIBOR Models

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David

    2012-01-01

    The LIBOR market model is very popular for pricing interest rate derivatives but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term grows exponentially fast (as a function of the tenor length). We consider a Lévy...

  8. Efficient and accurate log-Lévy approximations to Lévy driven LIBOR models

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David

    2011-01-01

    The LIBOR market model is very popular for pricing interest rate derivatives, but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term is growing exponentially fast (as a function of the tenor length). In this work, we...

  9. Highly Accurate Tree Models Derived from Terrestrial Laser Scan Data: A Method Description

    Directory of Open Access Journals (Sweden)

    Jan Hackenberg

    2014-05-01

    Full Text Available This paper presents a method for fitting cylinders into a point cloud, derived from a terrestrial laser-scanned tree. Utilizing high scan quality data as the input, the resulting models describe the branching structure of the tree, capable of detecting branches with a diameter smaller than a centimeter. The cylinders are stored as a hierarchical tree-like data structure encapsulating parent-child neighbor relations and incorporating the tree’s direction of growth. This structure enables the efficient extraction of tree components, such as the stem or a single branch. The method was validated both by applying a comparison of the resulting cylinder models with ground truth data and by an analysis between the input point clouds and the models. Tree models were accomplished representing more than 99% of the input point cloud, with an average distance from the cylinder model to the point cloud within sub-millimeter accuracy. After validation, the method was applied to build two allometric models based on 24 tree point clouds as an example of the application. Computation terminated successfully within less than 30 min. For the model predicting the total above ground volume, the coefficient of determination was 0.965, showing the high potential of terrestrial laser-scanning for forest inventories.

  10. Accurate Fabrication of Hydroxyapatite Bone Models with Porous Scaffold Structures by Using Stereolithography

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu, E-mail: c-maeda@jwri.osaka-u.ac.jp [Joining and Welding Research Institute, Osaka University, 11-1 Mihogaoka, Ibaraki City, Osaka 567-0047 (Japan)

    2011-05-15

    Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed by using computer-aided design. The scaffold models, composed of acrylic resin with hydroxyapatite particles at 45 vol.%, were fabricated by using stereolithography, a computer-aided manufacturing technique. After dewaxing and sintering heat treatment processes, the ceramic scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. By using computer-aided analysis, it was found that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulation and promote regeneration of new bones.

  11. Accurate Fabrication of Hydroxyapatite Bone Models with Porous Scaffold Structures by Using Stereolithography

    International Nuclear Information System (INIS)

    Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu

    2011-01-01

    Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed by using computer-aided design. The scaffold models, composed of acrylic resin with hydroxyapatite particles at 45 vol.%, were fabricated by using stereolithography, a computer-aided manufacturing technique. After dewaxing and sintering heat treatment processes, the ceramic scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. By using computer-aided analysis, it was found that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulation and promote regeneration of new bones.

  12. submitter A model for the accurate computation of the lateral scattering of protons in water

    CERN Document Server

    Bellinzona, EV; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-01-01

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time.
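
    Structurally, the model combines a Molière-like electromagnetic core with a parametrized nuclear tail. The sketch below mimics that core-plus-tail structure with a simple Gaussian tail and made-up widths; it does not reproduce the paper's fitted tail function or FLUKA-based parameters.

```python
# Illustrative parametrization only: narrow core plus broader tail for a lateral
# dose profile. Widths and tail fraction are invented numbers.
import numpy as np

def lateral_profile(r_mm, sigma_core=3.0, sigma_tail=9.0, tail_fraction=0.05):
    core = np.exp(-0.5 * (r_mm / sigma_core) ** 2) / (2 * np.pi * sigma_core ** 2)
    tail = np.exp(-0.5 * (r_mm / sigma_tail) ** 2) / (2 * np.pi * sigma_tail ** 2)
    return (1 - tail_fraction) * core + tail_fraction * tail

r = np.linspace(0.0, 30.0, 7)
print(np.round(lateral_profile(r), 6))
```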

  13. Sea Ice Trends in Climate Models Only Accurate in Runs with Biased Global Warming

    Science.gov (United States)

    Rosenblum, Erica; Eisenman, Ian

    2017-08-01

    Observations indicate that the Arctic sea ice cover is rapidly retreating while the Antarctic sea ice cover is steadily expanding. State-of-the-art climate models, by contrast, typically simulate a moderate decrease in both the Arctic and Antarctic sea ice covers. However, in each hemisphere there is a small subset of model simulations that have sea ice trends similar to the observations. Based on this, a number of recent studies have suggested that the models are consistent with the observations in each hemisphere when simulated internal climate variability is taken into account. Here we examine sea ice changes during 1979-2013 in simulations from the most recent Coupled Model Intercomparison Project (CMIP5) as well as the Community Earth System Model Large Ensemble (CESM-LE), drawing on previous work that found a close relationship in climate models between global-mean surface temperature and sea ice extent. We find that all of the simulations with 1979-2013 Arctic sea ice retreat as fast as observed have considerably more global warming than observations during this time period. Using two separate methods to estimate the sea ice retreat that would occur under the observed level of global warming in each simulation in both ensembles, we find that simulated Arctic sea ice retreat as fast as observed would occur less than 1% of the time. This implies that the models are not consistent with the observations. In the Antarctic, we find that simulated sea ice expansion as fast as observed typically corresponds with too little global warming, although these results are more equivocal. We show that because of this, the simulations do not capture the observed asymmetry between Arctic and Antarctic sea ice trends. This suggests that the models may be getting the right sea ice trends for the wrong reasons in both polar regions.
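
    The diagnostic can be pictured as a regression of each simulation's sea ice trend on its global-mean warming over the same period, evaluated at the observed warming. The sketch below does this on synthetic ensemble numbers only.

```python
# Conceptual sketch: regress ensemble-member ice trends on their warming trends,
# then read off the ice trend expected at the observed warming. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_members = 40
warming = rng.normal(0.9, 0.2, n_members)                      # K per 35 yr, synthetic ensemble
ice_trend = -1.8 * warming + rng.normal(0.0, 0.3, n_members)   # 10^6 km^2 per 35 yr

slope, intercept = np.polyfit(warming, ice_trend, 1)
observed_warming = 0.6                                         # placeholder value
print("expected ice trend at observed warming:",
      round(slope * observed_warming + intercept, 2), "x10^6 km^2 / 35 yr")
```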

  14. A tri-stage cluster identification model for accurate analysis of seismic catalogs

    Directory of Open Access Journals (Sweden)

    S. J. Nanda

    2013-02-01

    Full Text Available In this paper we propose a tri-stage cluster identification model that is a combination of a simple single-iteration distance algorithm and an iterative K-means algorithm. In this study of earthquake seismicity, the model considers event location, time, and magnitude information from earthquake catalog data to efficiently classify events as either background or mainshock and aftershock sequences. Tests on a synthetic seismicity catalog demonstrate the efficiency of the proposed model in terms of accuracy percentage (94.81% for background and 89.46% for aftershocks). The close agreement between lambda and cumulative plots for the ideal synthetic catalog and that generated by the proposed model also supports the accuracy of the proposed technique. There is flexibility in the model design to allow for proper selection of location and magnitude ranges, depending upon the nature of the mainshocks present in the catalog. The effectiveness of the proposed model is also evaluated by the classification of events in three historic catalogs: California, Japan, and Indonesia. As expected, for both the synthetic and historic catalog analyses, it is observed that the density of events classified as background is almost uniform throughout the region, whereas the density of aftershock events is higher near the mainshocks.
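
    A stripped-down version of the clustering idea, grouping events by location and time with K-means after feature scaling, is sketched below on synthetic data; the single-iteration distance stage and the magnitude handling of the full tri-stage model are omitted.

```python
# Minimal sketch: K-means on scaled space-time features to separate a diffuse
# background from a compact aftershock-like cluster. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
background = np.column_stack([rng.uniform(0, 100, 300),   # x (km)
                              rng.uniform(0, 100, 300),   # y (km)
                              rng.uniform(0, 365, 300)])  # time (days)
aftershocks = np.column_stack([rng.normal(50, 2, 100),
                               rng.normal(50, 2, 100),
                               rng.normal(200, 5, 100)])
events = np.vstack([background, aftershocks])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(events))
print("events per cluster:", np.bincount(labels))
```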

  15. Making it Easy to Construct Accurate Hydrological Models that Exploit High Performance Computers (Invited)

    Science.gov (United States)

    Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.

    2013-12-01

    This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers depend on an entire distribution, possibly requiring multiple compilers and special instructions depending on the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.

  16. Accurate model annotation of a near-atomic resolution cryo-EM map.

    Science.gov (United States)

    Hryc, Corey F; Chen, Dong-Hua; Afonine, Pavel V; Jakana, Joanita; Wang, Zhao; Haase-Pettingell, Cameron; Jiang, Wen; Adams, Paul D; King, Jonathan A; Schmid, Michael F; Chiu, Wah

    2017-03-21

    Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms nor their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages.

  17. Accurate modeling of a DOI capable small animal PET scanner using GATE

    International Nuclear Information System (INIS)

    Zagni, F.; D'Ambrosio, D.; Spinelli, AE.; Cicoria, G.; Fanti, S.; Marengo, M.

    2013-01-01

    In this work we developed a Monte Carlo (MC) model of the Sedecal Argus pre-clinical PET scanner, using GATE (Geant4 Application for Tomographic Emission). This is a dual-ring scanner which features DOI compensation by means of two layers of detector crystals (LYSO and GSO). Geometry of detectors and sources, pulse readout, and selection of coincidence events were modeled with GATE, while a separate code was developed in order to emulate the processing of digitized data (for example, customized time windows and data flow saturation), the final binning of the lines of response, and to reproduce the data output format of the scanner's acquisition software. Validation of the model was performed by modeling several phantoms used in experimental measurements, in order to compare the results of the simulations. Spatial resolution, sensitivity, scatter fraction, count rates, and NECR were tested. Moreover, the NEMA NU-4 phantom was modeled in order to check the image quality yielded by the model. Noise, contrast of cold and hot regions, and recovery coefficient were calculated and compared using images of the NEMA phantom acquired with our scanner. The energy spectrum of coincidence events due to the small amount of 176Lu in LYSO crystals, which was suitably included in our model, was also compared with experimental measurements. Spatial resolution, sensitivity, and scatter fraction showed an agreement within 7%. Comparison of the count rate curves was satisfactory, with values agreeing within the uncertainties over the range of activities typically used in research scans. Analysis of the NEMA phantom images also showed a good agreement between simulated and acquired data, within 9% for all the tested parameters. This work shows that basic MC modeling of this kind of system is possible using GATE as a base platform; extension through suitably written customized code allows for an adequate level of accuracy in the results. Our careful validation against experimental
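
    One of the compared figures of merit, the noise-equivalent count rate, is a simple function of the true, scattered, and random coincidence rates. A sketch using one common definition and placeholder rates follows.

```python
# Sketch of the noise-equivalent count rate (NECR) figure of merit, using one common
# definition NECR = T^2 / (T + S + R). The rates are placeholders, not Argus data.

def necr(trues, scatters, randoms):
    """NECR from true (T), scattered (S) and random (R) coincidence rates in cps."""
    return trues ** 2 / (trues + scatters + randoms)

print(f"NECR = {necr(trues=120e3, scatters=25e3, randoms=40e3):.0f} cps")
```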

  18. Color-SIFT model: a robust and an accurate shot boundary detection algorithm

    Science.gov (United States)

    Sharmila Kumari, M.; Shekar, B. H.

    2010-02-01

    In this paper, a new technique called the color-SIFT model is devised for shot boundary detection. Unlike the scale invariant feature transform model, which uses only grayscale information and misses important visual information regarding color, here we have adopted different color planes to extract keypoints, which are subsequently used to detect shot boundaries. The basic SIFT model has four stages, namely scale-space peak selection, keypoint localization, orientation assignment, and keypoint descriptor computation, and all four stages were employed to extract key descriptors in each color plane. The proposed model works on three different color planes, and a fusion is made to take a decision on the number of keypoint matches for shot boundary identification; it is hence different from the color global scale invariant feature transform that works on quantized images. In addition, the proposed algorithm possesses invariance to linear transformations and is robust to occlusion and noisy environments. Experiments have been conducted on the standard TRECVID video database to reveal the performance of the proposed model.
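
    The matching step can be approximated with off-the-shelf tools: SIFT keypoints per color plane, brute-force descriptor matching with a ratio test, and a match-count threshold for declaring a boundary. The sketch below assumes an OpenCV build that exposes cv2.SIFT_create and uses a placeholder threshold; it is not the paper's implementation.

```python
# Rough sketch of per-channel SIFT keypoint matching between consecutive frames.
# A low pooled match count suggests a shot boundary.
import cv2

def match_count(frame_a, frame_b, ratio=0.75):
    sift = cv2.SIFT_create()
    total = 0
    for chan_a, chan_b in zip(cv2.split(frame_a), cv2.split(frame_b)):
        _, des_a = sift.detectAndCompute(chan_a, None)
        _, des_b = sift.detectAndCompute(chan_b, None)
        if des_a is None or des_b is None:
            continue
        pairs = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
        total += sum(1 for p in pairs
                     if len(p) == 2 and p[0].distance < ratio * p[1].distance)
    return total

# Hypothetical usage (frames are HxWx3 uint8 arrays):
# is_boundary = match_count(prev_frame, next_frame) < 20
```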

  19. Modelling of Limestone Dissolution in Wet FGD Systems: The Importance of an Accurate Particle Size Distribution

    DEFF Research Database (Denmark)

    Kiil, Søren; Johnsson, Jan Erik; Dam-Johansen, Kim

    1999-01-01

    In wet flue gas desulphurisation (FGD) plants, the most common sorbent is limestone. Over the past 25 years, many attempts to model the transient dissolution of limestone particles in aqueous solutions have been performed, due to the importance for the development of reliable FGD simulation tools ... Danish limestone types with very different particle size distributions (PSDs). All limestones were of a high purity. Model predictions were found to be qualitatively in good agreement with experimental data without any use of adjustable parameters. Deviations between measurements and simulations were attributed primarily to the PSD measurements of the limestone particles, which were used as model inputs. The PSDs, measured using a laser diffraction-based Malvern analyser, were probably not representative of the limestone samples because agglomeration phenomena took place when the particles were ...

  20. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre ... the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise.

  1. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    International Nuclear Information System (INIS)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-01-01

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
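
    The intrinsic detector response was summarized with an asymmetric Gaussian, i.e. a Gaussian with different widths on either side of the peak. A fitting sketch on synthetic data is shown below.

```python
# Sketch of fitting an asymmetric Gaussian (different left/right widths) to a
# measured response profile. The sample data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def asym_gauss(x, amp, mu, sigma_left, sigma_right):
    sigma = np.where(x < mu, sigma_left, sigma_right)
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(-10, 10, 81)
y = asym_gauss(x, 1.0, 0.5, 1.2, 2.0) + np.random.default_rng(0).normal(0, 0.01, x.size)

popt, _ = curve_fit(asym_gauss, x, y, p0=[1.0, 0.0, 1.0, 1.0])
print("amp, mu, sigma_left, sigma_right =", np.round(popt, 2))
```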

  2. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    Energy Technology Data Exchange (ETDEWEB)

    Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036, Spain and Servei de Física Mèdica i Protecció Radiològica, Institut Català d’Oncologia, L’Hospitalet de Llobregat 08907 (Spain); Roé, Nuria [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036 (Spain); Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Complexo Hospitalario Universitario de Santiago de Compostela 15706, Spain and Grupo de Imagen Molecular, Instituto de Investigacións Sanitarias de Santiago de Compostela (IDIS), Galicia 15782 (Spain); Falcon, Carles; Ros, Domènec [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036, Spain and CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Pavía, Javier [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 080836 (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); and Servei de Medicina Nuclear, Hospital Clínic, Barcelona 08036 (Spain)

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  3. Affine-response model of molecular solvation of ions: Accurate predictions of asymmetric charging free energies

    Czech Academy of Sciences Publication Activity Database

    Bardhan, J. P.; Jungwirth, Pavel; Makowski, L.

    Roč. 137, č. 12 (2012), 124101/1-124101/6, ISSN 0021-9606. R&D Projects: GA MŠk LH12001. Institutional research plan: CEZ:AV0Z40550506. Keywords: ion solvation * continuum models * linear response. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 3.164, year: 2012

  4. Fast and accurate calculations for cumulative first-passage time distributions in Wiener diffusion models

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Kesselmeier, M.; Gondan, Matthias

    2012-01-01

    We propose an improved method for calculating the cumulative first-passage time distribution in Wiener diffusion models with two absorbing barriers. This distribution function is frequently used to describe responses and error probabilities in choice reaction time tasks. The present work extends ...

  5. A fast and accurate SystemC-AMS model for PLL

    NARCIS (Netherlands)

    Ma, K.; Leuken, R. van; Vidojkovic, M.; Romme, J.; Rampu, S.; Pflug, H.; Huang, L.; Dolmans, G.

    2011-01-01

    PLLs have become an important part of electrical systems. When designing a PLL, an efficient and reliable simulation platform for system evaluation is needed. However, the closed loop simulation of a PLL is time consuming. To address this problem, in this paper, a new PLL model containing both

  6. Efficient accurate syntactic direct translation models: one tree at a time

    NARCIS (Netherlands)

    Hassan, H.; Sima'an, K.; Way, A.

    2011-01-01

    A challenging aspect of Statistical Machine Translation from Arabic to English lies in bringing the Arabic source morpho-syntax to bear on the lexical as well as word-order choices of the English target string. In this article, we extend the feature-rich discriminative Direct Translation Model 2

  7. Analysis of computational models for an accurate study of electronic excitations in GFP

    DEFF Research Database (Denmark)

    Schwabe, Tobias; Beerepoot, Maarten; Olsen, Jógvan Magnus Haugaard

    2015-01-01

    Using the chromophore of the green fluorescent protein (GFP), the performance of a hybrid RI-CC2 / polarizable embedding (PE) model is tested against a quantum chemical cluster approach. Moreover, the effect of the rest of the protein environment is studied by systematically increasing the size...

  8. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving ...

  9. Accurate reduction of a model of circadian rhythms by delayed quasi steady state assumptions

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš

    2014-01-01

    Roč. 139, č. 4 (2014), s. 577-585 ISSN 0862-7959 Grant - others:European Commission(XE) StochDetBioModel(328008) Program:FP7 Institutional support: RVO:67985840 Keywords : biochemical networks * gene regulatory networks * oscillating systems * periodic solution Subject RIV: BA - General Mathematics http://hdl.handle.net/10338.dmlcz/144135

  10. A semi-implicit, second-order-accurate numerical model for multiphase underexpanded volcanic jets

    Directory of Open Access Journals (Sweden)

    S. Carcano

    2013-11-01

    Full Text Available An improved version of the PDAC (Pyroclastic Dispersal Analysis Code; Esposti Ongaro et al., 2007) numerical model for the simulation of multiphase volcanic flows is presented and validated for the simulation of multiphase volcanic jets in supersonic regimes. The present version of PDAC includes second-order time and space discretizations and fully multidimensional advection discretizations in order to reduce numerical diffusion and enhance the accuracy of the original model. The model is tested on the problem of jet decompression in both two and three dimensions. For homogeneous jets, numerical results are consistent with experimental results at the laboratory scale (Lewis and Carlson, 1964). For nonequilibrium gas–particle jets, we consider monodisperse and bidisperse mixtures, and we quantify nonequilibrium effects in terms of the ratio between the particle relaxation time and a characteristic jet timescale. For coarse particles and low particle load, numerical simulations reproduce well both laboratory experiments and numerical simulations carried out with an Eulerian–Lagrangian model (Sommerfeld, 1993). At the volcanic scale, we consider steady-state conditions associated with the development of Vulcanian and sub-Plinian eruptions. For the finest particles produced in these regimes, we demonstrate that the solid phase is in mechanical and thermal equilibrium with the gas phase and that the jet decompression structure is well described by a pseudogas model (Ogden et al., 2008). Coarse particles, on the other hand, display significant nonequilibrium effects, which are associated with their larger relaxation time. Deviations from the equilibrium regime, with maximum velocity and temperature differences on the order of 150 m s−1 and 80 K across shock waves, occur especially during the rapid acceleration phases, and are able to substantially modify the jet dynamics with respect to the homogeneous case.

  11. Skinfold Prediction Equations Fail to Provide an Accurate Estimate of Body Composition in Elite Rugby Union Athletes of Caucasian and Polynesian Ethnicity.

    Science.gov (United States)

    Zemski, Adam J; Broad, Elizabeth M; Slater, Gary J

    2018-01-01

    Body composition in elite rugby union athletes is routinely assessed using surface anthropometry, which can be utilized to provide estimates of absolute body composition using regression equations. This study aims to assess the ability of available skinfold equations to estimate body composition in elite rugby union athletes who have unique physique traits and divergent ethnicity. The development of sport-specific and ethnicity-sensitive equations was also pursued. Forty-three male international Australian rugby union athletes of Caucasian and Polynesian descent underwent surface anthropometry and dual-energy X-ray absorptiometry (DXA) assessment. Body fat percent (BF%) was estimated using five previously developed equations and compared to DXA measures. Novel sport and ethnicity-sensitive prediction equations were developed using forward selection multiple regression analysis. Existing skinfold equations provided unsatisfactory estimates of BF% in elite rugby union athletes, with all equations demonstrating a 95% prediction interval in excess of 5%. The equations tended to underestimate BF% at low levels of adiposity, whilst overestimating BF% at higher levels of adiposity, regardless of ethnicity. The novel equations created explained a similar amount of variance to those previously developed (Caucasians 75%, Polynesians 90%). The use of skinfold equations, including the created equations, cannot be supported to estimate absolute body composition. Until a population-specific equation is established that can be validated to precisely estimate body composition, it is advocated to use a proven method, such as DXA, when absolute measures of lean and fat mass are desired, and raw anthropometry data routinely to derive an estimate of body composition change.
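
    Deriving a population-specific equation is, at its core, a regression of DXA body-fat percentage on skinfold predictors. The sketch below fits such an equation by ordinary least squares on randomly generated stand-in data, not the study's athletes.

```python
# Sketch of deriving a skinfold prediction equation by ordinary least squares.
# Predictor names and all values are stand-ins generated for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 43
sum7_skinfolds = rng.uniform(40, 120, n)            # mm, hypothetical sum of 7 sites
age = rng.uniform(19, 34, n)                        # years
bf_dxa = 5 + 0.12 * sum7_skinfolds + 0.1 * age + rng.normal(0, 1.5, n)

X = np.column_stack([np.ones(n), sum7_skinfolds, age])
coef, *_ = np.linalg.lstsq(X, bf_dxa, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((bf_dxa - pred) ** 2) / np.sum((bf_dxa - bf_dxa.mean()) ** 2)
print("intercept, b_skinfold, b_age =", np.round(coef, 3), " R^2 =", round(r2, 2))
```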

  12. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California 90024 (United States)

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
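
    The core collision query reduces to the minimum distance between the transformed gantry surface and the couch/patient point cloud, compared against a site-specific safety buffer. A sketch with placeholder point clouds and a placeholder buffer follows.

```python
# Conceptual sketch of a point-cloud clearance check with a safety buffer.
# Point clouds and the buffer value are placeholders, not scanned geometry.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
gantry_points = rng.uniform(-50, 50, (5000, 3))             # cm, stand-in for the scanned surface
couch_points = rng.uniform(-50, 50, (5000, 3)) + [0, 0, 120]

def min_clearance(cloud_a, cloud_b):
    """Smallest point-to-point distance between two 3-D point clouds."""
    dists, _ = cKDTree(cloud_b).query(cloud_a, k=1)
    return dists.min()

safety_buffer_cm = 3.0                                      # e.g., derived from model-vs-machine error
clearance = min_clearance(gantry_points, couch_points)
print("clearance %.1f cm -> %s" % (clearance, "OK" if clearance > safety_buffer_cm else "collision risk"))
```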

  13. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    International Nuclear Information System (INIS)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  14. An accurate two-phase approximate solution to the acute viral infection model

    Energy Technology Data Exchange (ETDEWEB)

    Perelson, Alan S [Los Alamos National Laboratory

    2009-01-01

    During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
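
    The underlying target cell limited model is a small ODE system, dT/dt = -βTV, dI/dt = βTV - δI, dV/dt = pI - cV, whose log-scale rise and fall motivates the two-phase approximation. The sketch below integrates it with round illustrative parameter values, not the fitted patient values from the paper.

```python
# Sketch of the target cell limited viral infection model integrated numerically.
# Parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

beta, delta, p, c = 3e-5, 4.0, 1e-2, 3.0      # illustrative rates (per-day units)

def rhs(t, y):
    T, I, V = y
    return [-beta * T * V, beta * T * V - delta * I, p * I - c * V]

sol = solve_ivp(rhs, (0, 10), [4e8, 0.0, 1.0], dense_output=True, rtol=1e-8)
t = np.linspace(0, 10, 11)
print(np.round(np.log10(np.maximum(sol.sol(t)[2], 1e-12)), 2))   # log10 viral load, day by day
```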

  15. Accurate Modeling of The Siemens S7 SCADA Protocol For Intrusion Detection And Digital Forensic

    Directory of Open Access Journals (Sweden)

    Amit Kleinmann

    2014-09-01

    Full Text Available The Siemens S7 protocol is commonly used in SCADA systems for communications between a Human Machine Interface (HMI) and the Programmable Logic Controllers (PLCs). This paper presents a model-based Intrusion Detection System (IDS) designed for S7 networks. The approach is based on the key observation that S7 traffic to and from a specific PLC is highly periodic; as a result, each HMI-PLC channel can be modeled using its own unique Deterministic Finite Automaton (DFA). The resulting DFA-based IDS is very sensitive and is able to flag anomalies such as a message appearing out of its position in the normal sequence or a message referring to a single unexpected bit. The intrusion detection approach was evaluated on traffic from two production systems. Despite its high sensitivity, the system had a very low false positive rate - over 99.82% of the traffic was identified as normal.
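
    A minimal sketch of the kind of DFA-based detector described above, for a single HMI-PLC channel (the message "symbols" and the resynchronisation rule are illustrative, not the authors' implementation):

```python
# Minimal DFA-style anomaly detector for a periodic request/response channel.
# Assumption: traffic on one HMI-PLC channel repeats a fixed cycle of message
# "symbols" (e.g. (function_code, address) tuples); anything off-cycle is flagged.

def build_dfa(training_symbols):
    """Learn the repeating cycle from clean training traffic."""
    cycle = []
    for sym in training_symbols:
        if cycle and sym == cycle[0] and len(cycle) > 1:
            break                      # the cycle has wrapped around
        cycle.append(sym)
    # state i expects cycle[i]; an accepting transition moves to (i+1) mod len(cycle)
    return cycle

def monitor(cycle, live_symbols):
    """Yield (index, symbol) for every message that violates the learned cycle."""
    state = 0
    for i, sym in enumerate(live_symbols):
        if sym == cycle[state]:
            state = (state + 1) % len(cycle)
        else:
            yield i, sym               # unknown symbol or out-of-order message
            # resynchronise: jump to the state that expects this symbol, if any
            state = (cycle.index(sym) + 1) % len(cycle) if sym in cycle else state

if __name__ == "__main__":
    train = ["read_40001", "read_40002", "write_40010"] * 50
    live = ["read_40001", "read_40002", "write_40010",
            "read_40001", "write_99999",          # injected message
            "read_40002", "write_40010"]
    dfa = build_dfa(train)
    print(list(monitor(dfa, live)))               # -> [(4, 'write_99999')]
```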

  16. Change in volume parameters induced by neoadjuvant chemotherapy provide accurate prediction of overall survival after resection in patients with oesophageal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Tamandl, Dietmar; Fueger, Barbara; Kinsperger, Patrick; Haug, Alexander; Ba-Ssalamah, Ahmed [Medical University of Vienna, Department of Biomedical Imaging and Image-Guided Therapy, Comprehensive Cancer Center GET-Unit, Vienna (Austria); Gore, Richard M. [University of Chicago Pritzker School of Medicine, Department of Radiology, Chicago, IL (United States); Hejna, Michael [Medical University of Vienna, Department of Internal Medicine, Division of Medical Oncology, Comprehensive Cancer Center GET-Unit, Vienna (Austria); Paireder, Matthias; Schoppmann, Sebastian F. [Medical University of Vienna, Department of Surgery, Upper-GI-Service, Comprehensive Cancer Center GET-Unit, Vienna (Austria)

    2016-02-15

    To assess the prognostic value of volumetric parameters measured with CT and PET/CT in patients with neoadjuvant chemotherapy (NACT) and resection for oesophageal cancer (EC). Patients with locally advanced EC, who were treated with NACT and resection, were retrospectively analysed. Data from CT volumetry and 18F-FDG PET/CT (maximum standardized uptake [SUVmax], metabolic tumour volume [MTV], and total lesion glycolysis [TLG]) were recorded before and after NACT. The impact of volumetric parameter changes induced by NACT (MTV_RATIO, TLG_RATIO, etc.) on overall survival (OS) was assessed using a Cox proportional hazards model. Eighty-four patients were assessed using CT volumetry; of those, 50 also had PET/CT before and after NACT. Low post-treatment CT volume and thickness, MTV, TLG, and SUVmax were all associated with longer OS (p < 0.05), as were CTthickness_RATIO, MTV_RATIO, TLG_RATIO, and SUVmax_RATIO (p < 0.05). In the multivariate analysis, only MTV_RATIO (Hazard ratio, HR 2.52 [95 % Confidence interval, CI 1.33-4.78], p = 0.005), TLG_RATIO (HR 3.89 [95%CI 1.46-10.34], p = 0.006), and surgical margin status (p < 0.05), were independent predictors of OS. MTV_RATIO and TLG_RATIO are independent prognostic factors for survival in patients after NACT and resection for EC. (orig.)

  17. Physical Model for Rapid and Accurate Determination of Nanopore Size via Conductance Measurement.

    Science.gov (United States)

    Wen, Chenyu; Zhang, Zhen; Zhang, Shi-Li

    2017-10-27

    Nanopores have been explored for various biochemical and nanoparticle analyses, primarily via characterizing the ionic current through the pores. At present, however, size determination for solid-state nanopores is experimentally tedious and theoretically unaccountable. Here, we establish a physical model by introducing an effective transport length, L_eff, that measures, for a symmetric nanopore, twice the distance from the center of the nanopore where the electric field is the highest to the point along the nanopore axis where the electric field falls to e^-1 of this maximum. Using L_eff, a simple expression S_0 = f(G, σ, h, β) is derived to algebraically correlate the minimum nanopore cross-section area S_0 to nanopore conductance G, electrolyte conductivity σ, and membrane thickness h, with β denoting the pore shape that is determined by the pore fabrication technique. The model agrees excellently with experimental results for nanopores in graphene, single-layer MoS2, and ultrathin SiNx films. The generality of the model is verified by applying it to micrometer-size pores.
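
    The flavor of such a size-conductance relation can be seen in the widely used access-resistance model for a cylindrical pore of diameter d in a membrane of thickness h (shown here only for orientation; the paper's expression instead uses the effective transport length L_eff and the shape factor β):

\[
G = \sigma \left( \frac{4h}{\pi d^{2}} + \frac{1}{d} \right)^{-1}
\quad\Longrightarrow\quad
d = \frac{G}{2\sigma}\left( 1 + \sqrt{1 + \frac{16\,\sigma h}{\pi G}} \right),
\]

    so that the minimum cross-section area follows as S_0 = π d²/4 once G and σ are measured; the model above generalizes this idea beyond cylindrical geometries.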

  18. An accurate Kriging-based regional ionospheric model using combined GPS/BeiDou observations

    Science.gov (United States)

    Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed

    2018-01-01

    In this study, we propose a regional ionospheric model (RIM) based on both of the GPS-only and the combined GPS/BeiDou observations for single-frequency precise point positioning (SF-PPP) users in Europe. GPS/BeiDou observations from 16 reference stations are processed in the zero-difference mode. A least-squares algorithm is developed to determine the vertical total electron content (VTEC) bi-linear function parameters for a 15-minute time interval. The Kriging interpolation method is used to estimate the VTEC values at a 1 ° × 1 ° grid. The resulting RIMs are validated for PPP applications using GNSS observations from another set of stations. The SF-PPP accuracy and convergence time obtained through the proposed RIMs are computed and compared with those obtained through the international GNSS service global ionospheric maps (IGS-GIM). The results show that the RIMs speed up the convergence time and enhance the overall positioning accuracy in comparison with the IGS-GIM model, particularly the combined GPS/BeiDou-based model.
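
    As an illustration of the gridding step only (station coordinates, the covariance model and its parameters below are invented; this is not the authors' processing chain), ordinary Kriging of station VTEC values onto grid nodes can be sketched as:

```python
import numpy as np

def ordinary_kriging(xy_sta, vtec_sta, xy_grid, sill=25.0, rng=10.0, nugget=0.1):
    """Ordinary Kriging with a Gaussian covariance model.

    xy_sta   : (n, 2) station lon/lat in degrees
    vtec_sta : (n,)   VTEC at the stations (TECU)
    xy_grid  : (m, 2) grid-node lon/lat
    """
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-(d / rng) ** 2)

    n = len(xy_sta)
    # Kriging system: [[C, 1], [1^T, 0]] [w; mu] = [c0; 1]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = cov(xy_sta, xy_sta) + nugget * np.eye(n)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.ones((n + 1, len(xy_grid)))
    b[:n, :] = cov(xy_sta, xy_grid)
    w = np.linalg.solve(A, b)        # weights plus one Lagrange multiplier per node
    return w[:n, :].T @ vtec_sta     # estimated VTEC at each grid node

# toy example: five stations interpolated to two grid nodes
stations = np.array([[10.0, 45.0], [12.0, 47.0], [15.0, 44.0], [8.0, 48.0], [13.0, 50.0]])
vtec = np.array([18.2, 17.5, 19.1, 16.8, 15.9])
grid = np.array([[11.0, 46.0], [14.0, 48.0]])
print(ordinary_kriging(stations, vtec, grid))
```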

  19. Real-time PCR with molecular beacons provides a highly accurate assay for detection of Tay-Sachs alleles in single cells.

    Science.gov (United States)

    Rice, John E; Sanchez, J Aquiles; Pierce, Kenneth E; Wangh, Lawrence J

    2002-12-01

    The results presented here provide the first single-cell genetic assay for Tay-Sachs disease based on real-time PCR. Individual lymphoblasts were lysed with an optimized lysis buffer and assayed using one pair of primers that amplifies both the wild type and 1278 + TATC Tay-Sachs alleles. The resulting amplicons were detected in real time with two molecular beacons each with a different colored fluorochrome. The kinetics of amplicon accumulation generate objective criteria by which to evaluate the validity of each reaction. The assay had an overall utility of 95%, based on the detection of at least one signal in 235 of the 248 attempted tests and an efficiency of 97%, as 7 of the 235 samples were excluded from further analysis for objective quantitative reasons. The accuracy of the assay was 99.1%, because 228 of 230 samples gave signals consistent with the genotype of the cells. Only two of the 135 heterozygous samples were allele drop-outs, a rate far lower than previously reported for single-cell Tay-Sachs assays using conventional methods of PCR. Copyright 2002 John Wiley & Sons, Ltd.

  20. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines and other models applied to fast evaluation of structural ... response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling...

  1. Pseudospectral Maxwell solvers for an accurate modeling of Doppler harmonic generation on plasma mirrors with particle-in-cell codes

    Science.gov (United States)

    Blaclard, G.; Vincenti, H.; Lehe, R.; Vay, J. L.

    2017-09-01

    With the advent of petawatt class lasers, the very large laser intensities attainable on target should enable the production of intense high-order Doppler harmonics from relativistic laser-plasma mirror interactions. At present, the modeling of these harmonics with particle-in-cell (PIC) codes is extremely challenging as it implies an accurate description of tens to hundreds of harmonic orders on a broad range of angles. In particular, we show here that due to the numerical dispersion of waves they induce in vacuum, standard finite difference time domain (FDTD) Maxwell solvers employed in most PIC codes can induce a spurious angular deviation of harmonic beams potentially degrading simulation results. This effect was extensively studied and a simple toy model based on the Snell-Descartes law was developed that allows us to finely predict the angular deviation of harmonics depending on the spatiotemporal resolution and the Maxwell solver used in the simulations. Our model demonstrates that the mitigation of this numerical artifact with FDTD solvers mandates very high spatiotemporal resolution preventing realistic three-dimensional (3D) simulations even on the largest computers available at the time of writing. We finally show that nondispersive pseudospectral analytical time domain solvers can considerably reduce the spatiotemporal resolution required to mitigate this spurious deviation and should enable in the near future 3D accurate modeling on supercomputers in a realistic time to solution.
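
    The spurious refraction discussed above stems from the numerical dispersion relation of the standard second-order Yee/FDTD scheme, which in vacuum reads (textbook form, shown for context)

\[
\left[\frac{1}{c\,\Delta t}\,\sin\!\left(\frac{\omega\,\Delta t}{2}\right)\right]^{2}
= \sum_{i=x,y,z}\left[\frac{1}{\Delta x_{i}}\,\sin\!\left(\frac{k_{i}\,\Delta x_{i}}{2}\right)\right]^{2},
\]

    so the numerical phase velocity, and hence an effective refractive index, depends on propagation angle and grid resolution; applying Snell's law with this angle-dependent index is the essence of the toy model mentioned above. Pseudospectral solvers recover the exact vacuum relation ω = c|k|, which is why they suppress the artificial deviation.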

  2. Tumor-Specific Fluorescent Antibody Imaging Enables Accurate Staging Laparoscopy in an Orthotopic Model of Pancreatic Cancer

    Science.gov (United States)

    Cao, Hop S Tran; Kaushal, Sharmeela; Metildi, Cristina A; Menen, Rhiana S; Lee, Claudia; Snyder, Cynthia S; Messer, Karen; Pu, Minya; Luiken, George A; Talamini, Mark A; Hoffman, Robert M; Bouvet, Michael

    2014-01-01

    Background/Aims Laparoscopy is important in staging pancreatic cancer, but false negatives remain problematic. Making tumors fluorescent has the potential to improve the accuracy of staging laparoscopy. Methodology Orthotopic and carcinomatosis models of pancreatic cancer were established with BxPC-3 human pancreatic cancer cells in nude mice. Alexa488-anti-CEA conjugates were injected via tail vein 24 hours prior to laparoscopy. Mice were examined under bright field laparoscopic (BL) and fluorescence laparoscopic (FL) modes. Outcomes measured included time to identification of the primary tumor for the orthotopic model and number of metastases identified within 2 minutes for the carcinomatosis model. Results FL enabled more rapid and accurate identification and localization of primary tumors and metastases than BL. Using BL took a statistically significantly longer time than FL. More metastatic lesions were detected and localized under FL compared to BL and with greater accuracy, with sensitivities of 96% vs. 40%, respectively, when compared to control. FL was sensitive enough to detect metastatic lesions. Laparoscopy with tumors labeled with fluorophore-conjugated anti-CEA antibody permits rapid detection and accurate localization of primary and metastatic pancreatic cancer in an orthotopic model. The results of the present report demonstrate the future clinical potential of fluorescence laparoscopy. PMID:22369743

  3. Accurate Finite Element Modelling of Chipboard Single-Stud Floor Panels subjected to Dynamic Loads

    DEFF Research Database (Denmark)

    Sjöström, A.; Flodén, O.; Persson, K.

    2012-01-01

    In multi-storey buildings, the use of lightweight material has many advantages. The low weight, the low energy consumption and the sustainability of the material are some attractive benefits of using lightweight materials. Compared with heavier structures, i.e. concrete, constructing a building compliant with building codes vis-a-vis the propagation of sound and vibrations within the structure is a challenge. Focusing on junctions in multi-storey lightweight buildings, a modular finite element model is developed to be used for analyses of vibration transmission...

  4. Can an Atmospherically Forced Ocean Model Accurately Simulate Sea Surface Temperature During ENSO Events?

    Science.gov (United States)

    2010-01-01

    directly provided by the originator. This climatology does not take the existence of ice into account (i.e. treats it as a data void). Thus, we ... SST for each month is obtained using daily SSTs during 1993-2003. The mean January SST climatology is formed using monthly January SSTs over 11 yr ... these high latitude belts. The reason is that the MODAS climatology lacks a realistic ice field, resulting in warmer SST than HYCOM by >2°C. On ...

  5. Multiconjugate adaptive optics applied to an anatomically accurate human eye model

    Science.gov (United States)

    Bedggood, P. A.; Ashman, R.; Smith, G.; Metha, A. B.

    2006-09-01

    Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics.

  6. TTLEM: Open access tool for building numerically accurate landscape evolution models in MATLAB

    Science.gov (United States)

    Campforts, Benjamin; Schwanghart, Wolfgang; Govers, Gerard

    2017-04-01

    Despite a growing interest in LEMs, accuracy assessment of the numerical methods they are based on has received little attention. Here, we present TTLEM which is an open access landscape evolution package designed to develop and test your own scenarios and hypothesises. TTLEM uses a higher order flux-limiting finite-volume method to simulate river incision and tectonic displacement. We show that this scheme significantly influences the evolution of simulated landscapes and the spatial and temporal variability of erosion rates. Moreover, it allows the simulation of lateral tectonic displacement on a fixed grid. Through the use of a simple GUI the software produces visible output of evolving landscapes through model run time. In this contribution, we illustrate numerical landscape evolution through a set of movies spanning different spatial and temporal scales. We focus on the erosional domain and use both spatially constant and variable input values for uplift, lateral tectonic shortening, erodibility and precipitation. Moreover, we illustrate the relevance of a stochastic approach for realistic hillslope response modelling. TTLEM is a fully open source software package, written in MATLAB and based on the TopoToolbox platform (topotoolbox.wordpress.com). Installation instructions can be found on this website and the therefore designed GitHub repository.
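
    For orientation, the detachment-limited core of most such landscape evolution models combines stream-power river incision with tectonic forcing and a hillslope term, in a generic form such as

\[
\frac{\partial z}{\partial t} = U - K A^{m} \left|\nabla z\right|^{n} + D\,\nabla^{2} z,
\]

    with elevation z, uplift rate U, upstream drainage area A, erodibility K, exponents m and n, and hillslope diffusivity D (exponents and the hillslope formulation vary between set-ups). The flux-limiting finite-volume scheme highlighted above is designed to limit the numerical diffusion that low-order advection schemes introduce when propagating the K A^m |∇z|^n incision term.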

  7. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    2007-01-01

    This paper concerns the formulation of lumped-parameter models for rigid footings on homogenous or stratified soil with focus on the horizontal sliding and rocking. Such models only contain a few degrees of freedom, which makes them ideal for inclusion in aero-elastic codes for wind turbines......-parameter models with respect to the prediction of the maximum response during excitation and the geometrical damping related to free vibrations of a footing....

  8. Swimming behavior of zebrafish is accurately classified by direct modeling and behavioral space analysis

    Science.gov (United States)

    Feng, Ruopei; Chemla, Yann; Gruebele, Martin

    Larval zebrafish are popular organisms in the search for the correlation between locomotion behavior and neural pathways because of their highly stereotyped and temporally episodic swimming motion. This correlation is usually investigated using electrophysiological recordings of neural activities in partially immobilized fish. Seeking a way to study animal behavior without constraints or intruding electrodes, which can in turn modify the behavior, our lab has introduced a parameter-free approach that allows automated classification of the locomotion behaviors of freely swimming fish. We looked into several types of swimming bouts, including free swimming and two modes of escape responses, and established a new classification of these behaviors. Combined with a neurokinematic model, our analysis showed the capability to probe intrinsic properties of the underlying neural pathways of freely swimming larval zebrafish by inspecting swimming movies only.

  9. Accurate modeling of benchmark x-ray spectra from highly charged ions of tungsten

    International Nuclear Information System (INIS)

    Ralchenko, Yuri; Tan, Joseph N.; Gillaspy, J. D.; Pomeroy, Joshua M.; Silver, Eric

    2006-01-01

    We present detailed collisional-radiative modeling for a benchmark x-ray spectrum of highly charged tungsten ions in the range between 3 and 10 Å produced in an electron beam ion trap (EBIT) with a beam energy of 4.08 keV. Remarkably good agreement between calculated and measured spectra was obtained without adjustable parameters, highlighting the well-controlled experimental conditions and the sophistication of the kinetic simulation of the non-Maxwellian tungsten plasma. This agreement permitted the identification of spectral lines from Cu-like W45+ and Ni-like W46+ ions, led to the reinterpretation of a previously known line in Ni-like ion as an overlap of electric-quadrupole and magnetic-octupole lines, and revealed subtle features in the x-ray spectrum arising from the dominance of forbidden transitions between excited states. The importance of level population mechanisms specific to the EBIT plasma is discussed as well.

  10. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    Energy Technology Data Exchange (ETDEWEB)

    Andrade-Ines, Eduardo [Institute de Mécanique Céleste et des Calcul des Éphémérides—Observatoire de Paris, 77 Avenue Denfert Rochereau, F-75014 Paris (France); Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, 91109 Pasadena, CA (United States)

    2017-04-01

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet's orbit under gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of their applicability are given.
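
    The first-order solution being corrected here is Heppenheimer's classical secular result for a planet orbiting one component of a binary, whose forced eccentricity is usually quoted as (shown for context; the paper supplies polynomial corrective factors to this quantity and to the secular frequency)

\[
e_{F} \simeq \frac{5}{4}\,\frac{a}{a_{b}}\,\frac{e_{b}}{1 - e_{b}^{2}},
\]

    where a is the planet's semimajor axis and a_b, e_b are the semimajor axis and eccentricity of the perturber's orbit; the expression degrades as a/a_b grows, which is the regime where corrective factors become important.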

  11. Polarizable charge equilibration model for predicting accurate electrostatic interactions in molecules and solids

    Science.gov (United States)

    Naserifar, Saber; Brooks, Daniel J.; Goddard, William A.; Cvicek, Vaclav

    2017-03-01

    Electrostatic interactions play a critical role in determining the properties, structures, and dynamics of chemical, biochemical, and material systems. These interactions are described well at the level of quantum mechanics (QM) but not so well for the various models used in force field simulations of these systems. We propose and validate a new general methodology, denoted PQEq, to predict rapidly and dynamically the atomic charges and polarization underlying the electrostatic interactions. Here the polarization is described using an atomic-sized Gaussian-shaped electron density that can polarize away from the core in response to internal and external electric fields, while at the same time adjusting the charge on each core (described as a Gaussian function) so as to achieve a constant chemical potential across all atoms of the system. The parameters for PQEq are derived from experimental atomic properties of all elements up to Nobelium (atomic no. = 102). We validate PQEq by comparing to the QM interaction energy as probe dipoles are brought along various directions up to 30 molecules containing H, C, N, O, F, Si, P, S, and Cl atoms. We find that PQEq predicts interaction energies in excellent agreement with QM, much better than other common charge models such as those obtained from QM using Mulliken or ESP charges and those from standard force fields (OPLS and AMBER). Since PQEq increases the accuracy of electrostatic interactions and the response to external electric fields, we expect that PQEq will be useful for a large range of applications including ligand docking to proteins, catalytic reactions, electrocatalysis, ferroelectrics, and growth of ceramics and films, where it could be incorporated into standard force fields such as OPLS, AMBER, CHARMM, Dreiding, ReaxFF, and UFF.

  12. Modeling inter-signal arrival times for accurate detection of CAN bus signal injection attacks

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Michael Roy [ORNL; Bridges, Robert A [ORNL; Combs, Frank L [ORNL; Starr, Michael S [ORNL; Prowell, Stacy J [ORNL

    2017-01-01

    Modern vehicles rely on hundreds of on-board electronic control units (ECUs) communicating over in-vehicle networks. As external interfaces to the car control networks (such as the on-board diagnostic (OBD) port, auxiliary media ports, etc.) become common, and vehicle-to-vehicle / vehicle-to-infrastructure technology is in the near future, the attack surface for vehicles grows, exposing control networks to potentially life-critical attacks. This paper addresses the need for securing the CAN bus by detecting anomalous traffic patterns via unusual refresh rates of certain commands. While previous works have identified signal frequency as an important feature for CAN bus intrusion detection, this paper provides the first such algorithm with experiments on five attack scenarios. Our data-driven anomaly detection algorithm requires only five seconds of training time (on normal data) and achieves true positive / false discovery rates of 0.9998/0.00298, respectively (micro-averaged across the five experimental tests).
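
    A stripped-down sketch of this kind of timing-based detection (hypothetical thresholding and arbitration IDs; the authors' statistical model is more elaborate):

```python
import statistics

def learn_timing(training):
    """training: dict arbitration_id -> list of message timestamps (seconds).
    Returns per-ID (mean, stdev) of inter-arrival times from clean traffic."""
    model = {}
    for can_id, times in training.items():
        gaps = [b - a for a, b in zip(times, times[1:])]
        model[can_id] = (statistics.mean(gaps), statistics.pstdev(gaps))
    return model

def detect(model, can_id, timestamps, k=4.0):
    """Flag messages whose inter-arrival time deviates more than k standard
    deviations from the learned refresh rate of that arbitration ID."""
    mean, sd = model[can_id]
    alerts = []
    for a, b in zip(timestamps, timestamps[1:]):
        gap = b - a
        if abs(gap - mean) > k * max(sd, 1e-6):
            alerts.append((b, gap))
    return alerts

if __name__ == "__main__":
    # a 100 Hz signal observed for 5 s of clean traffic
    train = {0x1A0: [i * 0.010 for i in range(500)]}
    model = learn_timing(train)
    # live traffic with one injected frame squeezed between legitimate ones
    live = [0.000, 0.010, 0.013, 0.020, 0.030]
    print(detect(model, 0x1A0, live))   # flags the unusually short and long gaps
```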

  13. Fast and accurate computational modeling of adsorption on graphene: a dispersion interaction challenge.

    Science.gov (United States)

    Gordeev, Evgeniy G; Polynski, Mikhail V; Ananikov, Valentine P

    2013-11-21

    Understanding molecular interactions of graphene is a question of key importance for designing new materials and catalytic systems for practical usage. Although good accuracy was demonstrated for small models in theoretical analysis with ab initio and density functional methods, application to real-size systems with thousands of atoms is currently hardly possible on a routine basis due to the high computational cost. In the present study we report that incorporation of a dispersion correction led to a principal improvement in the description of graphene systems at a semi-empirical level. The accuracy and the scope of the calculations were explored for a wide range of molecules adsorbed on graphene surfaces (H2, N2, CO, CO2, NH3, CH4, H2O, benzene, naphthalene, coronene, ovalene and cyclohexane). As a challenging parameter, the calculated adsorption energy of aromatic hydrocarbons on graphene, Eads = -1.8 ± 0.1 kcal mol^-1 (per one carbon atom) at the PM6-DH2 level, was in excellent agreement with the experimentally determined value of Eads = -1.7 ± 0.3 kcal mol^-1. The dispersion-corrected semi-empirical method was found to be a remarkable computational tool suitable for everyday laboratory studies of real-size graphene systems. Significant performance improvement (ca. 10^3 times faster) and excellent accuracy were found as compared to the ωB97X-D density functional calculations.

  14. The human skin/chick chorioallantoic membrane model accurately predicts the potency of cosmetic allergens.

    Science.gov (United States)

    Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S

    2009-04-01

    The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin.

  15. GENERATING ACCURATE 3D MODELS OF ARCHITECTURAL HERITAGE STRUCTURES USING LOW-COST CAMERA AND OPEN SOURCE ALGORITHMS

    Directory of Open Access Journals (Sweden)

    M. Zacharek

    2017-05-01

    Full Text Available These studies have been conducted using a non-metric digital camera and dense image matching algorithms, as non-contact methods of creating monuments documentation. In order to process the imagery, a few open-source software packages and algorithms for generating a dense point cloud from images have been executed. In the research, the OSM Bundler, VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, obtained models were filtered and scaled. The research showed that even using the open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  16. Generating Accurate 3d Models of Architectural Heritage Structures Using Low-Cost Camera and Open Source Algorithms

    Science.gov (United States)

    Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.

    2017-05-01

    These studies have been conducted using a non-metric digital camera and dense image matching algorithms, as non-contact methods of creating monuments documentation. In order to process the imagery, a few open-source software packages and algorithms for generating a dense point cloud from images have been executed. In the research, the OSM Bundler, VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, obtained models were filtered and scaled. The research showed that even using the open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  17. Non-isothermal kinetics model to predict accurate phase transformation and hardness of 22MnB5 boron steel

    Energy Technology Data Exchange (ETDEWEB)

    Bok, H.-H.; Kim, S.N.; Suh, D.W. [Graduate Institute of Ferrous Technology, POSTECH, San 31, Hyoja-dong, Nam-gu, Pohang, Gyeongsangbuk-do (Korea, Republic of); Barlat, F., E-mail: f.barlat@postech.ac.kr [Graduate Institute of Ferrous Technology, POSTECH, San 31, Hyoja-dong, Nam-gu, Pohang, Gyeongsangbuk-do (Korea, Republic of); Lee, M.-G., E-mail: myounglee@korea.ac.kr [Department of Materials Science and Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul (Korea, Republic of)

    2015-02-25

    A non-isothermal phase transformation kinetics model obtained by modifying the well-known JMAK approach is proposed for application to a low carbon boron steel (22MnB5) sheet. In the modified kinetics model, the parameters are functions of both temperature and cooling rate, and can be identified by a numerical optimization method. Moreover, in this approach the transformation start and finish temperatures are variable instead of the constants that depend on chemical composition. These variable reference temperatures are determined from the measured CCT diagram using dilatation experiments. The kinetics model developed in this work captures the complex transformation behavior of the boron steel sheet sample accurately. In particular, the predicted hardness and phase fractions in the specimens subjected to a wide range of cooling rates were validated by experiments.
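
    The isothermal law being modified is the usual JMAK (Johnson-Mehl-Avrami-Kolmogorov) expression for the transformed fraction,

\[
X(t) = 1 - \exp\!\left[-k(T)\,t^{\,n(T)}\right],
\]

    with rate coefficient k and Avrami exponent n; in the modified model described above these parameters depend on cooling rate as well as temperature, and the transformation is tracked along the non-isothermal path between variable start and finish temperatures determined from the measured CCT diagram. (Generic form shown for context; the exact parameterization is given in the paper.)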

  18. Non-isothermal kinetics model to predict accurate phase transformation and hardness of 22MnB5 boron steel

    International Nuclear Information System (INIS)

    Bok, H.-H.; Kim, S.N.; Suh, D.W.; Barlat, F.; Lee, M.-G.

    2015-01-01

    A non-isothermal phase transformation kinetics model obtained by modifying the well-known JMAK approach is proposed for application to a low carbon boron steel (22MnB5) sheet. In the modified kinetics model, the parameters are functions of both temperature and cooling rate, and can be identified by a numerical optimization method. Moreover, in this approach the transformation start and finish temperatures are variable instead of the constants that depend on chemical composition. These variable reference temperatures are determined from the measured CCT diagram using dilatation experiments. The kinetics model developed in this work captures the complex transformation behavior of the boron steel sheet sample accurately. In particular, the predicted hardness and phase fractions in the specimens subjected to a wide range of cooling rates were validated by experiments

  19. Model organoids provide new research opportunities for ductal pancreatic cancer

    NARCIS (Netherlands)

    Boj, Sylvia F|info:eu-repo/dai/nl/304074799; Hwang, Chang-Il; Baker, Lindsey A; Engle, Dannielle D; Tuveson, David A; Clevers, Hans|info:eu-repo/dai/nl/07164282X

    We recently established organoid models from normal and neoplastic murine and human pancreas tissues. These organoids exhibit ductal- and disease stage-specific characteristics and, after orthotopic transplantation, recapitulate the full spectrum of tumor progression. Pancreatic organoid technology

  20. Statistical and RBF NN models : providing forecasts and risk assessment

    OpenAIRE

    Marček, Milan

    2009-01-01

    Forecast accuracy of economic and financial processes is a popular measure for quantifying the risk in decision making. In this paper, we develop forecasting models based on statistical (stochastic) methods, sometimes called hard computing, and on a soft method using granular computing. We consider the accuracy of forecasting models as a measure for risk evaluation. It is found that the risk estimation process based on soft methods is simplified and less critical to the question w...

  1. Fast and Accurate Hybrid Stream PCRTM-SOLAR Radiative Transfer Model for Reflected Solar Spectrum Simulation in the Cloudy Atmosphere

    Science.gov (United States)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.

    2016-01-01

    A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5×10^-4 mW/sq cm/sr/cm. The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer clouds/aerosols and solar radiation conditions for climate change studies and numerical weather prediction applications.

  2. Conceptual Models of the Individual Public Service Provider

    DEFF Research Database (Denmark)

    Andersen, Lotte Bøgh; Bhatti, Yosef; Petersen, Ole Helby

    Individual public service providers’ motivation can be conceptualized as either extrinsic, autonomous or prosocial, and the question is how we can best theoretically understand this complexity without losing too much coherence and parsimony. Drawing on Allison’s approach (1969), three perspectives...... are used to gain insight on the motivation of public service providers; namely principal-agent theory, self-determination theory and public service motivation theory. We situate the theoretical discussions in the context of public service providers being transferred to private organizations...... as a consequence of outsourcing by the public sector. Although this empirical setting is interesting in itself, here it serves primarily as grist for a wider discussion on strategies for applying multiple theoretical approaches and crafting a theoretical synthesis. The key contribution of the paper is thus...

  3. A new model of dispersion for metals leading to a more accurate modeling of plasmonic structures using the FDTD method

    Energy Technology Data Exchange (ETDEWEB)

    Vial, A.; Dridi, M.; Cunff, L. le [Universite de Technologie de Troyes, Institut Charles Delaunay, CNRS UMR 6279, Laboratoire de Nanotechnologie et d' Instrumentation Optique, 12, rue Marie Curie, BP-2060, Troyes Cedex (France); Laroche, T. [Universite de Franche-Comte, Institut FEMTO-ST, CNRS UMR 6174, Departement de Physique et de Metrologie des Oscillateurs, Besancon Cedex (France)

    2011-06-15

    We present FDTD simulations results obtained using the Drude critical points model. This model enables spectroscopic studies of metallic structures over wider wavelength ranges than usually used, and it facilitates the study of structures made of several metals. (orig.)

  4. Experimental studies on power transformer model winding provided with MOVs

    Directory of Open Access Journals (Sweden)

    G.H. Kusumadevi

    2017-05-01

    Full Text Available Surge voltage distribution across a HV transformer winding due to the appearance of very fast rise time (of the order of 1 μs) transient voltages is highly non-uniform along the length of the winding at the initial time instant of occurrence of the surge. In order to achieve a nearly uniform initial time instant voltage distribution along the length of the HV winding, investigations have been carried out on a transformer model winding. By connecting similar types of metal oxide varistors across sections of the HV transformer model winding, it is possible to improve the initial time instant surge voltage distribution across the length of the HV transformer winding. Transformer windings with α values 5.3, 9.5 and 19 have been analyzed. The experimental studies have been carried out using a high-speed oscilloscope of good accuracy. The initial time instant voltage distribution across sections of the winding with MOVs remains nearly uniform along the length of the winding. Also, results of fault diagnostics carried out with and without connection of MOVs across sections of the winding are reported.

  5. SU-E-T-475: An Accurate Linear Model of Tomotherapy MLC-Detector System for Patient Specific Delivery QA

    International Nuclear Information System (INIS)

    Chen, Y; Mo, X; Chen, M; Olivera, G; Parnell, D; Key, S; Lu, W; Reeher, M; Galmarini, D

    2014-01-01

    Purpose: An accurate leaf fluence model can be used in applications such as patient specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is known that the total fluence is not a linear combination of individual leaf fluence due to leakage-transmission, tongue-and-groove, and source occlusion effect. Here we propose a method to model the nonlinear effects as linear terms thus making the MLC-detector system a linear system. Methods: A leaf pattern basis (LPB) consisting of no-leaf-open, single-leaf-open, double-leaf-open and triple-leaf-open patterns is chosen to represent linear and major nonlinear effects of leaf fluence as a linear system. An arbitrary leaf pattern can be expressed as (or decomposed to) a linear combination of the LPB either pulse by pulse or weighted by dwelling time. The exit detector responses to the LPB are obtained by processing returned detector signals resulting from the predefined leaf patterns for each jaw setting. Through forward transformation, detector signal can be predicted given a delivery plan. An equivalent leaf open time (LOT) sinogram containing output variation information can also be inversely calculated from the measured detector signals. Twelve patient plans were delivered in air. The equivalent LOT sinograms were compared with their planned sinograms. Results: The whole calibration process was done in 20 minutes. For two randomly generated leaf patterns, 98.5% of the active channels showed differences within 0.5% of the local maximum between the predicted and measured signals. Averaged over the twelve plans, 90% of LOT errors were within ±10 ms. The LOT systematic error increases and shows an oscillating pattern when LOT is shorter than 50 ms. Conclusion: The LPB method models the MLC-detector response accurately, which improves patient specific delivery QA and in-vivo dosimetry for TomoTherapy systems. It is sensitive enough to detect systematic LOT errors as small as 10 ms.
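
    A schematic of the linear-system idea (the basis responses below are invented numbers for a six-channel toy detector; the real basis, jaw dependence and detector geometry are far richer):

```python
import numpy as np

# Columns of B: exit-detector response to each leaf pattern basis (LPB) member,
# e.g. no-leaf-open, single-leaf-open, double-leaf-open and triple-leaf-open patterns.
B = np.array([
    [0.02, 0.90, 0.05, 0.95, 0.10, 1.00],
    [0.02, 0.40, 0.45, 0.85, 0.90, 1.05],
    [0.02, 0.05, 0.92, 0.50, 0.97, 1.02],
    [0.02, 0.03, 0.41, 0.08, 0.88, 0.95],
    [0.02, 0.02, 0.06, 0.04, 0.45, 0.60],
    [0.02, 0.02, 0.02, 0.03, 0.05, 0.30],
])

def predict_signal(weights):
    """Forward model: detector signal for a leaf pattern expressed as a
    dwell-time-weighted combination of the basis patterns."""
    return B @ weights

def equivalent_weights(measured):
    """Inverse model: least-squares estimate of the basis weights (and hence
    equivalent leaf-open times) from a measured detector signal."""
    w, *_ = np.linalg.lstsq(B, measured, rcond=None)
    return w

planned = np.array([0.0, 0.2, 0.0, 0.5, 0.3, 0.0])   # planned dwell-time weights
measured = predict_signal(planned) + 0.002 * np.random.default_rng(0).standard_normal(6)
print(equivalent_weights(measured) - planned)        # residuals ~ LOT errors
```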

  6. The CPA Equation of State and an Activity Coefficient Model for Accurate Molar Enthalpy Calculations of Mixtures with Carbon Dioxide and Water/Brine

    Energy Technology Data Exchange (ETDEWEB)

    Myint, P. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hao, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Firoozabadi, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-03-27

    Thermodynamic property calculations of mixtures containing carbon dioxide (CO2) and water, including brines, are essential in theoretical models of many natural and industrial processes. The properties of greatest practical interest are density, solubility, and enthalpy. Many models for density and solubility calculations have been presented in the literature, but there exists only one study, by Spycher and Pruess, that has compared theoretical molar enthalpy predictions with experimental data [1]. In this report, we recommend two different models for enthalpy calculations: the CPA equation of state by Li and Firoozabadi [2], and the CO2 activity coefficient model by Duan and Sun [3]. We show that the CPA equation of state, which has been demonstrated to provide good agreement with density and solubility data, also accurately calculates molar enthalpies of pure CO2, pure water, and both CO2-rich and aqueous (H2O-rich) mixtures of the two species. It is applicable to a wider range of conditions than the Spycher and Pruess model. In aqueous sodium chloride (NaCl) mixtures, we show that Duan and Sun’s model yields accurate results for the partial molar enthalpy of CO2. It can be combined with another model for the brine enthalpy to calculate the molar enthalpy of H2O-CO2-NaCl mixtures. We conclude by explaining how the CPA equation of state may be modified to further improve agreement with experiments. This generalized CPA is the basis of our future work on this topic.

  7. Accurate Mapping of Multilevel Rydberg Atoms on Interacting Spin-1 /2 Particles for the Quantum Simulation of Ising Models

    Science.gov (United States)

    de Léséleuc, Sylvain; Weber, Sebastian; Lienhard, Vincent; Barredo, Daniel; Büchler, Hans Peter; Lahaye, Thierry; Browaeys, Antoine

    2018-03-01

    We study a system of atoms that are laser driven to nD3/2 Rydberg states and assess how accurately they can be mapped onto spin-1/2 particles for the quantum simulation of anisotropic Ising magnets. Using nonperturbative calculations of the pair potentials between two atoms in the presence of electric and magnetic fields, we emphasize the importance of a careful selection of experimental parameters in order to maintain the Rydberg blockade and avoid excitation of unwanted Rydberg states. We benchmark these theoretical observations against experiments using two atoms. Finally, we show that in these conditions, the experimental dynamics observed after a quench is in good agreement with numerical simulations of spin-1/2 Ising models in systems with up to 49 spins, for which numerical simulations become intractable.

  8. Accurate modeling of size and strain broadening in the Rietveld refinement: The "double-Voigt" approach

    International Nuclear Information System (INIS)

    Balzar, D.; Ledbetter, H.

    1995-01-01

    In the "double-Voigt" approach, an exact Voigt function describes both size- and strain-broadened profiles. The lattice strain is defined in terms of physically credible mean-square strain averaged over a distance in the diffracting domains. Analysis of Fourier coefficients in a harmonic approximation for strain coefficients leads to the Warren-Averbach method for the separation of size and strain contributions to diffraction line broadening. The model is introduced in the Rietveld refinement program in the following way: Line widths are modeled with only four parameters in the isotropic case. Varied parameters are both surface- and volume-weighted domain sizes and root-mean-square strains averaged over two distances. Refined parameters determine the physically broadened Voigt line profile. Instrumental Voigt line profile parameters are added to obtain the observed (Voigt) line profile. To speed computation, the corresponding pseudo-Voigt function is calculated and used as a fitting function in refinement. This approach allows for both fast computer code and accurate modeling in terms of physically identifiable parameters.
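
    For context, the size-strain separation behind this approach rests on the multiplicative property of the Fourier coefficients of the physically broadened profile, in the standard Warren-Averbach form

\[
A(L) = A^{S}(L)\,A^{D}(L),
\qquad
\ln A(L,s) \approx \ln A^{S}(L) - 2\pi^{2} L^{2} s^{2}\,\langle \varepsilon^{2}(L)\rangle ,
\]

    where L is the Fourier (column) length, s = 2 sin θ / λ, A^S and A^D are the size and distortion coefficients, and ⟨ε²(L)⟩ is the mean-square strain averaged over the distance L; comparing two orders of a reflection separates the two contributions. In the double-Voigt description both coefficients follow from Voigt profiles, which is what reduces the isotropic case to the four refinable width parameters mentioned above.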

  9. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

    In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems will show the capability to capture enormous details and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion.

  10. Mathematical modeling provides kinetic details of the human immune response to vaccination

    Directory of Open Access Journals (Sweden)

    Dustin eLe

    2015-01-01

    Full Text Available With major advances in experimental techniques to track antigen-specific immune responses many basic questions on the kinetics of virus-specific immunity in humans remain unanswered. To gain insights into kinetics of T and B cell responses in human volunteers we combine mathematical models and experimental data from recent studies employing vaccines against yellow fever and smallpox. Yellow fever virus-specific CD8 T cell population expanded slowly with the average doubling time of 2 days peaking 2.5 weeks post immunization. Interestingly, we found that the peak of the yellow fever-specific CD8 T cell response is determined by the rate of T cell proliferation and not by the precursor frequency of antigen-specific cells as has been suggested in several studies in mice. We also found that while the frequency of virus-specific T cells increases slowly, the slow increase can still accurately explain clearance of yellow fever virus in the blood. Our additional mathematical model describes well the kinetics of virus-specific antibody-secreting cell and antibody response to vaccinia virus in vaccinated individuals suggesting that most of antibodies in 3 months post immunization are derived from the population of circulating antibody-secreting cells. Taken together, our analysis provides novel insights into mechanisms by which live vaccines induce immunity to viral infections and highlight challenges of applying methods of mathematical modeling to the current, state-of-the-art yet limited immunological data.

  11. Mathematical modeling provides kinetic details of the human immune response to vaccination.

    Science.gov (United States)

    Le, Dustin; Miller, Joseph D; Ganusov, Vitaly V

    2014-01-01

    With major advances in experimental techniques to track antigen-specific immune responses many basic questions on the kinetics of virus-specific immunity in humans remain unanswered. To gain insights into kinetics of T and B cell responses in human volunteers we combined mathematical models and experimental data from recent studies employing vaccines against yellow fever and smallpox. Yellow fever virus-specific CD8 T cell population expanded slowly with the average doubling time of 2 days peaking 2.5 weeks post immunization. Interestingly, we found that the peak of the yellow fever-specific CD8 T cell response was determined by the rate of T cell proliferation and not by the precursor frequency of antigen-specific cells as has been suggested in several studies in mice. We also found that while the frequency of virus-specific T cells increased slowly, the slow increase could still accurately explain clearance of yellow fever virus in the blood. Our additional mathematical model described well the kinetics of virus-specific antibody-secreting cell and antibody response to vaccinia virus in vaccinated individuals suggesting that most of antibodies in 3 months post immunization were derived from the population of circulating antibody-secreting cells. Taken together, our analysis provided novel insights into mechanisms by which live vaccines induce immunity to viral infections and highlighted challenges of applying methods of mathematical modeling to the current, state-of-the-art yet limited immunological data.
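
    As a quick check on the quoted numbers, a doubling time of about 2 days corresponds to a net expansion rate

\[
\rho = \frac{\ln 2}{t_{2}} \approx \frac{0.693}{2\ \text{days}} \approx 0.35\ \text{day}^{-1},
\]

    so over the roughly 2.5 weeks (about 17.5 days) to the peak the virus-specific CD8 T cell population grows by a factor of about 2^{17.5/2} ≈ 4×10², i.e. a few hundred-fold, consistent with the comparatively slow expansion described above.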

  12. Groundwater recharge: Accurately representing evapotranspiration

    CSIR Research Space (South Africa)

    Bugan, Richard DH

    2011-09-01

    Full Text Available Groundwater recharge is the basis for accurate estimation of groundwater resources, for determining the modes of water allocation and groundwater resource susceptibility to climate change. Accurate estimations of groundwater recharge with models...

  13. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER+ patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER-negative patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic

  14. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    Directory of Open Access Journals (Sweden)

    Shiyao Wang

    2016-02-01

    Full Text Available A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy the stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we perform a lot of field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.

  15. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    Science.gov (United States)

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS)  receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to have removal of outliers. Navigation data that satisfy stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform a lot of field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-arts on the same dataset and the new data fusion method is practically applied in our driverless car.
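
    A stripped-down illustration of the predict-then-gate idea for one position coordinate (a hand-rolled AR(2) predictor and a fixed grid-sized gate; all parameter values are invented and this is not the paper's filter):

```python
import numpy as np

def fit_ar2(series):
    """Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2] + c."""
    X = np.column_stack([series[1:-1], series[:-2], np.ones(len(series) - 2)])
    y = series[2:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

def fuse(gps, dr, coeffs, gate=0.5):
    """Per-epoch fusion of a 1-D position coordinate (metres).

    GPS fixes that disagree with the model prediction by more than `gate`
    (e.g. one occupancy-grid cell) are treated as multipath and replaced by DR.
    """
    a1, a2, c = coeffs
    fused = list(gps[:2])
    for t in range(2, len(gps)):
        pred = a1 * fused[-1] + a2 * fused[-2] + c
        meas = gps[t] if abs(gps[t] - pred) < gate else dr[t]
        fused.append(0.5 * (pred + meas))    # simple blend of prediction and measurement
    return np.array(fused)

rng = np.random.default_rng(1)
truth = np.cumsum(np.full(50, 0.3))                        # constant-speed vehicle
gps = truth + 0.05 * rng.standard_normal(50)
gps[25] += 3.0                                             # multipath-induced jump
dr = truth + np.cumsum(0.01 * rng.standard_normal(50))     # slowly drifting DR track
coeffs = fit_ar2(gps[:20])
print(np.abs(fuse(gps, dr, coeffs) - truth).max())         # the jump is rejected
```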

  16. High-order accurate finite-volume formulations for the pressure gradient force in layered ocean models

    Science.gov (United States)

    Engwirda, Darren; Kelley, Maxwell; Marshall, John

    2017-08-01

    Discretisation of the horizontal pressure gradient force in layered ocean models is a challenging task, with non-trivial interactions between the thermodynamics of the fluid and the geometry of the layers often leading to numerical difficulties. We present two new finite-volume schemes for the pressure gradient operator designed to address these issues. In each case, the horizontal acceleration is computed as an integration of the contact pressure force that acts along the perimeter of an associated momentum control-volume. A pair of new schemes are developed by exploring different control-volume geometries. Non-linearities in the underlying equation-of-state definitions and thermodynamic profiles are treated using a high-order accurate numerical integration framework, designed to preserve hydrostatic balance in a non-linear manner. Numerical experiments show that the new methods achieve high levels of consistency, maintaining hydrostatic and thermobaric equilibrium in the presence of strongly-sloping layer geometries, non-linear equations-of-state and non-uniform vertical stratification profiles. These results suggest that the new pressure gradient formulations may be appropriate for general circulation models that employ hybrid vertical coordinates and/or terrain-following representations.
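
    The construction described above can be summarized in one identity: for a momentum control volume Ω with perimeter ∂Ω, the horizontal pressure acceleration is evaluated from the contact pressure force on the perimeter rather than from a differenced interior gradient,

\[
-\frac{1}{\rho_{0}}\,\overline{\nabla_{h}\,p}
\;=\;
-\frac{1}{\rho_{0}\,|\Omega|}\oint_{\partial\Omega} p\,\hat{\mathbf{n}}_{h}\,\mathrm{d}s ,
\]

    with the perimeter pressure obtained by high-order quadrature of the hydrostatic relation ∂p/∂z = -ρ g through the layer, so that a resting, stratified state produces no spurious acceleration. (A schematic statement of the finite-volume identity; the two schemes in the paper differ in the choice of control-volume geometry.)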

  17. Mathematical models for accurate prediction of atmospheric visibility with particular reference to the seasonal and environmental patterns in Hong Kong.

    Science.gov (United States)

    Mui, K W; Wong, L T; Chung, L Y

    2009-11-01

    Atmospheric visibility impairment has gained increasing concern as it is associated with the presence of a number of aerosols as well as common air pollutants and produces unfavorable conditions for observation, dispersion, and transportation. This study analyzed the atmospheric visibility data measured in urban and suburban Hong Kong (two selected stations) with respect to time-matched mass concentrations of common air pollutants, including nitrogen dioxide (NO2), nitrogen monoxide (NO), respirable suspended particulates (PM10), sulfur dioxide (SO2), and carbon monoxide (CO), and meteorological parameters including air temperature, relative humidity, and wind speed. No significant difference in atmospheric visibility was reported between the two measurement locations (p ≥ 0.6, t-test), and good atmospheric visibility was observed more frequently in summer and autumn than in winter and spring. Atmospheric visibility increased with temperature but decreased with the concentrations of SO2, CO, PM10, NO, and NO2. The results showed that atmospheric visibility was season dependent and had significant correlations with temperature, the mass concentrations of PM10 and NO2, and the air pollution index API (correlation coefficients |R| ≥ 0.7). Mathematical models for predicting atmospheric visibility were thus proposed. By comparison, the proposed visibility prediction models were more accurate than some existing regional models. In addition to improving visibility prediction accuracy, this study would be useful for understanding the context of low atmospheric visibility, exploring possible remedial measures, and evaluating the impact of air pollution and atmospheric visibility impairment in this region.
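
    In the spirit of the regression models described above, a minimal multiple-linear-regression sketch is given below. The predictor set, coefficients and data are synthetic placeholders, not the Hong Kong measurements or the published seasonal models.

    ```python
    # Fit visibility ~ temperature + PM10 + NO2 by ordinary least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    temp = rng.uniform(10, 35, n)      # air temperature [degC]
    pm10 = rng.uniform(20, 150, n)     # PM10 [ug/m3]
    no2 = rng.uniform(10, 120, n)      # NO2 [ug/m3]
    # Synthetic "truth": visibility rises with temperature, falls with pollutants.
    vis = 18 + 0.2 * temp - 0.05 * pm10 - 0.03 * no2 + rng.normal(0, 1.0, n)

    X = np.column_stack([np.ones(n), temp, pm10, no2])
    beta, *_ = np.linalg.lstsq(X, vis, rcond=None)
    r2 = 1 - np.sum((vis - X @ beta) ** 2) / np.sum((vis - vis.mean()) ** 2)
    print("coefficients:", np.round(beta, 3), " R^2:", round(r2, 3))
    ```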

  18. Combining DSMC Simulations and ROSINA/COPS Data of Comet 67P/Churyumov-Gerasimenko to Develop a Realistic Empirical Coma Model and to Determine Accurate Production Rates

    Science.gov (United States)

    Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.

    2015-12-01

    We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near-comet coma. The empirical model characterizes the neutral coma in a comet-centered, sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time, and declination. The model is a significant improvement over simpler empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean-state, empirical model is its ease and speed of use. One use of such an empirical model is in the calculation of a total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the Rosetta spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location and calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period August 2014 - August 2015 (perihelion).

  19. Biological Model Development as an Opportunity to Provide Content Auditing for the Foundational Model of Anatomy Ontology.

    Science.gov (United States)

    Wang, Lucy L; Grunblatt, Eli; Jung, Hyunggu; Kalet, Ira J; Whipple, Mark E

    2015-01-01

    Constructing a biological model using an established ontology provides a unique opportunity to perform content auditing on the ontology. We built a Markov chain model to study tumor metastasis in the regional lymphatics of patients with head and neck squamous cell carcinoma (HNSCC). The model attempts to determine regions with high likelihood for metastasis, which guides surgeons and radiation oncologists in selecting the boundaries of treatment. To achieve consistent anatomical relationships, the nodes in our model are populated using lymphatic objects extracted from the Foundational Model of Anatomy (FMA) ontology. During this process, we discovered several classes of inconsistencies in the lymphatic representations within the FMA. We were able to use this model building opportunity to audit the entities and connections in this region of interest (ROI). We found five subclasses of errors that are computationally detectable and resolvable, one subclass of errors that is computationally detectable but unresolvable, requiring the assistance of a content expert, and also errors of content, which cannot be detected through computational means. Mathematical descriptions of detectable errors along with expert review were used to discover inconsistencies and suggest concepts for addition and removal. Out of 106 organ and organ parts in the ROI, 8 unique entities were affected, leading to the suggestion of 30 concepts for addition and 4 for removal. Out of 27 lymphatic chain instances, 23 were found to have errors, with a total of 32 concepts suggested for addition and 15 concepts for removal. These content corrections are necessary for the accurate functioning of the FMA and provide benefits for future research and educational uses.

  20. Time separation technique: Accurate solution for 4D C-Arm-CT perfusion imaging using a temporal decomposition model.

    Science.gov (United States)

    Bannasch, Sebastian; Frysch, Robert; Pfeiffer, Tim; Warnecke, Gerald; Rose, Georg

    2018-03-01

    Perfusion imaging with a temporal decomposition model aims to enable the reconstruction of undersampled measurements acquired with a slowly rotating x-ray-based imaging system, for example, a C-arm-based cone beam computed tomography (CB-CT) system. The aim of this work is to integrate prior knowledge into the dynamic CT task in order to reduce the required number of views and the computational effort, as well as to save dose. The prior knowledge comprises a mathematical model and clinical perfusion data. In model-based perfusion imaging via superposition of specified orthogonal temporal basis functions, a priori knowledge is incorporated into the reconstructions. Instead of estimating the dynamic attenuation of each voxel by a weighted sum, the modeling is done as a preprocessing step in the projection space. This point of view provides a method that decomposes the temporal and spatial domains of dynamic CT data. The resulting projection sets consist of spatial information that can be treated as individual static CT tasks. Consequently, the high-dimensional model-based CT system can be completely transformed, allowing the use of an arbitrary reconstruction algorithm. For CT, reconstructions of preprocessed dynamic in silico data are illustrated and evaluated by means of conventional clinical parameters for stroke diagnostics. The time separation technique presented here provides the expected accuracy of model-based CT perfusion imaging. Consequently, the 4D task handled with the model-based approach can be solved approximately as fast as the corresponding static 3D task. For C-arm-based CB-CT, the algorithm presented here makes model-based perfusion reconstruction possible without its associated high computational cost. Thus, this algorithm potentially allows practical applications to benefit from model-based perfusion imaging. This study is a proof of concept. © 2018 American Association of Physicists in Medicine.
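
    The decomposition step can be illustrated on a single ray. The sketch below fits a short, undersampled time-attenuation curve with a few orthogonal temporal basis functions (Legendre polynomials here, chosen only for illustration); in the full method each resulting coefficient "projection" would then be reconstructed with any static CT algorithm.

    ```python
    import numpy as np

    t = np.linspace(0.0, 1.0, 40)                                # acquisition times of one ray
    basis = np.polynomial.legendre.legvander(2 * t - 1, deg=3)   # orthogonal temporal basis

    p_t = 0.8 * t**2 * np.exp(-4 * t)                            # synthetic bolus-like attenuation

    coeffs, *_ = np.linalg.lstsq(basis, p_t, rcond=None)         # temporal fit per ray
    fit_error = np.max(np.abs(basis @ coeffs - p_t))
    print("basis coefficients:", np.round(coeffs, 4))
    print("max temporal fit error:", float(fit_error))
    ```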

  1. Computed-tomography-based finite-element models of long bones can accurately capture strain response to bending and torsion.

    Science.gov (United States)

    Varghese, Bino; Short, David; Penmetsa, Ravi; Goswami, Tarun; Hangartner, Thomas

    2011-04-29

    Finite element (FE) models of long bones constructed from computed-tomography (CT) data are emerging as an invaluable tool in the field of bone biomechanics. However, the performance of such FE models is highly dependent on the accurate capture of geometry and appropriate assignment of material properties. In this study, a combined numerical-experimental study is performed comparing FE-predicted surface strains with strain-gauge measurements. Thirty-six major cadaveric long bones (humerus, radius, femur and tibia), which cover a wide range of bone sizes, were tested under three-point bending and torsion. The FE models were constructed from trans-axial volumetric CT scans, and the segmented bone images were corrected for partial-volume effects. The material properties (Young's modulus for cortex, density-modulus relationship for trabecular bone and Poisson's ratio) were calibrated by minimizing the error between experiments and simulations among all bones. The R² values of the measured strains versus load under three-point bending and torsion were 0.96-0.99 and 0.61-0.99, respectively, for all bones in our dataset. The errors of the calculated FE strains in comparison to those measured using strain gauges in the mechanical tests ranged from -6% to 7% under bending and from -37% to 19% under torsion. The observation of comparatively low errors and high correlations between the FE-predicted strains and the experimental strains, across the various types of bones and loading conditions (bending and torsion), validates our approach to bone segmentation and our choice of material properties. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. ReFOLD: a server for the refinement of 3D protein models guided by accurate quality estimates.

    Science.gov (United States)

    Shuid, Ahmad N; Kempster, Robert; McGuffin, Liam J

    2017-07-03

    ReFOLD is a novel hybrid refinement server with integrated high-performance global and local Accuracy Self Estimates (ASEs). The server attempts to identify and fix likely errors in user-supplied 3D models of proteins via successive rounds of refinement. The server is unique in providing output for multiple alternative refined models in a way that allows users to quickly visualize the key residue locations that are likely to have been improved. This is important, as global refinement of a full-chain model may not always be possible, whereas local regions, or individual domains, can often be much improved. Thus, users may easily compare the specific regions of the alternative refined models in which they are most interested, e.g. key interaction sites or domains. ReFOLD was used to generate hundreds of alternative refined models for the CASP12 experiment, boosting our group's performance in the main tertiary structure prediction category. Our successful refinement of initial server models, combined with our built-in ASEs, was instrumental to our second-place ranking on Template Based Modeling (TBM) and Free Modeling (FM)/TBM targets. The ReFOLD server is freely available at: http://www.reading.ac.uk/bioinf/ReFOLD/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Importance of housekeeping gene selection for accurate reverse transcription-quantitative polymerase chain reaction in a wound healing model.

    Science.gov (United States)

    Turabelidze, Anna; Guo, Shujuan; DiPietro, Luisa A

    2010-01-01

    Studies in the field of wound healing have utilized a variety of different housekeeping genes for reverse transcription-quantitative polymerase chain reaction (RT-qPCR) analysis. However, nearly all of these studies assume that the selected normalization gene is stably expressed throughout the course of the repair process. The purpose of our current investigation was to identify the most stable housekeeping genes for studying gene expression in mouse wound healing using RT-qPCR. To identify which housekeeping genes are optimal for studying gene expression in wound healing, we examined all articles published in Wound Repair and Regeneration that cited RT-qPCR during the period of January/February 2008 until July/August 2009. We determined that ACTβ, GAPDH, 18S, and β2M were the most frequently used housekeeping genes in human, mouse, and pig studies. We also investigated nine commonly used housekeeping genes that are not generally used in wound healing models: GUS, TBP, RPLP2, ATP5B, SDHA, UBC, CANX, CYC1, and YWHAZ. We observed that wounded and unwounded tissues have contrasting housekeeping gene expression stability. The results demonstrate that commonly used housekeeping genes must be validated as accurate normalizing genes for each individual experimental condition. © 2010 by the Wound Healing Society.
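
    A common way to rank such candidate reference genes is a geNorm-style stability measure M (the mean standard deviation of pairwise log-ratios); the snippet below is a generic illustration of that idea on synthetic relative quantities, not the analysis pipeline used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    genes = ["ACTB", "GAPDH", "18S", "B2M"]
    # rows = samples (wounded and unwounded), columns = genes; relative quantities
    q = np.exp(rng.normal(0.0, [0.15, 0.35, 0.25, 0.55], size=(12, 4)))

    def stability_m(q):
        logq = np.log2(q)
        m = np.zeros(logq.shape[1])
        for j in range(logq.shape[1]):
            sds = [np.std(logq[:, j] - logq[:, k], ddof=1)
                   for k in range(logq.shape[1]) if k != j]
            m[j] = np.mean(sds)                 # lower M = more stable reference gene
        return m

    for gene, m in sorted(zip(genes, stability_m(q)), key=lambda x: x[1]):
        print(f"{gene}: M = {m:.3f}")
    ```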

  4. Accurate Monte Carlo modeling of cyclotrons for optimization of shielding and activation calculations in the biomedical field

    International Nuclear Information System (INIS)

    Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano

    2015-01-01

    Biomedical cyclotrons for the production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are widely diffused and established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods for the calculation of both shielding and material activation, in approximate or idealized geometry set-ups. The availability of Monte Carlo codes with accurate and up-to-date libraries for the transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurement and simulations of 0.99±0.07 was found. Saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA

  5. Accurate Monte Carlo modeling of cyclotrons for optimization of shielding and activation calculations in the biomedical field

    Science.gov (United States)

    Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano

    2015-11-01

    Biomedical cyclotrons for the production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are widely diffused and established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods for the calculation of both shielding and material activation, in approximate or idealized geometry set-ups. The availability of Monte Carlo codes with accurate and up-to-date libraries for the transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurement and simulations of 0.99±0.07 was found. Saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended

  6. A gauged finite-element potential formulation for accurate inductive and galvanic modelling of 3-D electromagnetic problems

    Science.gov (United States)

    Ansari, S. M.; Farquharson, C. G.; MacLachlan, S. P.

    2017-07-01

    In this paper, a new finite-element solution to the potential formulation of the geophysical electromagnetic (EM) problem that explicitly implements the Coulomb gauge, and that accurately computes the potentials and hence inductive and galvanic components, is proposed. The modelling scheme is based on using unstructured tetrahedral meshes for domain subdivision, which enables both realistic Earth models of complex geometries to be considered and efficient spatially variable refinement of the mesh to be done. For the finite-element discretization edge and nodal elements are used for approximating the vector and scalar potentials respectively. The issue of non-unique, incorrect potentials from the numerical solution of the usual incomplete-gauged potential system is demonstrated for a benchmark model from the literature that uses an electric-type EM source, through investigating the interface continuity conditions for both the normal and tangential components of the potential vectors, and by showing inconsistent results obtained from iterative and direct linear equation solvers. By explicitly introducing the Coulomb gauge condition as an extra equation, and by augmenting the Helmholtz equation with the gradient of a Lagrange multiplier, an explicitly gauged system for the potential formulation is formed. The solution to the discretized form of this system is validated for the above-mentioned example and for another classic example that uses a magnetic EM source. In order to stabilize the iterative solution of the gauged system, a block diagonal pre-conditioning scheme that is based upon the Schur complement of the potential system is used. For all examples, both the iterative and direct solvers produce the same responses for the potentials, demonstrating the uniqueness of the numerical solution for the potentials and fixing the problems with the interface conditions between cells observed for the incomplete-gauged system. These solutions of the gauged system also

  7. RCK: accurate and efficient inference of sequence- and structure-based protein-RNA binding models from RNAcompete data.

    Science.gov (United States)

    Orenstein, Yaron; Wang, Yuhao; Berger, Bonnie

    2016-06-15

    Protein-RNA interactions, which play vital roles in many processes, are mediated through both RNA sequence and structure. CLIP-based methods, which measure protein-RNA binding in vivo, suffer from experimental noise and systematic biases, whereas in vitro experiments capture a clearer signal of protein-RNA binding. Among them, RNAcompete provides binding affinities of a specific protein to more than 240 000 unstructured RNA probes in one experiment. The computational challenge is to infer RNA structure- and sequence-based binding models from these data. The state-of-the-art in sequence models, Deepbind, does not model structural preferences. RNAcontext models both sequence and structure preferences, but is outperformed by GraphProt. Unfortunately, GraphProt cannot detect structural preferences from RNAcompete data due to the unstructured nature of the data, as noted by its developers, nor can it be tractably run on the full RNAcompete dataset. We develop RCK, an efficient, scalable algorithm that infers both sequence and structure preferences based on a new k-mer based model. Remarkably, even though RNAcompete data is designed to be unstructured, RCK can still learn structural preferences from it. RCK significantly outperforms both RNAcontext and Deepbind in in vitro binding prediction for 244 RNAcompete experiments. Moreover, RCK is also faster and uses less memory, which enables scalability. While currently on par with existing methods in in vivo binding prediction on a small-scale test, we demonstrate that RCK will increasingly benefit from experimentally measured RNA structure profiles as compared to computationally predicted ones. By running RCK on the entire RNAcompete dataset, we generate and provide as a resource a set of protein-RNA structure-based models on an unprecedented scale. Software and models are freely available at http://rck.csail.mit.edu/. Contact: bab@mit.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by

  8. Thermodynamically accurate modeling of the catalytic cycle of photosynthetic oxygen evolution: a mathematical solution to asymmetric Markov chains.

    Science.gov (United States)

    Vinyard, David J; Zachary, Chase E; Ananyev, Gennady; Dismukes, G Charles

    2013-07-01

    Forty-three years ago, Kok and coworkers introduced a phenomenological model describing period-four oscillations in O2 flash yields during photosynthetic water oxidation (WOC), which had been first reported by Joliot and coworkers. The original two-parameter Kok model was subsequently extended in its level of complexity to better simulate diverse data sets, including intact cells and isolated PSII-WOCs, but at the expense of introducing physically unrealistic assumptions necessary to enable numerical solutions. To date, analytical solutions have been found only for symmetric Kok models (inefficiencies are equally probable for all intermediates, called "S-states"). However, it is widely accepted that S-state reaction steps are not identical and some are not reversible (by thermodynamic restraints) thereby causing asymmetric cycles. We have developed a mathematically more rigorous foundation that eliminates unphysical assumptions known to be in conflict with experiments and adopts a new experimental constraint on solutions. This new algorithm termed STEAMM for S-state Transition Eigenvalues of Asymmetric Markov Models enables solutions to models having fewer adjustable parameters and uses automated fitting to experimental data sets, yielding higher accuracy and precision than the classic Kok or extended Kok models. This new tool provides a general mathematical framework for analyzing damped oscillations arising from any cycle period using any appropriate Markov model, regardless of symmetry. We illustrate applications of STEAMM that better describe the intrinsic inefficiencies for photon-to-charge conversion within PSII-WOCs that are responsible for damped period-four and period-two oscillations of flash O2 yields across diverse species, while using simpler Markov models free from unrealistic assumptions. Copyright © 2013 Elsevier B.V. All rights reserved.
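
    The flash-yield behaviour that such models describe can be reproduced with a few lines: an asymmetric transition matrix over the four S-states, iterated flash by flash. The miss probabilities and the dark-adapted S0/S1 mixture below are illustrative assumptions, and the snippet is not the STEAMM algorithm itself (which solves the eigenvalue problem analytically).

    ```python
    import numpy as np

    misses = np.array([0.05, 0.10, 0.20, 0.15])    # hypothetical per-S-state miss probabilities
    T = np.zeros((4, 4))
    for i, m in enumerate(misses):
        T[i, i] = m                     # miss: remain in S_i
        T[(i + 1) % 4, i] = 1.0 - m     # hit: advance S_i -> S_{i+1}; S3 -> S0 releases O2

    state = np.array([0.25, 0.75, 0.0, 0.0])       # assumed dark-adapted S0/S1 mixture
    yields = []
    for flash in range(16):
        yields.append((1.0 - misses[3]) * state[3])   # O2 released on the S3 -> S0 step
        state = T @ state
    print(np.round(yields, 3))   # damped period-four oscillation peaking on the third flash
    ```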

  9. Combined endeavor of Neutrosophic Set and Chan-Vese model to extract accurate liver image from CT scan.

    Science.gov (United States)

    Siri, Sangeeta K; Latte, Mrityunjaya V

    2017-11-01

    Many different diseases can occur in the liver, including infections such as hepatitis, as well as cirrhosis, cancer, and the adverse effects of medication or toxins. The foremost stage for computer-aided diagnosis of the liver is the identification of the liver region. Liver segmentation algorithms extract the liver image from scan images, which helps in virtual surgery simulation, speeds up diagnosis, and supports accurate investigation and surgery planning. The existing liver segmentation algorithms try to extract the exact liver image from abdominal Computed Tomography (CT) scan images. It is an open problem because of ambiguous boundaries, large variation in intensity distribution, variability of liver geometry from patient to patient and the presence of noise. A novel approach is proposed to meet the challenges in extracting the exact liver image from abdominal CT scan images. The proposed approach consists of three phases: (1) pre-processing, (2) CT scan image transformation to a Neutrosophic Set (NS) and (3) post-processing. In pre-processing, the noise is removed by a median filter. A "new structure" is designed to transform a CT scan image into the neutrosophic domain, which is expressed using three membership subsets: a True subset (T), a False subset (F) and an Indeterminacy subset (I). This transform approximately extracts the liver image structure. In the post-processing phase, a morphological operation is performed on the indeterminacy subset (I) and the Chan-Vese (C-V) model is applied, with detection of an initial contour within the liver without user intervention. This results in liver boundary identification with high accuracy. Experiments show that the proposed method is effective, robust and comparable with existing algorithms for liver segmentation of CT scan images. Copyright © 2017 Elsevier B.V. All rights reserved.
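
    The pre-processing and neutrosophic-transform stages can be mimicked with standard array operations; the sketch below shows only that part (the Chan-Vese contour evolution is omitted), and the membership definitions, window sizes and thresholds are placeholders rather than the paper's "new structure".

    ```python
    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    ct = np.random.rand(128, 128).astype(np.float32)   # stand-in for an abdominal CT slice

    den = median_filter(ct, size=3)                    # (1) noise removal
    local_mean = uniform_filter(den, size=5)
    T = (local_mean - local_mean.min()) / (np.ptp(local_mean) + 1e-12)   # true membership
    I = np.abs(den - local_mean)                       # indeterminacy ~ local inhomogeneity
    I = (I - I.min()) / (np.ptp(I) + 1e-12)
    F = 1.0 - T                                        # false membership

    seed = (T > 0.6) & (I < 0.2)                       # crude initial liver region
    print("seed pixels:", int(seed.sum()))
    # A Chan-Vese active contour initialised inside this seed region would then
    # refine the liver boundary, as described in the abstract.
    ```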

  10. Mixture models reveal multiple positional bias types in RNA-Seq data and lead to accurate transcript concentration estimates.

    Directory of Open Access Journals (Sweden)

    Andreas Tuerk

    2017-05-01

    Full Text Available Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (read "mix square"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R² between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R² between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We further observe improved repeatability across laboratory sites with a relative increase in R² between 8% and 44% and reduced standard deviation.

  11. ModFOLD6: an accurate web server for the global and local quality estimation of 3D protein models.

    Science.gov (United States)

    Maghrabi, Ali H A; McGuffin, Liam J

    2017-07-03

    Methods that reliably estimate the likely similarity between the predicted and native structures of proteins have become essential for driving the acceptance and adoption of three-dimensional protein models by life scientists. ModFOLD6 is the latest version of our leading resource for Estimates of Model Accuracy (EMA), which uses a pioneering hybrid quasi-single model approach. The ModFOLD6 server integrates scores from three pure-single model methods and three quasi-single model methods using a neural network to estimate local quality scores. Additionally, the server provides three options for producing global score estimates, depending on the requirements of the user: (i) ModFOLD6_rank, which is optimized for ranking/selection, (ii) ModFOLD6_cor, which is optimized for correlations of predicted and observed scores and (iii) ModFOLD6 global for balanced performance. The ModFOLD6 methods rank among the top few for EMA, according to independent blind testing by the CASP12 assessors. The ModFOLD6 server is also continuously automatically evaluated as part of the CAMEO project, where significant performance gains have been observed compared to our previous server and other publicly available servers. The ModFOLD6 server is freely available at: http://www.reading.ac.uk/bioinf/ModFOLD/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Image charge models for accurate construction of the electrostatic self-energy of 3D layered nanostructure devices

    Science.gov (United States)

    Barker, John R.; Martinez, Antonio

    2018-04-01

    Efficient analytical image charge models are derived for the full spatial variation of the electrostatic self-energy of electrons in semiconductor nanostructures that arises from dielectric mismatch, using semi-classical analysis. The methodology provides a fast, compact and physically transparent computation for advanced device modeling. The underlying semi-classical model for the self-energy has been established and validated during recent years and depends on a slight modification of the macroscopic static dielectric constants for individual homogeneous dielectric regions. The model has been validated for point charges as close as one interatomic spacing to a sharp interface. A brief introduction to image charge methodology is followed by a discussion and demonstration of the traditional failure of the methodology to derive the electrostatic potential at arbitrary distances from a source charge. However, the self-energy involves the local limit of the difference between the electrostatic Green functions for the full dielectric heterostructure and the homogeneous equivalent. It is shown that high convergence may be achieved for the image charge method in this local limit. A simple re-normalisation technique is introduced to reduce the number of image terms to a minimum. A number of progressively complex 3D models are evaluated analytically and compared with high-precision numerical computations. Accuracies of 1% are demonstrated. Introducing a simple technique for modeling the transition of the self-energy between disparate dielectric structures, we generate an analytical model that describes the self-energy as a function of position within the source, drain and gated channel of a silicon wrap-round-gate field-effect transistor with a cross-section of a few nanometers. At such scales the self-energies become large (typically up to ~100 meV) close to the interfaces as well as along the channel. The screening of a gated structure is shown to reduce the self-energy.
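
    For a single planar interface the image-charge construction reduces to a textbook closed form, which gives a feel for the magnitudes quoted above; the layered 3-D device models in the paper superpose many such terms. The material parameters below (silicon against SiO2) are chosen only for illustration.

    ```python
    import numpy as np

    EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]
    Q = 1.602176634e-19       # elementary charge [C]

    def planar_self_energy(z, eps1, eps2):
        """Self-energy of a point charge at distance z from a planar boundary,
        W(z) = q^2 (eps1 - eps2) / (16 pi eps0 eps1 (eps1 + eps2) z)."""
        k = (eps1 - eps2) / (eps1 + eps2)
        return Q**2 * k / (16.0 * np.pi * EPS0 * eps1 * z)

    z = np.array([1e-9, 2e-9, 5e-9])                       # distance from interface [m]
    w_meV = planar_self_energy(z, eps1=11.7, eps2=3.9) / Q * 1e3
    print(np.round(w_meV, 1), "meV")   # positive: the charge in Si is repelled from the oxide
    ```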

  13. The Need for Accurate Risk Prediction Models for Road Mapping, Shared Decision Making and Care Planning for the Elderly with Advanced Chronic Kidney Disease.

    Science.gov (United States)

    Stryckers, Marijke; Nagler, Evi V; Van Biesen, Wim

    2016-11-01

    As people age, chronic kidney disease becomes more common, but it rarely leads to end-stage kidney disease. When it does, the choice between dialysis and conservative care can be daunting, as much depends on life expectancy and personal expectations of medical care. Shared decision making implies adequately informing patients about their options, and facilitating deliberation of the available information, such that decisions are tailored to the individual's values and preferences. Accurate estimations of one's risk of progression to end-stage kidney disease and death with or without dialysis are essential for shared decision making to be effective. Formal risk prediction models can help, provided they are externally validated, well-calibrated and discriminative; include unambiguous and measurable variables; and come with readily applicable equations or scores. Reliable, externally validated risk prediction models for progression of chronic kidney disease to end-stage kidney disease or mortality in frail elderly patients with or without chronic kidney disease are scant. Within this paper, we discuss a number of promising models, highlighting both the strengths and limitations that physicians should understand to use them judiciously, and emphasize the need for external validation rather than new development to further advance the field.

  14. Accurate nonlinear modeling for flexible manipulators using mixed finite element formulation in order to obtain maximum allowable load

    Energy Technology Data Exchange (ETDEWEB)

    Esfandiar, Habib; KoraYem, Moharam Habibnejad [Islamic Azad University, Tehran (Iran, Islamic Republic of)

    2015-09-15

    In this study, the researchers examine the nonlinear dynamics of flexible manipulators and determine their dynamic load-carrying capacity (DLCC). Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To eliminate the risk of shear locking, a new procedure based on a mixed finite element formulation is presented. In the proposed method, the shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out by taking into account small- and large-deformation models and using the extended Hamilton method. The system equations of motion are obtained by using the nonlinear relationship between displacements and strains and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of the flexible manipulators along a prescribed path, considering constraints on end-effector accuracy, maximum motor torque and maximum stress in the manipulators. Simulation studies are conducted to evaluate the efficiency of the proposed method, considering two-link flexible, fixed-base manipulators following linear and circular paths. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and appropriate performance of the proposed method.

  15. Can Impacts of Climate Change and Agricultural Adaptation Strategies Be Accurately Quantified if Crop Models Are Annually Re-Initialized?

    Science.gov (United States)

    Basso, Bruno; Hyndman, David W; Kendall, Anthony D; Grace, Peter R; Robertson, G Philip

    2015-01-01

    Estimates of climate change impacts on global food production are generally based on statistical or process-based models. Process-based models can provide robust predictions of agricultural yield responses to changing climate and management. However, applications of these models often suffer from bias due to the common practice of re-initializing soil conditions to the same state for each year of the forecast period. If simulations neglect to include year-to-year changes in initial soil conditions and water content related to agronomic management, adaptation and mitigation strategies designed to maintain stable yields under climate change cannot be properly evaluated. We apply a process-based crop system model that avoids re-initialization bias to demonstrate the importance of simulating both year-to-year and cumulative changes in pre-season soil carbon, nutrient, and water availability. Results are contrasted with simulations using annual re-initialization, and differences are striking. We then demonstrate the potential for the most likely adaptation strategy to offset climate change impacts on yields using continuous simulations through the end of the 21st century. Simulations that annually re-initialize pre-season soil carbon and water contents introduce an inappropriate yield bias that obscures the potential for agricultural management to ameliorate the deleterious effects of rising temperatures and greater rainfall variability.

  16. Accurate nonlinear modeling for flexible manipulators using mixed finite element formulation in order to obtain maximum allowable load

    International Nuclear Information System (INIS)

    Esfandiar, Habib; KoraYem, Moharam Habibnejad

    2015-01-01

    In this study, the researchers examine the nonlinear dynamics of flexible manipulators and determine their dynamic load-carrying capacity (DLCC). Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To eliminate the risk of shear locking, a new procedure based on a mixed finite element formulation is presented. In the proposed method, the shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out by taking into account small- and large-deformation models and using the extended Hamilton method. The system equations of motion are obtained by using the nonlinear relationship between displacements and strains and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of the flexible manipulators along a prescribed path, considering constraints on end-effector accuracy, maximum motor torque and maximum stress in the manipulators. Simulation studies are conducted to evaluate the efficiency of the proposed method, considering two-link flexible, fixed-base manipulators following linear and circular paths. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and appropriate performance of the proposed method.

  17. Fast and Accurate Icepak-PSpice Co-Simulation of IGBTs under Short-Circuit with an Advanced PSpice Model

    DEFF Research Database (Denmark)

    Wu, Rui; Iannuzzo, Francesco; Wang, Huai

    2014-01-01

    A basic problem in the IGBT short-circuit failure mechanism study is to obtain realistic temperature distribution inside the chip, which demands accurate electrical simulation to obtain power loss distribution as well as detailed IGBT geometry and material information. This paper describes an unp...

  18. Accurate Theoretical Methane Line Lists in the Infrared up to 3000 K and Quasi-continuum Absorption/Emission Modeling for Astrophysical Applications

    Science.gov (United States)

    Rey, Michael; Nikitin, Andrei V.; Tyuterev, Vladimir G.

    2017-10-01

    Modeling atmospheres of hot exoplanets and brown dwarfs requires high-T databases that include methane as the major hydrocarbon. We report a complete theoretical line list of ¹²CH₄ in the infrared range 0-13,400 cm⁻¹ up to T_max = 3000 K computed via a full quantum-mechanical method from ab initio potential energy and dipole moment surfaces. Over 150 billion transitions were generated with a lower rovibrational energy cutoff of 33,000 cm⁻¹ and an intensity cutoff down to 10⁻³³ cm/molecule to ensure convergent opacity predictions. Empirical corrections for 3.7 million of the strongest transitions permitted line position accuracies of 0.001-0.01 cm⁻¹. The full data are partitioned into two sets. "Light lists" contain strong and medium transitions necessary for an accurate description of sharp features in absorption/emission spectra. For fast and efficient modeling of quasi-continuum cross sections, billions of tiny lines are compressed into "super-line" libraries according to Rey et al. These combined data will be freely accessible via the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru), which provides a user-friendly interface for simulations of absorption coefficients, cross-sectional transmittance, and radiance. Comparisons with cold, room-temperature, and high-T experimental data show that the data reported here represent the first global theoretical methane lists suitable for high-resolution astrophysical applications.
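
    A toy version of the "super-line" compression is shown below: weak transitions are binned onto a fixed wavenumber grid and their intensities summed, while strong lines are kept individually. The grid step, intensity threshold and synthetic line list are arbitrary choices for illustration, not the TheoReTS settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    nu = rng.uniform(0.0, 13400.0, 1_000_000)          # synthetic line positions [cm^-1]
    s = 10.0 ** rng.uniform(-33, -20, nu.size)         # synthetic intensities [cm/molecule]

    strong = s > 1e-24                                 # hypothetical "light list" threshold
    grid = np.arange(0.0, 13400.0 + 0.01, 0.01)        # 0.01 cm^-1 super-line grid
    super_lines, _ = np.histogram(nu[~strong], bins=grid, weights=s[~strong])

    print("kept individually:", int(strong.sum()), "lines")
    print("compressed into:", int((super_lines > 0).sum()), "super-lines")
    ```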

  19. Testing the importance of accurate meteorological input fields and parameterizations in atmospheric transport modelling using DREAM - Validation against ETEX-1

    DEFF Research Database (Denmark)

    Brandt, J.; Bastrup-Birk, A.; Christensen, J.H.

    1998-01-01

    A tracer model, the DREAM, which is based on a combination of a near-range Lagrangian model and a long-range Eulerian model, has been developed. The meteorological meso-scale model, MM5V1, is implemented as a meteorological driver for the tracer model. The model system is used for studying...

  20. StatSTEM: An efficient approach for accurate and precise model-based quantification of atomic resolution electron microscopy images.

    Science.gov (United States)

    De Backer, A; van den Bos, K H W; Van den Broek, W; Sijbers, J; Van Aert, S

    2016-12-01

    An efficient model-based estimation algorithm is introduced to quantify the atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns fully accounting for overlap between neighbouring columns, enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which measurements for the atomic column positions and scattering cross-sections from annular dark field (ADF) STEM images can be estimated, has been investigated. The highest attainable precision is reached even for low dose images. Furthermore, the advantages of the model-based approach taking into account overlap between neighbouring columns are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their distance and for the estimation of the scattering cross-section which is compared to the integrated intensity from a Voronoi cell. To provide end-users this well-established quantification method, a user friendly program, StatSTEM, is developed which is freely available under a GNU public license. Copyright © 2016 Elsevier B.V. All rights reserved.
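
    The core estimation step can be imitated with a least-squares fit of a peaked model to one image segment; the sketch below uses a single 2-D Gaussian column on synthetic data and is only an analogue of the approach, not StatSTEM itself.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    yy, xx = np.mgrid[0:16, 0:16]

    def column_model(p, xx, yy):
        x0, y0, height, width, bg = p
        return bg + height * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * width ** 2))

    true = np.array([7.3, 8.1, 120.0, 1.8, 5.0])          # synthetic column parameters
    rng = np.random.default_rng(3)
    img = rng.poisson(column_model(true, xx, yy)).astype(float)   # noisy ADF-like segment

    res = least_squares(lambda p: (column_model(p, xx, yy) - img).ravel(),
                        x0=[8.0, 8.0, 100.0, 2.0, 0.0])
    x0, y0, height, width, bg = res.x
    scs = 2.0 * np.pi * height * width ** 2    # integrated peak volume ~ scattering cross-section
    print(f"position = ({x0:.2f}, {y0:.2f}) px, cross-section = {scs:.1f} counts")
    ```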

  1. Accurate Laser Measurements of the Water Vapor Self-Continuum Absorption in Four Near Infrared Atmospheric Windows. a Test of the MT_CKD Model.

    Science.gov (United States)

    Campargue, Alain; Kassi, Samir; Mondelain, Didier; Romanini, Daniele; Lechevallier, Loïc; Vasilchenko, Semyon

    2017-06-01

    The semi-empirical MT_CKD model of the absorption continuum of water vapor is widely used in atmospheric radiative transfer codes for the atmospheres of Earth and exoplanets, but lacks experimental validation in the atmospheric windows. Recent laboratory measurements by Fourier transform spectroscopy have led to self-continuum cross-sections much larger than the MT_CKD values in the near-infrared transparency windows. In the present work, we report accurate water vapor absorption continuum measurements by Cavity Ring Down Spectroscopy (CRDS) and Optical-Feedback-Cavity Enhanced Laser Spectroscopy (OF-CEAS) at selected spectral points of the transparency windows centered around 4.0, 2.1 and 1.25 μm. The temperature dependence of the absorption continuum at 4.38 μm and 3.32 μm is measured in the 23-39 °C range. The self-continuum water vapor absorption is derived either from the baseline variation of spectra recorded for a series of pressure values over a small spectral interval, or from baseline monitoring at fixed laser frequency during pressure ramps. In order to avoid possible bias when approaching the water saturation pressure, the maximum pressure value was limited to about 16 Torr, corresponding to a 75% relative humidity. After subtraction of the local water monomer line contribution, self-continuum cross-sections, C_S, were determined with a few percent accuracy from the pressure-squared dependence of the spectral baseline level. Together with our previous CRDS and OF-CEAS measurements in the 2.1 and 1.6 μm windows, the derived water vapor self-continuum provides a unique set of self-continuum cross-sections for a test of the MT_CKD model in four transparency windows. Although showing some important deviations in the absolute values (up to a factor of 4 at the center of the 2.1 μm window), our accurate measurements validate the overall frequency dependence of the MT_CKD2.8 model.
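
    The retrieval principle, a baseline loss that grows with the square of the water vapour pressure once the monomer lines are removed, can be illustrated numerically. The pressures, noise level and quadratic coefficient below are invented, and converting the fitted coefficient to C_S would additionally require the number density at the cell temperature.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    P = np.linspace(2.0, 16.0, 8)                           # water vapour pressures [Torr]
    k_true = 2.5e-10                                        # assumed coefficient [cm^-1 Torr^-2]
    alpha = k_true * P**2 + rng.normal(0.0, 2e-9, P.size)   # baseline loss after line subtraction

    # One-parameter linear least squares for alpha = k * P^2.
    k_fit = np.sum(alpha * P**2) / np.sum(P**4)
    print(f"k = {k_fit:.3e} cm^-1 Torr^-2  (true {k_true:.1e})")
    ```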

  2. Development of the Japanese version of an information aid to provide accurate information on prognosis to patients with advanced non-small-cell lung cancer receiving chemotherapy: a pilot study.

    Science.gov (United States)

    Nakano, Kikuo; Kitahara, Yoshihiro; Mito, Mineyo; Seno, Misato; Sunada, Shoji

    2018-02-27

    Without explicit prognostic information, patients may overestimate their life expectancy and make poor choices at the end of life. We sought to design the Japanese version of an information aid (IA) to provide accurate information on prognosis to patients with advanced non-small-cell lung cancer (NSCLC) and to assess the effects of the IA on hope, psychosocial status, and perception of curability. We developed the Japanese version of an IA, which provided information on survival and cure rates as well as numerical survival estimates for patients with metastatic NSCLC receiving first-line chemotherapy. We then assessed the pre- and post-intervention effects of the IA on hope, anxiety, and perception of curability and treatment benefits. A total of 20 (95%) of 21 patients (65% male; median age, 72 years) completed the IA pilot test. Based on the results, scores on the Distress and Impact Thermometer screening tool for adjustment disorders and major depression tended to decrease (from 4.5 to 2.5; p = 0.204), whereas no significant changes were seen in scores for anxiety on the Japanese version of the Support Team Assessment Schedule or in scores on the Herth Hope Index (from 41.9 to 41.5; p = 0.204). The majority of the patients (16/20, 80%) had high expectations regarding the curative effects of chemotherapy. The Japanese version of the IA appeared to help patients with NSCLC maintain hope, and did not increase their anxiety when they were given explicit prognostic information; however, the IA did not appear to help such patients understand the goal of chemotherapy. Further research is needed to test the findings in a larger sample and to measure the outcomes of explicit prognostic information on hope, psychological status, and perception of curability.

  3. Temperature Field Accurate Modeling and Cooling Performance Evaluation of Direct-Drive Outer-Rotor Air-Cooling In-Wheel Motor

    Directory of Open Access Journals (Sweden)

    Feng Chai

    2016-10-01

    Full Text Available High power density outer-rotor motors commonly use water or oil cooling. A reasonable thermal design for outer-rotor air-cooled motors can effectively enhance the power density without a fluid circulating device. Research on the heat dissipation mechanism of an outer-rotor air-cooled motor can provide guidelines for the selection of a suitable cooling mode and the design of the cooling structure. This study investigates the temperature field of the motor through computational fluid dynamics (CFD) and presents a method to overcome the difficulties in building an accurate temperature field model. The proposed method mainly includes two aspects: a new method for calculating the equivalent thermal conductivity (ETC) of the air-gap in the laminar state, and an equivalent treatment of the thermal circuit comprising the hub, shaft, and bearings. Using an outer-rotor air-cooled in-wheel motor as an example, the temperature field of this motor is calculated numerically using the proposed method; the results are experimentally verified. The heat transfer rate (HTR) of each cooling path is obtained using the numerical results and analytic formulas. The influences of the structural parameters on temperature rise and on the HTR of each cooling path are analyzed. Thereafter, the overload capability of the motor is analyzed under various overload conditions.

  4. The type IIP supernova 2012aw in M95: Hydrodynamical modeling of the photospheric phase from accurate spectrophotometric monitoring

    International Nuclear Information System (INIS)

    Dall'Ora, M.; Botticella, M. T.; Della Valle, M.; Pumo, M. L.; Zampieri, L.; Tomasella, L.; Cappellaro, E.; Benetti, S.; Pignata, G.; Bufano, F.; Bayless, A. J.; Pritchard, T. A.; Taubenberger, S.; Benitez, S.; Kotak, R.; Inserra, C.; Fraser, M.; Elias-Rosa, N.; Haislip, J. B.; Harutyunyan, A.

    2014-01-01

    We present an extensive optical and near-infrared photometric and spectroscopic campaign of the Type IIP supernova SN 2012aw. The data set densely covers the evolution of SN 2012aw shortly after the explosion through the end of the photospheric phase, with two additional photometric observations collected during the nebular phase, to fit the radioactive tail and estimate the ⁵⁶Ni mass. Also included in our analysis is the previously published Swift UV data, therefore providing a complete view of the ultraviolet-optical-infrared evolution of the photospheric phase. On the basis of our data set, we estimate all the relevant physical parameters of SN 2012aw with our radiation-hydrodynamics code: envelope mass M_env ∼ 20 M_⊙, progenitor radius R ∼ 3 × 10¹³ cm (∼430 R_⊙), explosion energy E ∼ 1.5 foe, and initial ⁵⁶Ni mass ∼0.06 M_⊙. These mass and radius values are reasonably well supported by independent evolutionary models of the progenitor, and may suggest a progenitor mass higher than the observational limit of 16.5 ± 1.5 M_⊙ for Type IIP events.

  5. The type IIP supernova 2012aw in M95: Hydrodynamical modeling of the photospheric phase from accurate spectrophotometric monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Ora, M.; Botticella, M. T.; Della Valle, M. [INAF, Osservatorio Astronomico di Capodimonte, Napoli (Italy); Pumo, M. L.; Zampieri, L.; Tomasella, L.; Cappellaro, E.; Benetti, S. [INAF, Osservatorio Astronomico di Padova, I-35122 Padova (Italy); Pignata, G.; Bufano, F. [Departamento de Ciencias Fisicas, Universidad Andres Bello, Avda. Republica 252, Santiago (Chile); Bayless, A. J. [Southwest Research Institute, Department of Space Science, 6220 Culebra Road, San Antonio, TX 78238 (United States); Pritchard, T. A. [Department of Astronomy and Astrophysics, Penn State University, 525 Davey Lab, University Park, PA 16802 (United States); Taubenberger, S.; Benitez, S. [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85741 Garching (Germany); Kotak, R.; Inserra, C.; Fraser, M. [Astrophysics Research Centre, School of Mathematics and Physics, Queen' s University Belfast, Belfast, BT7 1NN (United Kingdom); Elias-Rosa, N. [Institut de Ciències de l' Espai (CSIC-IEEC) Campus UAB, Torre C5, Za plata, E-08193 Bellaterra, Barcelona (Spain); Haislip, J. B. [Department of Physics and Astronomy, University of North Carolina at Chapel Hill, 120 E. Cameron Ave., Chapel Hill, NC 27599 (United States); Harutyunyan, A. [Fundación Galileo Galilei - Telescopio Nazionale Galileo, Rambla José Ana Fernández Pérez 7, E-38712 Breña Baja, TF - Spain (Spain); and others

    2014-06-01

    We present an extensive optical and near-infrared photometric and spectroscopic campaign of the Type IIP supernova SN 2012aw. The data set densely covers the evolution of SN 2012aw shortly after the explosion through the end of the photospheric phase, with two additional photometric observations collected during the nebular phase, to fit the radioactive tail and estimate the ⁵⁶Ni mass. Also included in our analysis is the previously published Swift UV data, therefore providing a complete view of the ultraviolet-optical-infrared evolution of the photospheric phase. On the basis of our data set, we estimate all the relevant physical parameters of SN 2012aw with our radiation-hydrodynamics code: envelope mass M_env ∼ 20 M_⊙, progenitor radius R ∼ 3 × 10¹³ cm (∼430 R_⊙), explosion energy E ∼ 1.5 foe, and initial ⁵⁶Ni mass ∼0.06 M_⊙. These mass and radius values are reasonably well supported by independent evolutionary models of the progenitor, and may suggest a progenitor mass higher than the observational limit of 16.5 ± 1.5 M_⊙ for Type IIP events.

  6. SU-E-T-250: Determining VMAT Machine Limitations of An Elekta Linear Accelerator with Agility MLC for Accurate Modeling in RayStation and Robust Delivery

    International Nuclear Information System (INIS)

    Yang, K; Yu, Z; Chen, H; Mourtada, F

    2015-01-01

    Purpose: To implement VMAT in RayStation with the Elekta Synergy linac with the new Agility MLC, and to utilize the same vendor software to determine the optimum Elekta VMAT machine parameters in RayStation for accurate modeling and robust delivery. Methods: iCOMCat is utilized to create various beam patterns with user-defined dose rate, gantry, MLC and jaw speed for each control point. The accuracy and stability of the output and beam profile are qualified for each isolated functional component of VMAT delivery using an ion chamber and Profiler2 with an isocentric mounting fixture. Service graphing on the linac console is used to verify the mechanical motion accuracy. The determined optimum Elekta VMAT machine parameters were configured in RayStation v4.5.1. To evaluate the overall system performance, TG-119 test cases and nine retrospective VMAT patients were planned in RayStation and validated using both ArcCHECK (with plug and ion chamber) and MapCHECK2. Results: Machine output and profile vary <0.3% when the only variable is dose rate (35 MU/min-600 MU/min). Output variation <0.9% and profile variation <0.3% are observed with additional gantry motion (0.53-5.8 deg/s in both directions). The output and profile variation are still <1% with additional slow leaf motion (<1.5 cm/s in both directions). However, the profile becomes less symmetric, and >1.5% output and 7% profile deviations are seen with >2.5 cm/s leaf motion. All clinical cases achieved plan quality comparable to the treated IMRT plans. The gamma passing rate is 99.5±0.5% on ArcCHECK (<3% isocenter dose deviation) and 99.1±0.8% on MapCHECK2 using 3%/3 mm gamma (10% lower threshold). Mechanical motion accuracy in all VMAT deliveries is <1°/1 mm. Conclusion: Accurate RayStation modeling and robust VMAT delivery are achievable on the Elekta Agility for <2.5 cm/s leaf motion and the full range of dose rate and gantry speed determined by the same vendor software. Our TG-119 and patient results have provided us with the confidence to use VMAT

  7. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter.

    Science.gov (United States)

    Chowdhury, Amor; Sarjaš, Andrej

    2016-09-15

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.
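
    The fusion stage alone can be prototyped quickly; the sketch below (assuming the third-party filterpy package) runs an Unscented Kalman Filter on a toy gap/velocity state with an inverse-square Hall-sensor model. The plant model, sensor constant and noise levels are invented stand-ins, not the identified finite-element parameters from the paper.

    ```python
    import numpy as np
    from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

    DT, K_HALL = 0.001, 4.0e-6          # sample time [s], hypothetical sensor constant

    def fx(x, dt):                      # constant-velocity motion of the air gap [m, m/s]
        return np.array([x[0] + dt * x[1], x[1]])

    def hx(x):                          # Hall reading falls off roughly as 1/gap^2
        return np.array([K_HALL / x[0] ** 2])

    points = MerweScaledSigmaPoints(n=2, alpha=0.1, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=DT, fx=fx, hx=hx, points=points)
    ukf.x = np.array([2.0e-3, 0.0])     # initial gap guess: 2 mm
    ukf.P *= 1e-8
    ukf.R = np.array([[1e-4]])          # measurement noise (disturbed Hall signal)
    ukf.Q = np.diag([1e-12, 1e-8])      # process noise

    rng = np.random.default_rng(5)
    true_gap = 1.5e-3
    for _ in range(200):
        z = K_HALL / true_gap ** 2 + rng.normal(0.0, 1e-2)
        ukf.predict()
        ukf.update(np.array([z]))
    print(f"estimated gap: {ukf.x[0] * 1e3:.3f} mm (true {true_gap * 1e3:.1f} mm)")
    ```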

  8. Application of anatomically accurate, patient-specific 3D printed models from MRI data in urological oncology

    International Nuclear Information System (INIS)

    Wake, N.; Chandarana, H.; Huang, W.C.; Taneja, S.S.; Rosenkrantz, A.B.

    2016-01-01

    Highlights: • We examine 3D printing in the context of urologic oncology. • Patient-specific 3D printed kidney and prostate tumor models were created. • 3D printed models extend the current capabilities of conventional 3D visualization. • 3D printed models may be used for surgical planning and intraoperative guidance.

  9. Providing or designing? Constructing models in primary maths education (IF. 0.756)

    NARCIS (Netherlands)

    van Dijk, I.M.A.W.; van Oers, H.J.M.; Terwel, J.

    2003-01-01

    The goal of this exploratory study was to uncover the construction processes which occur when pupils are taught to work with models in primary maths education. Two approaches were studied: 'providing models' versus 'designing models in co-construction'. A qualitative observational study involved two

  10. 3D reconstruction of coronary arteries from 2D angiographic projections using non-uniform rational basis splines (NURBS) for accurate modelling of coronary stenoses.

    Directory of Open Access Journals (Sweden)

    Francesca Galassi

    Full Text Available Assessment of coronary stenosis severity is crucial in clinical practice. This study proposes a novel method to generate 3D models of stenotic coronary arteries, directly from 2D coronary images, and suitable for immediate assessment of the stenosis severity. From multiple 2D X-ray coronary arteriogram projections, 2D vessels were extracted. A 3D centreline was reconstructed as intersection of surfaces from corresponding branches. Next, 3D luminal contours were generated in a two-step process: first, a Non-Uniform Rational B-Spline (NURBS) circular contour was designed and, second, its control points were adjusted to interpolate computed 3D boundary points. Finally, a 3D surface was generated as an interpolation across the control points of the contours and used in the analysis of the severity of a lesion. To evaluate the method, we compared 3D reconstructed lesions with Optical Coherence Tomography (OCT), an invasive imaging modality that enables high-resolution endoluminal visualization of lesion anatomy. Validation was performed on routine clinical data. Analysis of paired cross-sectional area discrepancies indicated that the proposed method more closely represented OCT contours than conventional approaches in luminal surface reconstruction, with overall root-mean-square errors ranging from 0.213 mm² to 1.013 mm², and a maximum error of 1.837 mm². Comparison of volume reduction due to a lesion with the corresponding FFR measurement suggests that the method may help in estimating the physiological significance of a lesion. The algorithm accurately reconstructed 3D models of lesioned arteries and enabled quantitative assessment of stenoses. The proposed method has the potential to allow immediate analysis of stenoses in clinical practice, thereby providing incremental diagnostic and prognostic information to guide treatments in real time and without the need for invasive techniques.

  11. Accurate Theoretical Methane Line Lists in the Infrared up to 3000 K and Quasi-continuum Absorption/Emission Modeling for Astrophysical Applications

    Energy Technology Data Exchange (ETDEWEB)

    Rey, Michael; Tyuterev, Vladimir G. [Groupe de Spectrométrie Moléculaire et Atmosphérique, UMR CNRS 7331, BP 1039, F-51687, Reims Cedex 2 (France); Nikitin, Andrei V., E-mail: michael.rey@univ-reims.fr [Laboratory of Theoretical Spectroscopy, Institute of Atmospheric Optics, SB RAS, 634055 Tomsk (Russian Federation)

    2017-10-01

    Modeling atmospheres of hot exoplanets and brown dwarfs requires high-T databases that include methane as the major hydrocarbon. We report a complete theoretical line list of {sup 12}CH{sub 4} in the infrared range 0–13,400 cm{sup −1} up to T {sub max} = 3000 K computed via a full quantum-mechanical method from ab initio potential energy and dipole moment surfaces. Over 150 billion transitions were generated with the lower rovibrational energy cutoff 33,000 cm{sup −1} and intensity cutoff down to 10{sup −33} cm/molecule to ensure convergent opacity predictions. Empirical corrections for 3.7 million of the strongest transitions permitted line position accuracies of 0.001–0.01 cm{sup −1}. Full data are partitioned into two sets. “Light lists” contain strong and medium transitions necessary for an accurate description of sharp features in absorption/emission spectra. For a fast and efficient modeling of quasi-continuum cross sections, billions of tiny lines are compressed in “super-line” libraries according to Rey et al. These combined data will be freely accessible via the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru), which provides a user-friendly interface for simulations of absorption coefficients, cross-sectional transmittance, and radiance. Comparisons with cold, room, and high-T experimental data show that the data reported here represent the first global theoretical methane lists suitable for high-resolution astrophysical applications.

  12. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    Science.gov (United States)

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to a substantial loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible modeling based on a gamma distributed signal and a normally distributed background noise and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement

  13. An Accurate Analytical Model for 802.11e EDCA under Different Traffic Conditions with Contention-Free Bursting

    Directory of Open Access Journals (Sweden)

    Nada Chendeb Taher

    2011-01-01

    Full Text Available Extensive research addressing IEEE 802.11e enhanced distributed channel access (EDCA) performance analysis by means of analytical models exists in the literature. Unfortunately, the models proposed so far, though numerous, do not achieve sufficient accuracy because of the many simplifications they introduce. In particular, none of these models considers the 802.11e contention free burst (CFB) mode, which allows a given station to transmit a burst of frames without contention during a given transmission opportunity limit (TXOPLimit) time interval. Despite its influence on the global performance, TXOPLimit is ignored in almost all existing models. To fill this gap, we develop in this paper a new and complete analytical model that (i) reflects the correct functioning of EDCA, (ii) includes all the 802.11e EDCA differentiation parameters, (iii) takes into account all the features of the protocol, and (iv) can be applied to all network conditions, from non-saturation to saturation. Additionally, this model is developed to be used in admission control procedures, so it was designed to have low complexity and an acceptable response time. The proposed model is validated by means of both calculations and extensive simulations.

  14. Improving optimal control of grid-connected lithium-ion batteries through more accurate battery and degradation modelling

    Science.gov (United States)

    Reniers, Jorn M.; Mulder, Grietus; Ober-Blöbaum, Sina; Howey, David A.

    2018-03-01

    The increased deployment of intermittent renewable energy generators opens up opportunities for grid-connected energy storage. Batteries offer significant flexibility but are relatively expensive at present. Battery lifetime is a key factor in the business case, and it depends on usage, but most techno-economic analyses do not account for this. For the first time, this paper quantifies the annual benefits of grid-connected batteries including realistic physical dynamics and nonlinear electrochemical degradation. Three lithium-ion battery models of increasing realism are formulated, and the predicted degradation of each is compared with a large-scale experimental degradation data set (Mat4Bat). Increasing the model fidelity improves the RMS capacity prediction error from 11% to 5%. The three models are then used within an optimal control algorithm to perform price arbitrage over one year, including degradation. Results show that revenue can be increased substantially while degradation can be reduced by using more realistic models. The estimated best-case profit using a sophisticated model is a 175% improvement compared with the simplest model. This illustrates that using a simplistic battery model in a techno-economic assessment of grid-connected batteries might substantially underestimate the business case and lead to erroneous conclusions.
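
    As a hedged illustration of folding degradation into an arbitrage optimisation, the sketch below sets up a one-day linear programme in which a fixed per-MWh throughput penalty stands in for the electrochemical ageing models compared in the paper. Prices, battery ratings and the penalty value are invented.

```python
import numpy as np
from scipy.optimize import linprog

T = 24                                   # one day, hourly steps (illustrative)
price = 40 + 20 * np.sin(2 * np.pi * np.arange(T) / 24)    # made-up price profile
p_max, e_cap, soc0, eta = 1.0, 4.0, 2.0, 0.95               # MW, MWh, MWh, one-way efficiency
deg_cost = 5.0                           # ageing penalty per MWh of throughput (assumed)

# Decision variables: x = [charge_1..charge_T, discharge_1..discharge_T] (MWh per step)
c_obj = np.concatenate([price + deg_cost,        # pay the price plus wear when charging
                        -(price - deg_cost)])    # earn the price minus wear when discharging
L = np.tril(np.ones((T, T)))                     # cumulative-sum operator for the SOC
A_soc = np.block([[eta * L, -L / eta],           # SOC_t <= capacity
                  [-eta * L, L / eta]])          # SOC_t >= 0
b_soc = np.concatenate([(e_cap - soc0) * np.ones(T), soc0 * np.ones(T)])

res = linprog(c_obj, A_ub=A_soc, b_ub=b_soc,
              bounds=[(0.0, p_max)] * (2 * T), method="highs")
print(f"arbitrage profit net of degradation penalty: {-res.fun:.1f} (price units)")
```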

  15. New Provider Models for Sweden and Spain: Public, Private or Non-profit? Comment on "Governance, Government, and the Search for New Provider Models".

    Science.gov (United States)

    Jeurissen, Patrick P T; Maarse, Hans

    2016-06-29

    Sweden and Spain experiment with different provider models to reform healthcare provision. Both models have in common that they extend the role of the for-profit sector in healthcare. As the analysis of Saltman and Duran demonstrates, privatisation is an ambiguous and contested strategy that is used for quite different purposes. In our comment, we emphasize that their analysis leaves questions open on the consequences of privatisation for the performance of healthcare and the role of the public sector in healthcare provision. Furthermore, we briefly address the absence of the option of healthcare provision by not-for-profit providers in the privatisation strategy of Sweden and Spain. © 2016 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  16. A deformation model of flexible, HAMR objects for accurate propagation under perturbations and the self-shadowing effects

    Science.gov (United States)

    Channumsin, Sittiporn; Ceriotti, Matteo; Radice, Gianmarco

    2018-02-01

    A new type of space debris in near geosynchronous orbit (GEO) was recently discovered and later identified as exhibiting unique characteristics associated with high area-to-mass ratio (HAMR) objects, such as high rotation rates and high reflection properties. Observations have shown that this debris type is very sensitive to environmental disturbances, particularly solar radiation pressure, because its motion depends on the actual effective area, the orientation of that area and the reflection properties, and because the area-to-mass ratio of the object is not stable over time. Previous investigations have modelled this type of debris as rigid bodies (constant area-to-mass ratios) or as a discrete deformed body; however, these simplifications lead to inaccurate long-term orbital predictions. This paper proposes a simple yet reliable model of a thin, deformable membrane based on multibody dynamics. The membrane is modelled as a series of flat plates, connected through joints, representing the flexibility of the membrane itself. The mass of the membrane, albeit low, is taken into account through lumped masses at the joints. The attitude and orbital motion of this flexible membrane model is then propagated near GEO to predict its orbital evolution under the perturbations of solar radiation pressure, Earth's gravity field (J2), third-body gravitational fields (the Sun and Moon) and self-shadowing. These results are then compared to those obtained for two rigid body models (cannonball and flat rigid plate). In addition, Monte Carlo simulations of the flexible model, varying the initial attitude and deformation angle (different shapes), are investigated and compared with the two rigid models over a period of 100 days. The numerical results demonstrate that the cannonball and rigid flat plate models are not able to capture the true dynamical evolution of these objects, although the flexible model comes at the cost of increased computational time.

  17. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnect in high-speed CMOS circuits for ramp inputs. Our metric is based on Burr's distribution function, which is used to characterize the normalized homogeneous portion of the step response. We use the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is verified by comparison with SPICE simulations.
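
    The delay and slew metrics can be read directly off the Burr CDF once its parameters are known. The sketch below evaluates the 50% delay and 10-90% slew from the closed-form Burr XII quantile; the shape parameters and time scale are illustrative, and the moment-matching step the paper uses to obtain them from the RC line is not reproduced.

```python
import numpy as np

def burr_quantile(p, c, k, tau=1.0):
    """Inverse of the Burr XII CDF F(t) = 1 - (1 + (t/tau)**c)**(-k)."""
    return tau * ((1.0 - p) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def delay_slew_from_burr(c, k, tau=1.0):
    """50% delay and 10-90% slew of a step response approximated by a Burr CDF."""
    t50 = burr_quantile(0.50, c, k, tau)
    slew = burr_quantile(0.90, c, k, tau) - burr_quantile(0.10, c, k, tau)
    return t50, slew

# Illustrative shape parameters only; in the paper c and k are matched to the
# circuit moments of the RC line, a step not reproduced here.
t50, slew = delay_slew_from_burr(c=2.0, k=1.5, tau=50e-12)
print(f"50% delay = {t50 * 1e12:.1f} ps, 10-90% slew = {slew * 1e12:.1f} ps")
```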

  18. Low- and high-order accurate boundary conditions: From Stokes to Darcy porous flow modeled with standard and improved Brinkman lattice Boltzmann schemes

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Goncalo, E-mail: goncalo.nuno.silva@gmail.com [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France); Talon, Laurent, E-mail: talon@fast.u-psud.fr [CNRS (UMR 7608), Laboratoire FAST, Batiment 502, Campus University, 91405 Orsay (France); Ginzburg, Irina, E-mail: irina.ginzburg@irstea.fr [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France)

    2017-04-15

    and FEM is thoroughly evaluated in three benchmark tests, which are run throughout three distinctive permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which brings in as new challenge the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise due to a deficient accommodation of the bulk solution by the low-accurate boundary scheme. The third problem considers a porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of grid resolution and the TRT free collision parameter over the accuracy and the quality of the velocity field, spanning from Stokes to Darcy permeability regimes. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing hardworking body-fitted meshes.

  19. HPC Institutional Computing Project: W15_lesreactiveflow KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David Bradley [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Waters, Jiajia [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-05

    KIVA-hpFE is high-performance software for solving the physics of multi-species and multiphase turbulent reactive flow in complex geometries having immersed moving parts. The code is written in Fortran 90/95 and can be used on any computer platform with any popular compiler. The code comes in two versions, a serial version and a parallel version utilizing the MPICH2-type Message Passing Interface (MPI or Intel MPI) for solving distributed domains. The parallel version is at least 30x faster than the serial version and many times faster than our previous generation of parallel engine modeling software. The 5th generation algorithm construction is a Galerkin-type Finite Element Method (FEM) solving conservative momentum, species, and energy transport equations along with a two-equation k-ω Reynolds-Averaged Navier-Stokes (RANS) turbulence model and a Vreman-type dynamic Large Eddy Simulation (LES) method. The LES method is capable of modeling transitional flow from laminar to fully turbulent; therefore, it does not require special hybrid or blending treatment near walls. The FEM projection method also uses a Petrov-Galerkin (P-G) stabilization along with pressure stabilization. We employ hierarchical basis sets, constructed on the fly, with enrichment in areas associated with relatively larger error as determined by error estimation methods. In addition, when not using the hp-adaptive module, the code employs Lagrangian basis or shape functions. The shape functions are constructed for hexahedral, prismatic and tetrahedral elements. The software is designed to solve many types of reactive flow problems, from burners to internal combustion engines and turbines. In addition, the formulation allows for direct integration of solid bodies (conjugate heat transfer), as in heat transfer through housings, parts and cylinders. It can also easily be extended to stress modeling of solids, used in fluid-structure interaction problems, solidification, porous media

  20. Combining IVUS and Optical Coherence Tomography for More Accurate Coronary Cap Thickness Quantification and Stress/Strain Calculations: A Patient-Specific Three-Dimensional Fluid-Structure Interaction Modeling Approach.

    Science.gov (United States)

    Guo, Xiaoya; Giddens, Don P; Molony, David; Yang, Chun; Samady, Habib; Zheng, Jie; Mintz, Gary S; Maehara, Akiko; Wang, Liang; Pei, Xuan; Li, Zhi-Yong; Tang, Dalin

    2018-04-01

    Accurate cap thickness and stress/strain quantifications are of fundamental importance for vulnerable plaque research. Virtual histology intravascular ultrasound (VH-IVUS) sets cap thickness to zero when the cap is under the resolution limit and IVUS does not see it. An innovative modeling approach combining IVUS and optical coherence tomography (OCT) is introduced for cap thickness quantification and more accurate cap stress/strain calculations. In vivo IVUS and OCT coronary plaque data were acquired with informed consent obtained. IVUS and OCT images were merged to form the IVUS + OCT data set, with biplane angiography providing three-dimensional (3D) vessel curvature. For components where VH-IVUS set zero cap thickness (i.e., no cap), a cap was added with the minimum cap thickness set to 50 and 180 μm to generate the IVUS50 and IVUS180 data sets for model construction, respectively. 3D fluid-structure interaction (FSI) models based on the IVUS + OCT, IVUS50, and IVUS180 data sets were constructed to investigate the impact of cap thickness on stress/strain calculations. Compared to IVUS + OCT, IVUS50 underestimated mean cap thickness (27 slices) by 34.5% and overestimated mean cap stress by 45.8% (96.4 versus 66.1 kPa). The IVUS50 maximum cap stress was 59.2% higher than that from the IVUS + OCT model (564.2 versus 354.5 kPa). Differences between the IVUS and IVUS + OCT models for cap strain and flow shear stress (FSS) were modest (cap strain <12%; FSS <6%). IVUS + OCT data and models could provide more accurate cap thickness and stress/strain calculations, which will serve as a basis for further plaque investigations.

  1. Development of accurate UWB dielectric properties dispersion at CST simulation tool for modeling microwave interactions with numerical breast phantoms

    International Nuclear Information System (INIS)

    Maher, A.; Quboa, K. M.

    2011-01-01

    In this paper, a reformulation of the recently published dielectric property dispersion models of breast tissues is carried out for use in the CST simulation tool. The reformulation includes tabulation of the real and imaginary parts versus frequency over the ultra-wideband (UWB) range for these models using MATLAB programs. The tables are imported into the CST simulation tool and fitted to first- or second-order general equations. The results show good agreement between the original and the imported data. The MATLAB programs are included in the appendix.
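
    A minimal sketch of the tabulation step is shown below, written in Python rather than MATLAB. It evaluates a single-pole Debye dispersion with a static-conductivity term and writes a frequency table of real and imaginary parts; the pole parameters are placeholders, not the published breast-tissue coefficients, and the Cole-Cole form used in some tissue models is not included.

```python
import numpy as np

eps0 = 8.854187817e-12  # vacuum permittivity, F/m

def debye_dispersion(f_hz, eps_inf, d_eps, tau, sigma_s):
    """Complex relative permittivity of a single-pole Debye medium with static conductivity."""
    w = 2.0 * np.pi * f_hz
    return eps_inf + d_eps / (1.0 + 1j * w * tau) + sigma_s / (1j * w * eps0)

# Illustrative parameters only -- NOT the published breast-tissue coefficients.
freqs = np.linspace(1e9, 11e9, 201)                   # UWB range 1-11 GHz
eps = debye_dispersion(freqs, eps_inf=7.0, d_eps=40.0, tau=10e-12, sigma_s=0.5)

# Table of frequency, real part and (positive) imaginary part, ready to be
# imported into an EM solver as a dispersive material definition.
table = np.column_stack([freqs, eps.real, -eps.imag])
np.savetxt("tissue_dispersion.txt", table, header="f_Hz  eps_real  eps_imag")
```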

  2. The LNT model provides the best approach for practical implementation of radiation protection.

    Science.gov (United States)

    Martin, C J

    2005-01-01

    This contribution argues the case that, at the present time, the linear-no-threshold (LNT) model provides the only rational framework on which practical radiation protection can be organized. Political, practical and healthcare difficulties with attempting to introduce an alternative approach, e.g. a threshold model, are discussed.

  3. Effectiveness of Video Modeling Provided by Mothers in Teaching Play Skills to Children with Autism

    Science.gov (United States)

    Besler, Fatma; Kurt, Onur

    2016-01-01

    Video modeling is an evidence-based practice that can be used to provide instruction to individuals with autism. Studies show that this instructional practice is effective in teaching many types of skills such as self-help skills, social skills, and academic skills. However, in previous studies, videos used in the video modeling process were…

  4. Even faster and even more accurate first-passage time densities and distributions for the Wiener diffusion model

    DEFF Research Database (Denmark)

    Gondan, Matthias; Blurton, Steven Paul; Kesselmeier, Miriam

    2014-01-01

    The Wiener diffusion model with two absorbing barriers is often used to describe response times and error probabilities in two-choice decisions. Different representations exist for the density and cumulative distribution of first-passage times, all including infinite series, but with different co...... of Mathematical Psychology)....

  5. Retrieving Backbone String Neighbors Provides Insights Into Structural Modeling of Membrane Proteins*

    Science.gov (United States)

    Sun, Jiang-Ming; Li, Tong-Hua; Cong, Pei-Sheng; Tang, Sheng-Nan; Xiong, Wen-Wei

    2012-01-01

    Identification of protein structural neighbors to a query is fundamental in structure and function prediction. Here we present BS-align, a systematic method to retrieve backbone string neighbors from primary sequences as templates for protein modeling. The backbone conformation of a protein is represented by the backbone string, as defined in Ramachandran space. The backbone string of a query can be accurately predicted by two innovative technologies: a knowledge-driven sequence alignment and encoding of a backbone string element profile. Then, the predicted backbone string is employed to align against a backbone string database and retrieve a set of backbone string neighbors. The backbone string neighbors were shown to be close to native structures of query proteins. BS-align was successfully employed to predict models of 10 membrane proteins with lengths ranging between 229 and 595 residues, and whose high-resolution structural determinations were difficult to elucidate both by experiment and prediction. The obtained TM-scores and root mean square deviations of the models confirmed that the models based on the backbone string neighbors retrieved by the BS-align were very close to the native membrane structures although the query and the neighbor shared a very low sequence identity. The backbone string system represents a new road for the prediction of protein structure from sequence, and suggests that the similarity of the backbone string would be more informative than describing a protein as belonging to a fold. PMID:22415040

  6. Retrieving backbone string neighbors provides insights into structural modeling of membrane proteins.

    Science.gov (United States)

    Sun, Jiang-Ming; Li, Tong-Hua; Cong, Pei-Sheng; Tang, Sheng-Nan; Xiong, Wen-Wei

    2012-07-01

    Identification of protein structural neighbors to a query is fundamental in structure and function prediction. Here we present BS-align, a systematic method to retrieve backbone string neighbors from primary sequences as templates for protein modeling. The backbone conformation of a protein is represented by the backbone string, as defined in Ramachandran space. The backbone string of a query can be accurately predicted by two innovative technologies: a knowledge-driven sequence alignment and encoding of a backbone string element profile. Then, the predicted backbone string is employed to align against a backbone string database and retrieve a set of backbone string neighbors. The backbone string neighbors were shown to be close to native structures of query proteins. BS-align was successfully employed to predict models of 10 membrane proteins with lengths ranging between 229 and 595 residues, and whose high-resolution structural determinations were difficult to elucidate both by experiment and prediction. The obtained TM-scores and root mean square deviations of the models confirmed that the models based on the backbone string neighbors retrieved by the BS-align were very close to the native membrane structures although the query and the neighbor shared a very low sequence identity. The backbone string system represents a new road for the prediction of protein structure from sequence, and suggests that the similarity of the backbone string would be more informative than describing a protein as belonging to a fold.

  7. Accurate Hardening Modeling As Basis For The Realistic Simulation Of Sheet Forming Processes With Complex Strain-Path Changes

    International Nuclear Information System (INIS)

    Levkovitch, Vladislav; Svendsen, Bob

    2007-01-01

    Sheet metal forming involves large strains and severe strain-path changes. Large plastic strains lead in many metals to the development of persistent dislocation structures resulting in strong flow anisotropy. This induced anisotropy manifests itself, upon a strain-path change, in very different stress-strain responses depending on the type of change. While many metals exhibit a drop of the yield stress (Bauschinger effect) after a load reversal, some metals show an increase of the yield stress after an orthogonal strain-path change (so-called cross hardening). To model the Bauschinger effect, kinematic hardening has been used successfully for years. However, kinematic hardening automatically produces a drop of the yield stress after an orthogonal strain-path change, contradicting experiments that exhibit the cross-hardening effect. Another effect not accounted for in classical elasto-plasticity is the difference between tensile and compressive strength exhibited, e.g., by some steels. In this work we present a phenomenological material model whose structure is motivated by polycrystalline modeling and that takes into account the evolution of polarized dislocation structures on the grain level - the main cause of the induced flow anisotropy on the macroscopic level. In addition to the translation of the yield surface and its proportional expansion, as in conventional plasticity, the model also considers changes of the yield surface shape (distortional hardening) and accounts for the pressure dependence of the flow stress. All these additional attributes turn out to be essential for modeling the stress-strain response of dual-phase high-strength steels subjected to non-proportional loading.

  8. Accurate hardening modeling as basis for the realistic simulation of sheet forming processes with complex strain-path changes

    International Nuclear Information System (INIS)

    Levkovitch, Vladislav; Svendsen, Bob

    2007-01-01

    Sheet metal forming involves large strains and severe strain-path changes. Large plastic strains lead in many metals to the development of persistent dislocation structures resulting in strong flow anisotropy. This induced anisotropy manifests itself, upon a strain-path change, in very different stress-strain responses depending on the type of change. While many metals exhibit a drop of the yield stress (Bauschinger effect) after a load reversal, some metals show an increase of the yield stress after an orthogonal strain-path change (so-called cross hardening). To model the Bauschinger effect, kinematic hardening has been used successfully for years. However, kinematic hardening automatically produces a drop of the yield stress after an orthogonal strain-path change, contradicting experiments that exhibit the cross-hardening effect. Another effect not accounted for in classical elasto-plasticity is the difference between tensile and compressive strength exhibited, e.g., by some steels. In this work we present a phenomenological material model whose structure is motivated by polycrystalline modeling and that takes into account the evolution of polarized dislocation structures on the grain level - the main cause of the induced flow anisotropy on the macroscopic level. In addition to the translation of the yield surface and its proportional expansion, as in conventional plasticity, the model also considers changes of the yield surface shape (distortional hardening) and accounts for the pressure dependence of the flow stress. All these additional attributes turn out to be essential for modeling the stress-strain response of dual-phase high-strength steels subjected to non-proportional loading.

  9. Accurate small and wide angle x-ray scattering profiles from atomic models of proteins and nucleic acids.

    Science.gov (United States)

    Nguyen, Hung T; Pabit, Suzette A; Meisburger, Steve P; Pollack, Lois; Case, David A

    2014-12-14

    A new method is introduced to compute X-ray solution scattering profiles from atomic models of macromolecules. The three-dimensional version of the Reference Interaction Site Model (RISM) from liquid-state statistical mechanics is employed to compute the solvent distribution around the solute, including both water and ions. X-ray scattering profiles are computed from this distribution together with the solute geometry. We describe an efficient procedure for performing this calculation employing a Lebedev grid for the angular averaging. The intensity profiles (which involve no adjustable parameters) match experiment and molecular dynamics simulations up to wide angle for two proteins (lysozyme and myoglobin) in water, as well as the small-angle profiles for a dozen biomolecules taken from the BioIsis.net database. The RISM model is especially well-suited for studies of nucleic acids in salt solution. Use of fiber-diffraction models for the structure of duplex DNA in solution yields close agreement with the observed scattering profiles in both the small and wide angle scattering (SAXS and WAXS) regimes. In addition, computed profiles of anomalous SAXS signals (for Rb(+) and Sr(2+)) emphasize the ionic contribution to scattering and are in reasonable agreement with experiment. In cases where an absolute calibration of the experimental data at q = 0 is available, one can extract a count of the excess number of waters and ions; computed values depend on the closure that is assumed in the solution of the Ornstein-Zernike equations, with results from the Kovalenko-Hirata closure being closest to experiment for the cases studied here.
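
    For orientation, the sketch below computes a scattering profile with the plain in-vacuo Debye sum over atom pairs, using electron counts as crude q-independent form factors. This is a deliberately simplified baseline and not the 3D-RISM solvent treatment described above; the coordinates and electron counts in the example are made up.

```python
import numpy as np

def debye_intensity(coords, n_electrons, q_values):
    """In-vacuo Debye sum I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij).

    coords      : (N, 3) atomic coordinates in Angstrom
    n_electrons : (N,) electron counts used as crude, q-independent form factors
    q_values    : (M,) momentum transfer values in 1/Angstrom
    """
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diff ** 2).sum(axis=-1))             # pairwise distances
    ff = np.outer(n_electrons, n_electrons)
    intensity = np.empty_like(q_values, dtype=float)
    for m, q in enumerate(q_values):
        intensity[m] = (ff * np.sinc(q * r / np.pi)).sum()   # sinc(x) = sin(pi x)/(pi x)
    return intensity

# Toy three-"atom" example; coordinates and electron counts are invented.
xyz = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 2.0, 0.0]])
ne = np.array([6.0, 7.0, 8.0])
q = np.linspace(0.01, 0.5, 50)
print(debye_intensity(xyz, ne, q)[:3])
```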

  10. Development of an Anatomically Accurate Finite Element Human Ocular Globe Model for Blast-Related Fluid-Structure Interaction Studies

    Science.gov (United States)

    2017-02-01


  11. Fast, accurate photon beam accelerator modeling using BEAMnrc: A systematic investigation of efficiency enhancing methods and cross-section data

    Energy Technology Data Exchange (ETDEWEB)

    Fragoso, Margarida; Kawrakow, Iwan; Faddegon, Bruce A.; Solberg, Timothy D.; Chetty, Indrin J. [Henry Ford Health System, Detroit, Michigan 48202 (United States); National Research Council of Canada, Ottawa, Ontario K1A OR6 (Canada); University of California, San Francisco, California 94143-0226 (United States); UT Southwestern Medical Center, Dallas, Texas 75390-9183 (United States); Henry Ford Health System, Detroit, Michigan 48202 (United States)

    2009-12-15

    splitting (DBS) with no electron splitting. When DBS was used with electron splitting and combined with augmented charged particle range rejection, a technique recently introduced in BEAMnrc, relative efficiencies were {approx}420 ({approx}253 min on a single processor) and {approx}175 ({approx}58 min on a single processor) for the 10x10 and 40x40 cm{sup 2} field sizes, respectively. Calculations of the Siemens Primus treatment head with VMC++ produced relative efficiencies of {approx}1400 ({approx}6 min on a single processor) and {approx}60 ({approx}4 min on a single processor) for the 10x10 and 40x40 cm{sup 2} field sizes, respectively. BEAMnrc PHSP calculations with DBS alone or DBS in combination with charged particle range rejection were more efficient than the other efficiency enhancing techniques used. Using VMC++, accurate simulations of the entire linac treatment head were performed within minutes on a single processor. Noteworthy differences ({+-}1%-3%) in the mean energy, planar fluence, and angular and spectral distributions were observed with the NIST bremsstrahlung cross sections compared with those of Bethe-Heitler (BEAMnrc default bremsstrahlung cross section). However, MC calculated dose distributions in water phantoms (using combinations of VRTs/AEITs and cross-section data) agreed within 2% of measurements. Furthermore, MC calculated dose distributions in a simulated water/air/water phantom, using NIST cross sections, were within 2% agreement with the BEAMnrc Bethe-Heitler default case.

  12. FRAMES-2.0 Software System: Providing Password Protection and Limited Access to Models and Simulations

    International Nuclear Information System (INIS)

    Whelan, Gene; Pelton, Mitch A.

    2007-01-01

    One of the most important concerns for regulatory agencies is the concept of reproducibility (i.e., reproducibility means credibility) of an assessment. One aspect of reproducibility deals with tampering of the assessment. In other words, when multiple groups are engaged in an assessment, it is important to lock down the problem that is to be solved and/or to restrict the models that are to be used to solve the problem. The objective of this effort is to provide the U.S. Nuclear Regulatory Commission (NRC) with a means to limit user access to models and to provide a mechanism to constrain the conceptual site models (CSMs) when appropriate. The purpose is to provide the user (i.e., NRC) with the ability to "lock down" the CSM (i.e., picture containing linked icons), restrict access to certain models, or both.

  13. Advanced Mechanistic 3D Spatial Modeling and Analysis Methods to Accurately Represent Nuclear Facility External Event Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Sezen, Halil [The Ohio State Univ., Columbus, OH (United States). Dept. of Civil, Environmental and Geodetic Engineering; Aldemir, Tunc [The Ohio State Univ., Columbus, OH (United States). College of Engineering, Nuclear Engineering Program, Dept. of Mechanical and Aerospace Engineering; Denning, R. [The Ohio State Univ., Columbus, OH (United States); Vaidya, N. [Rizzo Associates, Pittsburgh, PA (United States)

    2017-12-29

    Probabilistic risk assessment of nuclear power plants initially focused on events initiated by internal faults at the plant, rather than external hazards including earthquakes and flooding. Although the importance of external hazards risk analysis is now well recognized, the methods for analyzing low probability external hazards rely heavily on subjective judgment of specialists, often resulting in substantial conservatism. This research developed a framework to integrate the risk of seismic and flooding events using realistic structural models and simulation of response of nuclear structures. The results of four application case studies are presented.

  14. What are healthcare providers' understandings and experiences of compassion? The healthcare compassion model: a grounded theory study of healthcare providers in Canada.

    Science.gov (United States)

    Sinclair, Shane; Hack, Thomas F; Raffin-Bouchal, Shelley; McClement, Susan; Stajduhar, Kelli; Singh, Pavneet; Hagen, Neil A; Sinnarajah, Aynharan; Chochinov, Harvey Max

    2018-03-14

    Healthcare providers are considered the primary conduit of compassion in healthcare. Although most healthcare providers desire to provide compassion, and patients and families expect to receive it, an evidence-based understanding of the construct and its associated dimensions from the perspective of healthcare providers is needed. The aim of this study was to investigate healthcare providers' perspectives and experiences of compassion in order to generate an empirically derived, clinically informed model. Data were collected via focus groups with frontline healthcare providers and interviews with peer-nominated exemplary compassionate healthcare providers. Data were independently and collectively analysed by the research team in accordance with Straussian grounded theory. 57 healthcare providers were recruited from urban and rural palliative care services spanning hospice, home care, hospital-based consult teams, and a dedicated inpatient unit within Alberta, Canada. Five categories and 13 associated themes were identified, illustrated in the Healthcare Provider Compassion Model depicting the dimensions of compassion and their relationship to one another. Compassion was conceptualised as a virtuous and intentional response to know a person, to discern their needs and ameliorate their suffering through relational understanding and action. An empirical foundation of healthcare providers' perspectives on providing compassionate care was generated. While the dimensions of the Healthcare Provider Compassion Model were congruent with the previously developed Patient Model, further insight into compassion is now evident. The Healthcare Provider Compassion Model can guide clinical practice and research focused on developing interventions, measures and resources to improve compassionate care. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly

  15. A new expression of Ns versus Ef to an accurate control charge model for AlGaAs/GaAs

    International Nuclear Information System (INIS)

    Bouneb, I; Kerrour, F.

    2016-01-01

    Semiconductor components have become the privileged support of information and communication, particularly owing to the development of the internet. Today, MOS transistors on silicon largely dominate the semiconductor market; however, reducing the transistor gate length is not enough to enhance performance and keep pace with Moore's law, particularly for broadband telecommunication systems, where faster components are required. For this reason, alternative structures such as IV-IV or III-V heterostructures [1] have been proposed. The most effective components in this area are high electron mobility transistors (HEMTs) on III-V substrates. This work presents an approach contributing to the development of a numerical model based on physical and numerical modelling of the potential at the AlGaAs/GaAs heterostructure interface. We developed a calculation using projective methods that allows integration of the Hamiltonian with Green functions in the Schrödinger equation, for a rigorous self-consistent solution with the Poisson equation. A simple analytical approach to charge control in the quantum-well region of an AlGaAs/GaAs HEMT structure is presented. A charge-control equation accounting for a variable average distance of the 2-DEG from the interface is introduced. Our approach, which aims to obtain ns-Vg characteristics, is mainly based on a new linear expression for the variation of the Fermi level with the two-dimensional electron gas density, on the notion of effective doping, and on a new expression for ΔEc.

  16. A new expression of Ns versus Ef to an accurate control charge model for AlGaAs/GaAs

    Science.gov (United States)

    Bouneb, I.; Kerrour, F.

    2016-03-01

    Semiconductor components have become the privileged support of information and communication, particularly owing to the development of the internet. Today, MOS transistors on silicon largely dominate the semiconductor market; however, reducing the transistor gate length is not enough to enhance performance and keep pace with Moore's law, particularly for broadband telecommunication systems, where faster components are required. For this reason, alternative structures such as IV-IV or III-V heterostructures [1] have been proposed. The most effective components in this area are high electron mobility transistors (HEMTs) on III-V substrates. This work presents an approach contributing to the development of a numerical model based on physical and numerical modelling of the potential at the AlGaAs/GaAs heterostructure interface. We developed a calculation using projective methods that allows integration of the Hamiltonian with Green functions in the Schrödinger equation, for a rigorous self-consistent solution with the Poisson equation. A simple analytical approach to charge control in the quantum-well region of an AlGaAs/GaAs HEMT structure is presented. A charge-control equation accounting for a variable average distance of the 2-DEG from the interface is introduced. Our approach, which aims to obtain ns-Vg characteristics, is mainly based on a new linear expression for the variation of the Fermi level with the two-dimensional electron gas density, on the notion of effective doping, and on a new expression for ΔEc.
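
    A hedged sketch of a charge-control relation with a linear EF(ns) approximation, of the kind discussed above, is given below. All device parameters (permittivity, barrier thickness, effective 2DEG offset, threshold voltage and the Fermi-level coefficients) are illustrative placeholders rather than fitted values from the paper.

```python
import numpy as np

q_e, eps0 = 1.602e-19, 8.854e-12
eps = 12.2 * eps0        # assumed AlGaAs permittivity (relative value ~12.2)
d, dd = 30e-9, 8e-9      # barrier thickness and effective 2DEG offset (assumed)
v_off = -0.8             # threshold/off voltage in volts (assumed)
ef0, a_lin = 0.05, 0.125e-16   # assumed linear Fermi-level model EF[V] = ef0 + a_lin * ns

def ns_of_vg(vg):
    """2DEG sheet density from q*ns*(d+dd)/eps = Vg - Voff - EF(ns), with EF linear in ns."""
    return (vg - v_off - ef0) / (q_e * (d + dd) / eps + a_lin)

for vg in np.array([0.0, 0.2, 0.4]):
    print(f"Vg = {vg:+.1f} V  ->  ns = {ns_of_vg(vg):.2e} m^-2")
```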

  17. Development of an accurate molecular mechanics model for buckling behavior of multi-walled carbon nanotubes under axial compression.

    Science.gov (United States)

    Safaei, B; Naseradinmousavi, P; Rahmani, A

    2016-04-01

    In the present paper, an analytical solution based on a molecular mechanics model is developed to evaluate the elastic critical axial buckling strain of chiral multi-walled carbon nanotubes (MWCNTs). To this end, the total potential energy of the system is calculated with consideration of both bond stretching and bond angle variations. Density functional theory (DFT) in the form of the generalized gradient approximation (GGA) is implemented to evaluate the force constants used in the molecular mechanics model. After that, based on the principles of molecular mechanics, explicit expressions are proposed to obtain the elastic surface Young's modulus and Poisson's ratio of single-walled carbon nanotubes corresponding to different types of chirality. Selected numerical results are presented to indicate the influence of the type of chirality, tube diameter, and number of tube walls in detail. An excellent agreement is found between the present numerical results and those found in the literature, which confirms the validity and accuracy of the present closed-form solution. It is found that the critical axial buckling strain exhibits a significant dependence on the type of chirality and the number of tube walls. Copyright © 2016. Published by Elsevier Inc.

  18. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to high accuracy.
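
    The combination of a finite difference discretisation with one step of Richardson extrapolation can be sketched on a familiar test case, the one-dimensional harmonic oscillator with exact ground-state energy 0.5 (atomic units). The mesh sizes and the potential below are illustrative choices, not those of the paper, and only eigenvalues (not expectation values) are extrapolated here.

```python
import numpy as np

def ground_state_energy(n_points, x_max=8.0):
    """Lowest eigenvalue of -(1/2) psi'' + (x^2/2) psi = E psi on a uniform mesh
    with central differences and Dirichlet boundaries (exact answer: 0.5)."""
    x = np.linspace(-x_max, x_max, n_points)
    h = x[1] - x[0]
    diag = 1.0 / h ** 2 + 0.5 * x ** 2
    off = -0.5 / h ** 2 * np.ones(n_points - 1)
    hamiltonian = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(hamiltonian)[0], h

e_coarse, h = ground_state_energy(200)
e_fine, _ = ground_state_energy(399)        # 399 points roughly halves the mesh spacing
e_extrap = (4.0 * e_fine - e_coarse) / 3.0  # one Richardson step for an O(h^2) scheme
print(f"E(h) = {e_coarse:.8f}, E(h/2) = {e_fine:.8f}, extrapolated = {e_extrap:.8f}")
```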

  19. Accurate Calibration of Empirical Viscous Fingering Models Calage précis de modèles empiriques de digitation

    Directory of Open Access Journals (Sweden)

    Fayers F. J.

    2006-11-01

    Full Text Available We review the use and calibration of empirical models for viscous fingering. The choice of parameters for the three principal approaches (the Koval, Todd and Longstaff, and Fayers methods) is outlined. The methods all give similar levels of accuracy when compared with linear experiments, but differ in performance in two-dimensional applications. This arises from differences in the formulation of the total mobility terms. The superiority of the Todd and Longstaff and Fayers methods is demonstrated for two-dimensional and gravity-influenced flows by comparison with experiments and high-resolution simulation. The use of high-resolution simulation to calibrate empirical models in a systematic manner is described. Results from detailed simulation demonstrate the sensitivity of empirical model parameters to the viscous-to-gravity ratio, the recovery process (secondary, tertiary or WAG), and geological heterogeneity. It is shown that for large-amplitude heterogeneities with short correlation lengths, the accuracy of the empirical models is not satisfactory, but it is improved by the addition of a diffusive term.
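
    As a point of reference for the Koval approach mentioned above, the sketch below evaluates the standard Koval effective viscosity ratio E = (0.78 + 0.22 M^(1/4))^4 and the resulting solvent fractional-flow curve. The mobility ratio, heterogeneity factor and saturations are illustrative, and the calibration against high-resolution simulation described in the abstract is not reproduced.

```python
import numpy as np

def koval_k_factor(mobility_ratio, heterogeneity=1.0):
    """Koval K-factor: the empirical effective viscosity ratio E times a heterogeneity factor."""
    e_factor = (0.78 + 0.22 * mobility_ratio ** 0.25) ** 4
    return heterogeneity * e_factor

def solvent_fractional_flow(s, k_factor):
    """Fractional flow of solvent with straight-line 'relative permeabilities'."""
    return k_factor * s / (k_factor * s + (1.0 - s))

k = koval_k_factor(mobility_ratio=10.0)      # illustrative viscosity ratio, homogeneous medium
for s in np.array([0.1, 0.3, 0.5]):
    print(f"S_solvent = {s:.1f}  ->  f_solvent = {solvent_fractional_flow(s, k):.3f}")
```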

  20. Social models provide a norm of appropriate food intake for young women.

    Directory of Open Access Journals (Sweden)

    Lenny R Vartanian

    Full Text Available It is often assumed that social models influence people's eating behavior by providing a norm of appropriate food intake, but this hypothesis has not been directly tested. In three experiments, female participants were exposed to a low-intake model, a high-intake model, or no model (control condition). Experiments 1 and 2 used a remote-confederate manipulation and were conducted in the context of a cookie taste test. Experiment 3 used a live confederate and was conducted in the context of a task during which participants were given incidental access to food. Participants also rated the extent to which their food intake was influenced by a variety of factors (e.g., hunger, taste, how much others ate). In all three experiments, participants in the low-intake conditions ate less than did participants in the high-intake conditions, and also reported a lower perceived norm of appropriate intake. Furthermore, perceived norms of appropriate intake mediated the effects of the social model on participants' food intake. Despite the observed effects of the social models, participants were much more likely to indicate that their food intake was influenced by taste and hunger than by the behavior of the social models. Thus, social models appear to influence food intake by providing a norm of appropriate eating behavior, but people may be unaware of the influence of a social model on their behavior.

  1. Value-added strategy models to provide quality services in senior health business.

    Science.gov (United States)

    Yang, Ya-Ting; Lin, Neng-Pai; Su, Shyi; Chen, Ya-Mei; Chang, Yao-Mao; Handa, Yujiro; Khan, Hafsah Arshed Ali; Elsa Hsu, Yi-Hsin

    2017-06-20

    The rapid population aging is now a global issue. The increase in the elderly population will impact the health care industry and health enterprises; various senior needs will promote the growth of the senior health industry. Most senior health studies focus on the demand side and scarcely on supply. Our study selected quality enterprises focused on aging health and analyzed the different strategies they use to provide excellent quality services to seniors. We selected 33 quality senior health enterprises in Taiwan and investigated their quality service strategies through face-to-face semi-structured in-depth interviews with the CEOs and managers of each enterprise in 2013. A total of 33 senior health enterprises in Taiwan. Overall, 65 CEOs and managers of the 33 enterprises were interviewed individually. None. Core values and vision, organization structure, quality services provided, strategies for quality services. This study's results indicated four types of value-added strategy models adopted by senior enterprises to offer quality services: (i) residential care and co-residence model, (ii) home care and living in place model, (iii) community e-business experience model and (iv) virtual and physical portable device model. The common element of these four strategy models is that the services provided are elderly-centered. These models integrate virtual and physical services and offer total solutions for the elderly and their caregivers. Through investigation of successful strategy models for providing quality services to seniors, we identified opportunities to develop innovative service models and their success factors, and summarized policy implications. The observations from this study will serve as a primary evidence base for enterprises developing their senior market and also for promoting the possibility of value co-creation through dialogue between customers and those who deliver services. © The Author 2017. Published by Oxford

  2. Modelling of P3HT:PCBM interface using coarse-grained forcefield derived from accurate atomistic forcefield.

    Science.gov (United States)

    To, T T; Adams, S

    2014-03-14

    To understand the morphological evolution of the P3HT:PCBM bulk heterojunction during thermal treatment, we employed coarse-grained Molecular Dynamics (MD) simulations with a forcefield derived from an atomistic model and experimental data such as the crystal structure and melting temperature. The current study focuses on the differences between the interfaces that PCBM forms with various P3HT orientations. Crystallinity analysis suggests that more ordered P3HT is observed near the interface for the face-on and amorphous cases, while no such trend is observed for the edge-on and end-on configurations due to weaker interactions at the interface, as evident from the considerably less negative interfacial energy. An analysis of pathways for C60 diffusion into P3HT, using both an energy-based and a solvent-surface approach for amorphous P3HT, reveals continuous chain-motion-assisted pathways, while for crystalline P3HT diffusion pathways remain restricted to grain boundaries. Based on these calculations, we propose a morphological evolution process for the P3HT:PCBM bulk heterojunction, which starts with nucleation and crystallisation at the P3HT:PCBM interface, followed by PCBM diffusion along the grain boundaries and amorphous P3HT regions towards PCBM-rich domains.

  3. Fast and accurate multivariate Gaussian modeling of protein families: predicting residue contacts and protein-interaction partners.

    Directory of Open Access Journals (Sweden)

    Carlo Baldassi

    Full Text Available In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partner in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code.

  4. Fast and accurate multivariate Gaussian modeling of protein families: predicting residue contacts and protein-interaction partners.

    Science.gov (United States)

    Baldassi, Carlo; Zamparo, Marco; Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea

    2014-01-01

    In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to the one achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partner in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code.
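
    A compact sketch of the multivariate Gaussian idea is given below: one-hot encode the alignment, regularise the empirical covariance, and read couplings off its (negated) inverse, scoring residue pairs by a Frobenius norm with the usual average-product correction. This is a simplified illustration rather than the authors' released implementation, and the toy alignment, pseudocount and scoring choices are assumptions.

```python
import numpy as np

def gaussian_dca_scores(msa, n_states=21, pseudocount=0.5):
    """Contact scores from a multivariate-Gaussian treatment of a one-hot encoded MSA.

    msa : (M, L) integer array of aligned sequences with values in [0, n_states).
    Returns an (L, L) matrix of APC-corrected coupling scores.
    """
    m_seq, length = msa.shape
    q = n_states - 1                          # drop the last state to avoid a singular encoding
    onehot = np.zeros((m_seq, length * q))
    for i in range(length):
        for a in range(q):
            onehot[:, i * q + a] = (msa[:, i] == a)
    cov = np.cov(onehot, rowvar=False)
    cov += pseudocount * np.eye(length * q)   # regularisation so the covariance is invertible
    couplings = -np.linalg.inv(cov)           # Gaussian "direct couplings"
    scores = np.zeros((length, length))
    for i in range(length):
        for j in range(length):
            block = couplings[i * q:(i + 1) * q, j * q:(j + 1) * q]
            scores[i, j] = np.linalg.norm(block)   # Frobenius norm per residue pair
    np.fill_diagonal(scores, 0.0)
    apc = np.outer(scores.mean(axis=0), scores.mean(axis=0)) / scores.mean()
    return scores - apc

# Toy alignment: 6 sequences of length 5 over a 21-letter alphabet (illustrative only).
rng = np.random.default_rng(0)
toy_msa = rng.integers(0, 21, size=(6, 5))
print(gaussian_dca_scores(toy_msa).round(3))
```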

  5. Accurate Masses, Radii, and Temperatures for the Eclipsing Binary V2154 Cyg, and Tests of Stellar Evolution Models

    Science.gov (United States)

    Bright, Jane; Torres, Guillermo

    2018-01-01

    We report new spectroscopic observations of the F-type triple system V2154 Cyg, in which two of the stars form an eclipsing binary with a period of 2.6306303 ± 0.0000038 days. We combine the results from our spectroscopic analysis with published light curves in the uvby Strömgren passbands to derive the first reported absolute dimensions of the stars in the eclipsing binary. The masses and radii are measured to better than 1.5% precision. For the primary and secondary respectively, we find that the masses are 1.269 ± 0.017 M⊙ and 0.7542 ± 0.0059 M⊙, the radii are 1.477 ± 0.012 R⊙ and 0.7232 ± 0.0091 R⊙, and the temperatures are 6770 ± 150 K and 5020 ± 150 K. Current models of stellar evolution agree with the measured properties of the primary, but the secondary is larger than predicted. This may be due to activity in the secondary, as has been shown for other systems with a star of similar mass showing this same discrepancy. The SAO REU program is funded by the National Science Foundation REU and Department of Defense ASSURE programs under NSF Grant AST-1659473, and by the Smithsonian Institution. GT acknowledges partial support for this work from NSF grant AST-1509375.

  6. Close Range UAV Accurate Recording and Modeling of St-Pierre Neo-Romanesque Church in Strasbourg (France)

    Science.gov (United States)

    Murtiyoso, A.; Grussenmeyer, P.; Freville, T.

    2017-02-01

    Close-range photogrammetry is an image-based technique which has often been used for the 3D documentation of heritage objects. Recently, advances in the field of image processing and UAVs (Unmanned Aerial Vehicles) have resulted in a renewed interest in this technique. However, commercially ready-to-use UAVs are often equipped with smaller sensors in order to minimize payload, and the quality of the documentation is still an issue. In this research, two commercial UAVs (the Sensefly Albris and DJI Phantom 3 Professional) were set up to record the 19th century St-Pierre-le-Jeune church in Strasbourg, France. Several software solutions (commercial and open source) were used to compare both UAVs' images in terms of calibration, accuracy of external orientation, as well as dense matching. Results show some instability with regard to the calibration of the Phantom 3, while the Albris had issues regarding its aerotriangulation results. Despite these shortcomings, both UAVs succeeded in producing dense point clouds of up to a few centimeters in accuracy, which is largely sufficient for the purposes of a city 3D GIS (Geographical Information System). The acquisition of close range images using UAVs also provides greater LoD (Level of Detail) flexibility in processing. These advantages over other methods such as TLS (Terrestrial Laser Scanning) or terrestrial close range photogrammetry can be exploited in order for these techniques to complement each other.

  7. On the Accurate Determination of Shock Wave Time-Pressure Profile in the Experimental Models of Blast-Induced Neurotrauma

    Directory of Open Access Journals (Sweden)

    Maciej Skotak

    2018-02-01

    Full Text Available Measurement issues leading to the acquisition of artifact-free shock wave pressure-time profiles are discussed. We address the importance of in-house sensor calibration and data acquisition sampling rate. Sensor calibration takes into account possible differences between the calibration methodology in a manufacturing facility and that used in the specific laboratory. We found that in-house calibration factors of brand-new sensors differ by less than 10% from their manufacturer-supplied data. Larger differences were noticeable for sensors that have been used for hundreds of experiments, and were as high as 30% for sensors close to the end of their useful lifetime. These observations were despite the fact that typical overpressures in our experiments do not exceed 50 psi for sensors that are rated at 1,000 psi maximum pressure. We demonstrate that a sampling rate of 1,000 kHz is necessary to capture correct rise time values, but there were no statistically significant differences between peak overpressure and impulse values for low-intensity shock waves (Mach number <2) at lower rates. We discuss two sources of experimental error, mechanical vibration and electromagnetic interference, affecting the quality of a waveform recorded using state-of-the-art high-frequency pressure sensors. Preventive measures, examples of pressure acquisition artifacts, and guidance on data interpretation are provided in this paper to help the community at large avoid these mistakes. In order to facilitate inter-laboratory data comparison, common reporting standards should be developed by the blast TBI research community. We noticed that the majority of published literature on the subject limits reporting to peak overpressure, with much less attention directed toward other important parameters, i.e., duration, impulse, and dynamic pressure. These parameters should be included as a mandatory requirement in publications so the results can be properly
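
    Since the record argues that impulse and positive-phase duration should be reported alongside peak overpressure, a minimal sketch of how those quantities might be extracted from a sampled overpressure trace is given below. The zero-crossing definition of the positive phase and the trapezoidal integration are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def blast_wave_parameters(t, p):
    """Illustrative extraction of peak overpressure (Pa), positive-phase
    duration (s) and impulse (Pa*s) from a sampled overpressure trace p(t)."""
    t = np.asarray(t, dtype=float)
    p = np.asarray(p, dtype=float)
    i_peak = int(np.argmax(p))
    peak_overpressure = p[i_peak]
    # positive phase: from the pressure peak to the first return to ambient
    below = np.where(p[i_peak:] <= 0.0)[0]
    i_end = i_peak + below[0] if below.size else len(p) - 1
    duration = t[i_end] - t[i_peak]
    impulse = np.trapz(p[i_peak:i_end + 1], t[i_peak:i_end + 1])
    return peak_overpressure, duration, impulse
```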

  8. A Complex of Business Process Management Models for a Service-Providing IT Company

    OpenAIRE

    Yatsenko Roman M.; Balykov Oleksii H.

    2017-01-01

    The article presents an analysis of a complex of business process management models that are designed to improve the performance of service-providing IT companies. This class of enterprises was selected because of their significant contribution to the Ukrainian economy: third place in the structure of exports, significant budget revenues, high development dynamics, and prospects in the global marketplace. The selected complex of models is designed as a sequence of stages that must be accompli...

  9. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Energy Technology Data Exchange (ETDEWEB)

    Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)]

    2007-10-15

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications including heart disease, stroke, blindness, kidney disease and amputations make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to show that such a scheme is capable of accurately extracting the concentration of glucose from complex biological media.

  10. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Science.gov (United States)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications including heart disease, stroke, blindness, kidney disease and amputations make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to show that such a scheme is capable of accurately extracting the concentration of glucose from complex biological media.

  11. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    International Nuclear Information System (INIS)

    Rybynok, V O; Kyriacou, P A

    2007-01-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications including heart disease, stroke, blindness, kidney disease and amputations make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to show that such a scheme is capable of accurately extracting the concentration of glucose from complex biological media.

  12. Using Models to Provide Predicted Ranges for Building-Human Interfaces: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Long, N.; Scheib, J.; Pless, S.; Schott, M.

    2013-09-01

    Most building energy consumption dashboards provide only a snapshot of building performance, whereas some provide more detailed historic data with which to compare current usage. This paper will discuss the Building Agent(tm) platform, which has been developed and deployed in a campus setting at the National Renewable Energy Laboratory as part of an effort to maintain the aggressive energy performance achieved in newly constructed office buildings and laboratories. The Building Agent(tm) provides aggregated and coherent access to building data, including electric energy, thermal energy, temperatures, humidity, and lighting levels, as well as occupant feedback, which are displayed in various manners for visitors, building occupants, facility managers, and researchers. This paper focuses on the development of visualizations for facility managers, or an energy performance assurance role, where metered data are used to generate models that provide live predicted ranges of building performance by end use. These predicted ranges provide simple, visual context for displayed performance data without requiring users to also assess historical information or trends. Several energy modelling techniques were explored, including static lookup-based performance targets, reduced-order models derived from historical data using main effect variables such as solar radiance for lighting performance, and integrated energy models using a whole-building energy simulation program.
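
    As a hedged sketch of the reduced-order modelling idea mentioned above, the snippet below fits daily lighting energy against a single main-effect variable (solar radiance) and reports a predicted range as the fitted value plus or minus a residual-based band. The variable names and the two-sigma band width are illustrative assumptions, not the Building Agent implementation.

```python
import numpy as np

def predicted_range(solar_radiance, lighting_kwh, new_radiance, k=2.0):
    """Fit lighting energy ~ solar radiance on historical days and return a
    (low, expected, high) band for a new day; k sets the band width in
    residual standard deviations."""
    solar_radiance = np.asarray(solar_radiance, dtype=float)
    lighting_kwh = np.asarray(lighting_kwh, dtype=float)
    X = np.column_stack([np.ones_like(solar_radiance), solar_radiance])
    coef, *_ = np.linalg.lstsq(X, lighting_kwh, rcond=None)
    resid_std = float(np.std(lighting_kwh - X @ coef))
    expected = coef[0] + coef[1] * new_radiance
    return expected - k * resid_std, expected, expected + k * resid_std
```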

  13. Comparing consumer-directed and agency models for providing supportive services at home.

    Science.gov (United States)

    Benjamin, A E; Matthias, R; Franke, T M

    2000-04-01

    To examine the service experiences and outcomes of low-income Medicaid beneficiaries with disabilities under two different models for organizing home-based personal assistance services: agency-directed and consumer-directed. A survey of a random sample of 1,095 clients, age 18 and over, who receive services in California's In-Home Supportive Services (IHSS) program funded primarily by Medicaid. Other data were obtained from the California Management and Payrolling System (CMIPS). The sample was stratified by service model (agency-directed or consumer-directed), client age (over or under age 65), and severity. Data were collected on client demographics, condition/functional status, and supportive service experience. Outcome measures were developed in three areas: safety, unmet need, and service satisfaction. Factor analysis was used to reduce multiple outcome measures to nine dimensions. Multiple regression analysis was used to assess the effect of service model on each outcome dimension, taking into account the client-provider relationship, client demographics, and case mix. Recipients of IHSS services as of mid-1996 were interviewed by telephone. The survey was conducted in late 1996 and early 1997. On various outcomes, recipients in the consumer-directed model report more positive outcomes than those in the agency model, or they report no difference. Statistically significant differences emerge on recipient safety, unmet needs, and service satisfaction. A family member present as a paid provider is also associated with more positive reported outcomes within the consumer-directed model, but model differences persist even when this is taken into account. Although both models have strengths and weaknesses, from a recipient perspective the consumer-directed model is associated with more positive outcomes. Although health professionals have expressed concerns about the capacity of consumer direction to assure quality, particularly with respect to safety, meeting unmet

  14. MODEL OF PROVIDING WITH DEVELOPMENT STRATEGY FOR INFORMATION TECHNOLOGIES IN AN ORGANIZATION

    Directory of Open Access Journals (Sweden)

    A. A. Kuzkin

    2015-03-01

    Full Text Available Subject of research. The paper presents research and instructional tools for assessing the provision of a development strategy for information technologies in an organization. Method. A corresponding assessment model is developed which takes into consideration IT-process equilibrium according to selected efficiency factors of information technology application. Basic results. The peculiarity of the model lies in its use of neuro-fuzzy approximators, in which conclusions are drawn using fuzzy logic and membership functions are tuned by neural networks. To test the adequacy of the suggested model, a due-diligence analysis was carried out for the IT-strategy executed in the “Navigator” group of companies at the stage of implementation and support of new technologies and production methods. Data visualization with a circle diagram is applied for the comparative evaluation of the analysis results. The adequacy of the chosen model is supported by the agreement between predictive assessments of IT-strategy performance targets, derived by means of the fuzzy cognitive model over a 12-month planning horizon, and the real values of these targets at the end of that planning term. Practical significance. Application of the developed model makes it possible to assess the sustainability of the process of providing the required level of IT-strategy realization, based on fuzzy cognitive map analysis, and to reveal how an organization's IT objectives tend to change over the stated planning interval.

  15. Non-Model-Based Control of a Wheeled Vehicle Pulling Two Trailers to Provide Early Powered Mobility and Driving Experiences.

    Science.gov (United States)

    Sanders, David A

    2018-01-01

    Non-model-based control of a wheeled vehicle pulling two trailers is proposed. It is a fun train for disabled children consisting of a locomotive and two carriages. The fun train has afforded opportunities for both disabled and able-bodied young people to share an activity and has provided early driving experiences for disabled children; it has introduced them to assistive and powered mobility. The train is a nonlinear system and subject to nonholonomic kinematic constraints, so that position and state depend on the path taken to get there. The train is described, and then a robust control algorithm using proportional-derivative filtered errors is proposed to control the locomotive. The controller does not depend on an accurate model of the train, because the mass of the vehicle and two carriages changes depending on the number, size, and shape of children and wheelchair seats on the train. The controller is robust and stable under uncertainty. Results are presented to show the effectiveness of the approach, and the suggested control algorithm is shown to be acceptable without knowing the exact plant dynamics.
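
    The abstract names the control law (proportional-derivative action on filtered tracking errors) without stating it. The fragment below is a generic sketch of that family of controllers; the gains, the first-order filter and the scalar error signal are illustrative choices and make no claim to reproduce the authors' tuning or stability analysis.

```python
def make_pd_filtered_controller(kp=4.0, kd=1.5, alpha=0.2):
    """Discrete proportional-derivative controller acting on a low-pass
    filtered tracking error; alpha in (0, 1] is the filter coefficient."""
    state = {"e_filt": 0.0, "e_prev": 0.0}

    def control(error, dt):
        # first-order low-pass filter on the raw tracking error
        state["e_filt"] += alpha * (error - state["e_filt"])
        d_error = (state["e_filt"] - state["e_prev"]) / dt
        state["e_prev"] = state["e_filt"]
        return kp * state["e_filt"] + kd * d_error  # commanded drive effort

    return control
```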

  16. A New Model for Providing Cell-Free DNA and Risk Assessment for Chromosome Abnormalities in a Public Hospital Setting

    Directory of Open Access Journals (Sweden)

    Robert Wallerstein

    2014-01-01

    Full Text Available Objective. Cell-free DNA (cfDNA) offers highly accurate noninvasive screening for Down syndrome. Incorporating it into routine care is complicated. We present our experience implementing a novel program for cfDNA screening, emphasizing patient education, genetic counseling, and resource management. Study Design. Beginning in January 2013, we initiated a new patient care model in which high-risk patients for aneuploidy received genetic counseling at 12 weeks of gestation. Patients were presented with four pathways for aneuploidy risk assessment and diagnosis: (1) cfDNA; (2) integrated screening; (3) direct-to-invasive testing (chorionic villus sampling or amniocentesis); or (4) no first trimester diagnostic testing/screening. Patients underwent follow-up genetic counseling and detailed ultrasound at 18–20 weeks to review first trimester testing and finalize decision for amniocentesis. Results. Counseling and second trimester detailed ultrasound were provided to 163 women. Most selected cfDNA screening (69%) over integrated screening (0.6%), direct-to-invasive testing (14.1%), or no screening (16.6%). Amniocentesis rates decreased following implementation of cfDNA screening (19.0% versus 13.0%, P<0.05). Conclusion. When counseled about screening options, women often chose cfDNA over integrated screening. This program is a model for patient-directed, efficient delivery of a newly available high-level technology in a public health setting. Genetic counseling is an integral part of patient education and determination of plan of care.

  17. Improvement of AEP Predictions Using Diurnal CFD Modelling with Site-Specific Stability Weightings Provided from Mesoscale Simulation

    Science.gov (United States)

    Hristov, Y.; Oxley, G.; Žagar, M.

    2014-06-01

    The Bolund measurement campaign, performed by the Danish Technical University (DTU) Wind Energy Department (also known as RISØ), provided significant insight into wind flow modeling over complex terrain. In the blind comparison study, several modelling solutions were submitted, with the vast majority being steady-state Computational Fluid Dynamics (CFD) approaches with two-equation k-epsilon turbulence closure. This approach yielded the most accurate results and was identified as the state-of-the-art tool for wind turbine generator (WTG) micro-siting. Based on the findings from Bolund, further comparison between CFD and field measurement data has been deemed essential in order to improve simulation accuracy for turbine load and long-term Annual Energy Production (AEP) estimations. Vestas Wind Systems A/S is a major WTG original equipment manufacturer (OEM) with an installed base of over 60 GW in over 70 countries, accounting for 19% of the global installed base. The Vestas Performance and Diagnostic Centre (VPDC) provides online live data for more than 47 GW of these turbines, allowing a comprehensive comparison between modelled and real-world energy production data. In previous studies, multiple sites have been simulated with a steady neutral CFD formulation for the atmospheric surface layer (ASL), and wind resource (RSF) files have been generated as a base for long-term AEP predictions, showing significant improvement over predictions performed with the industry-standard linear WAsP tool. In this study, further improvements to wind resource file generation with CFD are examined using an unsteady diurnal cycle approach with a full atmospheric boundary layer (ABL) formulation, with the unique stratifications throughout the cycle weighted according to mesoscale-simulated sectorwise stability frequencies.
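
    One way to read the stability-weighting step described above is as a frequency-weighted average of per-sector, per-stability-class energy estimates. The helper below shows that combination for hypothetical input arrays; it reflects our reading of the approach, not the Vestas implementation.

```python
import numpy as np

def stability_weighted_aep(energy_by_sector_class, frequency_by_sector_class):
    """Combine CFD energy estimates per wind-direction sector and stability
    class using mesoscale-derived frequencies; both inputs are arrays shaped
    (n_sectors, n_stability_classes)."""
    energy = np.asarray(energy_by_sector_class, dtype=float)
    weights = np.asarray(frequency_by_sector_class, dtype=float)
    weights = weights / weights.sum()
    return float(np.sum(weights * energy))  # annual energy production estimate
```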

  18. Simulation model for transcervical laryngeal injection providing real-time feedback.

    Science.gov (United States)

    Ainsworth, Tiffiny A; Kobler, James B; Loan, Gregory J; Burns, James A

    2014-12-01

    This study aimed to develop and evaluate a model for teaching transcervical laryngeal injections. A 3-dimensional printer was used to create a laryngotracheal framework based on de-identified computed tomography images of a human larynx. The arytenoid cartilages and intrinsic laryngeal musculature were created in silicone from clay casts and thermoplastic molds. The thyroarytenoid (TA) muscle was created with electrically conductive silicone using metallic filaments embedded in silicone. Wires connected TA muscles to an electrical circuit incorporating a cell phone and speaker. A needle electrode completed the circuit when inserted in the TA during simulated injection, providing real-time feedback of successful needle placement by producing an audible sound. Face validation by the senior author confirmed appropriate tactile feedback and anatomical realism. Otolaryngologists pilot tested the model and completed presimulation and postsimulation questionnaires. The high-fidelity simulation model provided tactile and audio feedback during needle placement, simulating transcervical vocal fold injections. Otolaryngology residents demonstrated higher comfort levels with transcervical thyroarytenoid injection on postsimulation questionnaires. This is the first study to describe a simulator for developing transcervical vocal fold injection skills. The model provides real-time tactile and auditory feedback that aids in skill acquisition. Otolaryngologists reported increased confidence with transcervical injection after using the simulator. © The Author(s) 2014.

  19. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Sargent, T.O.

    1981-01-01

    Nuclear reactor operator emergency response behavior has persisted as a training problem because of a lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior under both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge-based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge-based behavior, are described in detail.

  20. Relationship between anaerobic parameters provided from MAOD and critical power model in specific table tennis test.

    Science.gov (United States)

    Zagatto, A M; Gobatto, C A

    2012-08-01

    The aim of this study was to verify the validity of the curvature constant parameter (W'), calculated from 2-parameter mathematical equations of the critical power model, in estimating the anaerobic capacity and anaerobic work capacity from a table tennis-specific test. Specifically, we aimed to i) compare constants estimated from three critical intensity models in a table tennis-specific test (Cf); ii) correlate each estimated W' with the maximal accumulated oxygen deficit (MAOD); iii) correlate each W' with the total amount of anaerobic work (W_ANAER) performed in each exercise bout during the Cf test. Nine national-standard male table tennis players participated in the study. MAOD was 63.0 (10.8) mL·kg⁻¹ and W' values were 32.8 (6.6) balls for the linear-frequency model, 38.3 (6.9) balls for the linear-total-balls model, and 48.7 (8.9) balls for the nonlinear 2-parameter model. Estimated W' from the nonlinear 2-parameter model was significantly different from W' from the other 2 models (P0.13). Thus, W' estimated from the 2-parameter mathematical equations did not correlate with MAOD or W_ANAER in table tennis-specific tests, indicating that W' may not provide a strong and valid estimation of anaerobic capacity and anaerobic work capacity. © Georg Thieme Verlag KG Stuttgart · New York.
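
    For readers unfamiliar with the 2-parameter critical power formulation used here, a minimal sketch of the linear total-balls variant follows: the total number of balls hit in each exhaustive bout is regressed on the bout duration, the slope estimating the critical frequency (Cf) and the intercept estimating W'. The function is illustrative and is not the authors' analysis code.

```python
import numpy as np

def linear_total_balls_model(times_to_exhaustion_s, total_balls):
    """Fit total balls = W' + Cf * t across several exhaustive bouts.
    Returns (W' in balls, Cf in balls per second)."""
    t = np.asarray(times_to_exhaustion_s, dtype=float)
    y = np.asarray(total_balls, dtype=float)
    cf, w_prime = np.polyfit(t, y, 1)  # slope = Cf, intercept = W'
    return w_prime, cf
```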

  1. Accurate and self-consistent procedure for determining pH in seawater desalination brines and its manifestation in reverse osmosis modeling.

    Science.gov (United States)

    Nir, Oded; Marvin, Esra; Lahav, Ori

    2014-11-01

    Measuring and modeling pH in concentrated aqueous solutions in an accurate and consistent manner is of paramount importance to many R&D and industrial applications, including RO desalination. Nevertheless, unified definitions and standard procedures have yet to be developed for solutions with ionic strength higher than ∼0.7 M, while implementation of conventional pH determination approaches may lead to significant errors. In this work a systematic yet simple methodology for measuring pH in concentrated solutions (dominated by Na⁺/Cl⁻) was developed and evaluated, with the aim of achieving consistency with the Pitzer ion-interaction approach. Results indicate that the addition of 0.75 M of NaCl to NIST buffers, followed by assigning a new standard pH (calculated based on the Pitzer approach), enabled reducing measured errors to below 0.03 pH units in seawater RO brines (ionic strength up to 2 M). To facilitate its use, the method was developed to be both conceptually and practically analogous to the conventional pH measurement procedure. The method was used to measure the pH of seawater RO retentates obtained at varying recovery ratios. The results matched the pH values predicted by an accurate RO transport model more closely. Calibrating the model with the measured pH values enabled better boron transport prediction. A Donnan-induced phenomenon, affecting pH in both retentate and permeate streams, was identified and quantified. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Creation of an idealized nasopharynx geometry for accurate computational fluid dynamics simulations of nasal airflow in patient-specific models lacking the nasopharynx anatomy.

    Science.gov (United States)

    A T Borojeni, Azadeh; Frank-Ito, Dennis O; Kimbell, Julia S; Rhee, John S; Garcia, Guilherme J M

    2017-05-01

    Virtual surgery planning based on computational fluid dynamics (CFD) simulations has the potential to improve surgical outcomes for nasal airway obstruction patients, but the benefits of virtual surgery planning must outweigh the risks of radiation exposure. Cone beam computed tomography (CT) scans represent an attractive imaging modality for virtual surgery planning due to lower costs and lower radiation exposures compared with conventional CT scans. However, to minimize the radiation exposure, the cone beam CT sinusitis protocol sometimes images only the nasal cavity, excluding the nasopharynx. The goal of this study was to develop an idealized nasopharynx geometry for accurate representation of outlet boundary conditions when the nasopharynx geometry is unavailable. Anatomically accurate models of the nasopharynx created from 30 CT scans were intersected with planes rotated at different angles to obtain an average geometry. Cross sections of the idealized nasopharynx were approximated as ellipses with cross-sectional areas and aspect ratios equal to the averages in the actual patient-specific models. CFD simulations were performed to investigate whether nasal airflow patterns were affected when the CT-based nasopharynx was replaced by the idealized nasopharynx in 10 nasal airway obstruction patients. Despite the simple form of the idealized geometry, all biophysical variables (nasal resistance, airflow rate, and heat fluxes) were very similar in the idealized vs patient-specific models. The results confirmed the expectation that the nasopharynx geometry has a minimal effect on nasal airflow patterns during inspiration. The idealized nasopharynx geometry will be useful in future CFD studies of nasal airflow based on medical images that exclude the nasopharynx. Copyright © 2016 John Wiley & Sons, Ltd.
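
    The ellipse construction described above reduces to closed-form semi-axes. The helper below recovers them from a target cross-sectional area and aspect ratio (semi-major over semi-minor axis), using area = pi * a * b; it is our reading of the geometry, not code from the study.

```python
import math

def ellipse_semi_axes(area, aspect_ratio):
    """Semi-axes (a, b) of an ellipse with the given cross-sectional area and
    aspect ratio r = a / b, since area = pi * a * b."""
    b = math.sqrt(area / (math.pi * aspect_ratio))
    a = aspect_ratio * b
    return a, b
```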

  3. Electric field calculations in brain stimulation based on finite elements: an optimized processing pipeline for the generation and usage of accurate individual head models.

    Science.gov (United States)

    Windhoff, Mirko; Opitz, Alexander; Thielscher, Axel

    2013-04-01

    The need for realistic electric field calculations in noninvasive human brain stimulation, in order to more accurately determine the affected brain areas, is undisputed. However, using numerical techniques such as the finite element method (FEM) is methodologically complex, starting with the creation of accurate head models and extending to the integration of the models in the numerical calculations. These problems have substantially limited a more widespread application of numerical methods in brain stimulation up to now. We introduce an optimized processing pipeline allowing for the automatic generation of individualized high-quality head models from magnetic resonance images and their usage in subsequent field calculations based on the FEM. The pipeline starts by extracting the borders between skin, skull, cerebrospinal fluid, gray and white matter. The quality of the resulting surfaces is subsequently improved, allowing for the creation of tetrahedral volume head meshes that can finally be used in the numerical calculations. The pipeline integrates and extends established (and mainly free) software for neuroimaging, computer graphics, and FEM calculations into one easy-to-use solution. We demonstrate the successful usage of the pipeline in six subjects, including field calculations for transcranial magnetic stimulation and transcranial direct current stimulation. The quality of the head volume meshes is validated both in terms of capturing the underlying anatomy and of the well-shapedness of the mesh elements. The latter is crucial to guarantee the numerical robustness of the FEM calculations. The pipeline will be released as open source, allowing realistic field calculations to be performed for the first time at acceptable methodological complexity and moderate cost. Copyright © 2011 Wiley Periodicals, Inc.

  4. Avoiding fractional electrons in subsystem DFT based ab-initio molecular dynamics yields accurate models for liquid water and solvated OH radical

    International Nuclear Information System (INIS)

    Genova, Alessandro; Pavanello, Michele; Ceresoli, Davide

    2016-01-01

    In this work we achieve three milestones: (1) we present a subsystem DFT method capable of running ab-initio molecular dynamics simulations accurately and efficiently. (2) In order to rid the simulations of inter-molecular self-interaction error, we exploit the ability of the semilocal frozen density embedding formulation of subsystem DFT to represent the total electron density as a sum of localized subsystem electron densities that are constrained to integrate to a preset, constant number of electrons; the success of the method relies on the fact that the employed semilocal nonadditive kinetic energy functionals effectively cancel out errors in semilocal exchange–correlation potentials that are linked to static correlation effects and self-interaction. (3) We demonstrate this concept by simulating liquid water and the solvated OH• radical. While the bulk of our simulations have been performed on a periodic box containing 64 independent water molecules for 52 ps, we also simulated a box containing 256 water molecules for 22 ps. The results show that, provided one employs an accurate nonadditive kinetic energy functional, the dynamics of liquid water and the OH• radical are in semiquantitative agreement with experimental results or higher-level electronic structure calculations. Our assessments are based upon comparisons of radial and angular distribution functions as well as the diffusion coefficient of the liquid.

  5. Reliability constrained decision model for energy service provider incorporating demand response programs

    International Nuclear Information System (INIS)

    Mahboubi-Moghaddam, Esmaeil; Nayeripour, Majid; Aghaei, Jamshid

    2016-01-01

    Highlights: • The operation of Energy Service Providers (ESPs) in electricity markets is modeled. • Demand response is used as a cost-effective solution for the energy service provider. • The market price uncertainty is modeled using the robust optimization technique. • The reliability of the distribution network is embedded into the framework. • The simulation results demonstrate the benefits of the robust framework for ESPs. - Abstract: Demand response (DR) programs are becoming a critical concept for the efficiency of current electric power industries. Therefore, their various capabilities and barriers have to be investigated. In this paper, an effective decision model is presented for the strategic behavior of energy service providers (ESPs) to demonstrate how to participate in the day-ahead electricity market and how to allocate demand in the smart distribution network. Since the market price affects DR and vice versa, a new two-step sequential framework is proposed, in which the unit commitment (UC) problem is solved to forecast the expected locational marginal prices (LMPs), and successively a DR program is applied to optimize the total cost of providing energy for the distribution network customers. This total cost includes the cost of power purchased from the market and from distributed generation (DG) units, the incentive cost paid to the customers, and the compensation cost of power interruptions. To obtain the compensation cost, a reliability evaluation of the distribution network is embedded into the framework using some innovative constraints. Furthermore, to consider the unexpected behaviors of the other market participants, the LMP prices are modeled as uncertain parameters using the robust optimization technique, which is more practical compared with the conventional stochastic approach. The simulation results demonstrate the significant benefits of the presented framework for the strategic performance of ESPs.
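
    The total cost the framework minimizes is spelled out in words above (market purchases, distributed generation, demand-response incentives and interruption compensation). The snippet below writes that objective for a single period with hypothetical scalar inputs; it omits the unit commitment step, the robust LMP uncertainty set and the network reliability constraints that the paper handles.

```python
def esp_total_cost(p_market_mwh, lmp, p_dg_mwh, dg_cost, dr_mwh, incentive_rate,
                   energy_not_served_mwh, interruption_cost_rate):
    """Illustrative single-period total cost for the energy service provider:
    market purchases + DG generation + DR incentives + reliability compensation."""
    return (p_market_mwh * lmp
            + p_dg_mwh * dg_cost
            + dr_mwh * incentive_rate
            + energy_not_served_mwh * interruption_cost_rate)
```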

  6. Wind farms providing secondary frequency regulation: Evaluating the performance of model-based receding horizon control

    International Nuclear Information System (INIS)

    Shapiro, Carl R.; Meneveau, Charles; Gayme, Dennice F.; Meyers, Johan

    2016-01-01

    We investigate the use of wind farms to provide secondary frequency regulation for a power grid. Our approach uses model-based receding horizon control of a wind farm that is tested using a large eddy simulation (LES) framework. In order to enable real-time implementation, the control actions are computed based on a time-varying one-dimensional wake model. This model describes wake advection and interactions, both of which play an important role in wind farm power production. This controller is implemented in an LES model of an 84-turbine wind farm represented by actuator disk turbine models. Differences between the velocities at each turbine predicted by the wake model and measured in LES are used for closed-loop feedback. The controller is tested on two types of regulation signals, “RegA” and “RegD”, obtained from PJM, an independent system operator in the eastern United States. Composite performance scores, which are used by PJM to qualify plants for regulation, are used to evaluate the performance of the controlled wind farm. Our results demonstrate that the controlled wind farm consistently performs well, passing the qualification threshold for all fast-acting RegD signals. For the RegA signal, which changes over slower time scales, the controlled wind farm's average performance surpasses the threshold, but further work is needed to enable the controlled system to achieve qualifying performance all of the time. (paper)

  7. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    Science.gov (United States)

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC(0-t) (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC(0-t) of saroglitazar. Only models with regression coefficients (R²) >0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R² > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were prediction of saroglitazar. The same models, when applied to the AUC(0-t) prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error model predicts the exposure of
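
    A 3-concentration-time-point limited sampling model of the kind validated here is, in effect, a multiple linear regression of AUC(0-t) on the concentrations at the chosen times. The sketch below fits and applies such a model for hypothetical sampling times of 0.5, 2 and 8 hours; the coefficients it produces are illustrative and are not the published equations.

```python
import numpy as np

def fit_limited_sampling_model(c_05h, c_2h, c_8h, auc_0_t):
    """Fit AUC(0-t) ~ b0 + b1*C(0.5 h) + b2*C(2 h) + b3*C(8 h) on training data."""
    c_05h = np.asarray(c_05h, dtype=float)
    X = np.column_stack([np.ones_like(c_05h), c_05h, c_2h, c_8h])
    coef, *_ = np.linalg.lstsq(X, np.asarray(auc_0_t, dtype=float), rcond=None)
    return coef

def predict_auc(coef, c_05h, c_2h, c_8h):
    """Apply the fitted limited sampling model to new concentration data."""
    return coef[0] + coef[1] * c_05h + coef[2] * c_2h + coef[3] * c_8h
```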

  8. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    Science.gov (United States)

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also the applied filtering process alone can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy, which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question of whether a numerically stable subgrid-scale model can be ideally accurate.

  9. Biomass transformation webs provide a unified approach to consumer-resource modelling.

    Science.gov (United States)

    Getz, Wayne M

    2011-02-01

    An approach to modelling food web biomass flows among live and dead compartments within and among species is formulated using metaphysiological principles that characterise population growth in terms of basal metabolism, feeding, senescence and exploitation. This leads to a unified approach to modelling interactions among plants, herbivores, carnivores, scavengers, parasites and their resources. Also, dichotomising sessile miners from mobile gatherers of resources, with relevance to feeding and starvation time scales, suggests a new classification scheme involving 10 primary categories of consumer types. These types, in various combinations, rigorously distinguish scavenger from parasite, herbivory from phytophagy and detritivore from decomposer. Application of the approach to particular consumer-resource interactions is demonstrated, culminating in the construction of an anthrax-centred food web model, with parameters applicable to Etosha National Park, Namibia, where deaths of elephants and zebra from the bacterial pathogen, Bacillus anthracis, provide significant subsidies to jackals, vultures and other scavengers. © 2010 Blackwell Publishing Ltd/CNRS.

  10. Biomass transformation webs provide a unified approach to consumer–resource modelling

    Science.gov (United States)

    Getz, Wayne M.

    2011-01-01

    An approach to modelling food web biomass flows among live and dead compartments within and among species is formulated using metaphysiological principles that characterise population growth in terms of basal metabolism, feeding, senescence and exploitation. This leads to a unified approach to modelling interactions among plants, herbivores, carnivores, scavengers, parasites and their resources. Also, dichotomising sessile miners from mobile gatherers of resources, with relevance to feeding and starvation time scales, suggests a new classification scheme involving 10 primary categories of consumer types. These types, in various combinations, rigorously distinguish scavenger from parasite, herbivory from phytophagy and detritivore from decomposer. Application of the approach to particular consumer–resource interactions is demonstrated, culminating in the construction of an anthrax-centred food web model, with parameters applicable to Etosha National Park, Namibia, where deaths of elephants and zebra from the bacterial pathogen, Bacillus anthracis, provide significant subsidies to jackals, vultures and other scavengers. PMID:21199247

  11. Modeling Key Drivers of Cholera Transmission Dynamics Provides New Perspectives for Parasitology.

    Science.gov (United States)

    Rinaldo, Andrea; Bertuzzo, Enrico; Blokesch, Melanie; Mari, Lorenzo; Gatto, Marino

    2017-08-01

    Hydroclimatological and anthropogenic factors are key drivers of waterborne disease transmission. Information on human settlements and host mobility on waterways along which pathogens and hosts disperse, and relevant hydroclimatological processes, can be acquired remotely and included in spatially explicit mathematical models of disease transmission. In the case of epidemic cholera, such models allowed the description of complex disease patterns and provided insight into the course of ongoing epidemics. The inclusion of spatial information in models of disease transmission can aid in emergency management and the assessment of alternative interventions. Here, we review the study of drivers of transmission via spatially explicit approaches and argue that, because many parasitic waterborne diseases share the same drivers as cholera, similar principles may apply. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. A Complex of Business Process Management Models for a Service-Providing IT Company

    Directory of Open Access Journals (Sweden)

    Yatsenko Roman M.

    2017-10-01

    Full Text Available The article presents an analysis of a complex of business process management models that are designed to improve the performance of service-providing IT companies. This class of enterprises was selected because of their significant contribution to the Ukrainian economy: third place in the structure of exports, significant budget revenues, high development dynamics, and prospects in the global marketplace. The selected complex of models is designed as a sequence of stages that must be accomplished in order to optimize business processes. The first stage is an analysis of the nature of the process approach, approaches to strategic management, and the characteristics of service-providing IT companies. The second stage is to build the formal and hierarchical models to define the characteristics of the business processes and their structure, respectively. The third stage is to evaluate individual business processes (information model) and the entire business process system (multi-level assessment of business processes). The fourth stage is to optimize the business processes at each level: strategic, tactical and operational. The fifth stage is to restructure the business processes after optimization. The sixth (final) stage is to analyze the efficiency of the restructured system of business processes.

  13. Pharmacists providing care in the outpatient setting through telemedicine models: a narrative review

    Directory of Open Access Journals (Sweden)

    Littauer SL

    2017-12-01

    Full Text Available Telemedicine refers to the delivery of clinical services using technology that allows two-way, real-time, interactive communication between the patient and the clinician at a distant site. Commonly, telemedicine is used to improve access to general and specialty care for patients in rural areas. This review aims to provide an overview of existing models involving the delivery of care by pharmacists via telemedicine (including telemonitoring and video, but excluding follow-up telephone calls) and to highlight the main areas of chronic-disease management where these models have been applied. Studies within the areas of hypertension, diabetes, asthma, anticoagulation and depression were identified, but only two randomized controlled trials with adequate sample size demonstrating the positive impact of telemonitoring combined with pharmacist care in hypertension were identified. The evidence for the impact of pharmacist-based telemedicine models is sparse and weak, with the studies conducted presenting serious threats to internal and external validity. Therefore, no definitive conclusions about the impact of pharmacist-led telemedicine models can be made at this time. In the United States, the increasing shortage of primary care providers and specialists represents an opportunity for pharmacists to assume a more prominent role managing patients with chronic disease in the ambulatory care setting. However, lack of reimbursement may pose a barrier to the provision of care by pharmacists using telemedicine.

  14. Agent-based organizational modelling for analysis of safety culture at an air navigation service provider

    International Nuclear Information System (INIS)

    Stroeve, Sybert H.; Sharpanskykh, Alexei; Kirwan, Barry

    2011-01-01

    Assessment of safety culture is done predominantly by questionnaire-based studies, which tend to reveal attitudes on immaterial characteristics (values, beliefs, norms). There is a need for a better understanding of the implications of the material aspects of an organization (structures, processes, etc.) for safety culture and their interactions with the immaterial characteristics. This paper presents a new agent-based organizational modelling approach for integrated and systematic evaluation of material and immaterial characteristics of socio-technical organizations in safety culture analysis. It uniquely considers both the formal organization and the value- and belief-driven behaviour of individuals in the organization. Results are presented of a model for safety occurrence reporting at an air navigation service provider. Model predictions consistent with questionnaire-based results are achieved. A sensitivity analysis provides insight in organizational factors that strongly influence safety culture indicators. The modelling approach can be used in combination with attitude-focused safety culture research, towards an integrated evaluation of material and immaterial characteristics of socio-technical organizations. By using this approach an organization is able to gain a deeper understanding of causes of diverse problems and inefficiencies both in the formal organization and in the behaviour of organizational agents, and to systematically identify and evaluate improvement options.

  15. Modeling fMRI signals can provide insights into neural processing in the cerebral cortex.

    Science.gov (United States)

    Vanni, Simo; Sharifian, Fariba; Heikkinen, Hanna; Vigário, Ricardo

    2015-08-01

    Every stimulus or task activates multiple areas in the mammalian cortex. These distributed activations can be measured with functional magnetic resonance imaging (fMRI), which has the best spatial resolution among the noninvasive brain imaging methods. Unfortunately, the relationship between the fMRI activations and distributed cortical processing has remained unclear, both because the coupling between neural and fMRI activations has remained poorly understood and because fMRI voxels are too large to directly sense the local neural events. To get an idea of the local processing given the macroscopic data, we need models to simulate the neural activity and to provide output that can be compared with fMRI data. Such models can describe neural mechanisms as mathematical functions between input and output in a specific system, with little correspondence to physiological mechanisms. Alternatively, models can be biomimetic, including biological details with straightforward correspondence to experimental data. After careful balancing between complexity, computational efficiency, and realism, a biomimetic simulation should be able to provide insight into how biological structures or functions contribute to actual data processing as well as to promote theory-driven neuroscience experiments. This review analyzes the requirements for validating system-level computational models with fMRI. In particular, we study mesoscopic biomimetic models, which include a limited set of details from real-life networks and enable system-level simulations of neural mass action. In addition, we discuss how recent developments in neurophysiology and biophysics may significantly advance the modelling of fMRI signals. Copyright © 2015 the American Physiological Society.

  16. The climate4impact platform: Providing, tailoring and facilitating climate model data access

    Science.gov (United States)

    Pagé, Christian; Pagani, Andrea; Plieger, Maarten; Som de Cerff, Wim; Mihajlovski, Andrej; de Vreede, Ernst; Spinuso, Alessandro; Hutjes, Ronald; de Jong, Fokke; Bärring, Lars; Vega, Manuel; Cofiño, Antonio; d'Anca, Alessandro; Fiore, Sandro; Kolax, Michael

    2017-04-01

    One of the main objectives of climate4impact is to provide standardized web services and tools that are reusable in other portals. These services include web processing services, web coverage services and web mapping services (WPS, WCS and WMS). Tailored portals can be targeted to specific communities and/or countries/regions while making use of those services. Easier access to climate data is very important for the climate change impact communities. To fulfill this objective, the climate4impact (http://climate4impact.eu/) web portal and services have been developed, targeting climate change impact modellers, impact and adaptation consultants, as well as other experts using climate change data. It provides users with harmonized access to climate model data through tailored services. It features static and dynamic documentation, Use Cases and best practice examples, an advanced search interface, an integrated authentication and authorization system with the Earth System Grid Federation (ESGF), and a visualization interface with ADAGUC web mapping tools. In the latest version, statistical downscaling services, provided by the Santander Meteorology Group Downscaling Portal, were integrated. An innovative interface to integrate statistical downscaling services will be released in the upcoming version. The latter will be a big step in bridging the gap between climate scientists and the climate change impact communities. The climate4impact portal builds on the infrastructure of an international distributed database that has been set up to disseminate the results of the global climate model runs of the Coupled Model Intercomparison Project Phase 5 (CMIP5). This database, the ESGF, is an international collaboration that develops, deploys and maintains software infrastructure for the management, dissemination, and analysis of climate model data. The European FP7 project IS-ENES, Infrastructure for the European Network for Earth System modelling, supports the European

  17. Capabilities of stochastic rainfall models as data providers for urban hydrology

    Science.gov (United States)

    Haberlandt, Uwe

    2017-04-01

    For planning of urban drainage systems using hydrological models, long, continuous precipitation series with high temporal resolution are needed. Since observed time series are often too short or not available everywhere, the use of synthetic precipitation is a common alternative. This contribution compares three precipitation models regarding their suitability to provide 5-minute continuous rainfall time series for a) sizing of drainage networks for urban flood protection and b) dimensioning of combined sewage systems for pollution reduction. The rainfall models are a parametric stochastic model (Haberlandt et al., 2008), a non-parametric probabilistic approach (Bárdossy, 1998) and a stochastic downscaling of dynamically simulated rainfall (Berg et al., 2013); all models are operated both as single-site and multi-site generators. The models are applied with regionalised parameters, assuming that there is no station at the target location. Rainfall and discharge characteristics are utilised for evaluation of the model performance. The simulation results are compared against results obtained from reference rainfall stations not used for parameter estimation. The rainfall simulations are carried out for the federal states of Baden-Württemberg and Lower Saxony in Germany, and the discharge simulations for the drainage networks of the cities of Hamburg, Brunswick and Freiburg. Altogether, the results show comparable simulation performance for the three models, good capabilities for single-site simulations but low skill for multi-site simulations. Remarkably, there is no significant difference in simulation performance between the tasks of flood protection and pollution reduction, so the models are able to simulate both the extremes and the long-term characteristics of rainfall equally well. Bárdossy, A., 1998. Generating precipitation time series using simulated annealing. Wat. Resour. Res., 34(7): 1737-1744. Berg, P., Wagner, S., Kunstmann, H., Schädler, G

  18. Analytical modeling provides new insight into complex mutual coupling between surface loops at ultrahigh fields.

    Science.gov (United States)

    Avdievich, N I; Pfrommer, A; Giapitzakis, I A; Henning, A

    2017-10-01

    Ultrahigh-field (UHF) (≥7 T) transmit (Tx) human head surface loop phased arrays improve both the Tx efficiency (B_1^+/√P) and homogeneity in comparison with single-channel quadrature Tx volume coils. For multi-channel arrays, decoupling becomes one of the major problems during the design process. Further insight into the coupling between array elements and its dependence on various factors can facilitate array development. The evaluation of the entire impedance matrix Z for an array loaded with a realistic voxel model or phantom is a time-consuming procedure when performed using electromagnetic (EM) solvers. This motivates the development of an analytical model, which could provide a quick assessment of the Z-matrix. In this work, an analytical model based on dyadic Green's functions was developed and validated using an EM solver and bench measurements. The model evaluates the complex coupling, including both the electric (mutual resistance) and magnetic (mutual inductance) coupling. Validation demonstrated that the model describes the coupling well at lower fields (≤3 T). At UHFs, the model also performs well for a practical case of low magnetic coupling. Based on the modeling, the geometry of a 400-MHz, two-loop transceiver array was optimized, such that, by simply overlapping the loops, both the mutual inductance and the mutual resistance were compensated at the same time. As a result, excellent decoupling (below -40 dB) was obtained without any additional decoupling circuits. An overlapped array prototype compared favorably (in signal-to-noise ratio and Tx efficiency) with a gapped array, a geometry which has been utilized previously in designs of UHF Tx arrays. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Providing a more complete view of ice-age palaeoclimates using model inversion and data interpolation

    Science.gov (United States)

    Cleator, Sean; Harrison, Sandy P.; Roulstone, Ian; Nichols, Nancy K.; Prentice, Iain Colin

    2017-04-01

    Site-based pollen records have been used to provide quantitative reconstructions of Last Glacial Maximum (LGM) climates, but there are too few such records to provide continuous climate fields for the evaluation of climate model simulations. Furthermore, many of the reconstructions were made using modern-analogue techniques, which do not account for the direct impact of CO2 on water-use efficiency and therefore reconstruct considerably drier conditions under low CO2 at the LGM than indicated by other sources of information. We have shown that it is possible to correct analogue-based moisture reconstructions for this effect by inverting a simple light-use efficiency model of productivity, based on the principle that the rate of water loss per unit carbon gain of a plant is the same under conditions of the true moisture, palaeotemperature and palaeo CO2 concentration as under reconstructed moisture, modern CO2 concentration and modern temperature (Prentice et al., 2016). In this study, we use data from the Bartlein et al. (2011) dataset, which provides reconstructions of one or more of six climate variables (mean annual temperature, mean temperature of the warmest and coldest months, the length of the growing seasons, mean annual precipitation, and the ratio of actual to potential evapotranspiration) at individual LGM sites. We use the SPLASH water-balance model to derive a moisture index (MI) at each site from mean annual precipitation and monthly values of sunshine fraction and average temperature, and correct this MI using the Prentice et al. (2016) inversion approach. We then use a three-dimensional variational (3D-Var) data assimilation scheme with the SPLASH model and Prentice et al. (2016) inversion approach to derive reconstructions of all six climate variables at each site, using the Bartlein et al. (2011) data set as a target. We use two alternative background climate states (or priors): modern climate derived from the CRU CL v2.0 data set (New et al., 2002
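
    The 3D-Var step described above can be illustrated with a small linear-Gaussian sketch: given a background state xb with error covariance B, observations y, a linear observation operator H and observation error covariance R, the cost function J(x) = (x-xb)'B^-1(x-xb) + (y-Hx)'R^-1(y-Hx) has a closed-form minimiser. The matrices and values below are illustrative assumptions, not the covariances or operators used in the study.

      import numpy as np

      def three_d_var(xb, B, y, H, R):
          """Minimise J(x) = (x-xb)' B^-1 (x-xb) + (y-Hx)' R^-1 (y-Hx).

          For a linear observation operator H the minimiser has the closed form
          xa = xb + K (y - H xb) with K = B H' (H B H' + R)^-1 (the gain matrix).
          """
          innovation = y - H @ xb
          K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
          return xb + K @ innovation

      # Illustrative use: 6 climate variables at one site, 3 of them observed.
      xb = np.array([10.0, 25.0, -5.0, 180.0, 600.0, 0.6])    # prior (e.g. modern climate)
      B  = np.diag([4.0, 4.0, 9.0, 400.0, 1e4, 0.01])         # background error covariance
      H  = np.zeros((3, 6)); H[0, 0] = H[1, 4] = H[2, 5] = 1  # observe variables 1, 5 and 6
      R  = np.diag([1.0, 2500.0, 0.005])                      # observation error covariance
      y  = np.array([7.5, 450.0, 0.45])                       # pollen-based reconstructions
      xa = three_d_var(xb, B, y, H, R)                        # analysis of all six variables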

  20. Family child care home providers as role models for children: Cause for concern?

    Directory of Open Access Journals (Sweden)

    Alison Tovar

    2017-03-01

    Full Text Available Health behaviors associated with chronic disease, particularly healthy eating and regular physical activity, are important role modeling opportunities for individuals working in child care programs. Prior studies have not explored these risk factors in family child care home (FCCH) providers, who care for vulnerable and at-risk populations. To address this gap, we describe the socio-demographic and health risk behavior profiles in a sample of FCCH providers (n = 166) taken from baseline data of an ongoing cluster-randomized controlled intervention (2011–2016) in North Carolina. Data were collected during on-site visits where providers completed self-administered questionnaires (socio-demographics, physical activity, fruit and vegetable consumption, number of hours of sleep per night and perceived stress) and had their height and weight measured. A risk score (range: 0–6; 0 = no risk to 6 = high risk) was calculated based on how many of the following were present: not having health insurance, being overweight/obese, not meeting physical activity, fruit and vegetable, and sleep recommendations, and having high stress. Mean and frequency distributions of participant and FCCH characteristics were calculated. Close to one third (29.3%) of providers reported not having health insurance. Almost all providers (89.8%) were overweight or obese, with approximately half not meeting guidelines for physical activity, fruit and vegetable consumption, and sleep. Over half reported a “high” stress score. The mean risk score was 3.39 (±1.2), with close to half of the providers having a risk score of 4, 5 or 6 (45.7%). These results stress the need to promote the health of these important care providers.
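
    The 0–6 risk score described above is a simple count of risk factors; the sketch below reproduces that count in Python. The cut-off values (e.g. BMI >= 25, sleep >= 7 h) are stated assumptions, not the study's exact definitions.

      def provider_risk_score(has_insurance, bmi, meets_pa, meets_fv, sleep_hours, high_stress):
          """Count how many of six risk factors are present (0 = no risk, 6 = high risk)."""
          risks = [
              not has_insurance,       # no health insurance
              bmi >= 25,               # overweight or obese (assumed cut-off)
              not meets_pa,            # physical activity guideline not met
              not meets_fv,            # fruit/vegetable guideline not met
              sleep_hours < 7,         # assumed sleep recommendation of >= 7 h
              high_stress,             # high perceived stress
          ]
          return sum(risks)

      print(provider_risk_score(True, 31.2, False, False, 6.0, True))   # -> 5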

  1. CT evaluation of living liver donor: Can 100-kVp plus iterative reconstruction protocol provide accurate liver volume and vascular anatomy for liver transplantation with reduced radiation and contrast dose?

    Science.gov (United States)

    Yoshida, Morikatsu; Utsunomiya, Daisuke; Kidoh, Masafumi; Yuki, Hideaki; Oda, Seitaro; Shiraishi, Shinya; Yamamoto, Hidekazu; Inomata, Yukihiro; Yamashita, Yasuyuki

    2017-06-01

    We evaluated whether donor computed tomography (CT) with a combined technique of lower tube voltage and iterative reconstruction (IR) can provide sufficient preoperative information for liver transplantation. We retrospectively reviewed CT scans of 113 liver donor candidates. Dynamic contrast-enhanced CT of the liver was performed with one of the following protocols: protocol A (n = 70), 120-kVp with filtered back projection (FBP); protocol B (n = 43), 100-kVp with IR. To equalize the background covariates, one-to-one propensity-matched analysis was used. We visually compared the score of the hepatic artery (A-score), portal vein (P-score), and hepatic vein (V-score) of the 2 protocols and quantitatively correlated the graft volume obtained by CT volumetry (graft-CTv) under the 2 protocols with the actual graft weight. In total, 39 protocol-A and protocol-B candidates showed comparable preoperative clinical characteristics with propensity matching. For protocols A and B, the A-score was 3.87 ± 0.73 and 4.51 ± 0.56, respectively. The 100-kVp plus IR protocol provides better visualization of vascular structures than the 120-kVp plus FBP protocol, with comparable accuracy for graft-CTv, while lowering radiation exposure by more than 40% and reducing the contrast-medium dose by 20%.

  2. NSG Mice Provide a Better Spontaneous Model of Breast Cancer Metastasis than Athymic (Nude) Mice.

    Directory of Open Access Journals (Sweden)

    Madhavi Puchalapalli

    Full Text Available Metastasis is the most common cause of mortality in breast cancer patients worldwide. To identify improved mouse models for breast cancer growth and spontaneous metastasis, we examined growth and metastasis of both estrogen receptor positive (T47D) and negative (MDA-MB-231, SUM1315, and CN34BrM) human breast cancer cells in nude and NSG mice. Both primary tumor growth and spontaneous metastases were increased in NSG mice compared to nude mice. In addition, a pattern of metastasis similar to that observed in human breast cancer patients (metastases to the lungs, liver, bones, brain, and lymph nodes) was found in NSG mice. Furthermore, there was an increase in the metastatic burden in NSG compared to nude mice that were injected with MDA-MB-231 breast cancer cells in an intracardiac experimental metastasis model. These data demonstrate that NSG mice provide a better model for studying human breast cancer metastasis compared to the current nude mouse model.

  3. State and Alternative Fuel Provider Fleets - Fleet Compliance Annual Report: Model Year 2015, Fiscal Year 2016

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-01

    The U.S. Department of Energy (DOE) regulates covered state government and alternative fuel provider fleets, pursuant to the Energy Policy Act of 1992 (EPAct), as amended. Covered fleets may meet their EPAct requirements through one of two compliance methods: Standard Compliance or Alternative Compliance. For model year (MY) 2015, the compliance rate with this program for the more than 300 reporting fleets was 100%. More than 294 fleets used Standard Compliance and exceeded their aggregate MY 2015 acquisition requirements by 8% through acquisitions alone. The seven covered fleets that used Alternative Compliance exceeded their aggregate MY 2015 petroleum use reduction requirements by 46%.

  4. Soft Independent Modeling of Class Analogy (SIMCA) Modeling of Laser-Induced Plasma Emission Spectra of Edible Salts for Accurate Classification.

    Science.gov (United States)

    Lee, Yonghoon; Han, Song-Hee; Nam, Sang-Ho

    2017-09-01

    We report soft independent modeling of class analogy (SIMCA) analysis of laser-induced plasma emission spectra of edible salts from 12 different geographical origins in order to build a classification model. The spectra were recorded using a simple laser-induced breakdown spectroscopy (LIBS) device. Each class was modeled by principal component analysis (PCA) of the LIBS spectra. For the classification of a separate test data set, the SIMCA model showed 97% accuracy. Additional insight was obtained by comparing the SIMCA classification result with that of partial least squares discriminant analysis (PLS-DA). Unlike SIMCA, the PLS-DA classification accuracy appears to be sensitive to the addition of new sample classes to the data set. This indicates that the individual modeling approach (SIMCA) can be an alternative to global modeling (PLS-DA), particularly for classification problems with a relatively large number of sample classes.
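
    A minimal SIMCA-style classifier can be built by fitting one PCA model per class and assigning new spectra to the class with the smallest scaled residual distance to that class subspace; the sketch below does this with scikit-learn. The number of components, the distance rule and the toy spectra are illustrative choices, not those of the paper.

      import numpy as np
      from sklearn.decomposition import PCA

      def fit_simca(X_by_class, n_components=3):
          """Fit one PCA model per class (soft independent modeling of class analogy)."""
          models = {}
          for label, X in X_by_class.items():
              mu = X.mean(axis=0)
              pca = PCA(n_components=n_components).fit(X - mu)
              resid = (X - mu) - pca.inverse_transform(pca.transform(X - mu))
              s0 = np.sqrt((resid ** 2).sum(axis=1).mean())   # typical residual distance
              models[label] = (mu, pca, s0)
          return models

      def classify(x, models):
          """Assign the spectrum x to the class with the smallest scaled residual."""
          scores = {}
          for label, (mu, pca, s0) in models.items():
              r = (x - mu) - pca.inverse_transform(pca.transform((x - mu).reshape(1, -1)))[0]
              scores[label] = np.sqrt((r ** 2).sum()) / s0
          return min(scores, key=scores.get), scores

      rng = np.random.default_rng(0)
      X_by_class = {f"salt_{i}": rng.normal(i, 1.0, size=(20, 50)) for i in range(3)}  # toy spectra
      models = fit_simca(X_by_class)
      label, dists = classify(rng.normal(1, 1.0, size=50), models)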

  5. Accurate predictions of population-level changes in sequence and structural properties of HIV-1 Env using a volatility-controlled diffusion model.

    Science.gov (United States)

    DeLeon, Orlando; Hodis, Hagit; O'Malley, Yunxia; Johnson, Jacklyn; Salimi, Hamid; Zhai, Yinjie; Winter, Elizabeth; Remec, Claire; Eichelberger, Noah; Van Cleave, Brandon; Puliadi, Ramya; Harrington, Robert D; Stapleton, Jack T; Haim, Hillel

    2017-04-01

    The envelope glycoproteins (Envs) of HIV-1 continuously evolve in the host by random mutations and recombination events. The resulting diversity of Env variants circulating in the population and their continuing diversification process limit the efficacy of AIDS vaccines. We examined the historic changes in Env sequence and structural features (measured by integrity of epitopes on the Env trimer) in a geographically defined population in the United States. As expected, many Env features were relatively conserved during the 1980s. From this state, some features diversified whereas others remained conserved across the years. We sought to identify "clues" to predict the observed historic diversification patterns. Comparison of viruses that cocirculate in patients at any given time revealed that each feature of Env (sequence or structural) exists at a defined level of variance. The in-host variance of each feature is highly conserved among individuals but can vary between different HIV-1 clades. We designate this property "volatility" and apply it to model evolution of features as a linear diffusion process that progresses with increasing genetic distance. Volatilities of different features are highly correlated with their divergence in longitudinally monitored patients. Volatilities of features also correlate highly with their population-level diversification. Using volatility indices measured from a small number of patient samples, we accurately predict the population diversity that developed for each feature over the course of 30 years. Amino acid variants that evolved at key antigenic sites are also predicted well. Therefore, small "fluctuations" in feature values measured in isolated patient samples accurately describe their potential for population-level diversification. These tools will likely contribute to the design of population-targeted AIDS vaccines by effectively capturing the diversity of currently circulating strains and addressing properties of variants
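
    A minimal sketch of the volatility idea described above: the in-host variance of a feature (its volatility) is measured from co-circulating variants, and population-level variance is then predicted to grow linearly with genetic distance. The toy numbers and the unit diffusion rate are assumptions, not values from the study.

      import numpy as np

      def feature_volatility(values_by_host):
          """Volatility of a feature: mean in-host variance across co-circulating variants."""
          return float(np.mean([np.var(v, ddof=1) for v in values_by_host]))

      def predicted_variance(volatility, genetic_distance, rate=1.0):
          """Linear-diffusion prediction: population variance grows linearly with distance."""
          return rate * volatility * genetic_distance

      # Toy data: one Env feature measured in variants sampled from three patients.
      hosts = [np.array([0.42, 0.45, 0.40]),
               np.array([0.50, 0.47]),
               np.array([0.44, 0.49, 0.46])]
      vol = feature_volatility(hosts)
      print(predicted_variance(vol, genetic_distance=0.12))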

  6. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    Science.gov (United States)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves--even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 / MPa * s, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative
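
    Single-conduit water transport of the kind modeled here is often approximated with the Hagen-Poiseuille relation; the sketch below computes a lumen-area-specific conductivity r^2/(8*mu) and shows that a radius of roughly 11 um reproduces the 0.015 m^2/(MPa*s) threshold quoted above, assuming that figure is a lumen-specific conductivity. This is only an illustration, not the authors' full hydraulic model.

      def lumen_specific_conductivity(radius_m, viscosity_pa_s=1.0e-3):
          """Hagen-Poiseuille lumen-area-specific conductivity, in m^2 / (Pa s).

          For a single ideal capillary, volumetric flow Q = pi r^4 dP / (8 mu L);
          dividing by the lumen area pi r^2 gives a conductivity of r^2 / (8 mu).
          """
          return radius_m ** 2 / (8.0 * viscosity_pa_s)

      k = lumen_specific_conductivity(11e-6)      # ~11 um tracheid radius (illustrative)
      print(k * 1e6, "m^2 / (MPa s)")             # ~0.015, the threshold quoted above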

  7. Measuring the Quality of Services Provided for Outpatients in Kowsar Clinic in Ardebil City Based on the SERVQUAL Model

    Directory of Open Access Journals (Sweden)

    Hasan Ghobadi

    2014-12-01

    Full Text Available Background & objectives: Today, the concept of quality of services is particularly important in health care, and customer satisfaction can be defined by comparing expectations of services with perceptions of the services provided. The aim of this study was to evaluate the quality of services provided for outpatients in a clinic of Ardebil city based on the SERVQUAL model. Methods: This descriptive study was conducted on 650 patients referred to the outpatient clinic from July to September 2013, using a standardized SERVQUAL questionnaire (1988) with confirmed reliability and validity. The paired t-test and Friedman test were used for analysis of the data in SPSS software. Results: 56.1% of respondents were male and 43.9% were female. The mean age of patients was 33 ± 11.91 years; 68.9% of patients were from Ardabil and 27.3% of them had a bachelor's degree or higher. The results showed that there is a significant difference between perceptions and expectations of the patients for all five dimensions of service quality (tangibility, reliability, assurance, responsiveness, and empathy) in the studied clinic (P < 0.001). The largest and smallest mean gaps were related to empathy and assurance, respectively. Conclusion: Given the observed differences in quality, managers and planners have to evaluate their performance more accurately in order to plan better for future actions. In fact, any effort to reduce the gap between expectation and perception of patients results in greater satisfaction, loyalty and further visits to the organization.
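
    SERVQUAL gap analysis of the kind reported above compares paired perception and expectation scores dimension by dimension; the sketch below computes per-dimension gaps and paired t-tests with scipy. The Likert scores are randomly generated toy data, not the study's responses.

      import numpy as np
      from scipy.stats import ttest_rel

      def servqual_gaps(perceptions, expectations):
          """Per-dimension gap = mean perception - mean expectation, with a paired t-test."""
          results = {}
          for dim in perceptions:
              p, e = np.asarray(perceptions[dim]), np.asarray(expectations[dim])
              res = ttest_rel(p, e)
              results[dim] = {"gap": float(p.mean() - e.mean()),
                              "t": float(res.statistic), "p": float(res.pvalue)}
          return results

      rng = np.random.default_rng(0)
      dims = ["tangibility", "reliability", "assurance", "responsiveness", "empathy"]
      expectations = {d: rng.normal(4.5, 0.4, 650) for d in dims}   # 1-5 Likert scores (toy)
      perceptions  = {d: rng.normal(3.8, 0.6, 650) for d in dims}
      print(servqual_gaps(perceptions, expectations)["empathy"])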

  8. Providing Context for Complexity: Using Infographics and Conceptual Models to Teach Global Change Processes

    Science.gov (United States)

    Bean, J. R.; White, L. D.

    2015-12-01

    Understanding modern and historical global changes requires interdisciplinary knowledge of the physical and life sciences. The Understanding Global Change website from the UC Museum of Paleontology will use a focal infographic that unifies diverse content often taught in separate K-12 science units. This visualization tool provides scientists with a structure for presenting research within the broad context of global change, and supports educators with a framework for teaching and assessing student understanding of complex global change processes. This new approach to teaching the science of global change is currently being piloted and refined based on feedback from educators and scientists in anticipation of a 2016 website launch. Global change concepts are categorized within the infographic as causes of global change (e.g., burning of fossil fuels, volcanism), ongoing Earth system processes (e.g., ocean circulation, the greenhouse effect), and the changes scientists measure in Earth's physical and biological systems (e.g., temperature, extinctions/radiations). The infographic will appear on all website content pages and provides a template for the creation of flowcharts, which are conceptual models that allow teachers and students to visualize the interdependencies and feedbacks among processes in the atmosphere, hydrosphere, biosphere, and geosphere. The development of this resource is timely given that the newly adopted Next Generation Science Standards emphasize cross-cutting concepts, including model building, and Earth system science. Flowchart activities will be available on the website to scaffold inquiry-based lessons, determine student preconceptions, and assess student content knowledge. The infographic has already served as a learning and evaluation tool during professional development workshops at UC Berkeley, Stanford University, and the Smithsonian National Museum of Natural History. At these workshops, scientists and educators used the infographic

  9. MODEL REQUEST FOR PROPOSALS TO PROVIDE ENERGY AND OTHER ATTRIBUTES FROM AN OFFSHORE WIND POWER PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    Jeremy Firestone; Dawn Kurtz Crompton

    2011-10-22

    This document provides a model RFP for new generation. The 'base' RFP is for a single-source offshore wind RFP. Required modifications are noted should a state or utility seek multi-source bids (e.g., all renewables or all sources). The model is premised on proposals meeting threshold requirements (e.g., a MW range of generating capacity and a range in terms of years), RFP issuer preferences (e.g., likelihood of commercial operation by a date certain, price certainty, and reduction in congestion), and evaluation criteria, along with a series of plans (e.g., site, environmental effects, construction, community outreach, interconnection, etc.). The Model RFP places the most weight on project risk (45%), followed by project economics (35%), and environmental and social considerations (20%). However, if a multi-source RFP is put forward, the sponsor would need to either add per-MWh technology-specific, life-cycle climate (CO2), environmental and health impact costs to bid prices under the 'Project Economics' category or it should increase the weight given to the 'Environmental and Social Considerations' category.

  10. Guarana Provides Additional Stimulation over Caffeine Alone in the Planarian Model

    Science.gov (United States)

    Moustakas, Dimitrios; Mezzio, Michael; Rodriguez, Branden R.; Constable, Mic Andre; Mulligan, Margaret E.; Voura, Evelyn B.

    2015-01-01

    The stimulant effect of energy drinks is primarily attributed to the caffeine they contain. Many energy drinks also contain other ingredients that might enhance the tonic effects of these caffeinated beverages. One of these additives is guarana. Guarana is a climbing plant native to the Amazon whose seeds contain approximately four times the amount of caffeine found in coffee beans. The mix of other natural chemicals contained in guarana seeds is thought to heighten the stimulant effects of guarana over caffeine alone. Yet, despite the growing use of guarana as an additive in energy drinks, and a burgeoning market for it as a nutritional supplement, the science examining guarana and how it affects other dietary ingredients is lacking. To appreciate the stimulant effects of guarana and other natural products, a straightforward model to investigate their physiological properties is needed. The planarian provides such a system. The locomotor activity and convulsive response of planarians with substance exposure has been shown to provide an excellent system to measure the effects of drug stimulation, addiction and withdrawal. To gauge the stimulant effects of guarana we studied how it altered the locomotor activity of the planarian species Dugesia tigrina. We report evidence that guarana seeds provide additional stimulation over caffeine alone, and document the changes to this stimulation in the context of both caffeine and glucose. PMID:25880065

  11. Guarana provides additional stimulation over caffeine alone in the planarian model.

    Directory of Open Access Journals (Sweden)

    Dimitrios Moustakas

    Full Text Available The stimulant effect of energy drinks is primarily attributed to the caffeine they contain. Many energy drinks also contain other ingredients that might enhance the tonic effects of these caffeinated beverages. One of these additives is guarana. Guarana is a climbing plant native to the Amazon whose seeds contain approximately four times the amount of caffeine found in coffee beans. The mix of other natural chemicals contained in guarana seeds is thought to heighten the stimulant effects of guarana over caffeine alone. Yet, despite the growing use of guarana as an additive in energy drinks, and a burgeoning market for it as a nutritional supplement, the science examining guarana and how it affects other dietary ingredients is lacking. To appreciate the stimulant effects of guarana and other natural products, a straightforward model to investigate their physiological properties is needed. The planarian provides such a system. The locomotor activity and convulsive response of planarians with substance exposure has been shown to provide an excellent system to measure the effects of drug stimulation, addiction and withdrawal. To gauge the stimulant effects of guarana we studied how it altered the locomotor activity of the planarian species Dugesia tigrina. We report evidence that guarana seeds provide additional stimulation over caffeine alone, and document the changes to this stimulation in the context of both caffeine and glucose.

  12. The Nordic welfare model providing energy transition? A political geography approach to the EU RES directive

    International Nuclear Information System (INIS)

    Westholm, Erik; Beland Lindahl, Karin

    2012-01-01

    The EU Renewable Energy Strategy (RES) Directive requires that each member state obtain 20% of its energy supply from renewable sources by 2020. If fully implemented, this implies major changes in institutions, infrastructure, land use, and natural resource flows. This study applies a political geography perspective to explore the transition to renewable energy use in the heating and cooling segment of the Swedish energy system, 1980–2010. The Nordic welfare model, which developed mainly after the Second World War, required relatively uniform, standardized local and regional authorities functioning as implementation agents for national politics. Since 1980, the welfare orientation has gradually been complemented by competition politics promoting technological change, innovation, and entrepreneurship. This combination of welfare state organization and competition politics provided the dynamics necessary for energy transition, which occurred in a semi-public sphere of actors at various geographical scales. However, our analysis suggests that this was partly an unintended policy outcome, since it was based on a welfare model with no significant energy aims. Our case study suggests that state organization plays a significant role, and that the EU RES Directive implementation will be uneven across Europe, reflecting various welfare models with different institutional pre-requisites for energy transition. - Highlights: ► We explore the energy transition in the heating/cooling sector in Sweden 1980–2000. ► The role of the state is studied from a political geography perspective. ► The changing welfare model offered the necessary institutional framework. ► Institutional arrangements stand out as central to explain the relative success. ► The use of renewables in EU member states will continue to vary significantly.

  13. Current predictive models do not accurately differentiate between single and multi gland disease in primary hyperparathyroidism: a retrospective cohort study of two endocrine surgery units.

    Science.gov (United States)

    Edafe, O; Collins, E E; Ubhi, C S; Balasubramanian, S P

    2018-02-01

    Background Minimally invasive parathyroidectomy (MIP) for primary hyperparathyroidism is dependent upon accurate prediction of single-gland disease on the basis of preoperative imaging and biochemistry. The aims of this study were to validate currently available predictive models of single-gland disease in two UK cohorts and to determine if these models can facilitate MIP. Methods This is a retrospective cohort study of 624 patients who underwent parathyroidectomy for primary hyperparathyroidism in two centres between July 2008 and December 2013. Two recognised models, CaPTHUS (preoperative calcium, parathyroid hormone, ultrasound, sestamibi, concordance imaging) and the Wisconsin Index (preoperative calcium, parathyroid hormone), were validated for their ability to predict single-gland disease. Results The rates of single- and multi-gland disease were 491 (79.6%) and 126 (20.2%), respectively. Cure rates in centres 1 and 2 were 93.2% and 93.8%, respectively (P = 0.789). The positive predictive value (PPV) of a CaPTHUS score ≥ 3 in predicting single-gland disease was 84.6%, compared with 100% in the original report. CaPTHUS scores ≥ 4 and 5 had PPVs of 85.1% and 87.1%, respectively. There were no differences in Wisconsin Index (WIN) between patients with single- and multi-gland disease (P = 0.573). A WIN greater than 1600 and weight of excised gland greater than 1 g had a positive predictive value of 86.7% for single-gland disease. Conclusions The use of CaPTHUS and WIN indices without intraoperative adjuncts (such as IOPTH) had the potential to result in failure to cure in up to 15% (CaPTHUS) and 13% (WIN) of patients treated by MIP targeting a single enlarged gland.
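
    The positive predictive values reported above follow from simple confusion-table counts; the sketch below shows the calculation. The counts are illustrative (chosen only to reproduce a PPV near 84.6%), not the study's raw numbers.

      def positive_predictive_value(true_positives, false_positives):
          """PPV = TP / (TP + FP): how often a 'single-gland' prediction is correct."""
          return true_positives / (true_positives + false_positives)

      # Illustrative counts only: of 200 patients meeting the CaPTHUS cut-off,
      # 169 truly had single-gland disease.
      print(positive_predictive_value(169, 31))   # -> 0.845, i.e. ~84.6% as reported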

  14. Ecosystem Services Provided by Agricultural Land as Modeled by Broad Scale Geospatial Analysis

    Science.gov (United States)

    Kokkinidis, Ioannis

    Agricultural ecosystems provide multiple services including food and fiber provision, nutrient cycling, soil retention and water regulation. Objectives of the study were to identify and quantify a selection of ecosystem services provided by agricultural land, using existing geospatial tools and preferably free and open source data, such as the Virginia Land Use Evaluation System (VALUES), the North Carolina Realistic Yield Expectations (RYE) database, and the land cover datasets NLCD and CDL. Furthermore I sought to model tradeoffs between provisioning and other services. First I assessed the accuracy of agricultural land in NLCD and CDL over a four county area in eastern Virginia using cadastral parcels. I uncovered issues concerning the definition of agricultural land. The area and location of agriculture saw little change in the 19 years studied. Furthermore all datasets have significant errors of omission (11.3 to 95.1%) and commission (0 to 71.3%). Location of agriculture was used with spatial crop yield databases I created and combined with models I adapted to calculate baseline values for plant biomass, nutrient composition and requirements, land suitability for and potential production of biofuels and the economic impact of agriculture for the four counties. The study area was then broadened to cover 97 counties in eastern Virginia and North Carolina, investigating the potential for increased regional grain production through intensification and extensification of agriculture. Predicted yield from geospatial crop models was compared with produced yield from the NASS Survey of Agriculture. Area of most crops in CDL was similar to that in the Survey of Agriculture, but a yield gap is present for most years, partially due to weather, thus indicating potential for yield increase through intensification. Using simple criteria I quantified the potential to extend agriculture in high yield land in other uses and modeled the changes in erosion and runoff should

  15. An integrated decision making model for the selection of sustainable forward and reverse logistic providers

    DEFF Research Database (Denmark)

    Govindan, Kannan; Agarwal, Vernika; Darbari, Jyoti Dhingra

    2017-01-01

    Due to rising concerns for environmental sustainability, the Indian electronic industry faces immense pressure to incorporate effective sustainable practices into the supply chain (SC) planning. Consequently, manufacturing enterprises (ME) are exploring the option of re-examining their SC...... hierarchy process and the technique for order performance by similarity to ideal solution. The integrated logistics network is modeled as a bi-objective mixed-integer programming problem with the objective of maximizing the profit of the manufacturer and maximizing the sustainable score of the selected...... improve the sustainable performance value of the SC network and secure reasonable profits. The managerial implications drawn from the result analysis provide a sustainable framework to the ME for enhancing its corporate image....
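
    The record above combines the analytic hierarchy process with the technique for order preference by similarity to ideal solution (TOPSIS) to rank logistics providers; below is a generic TOPSIS sketch in Python. The criteria, weights and scores are illustrative assumptions, not values from the study.

      import numpy as np

      def topsis(matrix, weights, benefit):
          """Rank alternatives (rows) against criteria (columns) by closeness to the ideal."""
          m = np.asarray(matrix, dtype=float)
          norm = m / np.sqrt((m ** 2).sum(axis=0))          # vector-normalise each criterion
          v = norm * weights                                # apply criterion weights
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
          d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
          return d_neg / (d_pos + d_neg)                    # closeness coefficient, higher is better

      # Three candidate providers scored on cost (lower is better), service and sustainability.
      scores = [[120, 7.5, 0.62], [135, 8.1, 0.70], [110, 6.9, 0.55]]
      closeness = topsis(scores, weights=[0.4, 0.3, 0.3], benefit=[False, True, True])
      print(closeness)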

  16. Lipopolysaccharide from Burkholderia thailandensis E264 provides protection in a murine model of melioidosis.

    Science.gov (United States)

    Ngugi, Sarah A; Ventura, Valeria V; Qazi, Omar; Harding, Sarah V; Kitto, G Barrie; Estes, D Mark; Dell, Anne; Titball, Richard W; Atkins, Timothy P; Brown, Katherine A; Hitchen, Paul G; Prior, Joann L

    2010-11-03

    Burkholderia thailandensis is a less virulent close relative of Burkholderia pseudomallei, a CDC category B biothreat agent. We have previously shown that lipopolysaccharide (LPS) extracted from B. pseudomallei can provide protection against a lethal challenge of B. pseudomallei in a mouse model of melioidosis. Sugar analysis on LPS from B. thailandensis strain E264 confirmed that this polysaccharide has a similar structure to LPS from B. pseudomallei. Mice were immunised with LPS from B. thailandensis or B. pseudomallei and challenged with a lethal dose of B. pseudomallei strain K96243. Similar protection levels were observed when either LPS was used as the immunogen. This data suggests that B. thailandensis LPS has the potential to be used as part of a subunit based vaccine against pathogenic B. pseudomallei. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  17. Collaborative Care: a Pilot Study of a Child Psychiatry Outpatient Consultation Model for Primary Care Providers.

    Science.gov (United States)

    Fallucco, Elise M; Blackmore, Emma Robertson; Bejarano, Carolina M; Kozikowksi, Chelsea B; Cuffe, Steven; Landy, Robin; Glowinski, Anne

    2017-07-01

    A Child Psychiatry Consultation Model (CPCM) offering primary care providers (PCPs) expedited access to outpatient child psychiatric consultation regarding management in primary care would allow more children to access mental health services. Yet, little is known about outpatient CPCMs. This pilot study describes an outpatient CPCM for 22 PCPs in a large Northeast Florida county. PCPs referred 81 patients, of which 60 were appropriate for collaborative management and 49 were subsequently seen for outpatient psychiatric consultation. The most common psychiatric diagnoses following consultation were anxiety (57%), ADHD (53%), and depression (39%). Over half (57%) of the patients seen for consultation were discharged to their PCP with appropriate treatment recommendations, and only a small minority (10%) of patients required long-term care by a psychiatrist. This CPCM helped child psychiatrists collaborate with PCPs to deliver mental health services for youth. The CPCM should be considered for adaptation and dissemination.

  18. Accurate x-ray spectroscopy

    International Nuclear Information System (INIS)

    Deslattes, R.D.

    1987-01-01

    Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Ions having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in measurement of these spectra. This report aims to provide background about spectroscopic limitations and discuss how accelerator operations may be selected to permit attaining intrinsically limited data.

  19. The fornix provides multiple biomarkers to characterize circuit disruption in a mouse model of Alzheimer's disease.

    Science.gov (United States)

    Badea, Alexandra; Kane, Lauren; Anderson, Robert J; Qi, Yi; Foster, Mark; Cofer, Gary P; Medvitz, Neil; Buckley, Anne F; Badea, Andreas K; Wetsel, William C; Colton, Carol A

    2016-11-15

    Multivariate biomarkers are needed for detecting Alzheimer's disease (AD), understanding its etiology, and quantifying the effect of therapies. Mouse models provide opportunities to study characteristics of AD in well-controlled environments that can help facilitate development of early interventions. The CVN-AD mouse model replicates multiple AD hallmark pathologies, and we identified multivariate biomarkers characterizing a brain circuit disruption predictive of cognitive decline. In vivo and ex vivo magnetic resonance imaging (MRI) revealed that CVN-AD mice replicate the hippocampal atrophy (6%) characteristic of humans with AD, and also present changes in subcortical areas. The largest effect was in the fornix (23% smaller), which connects the septum, hippocampus, and hypothalamus. In characterizing the fornix with diffusion tensor imaging, fractional anisotropy was most sensitive (20% reduction), followed by radial (15%) and axial diffusivity (2%), in detecting pathological changes. These findings were strengthened by optical microscopy and ultrastructural analyses. Ultrastructural analysis provided estimates of axonal density, diameters, and myelination, through the g-ratio, defined as the ratio between the axonal diameter and the diameter of the axon plus the myelin sheath. The fornix had reduced axonal density (47% fewer), axonal degeneration (13% larger axons), and abnormal myelination (1.5% smaller g-ratios). CD68 staining showed that white matter pathology could be secondary to neuronal degeneration, or due to direct microglial attack. In conclusion, these findings strengthen the hypothesis that the fornix plays a role in AD, and can be used as a disease biomarker and as a target for therapy. Copyright © 2016 Elsevier Inc. All rights reserved.
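
    The g-ratio used above is straightforward to compute from the two diameters; the sketch below does exactly that, with illustrative values rather than measurements from the study.

      def g_ratio(axon_diameter_um, fiber_diameter_um):
          """g-ratio as defined above: axon diameter / (axon + myelin sheath) diameter."""
          return axon_diameter_um / fiber_diameter_um

      print(g_ratio(0.80, 1.25))   # a healthy-looking fibre, g ~ 0.64 (illustrative values)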

  20. A stochastic simulation model for reliable PV system sizing providing for solar radiation fluctuations

    International Nuclear Information System (INIS)

    Kaplani, E.; Kaplanis, S.

    2012-01-01

    Highlights: ► Solar radiation data for European cities follow the Extreme Value or Weibull distribution. ► Simulation model for the sizing of SAPV systems based on energy balance and stochastic analysis. ► Simulation of PV Generator-Loads-Battery Storage System performance for all months. ► Minimum peak power and battery capacity required for reliable SAPV sizing for various European cities. ► Peak power and battery capacity reduced by more than 30% for operation 95% success rate. -- Abstract: The large fluctuations observed in the daily solar radiation profiles strongly affect the reliability of the PV system sizing. Increasing the reliability of the PV system requires higher installed peak power (P_m) and larger battery storage capacity (C_L). This leads to increased costs, and makes PV technology less competitive. This research paper presents a new stochastic simulation model for stand-alone PV systems, developed to determine the minimum installed P_m and C_L for the PV system to be energy independent. The stochastic simulation model developed, makes use of knowledge acquired from an in-depth statistical analysis of the solar radiation data for the site, and simulates the energy delivered, the excess energy burnt, the load profiles and the state of charge of the battery system for the month the sizing is applied, and the PV system performance for the entire year. The simulation model provides the user with values for the autonomy factor d, simulating PV performance in order to determine the minimum P_m and C_L depending on the requirements of the application, i.e. operation with critical or non-critical loads. The model makes use of NASA’s Surface meteorology and Solar Energy database for the years 1990–2004 for various cities in Europe with a different climate. The results obtained with this new methodology indicate a substantial reduction in installed peak power and battery capacity, both for critical and non-critical operation, when compared to
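
    A stripped-down version of the energy-balance simulation described above can be written as a daily loop over radiation draws, battery state of charge and load; the sketch below returns the fraction of days the load is met for a given peak power P_m and capacity C_L. The Weibull draws and all parameter values are illustrative, not the paper's regionalised statistics.

      import numpy as np

      def simulate_sapv(daily_radiation_kwh_m2, p_m_kw, c_l_kwh, daily_load_kwh,
                        performance_ratio=0.75, soc0=1.0):
          """Daily energy balance of a stand-alone PV system.

          Returns the fraction of days on which the load was fully covered.
          """
          soc = soc0 * c_l_kwh
          days_ok = 0
          for h in daily_radiation_kwh_m2:                 # kWh/m^2/day (peak-sun-hours)
              produced = p_m_kw * h * performance_ratio    # kWh produced that day
              soc = min(c_l_kwh, soc + produced)           # charge; excess energy is burnt
              if soc >= daily_load_kwh:
                  soc -= daily_load_kwh
                  days_ok += 1
              else:
                  soc = 0.0                                # load not covered this day
          return days_ok / len(daily_radiation_kwh_m2)

      rng = np.random.default_rng(1)
      radiation = rng.weibull(2.0, 365) * 3.5              # toy Weibull-shaped radiation draws
      print(simulate_sapv(radiation, p_m_kw=1.2, c_l_kwh=6.0, daily_load_kwh=3.0))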

  1. Cameroon mid-level providers offer a promising public health dentistry model

    Directory of Open Access Journals (Sweden)

    Achembong Leo

    2012-11-01

    Full Text Available Background Oral health services are inadequate and unevenly distributed in many developing countries, particularly those in sub-Saharan Africa. Rural areas in these countries and poorer sections of the population in urban areas often do not have access to oral health services mainly because of a significant shortage of dentists and the high costs of care. We reviewed Cameroon’s experience with deploying a mid-level cadre of oral health professionals and the feasibility of establishing a more formal and predictable role for these health workers. We anticipate that a task-shifting approach in the provision of dental care will significantly improve the uneven distribution of oral health services particularly in the rural areas of Cameroon, which is currently served by only 3% of the total number of dentists. Methods The setting of this study was the Cameroon Baptist Convention Health Board (BCHB), which has four dentists and 42 mid-level providers. De-identified data were collected manually from the registries of 10 Baptist Convention clinics located in six of Cameroon’s 10 regions and then entered into an Excel format before being imported into STATA. A retrospective abstraction of all patient-visit entries was performed, starting in October 2010 and going back in time until 1500 visits had been extracted from each clinic. Results This study showed that mid-level providers in BCHB clinics are offering a full scope of dental work across the 10 clinics, with the exception of treatment for major facial injuries. Mid-level providers alone performed 93.5% of all extractions, 87.5% of all fillings, 96.5% of all root canals, 97.5% of all cleanings, and 98.1% of all dentures. The dentists also typically played a teaching role in training the mid-level providers. Conclusions The Ministry of Health in Cameroon has an opportunity to learn from the BCHB model to expand access to oral health care across the country. This study shows the benefits of using a simple, workable, low

  2. Immunization of stromal cell targeting fibroblast activation protein providing immunotherapy to breast cancer mouse model.

    Science.gov (United States)

    Meng, Mingyao; Wang, Wenju; Yan, Jun; Tan, Jing; Liao, Liwei; Shi, Jianlin; Wei, Chuanyu; Xie, Yanhua; Jin, Xingfang; Yang, Li; Jin, Qing; Zhu, Huirong; Tan, Weiwei; Yang, Fang; Hou, Zongliu

    2016-08-01

    Unlike heterogeneous tumor cells, cancer-associated fibroblasts (CAF) are genetically more stable which serve as a reliable target for tumor immunotherapy. Fibroblast activation protein (FAP) which is restrictively expressed in tumor cells and CAF in vivo and plays a prominent role in tumor initiation, progression, and metastasis can function as a tumor rejection antigen. In the current study, we have constructed artificial FAP(+) stromal cells which mimicked the FAP(+) CAF in vivo. We immunized a breast cancer mouse model with FAP(+) stromal cells to perform immunotherapy against FAP(+) cells in the tumor microenvironment. By forced expression of FAP, we have obtained FAP(+) stromal cells whose phenotype was CD11b(+)/CD34(+)/Sca-1(+)/FSP-1(+)/MHC class I(+). Interestingly, proliferation capacity of the fibroblasts was significantly enhanced by FAP. In the breast cancer-bearing mouse model, vaccination with FAP(+) stromal cells has significantly inhibited the growth of allograft tumor and reduced lung metastasis indeed. Depletion of T cell assays has suggested that both CD4(+) and CD8(+) T cells were involved in the tumor cytotoxic immune response. Furthermore, tumor tissue from FAP-immunized mice revealed that targeting FAP(+) CAF has induced apoptosis and decreased collagen type I and CD31 expression in the tumor microenvironment. These results implicated that immunization with FAP(+) stromal cells led to the disruption of the tumor microenvironment. Our study may provide a novel strategy for immunotherapy of a broad range of cancer.

  3. OpenClimateGIS - A Web Service Providing Climate Model Data in Commonly Used Geospatial Formats

    Science.gov (United States)

    Erickson, T. A.; Koziol, B. W.; Rood, R. B.

    2011-12-01

    The goal of the OpenClimateGIS project is to make climate model datasets readily available in commonly used, modern geospatial formats used by GIS software, browser-based mapping tools, and virtual globes. The climate modeling community typically stores climate data in multidimensional gridded formats capable of efficiently storing large volumes of data (such as netCDF, grib) while the geospatial community typically uses flexible vector and raster formats that are capable of storing small volumes of data (relative to the multidimensional gridded formats). OpenClimateGIS seeks to address this difference in data formats by clipping climate data to user-specified vector geometries (i.e. areas of interest) and translating the gridded data on-the-fly into multiple vector formats. The OpenClimateGIS system does not store climate data archives locally, but rather works in conjunction with external climate archives that expose climate data via the OPeNDAP protocol. OpenClimateGIS provides a RESTful API web service for accessing climate data resources via HTTP, allowing a wide range of applications to access the climate data. The OpenClimateGIS system has been developed using open source development practices and the source code is publicly available. The project integrates libraries from several other open source projects (including Django, PostGIS, numpy, Shapely, and netcdf4-python). OpenClimateGIS development is supported by a grant from NOAA's Climate Program Office.
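
    As a simplified illustration of the clipping step described above (reducing a gridded field to an area of interest before format translation), the sketch below masks a lat/lon grid to a rectangular bounding box with numpy. It is an illustrative stand-in, not OpenClimateGIS code; the polygon-based clipping in the real system is more general.

      import numpy as np

      def clip_to_bbox(data, lats, lons, lat_min, lat_max, lon_min, lon_max):
          """Subset a (lat, lon) gridded field to a rectangular area of interest."""
          lat_idx = np.where((lats >= lat_min) & (lats <= lat_max))[0]
          lon_idx = np.where((lons >= lon_min) & (lons <= lon_max))[0]
          return data[np.ix_(lat_idx, lon_idx)], lats[lat_idx], lons[lon_idx]

      lats = np.arange(-90, 90.1, 2.5)
      lons = np.arange(0, 360, 2.5)
      field = np.random.default_rng(0).normal(size=(lats.size, lons.size))   # toy climate field
      sub, sub_lats, sub_lons = clip_to_bbox(field, lats, lons, 35, 60, 350, 360)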

  4. Modelling Water Uptake Provides a New Perspective on Grass and Tree Coexistence.

    Directory of Open Access Journals (Sweden)

    Michael G Mazzacavallo

    Full Text Available Root biomass distributions have long been used to infer patterns of resource uptake. These patterns are used to understand plant growth, plant coexistence and water budgets. Root biomass, however, may be a poor indicator of resource uptake because large roots typically do not absorb water, fine roots do not absorb water from dry soils and roots of different species can be difficult to differentiate. In a sub-tropical savanna, Kruger Park, South Africa, we used a hydrologic tracer experiment to describe the abundance of active grass and tree roots across the soil profile. We then used this tracer data to parameterize a water movement model (Hydrus 1D). The model accounted for water availability and estimated grass and tree water uptake by depth over a growing season. Most root biomass was found in shallow soils (0-20 cm) and tracer data revealed that, within these shallow depths, half of active grass roots were in the top 12 cm while half of active tree roots were in the top 21 cm. However, because shallow soils provided roots with less water than deep soils (20-90 cm), the water movement model indicated that grass and tree water uptake was twice as deep as would be predicted from root biomass or tracer data alone: half of grass and tree water uptake occurred in the top 23 and 43 cm, respectively. Niche partitioning was also greater when estimated from water uptake rather than tracer uptake. Contrary to long-standing assumptions, shallow grass root distributions absorbed 32% less water than slightly deeper tree root distributions when grasses and trees were assumed to have equal water demands. Quantifying water uptake revealed deeper soil water uptake, greater niche partitioning and greater benefits of deep roots than would be estimated from root biomass or tracer uptake data alone.

  5. How Accurately Do Maize Crop Models Simulate the Interactions of Atmospheric CO2 Concentration Levels With Limited Water Supply on Water Use and Yield?

    Science.gov (United States)

    Durand, Jean-Louis; Delusca, Kenel; Boote, Ken; Lizaso, Jon; Manderscheid, Remy; Weigel, Hans Johachim; Ruane, Alexander Clark; Rosenzweig, Cynthia E.; Jones, Jim; Ahuja, Laj

    2017-01-01

    This study assesses the ability of 21 crop models to capture the impact of elevated CO2 concentration [CO2] on maize yield and water use as measured in a 2-year Free Air Carbon dioxide Enrichment experiment conducted at the Thunen Institute in Braunschweig, Germany (Manderscheid et al. 2014). Data for ambient [CO2] and irrigated treatments were provided to the 21 models for calibrating plant traits, including weather, soil and management data as well as yield, grain number, above ground biomass, leaf area index, nitrogen concentration in biomass and grain, water use and soil water content. Models differed in their representation of carbon assimilation and evapotranspiration processes. The models reproduced the absence of yield response to elevated [CO2] under well-watered conditions, as well as the impact of water deficit at ambient [CO2], with 50 percent of models within a range of ±1 Mg ha^-1 around the mean. The bias of the median of the 21 models was less than 1 Mg ha^-1. However under water deficit in one of the two years, the models captured only 30 percent of the exceptionally high [CO2] enhancement on yield observed. Furthermore the ensemble of models was unable to simulate the very low soil water content at anthesis and the increase of soil water and grain number brought about by the elevated [CO2] under dry conditions. Overall, we found models with explicit stomatal control on transpiration tended to perform better. Our results highlight the need for model improvement with respect to simulating transpirational water use and its impact on water status during the kernel-set phase.

  6. Towards a conceptual model of online peer feedback: What about the provider?

    OpenAIRE

    Van Popta, Esther; Kral, Marijke; Camp, Gino; Martens, Rob; Simons, P.R.

    2018-01-01

    This paper reviews studies of peer feedback from the novel perspective of the providers of that feedback. The possible learning benefits of providing peer feedback in online learning have not been extensively studied. The goal of this study was therefore to explore the process of providing online peer feedback as a learning activity for the provider. We concluded that (1) providing online peer feedback has several potential learning benefits for the provider; (2) when providing online peer fe...

  7. Using a Time-Driven Activity-Based Costing Model To Determine the Actual Cost of Services Provided by a Transgenic Core.

    Science.gov (United States)

    Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J

    2018-03-01

    Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
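
    The two parameters of the time-driven ABC model described above (time per activity and unit labour cost) translate directly into a service cost and a capacity check; the sketch below uses the 10,645 and 8,400 min/wk figures from the abstract, while the per-minute labour cost and the 90-minute service time are illustrative assumptions.

      def service_cost(minutes_per_service, cost_per_minute):
          """Time-driven ABC: cost of one service = time required x unit cost of labour."""
          return minutes_per_service * cost_per_minute

      def labour_capacity_utilisation(minutes_supplied_per_week, practical_capacity_per_week):
          """>1.0 means the team is overloaded, as found for the core described above."""
          return minutes_supplied_per_week / practical_capacity_per_week

      cost_per_minute = 1.10                          # assumed fully-loaded labour cost, USD/min
      print(service_cost(90, cost_per_minute))        # e.g. an assumed 90-minute embryo transfer
      print(labour_capacity_utilisation(10645, 8400)) # ~1.27 using the figures quoted above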

  8. TU-EF-204-01: Accurate Prediction of CT Tube Current Modulation: Estimating Tube Current Modulation Schemes for Voxelized Patient Models Used in Monte Carlo Simulations

    Energy Technology Data Exchange (ETDEWEB)

    McMillan, K; Bostani, M; McNitt-Gray, M [UCLA School of Medicine, Los Angeles, CA (United States); McCollough, C [Mayo Clinic, Rochester, MN (United States)

    2015-06-15

    Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not
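
    The conversion of topogram attenuation into a tube-current profile is vendor-specific and not described in the abstract; purely as an illustration of the general idea, the sketch below maps a per-position attenuation profile to a clipped tube-current value with a tunable modulation strength. All names and parameter values are assumptions, not the methods validated in this work.

      import numpy as np

      def estimate_tcm(attenuation, reference_attenuation, reference_ma,
                       strength=0.5, ma_min=20.0, ma_max=600.0):
          """Map per-position attenuation to tube current (generic, not the vendor algorithm).

          strength=1 tracks attenuation fully; strength=0 gives a fixed tube current.
          """
          ma = reference_ma * (np.asarray(attenuation) / reference_attenuation) ** strength
          return np.clip(ma, ma_min, ma_max)

      # Toy z-axis attenuation profile from a simulated topogram (arbitrary units).
      profile = np.concatenate([np.linspace(5, 9, 50), np.linspace(9, 6, 50)])
      tcm = estimate_tcm(profile, reference_attenuation=7.0, reference_ma=200.0)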

  9. A simple and accurate rule-based modeling framework for simulation of autocrine/paracrine stimulation of glioblastoma cell motility and proliferation by L1CAM in 2-D culture.

    Science.gov (United States)

    Caccavale, Justin; Fiumara, David; Stapf, Michael; Sweitzer, Liedeke; Anderson, Hannah J; Gorky, Jonathan; Dhurjati, Prasad; Galileo, Deni S

    2017-12-11

    Glioblastoma multiforme (GBM) is a devastating brain cancer for which there is no known cure. Its malignancy is due to rapid cell division along with high motility and invasiveness of cells into the brain tissue. Simple 2-dimensional laboratory assays (e.g., a scratch assay) commonly are used to measure the effects of various experimental perturbations, such as treatment with chemical inhibitors. Several mathematical models have been developed to aid the understanding of the motile behavior and proliferation of GBM cells. However, many are mathematically complicated, look at multiple interdependent phenomena, and/or use modeling software not freely available to the research community. These attributes make the adoption of models and simulations of even simple 2-dimensional cell behavior an uncommon practice by cancer cell biologists. Herein, we developed an accurate, yet simple, rule-based modeling framework to describe the in vitro behavior of GBM cells that are stimulated by the L1CAM protein using freely available NetLogo software. In our model L1CAM is released by cells to act through two cell surface receptors and a point of signaling convergence to increase cell motility and proliferation. A simple graphical interface is provided so that changes can be made easily to several parameters controlling cell behavior, and behavior of the cells is viewed both pictorially and with dedicated graphs. We fully describe the hierarchical rule-based modeling framework, show simulation results under several settings, describe the accuracy compared to experimental data, and discuss the potential usefulness for predicting future experimental outcomes and for use as a teaching tool for cell biology students. It is concluded that this simple modeling framework and its simulations accurately reflect much of the GBM cell motility behavior observed experimentally in vitro in the laboratory. Our framework can be modified easily to suit the needs of investigators interested in other
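
    The original framework is written in NetLogo; the Python sketch below restates the same kind of rule set (random-walk motility plus stochastic division, both scaled by an L1CAM stimulation level) in a few lines. The gains and probabilities are made-up values, not the published parameters.

      import math
      import random

      class Cell:
          def __init__(self, x, y):
              self.x, self.y = x, y

      def step(cells, l1cam_level, base_speed=1.0, base_div_prob=0.01,
               motility_gain=0.5, division_gain=0.5):
          """One time step: every cell takes a random-walk step and may divide.

          l1cam_level (0..1) scales both motility and proliferation, mimicking the
          autocrine/paracrine L1CAM stimulation described above (gains are assumed).
          """
          speed = base_speed * (1.0 + motility_gain * l1cam_level)
          div_prob = base_div_prob * (1.0 + division_gain * l1cam_level)
          newborn = []
          for c in cells:
              angle = random.uniform(0.0, 2.0 * math.pi)
              c.x += speed * math.cos(angle)
              c.y += speed * math.sin(angle)
              if random.random() < div_prob:
                  newborn.append(Cell(c.x, c.y))   # daughter cell placed at the parent
          cells.extend(newborn)

      cells = [Cell(0.0, 0.0) for _ in range(100)]
      for _ in range(200):
          step(cells, l1cam_level=0.8)
      print(len(cells))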

  10. Estimated Nutritive Value of Low-Price Model Lunch Sets Provided to Garment Workers in Cambodia.

    Science.gov (United States)

    Makurat, Jan; Pillai, Aarati; Wieringa, Frank T; Chamnan, Chhoun; Krawinkel, Michael B

    2017-07-21

    The establishment of staff canteens is expected to improve the nutritional situation of Cambodian garment workers. The objective of this study is to assess the nutritive value of low-price model lunch sets provided at a garment factory in Phnom Penh, Cambodia. Exemplary lunch sets were served to female workers through a temporary canteen at a garment factory in Phnom Penh. Dish samples were collected repeatedly to examine mean serving sizes of individual ingredients. Food composition tables and NutriSurvey software were used to assess mean amounts and contributions to recommended dietary allowances (RDAs) or adequate intake of energy, macronutrients, dietary fiber, vitamin C (VitC), iron, vitamin A (VitA), folate and vitamin B12 (VitB12). On average, lunch sets provided roughly one third of RDA or adequate intake of energy, carbohydrates, fat and dietary fiber. Contribution to RDA of protein was high (46% RDA). The sets contained a high mean share of VitC (159% RDA), VitA (66% RDA), and folate (44% RDA), but were low in VitB12 (29% RDA) and iron (20% RDA). Overall, lunches satisfied recommendations of caloric content and macronutrient composition. Sets on average contained a beneficial amount of VitC, VitA and folate. Adjustments are needed for a higher iron content. Alternative iron-rich foods are expected to be better suited, compared to increasing portions of costly meat/fish components. Lunch provision at Cambodian garment factories holds the potential to improve food security of workers, approximately at costs of <1 USD/person/day at large scale. Data on quantitative total dietary intake as well as physical activity among workers are needed to further optimize the concept of staff canteens.

  11. Estimated Nutritive Value of Low-Price Model Lunch Sets Provided to Garment Workers in Cambodia

    Directory of Open Access Journals (Sweden)

    Jan Makurat

    2017-07-01

    Background: The establishment of staff canteens is expected to improve the nutritional situation of Cambodian garment workers. The objective of this study is to assess the nutritive value of low-price model lunch sets provided at a garment factory in Phnom Penh, Cambodia. Methods: Exemplary lunch sets were served to female workers through a temporary canteen at a garment factory in Phnom Penh. Dish samples were collected repeatedly to examine mean serving sizes of individual ingredients. Food composition tables and NutriSurvey software were used to assess mean amounts and contributions to recommended dietary allowances (RDAs) or adequate intake of energy, macronutrients, dietary fiber, vitamin C (VitC), iron, vitamin A (VitA), folate and vitamin B12 (VitB12). Results: On average, lunch sets provided roughly one third of RDA or adequate intake of energy, carbohydrates, fat and dietary fiber. Contribution to RDA of protein was high (46% RDA). The sets contained a high mean share of VitC (159% RDA), VitA (66% RDA), and folate (44% RDA), but were low in VitB12 (29% RDA) and iron (20% RDA). Conclusions: Overall, lunches satisfied recommendations of caloric content and macronutrient composition. Sets on average contained a beneficial amount of VitC, VitA and folate. Adjustments are needed for a higher iron content. Alternative iron-rich foods are expected to be better suited, compared to increasing portions of costly meat/fish components. Lunch provision at Cambodian garment factories holds the potential to improve food security of workers, approximately at costs of <1 USD/person/day at large scale. Data on quantitative total dietary intake as well as physical activity among workers are needed to further optimize the concept of staff canteens.

  12. A New Strategy for Accurately Predicting I-V Electrical Characteristics of PV Modules Using a Nonlinear Five-Point Model

    Directory of Open Access Journals (Sweden)

    Sakaros Bogning Dongue

    2013-01-01

    This paper presents the modelling of the electrical I-V response of illuminated crystalline photovoltaic modules. As an alternative to the linear five-parameter model, our strategy exploits a nonlinear analytical five-point model to take into account the nonlinear variation of current with solar irradiance and of voltage with cell temperature. With this approach we predict with great accuracy the I-V characteristics of the monocrystalline Shell SP75 and polycrystalline GESOLAR GE-P70 photovoltaic modules. The close agreement between our calculated results and the experimental data provided by the module manufacturers demonstrates the benefit of accounting for the nonlinear effect of operating conditions on the I-V characteristics of photovoltaic modules.
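
    For orientation, the five-parameter single-diode description that this literature commonly uses as a reference point expresses the module current implicitly as I = Iph - I0[exp((V + I*Rs)/(n*Ns*Vt)) - 1] - (V + I*Rs)/Rsh. The sketch below solves that implicit equation by damped fixed-point iteration; the parameter values are generic placeholders rather than fitted values for the Shell SP75 or GESOLAR GE-P70 modules, and the nonlinear irradiance and temperature corrections proposed in the paper are not reproduced.

```python
import math

def single_diode_current(V, Iph=5.0, I0=1e-7, Rs=0.3, Rsh=300.0, n=1.3, Ns=36, T=298.15):
    """Solve I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    for the module current by damped fixed-point iteration (placeholder parameters)."""
    Vt = 1.380649e-23 * T / 1.602176634e-19        # thermal voltage kT/q (volts)
    I = Iph                                        # start from the photocurrent
    for _ in range(200):
        I_new = (Iph - I0 * (math.exp((V + I * Rs) / (n * Ns * Vt)) - 1.0)
                     - (V + I * Rs) / Rsh)
        I = 0.5 * I + 0.5 * I_new                  # damping keeps the iteration stable
    return I

# Sweep part of the I-V curve from short circuit towards open circuit
for V in (0.0, 5.0, 10.0, 15.0, 18.0, 20.0, 21.0):
    print(f"V = {V:5.1f} V  ->  I = {single_diode_current(V):6.3f} A")
```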

  13. An integrated Biophysical CGE model to provide Sustainable Development Goal insights

    Science.gov (United States)

    Sanchez, Marko; Cicowiez, Martin; Howells, Mark; Zepeda, Eduardo

    2016-04-01

    Future projected changes in the energy system will inevitably result in changes to the level of appropriation of environmental resources, particularly land and water, and this will have wider implications for environmental sustainability, and may affect other sectors of the economy. An integrated climate, land, energy and water (CLEW) system will provide useful insights, particularly with regard to the environmental sustainability. However, it will require adequate integration with other tools to detect economic impacts and broaden the scope for policy analysis. A computable general equilibrium (CGE) model is a well suited tool to channel impacts, as detected in a CLEW analysis, onto all sectors of the economy, and evaluate trade-offs and synergies, including those of possible policy responses. This paper will show an application of such integration in a single-country CGE model with the following key characteristics. Climate is partly exogenous (as proxied by temperature and rainfall) and partly endogenous (as proxied by emissions generated by different sectors) and has an impact on endogenous variables such as land productivity and labor productivity. Land is a factor of production used in agricultural and forestry activities which can be of various types if land use alternatives (e.g., deforestation) are to be considered. Energy is an input to the production process of all economic sectors and a consumption good for households. Because it is possible to allow for substitution among different energy sources (e.g. renewable vs non-renewable) in the generation of electricity, the production process of energy products can consider the use of natural resources such as oil and water. Water, data permitting, can be considered as an input into the production process of agricultural sectors, which is particularly relevant in case of irrigation. It can also be considered as a determinant of total factor productivity in hydro-power generation. The integration of a CLEW

  14. Could implantable cardioverter defibrillators provide a human model supporting the learned helplessness theory of depression?

    Science.gov (United States)

    Goodman, M; Hess, B

    1999-01-01

    Affective symptoms were examined retrospectively in 25 patients following placement of implantable cardioverter defibrillators (ICD) which can produce intermittent shocks without warning in response to cardiac ventricular arrhythmias. The number of ICD random, uncontrollable discharge shocks and pre-ICD history of psychological distress (i.e., depression and/or anxiety) were documented in all patients using a demographics questionnaire and a standardized behavioral/psychological symptoms questionnaire (i.e., Symptom Checklist-90 Revised). ICD patients were dichotomized into two groups: those without a history of psychological distress prior to ICD (n = 18) and those with a history of psychological distress prior to ICD (n = 7). In ICD patients without a prior history, results indicated that quantity of ICD discharge shocks was significantly predictive of current reported depression (r = 0.45, p = 0.03) and current reported anxiety (r = 0.51, p = 0.02). Conversely, in patients with a reported history of psychological distress, there was no significant relationship found between quantity of discharge shocks and current reported depression or anxiety. This study may provide evidence in support of a human model of learned helplessness in that it supports the notion that exposure to an unavoidable and inescapable aversive stimulus was found to be related to patients' reported depression. Further studies may wish to prospectively consider a larger sample as well as a more comprehensive assessment of premorbid psychological symptoms.

  15. Model for a reproducible curriculum infrastructure to provide international nurse anesthesia continuing education.

    Science.gov (United States)

    Collins, Shawn Bryant

    2011-12-01

    There are no set standards for nurse anesthesia education in developing countries, yet one of the keys to the standards in global professional practice is competency assurance for individuals. Nurse anesthetists in developing countries have difficulty obtaining educational materials. These difficulties include, but are not limited to, financial constraints, lack of anesthesia textbooks, and distance from educational sites. There is increasing evidence that the application of knowledge in developing countries is failing. One reason is that many anesthetists in developing countries are trained for considerably less than acceptable time periods and are often supervised by poorly trained practitioners, who then pass on less-than-desirable practice skills, thus exacerbating difficulties. Sustainability of development can come only through anesthetists who are both well trained and able to pass on their training to others. The international nurse anesthesia continuing education project was developed in response to the difficulty that nurse anesthetists in developing countries face in accessing continuing education. The purpose of this project was to develop a nonprofit, volunteer-based model for providing nurse anesthesia continuing education that can be reproduced and used in any developing country.

  16. Spdef null mice lack conjunctival goblet cells and provide a model of dry eye.

    Science.gov (United States)

    Marko, Christina K; Menon, Balaraj B; Chen, Gang; Whitsett, Jeffrey A; Clevers, Hans; Gipson, Ilene K

    2013-07-01

    Goblet cell numbers decrease within the conjunctival epithelium in drying and cicatrizing ocular surface diseases. Factors regulating goblet cell differentiation in conjunctival epithelium are unknown. Recent data indicate that the transcription factor SAM-pointed domain epithelial-specific transcription factor (Spdef) is essential for goblet cell differentiation in tracheobronchial and gastrointestinal epithelium of mice. Using Spdef(-/-) mice, we determined that Spdef is required for conjunctival goblet cell differentiation and that Spdef(-/-) mice, which lack conjunctival goblet cells, have significantly increased corneal surface fluorescein staining and tear volume, a phenotype consistent with dry eye. Microarray analysis of conjunctival epithelium in Spdef(-/-) mice revealed down-regulation of goblet cell-specific genes (Muc5ac, Tff1, Gcnt3). Up-regulated genes included epithelial cell differentiation/keratinization genes (Sprr2h, Tgm1) and proinflammatory genes (Il1-α, Il-1β, Tnf-α), all of which are up-regulated in dry eye. Interestingly, four Wnt pathway genes were down-regulated. SPDEF expression was significantly decreased in the conjunctival epithelium of Sjögren syndrome patients with dry eye and decreased goblet cell mucin expression. These data demonstrate that Spdef is required for conjunctival goblet cell differentiation and down-regulation of SPDEF may play a role in human dry eye with goblet cell loss. Spdef(-/-) mice have an ocular surface phenotype similar to that in moderate dry eye, providing a new, more convenient model for the disease. Copyright © 2013 American Society for Investigative Pathology. Published by Elsevier Inc. All rights reserved.

  17. The EZ diffusion model provides a powerful test of simple empirical effects

    NARCIS (Netherlands)

    van Ravenzwaaij, Don; Donkin, Chris; Vandekerckhove, Joachim

    Over the last four decades, sequential accumulation models for choice response times have spread through cognitive psychology like wildfire. The most popular style of accumulator model is the diffusion model (Ratcliff Psychological Review, 85, 59–108, 1978), which has been shown to account for data

  18. Towards a conceptual model of online peer feedback: What about the provider?

    NARCIS (Netherlands)

    Van Popta, Esther; Kral, Marijke; Camp, Gino; Martens, Rob; Simons, P.R.

    2018-01-01

    This paper reviews studies of peer feedback from the novel perspective of the providers of that feedback. The possible learning benefits of providing peer feedback in online learning have not been extensively studied. The goal of this study was therefore to explore the process of providing online

  19. Providing Agility in C2 Environments Through Networked Information Processing: A Model of Expertise

    Science.gov (United States)

    2014-06-01

    Expertise determines whether an individual is able to correctly identify the solution in various circumstances. A three-parameter logistic (3PL) model is used to relate expertise and problem difficulty to the probability of a correct response, so that accuracy of responses to specific questions can be predicted; in the 3PL model, b denotes item difficulty and a item discrimination.
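
    The three-parameter logistic model mentioned above gives the probability of a correct response as a function of ability θ and the item parameters a (discrimination), b (difficulty), and c (guessing). A minimal sketch with illustrative parameter values:

```python
import math

def p_correct(theta, a=1.0, b=0.0, c=0.2):
    """Three-parameter logistic (3PL) item response model:
    P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Probability of a correct response for abilities around an item of difficulty b = 0
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(p_correct(theta), 3))
```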

  20. Economic model of a cloud provider operating in a federated cloud

    OpenAIRE

    Goiri Presa, Íñigo; Guitart Fernández, Jordi; Torres Viñals, Jordi

    2012-01-01

    Resource provisioning in Cloud providers is a challenge because of the high variability of load over time. On the one hand, the providers can serve most of the requests owning only a restricted amount of resources, but this forces to reject customers during peak hours. On the other hand, valley hours incur in under-utilization of the resources, which forces the providers to increase their prices to be profitable. Federation overcomes these limitations and allows pro...

  1. A Distance Education Model for Training Substance Abuse Treatment Providers in Cognitive-Behavioral Therapy

    Science.gov (United States)

    Watson, Donnie W.; Rawson, Richard R.; Rataemane, Solomon; Shafer, Michael S.; Obert, Jeanne; Bisesi, Lorrie; Tanamly, Susie

    2003-01-01

    This paper presents a rationale for the use of a distance education approach in the clinical training of community substance abuse treatment providers. Developing and testing new approaches to the clinical training and supervision of providers is important in the substance abuse treatment field where new information is always available. A…

  2. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Science.gov (United States)

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework. PMID:28522983
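
    A heavily simplified sketch of the oscillator bank idea: a set of tonotopically tuned Hopf-type nonlinear oscillators is driven by a common audio input, and each oscillator's sustained amplitude is read out as a rough salience trace, so the oscillator matching the stimulus frequency is selectively enhanced. The canonical gradient frequency neural network used in the paper additionally includes oscillator-to-oscillator resonance and lateral inhibition, which are omitted here; all constants are illustrative.

```python
import numpy as np

fs = 4000.0                                      # sample rate in Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)
stimulus = np.sin(2 * np.pi * 220.0 * t)         # a single 220 Hz tone as the input

freqs = np.array([110.0, 220.0, 330.0, 440.0])   # tonotopic tuning of the oscillator bank
z = np.full(freqs.shape, 0.01 + 0.0j)            # complex oscillator states
alpha, beta, gain = -0.1, -1.0, 0.5              # damping, amplitude saturation, input gain (assumed)
salience = np.zeros(freqs.shape)

dt = 1.0 / fs
for x in stimulus:
    # Hopf-type oscillator: dz/dt = z * (alpha + i*2*pi*f + beta*|z|^2) + gain * x.
    # The local term is applied exponentially for numerical stability, then the input is added.
    z = z * np.exp((alpha + 1j * 2 * np.pi * freqs + beta * np.abs(z) ** 2) * dt) + gain * x * dt
    salience += np.abs(z)

for f, s in zip(freqs, salience / len(t)):
    print(f"{f:6.1f} Hz oscillator: mean amplitude {s:.3f}")
```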

  3. 75 FR 2562 - Publication of Model Notices for Health Care Continuation Coverage Provided Pursuant to the...

    Science.gov (United States)

    2010-01-15

    ... DEPARTMENT OF LABOR Employee Benefits Security Administration Publication of Model Notices for... AGENCY: Employee Benefits Security Administration, Department of Labor. ACTION: Notice of the..., contact the Department's Employee Benefits Security Administration's Benefits Advisors at 1-866-444-3272...

  4. Ensemble modeling of the Baltic Sea ecosystem to provide scenarios for management.

    Science.gov (United States)

    Meier, H E Markus; Andersson, Helén C; Arheimer, Berit; Donnelly, Chantal; Eilola, Kari; Gustafsson, Bo G; Kotwicki, Lech; Neset, Tina-Simone; Niiranen, Susa; Piwowarczyk, Joanna; Savchuk, Oleg P; Schenk, Frederik; Węsławski, Jan Marcin; Zorita, Eduardo

    2014-02-01

    We present a multi-model ensemble study for the Baltic Sea, and investigate the combined impact of changing climate, external nutrient supply, and fisheries on the marine ecosystem. The applied regional climate system model contains state-of-the-art component models for the atmosphere, sea ice, ocean, land surface, terrestrial and marine biogeochemistry, and marine food-web. Time-dependent scenario simulations for the period 1960-2100 are performed and uncertainties of future projections are estimated. In addition, reconstructions since 1850 are carried out to evaluate the models' sensitivity to external stressors on long time scales. Information from scenario simulations is used to support decision-makers and stakeholders and to raise awareness of climate change, environmental problems, and possible abatement strategies among the general public using geovisualization. It is concluded that the study results are relevant for the Baltic Sea Action Plan of the Helsinki Commission.

  5. A description of model 3B of the multipurpose ventricular actuating system. [providing controlled driving pressures

    Science.gov (United States)

    Webb, J. A., Jr.

    1974-01-01

    The multipurpose ventricular actuating system is a pneumatic signal generating device that provides controlled driving pressures for actuating pulsatile blood pumps. Overall system capabilities, the timing circuitry, and calibration instruction are included.

  6. Petrographic characterization to build an accurate rock model using micro-CT: Case study on low-permeable to tight turbidite sandstone from Eocene Shahejie Formation.

    Science.gov (United States)

    Munawar, Muhammad Jawad; Lin, Chengyan; Cnudde, Veerle; Bultreys, Tom; Dong, Chunmei; Zhang, Xianguo; De Boever, Wesley; Zahid, Muhammad Aleem; Wu, Yuqi

    2018-03-26

    Pore-scale flow simulations depend heavily on the petrographic characterization and modeling of reservoir rocks. Mineral phase segmentation and pore network modeling are crucial stages in micro-CT based rock modeling. The ability of the pore network model (PNM) to predict petrophysical properties relies on image segmentation, image resolution and, most importantly, the nature of the rock (homogeneous, complex or microporous). Pore network modeling has seen extensive research and development during the last decade; however, the application of these models to a variety of naturally heterogeneous reservoir rocks is still a challenge. In this paper, four samples from a low-permeable to tight sandstone reservoir were used to characterize their petrographic and petrophysical properties using high-resolution micro-CT imaging. The phase segmentation analysis from micro-CT images shows that 5-6% microporous regions are present in the kaolinite-rich sandstones (E3 and E4), while 1.7-1.8% are present in the illite-rich sandstones (E1 and E2). The pore system percolates without micropores in E1 and E2, while it does not percolate without micropores in E3 and E4. In E1 and E2, the total MICP porosity is equal to the volume percent of macropores determined from micro-CT images, which indicates that the macropores are well connected and micropores do not play any role in the non-wetting fluid (mercury) displacement process. In the E3 and E4 sandstones, by contrast, the volume percent of micropores is far less (by almost 50%) than the total MICP porosity, which means that almost half of the pore space was not detected by the micro-CT scan. The PNM behaved well in E1 and E2, where better agreement exists between PNM and MICP measurements. E3 and E4, however, exhibit a multiscale pore space that cannot be addressed with a single-scale PNM method; a multiscale approach is needed to characterize such complex rocks. This study provides helpful insights towards the application of existing micro-CT based petrographic characterization methodology
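
    The phase-fraction bookkeeping described above can be illustrated with a short sketch: given a segmented micro-CT volume labeled as solid, resolved macropore, or unresolved microporous region, the volume fractions compared against MICP porosity follow directly from voxel counts. The labels, the synthetic volume, and the assumed internal porosity of the microporous phase below are illustrative, not values from the study.

```python
import numpy as np

# Segmented volume: 0 = solid grain, 1 = resolved macropore, 2 = microporous region
rng = np.random.default_rng(0)
seg = rng.choice([0, 1, 2], size=(100, 100, 100), p=[0.80, 0.14, 0.06])

total = seg.size
macro_frac = np.count_nonzero(seg == 1) / total         # resolved macroporosity
micro_region_frac = np.count_nonzero(seg == 2) / total  # volume fraction of the microporous phase
assumed_microporosity = 0.5                             # porosity inside unresolved regions (assumption)

image_porosity = macro_frac + assumed_microporosity * micro_region_frac
print(f"macropore fraction       : {macro_frac:.3%}")
print(f"microporous region frac. : {micro_region_frac:.3%}")
print(f"estimated total porosity : {image_porosity:.3%}  (compare with MICP porosity)")
```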

  7. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    International Nuclear Information System (INIS)

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-01-01

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to model oxygen consumption. The model was tuned to approximately reproduce the oxygenation status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the lower
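
    Two ingredients of the model lend themselves to a compact illustration: the Michaelis-Menten form used for oxygen consumption and the random sampling of oxygen tension from pre-computed depth oxygenation curves. The sketch below uses 1-D linear interpolation in place of the trilinear interpolation over the three vasculature variables, and all constants are placeholders rather than the fitted values from the paper.

```python
import numpy as np

def consumption(pO2, M0=15.0, k=2.5):
    """Michaelis-Menten oxygen consumption rate: M(pO2) = M0 * pO2 / (pO2 + k)
    (M0 = maximal rate, k = pO2 at half-maximal consumption; placeholder units)."""
    return M0 * pO2 / (pO2 + k)

# Pre-computed depth oxygenation curve (DOC): pO2 (mmHg) versus distance from vessel (um)
depth = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
pO2_curve = np.array([40.0, 22.0, 11.0, 5.0, 2.0, 0.5])

rng = np.random.default_rng(1)
sampled_depths = rng.uniform(0.0, 100.0, size=5)
sampled_pO2 = np.interp(sampled_depths, depth, pO2_curve)  # 1-D stand-in for trilinear sampling

for d, p in zip(sampled_depths, sampled_pO2):
    print(f"depth {d:5.1f} um -> pO2 {p:5.2f} mmHg, consumption {consumption(p):5.2f}")
```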

  8. Can Earth System Model Provide Reasonable Natural Runoff Estimates to Support Water Management Studies?

    Science.gov (United States)

    Kao, S. C.; Shi, X.; Kumar, J.; Ricciuto, D. M.; Mao, J.; Thornton, P. E.

    2017-12-01

    Given concerns over a changing hydrologic regime, there is a crucial need to better understand how water availability may change and influence water management decisions in the projected future climate conditions. Although surface hydrology has long been simulated by the land model within the Earth System modeling (ESM) framework, raw runoff from ESM is generally discarded by water resource managers when conducting hydro-climate impact assessments, given the coarser horizontal resolution and lack of engineering-level calibration. To identify a likely path to improve the credibility of ESM-simulated natural runoff, we conducted regional model simulation using the land component (ALM) of the Accelerated Climate Modeling for Energy (ACME) version 1 focusing on the conterminous United States (CONUS). Two very different forcing data sets, including (1) the conventional 0.5° CRUNCEP (v5, 1901-2013) and (2) the 1-km Daymet (v3, 1980-2013) aggregated to 0.5°, were used to conduct 20th century transient simulation with satellite phenology. Additional meteorologic and hydrologic observations, including PRISM precipitation and U.S. Geological Survey WaterWatch runoff, were used for model evaluation. For various CONUS hydrologic regions (such as the Pacific Northwest), we found that Daymet can significantly improve the reasonableness of simulated ALM runoff even without intensive calibration. The large dry bias of CRUNCEP precipitation (evaluated against PRISM) in multiple CONUS hydrologic regions is believed to be the main cause of the runoff underestimation. The results suggest that, when driven with skillful precipitation estimates, the ESM can produce reasonable natural runoff estimates to support further water management studies. Nevertheless, model calibration will be required for regions (such as the Upper Colorado) where poor performance is seen for multiple different forcings.

  9. The Roy Adaptation Model: A Theoretical Framework for Nurses Providing Care to Individuals With Anorexia Nervosa.

    Science.gov (United States)

    Jennings, Karen M

    Using a nursing theoretical framework to understand, elucidate, and propose nursing research is fundamental to knowledge development. This article presents the Roy Adaptation Model as a theoretical framework to better understand individuals with anorexia nervosa during acute treatment, and the role of nursing assessments and interventions in the promotion of weight restoration. Nursing assessments and interventions situated within the Roy Adaptation Model take into consideration how weight restoration does not occur in isolation but rather reflects an adaptive process within external and internal environments, and has the potential for more holistic care.

  10. The anti-human trafficking collaboration model and serving victims: Providers' perspectives on the impact and experience.

    Science.gov (United States)

    Kim, Hea-Won; Park, Taekyung; Quiring, Stephanie; Barrett, Diana

    2018-01-01

    A coalition model is often used to serve victims of human trafficking, but little is known about whether the model is adequately meeting the needs of the victims. The purpose of this study was to examine the anti-human trafficking collaboration model in terms of its impact and the collaborative experience, including challenges and lessons learned from the service providers' perspective. A mixed-methods study was conducted to evaluate the impact of a citywide anti-trafficking coalition model from the providers' perspectives. A web-based survey was administered to service providers (n = 32) and focus groups were conducted with Core Group members (n = 10). Providers reported that the coalition model has made important impacts in the community by increasing coordination among the key agencies, law enforcement, and service providers and improving the quality of service provision. Providers identified the improved and expanded partnerships among coalition members as the key contributing factor to the success of the coalition model. Several key strategies were suggested to improve the coalition model: improved referral tracking, key partner and protocol development, and information sharing.

  11. An agent-based simulation model of patient choice of health care providers in accountable care organizations.

    Science.gov (United States)

    Alibrahim, Abdullah; Wu, Shinyi

    2018-03-01

    Accountable care organizations (ACO) in the United States show promise in controlling health care costs while preserving patients' choice of providers. Understanding the effects of patient choice is critical in novel payment and delivery models like ACO that depend on continuity of care and accountability. The financial, utilization, and behavioral implications associated with a patient's decision to forego local health care providers for more distant ones to access higher quality care remain unknown. To study this question, we used an agent-based simulation model of a health care market composed of providers able to form ACO serving patients and embedded it in a conditional logit decision model to examine patients capable of choosing their care providers. This simulation focuses on Medicare beneficiaries and their congestive heart failure (CHF) outcomes. We place the patient agents in an ACO delivery system model in which provider agents decide if they remain in an ACO and perform a quality improving CHF disease management intervention. Illustrative results show that allowing patients to choose their providers reduces the yearly payment per CHF patient by $320, reduces mortality rates by 0.12 percentage points and hospitalization rates by 0.44 percentage points, and marginally increases provider participation in ACO. This study demonstrates a model capable of quantifying the effects of patient choice in a theoretical ACO system and provides a potential tool for policymakers to understand implications of patient choice and assess potential policy controls.
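
    The conditional logit layer of such a simulation can be illustrated with a small worked example: each patient chooses among providers with probability proportional to exp(utility), where utility might trade off travel distance against provider quality. The coefficients and provider attributes below are illustrative assumptions, not estimates from the study.

```python
import math

# Illustrative provider attributes: (name, distance in miles, quality score 0-1)
providers = [("Local clinic", 2.0, 0.60), ("Regional ACO", 15.0, 0.80), ("Academic center", 40.0, 0.90)]
beta_distance, beta_quality = -0.08, 4.0   # assumed utility coefficients

def choice_probabilities(options):
    """Conditional logit: P(j) = exp(V_j) / sum_k exp(V_k)."""
    utilities = [beta_distance * d + beta_quality * q for _, d, q in options]
    m = max(utilities)                                  # subtract max for numerical stability
    weights = [math.exp(u - m) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

for (name, _, _), p in zip(providers, choice_probabilities(providers)):
    print(f"{name:16s} chosen with probability {p:.2f}")
```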

  12. A Context-Aware Model to Provide Positioning in Disaster Relief Scenarios

    Directory of Open Access Journals (Sweden)

    Daniel Moreno

    2015-09-01

    The effectiveness of the work performed during disaster relief efforts is highly dependent on the coordination of activities conducted by the first responders deployed in the affected area. Such coordination, in turn, depends on an appropriate management of geo-referenced information. Therefore, enabling first responders to count on positioning capabilities during these activities is vital to increase the effectiveness of the response process. The positioning methods used in this scenario must assume a lack of infrastructure-based communication and electrical energy, which usually characterizes affected areas. Although positioning systems such as the Global Positioning System (GPS) have been shown to be useful, we cannot assume that all devices deployed in the area (or most of them) will have positioning capabilities by themselves. Typically, many first responders carry devices that are not capable of performing positioning on their own, but that require such a service. In order to help increase the positioning capability of first responders in disaster-affected areas, this paper presents a context-aware positioning model that allows mobile devices to estimate their position based on information gathered from their surroundings. The performance of the proposed model was evaluated using simulations, and the obtained results show that mobile devices without positioning capabilities were able to use the model to estimate their position. Moreover, the accuracy of the positioning model has been shown to be suitable for conducting most first response activities.
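
    One simple way a device without its own positioning capability can exploit information from its surroundings, in the spirit of the model described above, is to estimate its position from neighbouring devices that do broadcast a position, weighting each neighbour by received signal strength. This weighted-centroid sketch is only an illustration of the general idea, not the algorithm proposed in the paper; all values are invented.

```python
# Neighbouring first-responder devices that broadcast (x, y) positions and a
# received signal strength indicator (RSSI, dBm); all values are illustrative.
neighbours = [
    {"pos": (10.0, 20.0), "rssi": -50.0},
    {"pos": (30.0, 25.0), "rssi": -70.0},
    {"pos": (15.0, 40.0), "rssi": -60.0},
]

def estimate_position(nbrs):
    """Weighted centroid: closer (stronger-signal) neighbours get larger weights."""
    weights = [10 ** (n["rssi"] / 20.0) for n in nbrs]   # crude proximity proxy
    total = sum(weights)
    x = sum(w * n["pos"][0] for w, n in zip(weights, nbrs)) / total
    y = sum(w * n["pos"][1] for w, n in zip(weights, nbrs)) / total
    return x, y

print("estimated position:", estimate_position(neighbours))
```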

  13. Neutron Scattering Provides a New Model for Optimal Morphologies in Organic Photovoltaics: Rivers and Streams

    Science.gov (United States)

    Dadmun, Mark; Henry, Nathan; Yin, Wen; Xiao, Kai; Ankner, John

    2011-03-01

    The current model for the ideal morphology of a conjugated polymer bulk heterojunction organic photovoltaic (OPV) is a phase-separated structure that consists of two pure phases, one an electron donor, the other an acceptor, that form an interpenetrating, bicontinuous, network on the length scale of 10-20 nm. In this talk, neutron scattering experiments that demonstrate that this model is incorrect for the archetypal conjugated polymer bulk heterojunction, poly[3-hexylthiophene] (P3HT) and the fullerene 1-(3-methoxycarbonyl)propyl-1-phenyl-[6,6]C61 (PCBM), will be presented. These studies show that the miscibility of PCBM in P3HT approaches 20 wt%, a result that is counter to the standard model of efficient organic photovoltaics. The implications of this finding on the ideal morphology of conjugated polymer bulk heterojunctions will be discussed, where these results are interpreted to present a model that agrees with this data, and conforms to structural and functional information in the literature. Furthermore, the thermodynamics of conjugated polymer:fullerene mixtures dominate the formation of this hierarchical morphology and must be more thoroughly understood to rationally design and fabricate optimum morphologies for OPV activity.

  14. The Strategic Thinking and Learning Community: An Innovative Model for Providing Academic Assistance

    Science.gov (United States)

    Commander, Nannette Evans; Valeri-Gold, Maria; Darnell, Kim

    2004-01-01

    Today, academic assistance efforts are frequently geared to all students, not just the underprepared, with study skills offered in various formats. In this article, the authors describe a learning community model with the theme, "Strategic Thinking and Learning" (STL). Results of data analysis indicate that participants of the STL…

  15. Models Provide Specificity: Testing a Proposed Mechanism of Visual Working Memory Capacity Development

    Science.gov (United States)

    Simmering, Vanessa R.; Patterson, Rebecca

    2012-01-01

    Numerous studies have established that visual working memory has a limited capacity that increases during childhood. However, debate continues over the source of capacity limits and its developmental increase. Simmering (2008) adapted a computational model of spatial cognitive development, the Dynamic Field Theory, to explain not only the source…

  16. The "P2P" Educational Model Providing Innovative Learning by Linking Technology, Business and Research

    Science.gov (United States)

    Dickinson, Paul Gordon

    2017-01-01

    This paper evaluates the effect and potential of a new educational learning model called Peer to Peer (P2P). The study was focused on Laurea, Hyvinkaa's Finland campus and its response to bridging the gap between traditional educational methods and working reality, where modern technology plays an important role. The study describes and evaluates…

  17. Using Model-Based System Engineering to Provide Artifacts for NASA Project Life-Cycle and Technical Reviews Presentation

    Science.gov (United States)

    Parrott, Edith L.; Weiland, Karen J.

    2017-01-01

    This is the presentation for the AIAA Space conference in September 2017. It highlights key information from the paper "Using Model-Based Systems Engineering to Provide Artifacts for NASA Project Life-cycle and Technical Reviews."

  18. A Structural Model for a Self-Assembled Nanotube Provides Insight into Its Exciton Dynamics

    Science.gov (United States)

    2016-01-01

    The design and synthesis of functional self-assembled nanostructures is frequently an empirical process fraught with critical knowledge gaps about atomic-level structure in these noncovalent systems. Here, we report a structural model for a semiconductor nanotube formed via the self-assembly of naphthalenediimide-lysine (NDI-Lys) building blocks determined using experimental 13C–13C and 13C–15N distance restraints from solid-state nuclear magnetic resonance supplemented by electron microscopy and X-ray powder diffraction data. The structural model reveals a two-dimensional-crystal-like architecture of stacked monolayer rings each containing ∼50 NDI-Lys molecules, with significant π-stacking interactions occurring both within the confines of the ring and along the long axis of the tube. Excited-state delocalization and energy transfer are simulated for the nanotube based on time-dependent density functional theory and an incoherent hopping model. Remarkably, these calculations reveal efficient energy migration from the excitonic bright state, which is in agreement with the rapid energy transfer within NDI-Lys nanotubes observed previously using fluorescence spectroscopy. PMID:26120375

  19. Model of a multiverse providing the dark energy of our universe

    Science.gov (United States)

    Rebhan, E.

    2017-09-01

    It is shown that the dark energy presently observed in our universe can be regarded as the energy of a scalar field driving an inflation-like expansion of a multiverse with ours being a subuniverse among other parallel universes. A simple model of this multiverse is elaborated: Assuming closed space geometry, the origin of the multiverse can be explained by quantum tunneling from nothing; subuniverses are supposed to emerge from local fluctuations of separate inflation fields. The standard concept of tunneling from nothing is extended to the effect that in addition to an inflationary scalar field, matter is also generated, and that the tunneling leads to an (unstable) equilibrium state. The cosmological principle is assumed to pertain from the origin of the multiverse until the first subuniverses emerge. With increasing age of the multiverse, its spatial curvature decays exponentially so fast that, due to sharing the same space, the flatness problem of our universe resolves by itself. The dark energy density imprinted by the multiverse on our universe is time-dependent, but such that the ratio w = p/(ϱc²) of its pressure to its mass density (times c²) is time-independent and assumes a value -1 + ε with arbitrary ε > 0. ε can be chosen so small that the dark energy model of this paper can be fitted to the current observational data as well as the cosmological constant model.
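
    For orientation, a component with constant equation-of-state parameter w has an energy density that scales with the cosmic scale factor a as in the standard FLRW result below, so w = -1 + ε implies a density decaying only as a^(-3ε), arbitrarily close to a true cosmological constant for small ε. This is textbook background rather than a result of the paper.

```latex
% Energy density of a component with constant equation-of-state parameter w
\rho_{\mathrm{DE}}(a) = \rho_{\mathrm{DE},0}\, a^{-3(1+w)}
                      = \rho_{\mathrm{DE},0}\, a^{-3\epsilon}
\quad \text{for } w = -1 + \epsilon .
```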

  20. Using an established telehealth model to train urban primary care providers on hypertension management.

    Science.gov (United States)

    Masi, Christopher; Hamlish, Tamara; Davis, Andrew; Bordenave, Kristine; Brown, Stephen; Perea, Brenda; Aduana, Glen; Wolfe, Marcus; Bakris, George; Johnson, Daniel

    2012-01-01

    The objective of this study was to determine whether a videoconference-based telehealth network can increase hypertension management knowledge and self-assessed competency among primary care providers (PCPs) working in urban Federally Qualified Health Centers (FQHCs). We created a telehealth network among 6 urban FQHCs and our institution to support a 12-session educational program designed to teach state-of-the-art hypertension management. Each 1-hour session included a brief lecture by a university-based hypertension specialist, case presentations by PCPs, and interactive discussions among the specialist and PCPs. Twelve PCPs (9 intervention and 3 controls) were surveyed at baseline and immediately following the curriculum. The mean number of correct answers on the 26-item hypertension knowledge questionnaire increased in the intervention group (from 13.11 [SD=3.06] to 17.44 [SD=1.59]). The mean score on the hypertension management self-assessed competency scale also increased in the intervention group (from 4.68 [SD=0.94] to 5.41 [SD=0.89]). These findings suggest that this established telehealth model can improve the hypertension care provided by urban FQHC providers. © 2011 Wiley Periodicals, Inc.

  1. Do NHS walk-in centres in England provide a model of integrated care?

    Directory of Open Access Journals (Sweden)

    C. Salisbury

    2003-08-01

    Purpose: To undertake a comprehensive evaluation of NHS walk-in centres against criteria of improved access, quality, user satisfaction and efficiency. Context: Forty NHS walk-in centres have been opened in England, as part of the UK government's agenda to modernise the NHS. They are intended to improve access to primary care, provide high quality treatment at convenient times, and reduce inappropriate demand on other NHS providers. Care is provided by nurses rather than doctors, using computerised algorithms, and nurses use protocols to supply treatments previously only available from doctors. Data sources: Several linked studies were conducted using different sources of data and methodologies. These included routinely collected data, site visits, patient interviews, a survey of users of walk-in centres, a study using simulated patients to assess quality of care, analysis of consultation rates in NHS services near to walk-in centres, and audit of compliance with protocols. Conclusion & discussion: The findings illustrate many of the issues described in a recent WHO reflective paper on Integrated Care, including tensions between professional judgement and use of protocols, problems with incompatible IT systems, balancing users' demands and needs, the importance of understanding health professionals' roles and issues of technical versus allocative efficiency.

  2. RESEARCH OF PROBLEMS OF DESIGN OF COMPLEX TECHNICAL PROVIDING AND THE GENERALIZED MODEL OF THEIR DECISION

    Directory of Open Access Journals (Sweden)

    A. V. Skrypnikov

    2015-01-01

    Summary. In this work, the general ideas of V. I. Skurikhin's method are developed with the specified features taken into account, and questions of the analysis and synthesis of a complex of technical means are considered in more detail, bringing them to a level suitable for use in the engineering practice of designing information management systems. A general system approach is established for selecting the technical means of an information management system, and a general technique is developed for the system analysis and synthesis of the complex of technical means and its subsystems that achieves the extreme value of the criterion for the efficiency of functioning of the technical complex of the information management system. The main attention is paid to the applied side of system studies of complex technical provision, in particular to defining criteria for the quality of functioning of a technical complex, developing methods for analysing the information base of the information management system and defining requirements for technical means, as well as methods for the structural synthesis of the main subsystems of complex technical provision. Thus, the purpose is to study, on the basis of a system approach, the complex technical provision of the information management system and to develop a number of analysis and synthesis methods for complex technical provision suitable for use in the engineering practice of systems design. The well-known paradox in the development of management information systems is that the parameters of the system, and consequently the requirements for the hardware complex, cannot be strictly justified before the algorithms and programs are developed, and vice versa. A possible way of overcoming these difficulties is to forecast the structure and parameters of the hardware complex for certain management information at the early stages of development, with subsequent refinement and

  3. A decision tree model to estimate the value of information provided by a groundwater quality monitoring network

    Directory of Open Access Journals (Sweden)

    A. I. Khader

    2013-05-01

    Groundwater contaminated with nitrate poses a serious health risk to infants when this contaminated water is used for culinary purposes. To avoid this health risk, people need to know whether their culinary water is contaminated or not. Therefore, there is a need to design an effective groundwater monitoring network, acquire information on groundwater conditions, and use acquired information to inform management options. These actions require time, money, and effort. This paper presents a method to estimate the value of information (VOI) provided by a groundwater quality monitoring network located in an aquifer whose water poses a spatially heterogeneous and uncertain health risk. A decision tree model describes the structure of the decision alternatives facing the decision-maker and the expected outcomes from these alternatives. The alternatives include (i) ignore the health risk of nitrate-contaminated water, (ii) switch to alternative water sources such as bottled water, or (iii) implement a previously designed groundwater quality monitoring network that takes into account uncertainties in aquifer properties, contaminant transport processes, and climate (Khader, 2012). The VOI is estimated as the difference between the expected costs of implementing the monitoring network and the lowest-cost uninformed alternative. We illustrate the method for the Eocene Aquifer, West Bank, Palestine, where methemoglobinemia (blue baby syndrome) is the main health problem associated with the principal contaminant nitrate. The expected cost of each alternative is estimated as the weighted sum of the costs and probabilities (likelihoods) associated with the uncertain outcomes resulting from the alternative. Uncertain outcomes include actual nitrate concentrations in the aquifer, concentrations reported by the monitoring system, whether people abide by manager recommendations to use/not use aquifer water, and whether people get sick from drinking contaminated water.

  4. A decision tree model to estimate the value of information provided by a groundwater quality monitoring network

    Science.gov (United States)

    Khader, A. I.; Rosenberg, D. E.; McKee, M.

    2013-05-01

    Groundwater contaminated with nitrate poses a serious health risk to infants when this contaminated water is used for culinary purposes. To avoid this health risk, people need to know whether their culinary water is contaminated or not. Therefore, there is a need to design an effective groundwater monitoring network, acquire information on groundwater conditions, and use acquired information to inform management options. These actions require time, money, and effort. This paper presents a method to estimate the value of information (VOI) provided by a groundwater quality monitoring network located in an aquifer whose water poses a spatially heterogeneous and uncertain health risk. A decision tree model describes the structure of the decision alternatives facing the decision-maker and the expected outcomes from these alternatives. The alternatives include (i) ignore the health risk of nitrate-contaminated water, (ii) switch to alternative water sources such as bottled water, or (iii) implement a previously designed groundwater quality monitoring network that takes into account uncertainties in aquifer properties, contaminant transport processes, and climate (Khader, 2012). The VOI is estimated as the difference between the expected costs of implementing the monitoring network and the lowest-cost uninformed alternative. We illustrate the method for the Eocene Aquifer, West Bank, Palestine, where methemoglobinemia (blue baby syndrome) is the main health problem associated with the principal contaminant nitrate. The expected cost of each alternative is estimated as the weighted sum of the costs and probabilities (likelihoods) associated with the uncertain outcomes resulting from the alternative. Uncertain outcomes include actual nitrate concentrations in the aquifer, concentrations reported by the monitoring system, whether people abide by manager recommendations to use/not use aquifer water, and whether people get sick from drinking contaminated water. Outcome costs
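
    The core calculation described above, the expected cost of each alternative as a probability-weighted sum of outcome costs and the VOI as the saving of the monitored alternative over the cheapest uninformed one, can be sketched in a few lines. The probabilities and costs below are illustrative placeholders, not the values estimated for the Eocene Aquifer.

```python
# Each alternative maps to a list of (probability, cost) outcome branches.
alternatives = {
    "ignore risk":        [(0.80, 0.0), (0.20, 5000.0)],           # stay healthy vs. illness costs
    "bottled water":      [(1.00, 1200.0)],                        # certain purchase cost
    "monitoring network": [(0.95, 300.0), (0.05, 300.0 + 5000.0)], # network cost, rare missed illness
}

def expected_cost(branches):
    """Expected cost = sum over outcomes of probability * cost."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * c for p, c in branches)

costs = {name: expected_cost(b) for name, b in alternatives.items()}
uninformed = min(costs["ignore risk"], costs["bottled water"])
voi = uninformed - costs["monitoring network"]   # value of information from monitoring

for name, c in costs.items():
    print(f"{name:18s} expected cost = {c:8.1f}")
print(f"value of information = {voi:.1f}")
```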

  5. A decision tree model to estimate the value of information provided by a groundwater quality monitoring network

    Science.gov (United States)

    Khader, A.; Rosenberg, D.; McKee, M.

    2012-12-01

    Nitrate pollution poses a health risk for infants whose freshwater drinking source is groundwater. This risk creates a need to design an effective groundwater monitoring network, acquire information on groundwater conditions, and use acquired information to inform management. These actions require time, money, and effort. This paper presents a method to estimate the value of information (VOI) provided by a groundwater quality monitoring network located in an aquifer whose water poses a spatially heterogeneous and uncertain health risk. A decision tree model describes the structure of the decision alternatives facing the decision maker and the expected outcomes from these alternatives. The alternatives include: (i) ignore the health risk of nitrate contaminated water, (ii) switch to alternative water sources such as bottled water, or (iii) implement a previously designed groundwater quality monitoring network that takes into account uncertainties in aquifer properties, pollution transport processes, and climate (Khader and McKee, 2012). The VOI is estimated as the difference between the expected costs of implementing the monitoring network and the lowest-cost uninformed alternative. We illustrate the method for the Eocene Aquifer, West Bank, Palestine where methemoglobinemia is the main health problem associated with the principal pollutant nitrate. The expected cost of each alternative is estimated as the weighted sum of the costs and probabilities (likelihoods) associated with the uncertain outcomes resulting from the alternative. Uncertain outcomes include actual nitrate concentrations in the aquifer, concentrations reported by the monitoring system, whether people abide by manager recommendations to use/not-use aquifer water, and whether people get sick from drinking contaminated water. Outcome costs include healthcare for methemoglobinemia, purchase of bottled water, and installation and maintenance of the groundwater monitoring system. At current

  6. Sunitinib malate provides activity against murine bladder tumor growth and invasion in a preclinical orthotopic model.

    Science.gov (United States)

    Chan, Eddie Shu-yin; Patel, Amit R; Hansel, Donna E; Larchian, William A; Heston, Warren D

    2012-09-01

    To evaluate the effects of sunitinib on localized bladder cancer in a mouse orthotopic bladder tumor model. We used an established orthotopic mouse bladder cancer model in syngeneic C3H/He mice. Treatment doses of 40 mg/kg of sunitinib or placebo sterile saline were administered daily by oral gavage. Tumor volume, intratumoral perfusion, and in vivo vascular endothelial growth factor receptor-2 expression were measured using a targeted contrast-enhanced micro-ultrasound imaging system. The findings were correlated with the total bladder weight, tumor stage, and survival. The effects of sunitinib malate on angiogenesis and cellular proliferation were measured by immunostaining of CD31 and Ki-67. Significant inhibition of tumor growth was seen after sunitinib treatment compared with the control. The incidence of extravesical extension of the bladder tumor and hydroureter in the sunitinib-treated group (30% and 20%, respectively) was lower than the incidence in the control group (66.7% and 55.6%, respectively). Sunitinib therapy prolonged survival in mice, with statistical significance (log-rank test, P = .03). On targeted contrast-enhanced micro-ultrasound imaging, in vivo vascular endothelial growth factor receptor-2 expression was reduced in the sunitinib group and correlated with a decrease in microvessel density. The results of our study have demonstrated the antitumor effects of sunitinib in the mouse localized bladder cancer model. Sunitinib inhibited the growth of bladder tumors and prolonged survival. Given that almost 30% of cases in our treatment arm developed extravesical disease, sunitinib might be suited as a part of a multimodal treatment regimen for bladder cancer. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. A spirulina-enhanced diet provides neuroprotection in an α-synuclein model of Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Mibel M Pabon

    Inflammation in the brain plays a major role in neurodegenerative diseases. In particular, microglial cell activation is believed to be associated with the pathogenesis of neurodegenerative diseases, including Parkinson's disease (PD). An increase in microglia activation has been shown in the substantia nigra pars compacta (SNpc) of PD models when there has been a decrease in tyrosine hydroxylase (TH) positive cells. This may be a sign of neurotoxicity due to prolonged activation of microglia in both early and late stages of disease progression. Natural products, such as spirulina, derived from blue green algae, are believed to help reverse this effect due to its anti-inflammatory/anti-oxidant properties. An adeno-associated virus vector (AAV9) for α-synuclein was injected in the substantia nigra of rats to model Parkinson's disease and to study the effects of spirulina on the inflammatory response. One month prior to surgeries, rats were fed either a diet enhanced with spirulina or a control diet. Immunohistochemistry was analyzed with unbiased stereological methods to quantify lesion size and microglial activation. As hypothesized, spirulina was neuroprotective in this α-synuclein model of PD as more TH+ and NeuN+ cells were observed; spirulina concomitantly decreased the numbers of activated microglial cells as determined by MHCII expression. This decrease in microglia activation may have been due, in part, to the effect of spirulina to increase expression of the fractalkine receptor (CX3CR1) on microglia. With this study we hypothesize that α-synuclein neurotoxicity is mediated, at least in part, via an interaction with microglia. We observed a decrease in activated microglia in the rats that received a spirulina-enhanced diet concomitant to neuroprotection. The increase in CX3CR1 in the groups that received spirulina, suggests a potential mechanism of action.

  8. An artificial pancreas provided a novel model of blood glucose level variability in beagles.

    Science.gov (United States)

    Munekage, Masaya; Yatabe, Tomoaki; Kitagawa, Hiroyuki; Takezaki, Yuka; Tamura, Takahiko; Namikawa, Tsutomu; Hanazaki, Kazuhiro

    2015-12-01

    Although the effects of blood glucose level variability on prognosis have gained increasing attention, it is unclear whether blood glucose level variability itself worsens prognosis or whether it is merely a manifestation of the pathological conditions that do. Moreover, no variability models of perioperative blood glucose levels have previously been reported. The aim of this study is to establish a novel variability model of blood glucose concentration using an artificial pancreas. We maintained six healthy, male beagles. After anesthesia induction, a 20-G venous catheter was inserted in the right femoral vein and an artificial pancreas (STG-22, Nikkiso Co. Ltd., Tokyo, Japan) was connected for continuous blood glucose monitoring and glucose management. After achieving muscle relaxation, total pancreatectomy was performed. After 1 h of stabilization, automatic blood glucose control was initiated using the artificial pancreas. The blood glucose level was varied for 8 h, alternating between the target blood glucose values of 170 and 70 mg/dL. Eight hours later, the experiment was concluded. Total pancreatectomy took 62 ± 13 min. Blood glucose swings were achieved 9.8 ± 2.3 times. The average blood glucose level was 128.1 ± 5.1 mg/dL with an SD of 44.6 ± 3.9 mg/dL. The potassium levels after stabilization and at the end of the experiment were 3.5 ± 0.3 and 3.1 ± 0.5 mmol/L, respectively. In conclusion, the results of the present study demonstrated that an artificial pancreas contributed to the establishment of a novel variability model of blood glucose levels in beagles.

  9. Rat tibial osteotomy model providing a range of normal to impaired healing.

    Science.gov (United States)

    Miles, Joan D; Weinhold, Paul; Brimmo, Olubusola; Dahners, Laurence

    2011-01-01

    The purpose of this study was to develop an inexpensive and easily implemented rat tibial osteotomy model capable of producing a range of healing outcomes. A saw blade was used to create a transverse osteotomy of the tibia in 89 Sprague-Dawley rats. A 0.89 mm diameter stainless steel wire was then inserted as an intramedullary nail to stabilize the fracture. To impair healing, 1, 2, or 3 mm cylindrical polyetheretherketone (PEEK) spacer beads were threaded onto the wires, between the bone ends. Fracture healing was evaluated radiographically, biomechanically, and histologically at 5 weeks. Means were compared for statistical differences by one-way ANOVA and Holm-Sidak multiple comparison testing. The mean number of "cortices bridged" for the no spacer group was 3.4 (SD ± 0.8), which was significantly greater than in the 1 mm (2.3 ± 1.4), 2 mm (0.8 ± 0.7), and 3 mm (0.3 ± 0.4) groups (p < 0.003). Biomechanical results correlated with radiographic findings, with an ultimate torque of 172 ± 53, 137 ± 41, 90 ± 38, and 24 ± 23 N/mm with a 0, 1, 2, or 3 mm defect, respectively. In conclusion, we have demonstrated that this inexpensive, technically straightforward model can be used to create a range of outcomes from normal healing to impaired healing, to nonunions. This model may be useful for testing new therapeutic strategies to promote fracture healing, materials thought to be able to heal critical-sized defects, or evaluating agents suspected of impairing healing. Copyright © 2010 Orthopaedic Research Society.

  10. A Spirulina-Enhanced Diet Provides Neuroprotection in an α-Synuclein Model of Parkinson's Disease

    Science.gov (United States)

    Pabon, Mibel M.; Jernberg, Jennifer N.; Morganti, Josh; Contreras, Jessika; Hudson, Charles E.; Klein, Ronald L.; Bickford, Paula C.

    2012-01-01

    Inflammation in the brain plays a major role in neurodegenerative diseases. In particular, microglial cell activation is believed to be associated with the pathogenesis of neurodegenerative diseases, including Parkinson's disease (PD). An increase in microglia activation has been shown in the substantia nigra pars compacta (SNpc) of PD models when there has been a decrease in tyrosine hydroxylase (TH) positive cells. This may be a sign of neurotoxicity due to prolonged activation of microglia in both early and late stages of disease progression. Natural products such as spirulina, derived from blue-green algae, are believed to help reverse this effect owing to their anti-inflammatory and antioxidant properties. An adeno-associated virus vector (AAV9) for α-synuclein was injected into the substantia nigra of rats to model Parkinson's disease and to study the effects of spirulina on the inflammatory response. One month prior to surgeries, rats were fed either a diet enhanced with spirulina or a control diet. Immunohistochemistry was analyzed with unbiased stereological methods to quantify lesion size and microglial activation. As hypothesized, spirulina was neuroprotective in this α-synuclein model of PD, as more TH+ and NeuN+ cells were observed; spirulina concomitantly decreased the numbers of activated microglial cells as determined by MHCII expression. This decrease in microglia activation may have been due, in part, to the effect of spirulina to increase expression of the fractalkine receptor (CX3CR1) on microglia. With this study we hypothesize that α-synuclein neurotoxicity is mediated, at least in part, via an interaction with microglia. We observed a decrease in activated microglia in the rats that received a spirulina-enhanced diet, concomitant with neuroprotection. The increase in CX3CR1 in the groups that received spirulina suggests a potential mechanism of action. PMID:23028885

  11. User modeling and adaptation for daily routines providing assistance to people with special needs

    CERN Document Server

    Martín, Estefanía; Carro, Rosa M

    2013-01-01

    User Modeling and Adaptation for Daily Routines is motivated by the need to bring attention to how people with special needs can benefit from adaptive methods and techniques in their everyday lives. Assistive technologies, adaptive systems and context-aware applications are three well-established research fields. There is, in fact, a vast amount of literature that covers HCI-related issues in each area separately. However, the contributions in the intersection of these areas have been less visible, despite the fact that such synergies may have a great impact on improving daily living.

  12. Metabolomic perfusate analysis during kidney machine perfusion: the pig provides an appropriate model for human studies.

    Directory of Open Access Journals (Sweden)

    Jay Nath

    Full Text Available Hypothermic machine perfusion offers great promise in kidney transplantation, and experimental studies are needed to establish the optimal conditions for this to occur. Pig kidneys are considered to be a good model for this purpose and share many properties with human organs. However, it is not established whether the metabolism of pig kidneys in such hypothermic hypoxic conditions is comparable to that of human organs. Standard criteria human (n = 12) and porcine (n = 10) kidneys underwent hypothermic machine perfusion (HMP) using the LifePort Kidney Transporter 1.0 (Organ Recovery Systems) with KPS-1 solution. Perfusate was sampled at 45 minutes and 4 hours of perfusion, and metabolomic analysis was performed using 1-D 1H-NMR spectroscopy. There was no inter-species difference in the number of metabolites identified. Of the 30 metabolites analysed, 16 (53.3%) were present in comparable concentrations in the pig and human kidney perfusates. The rate of change of concentration for 3-hydroxybutyrate was greater for human kidneys (p<0.001). For the other 29 metabolites (96.7%), there was no difference in the rate of change of concentration between pig and human samples. Whilst there are some differences between pig and human kidneys during HMP, they appear to be metabolically similar and the pig seems to be a valid model for human studies.

  13. Application of the Multidimensional Scaling Model in Mapping the Brand Positioning of Internet Service Providers

    Directory of Open Access Journals (Sweden)

    Robertus Tang Herman

    2010-03-01

    Full Text Available In this high-tech era, there have been tremendous advances in technology-based products and services. The Internet is one of them, opening the world's eyes to a new borderless marketplace. Intense competition among internet service providers has pushed companies to create competitive advantages and well-crafted marketing strategies. They use positioning mapping to describe a product's or service's position among its many competitors, and the right positioning strategy becomes a powerful weapon in this battle. This research is designed to create a positioning map based on perceptual mapping, using multidimensional scaling and image mapping. Sampling was carried out with non-probability sampling in Jakarta. Based on a non-attribute approach, the findings show that two of the brands are perceived as similar and therefore compete directly against one another, whereas the CBN and Netzap providers are differentiated from the others. Some brands also require improvements in terms of network reliability.
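
    As a rough illustration of the mapping step described above, the sketch below builds a two-dimensional perceptual map with multidimensional scaling. It is not the study's code: the brand names and the dissimilarity matrix are hypothetical placeholders standing in for aggregated survey judgements.

```python
# Illustrative sketch only: a 2-D perceptual map of internet service provider
# brands via multidimensional scaling. Brand names and dissimilarities are
# hypothetical placeholders, not the study's survey data.
import numpy as np
from sklearn.manifold import MDS

brands = ["Brand A", "Brand B", "Brand C", "Brand D"]
# Symmetric perceived-dissimilarity matrix (0 = identical, 1 = very different),
# e.g. aggregated from paired-comparison survey responses.
dissimilarity = np.array([
    [0.0, 0.2, 0.7, 0.8],
    [0.2, 0.0, 0.6, 0.9],
    [0.7, 0.6, 0.0, 0.3],
    [0.8, 0.9, 0.3, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)  # one (x, y) point per brand

for brand, (x, y) in zip(brands, coords):
    print(f"{brand}: ({x:+.2f}, {y:+.2f})")
# Brands that plot close together are perceived as similar and hence compete directly.
```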

  14. Multiple imputation as one tool to provide longitudinal databases for modelling human height and weight development.

    Science.gov (United States)

    Aßmann, C

    2016-06-01

    Besides the large effort involved in field work, the provision of valid databases requires a statistical and informational infrastructure that enables long-term access to longitudinal data sets on height, weight and related measures. To foster use of longitudinal data sets within the scientific community, provision of valid databases also has to address data-protection regulations; it is therefore of major importance to prevent the identification of individuals from publicly available databases. One possible strategy to reach this goal is to release a synthetic database to the public on which analysis strategies can be pre-tested; once an analysis strategy is approved, verification is carried out on the original data. Such synthetic databases can be established using multiple imputation tools. Multiple imputation by chained equations is illustrated as a means of producing synthetic databases, as it captures a wide range of statistical interdependencies. Missing values, which typically occur in longitudinal databases through item non-response, can also be addressed via multiple imputation when providing databases. The provision of synthetic databases using multiple imputation techniques is thus one possible strategy to ensure data protection, increase the visibility of longitudinal databases and enhance their analytical potential.
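
    A minimal sketch of the multiple-imputation idea mentioned above, assuming a scikit-learn environment; the toy height/weight table, its column layout and the number of imputations are invented for illustration and are not the study's data or pipeline.

```python
# Minimal sketch: multiple imputation by chained equations on a toy
# longitudinal height/weight table with item non-response. Values are made up.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Rows: individuals; columns: height_t1, weight_t1, height_t2, weight_t2 (cm / kg).
data = np.array([
    [110.0, 19.0, 116.0, 21.5],
    [108.0, 18.0, np.nan, 20.0],   # missing follow-up height
    [115.0, np.nan, 121.0, 23.0],  # missing baseline weight
    [112.0, 20.0, 118.0, np.nan],  # missing follow-up weight
])

# Draw several completed datasets; varying the random state gives the
# "multiple" in multiple imputation.
imputations = []
for seed in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    imputations.append(imputer.fit_transform(data))

# Analyses would be run on each completed dataset and pooled (e.g. Rubin's
# rules); here we just show the between-imputation spread per cell.
print(np.std(imputations, axis=0).round(2))
```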

  15. JSBML 1.0: providing a smorgasbord of options to encode systems biology models.

    Science.gov (United States)

    Rodriguez, Nicolas; Thomas, Alex; Watanabe, Leandro; Vazirabad, Ibrahim Y; Kofia, Victor; Gómez, Harold F; Mittag, Florian; Matthes, Jakob; Rudolph, Jan; Wrzodek, Finja; Netz, Eugen; Diamantikos, Alexander; Eichner, Johannes; Keller, Roland; Wrzodek, Clemens; Fröhlich, Sebastian; Lewis, Nathan E; Myers, Chris J; Le Novère, Nicolas; Palsson, Bernhard Ø; Hucka, Michael; Dräger, Andreas

    2015-10-15

    JSBML, the official pure Java programming library for the Systems Biology Markup Language (SBML) format, has evolved with the advent of different modeling formalisms in systems biology and their ability to be exchanged and represented via extensions of SBML. JSBML has matured into a major, active open-source project with contributions from a growing, international team of developers who not only maintain compatibility with SBML, but also drive steady improvements to the Java interface and promote ease-of-use with end users. Source code, binaries and documentation for JSBML can be freely obtained under the terms of the LGPL 2.1 from the website http://sbml.org/Software/JSBML. More information about JSBML can be found in the user guide at http://sbml.org/Software/JSBML/docs/. jsbml-development@googlegroups.com or andraeger@eng.ucsd.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  16. Pharmacological targeting of GSK-3 and NRF2 provides neuroprotection in a preclinical model of tauopathy

    Directory of Open Access Journals (Sweden)

    Antonio Cuadrado

    2018-04-01

    Full Text Available Tauopathies are a group of neurodegenerative disorders in which TAU protein forms aggregates or is abnormally phosphorylated, leading to alterations of axonal transport, neuronal death and neuroinflammation. Currently, there is no treatment to slow progression of these diseases. Here, we have investigated whether dimethyl fumarate (DMF), an inducer of the transcription factor NRF2, could mitigate tauopathy in a mouse model. The signaling pathways modulated by DMF were also studied in mouse embryonic fibroblasts (MEFs) from wild-type or KEAP1-deficient mice. The effect of DMF on neurodegeneration, astrocyte and microglial activation was examined in Nrf2+/+ and Nrf2−/− mice stereotaxically injected in the right hippocampus with an adeno-associated vector expressing human TAUP301L and treated daily with DMF (100 mg/kg, i.g.) for three weeks. DMF induces the NRF2 transcriptional signature through a mechanism that involves KEAP1 but also PI3K/AKT/GSK-3-dependent pathways. DMF modulates GSK-3β activity in mouse hippocampi. Furthermore, DMF modulates TAU phosphorylation, neuronal impairment measured by calbindin-D28K and BDNF expression, and inflammatory processes involved in astrogliosis, microgliosis and pro-inflammatory cytokine production. This study reveals neuroprotective effects of DMF beyond disruption of the KEAP1/NRF2 axis by inhibiting GSK-3 in a mouse model of tauopathy. Our results support repurposing of this drug for treatment of these diseases. Keywords: DMF, Inflammation, Neurodegeneration, NRF2, Oxidative stress, TAU/GSK-3

  17. Bridging the financial gap through providing contract services: a model for publicly funded clinical biobanks.

    Science.gov (United States)

    Kozlakidis, Zisis; Mant, Christine; Cason, John

    2012-08-01

    Biobanks offer translational researchers a novel method of obtaining clinical research materials, patient data, and relevant ethical and legal permissions. However, such tissue collections are expensive to establish and maintain. Current opinion is that such initiatives can only survive with core funding from Government or major funding bodies. Given the present climate of financial austerity, funding agencies may be tempted to invest in fast-return research projects rather than in maintaining tissue collections, whose benefits will only become apparent in much longer timescales. Thus, securing additional funding for biobanks could provide a valuable boost enabling an extension of core services. Here we suggest that using biobank expertise to offer contract services to clinicians and industry may be an alternative approach to obtaining such extra funding.

  18. A case study of a team-based, quality-focused compensation model for primary care providers.

    Science.gov (United States)

    Greene, Jessica; Hibbard, Judith H; Overton, Valerie

    2014-06-01

    In 2011, Fairview Health Services began replacing their fee-for-service compensation model for primary care providers (PCPs), which included an annual pay-for-performance bonus, with a team-based model designed to improve quality of care, patient experience, and (eventually) cost containment. In-depth interviews and an online survey of PCPs early after implementation of the new model suggest that it quickly changed the way many PCPs practiced. Most PCPs reported a shift in orientation toward quality of care, working more collaboratively with their colleagues and focusing on their full panel of patients. The majority reported that their quality of care had improved because of the model and that their colleagues' quality had improved as well. The comprehensive change did, however, result in lower fee-for-service billing and reductions in PCP satisfaction. While Fairview's compensation model is still a work in progress, their early experiences can provide lessons for other delivery systems seeking to reform PCP compensation.

  19. Hyperspectral Imaging Provides Early Prediction of Random Axial Flap Necrosis in a Preclinical Model.

    Science.gov (United States)

    Chin, Michael S; Chappell, Ava G; Giatsidis, Giorgio; Perry, Dylan J; Lujan-Hernandez, Jorge; Haddad, Anthony; Matsumine, Hajime; Orgill, Dennis P; Lalikos, Janice F

    2017-06-01

    Necrosis remains a significant complication in cutaneous flap procedures. Monitoring, and ideally prediction, of vascular compromise in the early postoperative period may allow surgeons to limit the impact of complications by prompt intervention. Hyperspectral imaging could be a reliable, effective, and noninvasive method for predicting flap survival postoperatively. In this preclinical study, the authors demonstrate that hyperspectral imaging is able to correlate early skin perfusion changes with ultimate flap survival. Thirty-one hairless, immunocompetent, adult male mice were used. Random pattern dorsal skin flaps were elevated and sutured back into place with a silicone barrier. Hyperspectral imaging and digital images were obtained 30 minutes, 24 hours, or 72 hours after flap elevation and before sacrifice on postoperative day 7. Areas of high deoxygenated hemoglobin change (124; 95 percent CI, 118 to 129) seen at 30 minutes after surgery were associated with greater than 50 percent flap necrosis at postoperative day 7. Areas demarcated by high deoxygenated hemoglobin at 30 minutes postoperatively had a statistically significant correlation with areas of macroscopic necrosis on postoperative day 7. Analysis of images obtained at 24 and 72 hours did not show similar changes. These findings suggest that early changes in deoxygenated hemoglobin seen with hyperspectral imaging may predict the region and extent of flap necrosis. Further clinical studies are needed to determine whether hyperspectral imaging is applicable to the clinical setting.

  20. Assistance dogs provide a useful behavioral model to enrich communicative skills of assistance robots.

    Science.gov (United States)

    Gácsi, Márta; Szakadát, Sára; Miklósi, Adám

    2013-01-01

    These studies are part of a project aiming to reveal relevant aspects of human-dog interactions, which could serve as a model to design successful human-robot interactions. Presently there are no successfully commercialized assistance robots; assistance dogs, however, work efficiently as partners for persons with disabilities. In Study 1, we analyzed the cooperation of 32 assistance dog-owner dyads performing a carrying task. We revealed typical behavior sequences and also differences depending on the dyads' experience and on whether the owner was a wheelchair user. In Study 2, we investigated dogs' responses to unforeseen difficulties during a retrieving task in two contexts. Dogs displayed specific communicative and displacement behaviors, and a strong commitment to execute the insoluble task. Questionnaire data from Study 3 confirmed that these behaviors could successfully attenuate owners' disappointment. Although owners anticipated the technical competence of future assistance robots to be moderate/high, they could not imagine robots as emotional companions, which negatively affected their acceptance ratings of future robotic assistants. We propose that assistance dogs' cooperative behaviors and problem-solving strategies should inspire the development of the relevant functions and social behaviors of assistance robots with limited manual and verbal skills.

  1. Planarians as models of cadmium-induced neoplasia provide measurable benchmarks for mechanistic studies.

    Science.gov (United States)

    Voura, Evelyn B; Montalvo, Melissa J; Dela Roca, Kevin T; Fisher, Julia M; Defamie, Virginie; Narala, Swami R; Khokha, Rama; Mulligan, Margaret E; Evans, Colleen A

    2017-08-01

    Bioassays of planarian neoplasia highlight the potential of these organisms as useful standards to assess whether environmental toxins such as cadmium promote tumorigenesis. These studies complement other investigations into the exceptional healing and regeneration of planarians - processes that are driven by a population of active stem cells, or neoblasts, which are likely transformed during planarian tumor growth. Our goal was to determine if planarian tumorigenesis assays are amenable to mechanistic studies of cadmium carcinogenesis. To that end, we demonstrate that the activity of the stem cell population can be monitored by examining both counts of cell populations by size and instances of mitosis. We also provide evidence that specific biomodulators can affect the potential of planarian neoplastic growth, in that an inhibitor of metalloproteinases effectively blocked the development of the lesions. From these results, we infer that neoblast activity does respond to cadmium-induced tumor growth, and that metalloproteinases are required for the progression of cancer in the planarian. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Directed evolution of a model primordial enzyme provides insights into the development of the genetic code.

    Directory of Open Access Journals (Sweden)

    Manuel M Müller

    Full Text Available The contemporary proteinogenic repertoire contains 20 amino acids with diverse functional groups and side chain geometries. Primordial proteins, in contrast, were presumably constructed from a subset of these building blocks. Subsequent expansion of the proteinogenic alphabet would have enhanced their capabilities, fostering the metabolic prowess and organismal fitness of early living systems. While the addition of amino acids bearing innovative functional groups directly enhances the chemical repertoire of proteomes, the inclusion of chemically redundant monomers is difficult to rationalize. Here, we studied how a simplified chorismate mutase evolves upon expanding its amino acid alphabet from nine to potentially 20 letters. Continuous evolution provided an enhanced enzyme variant that has only two point mutations, both of which extend the alphabet and jointly improve protein stability by >4 kcal/mol and catalytic activity tenfold. The same, seemingly innocuous substitutions (Ile→Thr, Leu→Val) occurred in several independent evolutionary trajectories. The increase in fitness they confer indicates that building blocks with very similar side chain structures are highly beneficial for fine-tuning protein structure and function.

  3. Efforts to enrich evidence for accurate diagnoses

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-10-01

    Full Text Available It is always difficult to reach accurate diagnoses for complicated disease conditions, especially for disorders with similar characteristics. With advanced technology, clinicians are able to detect subtle changes during ongoing disease processes. The research papers in the current issue help readers to understand features of several such disorders, and in this way the issue provides references and hints for accurate diagnosis and precision therapy.

  4. Accurate and efficient gp120 V3 loop structure based models for the determination of HIV-1 co-receptor usage

    Directory of Open Access Journals (Sweden)

    Vaisman Iosif I

    2010-10-01

    Full Text Available Abstract Background HIV-1 targets human cells expressing both the CD4 receptor, which binds the viral envelope glycoprotein gp120, as well as either the CCR5 (R5) or CXCR4 (X4) co-receptors, which interact primarily with the third hypervariable loop (V3 loop) of gp120. Determination of HIV-1 affinity for either the R5 or X4 co-receptor on host cells facilitates the inclusion of co-receptor antagonists as a part of patient treatment strategies. A dataset of 1193 distinct gp120 V3 loop peptide sequences (989 R5-utilizing, 204 X4-capable) is utilized to train predictive classifiers based on implementations of random forest, support vector machine, boosted decision tree, and neural network machine learning algorithms. An in silico mutagenesis procedure employing multibody statistical potentials, computational geometry, and threading of variant V3 sequences onto an experimental structure is used to generate a feature vector representation for each variant whose components measure environmental perturbations at corresponding structural positions. Results Classifier performance is evaluated based on stratified 10-fold cross-validation, stratified dataset splits (2/3 training, 1/3 validation), and leave-one-out cross-validation. Best reported values of sensitivity (85%), specificity (100%), and precision (98%) for predicting X4-capable HIV-1 virus, overall accuracy (97%), Matthew's correlation coefficient (89%), balanced error rate (0.08), and ROC area (0.97) all reach critical thresholds, suggesting that the models outperform six other state-of-the-art methods and come closer to competing with phenotype assays. Conclusions The trained classifiers provide instantaneous and reliable predictions regarding HIV-1 co-receptor usage, requiring only translated V3 loop genotypes as input. Furthermore, the novelty of these computational mutagenesis based predictor attributes distinguishes the models as orthogonal and complementary to previous methods that utilize sequence
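
    A hedged sketch of the classification step only, using a random forest with stratified 10-fold cross-validation; the feature matrix below is random placeholder data rather than the paper's computational-mutagenesis features, and the per-position feature count is an assumption.

```python
# Sketch of the classifier training/evaluation stage: random forest predicting
# X4-capable vs R5 virus from per-position structural perturbation scores.
# The features here are random placeholders, not the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_r5, n_x4, n_positions = 989, 204, 35
X = rng.normal(size=(n_r5 + n_x4, n_positions))   # one score per V3 position (placeholder)
y = np.array([0] * n_r5 + [1] * n_x4)             # 0 = R5, 1 = X4-capable

clf = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"stratified 10-fold ROC AUC: {auc.mean():.2f} ± {auc.std():.2f}")
```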

  5. Overlapping gene expression profiles of model compounds provide opportunities for immunotoxicity screening

    International Nuclear Information System (INIS)

    Baken, Kirsten A.; Pennings, Jeroen L.A.; Jonker, Martijs J.; Schaap, Mirjam M.; Vries, Annemieke de; Steeg, Harry van; Breit, Timo M.; Loveren, Henk van

    2008-01-01

    In order to investigate immunotoxic effects of a set of model compounds in mice, a toxicogenomics approach was combined with information on macroscopic and histopathological effects on spleens and on modulation of immune function. Bis(tri-n-butyltin)oxide (TBTO), cyclosporin A (CsA), and benzo[a]pyrene (B[a]P) were administered to C57BL/6 mice at immunosuppressive dose levels. Acetaminophen (APAP) was included in the study since indications of immunomodulating properties of this compound have appeared in the literature. TBTO exposure caused the most pronounced effect on gene expression and also resulted in the most severe reduction of body weight gain and induction of splenic irregularities. All compounds caused inhibition of cell division in the spleen, as shown by microarray analysis as well as by suppression of lymphocyte proliferation after application of a contact sensitizer, as demonstrated in an immune function assay that was adapted from the local lymph node assay. The immunotoxicogenomics approach applied in this study thus pointed to immunosuppression through cell cycle arrest as a common mechanism of action of immunotoxicants, including APAP. Genes related to cell division such as Ccna2, Brca1, Birc5, Incenp, and Cdkn1a (p21) were identified as candidate genes to indicate anti-proliferative effects of xenobiotics in immune cells for future screening assays. The results of our experiments also show the value of group-wise pathway analysis for detecting more subtle transcriptional effects, and the potential of evaluating effects in the spleen to demonstrate immunotoxicity.

  6. The Charrette Design Model Provides a Means to Promote Collaborative Design in Higher Education

    Directory of Open Access Journals (Sweden)

    Webber Steven B.

    2016-02-01

    Full Text Available Higher education is typically compartmentalized by field and expertise level, leading to a lack of collaboration across disciplines and reduced interaction among students of the same discipline who possess varying levels of expertise. The divisions between disciplines and expertise levels can be perforated through the use of a concentrated, short-term design problem called a charrette. The charrette is commonly used in architecture and interior design, and applications in other disciplines are possible. The use of the charrette in an educational context provides design students the opportunity to collaborate in teams where members have varying levels of expertise and consult with experts in allied disciplines, in preparation for a profession that will expect the same. In the context of a competitive charrette, this study examines the effectiveness of forming teams of design students that possess a diversity of expertise. This study also looks at the effectiveness of integrating input from professional experts in design-allied disciplines (urban planning, architecture, mechanical and electrical engineering) and a design-scenario-specific discipline (medicine) into the students' design process. Using a chi-square goodness-of-fit test, it is possible to determine students' preferences regarding the team configurations as well as their preferences regarding the experts. In this charrette context, the students indicated that the cross-expertise student team make-up had a positive effect for both the more experienced students and the less experienced students. Overall, the students placed high value on the input from experts in design-allied fields for the charrette. They also preferred input from external experts that had immediate and practical implications for their design process. This article will also show student work examples as additional evidence of the successful cross-expertise collaboration among the design students.
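
    For readers unfamiliar with the statistic, the snippet below shows a chi-square goodness-of-fit test of the kind reported above; the preference counts are hypothetical, not the study's data.

```python
# Illustrative only: test whether students' stated preferences for expert
# input depart from a uniform (no-preference) split. Counts are made up.
from scipy.stats import chisquare

observed = [21, 14, 9, 6]           # students ranking each expert as most useful (hypothetical)
expected = [sum(observed) / 4] * 4  # null hypothesis: no preference among the four experts

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")  # a small p suggests a real preference
```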

  7. Accurate thickness measurement of graphene

    International Nuclear Information System (INIS)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-01-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. (paper)

  8. An accurate mobility model for the I-V characteristics of n-channel enhancement-mode MOSFETs with single-channel boron implantation

    International Nuclear Information System (INIS)

    Chingyuan Wu; Yeongwen Daih

    1985-01-01

    In this paper an analytical mobility model is developed for the I-V characteristics of n-channel enhancement-mode MOSFETs, in which the effects of the two-dimensional electric fields in the surface inversion channel and the parasitic resistances due to contact and interconnection are included. Most importantly, the developed mobility model easily takes the device structure and process into consideration. In order to demonstrate the capabilities of the developed model, the structure- and process-oriented parameters in the present mobility model are calculated explicitly for an n-channel enhancement-mode MOSFET with single-channel boron implantation. Moreover, n-channel MOSFETs with different channel lengths fabricated in a production line by using a set of test keys have been characterized and the measured mobilities have been compared to the model. Excellent agreement has been obtained for all ranges of the fabricated channel lengths, which strongly supports the accuracy of the model. (author)

  9. A simple simulation model as a tool to assess alternative health care provider payment reform options in Vietnam.

    Science.gov (United States)

    Cashin, Cheryl; Phuong, Nguyen Khanh; Shain, Ryan; Oanh, Tran Thi Mai; Thuy, Nguyen Thi

    2015-01-01

    Vietnam is currently considering a revision of its 2008 Health Insurance Law, including the regulation of provider payment methods. This study uses a simple spreadsheet-based, micro-simulation model to analyse the potential impacts of different provider payment reform scenarios on resource allocation across health care providers in three provinces in Vietnam, as well as on the total expenditure of the provincial branches of the public health insurance agency (Provincial Social Security [PSS]). The results show that more than 50% of PSS spending is currently concentrated at the provincial level, with less than half at the district level. The current fund-holding arrangement also places a high degree of financial risk on district hospitals. Results of the simulation model show that several alternative scenarios for provider payment reform could improve the current payment system by reducing the high financial risk currently borne by district hospitals without dramatically shifting the current level and distribution of PSS expenditure. The results of the simulation analysis provided an empirical basis for health policy-makers in Vietnam to assess different provider payment reform options and make decisions about new models to support health system objectives.
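
    The following toy calculation illustrates, very loosely, what a spreadsheet-style payment micro-simulation compares; every provider, population figure and rate below is an invented placeholder rather than Vietnamese PSS data.

```python
# Purely illustrative: distribution of insurer spending across provider levels
# under fee-for-service versus capitation. All numbers are invented placeholders.
providers = [
    # (name, level, enrolled population, services delivered, fee per service)
    ("Provincial hospital", "province", 200_000, 150_000, 12.0),
    ("District hospital A", "district", 120_000, 60_000, 8.0),
    ("District hospital B", "district", 100_000, 45_000, 8.0),
]
capitation_rate = 9.0  # payment per enrolled person (placeholder)

for name, level, pop, services, fee in providers:
    ffs = services * fee          # fee-for-service revenue
    cap = pop * capitation_rate   # capitation revenue
    print(f"{name:22s} ({level:8s}): fee-for-service = {ffs:>12,.0f}, capitation = {cap:>12,.0f}")
```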

  10. Creation of a Collaborative Disaster Preparedness Video for Daycare Providers: Use of the Delphi Model for the Creation of a Comprehensive Disaster Preparedness Video for Daycare Providers.

    Science.gov (United States)

    Mar, Pamela; Spears, Robert; Reeb, Jeffrey; Thompson, Sarah B; Myers, Paul; Burke, Rita V

    2018-02-22

    Eight million American children under the age of 5 attend daycare, and more than another 50 million American children are in school or daycare settings. Emergency planning requirements for daycare licensing vary by state. Expert opinions were used to create a disaster preparedness video designed for daycare providers to cover a broad spectrum of scenarios. Seventeen stakeholders devised the outline for an educational pre-disaster video for child daycare providers using the Delphi technique. Fleiss κ values were obtained for the consensus data. A 20-minute video was created, addressing the physical, psychological, and legal needs of children during and after a disaster. Viewers completed an anonymous survey to evaluate topic comprehension. Consensus was attempted on all topics, ranging from elements for inclusion to presentation format, and an overall Fleiss κ value of 0.07 was obtained. Fifty-seven of the total 168 video viewers completed the 10-question survey, with comprehension scores ranging from 72% to 100%. Evaluation of caregivers who viewed our video supports their comprehension of the video contents. Ultimately, the technique used to create and disseminate the resources may serve as a template for others providing pre-disaster planning education. (Disaster Med Public Health Preparedness. 2018;page 1 of 5).
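
    As an aside on the consensus statistic, a Fleiss κ can be computed as in the hedged sketch below; the ratings matrix is invented for illustration, whereas the study itself reported an overall κ of 0.07 (low agreement).

```python
# Sketch only: Fleiss' kappa across several raters scoring candidate topics.
# Ratings are hypothetical placeholders, not the study's Delphi data.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = topics voted on, columns = stakeholders, values = rating category (0/1/2).
ratings = np.array([
    [2, 2, 1, 2, 0],
    [0, 1, 0, 2, 1],
    [1, 1, 1, 0, 2],
    [2, 0, 2, 1, 1],
])

table, _ = aggregate_raters(ratings)   # counts of each category per topic
print(f"Fleiss kappa = {fleiss_kappa(table):.2f}")
```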

  11. Investigating Effective Components of Higher Education Marketing and Providing a Marketing Model for Iranian Private Higher Education Institutions

    Science.gov (United States)

    Kasmaee, Roya Babaee; Nadi, Mohammad Ali; Shahtalebi, Badri

    2016-01-01

    Purpose: The purpose of this paper is to study and identify the effective components of higher education marketing and providing a marketing model for Iranian higher education private sector institutions. Design/methodology/approach: This study is a qualitative research. For identifying the effective components of higher education marketing and…

  12. The Development of Mouse APECED Models Provides New Insight into the Role of AIRE in Immune Regulation

    OpenAIRE

    Pereira, Lara E.; Bostik, Pavel; Ansari, Aftab A.

    2005-01-01

    Autoimmune polyendocrinopathy candidiasis ectodermal dystrophy is a rare recessive autoimmune disorder caused by a defect in a single gene called AIRE (autoimmune regulator). Characteristics of this disease include a variable combination of autoimmune endocrine tissue destruction, mucocutaneous candidiasis and ectodermal dystrophies. The development of Aire-knockout mice has provided an invaluable model for the st...

  13. Charging and discharging tests for obtaining an accurate dynamic electro-thermal model of high power lithium-ion pack system for hybrid and EV applications

    DEFF Research Database (Denmark)

    Mihet-Popa, Lucian; Camacho, Oscar Mauricio Forero; Nørgård, Per Bromand

    2013-01-01

    The aim of the tests has been to study the impact of battery degradation and to find the dynamic characteristics of the cells, including nonlinear open-circuit voltage, series resistance and a parallel transient circuit, at different charge/discharge currents and cell temperatures. An equivalent circuit model, based on the runtime battery model and the Thevenin circuit model, with parameters obtained from the tests and depending on SOC, current and temperature, has been implemented in MATLAB/Simulink and Power Factory. A good alignment between simulations and measurements has been found.
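
    A minimal stand-alone sketch of a first-order Thevenin equivalent circuit of the kind described above, assuming constant placeholder parameters; in the study the parameters were fitted to the charge/discharge tests as functions of SOC, current and temperature, and the model was implemented in MATLAB/Simulink rather than Python.

```python
# Sketch only: first-order Thevenin battery model under constant-current discharge.
# OCV curve, resistances and capacitance below are placeholder values.
capacity_ah = 40.0                   # cell capacity (Ah)
r0, r1, c1 = 0.002, 0.001, 5000.0    # series resistance, RC branch (ohm, ohm, farad)

def ocv(soc):
    return 3.0 + 1.2 * soc           # crude linear open-circuit-voltage curve

dt, current = 1.0, 20.0              # 1 s steps, 20 A discharge
soc, v1 = 1.0, 0.0
for _ in range(3600):                # one hour of discharge
    soc -= current * dt / (capacity_ah * 3600.0)
    v1 += dt * (current / c1 - v1 / (r1 * c1))   # RC-branch transient voltage
    v_term = ocv(soc) - current * r0 - v1        # terminal voltage
print(f"SOC = {soc:.2f}, terminal voltage = {v_term:.3f} V")
```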

  14. Improving sexual health communication between older women and their providers: how the integrative model of behavioral prediction can help.

    Science.gov (United States)

    Hughes, Anne K; Rostant, Ola S; Curran, Paul G

    2014-07-01

    Talking about sexual health can be a challenge for some older women. This project was initiated to identify key factors that improve communication between aging women and their primary care providers. A sample of women (aged 60+) completed an online survey regarding their intent to communicate with a provider about sexual health. Using the integrative model of behavioral prediction as a guide, the survey instrument captured data on attitudes, perceived norms, self-efficacy, and intent to communicate with a provider about sexual health. Data were analyzed using structural equation modeling. Self-efficacy and perceived norms were the most important factors predicting intent to communicate for this sample of women. Intent did not vary with race, but mean scores of the predictors of intent varied for African American and White women. Results can guide practice and intervention with ethnically diverse older women who may be struggling to communicate about their sexual health concerns. © The Author(s) 2013.

  15. Common Sense Model Factors Affecting African Americans' Willingness to Consult a Healthcare Provider Regarding Symptoms of Mild Cognitive Impairment.

    Science.gov (United States)

    Gleason, Carey E; Dowling, N Maritza; Benton, Susan Flowers; Kaseroff, Ashley; Gunn, Wade; Edwards, Dorothy Farrar

    2016-07-01

    Although at increased risk for developing dementia compared with white patients, older African Americans are diagnosed later in the course of dementia. Using the common sense model (CSM) of illness perception, we sought to clarify processes promoting timely diagnosis of mild cognitive impairment (MCI) for African American patients. In-person, cross-sectional survey data were obtained from 187 African Americans (mean age: 60.44 years). Data were collected at social and health-focused community events in three southern Wisconsin cities. The survey represented a compilation of published surveys querying CSM constructs focused on early detection of memory disorders, and willingness to discuss concerns about memory loss with healthcare providers. Derived CSM variables measuring perceived causes, consequences, and controllability of MCI were included in a structural equation model predicting the primary outcome: Willingness to discuss symptoms of MCI with a provider. Two CSM factors influenced willingness to discuss symptoms of MCI with providers: Anticipation of beneficial consequences and perception of low harm associated with an MCI diagnosis predicted participants' willingness to discuss concerns about cognitive changes. No association was found between perceived controllability and causes of MCI, and willingness to discuss symptoms with providers. These data suggest that allaying concerns about the deleterious effects of a diagnosis, and raising awareness of potential benefits, could influence an African American patient's willingness to discuss symptoms of MCI with a provider. The findings offer guidance to designers of culturally congruent MCI education materials, and healthcare providers caring for older African Americans. Published by Elsevier Inc.

  16. A random forest based risk model for reliable and accurate prediction of receipt of transfusion in patients undergoing percutaneous coronary intervention.

    Directory of Open Access Journals (Sweden)

    Hitinder S Gurm

    Full Text Available BACKGROUND: Transfusion is a common complication of Percutaneous Coronary Intervention (PCI) and is associated with adverse short and long term outcomes. There is no risk model for identifying patients most likely to receive transfusion after PCI. The objective of our study was to develop and validate a tool for predicting receipt of blood transfusion in patients undergoing contemporary PCI. METHODS: Random forest models were developed utilizing 45 pre-procedural clinical and laboratory variables to estimate the receipt of transfusion in patients undergoing PCI. The most influential variables were selected for inclusion in an abbreviated model. Model performance estimating transfusion was evaluated in an independent validation dataset using area under the ROC curve (AUC), with net reclassification improvement (NRI) used to compare full and reduced model prediction after grouping into low, intermediate, and high risk categories. The impact of procedural anticoagulation on observed versus predicted transfusion rates was assessed for the different risk categories. RESULTS: Our study cohort was comprised of 103,294 PCI procedures performed at 46 hospitals from July 2009 through December 2012 in Michigan, of which 72,328 (70%) were randomly selected for training the models and 30,966 (30%) for validation. The models demonstrated excellent calibration and discrimination (AUC: full model = 0.888 (95% CI 0.877-0.899), reduced model AUC = 0.880 (95% CI 0.868-0.892), p for difference 0.003, NRI = 2.77%, p = 0.007). Procedural anticoagulation and radial access significantly influenced transfusion rates in the intermediate and high risk patients, but no clinically relevant impact was noted in low risk patients, who made up 70% of the total cohort. CONCLUSIONS: The risk of transfusion among patients undergoing PCI can be reliably calculated using a novel, easy-to-use computational tool (https://bmc2.org/calculators/transfusion).
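
    The full-versus-abbreviated comparison can be illustrated schematically as below, using synthetic data in place of the registry records; the variable count, sample size and the choice of ten retained predictors are assumptions for the sketch only.

```python
# Schematic sketch: rank predictors by random forest importance, refit a
# reduced model on the top few, and compare held-out AUC. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=45, n_informative=10,
                           weights=[0.95], random_state=0)  # rare outcome, 45 predictors
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

full = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
auc_full = roc_auc_score(y_te, full.predict_proba(X_te)[:, 1])

top = np.argsort(full.feature_importances_)[::-1][:10]   # most influential variables
reduced = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr[:, top], y_tr)
auc_reduced = roc_auc_score(y_te, reduced.predict_proba(X_te[:, top])[:, 1])

print(f"full AUC = {auc_full:.3f}, reduced AUC = {auc_reduced:.3f}")
```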

  17. Accurate dispersion calculations: AUSTAL2000

    International Nuclear Information System (INIS)

    Janicke, U.

    2005-01-01

    Until the 2002 amendment of the Clean Air Technical Code of 1968, Annex C of this regulation required standard pollutant emission forecasts to be based on the Gaussian plume model. It was clear even at the time of the Code's initial promulgation that this model is only valid in a very narrow application range and in particular not in cases of sources close to ground level, low ground surface roughness and complex dispersion situations. In German licensing procedures there has, for this reason, been an increasing use of more complex models over the past 10 years, the most frequently used of which today is a Lagrangian dispersion model. This model type was standardised in VDI (Association of German Engineers) Guideline 3945 Sheet 3 in the year 2000. In the course of amending the Clean Air Technical Code in accordance with the new EU Framework Directive, the decision was taken at the Environmental Protection Office to replace the Gaussian plume model with the Lagrangian model as described in VDI 3945 Sheet 3. Using the LASAT dispersion model as a basis, the AUSTAL2000 program system has now been developed, providing an example of how the algorithms of Annex 3 of the Clean Air Technical Code can be used in practice. AUSTAL2000 has been available on the Internet since the year 2002, along with source text, documentation and example calculations.
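
    For contrast with the Lagrangian approach adopted in AUSTAL2000, the snippet below evaluates the classical Gaussian plume formula for a ground-level receptor; the dispersion-coefficient power laws are rough placeholders, not the regulatory parameterisations.

```python
# Sketch of the classical Gaussian plume formula (with ground reflection);
# the sigma_y/sigma_z parameterisations are crude placeholders.
import numpy as np

def ground_level_concentration(q, u, x, y, h_stack):
    """Ground-level concentration (g/m^3) downwind of a continuous point source.

    q: emission rate (g/s), u: wind speed (m/s), x: downwind distance (m),
    y: crosswind distance (m), h_stack: effective release height (m).
    """
    sigma_y = 0.08 * x ** 0.9    # placeholder horizontal dispersion coefficient
    sigma_z = 0.06 * x ** 0.85   # placeholder vertical dispersion coefficient
    return (q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y ** 2 / (2 * sigma_y ** 2))
            * np.exp(-h_stack ** 2 / (2 * sigma_z ** 2)))

print(f"{ground_level_concentration(q=10.0, u=3.0, x=500.0, y=0.0, h_stack=50.0):.2e} g/m^3")
```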

  18. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and HDF5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.
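
    The sketch below shows a generic non-relativistic Boris pusher, one of the standard structure-preserving particle advances of the kind APT builds on; it is not code from APT, and the fields and parameters are arbitrary test values.

```python
# Generic non-relativistic Boris pusher (stand-alone sketch, not APT code).
import numpy as np

def boris_step(x, v, e_field, b_field, q_over_m, dt):
    """Advance one charged particle by one time step with the Boris rotation scheme."""
    v_minus = v + 0.5 * dt * q_over_m * e_field          # first half electric kick
    t = 0.5 * dt * q_over_m * b_field                    # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)              # rotation about B
    v_new = v_plus + 0.5 * dt * q_over_m * e_field       # second half electric kick
    return x + dt * v_new, v_new

x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])          # position (m), velocity (m/s)
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0e-3])          # fields (V/m, T), test values
for _ in range(1000):
    x, v = boris_step(x, v, E, B, q_over_m=-1.76e11, dt=1e-9)   # electron charge-to-mass ratio
print(np.linalg.norm(v))   # |v| stays constant in a pure magnetic field, as expected
```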

  19. Modeling the growth and decline of pathogen effective population size provides insight into epidemic dynamics and drivers of antimicrobial resistance.

    Science.gov (United States)

    Volz, Erik M; Didelot, Xavier

    2018-02-07

    Non-parametric population genetic modeling provides a simple and flexible approach for studying demographic history and epidemic dynamics using pathogen sequence data. Existing Bayesian approaches are premised on stochastic processes with stationary increments, which may provide an unrealistic prior for epidemic histories that feature extended periods of exponential growth or decline. We show that non-parametric models defined in terms of the growth rate of the effective population size can provide a more realistic prior for epidemic history. We propose a non-parametric autoregressive model on the growth rate as a prior for effective population size, which corresponds to the dynamics expected under many epidemic situations. We demonstrate the use of this model within a Bayesian phylodynamic inference framework. Our method correctly reconstructs trends of epidemic growth and decline from pathogen genealogies even when genealogical data are sparse and conventional skyline estimators erroneously predict stable population size. We also propose a regression approach for relating growth rates of pathogen effective population size and time-varying variables that may impact the replicative fitness of a pathogen. The model is applied to real data from rabies virus and Staphylococcus aureus epidemics. We find a close correspondence between the estimated growth rates of a lineage of methicillin-resistant S. aureus and population-level prescription rates of β-lactam antibiotics. The new models are implemented in an open source R package called skygrowth which is available at https://github.com/mrc-ide/skygrowth. © The Author(s) 2018. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
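
    A toy forward simulation of the prior described above (not the skygrowth package itself): the growth rate of the effective population size follows a first-order autoregressive process and Ne integrates that growth rate; the persistence and noise parameters are placeholders.

```python
# Toy draw from an AR(1) growth-rate prior on effective population size.
# Parameters are illustrative placeholders, not fitted values.
import numpy as np

rng = np.random.default_rng(1)
n_steps, dt = 200, 0.1          # time grid (arbitrary units)
phi, sigma = 0.95, 0.15         # AR(1) persistence and innovation s.d. (placeholders)

growth = np.zeros(n_steps)
log_ne = np.zeros(n_steps)
log_ne[0] = np.log(100.0)       # initial effective population size
for i in range(1, n_steps):
    growth[i] = phi * growth[i - 1] + sigma * rng.normal()   # autoregressive growth rate
    log_ne[i] = log_ne[i - 1] + growth[i] * dt               # Ne integrates the growth rate

ne = np.exp(log_ne)
print(f"Ne ranges from {ne.min():.1f} to {ne.max():.1f} under this prior draw")
```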

  20. [Barriers to the normalization of telemedicine in a healthcare system model based on purchasing of healthcare services using providers' contracts].

    Science.gov (United States)

    Roig, Francesc; Saigí, Francesc

    2011-01-01

    Despite the clear political will to promote telemedicine and the large number of initiatives, the incorporation of this modality in clinical practice remains limited. The objective of this study was to identify the barriers perceived by key professionals who actively participate in the design and implementation of telemedicine in a healthcare system model based on purchasing of healthcare services using providers' contracts. We performed a qualitative study based on data from semi-structured interviews with 17 key informants belonging to distinct Catalan health organizations. The barriers identified were grouped in four areas: technological, organizational, human and economic. The main barriers identified were changes in the healthcare model caused by telemedicine, problems with strategic alignment, resistance to change in the (re)definition of roles, responsibilities and new skills, and lack of a business model that incorporates telemedicine in the services portfolio to ensure its sustainability. In addition to suitable management of change and of the necessary strategic alignment, the definitive normalization of telemedicine in a mixed healthcare model based on purchasing of healthcare services using providers' contracts requires a clear and stable business model that incorporates this modality in the services portfolio and allows healthcare organizations to obtain reimbursement from the payer. 2010 SESPAS. Published by Elsevier Espana. All rights reserved.

  1. The Development of Mouse APECED Models Provides New Insight into the Role of AIRE in Immune Regulation

    Science.gov (United States)

    Pereira, Lara E.; Bostik, Pavel; Ansari, Aftab A.

    2005-01-01

    Autoimmune polyendocrinopathy candidiasis ectodermal dystrophy is a rare recessive autoimmune disorder caused by a defect in a single gene called AIRE (autoimmune regulator). Characteristics of this disease include a variable combination of autoimmune endocrine tissue destruction, mucocutaneous candidiasis and ectodermal dystrophies. The development of Aire-knockout mice has provided an invaluable model for the study of this disease. The aim of this review is to briefly highlight the strides made in APECED research using these transgenic murine models, with a focus on known roles of Aire in autoimmunity. The findings thus far are compelling and prompt additional areas of study which are discussed. PMID:16295527

  2. The Development of Mouse APECED Models Provides New Insight into the Role of AIRE in Immune Regulation

    Directory of Open Access Journals (Sweden)

    Lara E. Pereira

    2005-01-01

    Full Text Available Autoimmune polyendocrinopathy candidiasis ectodermal dystrophy is a rare recessive autoimmune disorder caused by a defect in a single gene called AIRE (autoimmune regulator. Characteristics of this disease include a variable combination of autoimmune endocrine tissue destruction, mucocutaneous candidiasis and ectodermal dystrophies. The development of Aire-knockout mice has provided an invaluable model for the study of this disease. The aim of this review is to briefly highlight the strides made in APECED research using these transgenic murine models, with a focus on known roles of Aire in autoimmunity. The findings thus far are compelling and prompt additional areas of study which are discussed.

  3. Assimilation of Sentinel-1 estimates of Precipitable Water Vapor (PWV) into a Numerical Weather Model for a more accurate forecast of extreme weather events

    Science.gov (United States)

    Mateus, Pedro; Nico, Giovanni; Catalao, Joao

    2017-04-01

    In the last two decades, SAR interferometry has been used to obtain maps of Precipitable Water Vapor (PWV). These maps are characterized by their high spatial resolution when compared to the currently available PWV measurements (e.g. GNSS, radiometers or radiosondes). Several previous works have shown that assimilating PWV values, mainly derived from GNSS observations, into Numerical Weather Models (NWMs) can significantly improve rainfall predictions. It is noteworthy that PWV derived from GNSS observations has a high temporal resolution but a low spatial one. In addition, there are many regions without any GNSS stations, where the temporal and spatial distribution of PWV is only available through satellite measurements. The first attempt to assimilate InSAR-derived maps of PWV (InSAR-PWV) into a NWM was made by Pichelli et al. [1]. They used InSAR-PWV maps obtained from ENVISAT-ASAR images and the mesoscale weather prediction model MM5 over the city of Rome, Italy. The statistical indices show that the InSAR-PWV data assimilation improves the forecast of weak to moderate precipitation. References: [1] E. Pichelli et al., "InSAR water vapor data assimilation into mesoscale model MM5: Technique and pilot study," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 8, no. 8, pp. 3859-3875, Aug. 2015. [2] P. Mateus, R. Tomé, G. Nico, and J. Catalão, "Three-Dimensional Variational Assimilation of InSAR PWV Using the WRFDA Model," IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 12, pp. 7323-7330, 2016.

  4. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    Science.gov (United States)

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

    Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of polydimethylsiloxane (PDMS), values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, indicating, as anticipated, the better performance of the pp-LFER model relative to COSMO-RS. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. Copyright © 2016. Published by Elsevier Ltd.
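
    To make the pp-LFER route concrete, the sketch below assembles a log K estimate as a weighted sum of Abraham solute descriptors; the system coefficients and the example descriptor values are placeholders, not the fitted PDMS-air parameters from this work.

```python
# Hedged sketch of a poly-parameter LFER estimate: log K is a weighted sum of
# Abraham solute descriptors. Coefficients and descriptors are placeholders.
def log_k_pplfer(E, S, A, B, L, coeffs=(0.0, 0.3, 0.5, 1.0, 0.2, 0.9)):
    """Return log K from solute descriptors E, S, A, B, L and system coefficients."""
    c, e, s, a, b, l = coeffs
    return c + e * E + s * S + a * A + b * B + l * L

# Hypothetical descriptor set for a semi-volatile solute.
print(f"estimated log K(PDMS-air) = {log_k_pplfer(E=1.5, S=1.2, A=0.0, B=0.3, L=8.0):.2f}")
```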

  5. Accurate mean-field modeling of the Barkhausen noise power in ferromagnetic materials, using a positive-feedback theory of ferromagnetism

    Science.gov (United States)

    Harrison, R. G.

    2015-07-01

    A mean-field positive-feedback (PFB) theory of ferromagnetism is used to explain the origin of Barkhausen noise (BN) and to show why it is most pronounced in the irreversible regions of the hysteresis loop. By incorporating the ABBM-Sablik model of BN into the PFB theory, we obtain analytical solutions that simultaneously describe both the major hysteresis loop and, by calculating separate expressions for the differential susceptibility in the irreversible and reversible regions, the BN power response at all points of the loop. The PFB theory depends on summing components of the applied field, in particular, the non-monotonic field-magnetization relationship characterizing hysteresis, associated with physical processes occurring in the material. The resulting physical model is then validated by detailed comparisons with measured single-peak BN data in three different steels. It also agrees with the well-known influence of a demagnetizing field on the position and shape of these peaks. The results could form the basis of a physics-based method for modeling and understanding the significance of the observed single-peak (and in multi-constituent materials, multi-peak) BN envelope responses seen in contemporary applications of BN, such as quality control in manufacturing, non-destructive testing, and monitoring the microstructural state of ferromagnetic materials.

  6. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Science.gov (United States)

    Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.

    2015-01-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
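
    A schematic version of the voxel-wise modeling procedure, fitting a regularized linear encoding model for a single voxel and scoring it by held-out variance explained; features and responses below are random placeholders rather than the fMRI data.

```python
# Schematic voxel-wise encoding model: fit a linear model to one voxel's
# responses and score variance explained on held-out images. Placeholder data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_features = 1386, 200
X = rng.normal(size=(n_images, n_features))          # e.g. Fourier-power features per image
true_w = rng.normal(size=n_features) * (rng.random(n_features) < 0.05)
y = X @ true_w + rng.normal(scale=2.0, size=n_images)  # one voxel's simulated BOLD response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)             # regularized linear encoding model
print(f"held-out variance explained: {r2_score(y_te, model.predict(X_te)):.2f}")
```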

  7. Fourier power, subjective distance and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Directory of Open Access Journals (Sweden)

    Mark Daniel Lescroart

    2015-11-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1,386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue.

  8. Preliminary results of an attempt to provide soil moisture datasets in order to verify numerical weather prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Cassardo, C. [Torino Univ., Torino (Italy). Dipartimento di fisica generale Amedeo Avogadro; Loglisci, N. [ARPA, Torino (Italy). Servizio meteorologico regionale

    2005-03-15

    In recent years, there has been significant growth in the recognition of the importance of soil moisture in large-scale hydrology and climate modelling. Soil moisture is a lower boundary condition that governs the partitioning of energy into sensible and latent heat fluxes. Incorrect estimates of soil moisture lead to incorrect simulation of the surface-layer evolution, and precipitation and cloud-cover forecasts can consequently be affected. This is true for large-scale medium-range weather forecasts as well as for local-scale short-range forecasts, particularly in situations in which local convection is well developed. Unfortunately, despite the importance of this physical parameter, only a few soil moisture datasets, sparse in both time and space, are available worldwide. Because of this scarcity of soil moisture observations, we developed an alternative method of providing soil moisture datasets with which to verify numerical weather prediction models. This paper presents the preliminary results of an attempt to verify soil moisture fields predicted by a mesoscale model. The data for the comparison were provided by simulations of the diagnostic land surface scheme LSPM (Land Surface Process Model), widely used at the Piedmont Regional Weather Service for agro-meteorological purposes. To this end, LSPM was initialized and driven by Synop observations, while the surface (vegetation and soil) parameter values were initialized from the ECOCLIMAP global dataset at 1 km² resolution.
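
    The verification step itself can be pictured with a small sketch: given matched series of LSPM-derived soil moisture (the reference) and model-predicted soil moisture, compute simple verification scores such as bias, RMSE, and correlation. The variable names and synthetic values below are assumptions for illustration, not data from the study.

```python
# Minimal sketch of verifying a forecast soil moisture field against a
# LSPM-derived reference dataset using basic verification scores.
import numpy as np

def verification_scores(forecast, reference):
    """forecast, reference: arrays of volumetric soil moisture at matched
    stations/times. Returns mean bias, RMSE and Pearson correlation."""
    f, r = np.asarray(forecast, float), np.asarray(reference, float)
    bias = np.mean(f - r)
    rmse = np.sqrt(np.mean((f - r) ** 2))
    corr = np.corrcoef(f, r)[0, 1]
    return bias, rmse, corr

# Example with synthetic data standing in for LSPM output and model forecasts.
rng = np.random.default_rng(1)
lspm = np.clip(0.25 + 0.05 * rng.standard_normal(200), 0.0, 0.45)
nwp = np.clip(lspm + 0.02 + 0.03 * rng.standard_normal(200), 0.0, 0.45)
print("bias=%.3f rmse=%.3f corr=%.2f" % verification_scores(nwp, lspm))
```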

  9. Preliminary results of an attempt to provide soil moisture datasets in order to verify numerical weather prediction models

    International Nuclear Information System (INIS)

    Cassardo, C.; Loglisci, N.

    2005-01-01

    In recent years, there has been significant growth in the recognition of the importance of soil moisture in large-scale hydrology and climate modelling. Soil moisture is a lower boundary condition that governs the partitioning of energy into sensible and latent heat fluxes. Incorrect estimates of soil moisture lead to incorrect simulation of the surface-layer evolution, and precipitation and cloud-cover forecasts can consequently be affected. This is true for large-scale medium-range weather forecasts as well as for local-scale short-range forecasts, particularly in situations in which local convection is well developed. Unfortunately, despite the importance of this physical parameter, only a few soil moisture datasets, sparse in both time and space, are available worldwide. Because of this scarcity of soil moisture observations, we developed an alternative method of providing soil moisture datasets with which to verify numerical weather prediction models. This paper presents the preliminary results of an attempt to verify soil moisture fields predicted by a mesoscale model. The data for the comparison were provided by simulations of the diagnostic land surface scheme LSPM (Land Surface Process Model), widely used at the Piedmont Regional Weather Service for agro-meteorological purposes. To this end, LSPM was initialized and driven by Synop observations, while the surface (vegetation and soil) parameter values were initialized from the ECOCLIMAP global dataset at 1 km² resolution.

  10. A Mathematical Model of Metabolism and Regulation Provides a Systems-Level View of How Escherichia coli Responds to Oxygen

    Directory of Open Access Journals (Sweden)

    Michael eEderer

    2014-03-01

    The efficient redesign of bacteria for biotechnological purposes, such as biofuel production, waste disposal or specific biocatalytic functions, requires a quantitative systems-level understanding of energy supply, carbon and redox metabolism. The measurement of transcript levels, metabolite concentrations and metabolic fluxes per se gives an incomplete picture. An appreciation of the interdependencies between the different measurement values is essential for systems-level understanding. Mathematical modeling has the potential to provide a coherent and quantitative description of the interplay between gene expression, metabolite concentrations and metabolic fluxes. Escherichia coli undergoes major adaptations in central metabolism when the availability of oxygen changes. Thus, an integrated description of the oxygen response provides a benchmark of our understanding of carbon, energy and redox metabolism. We present the first comprehensive model of the central metabolism of E. coli that describes steady-state metabolism at different levels of oxygen availability. Variables of the model are metabolite concentrations, gene expression levels, transcription factor activities, metabolic fluxes and biomass concentration. We analyze the model with respect to the production capabilities of central metabolism of E. coli. In particular, we predict how precursor and biomass concentration are affected by product formation.

  11. Emerging Business Models in Education Provisioning: A Case Study on Providing Learning Support as Education-as-a-Service

    Directory of Open Access Journals (Sweden)

    Loina Prifti

    2017-09-01

    This study aims to give a deeper understanding of emerging business models in the context of education. Industry 4.0/the Industrial Internet in general, and recent advances in cloud computing in particular, enable a new kind of service offering in the education sector and lead to new business models for education: Education-as-a-Service (EaaS). Within EaaS, learning and teaching content is delivered as a service. By combining a literature review with a qualitative case study, this paper makes a three-fold contribution to the field of business models in education: first, we provide a theoretical definition for a common understanding of EaaS; second, we present the state of the art of research on this new paradigm; third, in the case study we describe a "best practices" business model of an existing EaaS provider. These insights build a theoretical foundation for further work in this area. The paper concludes with a research agenda for this emerging field.

  12. A Hybrid Artificial Reputation Model Involving Interaction Trust, Witness Information and the Trust Model to Calculate the Trust Value of Service Providers

    Directory of Open Access Journals (Sweden)

    Gurdeep Singh Ransi

    2014-02-01

    Agent interaction in a community, such as an online buyer-seller scenario, is often uncertain: when an agent comes into contact with other agents, they initially know nothing about each other. Many reputation models have been developed to help service consumers select better service providers. Reputation models also help agents decide whom they should trust and transact with in the future. These reputation models are built either on interaction trust, which uses direct experience as the source of information, or on witness information, also known as word-of-mouth, which uses reports provided by others. Neither interaction trust nor witness information models alone succeed in such uncertain interactions. In this paper we propose a hybrid reputation model involving both interaction trust and witness information to address the shortcomings of existing reputation models when taken separately. A sample simulation is built to set up buyer-seller services and uncertain interactions. Experiments reveal that the hybrid approach leads to better selection of trustworthy agents: consumers select more reputable service providers and ultimately obtain greater gains. Furthermore, the trust model developed is used to calculate the trust values of service providers.
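
    The sketch below illustrates one simple way such a hybrid trust value could be formed, blending interaction trust (direct experience) with witness information (reports from others); the specific weighting rule is an assumption for illustration, not the formula used in the paper.

```python
# Minimal sketch of a hybrid trust estimate: the weight on direct experience
# grows as more direct interactions accumulate; otherwise witness reports
# dominate. The weighting rule is illustrative, not the paper's model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrustRecord:
    direct_outcomes: List[float] = field(default_factory=list)   # 1.0 good, 0.0 bad
    witness_reports: List[float] = field(default_factory=list)   # ratings in [0, 1]

    def interaction_trust(self, prior: float = 0.5) -> float:
        n = len(self.direct_outcomes)
        return prior if n == 0 else sum(self.direct_outcomes) / n

    def witness_trust(self, prior: float = 0.5) -> float:
        n = len(self.witness_reports)
        return prior if n == 0 else sum(self.witness_reports) / n

    def hybrid_trust(self, confidence_scale: int = 10) -> float:
        # Weight on direct experience saturates as interactions accumulate.
        w = min(len(self.direct_outcomes) / confidence_scale, 1.0)
        return w * self.interaction_trust() + (1.0 - w) * self.witness_trust()

seller = TrustRecord(direct_outcomes=[1, 1, 0, 1], witness_reports=[0.9, 0.4, 0.7])
print(f"hybrid trust in seller: {seller.hybrid_trust():.2f}")
```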

  13. The impact of pediatric neuropsychological consultation in mild traumatic brain injury: a model for providing feedback after invalid performance.

    Science.gov (United States)

    Connery, Amy K; Peterson, Robin L; Baker, David A; Kirkwood, Michael W

    2016-05-01

    In recent years, pediatric practitioners have increasingly recognized the importance of objectively measuring performance validity during clinical assessments. Yet no studies have examined the impact of neuropsychological consultation when invalid performance has been identified in pediatric populations, and little published guidance exists for clinical management. Here we provide a conceptual model for providing feedback after noncredible performance has been detected. In a pilot study, we examine caregiver satisfaction and postconcussive symptoms following provision of this feedback for patients seen through our concussion program. Participants (N = 70) were 8-17-year-olds with a history of mild traumatic brain injury who underwent an abbreviated neuropsychological evaluation between 2 and 12 months post-injury. We examined postconcussive symptom reduction and caregiver satisfaction after neuropsychological evaluation between groups of patients who were determined to have provided noncredible effort (n = 9) and those for whom no validity concerns were present (n = 61). We found similarly high levels of caregiver satisfaction in both groups, and a greater reduction in self-reported symptoms among children with noncredible presentations than among those with credible presentations after feedback was provided using the model. The current study lends preliminary support to the idea that the identification and communication of invalid performance can be a beneficial clinical intervention that promotes high levels of caregiver satisfaction and a reduction in self-reported and caregiver-reported symptoms.

  14. The importance of accurately modelling human interactions. Comment on "Coupled disease-behavior dynamics on complex networks: A review" by Z. Wang et al.

    Science.gov (United States)

    Rosati, Dora P.; Molina, Chai; Earn, David J. D.

    2015-12-01

    Human behaviour and disease dynamics can greatly influence each other. In particular, people often engage in self-protective behaviours that affect epidemic patterns (e.g., vaccination, use of barrier precautions, isolation, etc.). Self-protective measures usually have a mitigating effect on an epidemic [16], but can in principle have negative impacts at the population level [12,15,18]. The structure of underlying social and biological contact networks can significantly influence the specific ways in which population-level effects are manifested. Using a different contact network in a disease dynamics model-keeping all else equal-can yield very different epidemic patterns. For example, it has been shown that when individuals imitate their neighbours' vaccination decisions with some probability, this can lead to herd immunity in some networks [9], yet for other networks it can preserve clusters of susceptible individuals that can drive further outbreaks of infectious disease [12].
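
    The imitation mechanism mentioned above can be made concrete with a toy simulation: with some probability, each individual copies the vaccination decision of a randomly chosen neighbour on a contact network. The sketch below, on a simple ring lattice with illustrative parameters, is not any of the specific models reviewed by Wang et al.

```python
# Minimal sketch of an imitation dynamic for vaccination decisions on a network.
import random

def ring_lattice(n, k=4):
    """Each node linked to its k nearest neighbours on a ring."""
    return {i: [(i + d) % n for d in range(-k // 2, k // 2 + 1) if d != 0]
            for i in range(n)}

def imitate(neighbours, steps=5000, p_imitate=0.5, init_coverage=0.3, seed=0):
    rng = random.Random(seed)
    vaccinated = {i: rng.random() < init_coverage for i in neighbours}
    for _ in range(steps):
        i = rng.choice(list(neighbours))
        # With probability p_imitate, node i copies a random neighbour's decision.
        if neighbours[i] and rng.random() < p_imitate:
            vaccinated[i] = vaccinated[rng.choice(neighbours[i])]
    return sum(vaccinated.values()) / len(vaccinated)

net = ring_lattice(200)
print("final vaccination coverage:", imitate(net))
```

    Swapping in a different network generator while keeping the update rule fixed illustrates how network structure alone can change the resulting coverage pattern.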

  15. Development of Accurate DFT Methods for Computing Redox Potentials of Transition Metal Complexes: Results for Model Complexes and Application to Cytochrome P450.

    Science.gov (United States)

    Hughes, Thomas F; Friesner, Richard A

    2012-02-14

    Single-electron reduction half potentials of 95 octahedral fourth-row transition metal complexes binding a diverse set of ligands have been calculated at the unrestricted pseudospectral B3LYP/LACV3P level of theory in a continuum solvent. Through systematic comparison of experimental and calculated potentials, it is determined that B3LYP strongly overbinds the d-manifold when the metal coordinates strongly interacting ligands and strongly underbinds the d-manifold when the metal coordinates weakly interacting ligands. These error patterns give rise to an extension of the localized orbital correction (LOC) scheme previously developed for organic molecules and which was recently extended to the spin-splitting properties of organometallic complexes. Mean unsigned errors in B3LYP redox potentials are reduced from 0.40 ± 0.20 V (0.88 V max error) to 0.12 ± 0.09 V (0.34 V max error) using a simple seven-parameter model. Although the focus of this article is on redox properties of transition metal complexes, we have found that applying our previous spin-splitting LOC model to an independent test set of oxidized and reduced complexes that are also spin-crossover complexes correctly reverses the ordering of spin states obtained with B3LYP. Interesting connections are made between redox and spin-splitting parameters with regard to the spectrochemical series and in their combined predictive power for properly closing the thermodynamic cycle of d-electron transitions in a transition metal complex. Results obtained from our large and diverse databases of spin-splitting and redox properties suggest that, while the error introduced by single reference B3LYP for simple multireference systems, like mononuclear transition metal complexes, remains significant, at around 2-5 kcal/mol, the dominant error, at around 10-20 kcal/mol, is in B3LYP's prediction of metal-ligand binding. Application of the LOC scheme to the rate-determining hydrogen atom transfer step in substrate
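
    The correction scheme can be pictured as a small linear fit: each complex contributes counts of a few structural features, and the correction parameters are chosen by least squares to cancel the systematic part of the DFT error. The sketch below uses synthetic counts and errors purely to illustrate how a seven-parameter fit reduces the mean unsigned error; it does not reproduce the authors' LOC parameters.

```python
# Minimal sketch of a LOC-style additive correction fit by least squares.
import numpy as np

rng = np.random.default_rng(2)
n_complexes, n_params = 95, 7
counts = rng.integers(0, 4, size=(n_complexes, n_params)).astype(float)
true_corr = rng.normal(0.0, 0.1, size=n_params)
# Raw DFT error = systematic part explainable by the corrections + noise.
raw_error = counts @ true_corr + rng.normal(0.0, 0.05, size=n_complexes)

params, *_ = np.linalg.lstsq(counts, raw_error, rcond=None)
residual = raw_error - counts @ params
print("MUE before correction: %.3f V" % np.mean(np.abs(raw_error)))
print("MUE after  correction: %.3f V" % np.mean(np.abs(residual)))
```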

  16. Applying high-frequency surrogate measurements and a wavelet-ANN model to provide early warnings of rapid surface water quality anomalies.

    Science.gov (United States)

    Shi, Bin; Wang, Peng; Jiang, Jiping; Liu, Rentao

    2018-01-01

    It is critical for surface water management systems to provide early warnings of abrupt, large variations in water quality, which likely indicate the occurrence of spill incidents. In this study, a combined approach integrating a wavelet artificial neural network (wavelet-ANN) model and high-frequency surrogate measurements is proposed as a method of water quality anomaly detection and warning provision. High-frequency time series of major water quality indexes (TN, TP, COD, etc.) were produced via a regression-based surrogate model. After wavelet decomposition and denoising, a low-frequency signal was imported into a back-propagation neural network for one-step prediction to identify the major features of water quality variations. The precisely trained site-specific wavelet-ANN outputs the time series of residual errors. A warning is triggered when the actual residual error exceeds a given threshold, i.e., baseline pattern, estimated based on long-term water quality variations. A case study based on the monitoring program applied to the Potomac River Basin in Virginia, USA, was conducted. The integrated approach successfully identified two anomaly events of TP variations at a 15-minute scale from high-frequency online sensors. A storm event and point source inputs likely accounted for these events. The results show that the wavelet-ANN model is slightly more accurate than the ANN for high-frequency surface water quality prediction, and it meets the requirements of anomaly detection. Analyses of the performance at different stations and over different periods illustrated the stability of the proposed method. By combining monitoring instruments and surrogate measures, the presented approach can support timely anomaly identification and be applied to urban aquatic environments for watershed management. Copyright © 2017 Elsevier B.V. All rights reserved.
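
    The warning logic described above can be sketched as follows: a site-specific model issues one-step-ahead predictions, and a warning is triggered when the residual exceeds a threshold derived from long-term variation. In the sketch a simple moving-average predictor stands in for the trained wavelet-ANN, and the 3-sigma threshold is an illustrative assumption.

```python
# Minimal sketch of residual-based anomaly warnings on a high-frequency series.
import numpy as np

def one_step_predict(series, window=8):
    """Stand-in predictor: predict each value as the mean of the previous window."""
    preds = np.full_like(series, np.nan, dtype=float)
    for t in range(window, len(series)):
        preds[t] = series[t - window:t].mean()
    return preds

def anomaly_warnings(series, window=8, k=3.0):
    preds = one_step_predict(series, window)
    resid = series - preds
    baseline = np.nanstd(resid[: len(resid) // 2])   # threshold from earlier data
    return np.where(np.abs(resid) > k * baseline)[0]

rng = np.random.default_rng(3)
tp = 0.05 + 0.005 * rng.standard_normal(400)         # 15-minute TP surrogate series
tp[250:260] += 0.08                                   # simulated spill event
print("warning indices:", anomaly_warnings(tp))
```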

  17. Salmonids, stream temperatures, and solar loading--modeling the shade provided to the Klamath River by vegetation and geomorphology

    Science.gov (United States)

    Forney, William M.; Soulard, Christopher E.; Chickadel, C. Christopher

    2013-01-01

    The U.S. Geological Survey is studying approaches to characterize the thermal regulation of water and the dynamics of cold water refugia. High temperatures have physiological impacts on anadromous fish species. Factors affecting the presence, variability, and quality of thermal refugia are known, such as riverine and watershed processes, hyporheic flows, deep pools and bathymetric factors, thermal stratification of reservoirs, and other broader climatic considerations. This research develops a conceptual model and methodological techniques to quantify the change in solar insolation load to the Klamath River caused by riparian and floodplain vegetation, the morphology of the river, and the orientation and topographic characteristics of its watersheds. Using multiple scales of input data from digital elevation models and airborne light detection and ranging (LiDAR) derivatives, different analysis methods yielded three different model results. These models are correlated with thermal infrared imagery for ground-truth information at the focal confluence with the Scott River. Results from nonparametric correlation tests, geostatistical cross-covariograms, and cross-correlograms indicate that statistical relationships between the insolation models and the thermal infrared imagery exist and are significant. Furthermore, the use of geostatistics provides insights to the spatial structure of the relationships that would not be apparent otherwise. To incorporate a more complete representation of the temperature dynamics in the river system, other variables including the factors mentioned above, and their influence on solar loading, are discussed. With similar datasets, these methods could be applied to any river in the United States—especially those listed as temperature impaired under Section 303(d) of the Clean Water Act—or international riverine systems. Considering the importance of thermal refugia for aquatic species, these methods can help investigate opportunities

  18. Investigation of thermodynamic properties of gaseous SiC(X ³Π and a ¹Σ) with accurate model chemistry calculations

    Science.gov (United States)

    Deng, Juanli; Su, Kehe; Zeng, Yan; Wang, Xin; Zeng, Qingfeng; Cheng, Laifei; Xu, Yongdong; Zhang, Litong

    2008-09-01

    Density functional theory, high-level model chemistry at G3(MP2), CBS-Q, G3//B3LYP, G3(QCI) and QCISD(T)/aug-cc-pv5z levels of theory combined with statistical thermodynamics have been employed to explore the thermodynamic properties of gaseous SiC(X ³Π) and SiC(a ¹Σ). The heat capacities and entropies are obtained via statistical thermodynamics with the structure, vibrational frequency and the electronic excitations calculated at B3PW91/6-31G(d) and B3PW91/6-311G(d) levels. The importance of the anharmonic corrections and the electronic excitations is examined. The reliability of the electronic excitation energies calculated with the time dependent density functional method is tested. The heat capacities and entropies of the ground state SiC(X ³Π) calculated in this work are consistent with other theoretical work but different from those in the JANAF Tables. The enthalpies of formation and the Gibbs free energies of formation are in excellent agreement with the calculated results from one experiment but are higher than another observation.
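
    The electronic-excitation contribution mentioned above enters through the electronic partition function; the sketch below shows the standard statistical-thermodynamics expressions for the electronic heat capacity and entropy given level energies and degeneracies. The levels and temperature used are placeholders, not the computed SiC values.

```python
# Minimal sketch of electronic contributions to Cv and S from level energies.
import numpy as np

KB = 1.380649e-23            # J/K
R = 8.314462618              # J/(mol K)
CM1_TO_J = 1.986445857e-23   # J per cm^-1

def electronic_cv_and_s(levels_cm1, degeneracies, T):
    """Electronic Cv and S (J/mol/K) from level energies (cm^-1) and degeneracies."""
    E = np.array(levels_cm1) * CM1_TO_J
    g = np.array(degeneracies, float)
    beta = 1.0 / (KB * T)
    q = np.sum(g * np.exp(-beta * E))                     # electronic partition fn
    e_mean = np.sum(g * E * np.exp(-beta * E)) / q        # <E>
    e2_mean = np.sum(g * E**2 * np.exp(-beta * E)) / q    # <E^2>
    cv = R * (e2_mean - e_mean**2) / (KB * T) ** 2        # fluctuation formula
    s = R * (np.log(q) + e_mean / (KB * T))
    return cv, s

# Placeholder levels: a degenerate ground state and one low-lying excited state.
cv, s = electronic_cv_and_s([0.0, 4000.0], [6, 1], T=1500.0)
print(f"electronic Cv = {cv:.2f} J/mol/K, S = {s:.2f} J/mol/K")
```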

  19. Accurate Depth of Radiofrequency-Induced Lesions in Renal Sympathetic Denervation Based on a Fine Histological Sectioning Approach in a Porcine Model.

    Science.gov (United States)

    Sakaoka, Atsushi; Terao, Hisako; Nakamura, Shintaro; Hagiwara, Hitomi; Furukawa, Toshihito; Matsumura, Kiyoshi; Sakakura, Kenichi

    2018-02-01

    Ablation lesion depth achieved by radiofrequency-based renal denervation (RDN) has been reported to be limited, suggesting that radiofrequency-RDN cannot ablate a substantial percentage of renal sympathetic nerves. We aimed to define the true lesion depth achieved with radiofrequency-RDN using a fine sectioning method and to investigate biophysical parameters that could predict lesion depth. Radiofrequency was delivered to 87 sites in 14 renal arteries from 9 farm pigs at various ablation settings: 2, 4, 6, and 9 W for 60 seconds and 6 W for 120 seconds. Electric impedance and electrode temperature were recorded during ablation. At 7 days, 2470 histological sections were obtained from the treated arteries. Maximum lesion depth increased with power from 2 to 6 W, peaking at 6.53 (95% confidence interval, 4.27-8.78) mm under the 6 W/60 s condition. It was not augmented by greater power (9 W) or longer duration (120 seconds). There were statistically significant tendencies at 6 and 9 W, with higher injury scores in the media, nerves, arterioles, and fat. Maximum lesion depth was positively correlated with impedance reduction and peak electrode temperature (Pearson correlation coefficients were 0.59 and 0.53, respectively). Lesion depth was 6.5 mm for radiofrequency-RDN at 6 W/60 s. The impedance reduction and peak electrode temperature during ablation were closely associated with lesion depth. Hence, these biophysical parameters could provide prompt feedback during radiofrequency-RDN procedures in the clinical setting. © 2018 The Authors.
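
    The reported association between lesion depth and the intra-procedural parameters can be illustrated with a simple correlation and least-squares fit, as sketched below on synthetic data; the generated values are placeholders, not the porcine measurements.

```python
# Minimal sketch of correlating lesion depth with intra-procedural parameters.
import numpy as np

rng = np.random.default_rng(4)
n = 87                                        # ablation sites
impedance_drop = rng.uniform(5, 30, n)        # ohms
peak_temp = rng.uniform(45, 70, n)            # deg C
depth = 0.15 * impedance_drop + 0.05 * peak_temp + rng.normal(0, 0.8, n)  # mm

print("r(depth, impedance drop) = %.2f" % np.corrcoef(depth, impedance_drop)[0, 1])
print("r(depth, peak temp)      = %.2f" % np.corrcoef(depth, peak_temp)[0, 1])

# Least-squares fit of depth from the two intra-procedural parameters.
X = np.column_stack([np.ones(n), impedance_drop, peak_temp])
coef, *_ = np.linalg.lstsq(X, depth, rcond=None)
print("depth ~ %.2f + %.3f*dZ + %.3f*T" % tuple(coef))
```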

  20. A multi-objective location-inventory model for 3PL providers with sustainable considerations under uncertainty

    Directory of Open Access Journals (Sweden)

    R. Daghigh

    2016-09-01

    In recent years, logistics development has been considered an important aspect of any country's development. Outsourcing logistics activities to third party logistics (3PL) providers is a common way to achieve logistics development. On the other hand, globalization, increasing customer concern about the environmental impact of activities, and the emergence of social responsibility have led companies to adopt sustainable supply chain management, which considers economic, environmental and social benefits simultaneously. This paper proposes a multi-objective model to design a logistics network for 3PL providers by considering sustainability objectives under uncertainty. The objective functions are minimizing total cost, minimizing greenhouse gas emissions, and maximizing social responsibility with respect to fair access to products, the number of job opportunities created, and local community development. It is worth mentioning that the perishability of products is also considered. A numerical example is provided to solve and validate the model using the augmented epsilon-constraint method. The results show that the three sustainability objectives are in conflict: as one receives more desirable values, the others fall into more undesirable values. In addition, increasing the maximum allowable perishable time and considering lateral transshipment among facilities at the same level improve the sustainability indices of the problem, which indicates the necessity of such policies for improving network sustainability.
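
    The epsilon-constraint idea behind the solution method can be illustrated on a toy two-objective linear program: one objective (cost) is minimized while the other (emissions) is capped at a sweep of epsilon values, tracing the trade-off between conflicting objectives. The tiny model below is an illustrative stand-in, not the paper's location-inventory formulation, and assumes SciPy is available.

```python
# Minimal sketch of the epsilon-constraint method on a two-variable LP.
import numpy as np
from scipy.optimize import linprog

cost = np.array([3.0, 2.0])          # cost per unit shipped on routes 1 and 2
emis = np.array([1.0, 4.0])          # emissions per unit shipped
# Demand constraint: x1 + x2 >= 10  ->  -x1 - x2 <= -10
A_demand, b_demand = np.array([[-1.0, -1.0]]), np.array([-10.0])

for eps in [40.0, 30.0, 20.0, 15.0]:
    A_ub = np.vstack([A_demand, emis])       # add emissions cap: emis @ x <= eps
    b_ub = np.concatenate([b_demand, [eps]])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    if res.success:
        print(f"eps={eps:5.1f}  cost={res.fun:6.2f}  emissions={emis @ res.x:6.2f}")
    else:
        print(f"eps={eps:5.1f}  infeasible")
```

    As the emissions cap tightens, the minimum achievable cost rises, which is the conflict between objectives that the record describes.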

  1. Can oral vitamin D prevent the cardiovascular diseases among migrants in Australia? Provider perspective using Markov modelling.

    Science.gov (United States)

    Ruwanpathirana, Thilanga; Owen, Alice; Renzaho, Andre M N; Zomer, Ella; Gambhir, Manoj; Reid, Christopher M

    2015-06-01

    The study was designed to model the effectiveness and cost effectiveness of oral Vitamin D supplementation as a primary prevention strategy for cardiovascular disease among a migrant population in Australia. It was carried out in the Community Health Service, Kensington, Melbourne. Best-case scenario analysis using a Markov model was employed to look at the health care providers' perspective. Adult migrants who were vitamin D deficient and free from cardiovascular disease visiting the medical centre at least once during the period from 1 January 2010 to 31 December 2012 were included in the study. The blood pressure-lowering effect of vitamin D was taken from a published meta-analysis and applied in the Framingham 10 year cardiovascular risk algorithm (with and without oral vitamin D supplements) to generate the probabilities of cardiovascular events. A Markov decision model was used to estimate the provider costs associated with the events and treatments. Uncertainties were derived by Monte Carlo simulation. Vitamin D oral supplementation (1000 IU/day) for 10 years could potentially prevent 31 (interquartile range (IQR) 26 to 37) non-fatal and 11 (IQR 10 to 15) fatal cardiovascular events in a migrant population of 10,000 assuming 100% compliance. The provider perspective incremental cost effectiveness per year of life saved was AU$3,992 (IQR 583 to 8558). This study suggests subsidised supplementation of oral vitamin D may be a cost effective intervention to reduce non-fatal and fatal cardiovascular outcomes in high-risk migrant populations. © 2015 Wiley Publishing Asia Pty Ltd.
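
    A Markov cohort model of the kind used in the study can be sketched in a few lines: a cohort moves through health states over annual cycles, accumulating costs and life-years with and without supplementation, from which an incremental cost-effectiveness ratio follows. All transition probabilities and costs below are illustrative placeholders, not the study's Framingham-derived inputs.

```python
# Minimal sketch of a three-state Markov cohort model for cost-effectiveness.
import numpy as np

def run_markov(p_event, cycles=10, cohort=10_000,
               cost_per_year=0.0, cost_event=12_000.0, p_fatal=0.25):
    """States: 0 = event-free, 1 = post-event (alive), 2 = dead."""
    P = np.array([
        [1 - p_event, p_event * (1 - p_fatal), p_event * p_fatal],
        [0.0,         0.97,                    0.03],
        [0.0,         0.0,                     1.0],
    ])
    state = np.array([cohort, 0.0, 0.0])
    life_years, cost = 0.0, 0.0
    for _ in range(cycles):
        new_events = state[0] * p_event
        state = state @ P
        life_years += state[0] + state[1]          # person-years alive this cycle
        cost += new_events * cost_event + (state[0] + state[1]) * cost_per_year
    return life_years, cost

ly_no, cost_no = run_markov(p_event=0.010)
ly_vd, cost_vd = run_markov(p_event=0.008, cost_per_year=30.0)   # supplement cost
icer = (cost_vd - cost_no) / (ly_vd - ly_no)
print(f"incremental cost per life-year saved: AU${icer:,.0f}")
```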

  2. Modeling the Ecosystem Services Provided by Trees in Urban Ecosystems: Using Biome-BGC to Improve i-Tree Eco

    Science.gov (United States)

    Brown, Molly E.; McGroddy, Megan; Spence, Caitlin; Flake, Leah; Sarfraz, Amna; Nowak, David J.; Milesi, Cristina

    2012-01-01

    As the world becomes increasingly urban, the need to quantify the effect of trees in urban environments on energy usage, air pollution, local climate and nutrient run-off has increased. By identifying, quantifying and valuing the ecological activity that provides services in urban areas, stronger policies and improved quality of life for urban residents can be obtained. Here we focus on two radically different models that can be used to characterize urban forests. The i-Tree Eco model (formerly UFORE model) quantifies ecosystem services (e.g., air pollution removal, carbon storage) and values derived from urban trees based on field measurements of trees and local ancillary data sets. Biome-BGC (Biome BioGeoChemistry) is used to simulate the fluxes and storage of carbon, water, and nitrogen in natural environments. This paper compares i-Tree Eco's methods to those of Biome-BGC, which estimates the fluxes and storage of energy, carbon, water and nitrogen for vegetation and soil components of the ecosystem. We describe the two models and their differences in the way they calculate similar properties, with a focus on carbon and nitrogen. Finally, we discuss the implications of further integration of these two communities for land managers such as those in Maryland.

  3. Using Model-Based Systems Engineering To Provide Artifacts for NASA Project Life-Cycle and Technical Reviews

    Science.gov (United States)

    Parrott, Edith L.; Weiland, Karen J.

    2017-01-01

    The ability of systems engineers to use model-based systems engineering (MBSE) to generate self-consistent, up-to-date systems engineering products for project life-cycle and technical reviews is an important aspect for the continued and accelerated acceptance of MBSE. Currently, many review products are generated using labor-intensive, error-prone approaches based on documents, spreadsheets, and chart sets; a promised benefit of MBSE is that users will experience reductions in inconsistencies and errors. This work examines features of SysML that can be used to generate systems engineering products. Model elements, relationships, tables, and diagrams are identified for a large number of the typical systems engineering artifacts. A SysML system model can contain and generate most systems engineering products to a significant extent and this paper provides a guide on how to use MBSE to generate products for project life-cycle and technical reviews. The use of MBSE can reduce the schedule impact usually experienced for review preparation, as in many cases the review products can be auto-generated directly from the system model. These approaches are useful to systems engineers, project managers, review board members, and other key project stakeholders.

  4. NOAA People Empowered Products (PeEP): Combining social media with scientific models to provide eye-witness confirmed products

    Science.gov (United States)

    Codrescu, S.; Green, J. C.; Redmon, R. J.; Minor, K.; Denig, W. F.; Kihn, E. A.

    2013-12-01

    NOAA products and alerts rely on combinations of models and data to provide the public with information regarding space and terrestrial weather phenomena and hazards. This operational paradigm, while effective, neglects an abundant free source of measurements: millions of eyewitnesses viewing weather events. We demonstrate the capabilities of a prototype People Empowered Product (PeEP) that combines the OVATION prime auroral model running at the NOAA National Geophysical Data Center with Twitter reports of observable aurora. We introduce an algorithm for scoring Tweets based on keywords to improve the signal-to-noise ratio of this dynamic data source. We use the location of the aurora derived from this new database of crowd-sourced observations to validate the OVATION model for use in auroral forecasting. The combined product displays the model aurora in real time with markers showing the location and text of tweets from people actually observing the aurora. We discuss how the application might be extended to other space weather products such as radiation-related satellite anomalies.
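
    The keyword-scoring step can be sketched as follows: each tweet accumulates weights for aurora-related phrases and penalties for likely noise, and only tweets above a threshold are kept. The keyword lists, weights, and threshold below are illustrative assumptions, not the prototype's actual rules.

```python
# Minimal sketch of keyword-based scoring of tweets for likely aurora sightings.
import re

POSITIVE = {"aurora": 2, "northern lights": 3, "borealis": 2,
            "seeing": 1, "sky": 1, "right now": 2}
NEGATIVE = {"forecast": -2, "photo from last year": -3, "song": -2, "band": -2}

def score_tweet(text: str) -> int:
    t = text.lower()
    score = 0
    for phrase, weight in {**POSITIVE, **NEGATIVE}.items():
        score += weight * len(re.findall(re.escape(phrase), t))
    return score

tweets = [
    "Amazing northern lights over Fairbanks right now, whole sky is green!",
    "Listening to the new Aurora song on repeat",
    "Aurora forecast looks weak for tonight",
]
for tw in tweets:
    keep = score_tweet(tw) >= 3                 # illustrative acceptance threshold
    print(f"score={score_tweet(tw):+d} keep={keep}  {tw}")
```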