WorldWideScience

Sample records for models provide accurate

  1. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    Science.gov (United States)

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
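    The core idea of the microbial clock is a regression from community composition to time since death. A minimal sketch of that idea, using invented taxon abundances and nearest-neighbor matching on Bray-Curtis dissimilarity (a stand-in for the paper's actual machine-learning regression):

```python
# Hypothetical sketch: estimate postmortem interval (PMI) from microbial
# community profiles by nearest-neighbor matching on Bray-Curtis dissimilarity.
# The training profiles and sampling days below are invented for illustration.

def bray_curtis(p, q):
    """Bray-Curtis dissimilarity between two relative-abundance vectors."""
    num = sum(abs(a - b) for a, b in zip(p, q))
    den = sum(p) + sum(q)
    return num / den

# Invented training set: (day postmortem, relative abundances of 3 taxa).
training = [
    (0,  [0.70, 0.20, 0.10]),
    (10, [0.40, 0.40, 0.20]),
    (20, [0.20, 0.30, 0.50]),
    (30, [0.05, 0.25, 0.70]),
]

def estimate_pmi(sample):
    """Return the sampling day of the closest training profile."""
    return min(training, key=lambda t: bray_curtis(t[1], sample))[0]

print(estimate_pmi([0.18, 0.32, 0.50]))  # closest to the day-20 profile
```

The repeatable succession reported in the study is what makes such a lookup meaningful: similar community states correspond to similar times since death.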

  2. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    …to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared… of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical Near-Field Antenna Test Facility, it was concluded that the three latter factors are particularly important… using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility…

  3. Accurate market price formation model with both supply-demand and trend-following for global food prices providing policy recommendations.

    Science.gov (United States)

    Lagi, Marco; Bar-Yam, Yavni; Bertrand, Karla Z; Bar-Yam, Yaneer

    2015-11-10

    Recent increases in basic food prices are severely affecting vulnerable populations worldwide. Proposed causes such as shortages of grain due to adverse weather, increasing meat consumption in China and India, conversion of corn to ethanol in the United States, and investor speculation on commodity markets lead to widely differing implications for policy. A lack of clarity about which factors are responsible reinforces policy inaction. Here, for the first time to our knowledge, we construct a dynamic model that quantitatively agrees with food prices. The results show that the dominant causes of price increases are investor speculation and ethanol conversion. Models that just treat supply and demand are not consistent with the actual price dynamics. The two sharp peaks in 2007/2008 and 2010/2011 are specifically due to investor speculation, whereas an underlying upward trend is due to increasing demand from ethanol conversion. The model includes investor trend following as well as shifting between commodities, equities, and bonds to take advantage of increased expected returns. Claims that speculators cannot influence grain prices are shown to be invalid by direct analysis of price-setting practices of granaries. Both causes of price increase, speculative investment and ethanol conversion, are promoted by recent regulatory changes: deregulation of the commodity markets, and policies promoting the conversion of corn to ethanol. Rapid action is needed to reduce the impacts of the price increases on global hunger.
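    The structure of such a model can be illustrated with a toy price update that combines a supply-demand restoring force with a trend-following term. This is not the authors' actual system of equations, only a hedged sketch of the two mechanisms the abstract names, with invented parameters:

```python
# Toy price dynamics: mean reversion toward a supply-demand equilibrium plus a
# trend-following term that amplifies recent price moves. All values invented.
def simulate(p0, p1, equilibrium, k_sd, k_tf, steps):
    prices = [p0, p1]
    for _ in range(steps):
        p_prev, p = prices[-2], prices[-1]
        # restoring force toward equilibrium + speculators chasing the trend
        p_next = p + k_sd * (equilibrium - p) + k_tf * (p - p_prev)
        prices.append(p_next)
    return prices

# With no trend followers (k_tf = 0) the price simply relaxes to equilibrium;
# strong trend following instead overshoots and oscillates around it.
calm = simulate(100, 101, 120, k_sd=0.3, k_tf=0.0, steps=50)
print(round(calm[-1], 2))
```

Raising `k_tf` in this sketch produces the kind of speculation-driven peaks the abstract attributes to trend-following investors.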

  4. Fishing site mapping using local knowledge provides accurate and ...

    African Journals Online (AJOL)

    Accurate fishing ground maps are necessary for fisheries monitoring. In the Velondriake locally managed marine area (LMMA) we observed that the nomenclature of shared fishing sites (FS) is village-dependent. Additionally, the level of illiteracy makes data collection more complicated, leading to data collectors improvising ...

  5. Do detour tasks provide accurate assays of inhibitory control?

    Science.gov (United States)

    Whiteside, Mark A.; Laker, Philippa R.; Beardsworth, Christine E.

    2018-01-01

    Transparent Cylinder and Barrier tasks are used to purportedly assess inhibitory control in a variety of animals. However, we suspect that performances on these detour tasks are influenced by non-cognitive traits, which may result in inaccurate assays of inhibitory control. We therefore reared pheasants under standardized conditions and presented each bird with two sets of similar tasks commonly used to measure inhibitory control. We recorded the number of times subjects incorrectly attempted to access a reward through transparent barriers, and their latencies to solve each task. Such measures are commonly used to infer the differential expression of inhibitory control. We found little evidence that their performances were consistent across the two different Putative Inhibitory Control Tasks (PICTs). Improvements in performance across trials showed that pheasants learned the affordances of each specific task. Critically, prior experience of transparent tasks, either Barrier or Cylinder, also improved subsequent inhibitory control performance on a novel task, suggesting that they also learned the general properties of transparent obstacles. Individual measures of persistence, assayed in a third task, were positively related to their frequency of incorrect attempts to solve the transparent inhibitory control tasks. Neophobia, sex, and body condition had no influence on individual performance. Contrary to previous studies of primates, pheasants with poor performance on PICTs had a wider dietary breadth, assayed using a free-choice task. Our results demonstrate that, in systems or taxa where prior experience and differences in development cannot be accounted for, individual differences in performance on commonly used detour-dependent PICTs may reveal more about an individual's prior experience of transparent objects, or their motivation to acquire food, than about their inhibitory control. PMID:29593115

  6. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation (valence, social impact, rationality, and human mind) inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
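    The "actual rates of emotional transitions" estimated from experience-sampling data are simply conditional frequencies over consecutive reports. A minimal sketch with an invented emotion sequence:

```python
from collections import Counter

# Sketch of the core measurement: empirical emotion-transition probabilities
# from an experience-sampling sequence. Emotions and sequence are invented.
sequence = ["calm", "happy", "happy", "sad", "calm", "happy", "sad", "sad"]

pairs = Counter(zip(sequence, sequence[1:]))   # counts of (current, next)
origins = Counter(sequence[:-1])               # counts of each origin state

def transition_prob(a, b):
    """P(next = b | current = a), estimated from the sampled sequence."""
    return pairs[(a, b)] / origins[a]

print(transition_prob("happy", "sad"))  # 2 of the 3 "happy" reports -> "sad"
```

The studies then compare a matrix of such empirical probabilities against participants' rated transition likelihoods.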

  7. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  8. Anatomically accurate, finite model eye for optical modeling.

    Science.gov (United States)

    Liou, H L; Brennan, N A

    1997-08-01

    There is a need for a schematic eye that models vision accurately under various conditions such as refractive surgical procedures, contact lens and spectacle wear, and near vision. Here we propose a new model eye close to anatomical, biometric, and optical realities. This is a finite model with four aspheric refracting surfaces and a gradient-index lens. It has an equivalent power of 60.35 D and an axial length of 23.95 mm. The new model eye provides spherical aberration values within the limits of empirical results and predicts chromatic aberration for wavelengths between 380 and 750 nm. It provides a model for calculating optical transfer functions and predicting optical performance of the eye.

  9. Accurate Modeling Method for Cu Interconnect

    Science.gov (United States)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section, in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to these processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
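    Why cross-section accuracy matters for delay is easy to see from the standard wire-resistance formula R = ρL/(w·t): a modest process-induced bias in width and thickness shifts resistance, and hence RC delay, substantially. A sketch with invented dimensions (the drawn width matches the 0.15μm node discussed above; the biased values are illustrative only):

```python
# Interconnect resistance scales inversely with the actual (process-dependent)
# width and thickness, not the drawn ones. All dimensions invented.
RHO_CU = 1.7e-8  # ohm*m, bulk copper resistivity

def wire_resistance(length, width, thickness):
    """Resistance of a rectangular wire: R = rho * L / (w * t)."""
    return RHO_CU * length / (width * thickness)

drawn  = wire_resistance(1e-3, 0.15e-6, 0.30e-6)  # drawn cross-section
actual = wire_resistance(1e-3, 0.13e-6, 0.27e-6)  # after CMP/etch bias
print(round(100 * (actual - drawn) / drawn, 1))   # % resistance error
```

A layout-pattern-aware cross-section model of the kind proposed here removes exactly this systematic error from delay estimation.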

  10. Accurate Electromagnetic Modeling Methods for Integrated Circuits

    NARCIS (Netherlands)

    Sheng, Z.

    2010-01-01

    The present development of modern integrated circuits (IC’s) is characterized by a number of critical factors that make their design and verification considerably more difficult than before. This dissertation addresses the important questions of modeling all electromagnetic behavior of features on

  11. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  12. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
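    The "two populations of infected cells" extension described above is, in outline, the classic target-cell-limited model with an eclipse compartment. The sketch below is a generic version of that structure, not the authors' fitted model; all parameter values are illustrative:

```python
# Generic target-cell-limited model with an eclipse phase, integrated by
# forward Euler. T: target cells, E: eclipse-phase infected cells (not yet
# producing virus), I: productive cells, V: free virus. Parameters invented.
def step(state, dt, beta=1e-7, k=1.0, delta=0.5, p=100.0, c=5.0):
    T, E, I, V = state
    dT = -beta * T * V                 # infection of target cells
    dE = beta * T * V - k * E          # cells transition E -> I at rate k
    dI = k * E - delta * I             # productive cells die at rate delta
    dV = p * I - c * V                 # virus produced at p, cleared at c
    return (T + dT * dt, E + dE * dt, I + dI * dt, V + dV * dt)

state = (1e6, 0.0, 0.0, 1.0)           # start from a small inoculum
for _ in range(2000):                   # 20 days at dt = 0.01
    state = step(state, 0.01)
print(state[3] > 1.0)                   # virus grew above the inoculum
```

With these (invented) rates the basic reproductive number βTp/(δc) exceeds 1, so the virus grows after a brief eclipse-induced dip, mirroring the delayed detectability the abstract describes.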

  13. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic … Determination of MPP enables the PV system to deliver maximum available power. … adaptive artificial neural network: Proposition for a new sizing procedure.

  14. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    Science.gov (United States)

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.
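    The cross-validated area under the curve (AUC) reported above measures how well antibody responses separate recently infected from not-recently-infected individuals. A minimal sketch of that metric, computed as the probability that a positive case outscores a negative one (data invented):

```python
# Sketch of evaluating a serologic classifier: area under the ROC curve for
# antibody scores against known recent-infection status. All data invented.
def auc(scores_pos, scores_neg):
    """Probability a positive scores above a negative (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

recent     = [2.1, 1.8, 2.5, 1.2]   # antibody response, recently infected
not_recent = [0.9, 1.3, 0.7, 1.2]   # antibody response, not recently infected

print(auc(recent, not_recent))
```

An AUC of 0.86-0.93, as the study reports for its three-antigen classifiers, means a recently infected individual outscores a non-recent one in roughly nine of ten random pairings.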

  15. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
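    The probabilistic inverse approach described above can be illustrated with a deliberately tiny example: rate a handful of candidate holdup configurations by Bayes' theorem, given one noisy detector reading. The configurations, forward-model predictions, and measurement below are all invented; a real application would use a transport code to generate the predictions:

```python
import math

# Toy Bayesian inversion: posterior plausibility of candidate holdup
# configurations given a measured detector reading. All numbers invented.
candidates = {                 # configuration -> forward-model prediction
    "thin film":   2.0,
    "small lump":  5.0,
    "large lump": 12.0,
}
measured, sigma = 5.6, 1.5     # measured reading and its uncertainty

def likelihood(predicted):
    """Gaussian measurement model."""
    return math.exp(-0.5 * ((measured - predicted) / sigma) ** 2)

prior = {name: 1 / len(candidates) for name in candidates}  # uninformative
unnorm = {name: prior[name] * likelihood(pred)
          for name, pred in candidates.items()}
evidence = sum(unnorm.values())
posterior = {name: w / evidence for name, w in unnorm.items()}

print(max(posterior, key=posterior.get))  # most plausible configuration
```

This is exactly the mechanism the project proposes for resolving the under-determined inverse problem: multiple source configurations remain possible, but each receives a probability rather than a single forced answer.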

  16. Certified meter data managers provide potent tool : Utilities, customers benefit from accurate energy data

    Energy Technology Data Exchange (ETDEWEB)

    Hall, V.

    2004-02-01

    The use of customer energy information and its importance in building business-to-business and business-to-consumer demographic profiles, and the role of certified meter data management agents, i.e. companies that have created infrastructures to manage large volumes of energy data that can be used to drive marketing to energy customers, is discussed. Short and long-term load management planning, distribution planning, outage management and demand response programs, efforts to streamline billing and create revenue-generating value-added services, are just some of the areas that can benefit from comprehensively collected and accurate consumer data. The article emphasizes the process of certification, the benefits certified meter data management companies can provide to utilities as well as to consumers, their role in disaster recovery management, and characteristics of the way such companies bring the benefits of their operations to their client utilities and consumers. 1 tab.

  17. SPECTROPOLARIMETRICALLY ACCURATE MAGNETOHYDROSTATIC SUNSPOT MODEL FOR FORWARD MODELING IN HELIOSEISMOLOGY

    Energy Technology Data Exchange (ETDEWEB)

    Przybylski, D.; Shelyag, S.; Cally, P. S. [Monash Center for Astrophysics, School of Mathematical Sciences, Monash University, Clayton, Victoria 3800 (Australia)

    2015-07-01

    We present a technique to construct a spectropolarimetrically accurate magnetohydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion, and absorption in the solar interior and photosphere with the sunspot embedded into it. With the 6173 Å magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions at the solar disk, and analyze the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterized. An increase in acoustic power in the simulated observations of the sunspot umbra away from the solar disk center was confirmed as the slow magnetoacoustic wave.

  18. SPECTROPOLARIMETRICALLY ACCURATE MAGNETOHYDROSTATIC SUNSPOT MODEL FOR FORWARD MODELING IN HELIOSEISMOLOGY

    International Nuclear Information System (INIS)

    Przybylski, D.; Shelyag, S.; Cally, P. S.

    2015-01-01

    We present a technique to construct a spectropolarimetrically accurate magnetohydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion, and absorption in the solar interior and photosphere with the sunspot embedded into it. With the 6173 Å magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions at the solar disk, and analyze the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterized. An increase in acoustic power in the simulated observations of the sunspot umbra away from the solar disk center was confirmed as the slow magnetoacoustic wave

  19. Accurate lithography simulation model based on convolutional neural networks

    Science.gov (United States)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, compact resist model is commonly used. The model is established for faster calculation. To have accurate compact resist model, it is necessary to fix a complicated non-linear model function. However, it is difficult to decide an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNN (Convolutional Neural Networks) which is one of deep learning techniques. CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  20. Can administrative health utilisation data provide an accurate diabetes prevalence estimate for a geographical region?

    Science.gov (United States)

    Chan, Wing Cheuk; Papaconstantinou, Dean; Lee, Mildred; Telfer, Kendra; Jo, Emmanuel; Drury, Paul L; Tobias, Martin

    2018-05-01

    To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory results and to develop an improved algorithm for estimating diabetes prevalence at a population level. The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory results stored in the Auckland regional laboratory result repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive predictive value of each of the VDR algorithm rules, individually and in combination. The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive predictive value of 76% and negative predictive value of 98%. The modified VDR algorithm has improved the positive predictive value by 6.1% and the specificity by 1.4%, with modest reductions in sensitivity of 2.2% and negative predictive value of 0.3%. At an aggregated level the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe. The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more accurate diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long-term condition register constructed from both laboratory results and administrative data. Copyright © 2018 Elsevier B.V. All rights reserved.
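    The validation above reduces to a confusion matrix against the laboratory gold standard. The counts below are invented, chosen only so the derived rates echo the sensitivity and positive predictive value the abstract reports:

```python
# Standard diagnostic-accuracy measures from a confusion matrix.
# tp/fp/fn/tn counts are invented for illustration.
def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

d = diagnostics(tp=89, fp=28, fn=11, tn=672)
print(round(d["sensitivity"], 2), round(d["ppv"], 2))
```

Refining the register rules trades these quantities off against each other, which is why the study reports PPV and specificity gains alongside small sensitivity and NPV losses.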

  1. An Accurate and Dynamic Computer Graphics Muscle Model

    Science.gov (United States)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  2. Allele-sharing models: LOD scores and accurate linkage tests.

    Science.gov (United States)

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
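    For readers unfamiliar with LOD scores: a LOD score is the base-10 log of the likelihood ratio between linkage at recombination fraction θ and free recombination (θ = 0.5). A minimal sketch for the simplest fully informative case, phase-known meioses with r recombinants out of n (counts invented; the paper's allele-sharing model generalizes this to incomplete data):

```python
import math

# LOD score for phase-known meioses: r recombinants observed out of n.
def lod(theta, r, n):
    like = (theta ** r) * ((1 - theta) ** (n - r))  # likelihood at theta
    null = 0.5 ** n                                  # free recombination
    return math.log10(like / null)

print(round(lod(0.1, r=1, n=10), 2))  # 1 recombinant in 10 meioses
```

A LOD of 3 or more is the conventional threshold for declaring linkage; the one-parameter allele-sharing model in this record makes such likelihood-ratio tests exact under general missing-data patterns.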

  3. Accurate modeling of the hose instability in plasma wakefield accelerators

    Science.gov (United States)

    Mehrling, T. J.; Benedetti, C.; Schroeder, C. B.; Martinez de la Ossa, A.; Osterhoff, J.; Esarey, E.; Leemans, W. P.

    2018-05-01

    Hosing is a major challenge for the applicability of plasma wakefield accelerators and its modeling is therefore of fundamental importance to facilitate future stable and compact plasma-based particle accelerators. In this contribution, we present a new model for the evolution of the plasma centroid, which enables the accurate investigation of the hose instability in the nonlinear blowout regime. It paves the road for more precise and comprehensive studies of hosing, e.g., with drive and witness beams, which were not possible with previous models.

  4. Cost Calculation Model for Logistics Service Providers

    Directory of Open Access Journals (Sweden)

    Zoltán Bokor

    2012-11-01

    Full Text Available The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within the logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in case of complex and heterogeneous logistics service structures. So this paper intends to explore the ways of improving the cost calculation regimes of logistics service providers and show how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested by using estimated input data. Based on the theoretical findings and the experiences of the pilot project it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which enhances the effectiveness of logistics planning and controlling significantly.
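    The full cost allocation idea can be sketched in one step: indirect cost centers are charged to logistics services in proportion to a performance driver, and the result is added to each service's direct cost. All cost centers, services, and figures below are invented; the paper's multi-level scheme chains several such allocation steps:

```python
# One-step full cost allocation: indirect cost centers charged to services
# in proportion to driver shares. All figures invented for illustration.
indirect = {"warehouse ops": 120_000.0, "IT support": 30_000.0}
drivers = {   # service -> share of each cost center (sums to 1 per center)
    "storage":       {"warehouse ops": 0.7, "IT support": 0.4},
    "cross-docking": {"warehouse ops": 0.3, "IT support": 0.6},
}
direct = {"storage": 50_000.0, "cross-docking": 20_000.0}

full_cost = {
    service: direct[service]
    + sum(indirect[center] * share for center, share in shares.items())
    for service, shares in drivers.items()
}
print(full_cost["storage"])
```

Because every indirect unit is distributed via an explicit driver share, the relation between costs and performances stays visible, which is the transparency gain the abstract claims for the improved costing model.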

  5. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard for uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but also to assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions.

  6. Activity assays and immunoassays for plasma Renin and prorenin: information provided and precautions necessary for accurate measurement

    DEFF Research Database (Denmark)

    Campbell, Duncan J; Nussberger, Juerg; Stowasser, Michael

    2009-01-01

    into focus the differences in information provided by activity assays and immunoassays for renin and prorenin measurement and has drawn attention to the need for precautions to ensure their accurate measurement. CONTENT: Renin activity assays and immunoassays provide related but different information...... provided by these assays and of the precautions necessary to ensure their accuracy....

  7. Raman spectroscopy provides a powerful diagnostic tool for accurate determination of albumin glycation.

    Science.gov (United States)

    Dingari, Narahara Chari; Horowitz, Gary L; Kang, Jeon Woong; Dasari, Ramachandra R; Barman, Ishan

    2012-01-01

    We present the first demonstration of glycated albumin detection and quantification using Raman spectroscopy without the addition of reagents. Glycated albumin is an important marker for monitoring the long-term glycemic history of diabetics, especially as its concentrations, in contrast to glycated hemoglobin levels, are unaffected by changes in erythrocyte life times. Clinically, glycated albumin concentrations show a strong correlation with the development of serious diabetes complications including nephropathy and retinopathy. In this article, we propose and evaluate the efficacy of Raman spectroscopy for determination of this important analyte. By utilizing the pre-concentration obtained through drop-coating deposition, we show that glycation of albumin leads to subtle, but consistent, changes in vibrational features, which with the help of multivariate classification techniques can be used to discriminate glycated albumin from the unglycated variant with 100% accuracy. Moreover, we demonstrate that the calibration model developed on the glycated albumin spectral dataset shows high predictive power, even at substantially lower concentrations than those typically encountered in clinical practice. In fact, the limit of detection for glycated albumin measurements is calculated to be approximately four times lower than its minimum physiological concentration. Importantly, in relation to the existing detection methods for glycated albumin, the proposed method is also completely reagent-free, requires barely any sample preparation and has the potential for simultaneous determination of glycated hemoglobin levels as well. Given these key advantages, we believe that the proposed approach can provide a uniquely powerful tool for quantification of glycation status of proteins in biopharmaceutical development as well as for glycemic marker determination in routine clinical diagnostics in the future.

  9. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  10. Accurate modeling and evaluation of microstructures in complex materials

    Science.gov (United States)

    Tahmasebi, Pejman

    2018-02-01

Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always feasible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take a single I (or a set of them) and stochastically produce several similar models of the given disordered material. The method is based on successive calculation of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. A distance transform function for Is with highly connected microstructures and long-range features is considered, which results in a new I that is more informative. Reproduction of the I is also considered through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on a pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, nonstationary systems, those for which the distribution of data varies spatially, are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is, and the similarities are quantified using various correlation functions.
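
    The histogram matching step mentioned above can be made concrete. Below is a minimal grayscale sketch, assuming NumPy, that remaps pixel values by quantile mapping between the empirical CDFs of a generated model and the reference image; the paper's exact iterative variant is not reproduced.

    ```python
    import numpy as np

    def match_histogram(source, reference):
        """Quantile mapping: remap source pixel values so that their empirical
        histogram matches that of the reference image."""
        s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                            return_inverse=True,
                                            return_counts=True)
        r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
        s_cdf = np.cumsum(s_counts) / source.size      # empirical CDF of source
        r_cdf = np.cumsum(r_counts) / reference.size   # empirical CDF of reference
        # For each source quantile, take the reference value at the same quantile.
        mapped = np.interp(s_cdf, r_cdf, r_vals)
        return mapped[s_idx].reshape(source.shape)
    ```

    In an iterative framework, this mapping would be reapplied after each update of the stochastic model so that the reconstructed microstructure keeps the reference image's gray-level distribution.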

  11. Can Raters with Reduced Job Descriptive Information Provide Accurate Position Analysis Questionnaire (PAQ) Ratings?

    Science.gov (United States)

    Friedman, Lee; Harvey, Robert J.

    1986-01-01

    Job-naive raters provided with job descriptive information made Position Analysis Questionnaire (PAQ) ratings which were validated against ratings of job analysts who were also job content experts. None of the reduced job descriptive information conditions enabled job-naive raters to obtain either acceptable levels of convergent validity with…

  12. Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.

    Science.gov (United States)

    Wu, Tim; Hung, Alice; Mithraratne, Kumar

    2014-11-01

This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of the skin, the subcutaneous layer and the superficial musculo-aponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely isotropic, and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, the eyelids, and between the superficial soft-tissue continuum and the deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.

  13. Does universal 16S rRNA gene amplicon sequencing of environmental communities provide an accurate description of nitrifying guilds?

    DEFF Research Database (Denmark)

    Diwan, Vaibhav; Albrechtsen, Hans-Jørgen; Smets, Barth F.

    2018-01-01

    amplicon sequencing and from guild targeted approaches. The universal amplicon sequencing provided 1) accurate estimates of nitrifier composition, 2) clustering of the samples based on these compositions consistent with sample origin, 3) estimates of the relative abundance of the guilds correlated...

  14. Measuring physical inactivity: do current measures provide an accurate view of "sedentary" video game time?

    Science.gov (United States)

    Fullerton, Simon; Taylor, Anne W; Dal Grande, Eleonora; Berry, Narelle

    2014-01-01

    Measures of screen time are often used to assess sedentary behaviour. Participation in activity-based video games (exergames) can contribute to estimates of screen time, as current practices of measuring it do not consider the growing evidence that playing exergames can provide light to moderate levels of physical activity. This study aimed to determine what proportion of time spent playing video games was actually spent playing exergames. Data were collected via a cross-sectional telephone survey in South Australia. Participants aged 18 years and above (n = 2026) were asked about their video game habits, as well as demographic and socioeconomic factors. In cases where children were in the household, the video game habits of a randomly selected child were also questioned. Overall, 31.3% of adults and 79.9% of children spend at least some time playing video games. Of these, 24.1% of adults and 42.1% of children play exergames, with these types of games accounting for a third of all time that adults spend playing video games and nearly 20% of children's video game time. A substantial proportion of time that would usually be classified as "sedentary" may actually be spent participating in light to moderate physical activity.

  15. A new, accurate predictive model for incident hypertension.

    Science.gov (United States)

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-11-01

Data mining represents an alternative approach to identifying new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension and compared the predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations [area under the receiver operating characteristic curve (AUC) 0.76]. The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension which included innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.

  16. Accurate, low-cost 3D-models of gullies

    Science.gov (United States)

    Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine

    2015-04-01

Soil erosion is a widespread problem in arid and semi-arid areas. Its most severe form is gully erosion. Gullies often cut into agricultural farmland and can render an area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D models of gullies in the Souss Valley in southern Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded several series of Full HD videos at 25 fps. Afterwards, we used Structure from Motion (SfM) to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, and the overlap of neighboring images should be at least 80%. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Nearby pixels of a blurry image tend to have similar color values, so we used a MATLAB script to compare the derivatives of the images: for similar objects, the higher the sum of the derivatives, the sharper the image. MATLAB subdivides the video into image intervals, and from each interval the image with the highest sum is selected. For example, a 20 min video at 25 fps yields 30,000 single images: the program inspects the first 20 images, saves the sharpest, moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. MeshLab was then used to build a surface from it using the Poisson surface reconstruction approach. Afterwards we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
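
    The sharpest-frame selection described above can be sketched as follows. The authors' MATLAB script is not given; this is an illustrative Python version that uses the sum of absolute image derivatives as the sharpness score, with a hypothetical interval length.

    ```python
    import numpy as np

    def pick_sharpest(frames, interval=20):
        """From each run of `interval` frames, keep the index of the frame whose
        summed absolute gradient (a simple sharpness proxy) is largest."""
        # Sharpness score: sum of absolute horizontal + vertical differences.
        scores = [np.abs(np.diff(f, axis=0)).sum() + np.abs(np.diff(f, axis=1)).sum()
                  for f in frames]
        keep = []
        for start in range(0, len(frames), interval):
            chunk = scores[start:start + interval]
            keep.append(start + int(np.argmax(chunk)))
        return keep
    ```

    Blurry frames, whose neighboring pixels have similar values, score low and are skipped, while in-focus frames of the same scene score high and survive the selection.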

  17. Accurate SHAPE-directed RNA secondary structure modeling, including pseudoknots.

    Science.gov (United States)

    Hajdin, Christine E; Bellaousov, Stanislav; Huggins, Wayne; Leonard, Christopher W; Mathews, David H; Weeks, Kevin M

    2013-04-02

    A pseudoknot forms in an RNA when nucleotides in a loop pair with a region outside the helices that close the loop. Pseudoknots occur relatively rarely in RNA but are highly overrepresented in functionally critical motifs in large catalytic RNAs, in riboswitches, and in regulatory elements of viruses. Pseudoknots are usually excluded from RNA structure prediction algorithms. When included, these pairings are difficult to model accurately, especially in large RNAs, because allowing this structure dramatically increases the number of possible incorrect folds and because it is difficult to search the fold space for an optimal structure. We have developed a concise secondary structure modeling approach that combines SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) experimental chemical probing information and a simple, but robust, energy model for the entropic cost of single pseudoknot formation. Structures are predicted with iterative refinement, using a dynamic programming algorithm. This melded experimental and thermodynamic energy function predicted the secondary structures and the pseudoknots for a set of 21 challenging RNAs of known structure ranging in size from 34 to 530 nt. On average, 93% of known base pairs were predicted, and all pseudoknots in well-folded RNAs were identified.
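
    The melded energy function typically incorporates the SHAPE data as a per-nucleotide pseudo-free-energy term of the commonly used form ΔG_SHAPE(i) = m·ln(reactivity_i + 1) + b. The parameter values below (m = 2.6, b = -0.8 kcal/mol) are the widely cited defaults and are used here only as illustrative assumptions.

    ```python
    import math

    def shape_pseudo_energy(reactivities, m=2.6, b=-0.8):
        """Per-nucleotide pseudo-free-energy term (kcal/mol) added to the
        thermodynamic model for each paired nucleotide:
            dG_SHAPE(i) = m * ln(reactivity_i + 1) + b
        Low reactivity (likely paired) gives a negative, stabilizing term;
        high reactivity (likely unpaired) penalizes pairing."""
        return [m * math.log(r + 1.0) + b for r in reactivities]
    ```

    The dynamic programming fold then minimizes the sum of nearest-neighbor free energies, these SHAPE terms, and the entropic penalty for forming a pseudoknot.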

  18. Accurate Online Full Charge Capacity Modeling of Smartphone Batteries

    OpenAIRE

    Hoque, Mohammad A.; Siekkinen, Matti; Koo, Jonghoe; Tarkoma, Sasu

    2016-01-01

    Full charge capacity (FCC) refers to the amount of energy a battery can hold. It is the fundamental property of smartphone batteries that diminishes as the battery ages and is charged/discharged. We investigate the behavior of smartphone batteries while charging and demonstrate that the battery voltage and charging rate information can together characterize the FCC of a battery. We propose a new method for accurately estimating FCC without exposing low-level system details or introducing new ...
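
    The paper's voltage/charging-rate method is truncated above and is not reproduced here. For context, a generic coulomb-counting FCC estimate (charge delivered during a charging session divided by the fraction of state of charge gained) can be sketched as:

    ```python
    def estimate_fcc(currents_ma, dt_s, soc_start, soc_end):
        """Generic coulomb-counting estimate of full charge capacity (mAh):
        integrate the charging current over the session and divide by the
        fraction of state of charge gained.  Illustrative only; not the
        paper's voltage/charging-rate method."""
        charge_mah = sum(i * dt_s for i in currents_ma) / 3600.0
        delta_soc = (soc_end - soc_start) / 100.0
        if delta_soc <= 0:
            raise ValueError("SOC must increase during charging")
        return charge_mah / delta_soc
    ```

    As the battery ages, the same SOC gain requires less charge, so the estimated FCC shrinks, which is the diminishing capacity the abstract describes.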

  19. Daily FOUR score assessment provides accurate prognosis of long-term outcome in out-of-hospital cardiac arrest.

    Science.gov (United States)

    Weiss, N; Venot, M; Verdonk, F; Chardon, A; Le Guennec, L; Llerena, M C; Raimbourg, Q; Taldir, G; Luque, Y; Fagon, J-Y; Guerot, E; Diehl, J-L

    2015-05-01

The accurate prediction of outcome after out-of-hospital cardiac arrest (OHCA) is of major importance. The recently described Full Outline of UnResponsiveness (FOUR) score is well adapted to mechanically ventilated patients and does not depend on verbal response. To evaluate the ability of FOUR assessed by intensivists to accurately predict outcome in OHCA, we prospectively identified patients admitted for OHCA with a Glasgow Coma Scale below 8. Neurological assessment was performed daily. Outcome was evaluated at 6 months using Glasgow-Pittsburgh Cerebral Performance Categories (GP-CPC). Eighty-five patients were included. At 6 months, 19 patients (22%) had a favorable outcome, GP-CPC 1-2, and 66 (78%) had an unfavorable outcome, GP-CPC 3-5. Compared to both brainstem responses at day 3 and the evolution of the Glasgow Coma Scale, the evolution of the FOUR score over the first three days predicted unfavorable outcome more precisely. Thus, absence of improvement or worsening of FOUR from day 1 to day 3 had 0.88 (0.79-0.97) specificity, 0.71 (0.66-0.76) sensitivity, 0.94 (0.84-1.00) PPV and 0.54 (0.49-0.59) NPV for predicting unfavorable outcome. Similarly, a brainstem response of the FOUR score of 0 evaluated at day 3 had 0.94 (0.89-0.99) specificity, 0.60 (0.50-0.70) sensitivity, 0.96 (0.92-1.00) PPV and 0.47 (0.37-0.57) NPV for predicting unfavorable outcome. The absence of improvement or worsening of FOUR from day 1 to day 3 evaluated by intensivists provides an accurate prognosis of poor neurological outcome in OHCA. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
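
    The quoted predictive values follow from sensitivity, specificity and the cohort's prevalence of unfavorable outcome (66/85) via Bayes' rule on the 2x2 table; a minimal check:

    ```python
    def predictive_values(sens, spec, prevalence):
        """Positive and negative predictive values from sensitivity,
        specificity and prevalence (Bayes' rule on a 2x2 table)."""
        ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
        npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
        return ppv, npv
    ```

    Plugging in the day 1-3 figures (sensitivity 0.71, specificity 0.88, prevalence 66/85) yields predictive values of the same order as the quoted point estimates; note that predictive values, unlike sensitivity and specificity, depend on the 78% prevalence in this cohort.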

  20. Concurrent chart review provides more accurate documentation and increased calculated case mix index, severity of illness, and risk of mortality.

    Science.gov (United States)

    Frazee, Richard C; Matejicka, Anthony V; Abernathy, Stephen W; Davis, Matthew; Isbell, Travis S; Regner, Justin L; Smith, Randall W; Jupiter, Daniel C; Papaconstantinou, Harry T

    2015-04-01

    Case mix index (CMI) is calculated to determine the relative value assigned to a Diagnosis-Related Group. Accurate documentation of patient complications and comorbidities and major complications and comorbidities changes CMI and can affect hospital reimbursement and future pay for performance metrics. Starting in 2010, a physician panel concurrently reviewed the documentation of the trauma/acute care surgeons. Clarifications of the Centers for Medicare and Medicaid Services term-specific documentation were made by the panel, and the surgeon could incorporate or decline the clinical queries. A retrospective review of trauma/acute care inpatients was performed. The mean severity of illness, risk of mortality, and CMI from 2009 were compared with the 3 subsequent years. Mean length of stay and mean Injury Severity Score by year were listed as measures of patient acuity. Statistical analysis was performed using ANOVA and t-test, with p reimbursement and more accurately stratify outcomes measures for care providers. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  1. Dynamic sensing model for accurate detectability of environmental phenomena using event wireless sensor network

    Science.gov (United States)

    Missif, Lial Raja; Kadhum, Mohammad M.

    2017-09-01

Wireless Sensor Networks (WSNs) have been widely used for monitoring, where sensors are deployed to operate independently to sense abnormal phenomena. Most of the proposed environmental monitoring systems are designed based on a predetermined sensing range, which does not reflect the sensor reliability, event characteristics, and environment conditions. Measuring the capability of a sensor node to accurately detect an event within a sensing field is of great importance for monitoring applications. This paper presents an efficient mechanism for event detection based on a probabilistic sensing model. Different models are presented theoretically in this paper to examine their adaptability and applicability to real environment applications. The numerical results of the experimental evaluation have shown that the probabilistic sensing model provides accurate observation and detectability of an event, and it can be utilized for different environment scenarios.
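
    A classic probabilistic sensing model of the kind examined here is the Elfes model: certain detection within a radius r, exponentially decaying detection probability across an uncertainty band of width r_e, and zero beyond. The paper's exact models are not reproduced; this is an illustrative sketch with hypothetical parameters.

    ```python
    import math

    def detection_probability(d, r=10.0, r_e=5.0, alpha=0.5):
        """Elfes-style probabilistic sensing model (illustrative parameters):
        certain detection inside radius r, exponential decay over the
        uncertainty band [r, r + r_e], zero detection beyond it."""
        if d <= r:
            return 1.0
        if d >= r + r_e:
            return 0.0
        return math.exp(-alpha * (d - r))
    ```

    Unlike a fixed binary sensing disk, this model lets the network account for sensor reliability and environment conditions when deciding whether an event inside the uncertainty band has really been detected.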

  2. New process model proves accurate in tests on catalytic reformer

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar-Rodriguez, E.; Ancheyta-Juarez, J. (Inst. Mexicano del Petroleo, Mexico City (Mexico))

    1994-07-25

    A mathematical model has been devised to represent the process that takes place in a fixed-bed, tubular, adiabatic catalytic reforming reactor. Since its development, the model has been applied to the simulation of a commercial semiregenerative reformer. The development of mass and energy balances for this reformer led to a model that predicts both concentration and temperature profiles along the reactor. A comparison of the model's results with experimental data illustrates its accuracy at predicting product profiles. Simple steps show how the model can be applied to simulate any fixed-bed catalytic reformer.

  3. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however, such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however, our previous studies have shown that differences in phenotypes estimated using different approaches have a substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging, to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
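
    As a generic illustration of the averaging step (not the paper's exact procedure), per-individual cluster-membership probabilities from two clustering models can be combined using approximate posterior model weights derived from each model's BIC:

    ```python
    import math

    def bma_memberships(memberships, bics):
        """Average per-individual cluster-membership probabilities across
        models, weighting each model by its approximate posterior probability
        exp(-BIC/2) / sum exp(-BIC/2).  A generic BMA sketch."""
        w = [math.exp(-b / 2.0) for b in bics]
        total = sum(w)
        w = [x / total for x in w]
        n = len(memberships[0])        # individuals
        k = len(memberships[0][0])     # clusters
        return [[sum(w[m] * memberships[m][i][j] for m in range(len(memberships)))
                 for j in range(k)] for i in range(n)]
    ```

    Two models with equal support contribute equally, so an individual assigned confidently to different clusters by the two models ends up with a 50/50 membership, exactly the kind of ambiguous case that single-model phenotyping hides.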

  4. A water wave model with horizontal circulation and accurate dispersion

    NARCIS (Netherlands)

    Cotter, C.; Bokhove, Onno

    We describe a new water wave model which is variational, and combines a depth-averaged vertical (component of) vorticity with depth-dependent potential flow. The model facilitates the further restriction of the vertical profile of the velocity potential to n-th order polynomials or a finite element

  5. Fast and accurate modeling of stray light in optical systems

    Science.gov (United States)

    Perrin, Jean-Claude

    2017-11-01

The first problem to be solved in most optical designs with respect to stray light is that of internal reflections on the several surfaces of individual lenses and mirrors, and on the detector itself. The stray light ratio can be considerably reduced by taking stray light into account during optimization, to determine solutions in which the irradiance due to these ghosts is kept to the minimum possible value. Unfortunately, the routines available in most optical design packages, for example CODE V, do not on their own permit exact quantitative calculation of the stray light due to these ghosts. The engineer in charge of the optical design is therefore confronted with the problem of using two different packages, one for design and optimization (for example CODE V) and one for stray light analysis (for example ASAP), which makes a complete optimization very complex. Nevertheless, using special techniques and combinations of the routines available in CODE V, it is possible to build a software macro tool to do such an analysis quickly and accurately, including Monte-Carlo ray tracing, or taking into account diffraction effects. This analysis can be done in a few minutes, compared with hours in other packages.

  6. Accurate wind farm development and operation. Advanced wake modelling

    Energy Technology Data Exchange (ETDEWEB)

    Brand, A.; Bot, E.; Ozdemir, H. (ECN Unit Wind Energy, P.O. Box 1, NL 1755 ZG Petten (Netherlands)); Steinfeld, G.; Drueke, S.; Schmidt, M. (ForWind, Center for Wind Energy Research, Carl von Ossietzky Universitaet Oldenburg, D-26129 Oldenburg (Germany)); Mittelmeier, N. (REpower Systems SE, D-22297 Hamburg (Germany))

    2013-11-15

The ability to calculate wind farm wakes on the basis of ambient conditions calculated with an atmospheric model is demonstrated. Specifically, comparisons are described between predicted and observed ambient conditions, and between power predictions from three wind farm wake models and power measurements, for a single and a double wake situation. The comparisons are based on performance indicators and test criteria, with the objective of determining the percentage of predictions that fall within a given range about the observed value. The Alpha Ventus site is considered, which consists of a wind farm of the same name and the met mast FINO1. Data from the 6 REpower wind turbines and the FINO1 met mast were employed. The atmospheric model WRF predicted the ambient conditions at the location and the measurement heights of the FINO1 mast. While the wind speed and wind direction can be predicted reasonably well if sufficiently large tolerances are employed, it is practically impossible to predict the ambient turbulence intensity and vertical shear. Three wind farm wake models predicted the individual turbine powers: FLaP-Jensen and FLaP-Ainslie from ForWind Oldenburg, and FarmFlow from ECN. The reliabilities of the FLaP-Ainslie and FarmFlow wake models are of equal order, and higher than that of FLaP-Jensen. Differences between the predictions from these models are clearest in the double wake situation, where FarmFlow slightly outperforms FLaP-Ainslie.
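
    Of the three wake models, FLaP-Jensen builds on the classic Jensen (Park) top-hat wake, in which the velocity deficit behind a turbine decays with a linearly expanding wake radius. A minimal sketch of the single-wake deficit follows; the calibrated FLaP implementation is not reproduced, and the decay constant k = 0.075 is a commonly used onshore default, assumed here for illustration.

    ```python
    import math

    def jensen_deficit(ct, x, r0, k=0.075):
        """Jensen (Park) single-wake fractional velocity deficit at downwind
        distance x behind a turbine with rotor radius r0 and thrust
        coefficient ct; k is the wake decay constant."""
        return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2
    ```

    The waked wind speed is then u = u_ambient * (1 - deficit); in a double wake situation, deficits from the upstream turbines are typically combined (e.g., by summing squared deficits) before evaluating the downstream turbine's power.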

  7. Can phenological models predict tree phenology accurately under climate change conditions?

    Science.gov (United States)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

The onset of the growing season of trees has advanced globally by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On the one hand, low temperatures are necessary to break bud dormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. They fall into two main families: (1) one-phase models, which consider only the ecodormancy phase and assume that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to accurately predict tree bud break and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. 
Two-phase phenological models predict that global warming should delay

  8. Development of Model for Providing Feasible Scholarship

    Directory of Open Access Journals (Sweden)

    Harry Dhika

    2016-05-01

Full Text Available The current work focuses on the development of a model to determine feasible scholarship recipients on the basis of the naïve Bayes method, using very simple and limited attributes. Those attributes are the applicant's academic year, represented by their semester; academic performance, represented by their GPA; socioeconomic ability, which represents the economic capability to attend a higher education institution; and their level of social involvement. To establish and evaluate the model's performance, empirical data were collected, and the data of 100 students were divided into 80 student records for model training and the remaining 20 for model testing. The results suggest that the model is capable of providing recommendations for potential scholarship recipients at an accuracy level of 95%.
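As a sketch of the kind of classifier the abstract describes, the following minimal naïve Bayes implementation scores applicants on categorical attributes (semester band, GPA band, economic level, social involvement). The attribute encodings and training records are invented for illustration and are not taken from the study:

```python
import math
from collections import Counter, defaultdict

# Hypothetical categorical records: (semester, GPA band, economic level,
# social involvement) -> scholarship decision. Encodings are illustrative only.
train = [
    (("early", "high", "low", "active"), "eligible"),
    (("early", "high", "mid", "active"), "eligible"),
    (("late", "high", "low", "active"), "eligible"),
    (("late", "low", "high", "inactive"), "not_eligible"),
    (("early", "low", "high", "inactive"), "not_eligible"),
    (("late", "low", "mid", "inactive"), "not_eligible"),
]

def fit(records):
    priors = Counter(label for _, label in records)
    counts = defaultdict(Counter)  # (attribute index, label) -> value counts
    for features, label in records:
        for i, value in enumerate(features):
            counts[(i, label)][value] += 1
    return priors, counts

def predict(features, priors, counts, alpha=1.0):
    """Pick the label maximising log P(label) + sum_i log P(value_i | label),
    with Laplace smoothing alpha for unseen attribute values."""
    total = sum(priors.values())
    best_label, best_score = None, float("-inf")
    for label, n in priors.items():
        score = math.log(n / total)
        for i, value in enumerate(features):
            c = counts[(i, label)]
            score += math.log((c[value] + alpha) / (n + alpha * (len(c) + 1)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

priors, counts = fit(train)
print(predict(("early", "high", "low", "active"), priors, counts))
```

In the study itself, the same idea is applied to the 80/20 train/test split of the 100 student records.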

  9. Double Layered Sheath in Accurate HV XLPE Cable Modeling

    DEFF Research Database (Denmark)

    Gudmundsdottir, Unnur Stella; Silva, J. De; Bak, Claus Leth

    2010-01-01

    This paper discusses modelling of high voltage AC underground cables. For long cables, when crossbonding points are present, not only the coaxial mode of propagation is excited during transient phenomena, but also the intersheath mode. This causes inaccurate simulation results for high frequency...

  10. Innovative technologies to accurately model waves and moored ship motions

    CSIR Research Space (South Africa)

    van der Molen, W

    2010-09-01

    Full Text Available Late in 2009 CSIR Built Environment in Stellenbosch was awarded a contract to carry out extensive physical and numerical modelling to study the wave conditions and associated moored ship motions, for the design of a new iron ore export jetty for BHP...

  11. Accurate Antenna Models in Ground Penetrating Radar Diffraction Tomography

    DEFF Research Database (Denmark)

    Meincke, Peter; Kim, Oleksiy S.

    2002-01-01

Linear inversion schemes based on the concept of diffraction tomography have proven successful for ground penetrating radar (GPR) imaging. In many GPR surveys, the antennas of the GPR are located close to the air-soil interface and, therefore, it is important to incorporate the presence of this interface in the inversion scheme (see Hansen, T.B. and Meincke Johansen, P., IEEE Trans. Geoscience and Remote Sensing, vol.38, p.496-506, 2000). Hansen and Meincke Johansen modeled the antennas as ideal (Hertzian) electric dipoles. Since practical GPR antennas are not ideal, it is of interest ... are modeled by their plane-wave receiving and transmitting spectra. We find these spectra numerically for a resistively loaded dipole using the method of moments. Also, we illustrate, through a numerical example, the importance of taking into account the correct antenna pattern in GPR diffraction tomography.

  12. Compact and Accurate Turbocharger Modelling for Engine Control

    DEFF Research Database (Denmark)

    Sorenson, Spencer C; Hendricks, Elbert; Magnússon, Sigurjón

    2005-01-01

With the current trend towards engine downsizing, the use of turbochargers to obtain extra engine power has become common. A great difficulty in the use of turbochargers is the modelling of the compressor map. In general this is done by inserting the compressor map directly into the engine ECU ... turbochargers with radial compressors for either Spark Ignition (SI) or diesel engines...

  13. Composite PET and MRI for accurate localization and metabolic modeling

    International Nuclear Information System (INIS)

    Bidaut, L.

    1991-01-01

This paper reports that, in order to help analyze PET data and take full advantage of their metabolic content, a system was designed and implemented to align and process data from various medical imaging modalities, particularly (but not only) for brain studies. Although this system is for now mostly used for anatomical localization, multi-modality ROIs and pharmaco-kinetic modeling, more multi-modality protocols will be implemented in the future, not only to help in PET reconstruction data correction and semi-automated ROI definition, but also to help improve diagnostic accuracy along with surgery and therapy planning.

  14. Accurate, model-based tuning of synthetic gene expression using introns in S. cerevisiae.

    Directory of Open Access Journals (Sweden)

    Ido Yofe

    2014-06-01

Full Text Available Introns are key regulators of eukaryotic gene expression and present a potentially powerful tool for the design of synthetic eukaryotic gene expression systems. However, intronic control over gene expression is governed by a multitude of complex, incompletely understood regulatory mechanisms. Despite this lack of detailed mechanistic understanding, here we show how a relatively simple model enables accurate and predictable tuning of a synthetic gene expression system in yeast using several predictive intron features, such as transcript folding and sequence motifs. Using only natural Saccharomyces cerevisiae introns as regulators, we demonstrate fine and accurate control over gene expression spanning a 100-fold expression range. These results broaden the engineering toolbox of synthetic gene expression systems and provide a framework in which precise and robust tuning of gene expression is accomplished.

  15. The effects of video modeling with voiceover instruction on accurate implementation of discrete-trial instruction.

    Science.gov (United States)

    Vladescu, Jason C; Carroll, Regina; Paden, Amber; Kodak, Tiffany M

    2012-01-01

    The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The results showed that the staff trainees' accurate implementation of DTI remained high, and both child participants acquired new skills. These findings provide additional support that VM may be an effective method to train staff members to conduct DTI.

  16. Microbiome Data Accurately Predicts the Postmortem Interval Using Random Forest Regression Models

    Directory of Open Access Journals (Sweden)

    Aeriel Belk

    2018-02-01

Full Text Available Death investigations often include an effort to establish the postmortem interval (PMI) in cases in which the time of death is uncertain. The postmortem interval can lead to the identification of the deceased and the validation of witness statements and suspect alibis. Recent research has demonstrated that microbes provide an accurate clock that starts at death and relies on ecological change in the microbial communities that normally inhabit a body and its surrounding environment. Here, we explore how to build the most robust Random Forest regression models for prediction of PMI by testing models built on different sample types (gravesoil, skin of the torso, skin of the head), gene markers (16S ribosomal RNA (rRNA), 18S rRNA, internal transcribed spacer regions (ITS)), and taxonomic levels (sequence variants, species, genus, etc.). We also tested whether particular suites of indicator microbes were informative across different datasets. Generally, results indicate that the most accurate models for predicting PMI were built using gravesoil and skin data using the 16S rRNA genetic marker at the taxonomic level of phyla. Additionally, several phyla consistently contributed highly to model accuracy and may be candidate indicators of PMI.
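The modelling approach described here can be sketched with scikit-learn's RandomForestRegressor. The synthetic taxon-abundance matrix below merely stands in for real 16S rRNA profiles; the drift-plus-noise data generator is an invented toy, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in for phylum-level abundances: each row is a sample,
# each column a taxon whose abundance drifts with time since death.
n_samples, n_taxa = 200, 10
pmi_days = rng.uniform(0, 48, n_samples)                  # postmortem interval (days)
drift = np.outer(pmi_days, rng.normal(0.5, 0.2, n_taxa))  # taxa respond to PMI
X = drift + rng.normal(0, 2.0, (n_samples, n_taxa))       # plus measurement noise

# Fit on 150 samples, hold out 50 for evaluation
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:150], pmi_days[:150])

pred = model.predict(X[150:])
mae = np.mean(np.abs(pred - pmi_days[150:]))
print(f"mean absolute error: {mae:.1f} days")
```

The study's "candidate indicator" question maps onto `model.feature_importances_`, which ranks taxa by their contribution to the regression.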

  17. BEYOND ELLIPSE(S): ACCURATELY MODELING THE ISOPHOTAL STRUCTURE OF GALAXIES WITH ISOFIT AND CMODEL

    International Nuclear Information System (INIS)

    Ciambur, B. C.

    2015-01-01

This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen to be representative case-studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxyness/diskyness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of overlapping sources such as globular clusters and the optical counterparts of X-ray sources.

  18. Integrating GPS, GYRO, vehicle speed sensor, and digital map to provide accurate and real-time position in an intelligent navigation system

    Science.gov (United States)

    Li, Qingquan; Fang, Zhixiang; Li, Hanwu; Xiao, Hui

    2005-10-01

The global positioning system (GPS) has become the most extensively used positioning and navigation tool in the world. Applications of GPS abound in surveying, mapping, transportation, agriculture, military planning, GIS, and the geosciences. However, the positional and elevation accuracy of any given GPS location is prone to error, due to a number of factors. GPS positioning is increasingly popular, and intelligent navigation systems that rely on GPS and dead reckoning (DR) technology are developing quickly for a large future market in China. In this paper a practical combined GPS/DR/MM positioning model is put forward, which integrates GPS, a gyroscope, a vehicle speed sensor (VSS) and digital navigation maps to provide accurate and real-time positions for an intelligent navigation system. The model is designed for automotive navigation systems and uses a Kalman filter to improve position and map-matching accuracy by filtering raw GPS and DR signals; map-matching technology then provides map coordinates for map display. To illustrate the validity of the model, several experiments and their results of integrated GPS/DR positioning in an intelligent navigation system are shown, leading to the conclusion that the Kalman-filter-based GPS/DR positioning approach is necessary, feasible and efficient for intelligent navigation applications. Certainly, this combined positioning model, like others, cannot resolve every situation. Finally, some suggestions are given for further improving the integrated GPS/DR/MM application.
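A minimal one-dimensional sketch of the Kalman-filter fusion step the paper describes, with invented noise parameters: a constant-velocity state model plays the role of dead reckoning, and noisy position fixes play the role of GPS. The real system fuses gyro heading and wheel-speed measurements in 2-D, so this is only the core predict/update cycle:

```python
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])   # constant-velocity motion model (DR step)
H = np.array([[1.0, 0.0]])        # GPS observes position only
Q = np.diag([0.1, 0.1])           # process noise, stands in for DR drift
R = np.array([[25.0]])            # GPS noise, ~5 m standard deviation

rng = np.random.default_rng(1)
true_pos = np.cumsum(np.full(60, 10.0) * dt)   # vehicle moving at 10 m/s
gps = true_pos + rng.normal(0, 5.0, 60)        # noisy GPS fixes

x = np.array([0.0, 10.0])   # initial state [position, velocity]
P = np.eye(2)
est = []
for z in gps:
    # predict (dead-reckoning propagation)
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the GPS measurement
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

err_raw = np.mean(np.abs(gps - true_pos))
err_kf = np.mean(np.abs(np.array(est) - true_pos))
print(f"raw GPS error {err_raw:.2f} m, filtered error {err_kf:.2f} m")
```

The filtered track is then what a map-matching stage would snap to the digital road network.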

  19. Technical Note: Using experimentally determined proton spot scanning timing parameters to accurately model beam delivery time.

    Science.gov (United States)

    Shen, Jiajian; Tryggestad, Erik; Younkin, James E; Keole, Sameer R; Furutani, Keith M; Kang, Yixiu; Herman, Michael G; Bues, Martin

    2017-10-01

The aim was to accurately model the beam delivery time (BDT) for a synchrotron-based proton spot scanning system using experimentally determined beam parameters. A model to simulate the proton spot delivery sequences was constructed, and BDT was calculated by summing times for layer switch, spot switch, and spot delivery. Test plans were designed to isolate and quantify the relevant beam parameters in the operation cycle of the proton beam therapy delivery system. These parameters included the layer switch time, magnet preparation and verification time, average beam scanning speeds in x- and y-directions, proton spill rate, and maximum charge and maximum extraction time for each spill. The experimentally determined parameters, as well as the nominal values initially provided by the vendor, served as inputs to the model to predict BDTs for 602 clinical proton beam deliveries. The calculated BDTs (T_BDT) were compared with the BDTs recorded in the treatment delivery log files (T_Log): Δt = T_Log − T_BDT. The experimentally determined average layer switch time for all 97 energies was 1.91 s (ranging from 1.9 to 2.0 s for beam energies from 71.3 to 228.8 MeV), the average magnet preparation and verification time was 1.93 ms, the average scanning speeds were 5.9 m/s in the x-direction and 19.3 m/s in the y-direction, the proton spill rate was 8.7 MU/s, and the maximum proton charge available for one acceleration was 2.0 ± 0.4 nC. Some of the measured parameters differed from the nominal values provided by the vendor. The calculated BDTs using experimentally determined parameters matched the recorded BDTs of 602 beam deliveries (Δt = −0.49 ± 1.44 s), which were significantly more accurate than BDTs calculated using nominal timing parameters (Δt = −7.48 ± 6.97 s). An accurate model for BDT prediction was achieved by using the experimentally determined proton beam therapy delivery parameters, which may be useful in modeling the interplay effect and patient throughput. The model may
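The summation the abstract describes can be sketched directly from the measured parameters it reports. This simplified version ignores the spill refill and maximum-charge constraints that the full model accounts for, and the example spot plan is invented:

```python
# Measured timing parameters reported in the abstract
LAYER_SWITCH_S = 1.91     # average layer switch time
MAGNET_PREP_S = 0.00193   # magnet preparation and verification time
SPEED_X, SPEED_Y = 5.9, 19.3   # scanning speeds, m/s
SPILL_RATE = 8.7               # proton spill rate, MU/s

def beam_delivery_time(layers):
    """layers: list of energy layers, each a list of (x_m, y_m, mu) spots.
    BDT = sum of layer-switch, spot-switch and spot-delivery times."""
    t = 0.0
    for layer in layers:
        t += LAYER_SWITCH_S
        prev = None
        for x, y, mu in layer:
            if prev is not None:
                # spot switch: magnet preparation plus scanning travel,
                # limited by the slower axis
                dx, dy = abs(x - prev[0]), abs(y - prev[1])
                t += MAGNET_PREP_S + max(dx / SPEED_X, dy / SPEED_Y)
            t += mu / SPILL_RATE   # spot delivery at the spill rate
            prev = (x, y)
    return t

# Hypothetical single-layer plan: three spots totalling 4 MU
plan = [[(0.0, 0.0, 1.0), (0.01, 0.0, 1.0), (0.01, 0.05, 2.0)]]
print(f"{beam_delivery_time(plan):.3f} s")
```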

  20. Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter

    Science.gov (United States)

    2009-03-31

AFRL-RV-HA-TR-2009-1055, final scientific report, 02-08-2006 – 31-12-2008. ...m (or even 500 m) at mid to high latitudes. At low latitudes, the FDTD model exhibits variations that make it difficult to determine a reliable...

  1. A Study of the DeLone & McLean Information System Success Model among Users of the Accurate Accounting Information System in the City of Sukabumi

    OpenAIRE

    Hudin, Jamal Maulana; Riana, Dwiza

    2016-01-01

Accurate accounting information system is one of the accounting information systems used in the six companies in the city of Sukabumi. The DeLone and McLean information system success model is a suitable model to measure the success of the application of information systems in an organization or company. This study will analyze factors that measure the success of the DeLone & McLean information systems model for the users of the Accurate accounting information systems in six companies in the city of Sukabumi. ...

  2. A new model with an anatomically accurate human renal collecting system for training in fluoroscopy-guided percutaneous nephrolithotomy access.

    Science.gov (United States)

    Turney, Benjamin W

    2014-03-01

    Obtaining renal access is one of the most important and complex steps in learning percutaneous nephrolithotomy (PCNL). Ideally, this skill should be practiced outside the operating room. There is a need for anatomically accurate and cheap models for simulated training. The objective was to develop a cost-effective, anatomically accurate, nonbiologic training model for simulated PCNL access under fluoroscopic guidance. Collecting systems from routine computed tomography urograms were extracted and reformatted using specialized software. These images were printed in a water-soluble plastic on a three-dimensional (3D) printer to create biomodels. These models were embedded in silicone and then the models were dissolved in water to leave a hollow collecting system within a silicone model. These PCNL models were filled with contrast medium and sealed. A layer of dense foam acted as a spacer to replicate the tissues between skin and kidney. 3D printed models of human collecting systems are a useful adjunct in planning PCNL access. The PCNL access training model is relatively low cost and reproduces the anatomy of the renal collecting system faithfully. A range of models reflecting the variety and complexity of human collecting systems can be reproduced. The fluoroscopic triangulation process needed to target the calix of choice can be practiced successfully in this model. This silicone PCNL training model accurately replicates the anatomic architecture and orientation of the human renal collecting system. It provides a safe, clean, and effective model for training in accurate fluoroscopy-guided PCNL access.

  3. A different interpretation of Einstein's viscosity equation provides accurate representations of the behavior of hydrophilic solutes to high concentrations.

    Science.gov (United States)

    Zavitsas, Andreas A

    2012-08-23

Viscosities of aqueous solutions of many highly soluble hydrophilic solutes with hydroxyl and amino groups are examined with a focus on improving the concentration range over which Einstein's relationship between solution viscosity and solute volume, V, is applicable accurately. V is the hydrodynamic effective volume of the solute, including any water strongly bound to it and acting as a single entity with it. The widespread practice is to relate the relative viscosity of solute to solvent, η/η_0, to V/V_tot, where V_tot is the total volume of the solution. For solutions that are not infinitely dilute, it is shown that the volume ratio must be expressed as V/V_0, where V_0 = V_tot − V. V_0 is the volume of water not bound to the solute, the "free" water solvent. At infinite dilution, V/V_0 = V/V_tot. For the solutions examined, the proportionality constant between the relative viscosity and volume ratio is shown to be 2.9, rather than the 2.5 commonly used. To understand the phenomena relating to viscosity, the hydrodynamic effective volume of water is important. It is estimated to be between 54 and 85 cm³. With the above interpretations of Einstein's equation, which are consistent with his stated reasoning, the relation between the viscosity and volume ratio remains accurate to much higher concentrations than those attainable with any of the other relations examined that express the volume ratio as V/V_tot.
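The two conventions contrasted in the abstract differ only in the denominator of the volume ratio, which can be seen with a few lines of arithmetic (the 20% solute volume below is an arbitrary illustrative value):

```python
def relative_viscosity(v_solute, v_total, k=2.9):
    """Einstein relation with the ratio over free solvent: V0 = Vtot - V."""
    v0 = v_total - v_solute
    return 1.0 + k * v_solute / v0

def relative_viscosity_dilute(v_solute, v_total, k=2.9):
    """Common form using V/Vtot, exact only at infinite dilution."""
    return 1.0 + k * v_solute / v_total

# At 20% effective solute volume the two conventions already differ noticeably.
print(relative_viscosity(20.0, 100.0))         # ratio V/V0 = 20/80
print(relative_viscosity_dilute(20.0, 100.0))  # ratio V/Vtot = 20/100
```

As the concentration grows, V/V_0 diverges from V/V_tot, which is why the free-solvent form stays accurate to much higher concentrations.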

  4. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    Science.gov (United States)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  5. Guide to Working with Model Providers.

    Science.gov (United States)

    Walter, Katie; Hassel, Bryan C.

    Often a central feature of a school's improvement efforts is the adoption of a Comprehensive School Reform (CSR) model, an externally developed research-based design for school improvement. Adopting a model is only the first step in CSR. Another important step is forging partnerships with developers of CSR models. This guide aims to help schools…

  6. Modeling Site Heterogeneity with Posterior Mean Site Frequency Profiles Accelerates Accurate Phylogenomic Estimation.

    Science.gov (United States)

    Wang, Huai-Chun; Minh, Bui Quang; Susko, Edward; Roger, Andrew J

    2018-03-01

Proteins have distinct structural and functional constraints at different sites that lead to site-specific preferences for particular amino acid residues as the sequences evolve. Heterogeneity in the amino acid substitution process between sites is not modeled by commonly used empirical amino acid exchange matrices. Such model misspecification can lead to artefacts in phylogenetic estimation such as long-branch attraction. Although sophisticated site-heterogeneous mixture models have been developed to address this problem in both Bayesian and maximum likelihood (ML) frameworks, their formidable computational time and memory usage severely limits their use in large phylogenomic analyses. Here we propose a posterior mean site frequency (PMSF) method as a rapid and efficient approximation to full empirical profile mixture models for ML analysis. The PMSF approach assigns a conditional mean amino acid frequency profile to each site calculated based on a mixture model fitted to the data using a preliminary guide tree. These PMSF profiles can then be used for in-depth tree-searching in place of the full mixture model. Compared with widely used empirical mixture models with k classes, our implementation of PMSF in IQ-TREE (http://www.iqtree.org) speeds up the computation by approximately k/1.5-fold and requires a small fraction of the RAM. Furthermore, this speedup allows, for the first time, full nonparametric bootstrap analyses to be conducted under complex site-heterogeneous models on large concatenated data matrices. Our simulations and empirical data analyses demonstrate that PMSF can effectively ameliorate long-branch attraction artefacts. In some empirical and simulation settings PMSF provided more accurate estimates of phylogenies than the mixture models from which they derive.

  7. Simplifying ART cohort monitoring: Can pharmacy stocks provide accurate estimates of patients retained on antiretroviral therapy in Malawi?

    Directory of Open Access Journals (Sweden)

    Tweya Hannock

    2012-07-01

    Full Text Available Abstract Background Routine monitoring of patients on antiretroviral therapy (ART is crucial for measuring program success and accurate drug forecasting. However, compiling data from patient registers to measure retention in ART is labour-intensive. To address this challenge, we conducted a pilot study in Malawi to assess whether patient ART retention could be determined using pharmacy records as compared to estimates of retention based on standardized paper- or electronic based cohort reports. Methods Twelve ART facilities were included in the study: six used paper-based registers and six used electronic data systems. One ART facility implemented an electronic data system in quarter three and was included as a paper-based system facility in quarter two only. Routine patient retention cohort reports, paper or electronic, were collected from facilities for both quarter two [April–June] and quarter three [July–September], 2010. Pharmacy stock data were also collected from the 12 ART facilities over the same period. Numbers of ART continuation bottles recorded on pharmacy stock cards at the beginning and end of each quarter were documented. These pharmacy data were used to calculate the total bottles dispensed to patients in each quarter with intent to estimate the number of patients retained on ART. Information for time required to determine ART retention was gathered through interviews with clinicians tasked with compiling the data. Results Among ART clinics with paper-based systems, three of six facilities in quarter two and four of five facilities in quarter three had similar numbers of patients retained on ART comparing cohort reports to pharmacy stock records. In ART clinics with electronic systems, five of six facilities in quarter two and five of seven facilities in quarter three had similar numbers of patients retained on ART when comparing retention numbers from electronically generated cohort reports to pharmacy stock records. Among

  8. An accurate analytical solution of a zero-dimensional greenhouse model for global warming

    International Nuclear Information System (INIS)

    Foong, S K

    2006-01-01

In introducing the complex subject of global warming, books and papers usually use the zero-dimensional greenhouse model. When the ratio of the infrared radiation energy of the Earth's surface that is lost to outer space to the non-reflected average solar radiation energy is small, the model admits an accurate approximate analytical solution: the resulting energy balance equation of the model is a quartic equation that can be solved analytically, and thus provides an alternative solution and instructional strategy. A search through the literature fails to find an analytical solution, suggesting that the solution may be new. In this paper, we review the model, derive the approximation and obtain its solution. The dependence of the temperature of the surface of the Earth and the temperature of the atmosphere on seven parameters is made explicit. A simple and convenient formula for global warming (or cooling) in terms of the percentage change of the parameters is derived. The dependence of the surface temperature on the parameters is illustrated by several representative graphs.

  9. Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware.

    Science.gov (United States)

    Daneels, Glenn; Municio, Esteban; Van de Velde, Bruno; Ergeerts, Glenn; Weyn, Maarten; Latré, Steven; Famaey, Jeroen

    2018-02-02

    The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks.
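The core of such a model is an accounting identity: energy per timeslot is the sum over device states of duration × current × voltage. The state durations and current draws below are invented placeholders, not the measured OpenMote values from the paper:

```python
# Hypothetical per-state currents (mA) and durations (ms) for one TSCH
# timeslot; the real model tracks every CPU/radio state change on the OpenMote.
VOLTAGE = 3.0  # supply voltage, V

def slot_energy_uj(states):
    """states: list of (duration_ms, current_ma); returns energy in microjoules,
    since mA * ms * V = uJ."""
    return sum(d * i * VOLTAGE for d, i in states)

tx_slot = [
    (1.0, 2.0),    # CPU wake-up and packet preparation
    (0.5, 0.5),    # radio ramp-up
    (2.5, 24.0),   # transmit
    (1.0, 20.0),   # listen for the ACK
    (0.3, 2.0),    # CPU post-processing, back to sleep
]
sleep_slot = [(10.0, 0.002)]  # deep sleep for a whole 10 ms slot

print(f"TX slot: {slot_energy_uj(tx_slot):.2f} uJ, "
      f"sleep slot: {slot_energy_uj(sleep_slot):.3f} uJ")
```

Summing slot energies over a schedule then yields the end-to-end consumption figures the paper validates against measurements.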

  10. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    Energy Technology Data Exchange (ETDEWEB)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    2017-02-01

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  11. Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data

    Science.gov (United States)

    Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej

    2016-04-01

GNSS-derived regional ionosphere models are widely used in precise positioning, ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier-phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is a closed solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolutions of 0.2 x 0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals to calibrated carrier-phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The post-fit residuals obtained with UWM maps are lower by one order of magnitude compared to IGS maps. The accuracy of UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
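A thin-plate-spline surface fit of the kind described can be sketched with SciPy's RBFInterpolator, whose `thin_plate_spline` kernel solves the same variational smoothing problem. The lat/lon sample points and the linear-trend TEC field below are synthetic stand-ins for real carrier-phase-derived observations:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

# Hypothetical TEC samples over a lat/lon box (degrees), standing in for the
# carrier-phase-derived observations used by the actual model.
pts = rng.uniform([45.0, 10.0], [60.0, 30.0], (300, 2))
tec = (10.0 + 0.4 * (pts[:, 0] - 45.0) + 0.1 * (pts[:, 1] - 10.0)
       + rng.normal(0, 0.2, 300))  # TECU, with observation noise

# Thin-plate-spline fit with light smoothing, as in the variational formulation
tps = RBFInterpolator(pts, tec, kernel="thin_plate_spline", smoothing=1.0)

grid = np.array([[52.0, 20.0], [55.0, 25.0]])
print(tps(grid))  # interpolated TEC (TECU) at the query points
```

Evaluating the fitted surface on a regular 0.2 x 0.2 degree grid would then produce a TEC map of the kind the model distributes.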

  12. Production of Accurate Skeletal Models of Domestic Animals Using Three-Dimensional Scanning and Printing Technology

    Science.gov (United States)

    Li, Fangzheng; Liu, Chunying; Song, Xuexiong; Huan, Yanjun; Gao, Shansong; Jiang, Zhongling

    2018-01-01

    Access to adequate anatomical specimens can be an important aspect in learning the anatomy of domestic animals. In this study, the authors utilized a structured light scanner and fused deposition modeling (FDM) printer to produce highly accurate animal skeletal models. First, various components of the bovine skeleton, including the femur, the…

  13. Measuring Physical Inactivity: Do Current Measures Provide an Accurate View of “Sedentary” Video Game Time?

    Directory of Open Access Journals (Sweden)

    Simon Fullerton

    2014-01-01

    Background. Measures of screen time are often used to assess sedentary behaviour. Participation in activity-based video games (exergames) can contribute to estimates of screen time, as current practices of measuring it do not consider the growing evidence that playing exergames can provide light to moderate levels of physical activity. This study aimed to determine what proportion of time spent playing video games was actually spent playing exergames. Methods. Data were collected via a cross-sectional telephone survey in South Australia. Participants aged 18 years and above (n=2026) were asked about their video game habits, as well as demographic and socioeconomic factors. In cases where children were in the household, the video game habits of a randomly selected child were also questioned. Results. Overall, 31.3% of adults and 79.9% of children spend at least some time playing video games. Of these, 24.1% of adults and 42.1% of children play exergames, with these types of games accounting for a third of all time that adults spend playing video games and nearly 20% of children’s video game time. Conclusions. A substantial proportion of time that would usually be classified as “sedentary” may actually be spent participating in light to moderate physical activity.

  14. Total inpatient treatment costs in patients with severe burns: towards a more accurate reimbursement model.

    Science.gov (United States)

    Mehra, Tarun; Koljonen, Virve; Seifert, Burkhardt; Volbracht, Jörk; Giovanoli, Pietro; Plock, Jan; Moos, Rudolf Maria

    2015-01-01

    Reimbursement systems have difficulties depicting the actual cost of burn treatment, leaving care providers with a significant financial burden. Our aim was to establish a simple and accurate reimbursement model compatible with prospective payment systems. A total of 370 966 electronic medical records of patients discharged in 2012 to 2013 from Swiss university hospitals were reviewed. A total of 828 cases of burns including 109 cases of severe burns were retained. Costs, revenues and earnings for severe and nonsevere burns were analysed and a linear regression model predicting total inpatient treatment costs was established. The median total cost per case for severe burns was tenfold higher than for nonsevere burns (179 949 CHF [167 353 EUR] vs 11 312 CHF [10 520 EUR], interquartile ranges 96 782-328 618 CHF vs 4 874-27 783 CHF, p <0.001). The median earnings per case for nonsevere burns were 588 CHF (547 EUR) (interquartile range -6 720 - 5 354 CHF), whereas severe burns incurred a large financial loss to care providers, with median earnings of -33 178 CHF (-30 856 EUR) (interquartile range -95 533 - 23 662 CHF). Differences were highly significant (p <0.001). Our linear regression model predicting total costs per case with length of stay (LOS) as the independent variable had an adjusted R2 of 0.67 (p <0.001 for LOS). Severe burns are systematically underfunded within the Swiss reimbursement system. Flat-rate DRG-based refunds poorly reflect the actual treatment costs. In conclusion, we suggest a reimbursement model based on a per diem rate for treatment of severe burns.
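    The paper's cost model is a single-predictor linear regression of total cost on length of stay (LOS). A minimal sketch on synthetic data follows; the slope, noise level, and case count below are invented stand-ins, not the Swiss hospital data.

```python
import numpy as np

# Synthetic stand-in for the burn cases: total cost roughly proportional to
# length of stay plus noise. Slope and noise are invented, not from the paper.
rng = np.random.default_rng(1)
los = rng.uniform(1, 120, 200)                    # length of stay (days)
cost = 4000.0 * los + rng.normal(0, 20000, 200)   # total cost (CHF)

# Ordinary least squares with LOS as the only predictor.
slope, intercept = np.polyfit(los, cost, 1)

# Adjusted R^2 for the single-predictor model (the paper reports 0.67).
pred = slope * los + intercept
ss_res = float(np.sum((cost - pred) ** 2))
ss_tot = float(np.sum((cost - cost.mean()) ** 2))
n, p = len(los), 1
r2 = 1.0 - ss_res / ss_tot
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
print(round(float(slope)), round(r2_adj, 2))
```

    The fitted slope plays the role of the per diem rate the authors propose for reimbursement.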

  15. Accurate monoenergetic electron parameters of laser wakefield in a bubble model

    Science.gov (United States)

    Raheli, A.; Rahmatallahpur, S. H.

    2012-11-01

    A reliable analytical expression for the potential of plasma waves with phase velocities near the speed of light is derived. The presented spheroid cavity model is more consistent than the previous spherical and ellipsoidal models, and it explains the mono-energetic electron trajectory more accurately, especially in the relativistic region. As a result, the quasi-mono-energetic electron output beam interacting with the laser plasma can be more appropriately described with this model.

  16. Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model

    Science.gov (United States)

    Ahlgren, K.; Li, X.

    2017-12-01

    Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Secondly, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. The accuracy of each geoid model

  17. Modeling of capacitor charging dynamics in an energy harvesting system considering accurate electromechanical coupling effects

    Science.gov (United States)

    Bagheri, Shahriar; Wu, Nan; Filizadeh, Shaahin

    2018-06-01

    This paper presents an iterative numerical method that accurately models an energy harvesting system charging a capacitor with piezoelectric patches. The constitutive relations of piezoelectric materials connected with an external charging circuit with a diode bridge and capacitors lead to the electromechanical coupling effect and the difficulty of deriving accurate transient mechanical response, as well as the charging progress. The proposed model is built upon the Euler-Bernoulli beam theory and takes into account the electromechanical coupling effects as well as the dynamic process of charging an external storage capacitor. The model is validated through experimental tests on a cantilever beam coated with piezoelectric patches. Several parametric studies are performed and the functionality of the model is verified. The efficiency of power harvesting system can be predicted and tuned considering variations in different design parameters. Such a model can be utilized to design robust and optimal energy harvesting system.

  18. Fast and accurate exercise policies for Bermudan swaptions in the LIBOR market model

    NARCIS (Netherlands)

    P.K. Karlsson (Patrik); S. Jain (Shashi); C.W. Oosterlee (Kees)

    2016-01-01

    This paper describes an American Monte Carlo approach for obtaining fast and accurate exercise policies for pricing of callable LIBOR Exotics (e.g., Bermudan swaptions) in the LIBOR market model using the Stochastic Grid Bundling Method (SGBM). SGBM is a bundling and regression based

  19. In-situ measurements of material thermal parameters for accurate LED lamp thermal modelling

    NARCIS (Netherlands)

    Vellvehi, M.; Perpina, X.; Jorda, X.; Werkhoven, R.J.; Kunen, J.M.G.; Jakovenko, J.; Bancken, P.; Bolt, P.J.

    2013-01-01

    This work deals with the extraction of key thermal parameters for accurate thermal modelling of LED lamps: air exchange coefficient around the lamp, emissivity and thermal conductivity of all lamp parts. As a case study, an 8W retrofit lamp is presented. To assess simulation results, temperature is

  20. ACCURATE UNIVERSAL MODELS FOR THE MASS ACCRETION HISTORIES AND CONCENTRATIONS OF DARK MATTER HALOS

    International Nuclear Information System (INIS)

    Zhao, D. H.; Jing, Y. P.; Mo, H. J.; Boerner, G.

    2009-01-01

    A large number of observations have constrained cosmological parameters and the initial density fluctuation spectrum to a very high accuracy. However, cosmological parameters change with time and the power index of the power spectrum dramatically varies with mass scale in the so-called concordance ΛCDM cosmology. Thus, any successful model for its structural evolution should work well simultaneously for various cosmological models and different power spectra. We use a large set of high-resolution N-body simulations of a variety of structure formation models (scale-free, standard CDM, open CDM, and ΛCDM) to study the mass accretion histories, the mass and redshift dependence of concentrations, and the concentration evolution histories of dark matter halos. We find that there is significant disagreement between the much-used empirical models in the literature and our simulations. Based on our simulation results, we find that the mass accretion rate of a halo is tightly correlated with a simple function of its mass, the redshift, parameters of the cosmology, and of the initial density fluctuation spectrum, which correctly disentangles the effects of all these factors and halo environments. We also find that the concentration of a halo is strongly correlated with the universe age when its progenitor on the mass accretion history first reaches 4% of its current mass. According to these correlations, we develop new empirical models for both the mass accretion histories and the concentration evolution histories of dark matter halos, and the latter can also be used to predict the mass and redshift dependence of halo concentrations. These models are accurate and universal: the same set of model parameters works well for different cosmological models and for halos of different masses at different redshifts, and in the ΛCDM case the model predictions match the simulation results very well even though halo mass is traced to about 0.0005 times the final mass, when

  1. Accurate protein structure modeling using sparse NMR data and homologous structure information.

    Science.gov (United States)

    Thompson, James M; Sgourakis, Nikolaos G; Liu, Gaohua; Rossi, Paolo; Tang, Yuefeng; Mills, Jeffrey L; Szyperski, Thomas; Montelione, Gaetano T; Baker, David

    2012-06-19

    While information from homologous structures plays a central role in X-ray structure determination by molecular replacement, such information is rarely used in NMR structure determination because it can be incorrect, both locally and globally, when evolutionary relationships are inferred incorrectly or there has been considerable evolutionary structural divergence. Here we describe a method that allows robust modeling of protein structures of up to 225 residues by combining ¹HN, ¹³C, and ¹⁵N backbone and ¹³Cβ chemical shift data, distance restraints derived from homologous structures, and a physically realistic all-atom energy function. Accurate models are distinguished from inaccurate models generated using incorrect sequence alignments by requiring that (i) the all-atom energies of models generated using the restraints are lower than those of models generated in unrestrained calculations and (ii) the low-energy structures converge to within 2.0 Å backbone rmsd over 75% of the protein. Benchmark calculations on known structures and blind targets show that the method can accurately model protein structures, even with very remote homology information, to a backbone rmsd of 1.2-1.9 Å relative to the conventionally determined NMR ensembles and of 0.9-1.6 Å relative to X-ray structures for well-defined regions of the protein structures. This approach facilitates the accurate modeling of protein structures using backbone chemical shift data without the need for side-chain resonance assignments and extensive analysis of NOESY cross-peak assignments.
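    The convergence criterion above rests on backbone rmsd after optimal superposition. A standard way to compute it is the Kabsch algorithm (SVD of the coordinate cross-covariance); this generic sketch is not the authors' pipeline, and the coordinates are random test data.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between matched coordinate sets after optimal rigid superposition
    (Kabsch algorithm: SVD of the cross-covariance, with a reflection guard)."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # rotation taking P onto Q
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P)))

# Sanity check: a rotated and translated copy should give rmsd ~ 0.
rng = np.random.default_rng(3)
P = rng.normal(size=(50, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])
print(kabsch_rmsd(P, Q))
```

    Restricting `P` and `Q` to backbone atoms of the well-defined region gives the "backbone rmsd over 75% of the protein" used as the acceptance test.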

  2. A new model for the accurate calculation of natural gas viscosity

    OpenAIRE

    Xiaohong Yang; Shunxi Zhang; Weiling Zhu

    2017-01-01

    Viscosity of natural gas is a basic and important parameter, of theoretical and practical significance in the domain of natural gas recovery, transmission and processing. In order to obtain accurate viscosity data efficiently at a low cost, a new model and its corresponding functional relation are derived on the basis of the relationship among viscosity, temperature and density given by the kinetic theory of gases. After the model parameters were optimized using a lot of experimental ...

  3. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    Science.gov (United States)

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

    In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigated the validity range of this generalized model and analytically describe the sufficient conditions for its validity. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.

  4. Simple, fast and accurate two-diode model for photovoltaic modules

    Energy Technology Data Exchange (ETDEWEB)

    Ishaque, Kashif; Salam, Zainal; Taheri, Hamed [Faculty of Electrical Engineering, Universiti Teknologi Malaysia, UTM 81310, Skudai, Johor Bahru (Malaysia)

    2011-02-15

    This paper proposes an improved modeling approach for the two-diode model of photovoltaic (PV) module. The main contribution of this work is the simplification of the current equation, in which only four parameters are required, compared to six or more in the previously developed two-diode models. Furthermore the values of the series and parallel resistances are computed using a simple and fast iterative method. To validate the accuracy of the proposed model, six PV modules of different types (multi-crystalline, mono-crystalline and thin-film) from various manufacturers are tested. The performance of the model is evaluated against the popular single diode models. It is found that the proposed model is superior when subjected to irradiance and temperature variations. In particular the model matches very accurately for all important points of the I-V curves, i.e. the peak power, short-circuit current and open circuit voltage. The modeling method is useful for PV power converter designers and circuit simulator developers who require simple, fast yet accurate model for the PV module. (author)
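    In any two-diode model the module current is implicit: I appears inside the diode terms through V + I·Rs, so each point of the I-V curve is found by root finding. The sketch below shows the generic two-diode equation solved with a bracketing root finder; all parameter values (Ipv, saturation currents, resistances, ideality factors) are illustrative placeholders, not the simplified four-parameter identification proposed in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def two_diode_current(V, Ipv=8.0, I01=1e-10, I02=1e-6, Rs=0.2, Rp=300.0,
                      Ns=36, T=298.15):
    """Module current I at terminal voltage V from the generic two-diode model.
    I is implicit (it appears in V + I*Rs), so solve f(I) = 0 by bracketing.
    All parameter values are illustrative, not fitted to any real module."""
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = Ns * k * T / q        # module thermal voltage (Ns cells in series)
    a1, a2 = 1.0, 2.0          # diode ideality factors (a common choice)

    def f(I):
        Vd = V + I * Rs
        return (Ipv
                - I01 * (np.exp(Vd / (a1 * Vt)) - 1.0)   # diffusion diode
                - I02 * (np.exp(Vd / (a2 * Vt)) - 1.0)   # recombination diode
                - Vd / Rp                                # shunt leakage
                - I)

    return brentq(f, -2.0 * Ipv, 2.0 * Ipv)

# At short circuit (V = 0) the current is close to the photocurrent Ipv.
Isc = two_diode_current(0.0)
print(round(Isc, 3))
```

    Sweeping `V` from 0 to the open-circuit voltage traces the full I-V curve, from which the peak power point can be read off.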

  5. An accurate model for numerical prediction of piezoelectric energy harvesting from fluid structure interaction problems

    International Nuclear Information System (INIS)

    Amini, Y; Emdad, H; Farid, M

    2014-01-01

    Piezoelectric energy harvesting (PEH) from ambient energy sources, particularly vibrations, has attracted considerable interest throughout the last decade. Since fluid flow has a high energy density, it is one of the best candidates for PEH. Indeed, a piezoelectric energy harvesting process from fluid flow takes the form of a natural three-way coupling of the turbulent fluid flow, the electromechanical effect of the piezoelectric material and the electrical circuit. There are some experimental and numerical studies on piezoelectric energy harvesting from fluid flow in the literature. Nevertheless, an accurate model for predicting the characteristics of this three-way coupling has not yet been developed. In the present study, an accurate model for this triple coupling is developed and validated by experimental results. A new code based on this model is developed on the OpenFOAM platform. (paper)

  6. Modeling of Non-Gravitational Forces for Precise and Accurate Orbit Determination

    Science.gov (United States)

    Hackel, Stefan; Gisinger, Christoph; Steigenberger, Peter; Balss, Ulrich; Montenbruck, Oliver; Eineder, Michael

    2014-05-01

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The precise reconstruction of the satellite's trajectory is based on Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency Integrated Geodetic and Occultation Receiver (IGOR) onboard the spacecraft. The increasing demand for precise radar products calls for validation methods, which require precise and accurate orbit products. An analysis of the orbit quality by means of internal and external validation methods on long and short timescales shows systematics that reflect deficits in the employed force models. Following the analysis of these deficits, possible solution strategies are highlighted in the presentation. The employed Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for gravitational and non-gravitational forces. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). The satellite TerraSAR-X flies on a dusk-dawn orbit at an altitude of approximately 510 km. Due to this constellation, the Sun almost constantly illuminates the satellite, which causes strong across-track accelerations on the plane perpendicular to the solar rays. The indirect effect of the solar radiation is called Earth Radiation Pressure (ERP). This force depends on the sunlight which is reflected by the illuminated Earth surface (visible spectrum) and on the emission of the Earth body in the infrared spectrum. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed. The scope of

  7. Accurate Modelling of Surface Currents and Internal Tides in a Semi-enclosed Coastal Sea

    Science.gov (United States)

    Allen, S. E.; Soontiens, N. K.; Dunn, M. B. H.; Liu, J.; Olson, E.; Halverson, M. J.; Pawlowicz, R.

    2016-02-01

    The Strait of Georgia is a deep (400 m), strongly stratified, semi-enclosed coastal sea on the west coast of North America. We have configured a baroclinic model of the Strait of Georgia and surrounding coastal waters using the NEMO ocean community model. We run daily nowcasts and forecasts and publish our sea-surface results (including storm surge warnings) to the web (salishsea.eos.ubc.ca/storm-surge). Tides in the Strait of Georgia are mixed and large. The baroclinic model and previous barotropic models accurately represent tidal sea-level variations and depth-mean currents. The baroclinic model reproduces accurately the diurnal but not the semi-diurnal baroclinic tidal currents. In the Southern Strait of Georgia, strong internal tidal currents at the semi-diurnal frequency are observed. Strong semi-diurnal tides are also produced in the model, but are almost 180 degrees out of phase with the observations. In the model, at the surface, the barotropic and baroclinic tides reinforce each other, whereas the observations show that at the surface the baroclinic tides oppose the barotropic. As such, the surface currents are very poorly modelled. Here we will present evidence of the internal tidal field from observations. We will discuss the generation regions of the tides, the necessary modifications to the model required to correct the phase, the resulting baroclinic tides and the improvements in the surface currents.

  8. Accurate path integration in continuous attractor network models of grid cells.

    Science.gov (United States)

    Burak, Yoram; Fiete, Ila R

    2009-02-01

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
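    The error accumulation the authors quantify can be illustrated, independently of any attractor network, by a random-walk argument: integrating a velocity signal corrupted by unbiased noise yields a position error whose RMS grows like the square root of elapsed time. The toy simulation below uses invented noise levels and durations, not the model's parameters.

```python
import numpy as np

# Toy random-walk picture of path-integration error (not the attractor network
# itself): the true velocity cancels, so the position error is the integral
# of the velocity noise alone.
rng = np.random.default_rng(2)
dt, n_steps, n_trials = 0.02, 5000, 500      # 100 s of integration per trial
sigma = 0.5                                  # velocity noise std (m/s), invented

noise = rng.normal(0.0, sigma, size=(n_trials, n_steps))
err = np.cumsum(noise * dt, axis=1)          # per-trial position-error trajectory
rms = np.sqrt((err ** 2).mean(axis=0))       # RMS error across trials vs time

# Diffusive growth: rms(t) ~ sigma * sqrt(dt * t), so the ratio below is ~1.
t_end = n_steps * dt
ratio = float(rms[-1] / (sigma * np.sqrt(dt * t_end)))
print(round(ratio, 2))
```

    In the paper's networks the effective noise depends on network size and spike variability, which is what sets the quoted 10-100 m and 1-10 min integration limits.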

  9. Automatic generation of a subject-specific model for accurate markerless motion capture and biomechanical applications.

    Science.gov (United States)

    Corazza, Stefano; Gambaretto, Emiliano; Mündermann, Lars; Andriacchi, Thomas P

    2010-04-01

    A novel approach for the automatic generation of a subject-specific model consisting of morphological and joint location information is described. The aim is to address the need for efficient and accurate model generation for markerless motion capture (MMC) and biomechanical studies. The algorithm applies and expands on previous work on the human shape space by embedding location information for ten joint centers in a subject-specific free-form surface. The optimal locations of joint centers in the 3-D mesh were learned through linear regression over a set of nine subjects whose joint centers were known. The model was shown to be sufficiently accurate for both kinematic (joint centers) and morphological (shape of the body) information to allow accurate tracking with MMC systems. The automatic model generation algorithm was applied to 3-D meshes of different quality and resolution, such as laser scans and visual hulls. The complete method was tested using nine subjects of different gender, body mass index (BMI), age, and ethnicity. Experimental training error and cross-validation errors were 19 and 25 mm, respectively, on average over the joints of the ten subjects analyzed in the study.

  10. Do dual-route models accurately predict reading and spelling performance in individuals with acquired alexia and agraphia?

    Science.gov (United States)

    Rapcsak, Steven Z; Henry, Maya L; Teague, Sommer L; Carnahan, Susan D; Beeson, Pélagie M

    2007-06-18

    Coltheart and co-workers [Castles, A., Bates, T. C., & Coltheart, M. (2006). John Marshall and the developmental dyslexias. Aphasiology, 20, 871-892; Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204-256] have demonstrated that an equation derived from dual-route theory accurately predicts reading performance in young normal readers and in children with reading impairment due to developmental dyslexia or stroke. In this paper, we present evidence that the dual-route equation and a related multiple regression model also accurately predict both reading and spelling performance in adult neurological patients with acquired alexia and agraphia. These findings provide empirical support for dual-route theories of written language processing.

  11. Fast and accurate Bayesian model criticism and conflict diagnostics using R-INLA

    KAUST Repository

    Ferkingstad, Egil

    2017-10-16

    Bayesian hierarchical models are increasingly popular for realistic modelling and analysis of complex data. This trend is accompanied by the need for flexible, general and computationally efficient methods for model criticism and conflict detection. Usually, a Bayesian hierarchical model incorporates a grouping of the individual data points, as, for example, with individuals in repeated measurement data. In such cases, the following question arises: Are any of the groups “outliers,” or in conflict with the remaining groups? Existing general approaches aiming to answer such questions tend to be extremely computationally demanding when model fitting is based on Markov chain Monte Carlo. We show how group-level model criticism and conflict detection can be carried out quickly and accurately through integrated nested Laplace approximations (INLA). The new method is implemented as a part of the open-source R-INLA package for Bayesian computing (http://r-inla.org).

  12. Towards more accurate wind and solar power prediction by improving NWP model physics

    Science.gov (United States)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market is enabled and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m height above ground are used for the estimation of the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  13. Accurate modeling of high frequency microelectromechanical systems (MEMS) switches in time- and frequency-domain

    Directory of Open Access Journals (Sweden)

    F. Coccetti

    2003-01-01

    In this contribution we present an accurate investigation of three different techniques for the modeling of complex planar circuits. The EM analysis is performed by means of different electromagnetic full-wave solvers in the time domain and in the frequency domain. The first one is the Transmission Line Matrix (TLM) method. In the second one the TLM method is combined with the Integral Equation (IE) method. The latter is based on the Generalized Transverse Resonance Diffraction (GTRD). In order to test the methods we model different structures and compare the calculated S-parameters to measured results, with good agreement.

  14. Accurate corresponding point search using sphere-attribute-image for statistical bone model generation

    International Nuclear Information System (INIS)

    Saito, Toki; Nakajima, Yoshikazu; Sugita, Naohiko; Mitsuishi, Mamoru; Hashizume, Hiroyuki; Kuramoto, Kouichi; Nakashima, Yosio

    2011-01-01

    Statistical deformable model based two-dimensional/three-dimensional (2-D/3-D) registration is a promising method for estimating the position and shape of patient bone in the surgical space. Since its accuracy depends on the statistical model capacity, we propose a method for accurately generating a statistical bone model from a CT volume. Our method employs the Sphere-Attribute-Image (SAI) representation and improves the accuracy of corresponding-point search in statistical model generation. First, target bone surfaces are extracted as SAIs from the CT volume. Then the textures of the SAIs are classified into regions using the Maximally Stable Extremal Regions (MSER) method. Next, corresponding regions are determined using normalized cross-correlation (NCC). Finally, corresponding points in each corresponding region are determined using NCC. Our method was applied to femur bone models and worked well in the experiments. (author)

  15. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
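    The volume overlap metrics quoted here are standard and easy to state precisely. A small sketch of DSC and VD on toy binary masks follows; the geometry and voxel size are invented, not the CBCT data.

```python
import numpy as np

def dice_and_vd(seg, ref, voxel_volume=1.0):
    """Dice similarity coefficient (in %) and absolute volume difference
    for two binary segmentation masks of the same shape."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    dsc = 200.0 * inter / (seg.sum() + ref.sum())
    vd = abs(int(seg.sum()) - int(ref.sum())) * voxel_volume
    return float(dsc), float(vd)

# Toy case: a reference cube and a segmentation shifted by one voxel.
ref = np.zeros((20, 20, 20), dtype=bool)
seg = np.zeros_like(ref)
ref[5:15, 5:15, 5:15] = True
seg[6:16, 5:15, 5:15] = True
dsc, vd = dice_and_vd(seg, ref, voxel_volume=0.125)  # 0.5 mm isotropic voxels
print(dsc, vd)
```

    Note that the two metrics are complementary: the shifted cube here has identical volume (VD = 0) yet imperfect overlap (DSC < 100%).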

  16. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    International Nuclear Information System (INIS)

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei

    2015-01-01

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice-by-slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. A tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm
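The overlap metrics quoted in this record are standard and can be sketched directly. A minimal sketch of the volume difference (VD) and Dice similarity coefficient (DSC); the flattened binary voxel masks and the unit voxel volume are illustrative, not the paper's data:

```python
# Sketch of two volume-overlap metrics: volume difference (VD) and
# Dice similarity coefficient (DSC), computed on flattened binary masks.

def volume_difference(seg, ref, voxel_mm3=1.0):
    """VD: absolute difference of the segmented volumes, in mm^3."""
    return abs(sum(seg) - sum(ref)) * voxel_mm3

def dice(seg, ref):
    """DSC = 2|A ∩ B| / (|A| + |B|), expressed in percent."""
    inter = sum(1 for a, b in zip(seg, ref) if a and b)
    return 200.0 * inter / (sum(seg) + sum(ref))

seg = [1, 1, 1, 0, 0, 1]   # toy segmentation (1 = tooth voxel)
ref = [1, 1, 0, 0, 1, 1]   # toy ground truth
print(volume_difference(seg, ref))   # 0.0 (equal voxel counts)
print(dice(seg, ref))                # 75.0
```

Note that VD can be zero even when the masks disagree, which is why the paper reports surface-distance metrics alongside it.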

  17. Accurate anisotropic material modelling using only tensile tests for hot and cold forming

    Science.gov (United States)

    Abspoel, M.; Scholting, M. E.; Lansbergen, M.; Neelis, B. M.

    2017-09-01

    Accurate material data for simulations require a lot of effort. Advanced yield loci require many different kinds of tests, and a Forming Limit Curve (FLC) needs a large number of samples. Many practitioners use simple material models to reduce the testing effort; however, some models are either not accurate enough (i.e. Hill’48) or do not describe new types of materials (i.e. Keeler). Advanced yield loci describe anisotropic material behaviour accurately but are not widely adopted, because the specialized tests and the data post-processing are a hurdle for many. To overcome these issues, correlations between the advanced yield locus points (biaxial, plane strain and shear) and mechanical properties have been investigated. This resulted in accurate prediction of the advanced stress points using only Rm, Ag and r-values in three directions, from which a Vegter yield locus can be constructed with low effort. FLCs can be predicted with the equations of Abspoel & Scholting depending on total elongation A80, r-value and thickness. Both predictive methods were initially developed for steel, aluminium and stainless steel (BCC and FCC materials). The validity of the predicted Vegter yield locus is investigated with simulations and measurements on both hot and cold formed parts and compared with Hill’48. An adapted specimen geometry, ensuring a homogeneous temperature distribution in the Gleeble hot tensile test, was used to measure the mechanical properties needed to predict a hot Vegter yield locus. Since testing stress states other than uniaxial is very challenging for hot material, the prediction of the yield locus adds considerable value. For the hot FLC, an A80 sample with a homogeneous temperature distribution is needed, which due to size limitations is not possible in the Gleeble tensile tester. Heating the sample in an industrial type furnace and tensile testing it in a dedicated device is a good alternative to determine the necessary parameters for the FLC.

  18. An accurate fatigue damage model for welded joints subjected to variable amplitude loading

    Science.gov (United States)

    Aeran, A.; Siriwardane, S. C.; Mikkelsen, O.; Langen, I.

    2017-12-01

    Researchers in the past have proposed several fatigue damage models to overcome the shortcomings of the commonly used Miner’s rule. However, requirements for material parameters or S-N curve modifications restrict their practical application. Moreover, applications of most of these models under variable amplitude loading conditions have not been reported. To overcome these restrictions, a new fatigue damage model is proposed in this paper. The proposed model can be applied by practicing engineers using only the S-N curve given in the standard codes of practice. The model is verified against experimentally derived damage evolution curves for C45 and 16Mn steels and gives better agreement than previous models. The fatigue lives predicted by the model also correlate better with experimental results than those of previous models, as shown in earlier published work by the authors. In this paper, the proposed model is applied to welded joints subjected to variable amplitude loading. The model gives around 8% shorter fatigue lives than the Miner’s rule of Eurocode. This shows the importance of applying accurate fatigue damage models to welded joints.
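The Miner's-rule baseline that the proposed model is compared against is a plain linear damage sum, D = Σ nᵢ/Nᵢ, with Nᵢ taken from an S-N curve. A minimal sketch, assuming an illustrative Eurocode-style S-N curve (slope m = 3, detail category Δσ_C = 90 MPa at N_C = 2×10⁶ cycles) and a made-up load spectrum:

```python
# Sketch of the linear (Miner) damage sum used as the comparison baseline.
# S-N constants are illustrative Eurocode-style values, not the paper's data.

def cycles_to_failure(stress_range, m=3.0, dsigma_c=90.0, n_c=2e6):
    """Allowable cycles from the S-N curve: N = N_C * (dsigma_C / dsigma)**m."""
    return n_c * (dsigma_c / stress_range) ** m

def miner_damage(spectrum):
    """Linear damage sum D = sum(n_i / N_i); failure predicted at D >= 1."""
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

spectrum = [(120.0, 5e5), (80.0, 2e6), (60.0, 5e6)]  # (MPa range, cycles)
print(round(miner_damage(spectrum), 3))   # > 1, i.e. failure predicted
```

The paper's model replaces this linear accumulation with a nonlinear damage evolution, which for the welded joints studied shortens the predicted lives by about 8%.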

  19. Production of accurate skeletal models of domestic animals using three-dimensional scanning and printing technology.

    Science.gov (United States)

    Li, Fangzheng; Liu, Chunying; Song, Xuexiong; Huan, Yanjun; Gao, Shansong; Jiang, Zhongling

    2018-01-01

    Access to adequate anatomical specimens can be an important aspect in learning the anatomy of domestic animals. In this study, the authors utilized a structured light scanner and fused deposition modeling (FDM) printer to produce highly accurate animal skeletal models. First, various components of the bovine skeleton, including the femur, the fifth rib, and the sixth cervical (C6) vertebra were used to produce digital models. These were then used to produce 1:1 scale physical models with the FDM printer. The anatomical features of the digital models and three-dimensional (3D) printed models were then compared with those of the original skeletal specimens. The results of this study demonstrated that both digital and physical scale models of animal skeletal components could be rapidly produced using 3D printing technology. In terms of accuracy between models and original specimens, the standard deviations of the femur and the fifth rib measurements were 0.0351 and 0.0572, respectively. All of the features except the nutrient foramina on the original bone specimens could be identified in the digital and 3D printed models. Moreover, the 3D printed models could serve as a viable alternative to original bone specimens when used in anatomy education, as determined from student surveys. This study demonstrated an important example of reproducing bone models to be used in anatomy education and veterinary clinical training. Anat Sci Educ 11: 73-80. © 2017 American Association of Anatomists.

  20. Improvement of a land surface model for accurate prediction of surface energy and water balances

    International Nuclear Information System (INIS)

    Katata, Genki

    2009-02-01

    In order to predict energy and water balances between the biosphere and atmosphere accurately, sophisticated schemes to calculate evaporation and adsorption processes in the soil and cloud (fog) water deposition on vegetation were implemented in the one-dimensional atmosphere-soil-vegetation model including the CO2 exchange process (SOLVEG2). Performance tests in arid areas showed that the above schemes have a significant effect on surface energy and water balances. The framework of the above schemes incorporated in SOLVEG2 and instructions for running the model are documented. With further modifications of the model to implement carbon exchange between vegetation and soil, deposition processes of materials on the land surface, vegetation stress-growth-dynamics, etc., the model is suited to evaluating the effects of environmental loads on ecosystems from atmospheric pollutants and radioactive substances under climate changes such as global warming and drought. (author)

  1. A hamster model for Marburg virus infection accurately recapitulates Marburg hemorrhagic fever.

    Science.gov (United States)

    Marzi, Andrea; Banadyga, Logan; Haddock, Elaine; Thomas, Tina; Shen, Kui; Horne, Eva J; Scott, Dana P; Feldmann, Heinz; Ebihara, Hideki

    2016-12-15

    Marburg virus (MARV), a close relative of Ebola virus, is the causative agent of a severe human disease known as Marburg hemorrhagic fever (MHF). No licensed vaccine or therapeutic exists to treat MHF, and MARV is therefore classified as a Tier 1 select agent and a category A bioterrorism agent. In order to develop countermeasures against this severe disease, animal models that accurately recapitulate human disease are required. Here we describe the development of a novel, uniformly lethal Syrian golden hamster model of MHF using a hamster-adapted MARV variant Angola. Remarkably, this model displayed almost all of the clinical features of MHF seen in humans and non-human primates, including coagulation abnormalities, hemorrhagic manifestations, petechial rash, and a severely dysregulated immune response. This MHF hamster model represents a powerful tool for further dissecting MARV pathogenesis and accelerating the development of effective medical countermeasures against human MHF.

  2. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners.

    Science.gov (United States)

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-04-11

    Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinearity equations for space resection are simplified into a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps than the existing space resection models, not only for simulated data but also for real data from Chang'E-1.
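The second phase described in this abstract (known rotations turn the collinearity equations into a linear problem for the camera position) can be sketched in a toy setting. The sketch below assumes an identity rotation, unit focal length, and a single frame exposure rather than the paper's line-by-line pushbroom geometry; all coordinates are invented:

```python
# Toy sketch of phase two: with the rotation known, each GCP yields linear
# equations in the camera position C. Projection with R = I and f = 1:
#   x = -(X - Cx)/(Z - Cz), y = -(Y - Cy)/(Z - Cz),
# which rearranges to Cx + x*Cz = X + x*Z and Cy + y*Cz = Y + y*Z.

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def recover_position(gcps):
    """Least-squares camera position from ((x, y), (X, Y, Z)) observations."""
    AtA = [[0.0] * 3 for _ in range(3)]
    Atb = [0.0] * 3
    for (x, y), (X, Y, Z) in gcps:
        for row, rhs in (((1.0, 0.0, x), X + x * Z), ((0.0, 1.0, y), Y + y * Z)):
            for i in range(3):
                Atb[i] += row[i] * rhs
                for j in range(3):
                    AtA[i][j] += row[i] * row[j]
    return solve3(AtA, Atb)

# Simulate noiseless observations from a known camera, then recover it.
C_true = (1.0, 2.0, 10.0)
ground = [(0.0, 0.0, 0.0), (5.0, 1.0, 2.0), (2.0, 6.0, 1.0)]
gcps = [((-(X - C_true[0]) / (Z - C_true[2]), -(Y - C_true[1]) / (Z - C_true[2])),
         (X, Y, Z)) for X, Y, Z in ground]
print([round(c, 6) for c in recover_position(gcps)])   # ≈ [1.0, 2.0, 10.0]
```

Because the system is linear, the least-squares solution is global, which is the property the paper exploits; its certainty term would additionally down-weight GCPs with unreliable altitudes.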

  3. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners

    Directory of Open Access Journals (Sweden)

    Xuemiao Xu

    2016-04-01

    Full Text Available Exterior orientation parameters’ (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, using only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. Firstly, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinearity equations for space resection are simplified into a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps than the existing space resection models, not only for simulated data but also for real data from Chang’E-1.

  4. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    Directory of Open Access Journals (Sweden)

    Stovgaard Kasper

    2010-08-01

    Full Text Available Abstract Background Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS) is an established low resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can for example be used to construct a likelihood function, which is paramount for structure determination based on statistical inference. Results We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy atom representations of amino acids. Such a method avoids the computationally costly iteration over all atoms. We estimated the form factors using generated data from a set of high quality protein structures. No ad hoc scaling or correction factors are applied in the calculation of the curves. Two coarse-grained representations of protein structure were investigated; two scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN. This resulted in a significant improvement in the decoy recognition performance. 
In conclusion, the presented method shows great promise for
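The Debye formula underlying the method, I(q) = Σᵢ Σⱼ fᵢ(q) fⱼ(q) sin(q·rᵢⱼ)/(q·rᵢⱼ), can be sketched for point scatterers. Constant form factors and the bead coordinates below are illustrative; the paper instead estimates q-dependent form factors for two dummy scattering bodies per amino acid:

```python
import math

# Sketch of the Debye sum for identical point beads:
#   I(q) = sum_i sum_j f_i f_j * sin(q*r_ij)/(q*r_ij),
# with sin(x)/x taken as 1 at x = 0 (the diagonal i = j terms).

def debye_intensity(q, beads, f=1.0):
    """Scattering intensity at momentum transfer q for point beads."""
    total = 0.0
    for x1, y1, z1 in beads:
        for x2, y2, z2 in beads:
            r = math.sqrt((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)
            qr = q * r
            total += f * f * (1.0 if qr == 0.0 else math.sin(qr) / qr)
    return total

beads = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (3.8, 3.8, 0.0)]  # ~Cα spacing (Å)
print(debye_intensity(0.0, beads))   # 9.0 = (sum of form factors)**2
print(debye_intensity(0.2, beads) < 9.0)   # True: intensity falls off with q
```

The double loop is why a coarse-grained representation pays off: halving the number of scatterers quarters the cost of each curve evaluation.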

  5. FULLY AUTOMATED GENERATION OF ACCURATE DIGITAL SURFACE MODELS WITH SUB-METER RESOLUTION FROM SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    J. Wohlfeil

    2012-07-01

    Full Text Available Modern pixel-wise image matching algorithms like Semi-Global Matching (SGM) are able to compute high resolution digital surface models from airborne and spaceborne stereo imagery. Although image matching itself can be performed automatically, there are prerequisites, like high geometric accuracy, which are essential for ensuring the high quality of resulting surface models. Especially for line cameras, these prerequisites currently require laborious manual interaction using standard tools, which is a growing problem due to continually increasing demand for such surface models. The tedious work includes partly or fully manual selection of tie- and/or ground control points for ensuring the required accuracy of the relative orientation of images for stereo matching. It also includes masking of large water areas that seriously reduce the quality of the results. Furthermore, a good estimate of the depth range is required, since accurate estimates can seriously reduce the processing time for stereo matching. In this paper an approach is presented that allows performing all these steps fully automated. It includes very robust and precise tie point selection, enabling the accurate calculation of the images’ relative orientation via bundle adjustment. It is also shown how water masking and elevation range estimation can be performed automatically on the basis of freely available SRTM data. Extensive tests with a large number of different satellite images from QuickBird and WorldView are presented as proof of the robustness and reliability of the proposed method.

  6. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

    Science.gov (United States)

    Xu, Yue; Zhao, Tingchen; Li, Ding

    2018-01-01

    An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model, and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, so this behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good agreement with the test data, validating the high simulation accuracy.

  7. A new accurate quadratic equation model for isothermal gas chromatography and its comparison with the linear model

    Science.gov (United States)

    Wu, Liejun; Chen, Maoxue; Chen, Yongli; Li, Qing X.

    2013-01-01

    The gas holdup time (tM) is a dominant parameter in gas chromatographic retention models. The difference equation (DE) model proposed by Wu et al. (J. Chromatogr. A 2012, http://dx.doi.org/10.1016/j.chroma.2012.07.077) excluded tM. In the present paper, we propose that the relationship between the adjusted retention time t′RZ and carbon number z of n-alkanes follows a quadratic equation (QE) when an accurate tM is obtained. This QE model is the same as or better than the DE model for an accurate expression of the retention behavior of n-alkanes and for model applications. The QE model covers a larger range of n-alkanes with better curve fittings than the linear equation (LE) model. The accuracy of the QE model was approximately 2–6 times better than that of the DE model and 18–540 times better than that of the LE model. Standard deviations of the QE model were approximately 2–3 times smaller than those of the DE model. PMID:22989489
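The QE model's form, t′(z) = a + b·z + c·z², can be illustrated with an exact three-point fit; in practice a least-squares fit over a full homologous series would be used, and the retention data below are made up:

```python
# Sketch of the quadratic retention model t'(z) = a + b*z + c*z**2 relating
# adjusted retention time to carbon number z. Exact fit through three
# invented (z, t') pairs via Cramer's rule.

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_quadratic(points):
    """Coefficients (a, b, c) of the parabola through three (z, t) points."""
    A = [[1.0, z, z * z] for z, _ in points]
    t = [ti for _, ti in points]
    d = det3(A)
    coeffs = []
    for k in range(3):                 # Cramer's rule, one column at a time
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = t[i]
        coeffs.append(det3(Ak) / d)
    return coeffs

a, b, c = fit_quadratic([(6, 1.20), (8, 2.10), (10, 3.40)])
print(round(a + b * 8 + c * 64, 2))   # 2.1 -- the fit reproduces its data
```

A positive c, as in this toy series, reproduces the upward curvature that the linear (LE) model misses for wide n-alkane ranges.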

  8. A simple but accurate procedure for solving the five-parameter model

    International Nuclear Information System (INIS)

    Mares, Oana; Paulescu, Marius; Badescu, Viorel

    2015-01-01

    Highlights: • A new procedure for extracting the parameters of the one-diode model is proposed. • Only the basic information listed in the datasheet of PV modules is required. • Results demonstrate a simple, robust and accurate procedure. - Abstract: The current–voltage characteristic of a photovoltaic module is typically evaluated by using a model based on the solar cell equivalent circuit. The complexity of the procedure applied for extracting the model parameters depends on the data available in the manufacturer’s datasheet. Since the datasheet is not detailed enough, simplified models have to be used in many cases. This paper proposes a new procedure for extracting the parameters of the one-diode model in standard test conditions, using only the basic data listed by all manufacturers in the datasheet (short circuit current, open circuit voltage and maximum power point). The procedure is validated by using manufacturers’ data for six commercial crystalline silicon photovoltaic modules. Comparing the computed and measured current–voltage characteristics, the determination coefficient is in the range 0.976–0.998. Thus, the proposed procedure represents a feasible tool for solving the five-parameter model applied to crystalline silicon photovoltaic modules. The procedure is described in detail, to guide potential users to derive similar models for other types of photovoltaic modules.
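Once the five parameters (I_ph, I_0, R_s, R_sh, n·Vt) are extracted, evaluating the one-diode model still requires solving an implicit equation for the current. A minimal sketch with illustrative parameter values for a ~60-cell module (not values produced by the paper's procedure), using bisection:

```python
import math

# Sketch of evaluating the one-diode (five-parameter) model:
#   I = Iph - I0*(exp((V + I*Rs)/nVt) - 1) - (V + I*Rs)/Rsh,
# which is implicit in I. f(I) below is strictly decreasing in I, so a
# simple bisection on a bracketing interval converges reliably.

def diode_current(V, Iph=8.2, I0=1e-9, Rs=0.3, Rsh=300.0, nVt=1.85):
    """Terminal current at voltage V (illustrative module-level parameters)."""
    def f(I):
        return (Iph - I0 * (math.exp((V + I * Rs) / nVt) - 1.0)
                - (V + I * Rs) / Rsh - I)
    lo, hi = -1.0, Iph + 1.0      # f(lo) > 0 > f(hi) for sensible parameters
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(diode_current(0.0), 3))   # short-circuit current, just below Iph
```

Sweeping V from zero toward open circuit with this routine traces the full I-V curve against which the paper computes its determination coefficients.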

  9. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    Directory of Open Access Journals (Sweden)

    Suzhi Xiao

    2016-04-01

    Full Text Available In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the ’phase to 3D coordinates transformation’ are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.

  10. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement.

    Science.gov (United States)

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-04-28

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.

  11. Comprehensive Care For Joint Replacement Model - Provider Data

    Data.gov (United States)

    U.S. Department of Health & Human Services — Comprehensive Care for Joint Replacement Model - provider data. This data set includes provider data for two quality measures tracked during an episode of care:...

  12. Accurate calibration of the velocity-dependent one-scale model for domain walls

    Energy Technology Data Exchange (ETDEWEB)

    Leite, A.M.M., E-mail: up080322016@alunos.fc.up.pt [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Ecole Polytechnique, 91128 Palaiseau Cedex (France); Martins, C.J.A.P., E-mail: Carlos.Martins@astro.up.pt [Centro de Astrofisica, Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Shellard, E.P.S., E-mail: E.P.S.Shellard@damtp.cam.ac.uk [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom)

    2013-01-08

    We study the asymptotic scaling properties of standard domain wall networks in several cosmological epochs. We carry out the largest field theory simulations achieved to date, with simulation boxes of size 2048³, and confirm that a scale-invariant evolution of the network is indeed the attractor solution. The simulations are also used to obtain an accurate calibration for the velocity-dependent one-scale model for domain walls: we numerically determine the two free model parameters to have the values c_w = 0.34 ± 0.16 and k_w = 0.98 ± 0.07, which are of higher precision than (but in agreement with) earlier estimates.

  13. Accurate calibration of the velocity-dependent one-scale model for domain walls

    International Nuclear Information System (INIS)

    Leite, A.M.M.; Martins, C.J.A.P.; Shellard, E.P.S.

    2013-01-01

    We study the asymptotic scaling properties of standard domain wall networks in several cosmological epochs. We carry out the largest field theory simulations achieved to date, with simulation boxes of size 2048³, and confirm that a scale-invariant evolution of the network is indeed the attractor solution. The simulations are also used to obtain an accurate calibration for the velocity-dependent one-scale model for domain walls: we numerically determine the two free model parameters to have the values c_w = 0.34 ± 0.16 and k_w = 0.98 ± 0.07, which are of higher precision than (but in agreement with) earlier estimates.
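The calibrated model itself is a pair of ODEs for the correlation length L and rms velocity v. A minimal sketch that integrates the standard wall VOS equations in the radiation era (H = 1/(2t)) with the quoted best-fit parameters; the initial conditions and step count are illustrative:

```python
import math

# Sketch of the velocity-dependent one-scale (VOS) model for domain walls
# with the calibrated parameters from the abstract:
#   dL/dt = (1 + 3 v^2) H L + c_w v,   dv/dt = (1 - v^2)(k_w / L - 3 H v).
# Integration is done in ln t so the step size tracks the Hubble time.

C_W, K_W = 0.34, 0.98

def evolve(t0=1.0, t1=1e6, L0=1.0, v0=0.4, steps=200_000):
    """Euler integration in ln t; returns (L/t, v) at t1 (the scaling values)."""
    du = math.log(t1 / t0) / steps
    growth = math.exp(du)
    t, L, v = t0, L0, v0
    for _ in range(steps):
        H = 0.5 / t                                   # radiation era
        dL_dt = (1.0 + 3.0 * v * v) * H * L + C_W * v
        dv_dt = (1.0 - v * v) * (K_W / L - 3.0 * H * v)
        L += t * dL_dt * du      # d(.)/d(ln t) = t * d(.)/dt
        v += t * dv_dt * du
        t *= growth
    return L / t, v

eps, v = evolve()
print(round(eps, 3), round(v, 3))   # scale-invariant L/t and wall velocity
```

Setting the derivatives to their scaling values gives the fixed point analytically, v² = k_w/(3(k_w + c_w)) and L/t = 2k_w/(3v), which the integration approaches regardless of the (illustrative) initial conditions.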

  14. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    Science.gov (United States)

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic waveform modes (spin-weight −2 harmonics Y_ℓm) resolved by the NR code up to ℓ = 8. We compare our surrogate model to effective one body waveforms from 50 M⊙ to 300 M⊙ for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).

  15. 2016 KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David Bradley [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Waters, Jiajia [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-25

    Los Alamos National Laboratory and its collaborators are facilitating engine modeling by improving the accuracy and robustness of the models and of the software, and we continue to improve the physical modeling methods. We are developing and implementing new mathematical algorithms that represent the physics within an engine. We provide software that others may use directly or alter with various models, e.g., sophisticated chemical kinetics, different turbulence closure methods, or other fuel injection and spray systems.

  16. A new model for the accurate calculation of natural gas viscosity

    Directory of Open Access Journals (Sweden)

    Xiaohong Yang

    2017-03-01

    Full Text Available Viscosity of natural gas is a basic and important parameter, of theoretical and practical significance in the domains of natural gas recovery, transmission and processing. In order to obtain accurate viscosity data efficiently at a low cost, a new model and its corresponding functional relation are derived on the basis of the relationship among viscosity, temperature and density given by the kinetic theory of gases. After the model parameters were optimized against a large set of experimental data, a diagram showing the variation of viscosity with temperature and density was prepared, showing that: ① the gas viscosity increases with density, and also with temperature in the low density region; ② the gas viscosity increases with decreasing temperature in the high density region. With this new model, the viscosity of 9 natural gas samples was calculated precisely. The average relative deviation between the calculated values and 1539 experimental data points measured at 250–450 K and 0.10–140.0 MPa is less than 1.9%. Compared with the 793 experimental data points with a measurement error less than 0.5%, the maximum relative deviation is less than 0.98%. It is concluded that this new model is more advantageous than the previous 8 models in terms of simplicity, accuracy, fast calculation, and direct applicability to CO2 bearing gas samples.

  17. HIGHLY-ACCURATE MODEL ORDER REDUCTION TECHNIQUE ON A DISCRETE DOMAIN

    Directory of Open Access Journals (Sweden)

    L. D. Ribeiro

    2015-09-01

    Full Text Available Abstract In this work, we present a highly-accurate technique of model order reduction applied to staged processes. The proposed method reduces the dimension of the original system based on null values of moment-weighted sums of heat and mass balance residuals on real stages. To compute these sums of weighted residuals, a discrete form of Gauss-Lobatto quadrature was developed, allowing a high degree of accuracy in these calculations. The locations where the residuals are cancelled vary with time and operating conditions, characterizing a desirable adaptive nature of this technique. Balances related to upstream and downstream devices (such as the condenser, reboiler, and feed tray of a distillation column) are considered as boundary conditions of the corresponding difference-differential equations system. The chosen number of moments is the dimension of the reduced model, which is much lower than the dimension of the complete model and does not depend on the size of the original model. Scaling of the discrete independent variable related to the stages was crucial for the computational implementation of the proposed method, avoiding accumulation of round-off errors present even in low-degree polynomial approximations in the original discrete variable. Dynamical simulations of distillation columns were carried out to check the performance of the proposed model order reduction technique. The obtained results show the superiority of the proposed procedure in comparison with the orthogonal collocation method.

  18. Effective and accurate approach for modeling of commensurate-incommensurate transition in krypton monolayer on graphite.

    Science.gov (United States)

    Ustinov, E A

    2014-10-07

    The commensurate-incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs-Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed the determination of phase diagrams of 2D argon layers on a uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. The applicability of the methodology is demonstrated on the krypton-graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of coexisting phases, and a method of predicting adsorption isotherms are considered, accounting for compression of the graphite due to the krypton-carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas-solid and solid-solid systems.

  19. Effective and accurate approach for modeling of commensurate–incommensurate transition in krypton monolayer on graphite

    International Nuclear Information System (INIS)

    Ustinov, E. A.

    2014-01-01

    The commensurate–incommensurate (C-IC) transition of a krypton molecular layer on graphite has received much attention in recent decades in theoretical and experimental research. However, there still exists a possibility of generalizing the phenomenon from a thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for the analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs–Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed the determination of phase diagrams of 2D argon layers on a uniform surface and in slit pores. This paper extends the developed methodology to systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides a highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of the temperature and other parameters of various 2D phase transitions. The applicability of the methodology is demonstrated on the krypton–graphite system. Analysis of the phase diagram of the krypton molecular layer, the thermodynamic functions of coexisting phases, and a method of predicting adsorption isotherms are considered, accounting for compression of the graphite due to the krypton–carbon interaction. The temperature and heat of the C-IC transition have been reliably determined for the gas–solid and solid–solid systems.

  20. Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model

    International Nuclear Information System (INIS)

    Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.

    2007-01-01

    Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem

  1. Customer-Provider Strategic Alignment: A Maturity Model

    Science.gov (United States)

    Luftman, Jerry; Brown, Carol V.; Balaji, S.

    This chapter presents a new model for assessing the maturity of a customer-provider relationship from a collaborative service delivery perspective: the Customer-Provider Strategic Alignment Maturity (CPSAM) Model. This model builds on recent research for effectively managing the customer-provider relationship in IT service outsourcing contexts and a validated model for assessing alignment across internal IT service units and their business customers within the same organization. After reviewing relevant literature by service science and information systems researchers, the six overarching components of the maturity model are presented: value measurements, governance, partnership, communications, human resources and skills, and scope and architecture. A key assumption of the model is that all of the components need to be addressed to assess and improve customer-provider alignment. Examples of specific metrics for measuring the maturity level of each component over the five levels of maturity are also presented.

  2. Spiral CT scanning plan to generate accurate Fe models of the human femur

    International Nuclear Information System (INIS)

    Zannoni, C.; Testi, D.; Capello, A.

    1999-01-01

    In spiral computed tomography (CT), source rotation, patient translation, and data acquisition are conducted continuously. Settings of the detector collimation and the table increment affect image quality in terms of spatial and contrast resolution. This study assessed and measured the efficacy of spiral CT in those applications where the accurate reconstruction of bone morphology is critical: custom-made prosthesis design or three-dimensional modelling of the mechanical behaviour of long bones. Results show that conventional CT grants the highest accuracy. Spiral CT with D=5 mm and P=1.5 in the regions where the morphology is more regular slightly degrades image quality, but makes it possible to acquire, at comparable cost, a higher number of images, increasing the longitudinal resolution of the acquired data set. (author)

  3. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    International Nuclear Information System (INIS)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-01-01

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, and semi-local and hybrid exchange-correlation functionals within density functional theory as two fidelity levels, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.

  4. Accurate Treatment of Collisions and Water-Delivery in Models of Terrestrial Planet Formation

    Science.gov (United States)

    Haghighipour, Nader; Maindl, Thomas; Schaefer, Christoph

    2017-10-01

    It is widely accepted that collisions among solid bodies, driven by their interactions with planetary embryos, are the key process in the formation of terrestrial planets and in the transport of volatiles and chemical compounds to their accretion zones. Unfortunately, due to computational complexities, these collisions are often treated in a rudimentary way. Impacts are considered to be perfectly inelastic and volatiles are considered to be fully transferred from one object to the other. This perfect-merging assumption has profound effects on the mass and composition of the final planetary bodies, as it grossly overestimates the masses of these objects and the amounts of volatiles and chemical elements transferred to them. It also entirely neglects the collisional loss of volatiles (e.g., water) and draws an unrealistic connection between these properties and the chemical structure of the protoplanetary disk (i.e., the location of their original carriers). We have developed a new and comprehensive methodology to simulate the growth of embryos into planetary bodies in which we use a combination of SPH and N-body codes to accurately model collisions as well as the transport/transfer of chemical compounds. Our methodology accounts for the loss of volatiles (e.g., ice sublimation) during the orbital evolution of their carriers and accurately tracks their transfer from one body to another. Results of our simulations show that traditional N-body modeling of terrestrial planet formation overestimates the masses and water contents of the final planets by over 60%, implying not only that the water amounts it suggests are far from realistic, but also that small planets such as Mars can form in these simulations when collisions are treated properly. We will present details of our methodology and discuss its implications for terrestrial planet formation and water delivery to Earth.

  5. An Efficient Hybrid DSMC/MD Algorithm for Accurate Modeling of Micro Gas Flows

    KAUST Repository

    Liang, Tengfei

    2013-01-01

    Aiming at simulating micro gas flows with accurate boundary conditions, an efficient hybrid algorithm is developed by combining the molecular dynamics (MD) method with the direct simulation Monte Carlo (DSMC) method. The efficiency comes from the fact that the MD method is applied only within the gas-wall interaction layer, characterized by the cut-off distance of the gas-solid interaction potential, to resolve accurately the gas-wall interaction process, while the DSMC method is employed in the remaining portion of the flow field to efficiently simulate rarefied gas transport outside the gas-wall interaction layer. A unique feature of the present scheme is that the coupling between the two methods is realized by matching the molecular velocity distribution function at the DSMC/MD interface, hence there is no need for one-to-one mapping between an MD gas molecule and a DSMC simulation particle. Further improvement in efficiency is achieved by taking advantage of gas rarefaction inside the gas-wall interaction layer and by employing the "smart-wall model" proposed by Barisik et al. The developed hybrid algorithm is validated on two classical benchmarks, namely the 1-D Fourier thermal problem and the Couette shear flow problem. Both the accuracy and efficiency of the hybrid algorithm are discussed. As an application, the hybrid algorithm is employed to simulate the thermal transpiration coefficient in the free-molecule regime for a system with an atomically smooth surface. The result is utilized to validate the coefficients calculated from the pure DSMC simulation with Maxwell and Cercignani-Lampis gas-wall interaction models. © 2014 Global-Science Press.

  6. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation.

    Science.gov (United States)

    Gray, Alan; Harlen, Oliver G; Harris, Sarah A; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J; Pearson, Arwen R; Read, Daniel J; Richardson, Robin A

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  7. Modeling Market Shares of Competing (e)Care Providers

    Science.gov (United States)

    van Ooteghem, Jan; Tesch, Tom; Verbrugge, Sofie; Ackaert, Ann; Colle, Didier; Pickavet, Mario; Demeester, Piet

    In order to address the increasing costs of providing care to the growing group of elderly people, efficiency gains through eCare solutions seem an obvious solution. Unfortunately, not many techno-economic business models are available to evaluate the return on these investments. The intended application of the model is the construction of a business case for care for the elderly as they move through different levels of dependency, including the effect of introducing an eCare service. The simulation model presented in this paper allows for modeling the evolution of market shares of competing care providers. Four tiers are defined, based on the dependency level of the elderly, for which the market shares are determined. The model takes into account the available capacity of the different care providers, the in- and outflow distribution between tiers, and churn between providers within tiers.

  8. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    Directory of Open Access Journals (Sweden)

    Baoquan Kou

    2017-05-01

    Full Text Available To realize the high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems for the stator, mover and corner coil are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque analysis formulas for the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is proposed in this paper to solve the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be put to practical use.

  9. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    Science.gov (United States)

    Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi

    2017-05-01

    To realize the high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems for the stator, mover and corner coil are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque analysis formulas for the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is proposed in this paper to solve the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be put to practical use.
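    Both versions of this record resolve the corner-segment integral with compound (composite) Simpson integration. A generic sketch of the rule itself follows; applying it to the actual Lorentz force integrand along the corner arc is the model-specific step the papers describe:

```python
import numpy as np

def compound_simpson(f, a, b, n):
    """Composite (compound) Simpson's rule with n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3.0 * (y[0] + y[-1]
                      + 4.0 * y[1:-1:2].sum()
                      + 2.0 * y[2:-1:2].sum())
```

    The rule is exact for cubics and converges at fourth order in the step size, which is what makes it attractive for a real-time model with a fixed, small number of integrand evaluations.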

  10. Development of a Fast and Accurate PCRTM Radiative Transfer Model in the Solar Spectral Region

    Science.gov (United States)

    Liu, Xu; Yang, Qiguang; Li, Hui; Jin, Zhonghai; Wu, Wan; Kizer, Susan; Zhou, Daniel K.; Yang, Ping

    2016-01-01

    A fast and accurate principal component-based radiative transfer model in the solar spectral region (PCRTM-SOLAR) has been developed. The algorithm is capable of simulating reflected solar spectra in both clear-sky and cloudy atmospheric conditions. Multiple scattering of the solar beam by multilayer clouds and aerosols is calculated using a discrete ordinate radiative transfer scheme. The PCRTM-SOLAR model can be trained to simulate top-of-atmosphere radiance or reflectance spectra with spectral resolution ranging from 1 cm^-1 to a few nanometers. Broadband radiances or reflectances can also be calculated if desired. The current version of PCRTM-SOLAR covers the spectral range from 300 to 2500 nm. The model is valid for solar zenith angles ranging from 0 to 80 deg, instrument view zenith angles ranging from 0 to 70 deg, and relative azimuthal angles ranging from 0 to 360 deg. Depending on the number of spectral channels, the current version of PCRTM-SOLAR is a few hundred to over one thousand times faster than the medium-speed correlated-k option of MODTRAN5. The absolute RMS error in channel radiance is smaller than 10^-3 mW/(cm^2 sr cm^-1) and the relative error is typically less than 0.2%.
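    The PCRTM training pipeline is considerably more involved, but the core idea, compressing high-resolution spectra onto a few principal components so that only a handful of scores needs to be predicted, can be sketched with a plain SVD-based PCA on a toy spectral ensemble (the synthetic "spectra" below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble of "reflectance spectra": 500-channel curves driven by
# three latent factors (the sine basis is an illustrative assumption).
wavenum = np.linspace(300.0, 2500.0, 500)
basis = np.stack([np.sin(2.0 * np.pi * wavenum / p)
                  for p in (800.0, 1500.0, 2200.0)])
scores_true = rng.normal(size=(200, 3))
spectra = scores_true @ basis                 # (200 samples, 500 channels)

# PCA via SVD of the mean-centred ensemble
mean = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
k = 3                                         # retained principal components
pcs = Vt[:k]

# Each spectrum is now represented by k scores instead of 500 channels;
# a fast forward model only has to predict these k numbers.
scores = (spectra - mean) @ pcs.T
recon = scores @ pcs + mean
rms_err = np.sqrt(np.mean((recon - spectra)**2))
```

    Because the toy data lie exactly in a 3-dimensional subspace, three components reconstruct it to machine precision; real spectra need more components, chosen by the acceptable RMS error.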

  11. A new algebraic turbulence model for accurate description of airfoil flows

    Science.gov (United States)

    Xiao, Meng-Juan; She, Zhen-Su

    2017-11-01

    We report a new algebraic turbulence model (SED-SL) based on the SED theory, a symmetry-based approach to quantifying wall turbulence. The model specifies a multi-layer profile of a stress length (SL) function in both the streamwise and wall-normal directions, which defines the eddy viscosity in the RANS equations (i.e. a zero-equation model). After a successful simulation of flat-plate flow (APS meeting, 2016), we report here further applications of the model to the flow around airfoils, with significant improvement in the prediction accuracy of the lift (CL) and drag (CD) coefficients compared to other popular models (e.g. BL, SA). Two airfoils, namely the RAE2822 and NACA0012 airfoils, are computed for over 50 cases. The results are compared to experimental data from the AGARD report, showing deviations of CL bounded within 2%, and of CD within 2 counts (1 count = 10^-4) for RAE2822 and 6 counts for NACA0012 (under a systematic adjustment of the flow conditions). In all these calculations, only one parameter (proportional to the Kármán constant) shows slight variation with Mach number. The most remarkable outcome is, for the first time, the accurate prediction of the drag coefficient. The other interesting outcome is the physical interpretation of the multi-layer parameters: they specify the corresponding multi-layer structure of the turbulent boundary layer; when used together with simulation data, SED-SL enables one to extract physical information from empirical data and to understand the variation of the turbulent boundary layer.

  12. Modeling patients' acceptance of provider-delivered e-health.

    Science.gov (United States)

    Wilson, E Vance; Lankton, Nancy K

    2004-01-01

    Health care providers are beginning to deliver a range of Internet-based services to patients; however, it is not clear which of these e-health services patients need or desire. The authors propose that patients' acceptance of provider-delivered e-health can be modeled in advance of application development by measuring the effects of several key antecedents to e-health use and applying models of acceptance developed in the information technology (IT) field. This study tested three theoretical models of IT acceptance among patients who had recently registered for access to provider-delivered e-health. An online questionnaire administered items measuring perceptual constructs from the IT acceptance models (intrinsic motivation, perceived ease of use, perceived usefulness/extrinsic motivation, and behavioral intention to use e-health) and five hypothesized antecedents (satisfaction with medical care, health care knowledge, Internet dependence, information-seeking preference, and health care need). Responses were collected and stored in a central database. All tested IT acceptance models performed well in predicting patients' behavioral intention to use e-health. Antecedent factors of satisfaction with provider, information-seeking preference, and Internet dependence uniquely predicted constructs in the models. Information technology acceptance models provide a means to understand which aspects of e-health are valued by patients and how this may affect future use. In addition, antecedents to the models can be used to predict e-health acceptance in advance of system development.

  13. Lung ultrasound accurately detects pneumothorax in a preterm newborn lamb model.

    Science.gov (United States)

    Blank, Douglas A; Hooper, Stuart B; Binder-Heschl, Corinna; Kluckow, Martin; Gill, Andrew W; LaRosa, Domenic A; Inocencio, Ishmael M; Moxham, Alison; Rodgers, Karyn; Zahra, Valerie A; Davis, Peter G; Polglase, Graeme R

    2016-06-01

    Pneumothorax is a common emergency affecting extremely preterm infants. In adult studies, lung ultrasound has performed better than chest x-ray in the diagnosis of pneumothorax. The purpose of this study was to determine the efficacy of lung ultrasound (LUS) examination to detect pneumothorax using a preterm animal model. This was a prospective, observational study using newborn Border-Leicester lambs at gestational age = 126 days (equivalent to gestational age = 26 weeks in humans) receiving mechanical ventilation from birth to 2 h of life. At the conclusion of the experiment, LUS was performed, the lambs were then euthanised and a post-mortem exam was immediately performed. We used previously published ultrasound techniques to identify pneumothorax. Test characteristics of LUS to detect pneumothorax were calculated, using the post-mortem exam as the 'gold standard' test. Nine lambs (18 lungs) were examined. Four lambs had a unilateral pneumothorax, all of which were identified by LUS with no false positives. This was the first study to use post-mortem findings to test the efficacy of LUS to detect pneumothorax in a newborn animal model. Lung ultrasound accurately detected pneumothorax, verified by post-mortem exam, in premature, newborn lambs. © 2016 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).
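    The test characteristics reported above (all four pneumothoraces detected, no false positives among the 18 examined lungs) reduce to standard confusion-table arithmetic, sketched here:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity and specificity from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Counts as reported: 4 pneumothoraces, all detected by LUS, and no
# false positives among the remaining 14 of the 18 examined lungs.
sens, spec = diagnostic_metrics(tp=4, fp=0, tn=14, fn=0)
```

    With only four positive cases the confidence intervals on these estimates are necessarily wide, which is why the record frames the result as a model-system validation rather than a clinical accuracy claim.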

  14. Generating Converged Accurate Free Energy Surfaces for Chemical Reactions with a Force-Matched Semiempirical Model.

    Science.gov (United States)

    Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir

    2018-04-10

    We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol^-1.

  15. Blasting Vibration Safety Criterion Analysis with Equivalent Elastic Boundary: Based on Accurate Loading Model

    Directory of Open Access Journals (Sweden)

    Qingwen Li

    2015-01-01

    Full Text Available In tunnel and underground space engineering, the blasting wave attenuates in the host rock from a shock wave to a stress wave to an elastic seismic wave. The host rock forms a crushed zone, a fractured zone, and an elastic seismic zone under the blasting loading and waves. In this paper, an accurate mathematical dynamic loading model was built, and the crushed and fractured zones were considered as the blasting vibration source, thus deducting the portion of energy expended in crushing the host rock. This complicated dynamic problem of segmented differential blasting was then treated as an equivalent elastic boundary problem by taking advantage of Saint-Venant's Theorem. Finally, a 3D model in the finite element software FLAC3D, using the constitutive parameters, the uniformly distributed mutative loading, and the cylindrical attenuation law, predicted the velocity and effective tensile stress curves used to calculate safety criterion formulas for the surrounding rock and tunnel liner; the predictions agreed well with in situ monitoring data.

  16. Accurate estimate of the relic density and the kinetic decoupling in nonthermal dark matter models

    International Nuclear Information System (INIS)

    Arcadi, Giorgio; Ullio, Piero

    2011-01-01

    Nonthermal dark matter generation is an appealing alternative to the standard paradigm of thermal WIMP dark matter. We reconsider nonthermal production mechanisms in a systematic way, and develop a numerical code for accurate computations of the dark matter relic density. We discuss, in particular, scenarios with long-lived massive states decaying into dark matter particles, appearing naturally in several beyond the standard model theories, such as supergravity and superstring frameworks. Since nonthermal production favors dark matter candidates with large pair annihilation rates, we analyze the possible connection with the anomalies detected in the lepton cosmic-ray flux by Pamela and Fermi. Concentrating on supersymmetric models, we consider the effect of these nonstandard cosmologies in selecting a preferred mass scale for the lightest supersymmetric particle as a dark matter candidate, and the consequent impact on the interpretation of new physics discovered or excluded at the LHC. Finally, we examine a rather predictive model, the G2-MSSM, investigating some of the standard assumptions usually implemented in the solution of the Boltzmann equation for the dark matter component, including coannihilations. We question the hypothesis that kinetic equilibrium holds along the whole phase of dark matter generation, and the validity of the factorization usually implemented to rewrite the system of coupled Boltzmann equations for the coannihilating species as a single equation for the sum of all the number densities. As a byproduct we develop here a formalism to compute the kinetic decoupling temperature in the case of coannihilating particles, which can also be applied to other particle physics frameworks, and also to standard thermal relics within a standard cosmology.
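    The record's relic-density code solves coupled Boltzmann equations; a heavily simplified single-species freeze-out sketch in dimensionless form illustrates the basic numerics. The annihilation strength lam, the equilibrium prefactor a, and the integration range are illustrative magnitudes, not fitted physical values, and a backward-Euler step is used because the equation is stiff before freeze-out:

```python
import numpy as np

# Toy single-species freeze-out (x = m/T):
#   dY/dx = -(lam / x**2) * (Y**2 - Yeq(x)**2)
# with a nonrelativistic equilibrium yield Yeq ~ a * x**1.5 * exp(-x).
lam, a = 1.0e5, 0.3

def yeq(x):
    return a * x**1.5 * np.exp(-x)

# Each backward-Euler step solves the quadratic
#   hp*Y**2 + Y - (Y_old + hp*Yeq**2) = 0   for the positive root,
# where hp = h * lam / x**2.
x, h = 1.0, 0.01
Y = yeq(x)                      # start in chemical equilibrium
history = []
while x < 100.0:
    x += h
    hp = h * lam / x**2
    Y = (-1.0 + np.sqrt(1.0 + 4.0 * hp * (Y + hp * yeq(x)**2))) / (2.0 * hp)
    history.append(Y)
```

    After freeze-out the yield flattens many orders of magnitude above the rapidly decaying equilibrium value, which is the qualitative behavior an accurate relic-density solver must capture before refinements such as coannihilations and kinetic decoupling are added.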

  17. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    Science.gov (United States)

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances, however at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for barefoot, shod, and insole conditions respectively. The simplified model design could be produced in 3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, however further validity testing around a range of therapeutic footwear types is required. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    Science.gov (United States)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computational cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
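    As a baseline for the mean-field treatment the record compares against, the simplest mean-field microkinetic model is a single-coverage rate equation with spatially uncorrelated adsorbates. The Langmuir adsorption-desorption example and rate constants below are illustrative, not the paper's NO oxidation model:

```python
# Mean-field rate equation for a single adsorbate coverage theta,
# assuming spatially uncorrelated adsorbates (the core mean-field idea):
#   d(theta)/dt = k_ads * p * (1 - theta) - k_des * theta
# Rate constants and pressure are illustrative assumptions.
k_ads, k_des, p = 2.0, 1.0, 1.5

theta, dt = 0.0, 1e-3
for _ in range(20000):                 # integrate to t = 20 (forward Euler)
    theta += dt * (k_ads * p * (1.0 - theta) - k_des * theta)

# Analytic Langmuir steady state for comparison
theta_ss = k_ads * p / (k_ads * p + k_des)
```

    Cluster mean-field methods replace the single coverage variable with probabilities of multi-site configurations, recovering part of the spatial correlation that this one-variable picture discards.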

  19. An evolutionary model-based algorithm for accurate phylogenetic breakpoint mapping and subtype prediction in HIV-1.

    Directory of Open Access Journals (Sweden)

    Sergei L Kosakovsky Pond

    2009-11-01

Full Text Available Genetically diverse pathogens (such as Human Immunodeficiency Virus type 1, HIV-1) are frequently stratified into phylogenetically or immunologically defined subtypes for classification purposes. Computational identification of such subtypes is helpful in surveillance, epidemiological analysis and detection of novel variants, e.g., circulating recombinant forms in HIV-1. A number of conceptually and technically different techniques have been proposed for determining the subtype of a query sequence, but there is no universally optimal approach. We present a model-based phylogenetic method for automatically subtyping an HIV-1 (or other viral or bacterial) sequence, mapping the location of breakpoints and assigning parental sequences in recombinant strains, as well as computing confidence levels for the inferred quantities. Our Subtype Classification Using Evolutionary ALgorithms (SCUEAL) procedure is shown to perform very well in a variety of simulation scenarios, runs in parallel when multiple sequences are being screened, and matches or exceeds the performance of existing approaches on typical empirical cases. We applied SCUEAL to all available polymerase (pol) sequences from two large databases, the Stanford Drug Resistance database and the UK HIV Drug Resistance Database. Comparison with previously assigned subtypes revealed that a minor but substantial (approximately 5%) fraction of pure subtype sequences may in fact be within- or inter-subtype recombinants. A free implementation of SCUEAL is provided as a module for the HyPhy package and the Datamonkey web server. Our method is especially useful when an accurate automatic classification of an unknown strain is desired, and is positioned to complement and extend faster but less accurate methods. Given the increasingly frequent use of HIV subtype information in studies focusing on the effect of subtype on treatment, clinical outcome, pathogenicity and vaccine design, the importance

  20. Bring Your Own Device - Providing Reliable Model of Data Access

    Directory of Open Access Journals (Sweden)

    Stąpór Paweł

    2016-10-01

Full Text Available The article presents Bring Your Own Device (BYOD) as a network model that provides the user with reliable access to network resources. BYOD is a dynamically developing model that can be applied in many areas. A research network was launched in order to carry out tests in which the Work Folders service was used as the BYOD service. This service allows the user to synchronize files between the device and the server. Access to the network is provided over a wireless connection using the 802.11n standard. The obtained results are presented and analyzed in this article.

  1. A simple highly accurate field-line mapping technique for three-dimensional Monte Carlo modeling of plasma edge transport

    International Nuclear Information System (INIS)

    Feng, Y.; Sardei, F.; Kisslinger, J.

    2005-01-01

The paper presents a new simple and accurate numerical field-line mapping technique providing a high-quality representation of field lines, as required by Monte Carlo modeling of plasma edge transport in the complex magnetic boundaries of three-dimensional (3D) toroidal fusion devices. Using a toroidal sequence of precomputed 3D finite flux-tube meshes, the method advances field lines through a simple bilinear, forward/backward symmetric interpolation at the interfaces between two adjacent flux tubes. It is a reversible field-line mapping (RFLM) algorithm, ensuring a continuous and unique reconstruction of field lines at any point of the 3D boundary. The reversibility property has a strong impact on the efficiency of modeling the highly anisotropic plasma edge transport in general closed or open configurations of arbitrary ergodicity, as it avoids artificial cross-field diffusion of the fast parallel transport. For stellarator-symmetric magnetic configurations, which are the standard case for stellarators, the reversibility additionally provides an average cancellation of the radial interpolation errors of field lines circulating around closed magnetic flux surfaces. The RFLM technique has been implemented in the 3D edge transport code EMC3-EIRENE and is used routinely for plasma transport modeling in the boundaries of several low-shear and high-shear stellarators as well as in the boundary of a tokamak with 3D magnetic edge perturbations.
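The interface advance described above builds on bilinear interpolation over a flux-tube face; a generic unit-cell bilinear interpolant can be sketched as follows (an illustrative building block, not code from EMC3-EIRENE):

```python
def bilinear(f00, f10, f01, f11, u, v):
    """Bilinear interpolation on a unit cell: f00..f11 are the values at
    the four corners, (u, v) in [0, 1]^2 are local coordinates on the
    cell.  The symmetry of this interpolant under swapping the forward
    and backward directions is what a reversible mapping exploits."""
    return (f00 * (1.0 - u) * (1.0 - v) + f10 * u * (1.0 - v)
            + f01 * (1.0 - u) * v + f11 * u * v)
```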

  2. An accurate modelling of the two-diode model of PV module using a hybrid solution based on differential evolution

    International Nuclear Information System (INIS)

    Chin, Vun Jack; Salam, Zainal; Ishaque, Kashif

    2016-01-01

Highlights: • An accurate computational method for the two-diode model of PV module is proposed. • The hybrid method employs analytical equations and Differential Evolution (DE). • I_PV, I_o1, and R_p are computed analytically, while a_1, a_2, I_o2 and R_s are optimized. • This allows the model parameters to be computed without using costly assumptions. - Abstract: This paper proposes an accurate computational technique for the two-diode model of a PV module. Unlike previous methods, it does not rely on assumptions that cause the accuracy to be compromised. The key to this improvement is the implementation of a hybrid solution, i.e. incorporating the analytical method with the differential evolution (DE) optimization technique. Three parameters, i.e. I_PV, I_o1, and R_p, are computed analytically, while the remaining four, a_1, a_2, I_o2 and R_s, are optimized using the DE. To validate its accuracy, the proposed method is tested on three PV modules of different technologies: mono-crystalline, poly-crystalline and thin film. Furthermore, its performance is evaluated against two popular computational methods for the two-diode model. The proposed method is found to exhibit superior accuracy under variation in irradiance and temperature for all module types. In particular, the improvement in accuracy is evident at low-irradiance conditions; the root-mean-square error is one order of magnitude lower than that of the other methods. In addition, the values of the model parameters are consistent with the physics of the PV cell. It is envisaged that the method can be very useful for PV simulation, in which accuracy of the model is of prime concern.
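For reference, the two-diode equation is implicit in the current: I = I_PV − I_o1·(exp((V+I·R_s)/(a_1·N_s·V_t)) − 1) − I_o2·(exp((V+I·R_s)/(a_2·N_s·V_t)) − 1) − (V+I·R_s)/R_p. A minimal bisection solver for this standard equation is sketched below with invented parameter values; it illustrates the model being parameterised, not the paper's hybrid analytical/DE method.

```python
import math

def two_diode_current(V, Ipv, Io1, Io2, a1, a2, Rs, Rp, Ns=36, Vt=0.02585):
    """Solve the implicit two-diode equation for module current I at
    voltage V by bisection.  Ns is the number of series cells and Vt the
    thermal voltage; all numbers here are illustrative, not the paper's."""
    def residual(I):
        Vd = V + I * Rs
        return (Ipv
                - Io1 * (math.exp(Vd / (a1 * Ns * Vt)) - 1.0)
                - Io2 * (math.exp(Vd / (a2 * Ns * Vt)) - 1.0)
                - Vd / Rp
                - I)
    # residual(I) is strictly decreasing in I, so bisection is safe.
    lo, hi = -1.0, Ipv + 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At short circuit (V = 0) the solution is close to I_PV, since the diode and shunt terms are tiny there; the current then falls monotonically toward open circuit.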

  3. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

Information and communication technologies (ICTs) have changed the trend toward new integrated operations and methods in all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise influenced by new technologies used to predict different disease outcomes. However, existing predictive models still suffer from some limitations in terms of predictive performance. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate the model's performance, this paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experimental results show that the proposed model has achieved significant results in terms of accuracy, sensitivity, and specificity.

  4. Accurate Locally Conservative Discretizations for Modeling Multiphase Flow in Porous Media on General Hexahedra Grids

    KAUST Repository

    Wheeler, M.F.

    2010-09-06

For many years there have been formulations considered for modeling single phase flow on general hexahedra grids. These include the extended mixed finite element method and families of mimetic finite difference methods. In most of these schemes, either no rate of convergence of the algorithm has been demonstrated both theoretically and computationally, or a more complicated saddle point system needs to be solved for an accurate solution. Here we describe a multipoint flux mixed finite element (MFMFE) method [5, 2, 3]. This method is motivated by the multipoint flux approximation (MPFA) method [1]. The MFMFE method is locally conservative with continuous flux approximations and is a cell-centered scheme for the pressure. Compared to the MPFA method, the MFMFE has a variational formulation, since it can be viewed as a mixed finite element method with special approximating spaces and quadrature rules. The framework allows handling of hexahedral grids with non-planar faces by applying trilinear mappings from physical elements to reference cubic elements. In addition, there are several multiscale and multiphysics extensions, such as the mortar mixed finite element method that allows the treatment of non-matching grids [4]. Extensions to two-phase oil-water flow are considered. We reformulate the two-phase model in terms of total velocity, capillary velocity, water pressure, and water saturation. We choose water pressure and water saturation as primary variables. The total velocity is driven by the gradient of the water pressure and total mobility. An iterative coupling scheme is employed for the coupled system. This scheme allows treatment of different time scales for the water pressure and water saturation. In each time step, we first solve the pressure equation using the MFMFE method.

  5. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Science.gov (United States)

    Wicke, Jason; Dumas, Geneviève A

    2014-06-03

Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole-body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they were not adjoining another segment and sectioned ellipses if they were adjoining another segment (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements was also taken by directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of the models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. Copyright © 2014. Published by Elsevier Ltd.
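The volume computation implied by the slice-based geometry can be sketched as follows. A uniform density is assumed here for brevity, whereas the study uses sex-specific, non-uniform density functions; this is an illustrative sketch, not the authors' code.

```python
import math

def segment_mass(half_widths, half_depths, slice_thickness, density):
    """Mass of a limb segment modeled as a stack of elliptical slices.
    Each slice has cross-sectional area pi * a * b, with semi-axes a
    (half-width, from the frontal photograph) and b (half-depth, from
    the sagittal photograph).  Uniform density is an assumption made
    here for illustration only."""
    volume = sum(math.pi * a * b * slice_thickness
                 for a, b in zip(half_widths, half_depths))
    return density * volume
```

As a sanity check, constant semi-axes reduce the stack to an elliptical cylinder, whose mass is density · π · a · b · length.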

  6. Can segmental model reductions quantify whole-body balance accurately during dynamic activities?

    Science.gov (United States)

    Jamkrajang, Parunchaya; Robinson, Mark A; Limroongreungrat, Weerawat; Vanrenterghem, Jos

    2017-07-01

When investigating whole-body balance in dynamic tasks, adequately tracking the whole-body centre of mass (CoM) or derivatives such as the extrapolated centre of mass (XCoM) can be crucial but adds considerable measurement effort. The aim of this study was to investigate whether reduced kinematic models can still provide adequate CoM and XCoM representations during dynamic sporting tasks. Seventeen healthy, recreationally active subjects (14 males and 3 females; age, 24.9 ± 3.2 years; height, 177.3 ± 6.9 cm; body mass, 72.6 ± 7.0 kg) participated in this study. Participants completed three dynamic movements: jumping, kicking, and overarm throwing. Marker-based kinematic data were collected with 10 optoelectronic cameras at 250 Hz (Oqus, Qualisys, Gothenburg, Sweden). The differences between the (X)CoM from a full-body model (gold standard) and (X)CoM representations based on six selected model reductions were evaluated using a Bland-Altman approach. A threshold difference was set at ± 2 cm to help the reader interpret which model can still provide an acceptable (X)CoM representation. Antero-posterior and medio-lateral displacement profiles of the CoM representation based on the lower limbs, trunk and upper limbs showed strong agreement, slightly reduced for the lower limbs and trunk only. Representations based on the lower limbs only showed less strong agreement, particularly for the XCoM in kicking. Overall, our results provide justification for the use of certain model reductions for specific needs, saving measurement effort whilst limiting the error in tracking (X)CoM trajectories in the context of whole-body balance investigation. Copyright © 2017 Elsevier B.V. All rights reserved.
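The XCoM mentioned above follows Hof's inverted-pendulum definition, XCoM = CoM + v_CoM/ω₀ with ω₀ = √(g/l), where l is an effective pendulum (leg) length. A one-line sketch of that standard formula (not the study's processing pipeline):

```python
import math

def extrapolated_com(com_pos, com_vel, leg_length, g=9.81):
    """Extrapolated centre of mass (XCoM) for one horizontal axis:
    XCoM = CoM + v / omega0, with omega0 = sqrt(g / l).  Positions in m,
    velocity in m/s; leg_length is the effective pendulum length."""
    omega0 = math.sqrt(g / leg_length)
    return com_pos + com_vel / omega0
```

With zero CoM velocity the XCoM coincides with the CoM; a forward velocity shifts it forward, which is why it is the relevant quantity for dynamic balance.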

  7. Enhancement of a Turbulence Sub-Model for More Accurate Predictions of Vertical Stratifications in 3D Coastal and Estuarine Modeling

    Directory of Open Access Journals (Sweden)

    Wenrui Huang

    2010-03-01

Full Text Available This paper presents an improvement of the Mellor and Yamada 2nd-order turbulence model in the Princeton Ocean Model (POM) for better predictions of vertical stratification of salinity in estuaries. The model was evaluated in a strongly stratified estuary, the Apalachicola River, Florida, USA. The three-dimensional hydrodynamic model was applied to study the stratified flow and salinity intrusion in the estuary in response to tide, wind, and buoyancy forces. Model tests indicate that model predictions overestimate the stratification when using the default turbulence parameters. Analytic studies of density-induced and wind-induced flows indicate that accurate estimation of the vertical eddy viscosity plays an important role in describing vertical profiles. Initial model revision experiments showed that the traditional approach of modifying empirical constants in the turbulence model leads to numerical instability. In order to improve the performance of the turbulence model while maintaining numerical stability, a stratification factor was introduced to allow adjustment of the vertical turbulent eddy viscosity and diffusivity. Sensitivity studies indicate that the stratification factor, ranging from 1.0 to 1.2, does not cause numerical instability in the Apalachicola River. Model simulations show that increasing the turbulent eddy viscosity by a stratification factor of 1.12 results in optimal agreement between model predictions and observations in the presented case study. Using the proposed stratification factor provides a useful way for coastal modelers to improve turbulence model performance in predicting vertical turbulent mixing in stratified estuaries and coastal waters.

  8. Do Dual-Route Models Accurately Predict Reading and Spelling Performance in Individuals with Acquired Alexia and Agraphia?

    OpenAIRE

    Rapcsak, Steven Z.; Henry, Maya L.; Teague, Sommer L.; Carnahan, Susan D.; Beeson, Pélagie M.

    2007-01-01

    Coltheart and colleagues (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; Castles, Bates, & Coltheart, 2006) have demonstrated that an equation derived from dual-route theory accurately predicts reading performance in young normal readers and in children with reading impairment due to developmental dyslexia or stroke. In this paper we present evidence that the dual-route equation and a related multiple regression model also accurately predict both reading and spelling performance in adult...

  9. Accurate Models for Evaluating the Direct Conducted and Radiated Emissions from Integrated Circuits

    Directory of Open Access Journals (Sweden)

    Domenico Capriglione

    2018-03-01

Full Text Available This paper deals with the electromagnetic compatibility (EMC) issues related to the direct conducted and radiated emissions from high-speed integrated circuits (ICs). These emissions are evaluated here by means of circuital and electromagnetic models. As for the conducted emission, an equivalent circuit model is derived to describe the IC and the effect of its loads (package, printed circuit board, decaps, etc.), based on the Integrated Circuit Emission Model (ICEM) template. As for the radiated emission, an electromagnetic model is proposed, based on the superposition of the fields generated in the far-field region by the loop currents flowing into the IC and the package pins. A custom experimental setup is designed for validating the models. Specifically, for the radiated emission measurement, a custom test board is designed and realized, able to highlight the contribution of the direct emission from the IC, usually hidden by the indirect emission coming from the printed circuit board. Measurements of the package currents and of the far-field emitted fields are carried out, providing a satisfactory agreement with the model predictions.

  10. SPARC: MASS MODELS FOR 175 DISK GALAXIES WITH SPITZER PHOTOMETRY AND ACCURATE ROTATION CURVES

    Energy Technology Data Exchange (ETDEWEB)

    Lelli, Federico; McGaugh, Stacy S. [Department of Astronomy, Case Western Reserve University, Cleveland, OH 44106 (United States); Schombert, James M., E-mail: federico.lelli@case.edu [Department of Physics, University of Oregon, Eugene, OR 97403 (United States)

    2016-12-01

We introduce SPARC (Spitzer Photometry and Accurate Rotation Curves): a sample of 175 nearby galaxies with new surface photometry at 3.6 μm and high-quality rotation curves from previous H I/Hα studies. SPARC spans a broad range of morphologies (S0 to Irr), luminosities (∼5 dex), and surface brightnesses (∼4 dex). We derive [3.6] surface photometry and study structural relations of stellar and gas disks. We find that both the stellar mass–H I mass relation and the stellar radius–H I radius relation have significant intrinsic scatter, while the H I mass–radius relation is extremely tight. We build detailed mass models and quantify the ratio of baryonic to observed velocity (V_bar/V_obs) for different characteristic radii and values of the stellar mass-to-light ratio (ϒ_⋆) at [3.6]. Assuming ϒ_⋆ ≃ 0.5 M_⊙/L_⊙ (as suggested by stellar population models), we find that (i) the gas fraction linearly correlates with total luminosity; (ii) the transition from star-dominated to gas-dominated galaxies roughly corresponds to the transition from spiral galaxies to dwarf irregulars, in line with density wave theory; and (iii) V_bar/V_obs varies with luminosity and surface brightness: high-mass, high-surface-brightness galaxies are nearly maximal, while low-mass, low-surface-brightness galaxies are submaximal. These basic properties are lost for low values of ϒ_⋆ ≃ 0.2 M_⊙/L_⊙ as suggested by the DiskMass survey. The mean maximum-disk limit in bright galaxies is ϒ_⋆ ≃ 0.7 M_⊙/L_⊙ at [3.6]. The SPARC data are publicly available and represent an ideal test bed for models of galaxy formation.
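Mass models of this kind combine the component rotation velocities into a baryonic velocity via the signed sum V_bar² = V_gas|V_gas| + ϒ_disk·V_disk|V_disk| + ϒ_bul·V_bul|V_bul|, with the stellar terms scaled by the mass-to-light ratio; the signed form handles components whose contribution is locally negative (e.g. a central gas depression). A sketch with assumed ϒ values:

```python
import math

def v_baryonic(v_gas, v_disk, v_bulge, ml_disk=0.5, ml_bulge=0.7):
    """Baryonic rotation velocity from component velocities (km/s),
    scaling stellar components by assumed mass-to-light ratios.
    The signed products v*|v| allow negative contributions; the result
    carries the sign of the net term.  Illustrative values only."""
    s = (v_gas * abs(v_gas)
         + ml_disk * v_disk * abs(v_disk)
         + ml_bulge * v_bulge * abs(v_bulge))
    return math.copysign(math.sqrt(abs(s)), s)
```

The ratio v_baryonic(...)/V_obs at a characteristic radius is then the V_bar/V_obs diagnostic quoted in the abstract.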

  11. An Interpretable Machine Learning Model for Accurate Prediction of Sepsis in the ICU.

    Science.gov (United States)

    Nemati, Shamim; Holder, Andre; Razmi, Fereshteh; Stanley, Matthew D; Clifford, Gari D; Buchman, Timothy G

    2018-04-01

Sepsis is among the leading causes of morbidity, mortality, and cost overruns in critically ill patients. Early intervention with antibiotics improves survival in septic patients. However, no clinically validated system exists for real-time prediction of sepsis onset. We aimed to develop and validate an Artificial Intelligence Sepsis Expert algorithm for early prediction of sepsis. Observational cohort study. Academic medical center from January 2013 to December 2015. Over 31,000 admissions to the ICUs at two Emory University hospitals (development cohort), in addition to over 52,000 ICU patients from the publicly available Medical Information Mart for Intensive Care-III ICU database (validation cohort). Patients who met the Third International Consensus Definitions for Sepsis (Sepsis-3) prior to or within 4 hours of their ICU admission were excluded, resulting in roughly 27,000 and 42,000 patients within our development and validation cohorts, respectively. None. High-resolution vital-sign time series and electronic medical record data were extracted. A set of 65 features (variables) was calculated on an hourly basis and passed to the Artificial Intelligence Sepsis Expert algorithm to predict onset of sepsis in the following T hours (where T = 12, 8, 6, or 4). Artificial Intelligence Sepsis Expert was used to predict onset of sepsis in the following T hours and to produce a list of the most significant contributing factors. For the 12-, 8-, 6-, and 4-hour-ahead prediction of sepsis, Artificial Intelligence Sepsis Expert achieved an area under the receiver operating characteristic curve in the range of 0.83-0.85. Performance of the Artificial Intelligence Sepsis Expert on the development and validation cohorts was indistinguishable. Using data available in the ICU in real time, Artificial Intelligence Sepsis Expert can accurately predict the onset of sepsis in an ICU patient 4-12 hours prior to clinical recognition. A prospective study is necessary to determine the
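The reported performance metric, area under the receiver operating characteristic curve, can be computed directly from labels and risk scores via the rank (Mann–Whitney) identity: the fraction of positive/negative pairs in which the positive case receives the higher score. A generic sketch, unrelated to the authors' implementation:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney pairwise identity:
    AUROC = P(score_pos > score_neg), with ties counted as 0.5.
    O(n_pos * n_neg) -- fine for illustration, not for large cohorts."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A value of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking; the 0.83-0.85 range above sits well between the two.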

  12. Levels of Interaction Provided by Online Distance Education Models

    Science.gov (United States)

    Alhih, Mohammed; Ossiannilsson, Ebba; Berigel, Muhammet

    2017-01-01

Interaction plays a significant role in fostering usability and quality in online education. It is one of the quality standards used to reveal evidence of practice in online distance education models. This research study aims to evaluate the levels of interaction in the practices of distance education centres. It is aimed to provide online distance…

  13. An accurate description of Aspergillus niger organic acid batch fermentation through dynamic metabolic modelling.

    Science.gov (United States)

    Upton, Daniel J; McQueen-Mason, Simon J; Wood, A Jamie

    2017-01-01

Aspergillus niger fermentation has provided the chief source of industrial citric acid for over 50 years. Traditional strain development of this organism was achieved through random mutagenesis, but advances in genomics have enabled the development of genome-scale metabolic modelling that can be used to make predictive improvements in fermentation performance. The parent citric acid-producing strain of A. niger, ATCC 1015, has been described previously by a genome-scale metabolic model that encapsulates its response to ambient pH. Here, we report the development of a novel double-optimisation modelling approach that generates time-dependent citric acid fermentation using dynamic flux balance analysis. The output from this model shows a good match with empirical fermentation data. Our studies suggest that citric acid production commences upon a switch to phosphate-limited growth, and this is validated by fitting to empirical data, which confirms the diauxic growth behaviour and the role of phosphate storage as polyphosphate. The calibrated time-course model reflects observed metabolic events and generates reliable in silico data for industrially relevant fermentative time series and for the behaviour of engineered strains, suggesting that our approach can be used as a powerful tool for predictive metabolic engineering.
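The qualitative two-phase behaviour described above — growth until phosphate runs out, then citrate production — can be caricatured by a toy Euler integration. All parameter values below are invented for illustration; this is not the paper's dynamic flux balance model, which couples an LP-based metabolic model to the time stepping.

```python
def batch_fermentation(hours, dt=0.01):
    """Toy two-phase batch dynamics: biomass X grows and consumes
    phosphate P; once P is exhausted, growth stops and citrate C is
    produced.  All rates and units are arbitrary, chosen only to
    reproduce the qualitative switch described in the abstract."""
    X, P, C = 0.1, 1.0, 0.0          # biomass, phosphate, citrate
    mu, qP, qC = 0.2, 0.1, 0.05      # growth, P-uptake, citrate rates
    t = 0.0
    while t < hours:
        if P > 0.0:                  # phosphate-replete: growth phase
            X += dt * mu * X
            P = max(P - dt * qP * X, 0.0)
        else:                        # phosphate-limited: production phase
            C += dt * qC * X
        t += dt
    return X, P, C
```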

  14. Short communication: Genetic lag represents commercial herd genetic merit more accurately than the 4-path selection model.

    Science.gov (United States)

    Dechow, C D; Rogers, G W

    2018-05-01

    Expectation of genetic merit in commercial dairy herds is routinely estimated using a 4-path genetic selection model that was derived for a closed population, but commercial herds using artificial insemination sires are not closed. The 4-path model also predicts a higher rate of genetic progress in elite herds that provide artificial insemination sires than in commercial herds that use such sires, which counters other theoretical assumptions and observations of realized genetic responses. The aim of this work is to clarify whether genetic merit in commercial herds is more accurately reflected under the assumptions of the 4-path genetic response formula or by a genetic lag formula. We demonstrate by tracing the transmission of genetic merit from parents to offspring that the rate of genetic progress in commercial dairy farms is expected to be the same as that in the genetic nucleus. The lag in genetic merit between the nucleus and commercial farms is a function of sire and dam generation interval, the rate of genetic progress in elite artificial insemination herds, and genetic merit of sires and dams. To predict how strategies such as the use of young versus daughter-proven sires, culling heifers following genomic testing, or selective use of sexed semen will alter genetic merit in commercial herds, genetic merit expectations for commercial herds should be modeled using genetic lag expectations. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
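The parent-to-offspring argument in the abstract can be checked numerically: if commercial offspring receive the mean of nucleus-sire merit (one sire generation old) and commercial-dam merit (one dam generation old), the commercial herd settles into the same rate of gain as the nucleus, with a constant lag of ΔG × (L_sire + L_dam). A sketch under those simplifying assumptions (equal parental contributions, constant generation intervals, no extra merit offset), not the authors' derivation:

```python
def simulate_lag(gain_per_year, sire_interval, dam_interval, years):
    """Trace transmission of genetic merit from parents to offspring.
    Nucleus merit rises linearly; commercial merit in year t is the mean
    of nucleus sires from (t - Ls) and commercial dams from (t - Ld).
    Returns the nucleus-commercial gap in the final year."""
    g, Ls, Ld = gain_per_year, sire_interval, dam_interval
    nucleus = [g * t for t in range(years + 1)]
    commercial = [0.0] * (years + 1)
    for t in range(max(Ls, Ld), years + 1):
        commercial[t] = 0.5 * (nucleus[t - Ls] + commercial[t - Ld])
    return nucleus[years] - commercial[years]
```

With ΔG = 1 unit/year, L_sire = 2 and L_dam = 3, the gap converges to 5 units — a constant lag, with the commercial gain rate matching the nucleus, as the abstract argues.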

  15. Neonatal tolerance induction enables accurate evaluation of gene therapy for MPS I in a canine model.

    Science.gov (United States)

    Hinderer, Christian; Bell, Peter; Louboutin, Jean-Pierre; Katz, Nathan; Zhu, Yanqing; Lin, Gloria; Choa, Ruth; Bagel, Jessica; O'Donnell, Patricia; Fitzgerald, Caitlin A; Langan, Therese; Wang, Ping; Casal, Margret L; Haskins, Mark E; Wilson, James M

    2016-09-01

    High fidelity animal models of human disease are essential for preclinical evaluation of novel gene and protein therapeutics. However, these studies can be complicated by exaggerated immune responses against the human transgene. Here we demonstrate that dogs with a genetic deficiency of the enzyme α-l-iduronidase (IDUA), a model of the lysosomal storage disease mucopolysaccharidosis type I (MPS I), can be rendered immunologically tolerant to human IDUA through neonatal exposure to the enzyme. Using MPS I dogs tolerized to human IDUA as neonates, we evaluated intrathecal delivery of an adeno-associated virus serotype 9 vector expressing human IDUA as a therapy for the central nervous system manifestations of MPS I. These studies established the efficacy of the human vector in the canine model, and allowed for estimation of the minimum effective dose, providing key information for the design of first-in-human trials. This approach can facilitate evaluation of human therapeutics in relevant animal models, and may also have clinical applications for the prevention of immune responses to gene and protein replacement therapies. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. An accurate coarse-grained model for chitosan polysaccharides in aqueous solution.

    Directory of Open Access Journals (Sweden)

    Levan Tsereteli

Full Text Available Computational models can provide detailed information about molecular conformations and interactions in solution, which is currently inaccessible by other means in many cases. Here we describe an efficient and precise coarse-grained model for long polysaccharides in aqueous solution at different physico-chemical conditions such as pH and ionic strength. The model is carefully constructed based on all-atom simulations of small saccharides and metadynamics sampling of the dihedral angles in the glycosidic links, which represent the most flexible degrees of freedom of the polysaccharides. The model is validated against experimental data for chitosan molecules in solution with various degrees of deacetylation, and is shown to closely reproduce the available experimental data. For long polymers, subtle differences in the free energy maps of the glycosidic links are found to significantly affect the measurable polymer properties. Therefore, for titratable monomers the free energy maps of the corresponding links are updated according to the current charge of the monomers. We then characterize the microscopic and mesoscopic structural properties of large chitosan polysaccharides in solution for a wide range of solvent pH and ionic strength, and investigate the effect of polymer length and the degree and pattern of deacetylation on the polymer properties.

  17. An accurate coarse-grained model for chitosan polysaccharides in aqueous solution.

    Science.gov (United States)

    Tsereteli, Levan; Grafmüller, Andrea

    2017-01-01

Computational models can provide detailed information about molecular conformations and interactions in solution, which is currently inaccessible by other means in many cases. Here we describe an efficient and precise coarse-grained model for long polysaccharides in aqueous solution at different physico-chemical conditions such as pH and ionic strength. The model is carefully constructed based on all-atom simulations of small saccharides and metadynamics sampling of the dihedral angles in the glycosidic links, which represent the most flexible degrees of freedom of the polysaccharides. The model is validated against experimental data for chitosan molecules in solution with various degrees of deacetylation, and is shown to closely reproduce the available experimental data. For long polymers, subtle differences in the free energy maps of the glycosidic links are found to significantly affect the measurable polymer properties. Therefore, for titratable monomers the free energy maps of the corresponding links are updated according to the current charge of the monomers. We then characterize the microscopic and mesoscopic structural properties of large chitosan polysaccharides in solution for a wide range of solvent pH and ionic strength, and investigate the effect of polymer length and the degree and pattern of deacetylation on the polymer properties.

  18. Towards Relaxing the Spherical Solar Radiation Pressure Model for Accurate Orbit Predictions

    Science.gov (United States)

    Lachut, M.; Bennett, J.

    2016-09-01

The well-known cannonball model has been used ubiquitously for decades to capture the effects of atmospheric drag and solar radiation pressure on satellites and space debris. While it lends itself naturally to spherical objects, its validity for non-spherical objects has been debated heavily for years throughout the space situational awareness community. One of the leading motivations for improving orbit predictions by relaxing the spherical assumption is the ongoing demand for more robust and reliable conjunction assessments. In this study, we explore the orbit propagation of a flat plate in a near-GEO orbit under the influence of solar radiation pressure, using a Lambertian BRDF model. Consequently, this approach accounts for the spin rate and orientation of the object, which is typically determined in practice using a light-curve analysis. Here, simulations will be performed that systematically reduce the spin rate to demonstrate the point at which the spherical model no longer describes the orbital elements of the spinning plate. Further understanding of this threshold would provide insight into when a higher-fidelity model should be used, resulting in improved orbit propagation. Therefore, the work presented here is of particular interest to organizations and researchers that maintain their own catalog and/or perform conjunction analyses.
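For contrast with the flat-plate model under study, the cannonball SRP acceleration scales as a = C_r (A/m) P_⊙ (AU/d)² û, with û the Sun-to-object unit vector, C_r the radiation pressure coefficient, A/m the area-to-mass ratio, and P_⊙ ≈ 4.56×10⁻⁶ N/m² the radiation pressure at 1 AU. A minimal sketch of that standard model, with no eclipse (shadow) modelling:

```python
import math

P_SUN = 4.56e-6          # N/m^2, solar radiation pressure at 1 AU
AU = 1.495978707e11      # m

def cannonball_srp_accel(r_obj, r_sun, cr, area, mass):
    """Cannonball SRP acceleration (m/s^2) on an object at r_obj given
    the Sun at r_sun (both metres, any inertial frame):
    a = Cr * (A/m) * P_sun * (AU/d)^2 along the Sun-to-object direction.
    Attitude-independent by construction -- the limitation the abstract
    sets out to relax.  No eclipse model is included."""
    d = [ro - rs for ro, rs in zip(r_obj, r_sun)]
    dist = math.sqrt(sum(c * c for c in d))
    scale = cr * (area / mass) * P_SUN * (AU / dist) ** 2 / dist
    return [scale * c for c in d]
```

Because the acceleration depends only on A/m and distance, a spinning plate whose orientation-averaged cross-section is constant mimics a cannonball; the simulations above probe where that equivalence breaks down as the spin slows.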

  19. High Fidelity Non-Gravitational Force Models for Precise and Accurate Orbit Determination of TerraSAR-X

    Science.gov (United States)

    Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. Basically, the precise reconstruction of the satellite’s trajectory is based on the Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a proper analysis of the orbit quality, systematics in the orbit products have been identified, which reflect deficits in the non-gravitational force models. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun. Therefore, the direct SRP has an effect on the lateral stability of the determined orbit. The indirect effect of the solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight, which is reflected by the illuminated Earth surface in the visible, and the emission of the Earth body in the infrared spectra. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.

  20. A Model for the Growth of Network Service Providers

    Science.gov (United States)

    2011-12-01

    Service Provider; O-D Origin-Destination; POP Point of Presence; UCG Unilateral Connection Game ... We make use of the Abilene dataset as input to the network provisioning model and assume that the NSP is new to the market and is building an ... has to decide on the connections to build and the markets to serve in order to maximize its profits. The NSP makes these decisions based on the market

  1. A Real-Time Accurate Model and Its Predictive Fuzzy PID Controller for Pumped Storage Unit via Error Compensation

    Directory of Open Access Journals (Sweden)

    Jianzhong Zhou

    2017-12-01

    Full Text Available Model simulation and control of pumped storage units (PSU) are essential to improving the dynamic quality of a power station. Only if the PSU models reflect the actual transient process can novel control methods be properly applied in engineering. The contributions of this paper are that (1) a real-time accurate equivalent circuit model (RAECM) of a PSU via error compensation is proposed to reconcile the conflict between real-time online simulation and accuracy under various operating conditions, and (2) an adaptive predictive fuzzy PID controller (APFPID) based on the RAECM is put forward to overcome the instability of conventional control under no-load conditions with low water head. All hydraulic factors in the pipeline system are fully considered based on the equivalent lumped-circuits theorem. The pretreatment, consisting of an improved Suter transformation and a BP neural network, and an online simulation method featuring two iterative loops are proposed together to improve the solving accuracy of the pump-turbine model. Moreover, modified formulas for compensating error are derived with variable-spatial discretization to further improve the accuracy of the real-time simulation. The implicit RadauIIA method is verified to be more suitable for PSUGS owing to its wider stability domain. The APFPID controller is then constructed by integrating fuzzy PID with model predictive control. Rolling prediction by the RAECM is proposed to replace rolling optimization, with its computational speed guaranteed. Finally, simulations and on-site measurements are compared to demonstrate the trustworthiness of the RAECM under various running conditions. Comparative experiments also indicate that the APFPID controller outperforms other controllers in most cases, especially under low-water-head conditions. The RAECM has achieved satisfying results in engineering and provides a novel model reference for PSUGS.

  2. Conceptual Models of the Individual Public Service Provider

    DEFF Research Database (Denmark)

    Andersen, Lotte Bøgh; Pedersen, Lene Holm; Bhatti, Yosef

    Individual public service providers’ motivation can be conceptualized as either extrinsic, autonomous or prosocial, and the question is how we can best theoretically understand this complexity without losing too much coherence and parsimony. Drawing on Allison’s approach (1969), three perspectives are used to gain insight on the motivation of public service providers, namely principal-agent theory, self-determination theory and public service motivation theory. We situate the theoretical discussions in the context of public service providers being transferred to private organizations ... theoretical – to develop a coherent model of individual public service providers – but the empirical illustration also contributes to our understanding of motivation in the context of public sector outsourcing.

  3. Model of Providing Assistive Technologies in Special Education Schools.

    Science.gov (United States)

    Lersilp, Suchitporn; Putthinoi, Supawadee; Chakpitak, Nopasit

    2015-05-14

    Most students diagnosed with disabilities in Thai special education schools received assistive technologies, but this did not guarantee the greatest benefits. The purpose of this study was to survey the provision, use and needs of assistive technologies, as well as the perspectives of key informants regarding a model of providing them in special education schools. The participants were selected by the purposive sampling method, and they comprised 120 students with visual, physical, hearing or intellectual disabilities from four special education schools in Chiang Mai, Thailand, and 24 key informants such as parents or caregivers, teachers, school principals and school therapists. The instruments consisted of an assistive technology checklist and a semi-structured interview. Results showed that several categories of assistive technologies were provided for students with disabilities, the most frequent being "services", followed by "media" and then "facilities". Furthermore, students with physical disabilities were provided with assistive technologies most often, but those with visual disabilities needed them more. Finally, the model of providing assistive technologies was composed of 5 components: Collaboration; Holistic perspective; Independent management of schools; Learning systems and a production manual for users; and Development of an assistive technology center, driven by 3 major sources: Government organizations, Private organizations, and Schools.

  4. Can crop-climate models be accurate and precise? A case study for wheat production in Denmark

    DEFF Research Database (Denmark)

    Montesino San Martin, Manuel; Olesen, Jørgen E.; Porter, John Roy

    2015-01-01

    Crop models, used to make projections of climate change impacts, differ greatly in structural detail. Complexity of model structure has generic effects on uncertainty and error propagation in climate change impact assessments. We applied Bayesian calibration to three distinctly different empirical....... Yields predicted by the mechanistic model were generally more accurate than the empirical models for extrapolated conditions. This trend does not hold for all extrapolations; mechanistic and empirical models responded differently due to their sensitivities to distinct weather features. However, higher...... suitable for generic model ensembles for near-term agricultural impact assessments of climate change....

  5. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  6. Modelling catchment areas for secondary care providers: a case study.

    Science.gov (United States)

    Jones, Simon; Wardlaw, Jessica; Crouch, Susan; Carolan, Michelle

    2011-09-01

    Hospitals need to understand patient flows in an increasingly competitive health economy. New initiatives like Patient Choice and the Darzi Review further increase this demand. Essential to understanding patient flows are demographic and geographic profiles of health care service providers, known as 'catchment areas' and 'catchment populations'. This information helps Primary Care Trusts (PCTs) to review how their populations are accessing services, measure inequalities and commission services; likewise it assists Secondary Care Providers (SCPs) to measure and assess potential gains in market share, redesign services, evaluate admission thresholds and plan financial budgets. Unlike PCTs, SCPs do not operate within fixed geographic boundaries. Traditionally, SCPs have used administrative boundaries or arbitrary drive times to model catchment areas. Neither approach satisfactorily represents current patient flows. Furthermore, these techniques are time-consuming and can be challenging for healthcare managers to exploit. This paper presents three different approaches to define catchment areas, each more detailed than the previous method. The first approach 'First Past the Post' defines catchment areas by allocating a dominant SCP to each Census Output Area (OA). The SCP with the highest proportion of activity within each OA is considered the dominant SCP. The second approach 'Proportional Flow' allocates activity proportionally to each OA. This approach allows for cross-boundary flows to be captured in a catchment area. The third and final approach uses a gravity model to define a catchment area, which incorporates drive or travel time into the analysis. Comparing approaches helps healthcare providers to understand whether using more traditional and simplistic approaches to define catchment areas and populations achieves the same or similar results as complex mathematical modelling. This paper has demonstrated, using a case study of Manchester, that when estimating
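    The first two catchment approaches described above are simple allocation rules over patient activity counts. A minimal sketch follows; the output areas, provider names, and activity records are invented for illustration.

```python
from collections import defaultdict

# Toy patient activity: (census output area, secondary care provider) pairs.
activity = [
    ("OA1", "TrustA"), ("OA1", "TrustA"), ("OA1", "TrustB"),
    ("OA2", "TrustB"), ("OA2", "TrustB"), ("OA2", "TrustA"),
]

def oa_counts(records):
    """Tally activity per provider within each output area."""
    counts = defaultdict(lambda: defaultdict(int))
    for oa, provider in records:
        counts[oa][provider] += 1
    return counts

def first_past_the_post(records):
    """Allocate each output area wholly to its dominant provider."""
    return {oa: max(provs, key=provs.get)
            for oa, provs in oa_counts(records).items()}

def proportional_flow(records):
    """Split each output area's activity proportionally among providers,
    so cross-boundary flows are retained in the catchment."""
    return {oa: {p: n / sum(provs.values()) for p, n in provs.items()}
            for oa, provs in oa_counts(records).items()}
```

    The gravity-model variant would additionally weight each provider's share by a decreasing function of drive or travel time, which is the extra ingredient of the third approach.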

  7. Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models

    DEFF Research Database (Denmark)

    Stovgaard, Kasper; Andreetta, Christian; Ferkinghoff-Borg, Jesper

    2010-01-01

    , which is paramount for structure determination based on statistical inference. Results: We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy atom representations of amino acids. Such a method avoids......DBN. This resulted in a significant improvement in the decoy recognition performance. In conclusion, the presented method shows great promise for use in statistical inference of protein structures from SAXS data....
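    The Debye formula named above can be sketched directly; the coordinates and unit form factors below are dummy values for illustration, not the amino-acid form factors fitted in the paper.

```python
import numpy as np

def debye_intensity(q, coords, form_factors):
    """Scattering intensity from the Debye formula:
    I(q) = sum_i sum_j f_i f_j * sin(q r_ij) / (q r_ij),
    where the i == j terms contribute f_i^2 (sin(x)/x -> 1 as x -> 0)."""
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.linalg.norm(diff, axis=-1)          # pairwise distances r_ij
    sinc = np.sinc(q * r / np.pi)              # np.sinc(x) = sin(pi x)/(pi x)
    return float((np.outer(form_factors, form_factors) * sinc).sum())
```

    The coarse-grained speedup in the paper comes from evaluating this double sum over one dummy atom per residue instead of over every atom.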

  8. Flow Modeling in Pelton Turbines by an Accurate Eulerian and a Fast Lagrangian Evaluation Method

    Directory of Open Access Journals (Sweden)

    A. Panagiotopoulos

    2015-01-01

    Full Text Available The recent development of CFD has allowed flow modeling in impulse hydro turbines, which involves complex phenomena such as free surface flow, multifluid interaction, and unsteady, time-dependent flow. Some commercial and open-source CFD codes, which implement Eulerian methods, have been validated against experimental results, showing satisfactory accuracy. Nevertheless, further improvement of accuracy is still a challenge, while the computational cost is very high and unaffordable for multiparametric design optimization of the turbine’s runner. In the present work a CFD Eulerian approach is applied first, in order to simulate the flow in the runner of a Pelton turbine model installed at the laboratory. Then, a particulate method, the Fast Lagrangian Simulation (FLS), is used for the same case; it is much faster and hence potentially suitable for numerical design optimization, provided that it can achieve adequate accuracy. The results of both methods for various turbine operation conditions, as well as for modified runner and bucket designs, are presented and discussed in the paper. In all examined cases the FLS method shows very good accuracy in predicting the hydraulic efficiency of the runner, although the computed flow evolution and the torque curve exhibit some systematic differences from the Eulerian results.

  9. National Water Model: Providing the Nation with Actionable Water Intelligence

    Science.gov (United States)

    Aggett, G. R.; Bates, B.

    2017-12-01

    The National Water Model (NWM) provides national, street-level detail of water movement through time and space. Operating hourly, this flood of information offers enormous benefits in the form of water resource management, natural disaster preparedness, and the protection of life and property. The Geo-Intelligence Division at the NOAA National Water Center supplies forecasters and decision-makers with timely, actionable water intelligence through the processing of billions of NWM data points every hour. These datasets include current streamflow estimates, short and medium range streamflow forecasts, and many other ancillary datasets. The sheer amount of NWM data produced yields a dataset too large to allow for direct human comprehension. As such, it is necessary to undergo model data post-processing, filtering, and data ingestion by visualization web apps that make use of cartographic techniques to bring attention to the areas of highest urgency. This poster illustrates NWM output post-processing and cartographic visualization techniques being developed and employed by the Geo-Intelligence Division at the NOAA National Water Center to provide national actionable water intelligence.

  10. 3D Vision Provides Shorter Operative Time and More Accurate Intraoperative Surgical Performance in Laparoscopic Hiatal Hernia Repair Compared With 2D Vision.

    Science.gov (United States)

    Leon, Piera; Rivellini, Roberta; Giudici, Fabiola; Sciuto, Antonio; Pirozzi, Felice; Corcione, Francesco

    2017-04-01

    The aim of this study is to evaluate whether 3-dimensional high-definition (3D) vision in laparoscopy offers advantages over conventional 2D high-definition vision in hiatal hernia (HH) repair. Between September 2012 and September 2015, we randomized 36 patients affected by symptomatic HH to undergo surgery; 17 patients underwent 2D laparoscopic HH repair, whereas 19 patients underwent the same operation with 3D vision. No conversion to open surgery occurred. Overall operative time was significantly reduced in the 3D laparoscopic group compared with the 2D one (69.9 vs 90.1 minutes, P = .006). Operative time to perform laparoscopic crura closure did not differ significantly between the 2 groups. We observed a tendency toward faster crura closure in the 3D group in the subgroup of patients with mesh positioning (7.5 vs 8.9 minutes, P = .09). Nissen fundoplication was faster in the 3D group without mesh positioning (P = .07). 3D vision in laparoscopic HH repair aids the surgeon's visualization and seems to reduce operative time. Advantages can result from the enhanced spatial perception of narrow spaces. Shorter operative time and more accurate surgery translate into benefits for patients and cost savings, compensating for the high costs of the 3D technology. However, more data from larger series are needed to firmly establish the advantages of 3D over 2D vision in laparoscopic HH repair.

  11. Simple and accurate model for voltage-dependent resistance of metallic carbon nanotube interconnects: An ab initio study

    International Nuclear Information System (INIS)

    Yamacli, Serhan; Avci, Mutlu

    2009-01-01

    This work aims to develop a voltage-dependent resistance model for metallic carbon nanotubes. First, the resistance of metallic carbon nanotube interconnects is obtained from ab initio simulations, and then the voltage dependence of the resistance is modeled through regression. The self-consistent non-equilibrium Green's function formalism combined with density functional theory is used to calculate the voltage-dependent resistance of the metallic carbon nanotubes. It is shown that the voltage-dependent resistance of carbon nanotubes can be accurately modeled as a polynomial function, which enables rapid integration of carbon nanotube interconnect models into electronic design automation tools.
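    The regression step described above amounts to fitting a low-order polynomial R(V) to the simulated resistance values. A minimal sketch follows; the synthetic quadratic data below stand in for the ab initio results, which are not reproduced in the abstract.

```python
import numpy as np

# Synthetic bias/resistance samples (kOhm); the quadratic ground truth is
# an assumption for illustration, not data from the paper.
voltages = np.linspace(0.0, 1.0, 11)
resistances = 6.5 + 0.4 * voltages + 2.1 * voltages ** 2

# Fit R(V) as a polynomial via least squares, as the abstract describes.
coeffs = np.polyfit(voltages, resistances, deg=2)
r_of_v = np.poly1d(coeffs)   # cheap polynomial evaluator for EDA-style use
```

    An electronic design tool can then evaluate `r_of_v(v)` at any bias point cheaply instead of rerunning the ab initio simulation, which is the integration benefit the abstract points to.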

  12. Governance, Government, and the Search for New Provider Models

    Directory of Open Access Journals (Sweden)

    Richard B. Saltman

    2016-01-01

    Full Text Available A central problem in designing effective models of provider governance in health systems has been to ensure an appropriate balance between the concerns of public sector and/or government decision-makers, on the one hand, and of non-governmental health services actors in civil society and private life, on the other. In tax-funded European health systems up to the 1980s, the state and other public sector decision-makers played a dominant role over health service provision, typically operating hospitals through national or regional governments on a command-and-control basis. In a number of countries, however, this state role has started to change, with governments first stepping out of direct service provision and now de facto pushed to focus more on steering provider organizations rather than on direct public management. In this new approach to provider governance, the state has pulled back into a regulatory role that introduces market-like incentives and management structures, which then apply to both public and private sector providers alike. This article examines some of the main operational complexities in implementing this new governance reality/strategy, specifically from a service provision (as opposed to mostly a financing or even regulatory) perspective. After briefly reviewing some of the key theoretical dilemmas, the paper presents two case studies where this new approach was put into practice: primary care in Sweden and hospitals in Spain. The article concludes that good governance today needs to reflect practical operational realities if it is to have the desired effect on health sector reform outcome.

  13. Governance, Government, and the Search for New Provider Models.

    Science.gov (United States)

    Saltman, Richard B; Duran, Antonio

    2015-11-03

    A central problem in designing effective models of provider governance in health systems has been to ensure an appropriate balance between the concerns of public sector and/or government decision-makers, on the one hand, and of non-governmental health services actors in civil society and private life, on the other. In tax-funded European health systems up to the 1980s, the state and other public sector decision-makers played a dominant role over health service provision, typically operating hospitals through national or regional governments on a command-and-control basis. In a number of countries, however, this state role has started to change, with governments first stepping out of direct service provision and now de facto pushed to focus more on steering provider organizations rather than on direct public management. In this new approach to provider governance, the state has pulled back into a regulatory role that introduces market-like incentives and management structures, which then apply to both public and private sector providers alike. This article examines some of the main operational complexities in implementing this new governance reality/strategy, specifically from a service provision (as opposed to mostly a financing or even regulatory) perspective. After briefly reviewing some of the key theoretical dilemmas, the paper presents two case studies where this new approach was put into practice: primary care in Sweden and hospitals in Spain. The article concludes that good governance today needs to reflect practical operational realities if it is to have the desired effect on health sector reform outcome. © 2016 by Kerman University of Medical Sciences.

  14. An accurate and efficient system model of iterative image reconstruction in high-resolution pinhole SPECT for small animal research

    Energy Technology Data Exchange (ETDEWEB)

    Huang, P-C; Hsu, C-H [Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan (China); Hsiao, I-T [Department Medical Imaging and Radiological Sciences, Chang Gung University, Tao-Yuan, Taiwan (China); Lin, K M [Medical Engineering Research Division, National Health Research Institutes, Zhunan Town, Miaoli County, Taiwan (China)], E-mail: cghsu@mx.nthu.edu.tw

    2009-06-15

    Accurate modeling of the photon acquisition process in pinhole SPECT is essential for optimizing resolution. In this work, the authors develop an accurate system model in which the pinhole's finite aperture and depth-dependent geometric sensitivity are explicitly included. To achieve high-resolution pinhole SPECT, the voxel size is usually set in the sub-millimeter range, so the total number of image voxels increases accordingly. It is inevitable that a system matrix modeling a variety of relevant physical factors will become extremely sophisticated. An efficient implementation of such an accurate system model is proposed in this research. We first use geometric symmetries to reduce redundant entries in the matrix. Due to the sparseness of the matrix, only non-zero terms are stored. A novel center-to-radius recording rule is also developed to effectively describe the relation between a voxel and its related detectors at every projection angle. The proposed system matrix is also suitable for multi-threaded computing. Finally, the accuracy and effectiveness of the proposed system model are evaluated on a workstation equipped with two quad-core Intel Xeon processors.
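    The sparse-storage idea above (keep only non-zero voxel-detector weights and apply them in forward projection) can be sketched in a few lines. The toy matrix below is illustrative; the paper's center-to-radius recording rule and symmetry exploitation are not reproduced here.

```python
def to_sparse(dense):
    """Store only the non-zero (detector, voxel) -> weight entries of a
    system matrix, as the abstract describes for its sparse storage."""
    return {(i, j): v for i, row in enumerate(dense)
            for j, v in enumerate(row) if v != 0.0}

def forward_project(sparse, image):
    """Compute y = A x using only the stored non-zero weights."""
    proj = {}
    for (i, j), weight in sparse.items():
        proj[i] = proj.get(i, 0.0) + weight * image[j]
    return proj

# Toy 2-detector, 2-voxel system matrix with two non-zero entries.
A = to_sparse([[0.0, 1.0],
               [2.0, 0.0]])
```

    In an iterative reconstruction loop, the same sparse structure is reused for the back projection, so the memory saved by dropping zero entries pays off twice per iteration.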

  15. Low resolution scans can provide a sufficiently accurate, cost- and time-effective alternative to high resolution scans for 3D shape analyses

    Directory of Open Access Journals (Sweden)

    Ariel E. Marcy

    2018-06-01

    Full Text Available Background Advances in 3D shape capture technology have made powerful shape analyses, such as geometric morphometrics, more feasible. While the highly accurate micro-computed tomography (µCT scanners have been the “gold standard,” recent improvements in 3D surface scanners may make this technology a faster, portable, and cost-effective alternative. Several studies have already compared the two devices but all use relatively large specimens such as human crania. Here we perform shape analyses on Australia’s smallest rodent to test whether a 3D scanner produces similar results to a µCT scanner. Methods We captured 19 delicate mouse (Pseudomys delicatulus crania with a µCT scanner and a 3D scanner for geometric morphometrics. We ran multiple Procrustes ANOVAs to test how variation due to scan device compared to other sources such as biologically relevant variation and operator error. We quantified operator error as levels of variation and repeatability. Further, we tested if the two devices performed differently at classifying individuals based on sexual dimorphism. Finally, we inspected scatterplots of principal component analysis (PCA scores for non-random patterns. Results In all Procrustes ANOVAs, regardless of factors included, differences between individuals contributed the most to total variation. The PCA plots reflect this in how the individuals are dispersed. Including only the symmetric component of shape increased the biological signal relative to variation due to device and due to error. 3D scans showed a higher level of operator error as evidenced by a greater spread of their replicates on the PCA, a higher level of multivariate variation, and a lower repeatability score. However, the 3D scan and µCT scan datasets performed identically in classifying individuals based on intra-specific patterns of sexual dimorphism. Discussion Compared to µCT scans, we find that even low resolution 3D scans of very small specimens are

  16. A gp41-based heteroduplex mobility assay provides rapid and accurate assessment of intrasubtype epidemiological linkage in HIV type 1 heterosexual transmission pairs.

    Science.gov (United States)

    Manigart, Olivier; Boeras, Debrah I; Karita, Etienne; Hawkins, Paulina A; Vwalika, Cheswa; Makombe, Nathan; Mulenga, Joseph; Derdeyn, Cynthia A; Allen, Susan; Hunter, Eric

    2012-12-01

    A critical step in HIV-1 transmission studies is the rapid and accurate identification of epidemiologically linked transmission pairs. To date, this has been accomplished by comparison of polymerase chain reaction (PCR)-amplified nucleotide sequences from potential transmission pairs, which can be cost-prohibitive for use in resource-limited settings. Here we describe a rapid, cost-effective approach to determine transmission linkage based on the heteroduplex mobility assay (HMA), and validate this approach by comparison to nucleotide sequencing. A total of 102 HIV-1-infected Zambian and Rwandan couples, with known linkage, were analyzed by gp41-HMA. A 400-base pair fragment within the envelope gp41 region of the HIV proviral genome was PCR amplified and HMA was applied to both partners' amplicons separately (autologous) and as a mixture (heterologous). If the diversity between gp41 sequences was low (<5%), a homoduplex was observed upon gel electrophoresis and the transmission was characterized as having occurred between partners (linked). If a new heteroduplex formed within the heterologous migration, the transmission was determined to be unlinked. Initial blind validation of gp41-HMA demonstrated 90% concordance between HMA and sequencing, with 100% concordance in the case of linked transmissions. Following validation, 25 newly infected partners in Kigali and 12 in Lusaka were evaluated prospectively using both HMA and nucleotide sequences. Concordant results were obtained in all but one case (97.3%). The gp41-HMA technique is a reliable and feasible tool to detect linked transmissions in the field. All identified unlinked results should be confirmed by sequence analyses.
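    The sequence-based comparison that HMA approximates reduces to a pairwise diversity threshold. A toy sketch follows; the sequences are invented, and only the 5% cut-off comes from the abstract.

```python
def pairwise_diversity(seq_a, seq_b):
    """Fraction of mismatched positions between two aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return mismatches / len(seq_a)

def classify_transmission(seq_a, seq_b, threshold=0.05):
    """Call a pair 'linked' when gp41 diversity is below the 5% threshold;
    a larger divergence is read as an unlinked (epidemiologically
    independent) transmission."""
    if pairwise_diversity(seq_a, seq_b) < threshold:
        return "linked"
    return "unlinked"
```

    The HMA itself replaces this explicit sequence comparison with gel electrophoresis of the heteroduplex, which is what makes it cheap enough for field use.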

  17. MULTI SENSOR DATA INTEGRATION FOR AN ACCURATE 3D MODEL GENERATION

    Directory of Open Access Journals (Sweden)

    S. Chhatkuli

    2015-05-01

    Full Text Available The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and some non-complex terrain. However, the 3D model automatically generated from aerial imagery generally lacks accuracy for roads under bridges, details under tree canopy, isolated trees, etc. Moreover, it often suffers from undulated road surfaces, non-conforming building shapes, and loss of minute details like street furniture. On the other hand, laser scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, and so on. However, laser scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each data set's weaknesses and helps to create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, or final 3D model, was generally noise free and without unnecessary details.

  18. A globally accurate theory for a class of binary mixture models

    Science.gov (United States)

    Dickman, Adriana G.; Stell, G.

    The self-consistent Ornstein-Zernike approximation results for the 3D Ising model are used to obtain phase diagrams for binary mixtures described by decorated models, yielding the plait point, binodals, and closed-loop coexistence curves for the models proposed by Widom, Clark, Neece, and Wheeler. The results are in good agreement with series expansions and experiments.

  19. Genomic inference accurately predicts the timing and severity of a recent bottleneck in a non-model insect population

    Science.gov (United States)

    McCoy, Rajiv C.; Garud, Nandita R.; Kelley, Joanna L.; Boggs, Carol L.; Petrov, Dmitri A.

    2015-01-01

    The analysis of molecular data from natural populations has allowed researchers to answer diverse ecological questions that were previously intractable. In particular, ecologists are often interested in the demographic history of populations, information that is rarely available from historical records. Methods have been developed to infer demographic parameters from genomic data, but it is not well understood how inferred parameters compare to true population history or depend on aspects of experimental design. Here we present and evaluate a method of SNP discovery using RNA-sequencing and demographic inference using the program δaδi, which uses a diffusion approximation to the allele frequency spectrum to fit demographic models. We test these methods in a population of the checkerspot butterfly Euphydryas gillettii. This population was intentionally introduced to Gothic, Colorado in 1977 and has since experienced extreme fluctuations including bottlenecks of fewer than 25 adults, as documented by nearly annual field surveys. Using RNA-sequencing of eight individuals from Colorado and eight individuals from a native population in Wyoming, we generate the first genomic resources for this system. While demographic inference is commonly used to examine ancient demography, our study demonstrates that our inexpensive, all-in-one approach to marker discovery and genotyping provides sufficient data to accurately infer the timing of a recent bottleneck. This demographic scenario is relevant for many species of conservation concern, few of which have sequenced genomes. Our results are remarkably insensitive to sample size or number of genomic markers, which has important implications for applying this method to other non-model systems. PMID:24237665
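    The summary statistic that δaδi fits is the allele frequency spectrum; when the ancestral state is unknown, the spectrum is folded at the minor allele count. A minimal sketch of that folding step is shown below — the sample size and SNP counts are hypothetical, not taken from the butterfly data.

```python
import numpy as np

def folded_sfs(derived_counts, n_haplotypes):
    """Build a folded site frequency spectrum from per-SNP derived
    allele counts; with the ancestral state unknown, fold at n/2."""
    n_bins = n_haplotypes // 2
    sfs = np.zeros(n_bins + 1, dtype=int)      # index = minor allele count
    for d in derived_counts:
        if 0 < d < n_haplotypes:               # skip monomorphic sites
            sfs[min(d, n_haplotypes - d)] += 1
    return sfs

# Hypothetical counts for 8 haploid genomes (kept small for illustration).
counts = [1, 1, 2, 7, 4, 8, 0]
print(folded_sfs(counts, 8))  # monomorphic sites (0 and 8) are dropped
```

    In a real analysis the spectrum would be built genome-wide from the RNA-seq genotype calls and passed to δaδi's demographic-model fitting routines.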

  20. Providing surgical care in Somalia: A model of task shifting.

    Science.gov (United States)

    Chu, Kathryn M; Ford, Nathan P; Trelles, Miguel

    2011-07-15

    Somalia is one of the most politically unstable countries in the world. Ongoing insecurity has forced an inconsistent medical response by the international community, with little data collection. This paper describes the "remote" model of surgical care by Medecins Sans Frontieres in Guri-El, Somalia. The challenges of providing the necessary prerequisites for safe surgery are discussed, as well as the successes and limitations of task shifting in this resource-limited context. In January 2006, MSF opened a project in Guri-El, located between Mogadishu and Galcayo. The objectives were to reduce mortality due to complications of pregnancy and childbirth and from violent and non-violent trauma. At the start of the program, expatriate surgeons and anesthesiologists established safe surgical practices and performed surgical procedures. After January 2008, expatriates were evacuated due to insecurity, and surgical care has been provided by local Somalian doctors and nurses with periodic supervisory visits from expatriate staff. Between October 2006 and December 2009, 2086 operations were performed on 1602 patients. The majority (1049, 65%) were male and the median age was 22 (interquartile range, 17-30). 1460 (70%) of interventions were emergent. Trauma accounted for 76% (1585) of all surgical pathology; gunshot wounds accounted for 89% (584) of violent injuries. Operative mortality (0.5% of all surgical interventions) was not higher when Somalian staff provided care than when expatriate surgeons and anesthesiologists did. The delivery of surgical care in any conflict setting is difficult, but in situations where international support is limited, the challenges are more extreme. In this model, task shifting, or the provision of services by less-trained cadres, was utilized, and peri-operative mortality remained low, demonstrating that safe surgical practices can be accomplished even without the presence of fully trained surgeons and anesthesiologists.
If security improves

  1. Full-waveform modeling of Zero-Offset Electromagnetic Induction for Accurate Characterization of Subsurface Electrical Properties

    Science.gov (United States)

    Moghadas, D.; André, F.; Vereecken, H.; Lambot, S.

    2009-04-01

    Water is a vital resource for human needs, agriculture, sanitation and industrial supply. The knowledge of soil water dynamics and solute transport is essential in agricultural and environmental engineering as it controls plant growth, hydrological processes, and the contamination of surface and subsurface water. Increased irrigation efficiency also plays an important role in water conservation, reducing drainage and mitigating water pollution and soil salinity. Geophysical methods are effective techniques for monitoring the vadose zone. In particular, electromagnetic induction (EMI) can provide, in a non-invasive way, important information about the soil electrical properties at the field scale, which are mainly correlated to important variables such as soil water content, salinity, and texture. EMI is based on the radiation of a VLF EM wave into the soil. Depending on its electrical conductivity, Foucault (eddy) currents are generated and produce a secondary EM field which is then recorded by the EMI system. Advanced techniques for EMI data interpretation resort to inverse modeling. Yet, a major gap in current knowledge is the limited accuracy of the forward model used for describing the EMI-subsurface system, usually relying on strongly simplifying assumptions. We present a new low frequency EMI method based on Vector Network Analyzer (VNA) technology and advanced forward modeling using a linear system of complex transfer functions for describing the EMI loop antenna and a three-dimensional solution of Maxwell's equations for wave propagation in multilayered media. The VNA permits simple calibration of the EMI system to international standards. We derived a Green's function for the zero-offset, off-ground horizontal loop antenna and also proposed an optimal integration path for faster evaluation of the spatial-domain Green's function from its spectral counterpart.
This new integration path shows fewer oscillations compared with the real path and avoids the

  2. THE IMPACT OF ACCURATE EXTINCTION MEASUREMENTS FOR X-RAY SPECTRAL MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Randall K. [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Valencic, Lynne A. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Corrales, Lia, E-mail: lynne.a.valencic@nasa.gov [MIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Avenue, 37-241, Cambridge, MA 02139 (United States)

    2016-02-20

    Interstellar extinction includes both absorption and scattering of photons from interstellar gas and dust grains, and it has the effect of altering a source's spectrum and its total observed intensity. However, while multiple absorption models exist, there are no useful scattering models in standard X-ray spectrum fitting tools, such as XSPEC. Nonetheless, X-ray halos, created by scattering from dust grains, are detected around even moderately absorbed sources, and the impact on an observed source spectrum can be significant, albeit modest compared to direct absorption. By convolving the scattering cross section with dust models, we have created a spectral model as a function of energy, type of dust, and extraction region that can be used with models of direct absorption. This will ensure that the extinction model is consistent and enable direct connections to be made between a source's X-ray spectral fits and its UV/optical extinction.

  3. Fault Tolerance for Industrial Actuators in Absence of Accurate Models and Hardware Redundancy

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik

    2015-01-01

    This paper investigates Fault-Tolerant Control for closed-loop systems where only coarse models are available and there is lack of actuator and sensor redundancies. The problem is approached in the form of a typical servomotor in closed-loop. A linear model is extracted from input/output data to ...

  4. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    Science.gov (United States)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed in any direction, and the large field of view reduces the number of photographs needed. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and to laser scanning point clouds. The paper summarizes some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  5. Highly Accurate Tree Models Derived from Terrestrial Laser Scan Data: A Method Description

    Directory of Open Access Journals (Sweden)

    Jan Hackenberg

    2014-05-01

    Full Text Available This paper presents a method for fitting cylinders into a point cloud, derived from a terrestrial laser-scanned tree. Utilizing high scan quality data as the input, the resulting models describe the branching structure of the tree, capable of detecting branches with a diameter smaller than a centimeter. The cylinders are stored as a hierarchical tree-like data structure encapsulating parent-child neighbor relations and incorporating the tree’s direction of growth. This structure enables the efficient extraction of tree components, such as the stem or a single branch. The method was validated both by applying a comparison of the resulting cylinder models with ground truth data and by an analysis between the input point clouds and the models. Tree models were accomplished representing more than 99% of the input point cloud, with an average distance from the cylinder model to the point cloud within sub-millimeter accuracy. After validation, the method was applied to build two allometric models based on 24 tree point clouds as an example of the application. Computation terminated successfully within less than 30 min. For the model predicting the total above ground volume, the coefficient of determination was 0.965, showing the high potential of terrestrial laser-scanning for forest inventories.
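    The hierarchical parent-child cylinder structure described above can be sketched as a minimal data structure; the radii and lengths below are hypothetical, and the real method stores additional attributes such as growth direction.

```python
import math

class Cylinder:
    """Node in a hierarchical cylinder model of a tree: each cylinder
    knows its parent and children, mirroring the branching structure."""
    def __init__(self, radius, length, parent=None):
        self.radius, self.length = radius, length
        self.parent, self.children = parent, []
        if parent is not None:
            parent.children.append(self)

    def volume(self):
        return math.pi * self.radius ** 2 * self.length

    def subtree_volume(self):
        """Total wood volume of this component (stem or a single branch)."""
        return self.volume() + sum(c.subtree_volume() for c in self.children)

# Hypothetical toy tree: a stem carrying one branch, which carries a twig.
stem   = Cylinder(radius=0.10, length=2.0)
branch = Cylinder(radius=0.03, length=1.0, parent=stem)
twig   = Cylinder(radius=0.005, length=0.3, parent=branch)
print(round(stem.subtree_volume(), 6))   # above-ground volume in m^3
```

    The parent-child links make the extraction of single components (e.g. one branch with all its descendants) a simple subtree traversal, which is the property the paper exploits.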

  6. Accurate Fabrication of Hydroxyapatite Bone Models with Porous Scaffold Structures by Using Stereolithography

    International Nuclear Information System (INIS)

    Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu

    2011-01-01

    Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed by using computer-aided design. The scaffold models, composed of acrylic resin with hydroxyapatite particles at 45 vol.%, were fabricated by using stereolithography, a computer-aided manufacturing technique. After dewaxing and sintering heat treatment processes, the ceramic scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. By using computer-aided analysis, it was found that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulation and promote regeneration of new bone.

  7. Accurate Fabrication of Hydroxyapatite Bone Models with Porous Scaffold Structures by Using Stereolithography

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, Chiaki; Tasaki, Satoko; Kirihara, Soshu, E-mail: c-maeda@jwri.osaka-u.ac.jp [Joining and Welding Research Institute, Osaka University, 11-1 Mihogaoka, Ibaraki City, Osaka 567-0047 (Japan)

    2011-05-15

    Computer graphic models of bioscaffolds with four-coordinate lattice structures of solid rods in artificial bones were designed by using computer-aided design. The scaffold models, composed of acrylic resin with hydroxyapatite particles at 45 vol.%, were fabricated by using stereolithography, a computer-aided manufacturing technique. After dewaxing and sintering heat treatment processes, the ceramic scaffold models with four-coordinate lattices and fine hydroxyapatite microstructures were obtained successfully. By using computer-aided analysis, it was found that bio-fluids could flow extensively inside the sintered scaffolds. This result shows that the lattice structures will realize appropriate bio-fluid circulation and promote regeneration of new bone.

  8. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions

    DEFF Research Database (Denmark)

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving...... average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre
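    As a rough illustration of the Laguerre basis underlying the LEK approach, the sketch below generates discrete-time Laguerre functions by recursive filtering and checks their orthonormality. The pole value and orders are arbitrary; this is not the authors' implementation.

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_basis(alpha, order, n_samples):
    """Discrete-time Laguerre functions generated by filtering an
    impulse; alpha (0 < alpha < 1) sets the decay rate of the basis."""
    impulse = np.zeros(n_samples)
    impulse[0] = 1.0
    basis = np.empty((order, n_samples))
    # l_0: first-order low-pass response sqrt(1-a^2)/(1 - a z^-1)
    basis[0] = lfilter([np.sqrt(1 - alpha**2)], [1.0, -alpha], impulse)
    # l_j: repeated all-pass filtering (z^-1 - a)/(1 - a z^-1) of l_{j-1}
    for j in range(1, order):
        basis[j] = lfilter([-alpha, 1.0], [1.0, -alpha], basis[j - 1])
    return basis

L = laguerre_basis(alpha=0.5, order=4, n_samples=2000)
gram = L @ L.T                      # should be close to the identity
print(np.allclose(gram, np.eye(4), atol=1e-8))
```

    A compact orthonormal basis of this kind is what allows kernel estimates to be expressed with far fewer coefficients than a raw impulse-response expansion.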

  9. submitter A model for the accurate computation of the lateral scattering of protons in water

    CERN Document Server

    Bellinzona, EV; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T

    2016-01-01

    A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted to MC data calculated with FLUKA. The model, after convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time.

  10. Accurate Locally Conservative Discretizations for Modeling Multiphase Flow in Porous Media on General Hexahedra Grids

    KAUST Repository

    Wheeler, M.F.; Xue, G.

    2010-01-01

    For many years there have been formulations considered for modeling single phase flow on general hexahedra grids. These include the extended mixed finite element method, and families of mimetic finite difference methods. In most of these schemes either

  11. Solitary mammals provide an animal model for autism spectrum disorders.

    Science.gov (United States)

    Reser, Jared Edward

    2014-02-01

    Species of solitary mammals are known to exhibit specialized, neurological adaptations that prepare them to focus working memory on food procurement and survival rather than on social interaction. Solitary and nonmonogamous mammals, which do not form strong social bonds, have been documented to exhibit behaviors and biomarkers that are similar to endophenotypes in autism. Both individuals on the autism spectrum and certain solitary mammals have been reported to be low on measures of affiliative need, bodily expressiveness, bonding and attachment, direct and shared gazing, emotional engagement, conspecific recognition, partner preference, separation distress, and social approach behavior. Solitary mammals also exhibit certain biomarkers that are characteristic of autism, including diminished oxytocin and vasopressin signaling, dysregulation of the endogenous opioid system, increased hypothalamic-pituitary-adrenal axis (HPA) activity to social encounters, and reduced HPA activity to separation and isolation. The extent of these similarities suggests that solitary mammals may offer a useful model of autism spectrum disorders and an opportunity for investigating genetic and epigenetic etiological factors. If the brain in autism can be shown to exhibit distinct homologous or homoplastic similarities to the brains of solitary animals, it will suggest that these specializations may be central to the phenotype and should be targeted for further investigation. Research on the neurological, cellular, and molecular basis of these specializations in other mammals may provide insight for behavioral analysis, communication intervention, and psychopharmacology for autism.

  12. Scalable and Accurate SMT-Based Model Checking of Data Flow Systems

    Science.gov (United States)

    2013-10-31

    of variable x is always less than that of variable y) can be represented in this theory. • A theory of inductive datatypes. Modeling software datatypes can be done directly in this theory. • A theory of arrays. Software that uses arrays can be modeled with constraints in this theory. [Figure residue: an SMT-engine feature list naming arithmetic (and specialized fragments), arrays, inductive datatypes, bit-vectors, and uninterpreted functions.]

  13. Efficient and accurate log-Lévy approximations to Lévy driven LIBOR models

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David

    2011-01-01

    The LIBOR market model is very popular for pricing interest rate derivatives, but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term is growing exponentially fast (as a function of the tenor length). In this work, we con...... ratchet caps show that the approximations perform very well. In addition, we also consider the log-L\\'evy approximation of annuities, which offers good approximations for high volatility regimes....

  14. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    Science.gov (United States)

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
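    Fitting the two-parameter Weibull form to a saccharification time course can be sketched in a few lines; the synthetic data and parameter values below are illustrative, not taken from the paper's 96 data sets.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_yield(t, lam, n):
    """Fractional saccharification yield 1 - exp(-(t/lam)^n);
    lam is the characteristic time, n the shape parameter."""
    return 1.0 - np.exp(-(t / lam) ** n)

# Hypothetical hydrolysis time course (hours) with mild noise.
rng = np.random.default_rng(0)
t = np.linspace(1, 72, 30)
y = weibull_yield(t, 24.0, 0.8) + rng.normal(0, 0.01, t.size)

(lam_fit, n_fit), _ = curve_fit(weibull_yield, t, y, p0=(10.0, 1.0))
print(f"lambda ~ {lam_fit:.1f} h, n ~ {n_fit:.2f}")
```

    The fitted λ can then be compared across substrates or enzyme loadings as a single-number summary of overall system performance, which is how the paper proposes to use it.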

  15. Color-SIFT model: a robust and an accurate shot boundary detection algorithm

    Science.gov (United States)

    Sharmila Kumari, M.; Shekar, B. H.

    2010-02-01

    In this paper, a new technique called the color-SIFT model is devised for shot boundary detection. Unlike the scale-invariant feature transform (SIFT) model, which uses only grayscale information and misses important visual information regarding color, here we adopt different color planes to extract keypoints, which are subsequently used to detect shot boundaries. The basic SIFT model has four stages, namely scale-space peak selection, keypoint localization, orientation assignment and keypoint description, and all four stages are employed to extract key descriptors in each color plane. The proposed model works on three different color planes, and a fusion is made to take a decision based on the number of keypoint matches for shot boundary identification; it hence differs from the color global scale-invariant feature transform, which works on quantized images. In addition, the proposed algorithm possesses invariance to linear transformations and is robust to occlusion and noisy environments. Experiments have been conducted on the standard TRECVID video database to evaluate the performance of the proposed model.
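    The fusion step — deciding on a boundary from keypoint-match counts in the three color planes — might look like the following sketch. The thresholds and the voting rule are assumptions for illustration, not the paper's actual decision rule.

```python
def is_shot_boundary(matches_per_plane, thresholds=(20, 20, 20), votes_needed=2):
    """Fuse per-color-plane keypoint match counts between consecutive
    frames: declare a shot boundary when enough planes report few matches.
    Thresholds and the voting rule are illustrative placeholders."""
    low = sum(m < th for m, th in zip(matches_per_plane, thresholds))
    return low >= votes_needed

print(is_shot_boundary((5, 8, 3)))       # few matches in every plane -> cut
print(is_shot_boundary((150, 160, 90)))  # many matches -> same shot
```

    Requiring agreement across planes is what makes the fused decision more robust than thresholding a single grayscale match count.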

  16. Change in volume parameters induced by neoadjuvant chemotherapy provide accurate prediction of overall survival after resection in patients with oesophageal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Tamandl, Dietmar; Fueger, Barbara; Kinsperger, Patrick; Haug, Alexander; Ba-Ssalamah, Ahmed [Medical University of Vienna, Department of Biomedical Imaging and Image-Guided Therapy, Comprehensive Cancer Center GET-Unit, Vienna (Austria); Gore, Richard M. [University of Chicago Pritzker School of Medicine, Department of Radiology, Chicago, IL (United States); Hejna, Michael [Medical University of Vienna, Department of Internal Medicine, Division of Medical Oncology, Comprehensive Cancer Center GET-Unit, Vienna (Austria); Paireder, Matthias; Schoppmann, Sebastian F. [Medical University of Vienna, Department of Surgery, Upper-GI-Service, Comprehensive Cancer Center GET-Unit, Vienna (Austria)

    2016-02-15

    To assess the prognostic value of volumetric parameters measured with CT and PET/CT in patients with neoadjuvant chemotherapy (NACT) and resection for oesophageal cancer (EC). Patients with locally advanced EC, who were treated with NACT and resection, were retrospectively analysed. Data from CT volumetry and ¹⁸F-FDG PET/CT (maximum standardized uptake value [SUVmax], metabolic tumour volume [MTV], and total lesion glycolysis [TLG]) were recorded before and after NACT. The impact of volumetric parameter changes induced by NACT (MTVratio, TLGratio, etc.) on overall survival (OS) was assessed using a Cox proportional hazards model. Eighty-four patients were assessed using CT volumetry; of those, 50 also had PET/CT before and after NACT. Low post-treatment CT volume and thickness, MTV, TLG, and SUVmax were all associated with longer OS (p < 0.05), as were CTthicknessratio, MTVratio, TLGratio, and SUVmaxratio (p < 0.05). In the multivariate analysis, only MTVratio (hazard ratio, HR 2.52 [95% confidence interval, CI 1.33-4.78], p = 0.005), TLGratio (HR 3.89 [95% CI 1.46-10.34], p = 0.006), and surgical margin status (p < 0.05) were independent predictors of OS. MTVratio and TLGratio are independent prognostic factors for survival in patients after NACT and resection for EC. (orig.)

  17. Accurate Estimation of Target amounts Using Expanded BASS Model for Demand-Side Management

    Science.gov (United States)

    Kim, Hyun-Woong; Park, Jong-Jin; Kim, Jin-O.

    2008-10-01

    The electricity demand in Korea has rapidly increased along with steady economic growth since the 1970s. Korea has therefore actively pursued not only SSM (Supply-Side Management) but also DSM (Demand-Side Management) activities, to reduce the investment cost of generating units and to lower electricity supply costs by enhancing the efficiency of national energy utilization. However, research on rebates, which influence the success or failure of DSM programs, is not sufficient. This paper mathematically models an expanded Bass model that accounts for rebates, which influence penetration amounts for DSM programs. To reflect the rebate effect more precisely, the pricing function used in the expanded Bass model directly reflects the response of potential participants to the rebate level.
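    For context, the standard Bass diffusion model that the paper expands can be sketched as follows. The rebate-dependent adjustment shown is a hypothetical stand-in for the paper's pricing function, included only to illustrate how a rebate can enter the model.

```python
import math

def bass_cumulative(t, p, q, market_size):
    """Cumulative adopters under the standard Bass diffusion model;
    p = coefficient of innovation, q = coefficient of imitation."""
    e = math.exp(-(p + q) * t)
    return market_size * (1.0 - e) / (1.0 + (q / p) * e)

def rebate_adjusted_p(p0, rebate, sensitivity=0.5):
    """Illustrative pricing response: a rebate (fraction of price, 0..1)
    raises the innovation coefficient. The functional form is an assumption."""
    return p0 * (1.0 + sensitivity * rebate)

base = bass_cumulative(5, p=0.03, q=0.38, market_size=10_000)
with_rebate = bass_cumulative(5, p=rebate_adjusted_p(0.03, 0.4), q=0.38,
                              market_size=10_000)
print(with_rebate > base)  # a rebate accelerates penetration
```

    Estimating target amounts for a DSM program then amounts to evaluating the cumulative-adoption curve, with the rebate level entering through the pricing function.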

  18. Modelling of Limestone Dissolution in Wet FGD Systems: The Importance of an Accurate Particle Size Distribution

    DEFF Research Database (Denmark)

    Kiil, Søren; Johnsson, Jan Erik; Dam-Johansen, Kim

    1999-01-01

    Danish limestone types with very different particle size distributions (PSDs). All limestones were of a high purity. Model predictions were found to be qualitatively in good agreement with experimental data without any use of adjustable parameters. Deviations between measurements and simulations were...... attributed primarily to the PSD measurements of the limestone particles, which were used as model inputs. The PSDs, measured using a laser diffraction-based Malvern analyser, were probably not representative of the limestone samples because agglomeration phenomena took place when the particles were

  19. Fast and accurate modeling of nonlinear pulse propagation in graded-index multimode fibers.

    Science.gov (United States)

    Conforti, Matteo; Mas Arabi, Carlos; Mussot, Arnaud; Kudlinski, Alexandre

    2017-10-01

    We develop a model for the description of nonlinear pulse propagation in multimode optical fibers with a parabolic refractive index profile. It consists of a 1+1D generalized nonlinear Schrödinger equation with a periodic nonlinear coefficient, which can be solved in an extremely fast and efficient way. The model is able to quantitatively reproduce recently observed phenomena like geometric parametric instability and broadband dispersive wave emission. We envisage that our equation will represent a valuable tool for the study of spatiotemporal nonlinear dynamics in the growing field of multimode fiber optics.
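    A 1+1D NLSE with a z-periodic nonlinear coefficient of this kind can be integrated with a standard symmetrized split-step Fourier scheme. The sketch below uses illustrative, dimensionless coefficients rather than the paper's fitted GRIN-fiber parameters, and checks that the scheme conserves power.

```python
import numpy as np

def split_step(a0, dist, dz, beta2, gamma_avg, gamma_mod, period):
    """Symmetrized split-step Fourier solver for a 1+1D NLSE with a
    z-periodic nonlinear coefficient gamma(z). Coefficients are
    illustrative, not fitted to a real graded-index fiber."""
    n = a0.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=1.0)          # angular frequency grid
    half_linear = np.exp(0.5j * beta2 * w**2 * dz / 2)
    a, z = a0.astype(complex), 0.0
    while z < dist - 1e-12:
        a = np.fft.ifft(half_linear * np.fft.fft(a))  # half dispersion step
        gamma = gamma_avg * (1 + gamma_mod * np.cos(2 * np.pi * z / period))
        a *= np.exp(1j * gamma * np.abs(a)**2 * dz)   # full nonlinear step
        a = np.fft.ifft(half_linear * np.fft.fft(a))  # half dispersion step
        z += dz
    return a

t = np.linspace(-10, 10, 256)
a_out = split_step(np.exp(-t**2), dist=1.0, dz=0.001,
                   beta2=-1.0, gamma_avg=1.0, gamma_mod=0.5, period=0.05)
# Each sub-step is a pure phase rotation, so the pulse power is conserved.
print(np.isclose(np.sum(np.abs(a_out)**2), np.sum(np.abs(np.exp(-t**2))**2)))
```

    Because the transverse dynamics are folded into the periodic coefficient, each propagation step costs only a pair of 1D FFTs, which is the source of the speed the abstract emphasizes.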

  20. Efficient and Accurate Log-Levy Approximations of Levy-Driven LIBOR Models

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David

    2012-01-01

    The LIBOR market model is very popular for pricing interest rate derivatives but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term grows exponentially fast (as a function of the tenor length). We consider a Lévy-driven ...... ratchet caps show that the approximations perform very well. In addition, we also consider the log-Lévy approximation of annuities, which offers good approximations for high-volatility regimes....

  1. Affine-response model of molecular solvation of ions: Accurate predictions of asymmetric charging free energies

    Czech Academy of Sciences Publication Activity Database

    Bardhan, J. P.; Jungwirth, Pavel; Makowski, L.

    Roč. 137, č. 12 (2012), 124101/1-124101/6 ISSN 0021-9606 R&D Projects: GA MŠk LH12001 Institutional research plan: CEZ:AV0Z40550506 Keywords: ion solvation * continuum models * linear response Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 3.164, year: 2012

  2. Fast and accurate Bayesian model criticism and conflict diagnostics using R-INLA

    KAUST Repository

    Ferkingstad, Egil; Held, Leonhard; Rue, Haavard

    2017-01-01

    . Usually, a Bayesian hierarchical model incorporates a grouping of the individual data points, as, for example, with individuals in repeated measurement data. In such cases, the following question arises: Are any of the groups “outliers,” or in conflict

  3. Accurate reduction of a model of circadian rhythms by delayed quasi steady state assumptions

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš

    2014-01-01

    Roč. 139, č. 4 (2014), s. 577-585 ISSN 0862-7959 Grant - others:European Commission(XE) StochDetBioModel(328008) Program:FP7 Institutional support: RVO:67985840 Keywords : biochemical networks * gene regulatory networks * oscillating systems * periodic solution Subject RIV: BA - General Mathematics http://hdl.handle.net/10338.dmlcz/144135

  4. Efficient accurate syntactic direct translation models: one tree at a time

    NARCIS (Netherlands)

    Hassan, H.; Sima'an, K.; Way, A.

    2011-01-01

    A challenging aspect of Statistical Machine Translation from Arabic to English lies in bringing the Arabic source morpho-syntax to bear on the lexical as well as word-order choices of the English target string. In this article, we extend the feature-rich discriminative Direct Translation Model 2

  5. Statistically accurate low-order models for uncertainty quantification in turbulent dynamical systems.

    Science.gov (United States)

    Sapsis, Themistoklis P; Majda, Andrew J

    2013-08-20

    A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra.
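    The 40-mode Lorenz 96 model used as the first benchmark can be sketched in a few lines; the time step, integration length, and initial perturbation below are illustrative.

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    """Tendency dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    with cyclic indexing handled by np.roll."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate_rk4(x, dt, steps, forcing=8.0):
    """Classical fourth-order Runge-Kutta time stepping."""
    for _ in range(steps):
        k1 = lorenz96_rhs(x, forcing)
        k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
        k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
        k4 = lorenz96_rhs(x + dt * k3, forcing)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = 8.0 * np.ones(40)
x0[0] += 0.01                       # perturb the F = 8 fixed point
x = integrate_rk4(x0, dt=0.01, steps=2000)
print(np.isfinite(x).all(), np.abs(x).max() < 30)
```

    With F = 8 the model is chaotic but statistically stationary, which is exactly the regime in which reduced-order statistical closures like ROMQG are tested.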

  6. A semi-implicit, second-order-accurate numerical model for multiphase underexpanded volcanic jets

    Directory of Open Access Journals (Sweden)

    S. Carcano

    2013-11-01

    Full Text Available An improved version of the PDAC (Pyroclastic Dispersal Analysis Code, Esposti Ongaro et al., 2007 numerical model for the simulation of multiphase volcanic flows is presented and validated for the simulation of multiphase volcanic jets in supersonic regimes. The present version of PDAC includes second-order time- and space discretizations and fully multidimensional advection discretizations in order to reduce numerical diffusion and enhance the accuracy of the original model. The model is tested on the problem of jet decompression in both two and three dimensions. For homogeneous jets, numerical results are consistent with experimental results at the laboratory scale (Lewis and Carlson, 1964. For nonequilibrium gas–particle jets, we consider monodisperse and bidisperse mixtures, and we quantify nonequilibrium effects in terms of the ratio between the particle relaxation time and a characteristic jet timescale. For coarse particles and low particle load, numerical simulations well reproduce laboratory experiments and numerical simulations carried out with an Eulerian–Lagrangian model (Sommerfeld, 1993. At the volcanic scale, we consider steady-state conditions associated with the development of Vulcanian and sub-Plinian eruptions. For the finest particles produced in these regimes, we demonstrate that the solid phase is in mechanical and thermal equilibrium with the gas phase and that the jet decompression structure is well described by a pseudogas model (Ogden et al., 2008. Coarse particles, on the other hand, display significant nonequilibrium effects, which are associated with their larger relaxation time. Deviations from the equilibrium regime, with maximum velocity and temperature differences on the order of 150 m s−1 and 80 K across shock waves, occur especially during the rapid acceleration phases, and are able to substantially modify the jet dynamics with respect to the homogeneous case.

  7. Assessing the performance of commercial Agisoft PhotoScan software to deliver reliable data for accurate3D modelling

    Directory of Open Access Journals (Sweden)

    Jebur Ahmed

    2018-01-01

    Full Text Available 3D models delivered by digital photogrammetric techniques have increased massively and developed to meet the requirements of many applications. The reliability of these models depends essentially on the data processing cycle and the adopted tool solution, in addition to data quality. Agisoft PhotoScan is a professional image-based 3D modelling software package, which seeks to create orderly, precise 3D content from still images. It works with arbitrary images captured under both controlled and uncontrolled conditions. Following the recommendations of many users around the globe, Agisoft PhotoScan has become an important source of precise 3D data for different applications. How reliable these data are for accurate 3D modelling applications is the question that needs an answer. Therefore, in this paper, the performance of the Agisoft PhotoScan software was assessed and analyzed to show the potential of the software for accurate 3D modelling applications. To investigate this, a study was carried out in the University of Baghdad / Al-Jaderia campus using data collected from an airborne metric camera with a 457 m flying height. Following statistical and validation shape analysis, the Agisoft results show potential with respect to the research objective and the dataset quality.

  8. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    International Nuclear Information System (INIS)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-01-01

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. 
Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
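
    The paper's three PSF models are not reproduced in the abstract, but the qualitative point, that at low magnification the intrinsic detector response dominates and must be modeled, can be illustrated with the standard pinhole resolution estimate: the geometric term and the demagnified intrinsic term combined in quadrature. A hedged sketch with assumed parameter values:

    ```python
    import math

    def pinhole_system_resolution(d_eff, R_int, M):
        """Standard pinhole estimate: geometric term d_eff * (1 + 1/M)
        combined in quadrature with the intrinsic detector resolution
        R_int demagnified by the magnification M."""
        R_geo = d_eff * (1.0 + 1.0 / M)
        return math.sqrt(R_geo**2 + (R_int / M) ** 2)

    d_eff = 1.0  # effective pinhole diameter [mm] (assumed)
    R_int = 3.0  # intrinsic detector FWHM [mm] (assumed)

    for M in (0.5, 1.0, 2.0, 4.0):
        print(f"M = {M:3.1f}  R_sys = {pinhole_system_resolution(d_eff, R_int, M):.2f} mm")
    ```

    At low magnification (M < 1) the R_int/M term dominates the system resolution, which is why including the measured intrinsic response in the PSF matters for the low-magnification systems studied here.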

  9. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    Energy Technology Data Exchange (ETDEWEB)

    Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036, Spain and Servei de Física Mèdica i Protecció Radiològica, Institut Català d’Oncologia, L’Hospitalet de Llobregat 08907 (Spain); Roé, Nuria [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036 (Spain); Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Complexo Hospitalario Universitario de Santiago de Compostela 15706, Spain and Grupo de Imagen Molecular, Instituto de Investigacións Sanitarias de Santiago de Compostela (IDIS), Galicia 15782 (Spain); Falcon, Carles; Ros, Domènec [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036, Spain and CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Pavía, Javier [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 080836 (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); and Servei de Medicina Nuclear, Hospital Clínic, Barcelona 08036 (Spain)

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. 
Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  10. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    International Nuclear Information System (INIS)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
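
    The buffer-estimation step described above can be sketched as follows: given a sample of physical-versus-virtual distance discrepancies, fit a normal distribution and take the quantile corresponding to the target collision probability. The discrepancy data below are synthetic stand-ins, and the normal-quantile method is an illustrative assumption rather than the authors' exact procedure:

    ```python
    import random
    import statistics

    # Synthetic "measured minus modeled" clearance discrepancies [cm]
    # (assumed data; the study derived these from 300 physical-vs-virtual
    # distance comparisons).
    random.seed(0)
    discrepancies = [random.gauss(0.5, 0.6) for _ in range(300)]

    def safety_buffer(errors, collision_prob):
        """Buffer distance such that, under a fitted normal model of the
        discrepancy, the chance of underestimating the true clearance by
        more than the buffer is collision_prob."""
        mu = statistics.mean(errors)
        sigma = statistics.stdev(errors)
        return statistics.NormalDist(mu, sigma).inv_cdf(1.0 - collision_prob)

    for p in (1e-3, 1e-4, 1e-5):  # 0.1%, 0.01%, 0.001%
        print(f"P(collision) = {p:.0e}  buffer = {safety_buffer(discrepancies, p):.2f} cm")
    ```

    Lower target collision probabilities map to larger buffers, which is the trade-off the treatment-site-specific buffers in the study quantify.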

  11. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California 90024 (United States)

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  12. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery.

    Science.gov (United States)

    Yu, Victoria Y; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A; Sheng, Ke

    2015-11-01

    Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was attributed to phantom setup

  13. Accurate Simulation of 802.11 Indoor Links: A "Bursty" Channel Model Based on Real Measurements

    Directory of Open Access Journals (Sweden)

    Agüero Ramón

    2010-01-01

    Full Text Available We propose a novel channel model to be used for simulating indoor wireless propagation environments. An extensive measurement campaign was carried out to assess the performance of different transport protocols over 802.11 links. This enabled us to better tune our approach, which is based on an autoregressive filter. One of the main advantages of this proposal lies in its ability to reflect the "bursty" behavior that characterizes indoor wireless scenarios and has a great impact on the behavior of upper-layer protocols. We compare this channel model, integrated within the Network Simulator (ns-2) platform, with other traditional approaches, showing that it better reflects the real behavior that was empirically assessed.
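
    One way to realize an autoregressive "bursty" link model of the kind described above is to threshold an AR(1)-filtered Gaussian process: the temporal correlation introduced by the filter clusters frame errors into bursts, unlike an i.i.d. Bernoulli loss model with the same average rate. This is a minimal sketch under assumed parameters, not the calibrated model of the paper:

    ```python
    import random
    from statistics import NormalDist

    def ar1_bursty_losses(n, rho=0.95, loss_rate=0.1, seed=1):
        """Frame-error trace from an AR(1) process x[k] = rho*x[k-1] + w[k]
        with unit marginal variance; thresholding the correlated process
        yields clustered (bursty) errors."""
        rng = random.Random(seed)
        sigma_w = (1.0 - rho**2) ** 0.5          # keeps marginal variance at 1
        thr = NormalDist().inv_cdf(1.0 - loss_rate)  # threshold for target rate
        x, trace = 0.0, []
        for _ in range(n):
            x = rho * x + rng.gauss(0.0, sigma_w)
            trace.append(1 if x > thr else 0)     # 1 = frame lost
        return trace

    def mean_burst_length(trace):
        bursts, cur = [], 0
        for b in trace:
            if b:
                cur += 1
            elif cur:
                bursts.append(cur)
                cur = 0
        if cur:
            bursts.append(cur)
        return sum(bursts) / len(bursts) if bursts else 0.0

    trace = ar1_bursty_losses(20000)
    print("loss rate ~", sum(trace) / len(trace))
    print("mean burst length:", mean_burst_length(trace))
    ```

    For an i.i.d. model at the same 10% loss rate the mean burst length would be about 1.1 frames; the correlated model produces much longer bursts, which is what stresses upper-layer protocols.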

  14. Accurate Modeling of The Siemens S7 SCADA Protocol For Intrusion Detection And Digital Forensic

    Directory of Open Access Journals (Sweden)

    Amit Kleinmann

    2014-09-01

    Full Text Available The Siemens S7 protocol is commonly used in SCADA systems for communications between a Human Machine Interface (HMI) and the Programmable Logic Controllers (PLCs). This paper presents a model-based Intrusion Detection System (IDS) designed for S7 networks. The approach is based on the key observation that S7 traffic to and from a specific PLC is highly periodic; as a result, each HMI-PLC channel can be modeled using its own unique Deterministic Finite Automaton (DFA). The resulting DFA-based IDS is very sensitive and is able to flag anomalies such as a message appearing out of its position in the normal sequence or a message referring to a single unexpected bit. The intrusion detection approach was evaluated on traffic from two production systems. Despite its high sensitivity, the system had a very low false-positive rate: over 99.82% of the traffic was identified as normal.
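
    The periodicity observation above lends itself to a compact sketch: learn the repeating cycle of message types from clean traffic, encode it as a DFA whose state is the position in the cycle, and flag any out-of-sequence symbol. The message names and the simple resynchronization policy below are illustrative assumptions, not actual S7 message types or the paper's automaton-learning algorithm:

    ```python
    # Minimal DFA-based anomaly detector for a periodic request channel.

    def learn_cycle_dfa(training):
        """Assume traffic is one repeating cycle; state = position in cycle."""
        period = None
        for p in range(1, len(training)):
            if all(training[i] == training[i % p] for i in range(len(training))):
                period = p
                break
        cycle = training[:period]
        # transition table: state -> (expected symbol, next state)
        return {s: (cycle[s], (s + 1) % len(cycle)) for s in range(len(cycle))}

    def detect(dfa, stream):
        state, alarms = 0, []
        for i, sym in enumerate(stream):
            expected, nxt = dfa[state]
            if sym == expected:
                state = nxt
            else:
                alarms.append((i, sym))  # out-of-sequence message
                state = 0                # naive resynchronization policy
        return alarms

    train = ["read_A", "read_B", "write_C"] * 50
    dfa = learn_cycle_dfa(train)
    live = ["read_A", "read_B", "write_C", "read_A", "write_X", "write_C"]
    print(detect(dfa, live))
    ```

    Note that the naive resynchronization also flags the legitimate message following an injected one; a production IDS would use a more careful recovery strategy.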

  15. Physical Model for Rapid and Accurate Determination of Nanopore Size via Conductance Measurement.

    Science.gov (United States)

    Wen, Chenyu; Zhang, Zhen; Zhang, Shi-Li

    2017-10-27

    Nanopores have been explored for various biochemical and nanoparticle analyses, primarily via characterizing the ionic current through the pores. At present, however, size determination for solid-state nanopores is experimentally tedious and theoretically unaccountable. Here, we establish a physical model by introducing an effective transport length, L_eff, that measures, for a symmetric nanopore, twice the distance from the center of the nanopore where the electric field is the highest to the point along the nanopore axis where the electric field falls to e^-1 of this maximum. By [Formula: see text], a simple expression S0 = f(G, σ, h, β) is derived to algebraically correlate the minimum nanopore cross-section area S0 to the nanopore conductance G, electrolyte conductivity σ, and membrane thickness h, with β denoting the pore shape that is determined by the pore fabrication technique. The model agrees excellently with experimental results for nanopores in graphene, single-layer MoS2, and ultrathin SiNx films. The generality of the model is verified by applying it to micrometer-size pores.
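
    The paper's exact expression is elided in the abstract, so the following sketch uses the widely cited cylindrical-pore-plus-access-resistance model, G = σ (4h/(π d^2) + 1/d)^(-1), and inverts it for the pore diameter. This is a stand-in for the correlation S0 = f(G, σ, h, β), not the paper's own formula, and all parameter values are assumed:

    ```python
    import math

    def pore_diameter(G, sigma, h):
        """Invert G = sigma * (4h/(pi d^2) + 1/d)^(-1) for the diameter d.
        Substituting u = 1/d gives the quadratic a*u^2 + u - sigma/G = 0
        with a = 4h/pi; take the positive root."""
        a = 4.0 * h / math.pi
        c = sigma / G
        u = (-1.0 + math.sqrt(1.0 + 4.0 * a * c)) / (2.0 * a)
        return 1.0 / u

    sigma = 10.5  # 1 M KCl conductivity [S/m] (assumed)
    h = 10e-9     # effective membrane thickness [m] (assumed)
    G = 20e-9     # measured conductance [S] (assumed)

    d = pore_diameter(G, sigma, h)
    print(f"pore diameter ~ {d * 1e9:.2f} nm")
    ```

    A round trip (recomputing G from the returned diameter) recovers the measured conductance, which is a quick sanity check on the algebra.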

  16. An accurate Kriging-based regional ionospheric model using combined GPS/BeiDou observations

    Science.gov (United States)

    Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed

    2018-01-01

    In this study, we propose a regional ionospheric model (RIM) based on both GPS-only and combined GPS/BeiDou observations for single-frequency precise point positioning (SF-PPP) users in Europe. GPS/BeiDou observations from 16 reference stations are processed in the zero-difference mode. A least-squares algorithm is developed to determine the vertical total electron content (VTEC) bi-linear function parameters for a 15-minute time interval. The Kriging interpolation method is used to estimate the VTEC values on a 1° × 1° grid. The resulting RIMs are validated for PPP applications using GNSS observations from another set of stations. The SF-PPP accuracy and convergence time obtained through the proposed RIMs are computed and compared with those obtained through the International GNSS Service global ionospheric maps (IGS-GIM). The results show that the RIMs speed up the convergence time and enhance the overall positioning accuracy in comparison with the IGS-GIM model, particularly the combined GPS/BeiDou-based model.
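
    The Kriging interpolation step can be sketched with a small ordinary-kriging solver. The station coordinates, VTEC values, and the exponential variogram parameters below are illustrative assumptions, not the paper's estimates:

    ```python
    import math

    def variogram(hdist, sill=4.0, rng=8.0, nugget=0.1):
        """Exponential variogram; gamma(0) = 0 so kriging interpolates exactly."""
        if hdist == 0.0:
            return 0.0
        return nugget + sill * (1.0 - math.exp(-hdist / rng))

    def solve(A, b):
        """Gaussian elimination with partial pivoting."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
        return x

    def ordinary_kriging(stations, values, target):
        n = len(stations)
        dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
        # kriging system: [gamma_ij 1; 1 0] [w; mu] = [gamma_i0; 1]
        A = [[variogram(dist(stations[i], stations[j])) for j in range(n)] + [1.0]
             for i in range(n)]
        A.append([1.0] * n + [0.0])
        b = [variogram(dist(s, target)) for s in stations] + [1.0]
        w = solve(A, b)[:n]
        return sum(wi * vi for wi, vi in zip(w, values))

    stations = [(10.0, 45.0), (12.0, 47.0), (15.0, 44.0), (11.0, 50.0)]  # lon, lat
    vtec = [18.2, 17.5, 19.1, 16.8]  # TECU (assumed)
    print(f"VTEC at grid node: {ordinary_kriging(stations, vtec, (12.0, 46.0)):.2f} TECU")
    ```

    The unit-sum constraint on the weights makes the estimator unbiased, and with a zero variogram at zero distance it reproduces each station value exactly at the station itself.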

  17. An accurate higher order displacement model with shear and normal deformations effects for functionally graded plates

    International Nuclear Information System (INIS)

    Jha, D.K.; Kant, Tarun; Srinivas, K.; Singh, R.K.

    2013-01-01

    Highlights: • We model the through-thickness variation of material properties in functionally graded (FG) plates. • The effect of the material grading index on deformations, stresses and natural frequency of FG plates is studied. • The effect of higher order terms in displacement models is studied for plate statics. • Benchmark solutions for the static analysis and free vibration of thick FG plates are presented. -- Abstract: Functionally graded materials (FGMs) are potential candidates under consideration for designing the first wall of fusion reactors, with a view to making the best use of the potential properties of available materials under severe thermo-mechanical loading conditions. A higher order shear and normal deformations plate theory is employed for stress and free vibration analyses of functionally graded (FG) elastic, rectangular, and simply (diaphragm) supported plates. Although FGMs are highly heterogeneous in nature, they are generally idealized as continua with mechanical properties changing smoothly with respect to spatial coordinates. The material properties of FG plates are assumed here to vary through the thickness of the plate in a continuous manner. Young's moduli and material densities are considered to vary continuously in the thickness direction according to the volume fraction of constituents, which are mathematically modeled here as exponential and power law functions. The effects of the variation of material properties in terms of the material gradation index on deformations, stresses and natural frequency of FG plates are investigated. The accuracy of the present numerical solutions has been established with respect to exact three-dimensional (3D) elasticity solutions and other models' solutions available in the literature
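
    The power-law grading mentioned above is commonly written as E(z) = (E_c − E_m)(z/h + 1/2)^n + E_m for z in [−h/2, h/2]; a short sketch with assumed ceramic/metal values (not the paper's material data):

    ```python
    # Through-thickness power-law grading of Young's modulus in an FG plate.

    def youngs_modulus(z, h, E_m, E_c, n):
        """Power-law volume-fraction grading from metal (bottom, z = -h/2)
        to ceramic (top, z = +h/2)."""
        Vc = (z / h + 0.5) ** n  # ceramic volume fraction
        return (E_c - E_m) * Vc + E_m

    E_metal, E_ceramic = 70e9, 380e9  # e.g. Al and alumina [Pa] (assumed)
    h = 0.02                          # plate thickness [m] (assumed)

    for n in (0.5, 1.0, 5.0):
        mid = youngs_modulus(0.0, h, E_metal, E_ceramic, n)
        print(f"n = {n}: E(mid-plane) = {mid / 1e9:.1f} GPa")
    ```

    Increasing the gradation index n shifts the mid-plane stiffness toward the metal value, which is how the index controls the deformation and frequency results studied in the paper.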

  18. Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum

    Science.gov (United States)

    Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.

    2013-02-01

    Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that effort, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi, scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made by using a structured light scanner consisting of two machine vision cameras that are used for the determination of the geometry of the object, a high resolution camera for the recording of the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and laborious procedure which includes the collection of geometric data, the creation of the surface, noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the provision of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial and in-house software, made for the automation of various steps of the procedure, was used. The results derived from the above procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time-consuming, while the use of various software packages requires the services of a specialist.

  19. An accurate higher order displacement model with shear and normal deformations effects for functionally graded plates

    Energy Technology Data Exchange (ETDEWEB)

    Jha, D.K., E-mail: dkjha@barc.gov.in [Civil Engineering Division, Bhabha Atomic Research Centre, Trombay, Mumbai 400 085 (India); Kant, Tarun [Department of Civil Engineering, Indian Institute of Technology Bombay, Powai, Mumbai 400 076 (India); Srinivas, K. [Civil Engineering Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India); Singh, R.K. [Reactor Safety Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India)

    2013-12-15

    Highlights: • We model the through-thickness variation of material properties in functionally graded (FG) plates. • The effect of the material grading index on deformations, stresses and natural frequency of FG plates is studied. • The effect of higher order terms in displacement models is studied for plate statics. • Benchmark solutions for the static analysis and free vibration of thick FG plates are presented. -- Abstract: Functionally graded materials (FGMs) are potential candidates under consideration for designing the first wall of fusion reactors, with a view to making the best use of the potential properties of available materials under severe thermo-mechanical loading conditions. A higher order shear and normal deformations plate theory is employed for stress and free vibration analyses of functionally graded (FG) elastic, rectangular, and simply (diaphragm) supported plates. Although FGMs are highly heterogeneous in nature, they are generally idealized as continua with mechanical properties changing smoothly with respect to spatial coordinates. The material properties of FG plates are assumed here to vary through the thickness of the plate in a continuous manner. Young's moduli and material densities are considered to vary continuously in the thickness direction according to the volume fraction of constituents, which are mathematically modeled here as exponential and power law functions. The effects of the variation of material properties in terms of the material gradation index on deformations, stresses and natural frequency of FG plates are investigated. The accuracy of the present numerical solutions has been established with respect to exact three-dimensional (3D) elasticity solutions and other models' solutions available in the literature.

  20. Inference Under a Wright-Fisher Model Using an Accurate Beta Approximation

    DEFF Research Database (Denmark)

    Tataru, Paula; Bataillon, Thomas; Hobolth, Asger

    2015-01-01

    frequencies and the influence of evolutionary pressures, such as mutation and selection. Despite its simple mathematical formulation, exact results for the distribution of allele frequency (DAF) as a function of time are not available in closed analytic form. Existing approximations build......, the probability of being on the boundary can be positive, corresponding to the allele being either lost or fixed. Here, we introduce the beta with spikes, an extension of the beta approximation, which explicitly models the loss and fixation probabilities as two spikes at the boundaries. We show that the addition...
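
    The moment-matching idea behind the beta approximation (which the "beta with spikes" extension augments with point masses at the boundaries) can be sketched for the pure-drift case, where the exact mean and variance of the allele frequency under the Wright-Fisher model are known in closed form. This baseline sketch omits mutation, selection, and the boundary spikes:

    ```python
    # Moment-matched beta approximation to the Wright-Fisher distribution
    # of allele frequency under pure drift.

    def wf_moments(p0, N, t):
        """Exact mean and variance of allele frequency after t Wright-Fisher
        generations in a diploid population of size N (2N gene copies)."""
        var = p0 * (1 - p0) * (1 - (1 - 1 / (2 * N)) ** t)
        return p0, var

    def beta_params(mean, var):
        """Match a Beta(a, b) distribution to the given mean and variance."""
        k = mean * (1 - mean) / var - 1
        return mean * k, (1 - mean) * k

    p0, N, t = 0.3, 100, 50
    mean, var = wf_moments(p0, N, t)
    a, b = beta_params(mean, var)
    print(f"mean = {mean:.3f}, var = {var:.5f}, Beta(a = {a:.2f}, b = {b:.2f})")
    ```

    The matched beta reproduces the first two moments exactly; what it cannot represent is the positive probability of loss or fixation, which is precisely the gap the spikes at 0 and 1 are introduced to close.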

  1. Affine-response model of molecular solvation of ions: Accurate predictions of asymmetric charging free energies.

    Science.gov (United States)

    Bardhan, Jaydeep P; Jungwirth, Pavel; Makowski, Lee

    2012-09-28

    Two mechanisms have been proposed to drive asymmetric solvent response to a solute charge: a static potential contribution similar to the liquid-vapor potential, and a steric contribution associated with a water molecule's structure and charge distribution. In this work, we use free-energy perturbation molecular-dynamics calculations in explicit water to show that these mechanisms act in complementary regimes; the large static potential (∼44 kJ/mol/e) dominates asymmetric response for deeply buried charges, and the steric contribution dominates for charges near the solute-solvent interface. Therefore, both mechanisms must be included in order to fully account for asymmetric solvation in general. Our calculations suggest that the steric contribution leads to a remarkable deviation from the popular "linear response" model in which the reaction potential changes linearly as a function of charge. In fact, the potential varies in a piecewise-linear fashion, i.e., with different proportionality constants depending on the sign of the charge. This discrepancy is significant even when the charge is completely buried, and holds for solutes larger than single atoms. Together, these mechanisms suggest that implicit-solvent models can be improved using a combination of affine response (an offset due to the static potential) and piecewise-linear response (due to the steric contribution).
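
    The piecewise-linear response described above can be made concrete: the reaction potential has a constant static-potential offset plus a slope that differs with the sign of the charge, so the charging free energy (the integral of the potential over the charge) is asymmetric in ±q. Only the ~44 kJ/mol/e offset is taken from the text; the slope values are assumed purely for illustration:

    ```python
    # Piecewise-linear reaction-potential model: phi(q) = phi_static + b(q) * q,
    # with different slopes for q < 0 and q > 0.

    PHI_STATIC = 44.0             # static-potential offset [kJ/mol/e] (from text)
    B_NEG, B_POS = -520.0, -480.0 # response slopes [kJ/mol/e^2] (assumed)

    def reaction_potential(q):
        return PHI_STATIC + (B_NEG if q < 0 else B_POS) * q

    def charging_free_energy(q):
        """Integral of phi from 0 to q for the piecewise-linear model:
        phi_static * q + 0.5 * b * q^2."""
        b = B_NEG if q < 0 else B_POS
        return PHI_STATIC * q + 0.5 * b * q * q

    for q in (-1.0, 1.0):
        print(f"q = {q:+.1f} e  dG = {charging_free_energy(q):8.1f} kJ/mol")
    ```

    With these illustrative numbers the anion (q = −1) is solvated more favorably than the cation (q = +1), the kind of asymmetry a strictly linear-response model cannot produce.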

  2. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Science.gov (United States)

    Xie, Weihong; Yu, Yang

    2017-01-01

    Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures, which leads to shortened recovery time. The severely nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to meet the clinical requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in the underlying dynamics. We find that, in the normal state, the nonlinearity of the heart motion with slow time-variant changes dominates the beating process. When an arrhythmia occurs, the irregularity mode, i.e., fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employ the signal quality index to adaptively determine the switch transition probability in the framework of the IMM. We conduct comparative experiments to evaluate the proposed approach with four distinct datasets. The test results indicate that the newly proposed approach reduces prediction errors significantly. PMID:29124062
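
    A scalar toy version of the two-mode idea can be sketched with an IMM over two random-walk Kalman filters, one tuned for the slowly varying regular regime and one for fast irregular excursions, mixed via a Markov switching matrix. This is a generic IMM illustration with assumed parameters, not the paper's heart-motion model or its signal-quality-driven transition probabilities:

    ```python
    import math
    import random

    Q = [0.01, 1.0]            # process noise: "regular" vs "arrhythmic" mode
    R = 0.05                   # measurement noise variance
    P_SWITCH = [[0.97, 0.03],  # Markov mode-transition probabilities
                [0.10, 0.90]]

    def imm_step(x, P, mu, z):
        # 1) mix the per-mode estimates according to the switching probabilities
        c = [sum(P_SWITCH[i][j] * mu[i] for i in range(2)) for j in range(2)]
        x0, P0 = [], []
        for j in range(2):
            w = [P_SWITCH[i][j] * mu[i] / c[j] for i in range(2)]
            xm = sum(w[i] * x[i] for i in range(2))
            Pm = sum(w[i] * (P[i] + (x[i] - xm) ** 2) for i in range(2))
            x0.append(xm)
            P0.append(Pm)
        # 2) per-mode random-walk Kalman predict/update and innovation likelihood
        lik = []
        for j in range(2):
            xp, Pp = x0[j], P0[j] + Q[j]
            S = Pp + R
            K = Pp / S
            x[j] = xp + K * (z - xp)
            P[j] = (1 - K) * Pp
            lik.append(math.exp(-0.5 * (z - xp) ** 2 / S) / math.sqrt(2 * math.pi * S))
        # 3) update mode probabilities and fuse the estimates
        mu = [lik[j] * c[j] for j in range(2)]
        s = sum(mu)
        mu = [m / s for m in mu]
        xhat = sum(mu[j] * x[j] for j in range(2))
        return x, P, mu, xhat

    random.seed(2)
    x, P, mu = [0.0, 0.0], [1.0, 1.0], [0.5, 0.5]
    for k in range(200):
        # regular quasi-periodic motion with an "arrhythmic" burst at k = 100..129
        truth = math.sin(0.1 * k) + (random.gauss(0.0, 1.0) if 100 <= k < 130 else 0.0)
        z = truth + random.gauss(0.0, math.sqrt(R))
        x, P, mu, xhat = imm_step(x, P, mu, z)
    print("final mode probabilities:", [round(m, 3) for m in mu])
    ```

    During the burst the innovation likelihood favors the high-noise mode, and after it ends the mode probability drifts back to the regular filter, which is the adaptive "switch" behavior the abstract describes.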

  3. Beating Heart Motion Accurate Prediction Method Based on Interactive Multiple Model: An Information Fusion Approach

    Directory of Open Access Journals (Sweden)

    Fan Liang

    2017-01-01

    Full Text Available Robot-assisted motion compensated beating heart surgery has the advantage over the conventional Coronary Artery Bypass Graft (CABG) in terms of reduced trauma to the surrounding structures, which leads to shortened recovery time. The severely nonlinear and diverse nature of irregular heart rhythm causes enormous difficulty for the robot to meet the clinical requirements, especially under arrhythmias. In this paper, we propose a fusion prediction framework based on an Interactive Multiple Model (IMM) estimator, allowing each model to cover a distinguishing feature of the heart motion in the underlying dynamics. We find that, in the normal state, the nonlinearity of the heart motion with slow time-variant changes dominates the beating process. When an arrhythmia occurs, the irregularity mode, i.e., fast uncertainties with random patterns, becomes the leading factor of the heart motion. We deal with the prediction problem in the case of arrhythmias by estimating the state with two behavior modes which can adaptively “switch” from one to the other. Also, we employ the signal quality index to adaptively determine the switch transition probability in the framework of the IMM. We conduct comparative experiments to evaluate the proposed approach with four distinct datasets. The test results indicate that the newly proposed approach reduces prediction errors significantly.

  4. Multiconjugate adaptive optics applied to an anatomically accurate human eye model.

    Science.gov (United States)

    Bedggood, P A; Ashman, R; Smith, G; Metha, A B

    2006-09-04

    Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics.

  5. Affine-response model of molecular solvation of ions: Accurate predictions of asymmetric charging free energies

    Science.gov (United States)

    Bardhan, Jaydeep P.; Jungwirth, Pavel; Makowski, Lee

    2012-01-01

    Two mechanisms have been proposed to drive asymmetric solvent response to a solute charge: a static potential contribution similar to the liquid-vapor potential, and a steric contribution associated with a water molecule's structure and charge distribution. In this work, we use free-energy perturbation molecular-dynamics calculations in explicit water to show that these mechanisms act in complementary regimes; the large static potential (∼44 kJ/mol/e) dominates asymmetric response for deeply buried charges, and the steric contribution dominates for charges near the solute-solvent interface. Therefore, both mechanisms must be included in order to fully account for asymmetric solvation in general. Our calculations suggest that the steric contribution leads to a remarkable deviation from the popular “linear response” model in which the reaction potential changes linearly as a function of charge. In fact, the potential varies in a piecewise-linear fashion, i.e., with different proportionality constants depending on the sign of the charge. This discrepancy is significant even when the charge is completely buried, and holds for solutes larger than single atoms. Together, these mechanisms suggest that implicit-solvent models can be improved using a combination of affine response (an offset due to the static potential) and piecewise-linear response (due to the steric contribution). PMID:23020318
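
    The affine-plus-piecewise-linear response described above can be sketched numerically. All coefficients below are invented placeholders chosen only to illustrate the asymmetry, not fitted values from the paper.

    ```python
    # Piecewise-linear (affine) reaction-potential model: a charge-independent
    # static offset plus a slope that depends on the sign of the solute charge q,
    # reflecting the asymmetric solvent response discussed in the abstract.

    def reaction_potential(q, phi_static=-0.45, slope_neg=-1.2, slope_pos=-0.9):
        """phi(q) = phi_static + slope * q, with sign-dependent slope."""
        slope = slope_neg if q < 0 else slope_pos
        return phi_static + slope * q

    # A pure linear-response model would make the charging response symmetric,
    # phi(-q) - phi(0) == -(phi(q) - phi(0)); the asymmetric model breaks that:
    dphi_pos = reaction_potential(1.0) - reaction_potential(0.0)
    dphi_neg = reaction_potential(-1.0) - reaction_potential(0.0)
    ```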

  6. Multiconjugate adaptive optics applied to an anatomically accurate human eye model

    Science.gov (United States)

    Bedggood, P. A.; Ashman, R.; Smith, G.; Metha, A. B.

    2006-09-01

    Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics.

  7. Effects of early afterdepolarizations on excitation patterns in an accurate model of the human ventricles

    Science.gov (United States)

    Seemann, Gunnar; Panfilov, Alexander V.; Vandersickel, Nele

    2017-01-01

    Early Afterdepolarizations, EADs, are defined as the reversal of the action potential before completion of the repolarization phase, which can result in ectopic beats. However, the series of mechanisms of EADs leading to these ectopic beats and related cardiac arrhythmias are not well understood. Therefore, we aimed to investigate the influence of this single cell behavior on the whole heart level. For this study we used a modified version of the Ten Tusscher-Panfilov model of human ventricular cells (TP06) which we implemented in a 3D ventricle model including realistic fiber orientations. To increase the likelihood of EAD formation at the single cell level, we reduced the repolarization reserve (RR) by reducing the rapid delayed rectifier Potassium current and raising the L-type Calcium current. Varying these parameters defined a 2D parametric space where different excitation patterns could be classified. Depending on the initial conditions, by either exciting the ventricles with a spiral formation or burst pacing protocol, we found multiple different spatio-temporal excitation patterns. The spiral formation protocol resulted in the categorization of a stable spiral (S), a meandering spiral (MS), a spiral break-up regime (SB), spiral fibrillation type B (B), spiral fibrillation type A (A) and an oscillatory excitation type (O). The last three patterns are a 3D generalization of previously found patterns in 2D. First, the spiral fibrillation type B showed waves determined by a chaotic bi-excitable regime, i.e. mediated by both Sodium and Calcium waves at the same time and in same tissue settings. In the parameter region governed by the B pattern, single cells were able to repolarize completely and different (spiral) waves chaotically burst into each other without finishing a 360 degree rotation. Second, spiral fibrillation type A patterns consisted of multiple small rotating spirals. Single cells failed to repolarize to the resting membrane potential hence

  8. Disturbance observer based model predictive control for accurate atmospheric entry of spacecraft

    Science.gov (United States)

    Wu, Chao; Yang, Jun; Li, Shihua; Li, Qi; Guo, Lei

    2018-05-01

    Facing the complex aerodynamic environment of Mars atmosphere, a composite atmospheric entry trajectory tracking strategy is investigated in this paper. External disturbances, initial states uncertainties and aerodynamic parameters uncertainties are the main problems. The composite strategy is designed to solve these problems and improve the accuracy of Mars atmospheric entry. This strategy includes a model predictive control for optimized trajectory tracking performance, as well as a disturbance observer based feedforward compensation for external disturbances and uncertainties attenuation. 500-run Monte Carlo simulations show that the proposed composite control scheme achieves more precise Mars atmospheric entry (3.8 km parachute deployment point distribution error) than the baseline control scheme (8.4 km) and integral control scheme (5.8 km).
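
    The pairing of feedback control with disturbance-observer feedforward compensation can be shown on a toy problem. This scalar sketch is far simpler than the paper's MPC-based entry guidance and uses made-up gains; it only illustrates how a residual-driven observer estimate is fed forward to cancel a disturbance.

    ```python
    # Toy plant x+ = x + dt*(u + d) with unknown constant disturbance d.
    # The observer compares the true state with a model prediction (which
    # omits the unknown part of d) and integrates the residual into d_hat;
    # the controller then applies u = feedback - d_hat (feedforward cancellation).

    def simulate(steps=400, dt=0.01, d=0.8, L=5.0):
        x, d_hat = 0.0, 0.0
        for _ in range(steps):
            u = -2.0 * x - d_hat            # feedback + feedforward compensation
            x_pred = x + dt * (u + d_hat)   # model prediction using the estimate
            x = x + dt * (u + d)            # true plant with disturbance d
            d_hat += L * (x - x_pred)       # residual-driven observer update
        return x, d_hat

    x_final, d_hat_final = simulate()
    ```

    With these gains the estimation error contracts by the factor (1 - L*dt) per step, so d_hat converges to d and the tracking error is driven to zero.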

  9. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    Energy Technology Data Exchange (ETDEWEB)

    Andrade-Ines, Eduardo [Institut de Mécanique Céleste et de Calcul des Éphémérides—Observatoire de Paris, 77 Avenue Denfert Rochereau, F-75014 Paris (France); Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, 91109 Pasadena, CA (United States)

    2017-04-01

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet’s orbit under the gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system’s parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer’s solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of their applicability are given.
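
    The first-order quantities that the paper's polynomial factors correct are, to my understanding, the textbook Heppenheimer expressions sketched below. The example masses and separations are invented; treat this as an uncorrected baseline, not the paper's improved model.

    ```python
    # First-order secular estimates for a planet at semi-major axis a perturbed
    # by an external companion (a_p, e_p, mass m_p) around a central mass m_c.
    # Units: AU, years, solar masses, so G = 4*pi^2.
    import math

    G = 4.0 * math.pi ** 2

    def forced_eccentricity(a, a_p, e_p):
        """Heppenheimer first-order forced eccentricity (coplanar restricted case)."""
        return 1.25 * (a / a_p) * e_p / (1.0 - e_p ** 2)

    def secular_frequency(a, a_p, e_p, m_p, m_c):
        """First-order secular precession frequency g [rad/yr]."""
        n = math.sqrt(G * m_c / a ** 3)  # planet mean motion
        return 0.75 * (m_p / m_c) * (a / a_p) ** 3 * n / (1.0 - e_p ** 2) ** 1.5

    # Planet at 1 AU around a 1 M_sun star with a 0.5 M_sun companion
    # at 20 AU and e_p = 0.3 (hypothetical numbers):
    eF = forced_eccentricity(1.0, 20.0, 0.3)
    g = secular_frequency(1.0, 20.0, 0.3, 0.5, 1.0)
    ```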

  10. A more accurate modeling of the effects of actuators in large space structures

    Science.gov (United States)

    Hablani, H. B.

    1981-01-01

    The paper deals with finite actuators. A nonspinning three-axis stabilized space vehicle having a two-dimensional large structure and a rigid body at the center is chosen for analysis. The torquers acting on the vehicle are modeled as antisymmetric forces distributed in a small but finite area. In the limit they represent point torquers which also are treated as a special case of surface distribution of dipoles. Ordinary and partial differential equations governing the forced vibrations of the vehicle are derived by using Hamilton's principle. Associated modal inputs are obtained for both the distributed moments and the distributed forces. It is shown that the finite torquers excite the higher modes less than the point torquers. Modal cost analysis proves to be a suitable methodology to this end.

  11. Constant size descriptors for accurate machine learning models of molecular properties

    Science.gov (United States)

    Collins, Christopher R.; Gordon, Geoffrey J.; von Lilienfeld, O. Anatole; Yaron, David J.

    2018-06-01

    Two different classes of molecular representations for use in machine learning of thermodynamic and electronic properties are studied. The representations are evaluated by monitoring the performance of linear and kernel ridge regression models on well-studied data sets of small organic molecules. One class of representations studied here counts the occurrence of bonding patterns in the molecule. These require only the connectivity of atoms in the molecule as may be obtained from a line diagram or a SMILES string. The second class utilizes the three-dimensional structure of the molecule. These include the Coulomb matrix and Bag of Bonds, which list the inter-atomic distances present in the molecule, and Encoded Bonds, which encode such lists into a feature vector whose length is independent of molecular size. Encoded Bonds' features introduced here have the advantage of leading to models that may be trained on smaller molecules and then used successfully on larger molecules. A wide range of feature sets are constructed by selecting, at each rank, either a graph or geometry-based feature. Here, rank refers to the number of atoms involved in the feature, e.g., atom counts are rank 1, while Encoded Bonds are rank 2. For atomization energies in the QM7 data set, the best graph-based feature set gives a mean absolute error of 3.4 kcal/mol. Inclusion of 3D geometry substantially enhances the performance, with Encoded Bonds giving 2.4 kcal/mol, when used alone, and 1.19 kcal/mol, when combined with graph features.
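
    The Coulomb matrix mentioned above is a standard 3D descriptor; a minimal sketch of its usual construction follows (the H2 geometry is only an illustrative input, and this omits the sorting/padding used to make the descriptor size-consistent across molecules).

    ```python
    # Coulomb matrix descriptor: diagonal M_ii = 0.5 * Z_i**2.4 (a fit to
    # atomic energies), off-diagonal M_ij = Z_i * Z_j / |R_i - R_j|
    # (nuclear repulsion). Coordinates are taken in atomic units (bohr).
    import math

    def coulomb_matrix(charges, coords):
        n = len(charges)
        M = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    M[i][j] = 0.5 * charges[i] ** 2.4
                else:
                    r = math.dist(coords[i], coords[j])
                    M[i][j] = charges[i] * charges[j] / r
        return M

    # H2 with the nuclei ~1.4 bohr apart:
    M = coulomb_matrix([1, 1], [(0.0, 0.0, 0.0), (1.4, 0.0, 0.0)])
    ```

    Bag of Bonds then collects these off-diagonal entries into per-bond-type lists, and Encoded Bonds compresses such lists into a fixed-length feature vector.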

  12. A new, accurate, global hydrography data for remote sensing and modelling of river hydrodynamics

    Science.gov (United States)

    Yamazaki, D.

    2017-12-01

    High-resolution hydrography data are an important baseline for remote sensing and modelling of river hydrodynamics, given that the spatial scale of river networks is much smaller than that of land hydrology or atmosphere/ocean circulations. For about 10 years, HydroSHEDS, developed from the SRTM3 DEM, has been the only available global-scale hydrography data. However, the data available at the time of HydroSHEDS development limited the quality of the represented river networks. Here, we developed a new global hydrography data set using the latest geodata, such as the multi-error-removed elevation data (MERIT DEM), Landsat-based global water body data (GSWO & G3WBM), and the crowd-sourced open geography database (OpenStreetMap). The new hydrography data covers the entire globe (including boreal regions above 60N), represents more detailed structure of the world's river networks, and contains consistent supplementary data layers such as hydrologically adjusted elevations and river channel width. In the AGU meeting, the development methodology, assessed quality, and potential applications of the new global hydrography data will be introduced.
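
    River networks like those described above are conventionally derived from a DEM by a flow-direction step. The following D8 sketch is a generic illustration with a made-up 3x3 DEM, not the authors' processing chain (which handles pits, flats, and error removal far more carefully).

    ```python
    # D8 flow direction: each cell drains toward the steepest-descent neighbor
    # among its 8 adjacent cells (diagonal drops divided by sqrt(2) distance).
    import math

    def d8_flow_direction(dem):
        """Return, per cell, the (di, dj) offset of its steepest downslope
        neighbor, or None for pits/flats (resolved separately in real pipelines)."""
        rows, cols = len(dem), len(dem[0])
        out = [[None] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                best, best_drop = None, 0.0
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ni, nj = i + di, j + dj
                        if (di, dj) == (0, 0) or not (0 <= ni < rows and 0 <= nj < cols):
                            continue
                        drop = (dem[i][j] - dem[ni][nj]) / math.hypot(di, dj)
                        if drop > best_drop:
                            best, best_drop = (di, dj), drop
                out[i][j] = best
        return out

    # A tilted plane drains consistently toward its lowest corner:
    dem = [[3, 2, 1],
           [2, 1, 0],
           [1, 0, -1]]
    flow = d8_flow_direction(dem)
    ```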

  13. The human skin/chick chorioallantoic membrane model accurately predicts the potency of cosmetic allergens.

    Science.gov (United States)

    Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S

    2009-04-01

    The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin.

  14. Model organoids provide new research opportunities for ductal pancreatic cancer

    NARCIS (Netherlands)

    Boj, Sylvia F; Hwang, Chang-Il; Baker, Lindsey A; Engle, Dannielle D; Tuveson, David A; Clevers, Hans

    We recently established organoid models from normal and neoplastic murine and human pancreas tissues. These organoids exhibit ductal- and disease stage-specific characteristics and, after orthotopic transplantation, recapitulate the full spectrum of tumor progression. Pancreatic organoid technology

  15. Statistical and RBF NN models : providing forecasts and risk assessment

    OpenAIRE

    Marček, Milan

    2009-01-01

    Forecast accuracy of economic and financial processes is a popular measure for quantifying the risk in decision making. In this paper, we develop forecasting models based on statistical (stochastic) methods, sometimes called hard computing, and on a soft method using granular computing. We consider the accuracy of forecasting models as a measure for risk evaluation. It is found that the risk estimation process based on soft methods is simplified and less critical to the question w...

  16. Non-isothermal kinetics model to predict accurate phase transformation and hardness of 22MnB5 boron steel

    Energy Technology Data Exchange (ETDEWEB)

    Bok, H.-H.; Kim, S.N.; Suh, D.W. [Graduate Institute of Ferrous Technology, POSTECH, San 31, Hyoja-dong, Nam-gu, Pohang, Gyeongsangbuk-do (Korea, Republic of); Barlat, F., E-mail: f.barlat@postech.ac.kr [Graduate Institute of Ferrous Technology, POSTECH, San 31, Hyoja-dong, Nam-gu, Pohang, Gyeongsangbuk-do (Korea, Republic of); Lee, M.-G., E-mail: myounglee@korea.ac.kr [Department of Materials Science and Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul (Korea, Republic of)

    2015-02-25

    A non-isothermal phase transformation kinetics model obtained by modifying the well-known JMAK approach is proposed for application to a low carbon boron steel (22MnB5) sheet. In the modified kinetics model, the parameters are functions of both temperature and cooling rate, and can be identified by a numerical optimization method. Moreover, in this approach the transformation start and finish temperatures are variable instead of the constants that depend on chemical composition. These variable reference temperatures are determined from the measured CCT diagram using dilatation experiments. The kinetics model developed in this work captures the complex transformation behavior of the boron steel sheet sample accurately. In particular, the predicted hardness and phase fractions in the specimens subjected to a wide range of cooling rates were validated by experiments.
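
    The baseline the paper modifies is the classical isothermal JMAK (Avrami) law. The sketch below uses plain constants for the rate constant and exponent; in the paper's non-isothermal extension these parameters become functions of temperature and cooling rate, which is not reproduced here.

    ```python
    # Isothermal JMAK kinetics: transformed phase fraction
    # X(t) = 1 - exp(-k * t**n), with rate constant k and Avrami exponent n.
    import math

    def jmak_fraction(t, k, n):
        """Transformed phase fraction at time t."""
        return 1.0 - math.exp(-k * t ** n)

    # With k = 0.01 and n = 3, the transformation reaches 50% at
    # t_half = (ln 2 / k)**(1/n):
    t_half = (math.log(2.0) / 0.01) ** (1.0 / 3.0)
    ```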

  17. GENERATING ACCURATE 3D MODELS OF ARCHITECTURAL HERITAGE STRUCTURES USING LOW-COST CAMERA AND OPEN SOURCE ALGORITHMS

    Directory of Open Access Journals (Sweden)

    M. Zacharek

    2017-05-01

    Full Text Available These studies have been conducted using a non-metric digital camera and dense image matching algorithms, as non-contact methods of creating monument documentation. In order to process the imagery, several open-source programs and algorithms for generating a dense point cloud from images were executed. In the research, the OSM Bundler and VisualSFM software and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
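
    The scaling step mentioned above is needed because image-based reconstructions are only defined up to scale; one measured reference distance fixes it. A minimal sketch with invented points and an assumed 1.5 m marker spacing:

    ```python
    # Scale a reconstructed point cloud so that the distance between two model
    # points matches a distance measured on the real structure.
    import math

    def scale_point_cloud(points, p_a, p_b, true_distance):
        """Uniformly scale points so dist(points[p_a], points[p_b]) == true_distance."""
        model_distance = math.dist(points[p_a], points[p_b])
        s = true_distance / model_distance
        return [(s * x, s * y, s * z) for x, y, z in points]

    cloud = [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (0.1, 0.3, 0.0)]
    scaled = scale_point_cloud(cloud, 0, 1, true_distance=1.5)  # 1.5 m between markers
    ```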

  18. Antenna modeling considerations for accurate SAR calculations in human phantoms in close proximity to GSM cellular base station antennas.

    Science.gov (United States)

    van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C

    2005-09-01

    International bodies such as International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute for Electrical and Electronic Engineering (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna, to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. (c) 2005 Wiley-Liss, Inc.

  19. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling...... between horizontal sliding and rocking is discussed....

  20. Combining structural modeling with ensemble machine learning to accurately predict protein fold stability and binding affinity effects upon mutation.

    Directory of Open Access Journals (Sweden)

    Niklas Berliner

    Full Text Available Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing protein instability, and help us better understand the molecular causes of diseases.

  1. Fast and Accurate Hybrid Stream PCRTMSOLAR Radiative Transfer Model for Reflected Solar Spectrum Simulation in the Cloudy Atmosphere

    Science.gov (United States)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.

    2016-01-01

    A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5×10⁻⁴ mW/(cm² sr cm⁻¹). The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.

  2. Centralised gaming models: providing optimal gambling behaviour controls

    OpenAIRE

    Griffiths, MD; Wood, RTA

    2009-01-01

    The expansion in the gaming industry and its widening attraction points to the need for ever more verifiable means of controlling problem gambling. Various strategies have been built into casino venue operations to address this, but recently, following a new focus on social responsibility, a group of experts considered the possibilities of a centralised gaming model as a more effective control mechanism for dealing with gambling behaviours.

  3. A Flexible Collaborative Innovation Model for SOA Services Providers

    OpenAIRE

    Santanna-Filho , João ,; Rabelo , Ricardo ,; Pereira-Klen , Alexandra ,

    2015-01-01

    Part 5: Innovation Networks; International audience; The software sector plays a very relevant role in the current world economy. One of its characteristics is that it is mostly composed of SMEs. SMEs have been pushed to invest in innovation to keep competitive. Service Oriented Architecture (SOA) is a recent and powerful ICT paradigm for more sustainable business models. A SOA product has many differences when compared to the manufacturing sector. Besides that, SOA projects are however very complex, ...

  4. Accurate prediction of the dew points of acidic combustion gases by using an artificial neural network model

    International Nuclear Information System (INIS)

    ZareNezhad, Bahman; Aminian, Ali

    2011-01-01

    This paper presents a new approach based on an artificial neural network (ANN) model for predicting the acid dew points of combustion gases in process and power plants. The most important acidic combustion gases, namely SO₃, SO₂, NO₂, HCl and HBr, are considered in this investigation. The proposed network is trained using the Levenberg-Marquardt back propagation algorithm, and the hyperbolic tangent sigmoid activation function is applied to calculate the output values of the neurons of the hidden layer. According to the network's training, validation and testing results, a three-layer neural network with nine neurons in the hidden layer is selected as the best architecture for accurate prediction of the acidic combustion gases' dew points over wide ranges of acid and moisture concentrations. The proposed neural network model can have significant application in predicting the condensation temperatures of different acid gases to mitigate corrosion problems in stacks, pollution control devices and energy recovery systems.
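
    The architecture described above (a tanh hidden layer of nine neurons feeding a linear output) can be sketched as a forward pass. The weights below are random placeholders and the two inputs are assumed for illustration; the paper fits the weights with Levenberg-Marquardt training, which is not reproduced.

    ```python
    # Forward pass of a three-layer MLP: inputs -> 9 tanh (hyperbolic tangent
    # sigmoid) hidden units -> single linear output (predicted dew point).
    import math, random

    def mlp_forward(x, W1, b1, W2, b2):
        h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return sum(w * hi for w, hi in zip(W2, h)) + b2

    random.seed(0)
    n_in, n_hid = 2, 9  # hypothetical inputs: acid and moisture concentration
    W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
    b1 = [random.uniform(-1, 1) for _ in range(n_hid)]
    W2 = [random.uniform(-1, 1) for _ in range(n_hid)]
    b2 = random.uniform(-1, 1)
    y = mlp_forward([0.3, 0.5], W1, b1, W2, b2)
    ```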

  5. Evaluating a nurse-led model for providing neonatal care.

    Science.gov (United States)

    2004-07-01

    The paper presents an overview of a multi-dimensional, prospective, comparative 5-year audit of the quality of the neonatal care provided by a maternity unit in the UK delivering 2000 babies a year, where all neonatal care after 1995 was provided by advanced neonatal nurse practitioners, in relation to that provided by a range of other medically staffed comparator units. The audit includes 11 separate comparative studies supervised by a panel of independent external advisors. Data on intrapartum and neonatal mortality is reported. A review of resuscitation at birth, and a two-tier confidential inquiry into sentinel events in six units were carried out. The reliability of the routine predischarge neonatal examination was studied and, in particular, the recognition of congenital heart disease. A review of the quality of postdischarge letters was undertaken alongside an interview survey to elicit parental views on care provision. An audit of all hospital readmissions within 28 days of birth is reported. Other areas of study include management of staff stress, perceived adequacy of the training of nurse practitioners coming into post, and an assessment of unit costs. Intrapartum and neonatal death among women with a singleton pregnancy originally booked for delivery in Ashington fell 39% between 1991-1995 and 1996-2000 (5.12 vs. 3.11 deaths per 1000 births); the decline for the whole region was 27% (4.10 vs. 2.99). By all other indicators the quality of care in the nurse-managed unit was as good as, or better than, that in the medically staffed comparator units. An appropriately trained, stable team with a store of experience can deliver cot-side care of a higher quality than staff rostered to this task for a few months to gain experience, and this is probably more important than their medical or nursing background. Factors limiting the on-site availability of medical staff with paediatric expertise do not need to dictate the future disposition of maternity services.

  6. Do Cochrane reviews provide a good model for social science?

    DEFF Research Database (Denmark)

    Konnerup, Merete; Kongsted, Hans Christian

    2012-01-01

    Formalised research synthesis to underpin evidence-based policy and practice has become increasingly important in areas of public policy. In this paper we discuss whether the Cochrane standard for systematic reviews of healthcare interventions is appropriate for social research. We examine...... the formal criteria of the Cochrane Collaboration for including particular study designs and search the Cochrane Library to provide quantitative evidence on the de facto standard of actual Cochrane reviews. By identifying the sample of Cochrane reviews that consider observational designs, we are able...... to conclude that the majority of reviews appears limited to considering randomised controlled trials only. Because recent studies have delineated conditions for observational studies in social research to produce valid evidence, we argue that an inclusive approach is essential for truly evidence-based policy...

  7. A new model of dispersion for metals leading to a more accurate modeling of plasmonic structures using the FDTD method

    Energy Technology Data Exchange (ETDEWEB)

    Vial, A.; Dridi, M.; Cunff, L. le [Universite de Technologie de Troyes, Institut Charles Delaunay, CNRS UMR 6279, Laboratoire de Nanotechnologie et d' Instrumentation Optique, 12, rue Marie Curie, BP-2060, Troyes Cedex (France); Laroche, T. [Universite de Franche-Comte, Institut FEMTO-ST, CNRS UMR 6174, Departement de Physique et de Metrologie des Oscillateurs, Besancon Cedex (France)

    2011-06-15

    We present FDTD simulations results obtained using the Drude critical points model. This model enables spectroscopic studies of metallic structures over wider wavelength ranges than usually used, and it facilitates the study of structures made of several metals. (orig.)

  8. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    Science.gov (United States)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes the solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
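
    The paper implements a fifth-order WENO / third-order Runge-Kutta solver; the sketch below is only the first-order upwind counterpart of the same front-propagation equation, phi_t + R |grad phi| = 0, with a uniform, invented rate of spread R, to show the mechanics.

    ```python
    # First-order Godunov upwind step for an outward-propagating level-set
    # front (phi < 0 inside the burned area, R >= 0 is the rate of spread).
    import numpy as np

    def levelset_step(phi, R, dx, dt):
        """One forward-Euler upwind step on a periodic grid."""
        dmx = (phi - np.roll(phi, 1, 0)) / dx   # backward difference in x
        dpx = (np.roll(phi, -1, 0) - phi) / dx  # forward difference in x
        dmy = (phi - np.roll(phi, 1, 1)) / dx
        dpy = (np.roll(phi, -1, 1) - phi) / dx
        # Godunov upwind gradient magnitude for outward motion
        grad = np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2 +
                       np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
        return phi - dt * R * grad

    # Circular fire front; the burned area grows as the front propagates.
    n, dx = 64, 1.0
    x, y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
    phi = np.sqrt((x - 32)**2 + (y - 32)**2) - 8.0  # signed distance to a circle
    area0 = np.sum(phi < 0)
    for _ in range(10):
        phi = levelset_step(phi, R=1.0, dx=dx, dt=0.5)
    area1 = np.sum(phi < 0)
    ```

    The paper's point is that this low-order scheme smears fire-front gradients and biases the rate of spread, which the WENO discretization largely removes.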

  9. Accurate analytic model potentials for D2 and H2 based on the perturbed-Morse--oscillator model

    International Nuclear Information System (INIS)

    Huffaker, J.N.; Cohen, D.I.

    1986-01-01

    Model potentials with as few as 19 free parameters are fitted to published ab initio adiabatic potentials for D₂ and H₂, with accuracy such that rovibrational eigenvalues are in error by only about 10⁻² cm⁻¹. A three-parameter model is suggested for describing nonadiabatic effects on eigenvalues, with the intention that such a model might be suitable for all hydrides. Dunham coefficients are calculated from the perturbed-Morse-oscillator series expansion of the model, permitting a critical evaluation of the convergence properties of both the Dunham series and the WKB series.
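
    The zeroth-order piece of the perturbed-Morse-oscillator model is the plain Morse potential, sketched below. The parameter values are rough illustrative numbers in atomic units, not the fitted H₂/D₂ parameters; the PMO model adds perturbation terms in the Morse variable y = 1 - exp(-a*(r - re)).

    ```python
    # Morse oscillator: V(r) = D * (1 - exp(-a*(r - re)))**2, with well depth D,
    # range parameter a, and equilibrium bond length re.
    import math

    def morse(r, D, a, re):
        y = 1.0 - math.exp(-a * (r - re))
        return D * y * y

    D, a, re = 0.17, 1.0, 1.4  # rough H2-like values (atomic units), for illustration
    v_min = morse(re, D, a, re)   # zero at the equilibrium separation
    v_inf = morse(50.0, D, a, re) # approaches the dissociation limit D
    ```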

  10. The CPA Equation of State and an Activity Coefficient Model for Accurate Molar Enthalpy Calculations of Mixtures with Carbon Dioxide and Water/Brine

    Energy Technology Data Exchange (ETDEWEB)

    Myint, P. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hao, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Firoozabadi, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-03-27

    Thermodynamic property calculations of mixtures containing carbon dioxide (CO2) and water, including brines, are essential in theoretical models of many natural and industrial processes. The properties of greatest practical interest are density, solubility, and enthalpy. Many models for density and solubility calculations have been presented in the literature, but there exists only one study, by Spycher and Pruess, that has compared theoretical molar enthalpy predictions with experimental data [1]. In this report, we recommend two different models for enthalpy calculations: the CPA equation of state by Li and Firoozabadi [2], and the CO2 activity coefficient model by Duan and Sun [3]. We show that the CPA equation of state, which has been demonstrated to provide good agreement with density and solubility data, also accurately calculates molar enthalpies of pure CO2, pure water, and both CO2-rich and aqueous (H2O-rich) mixtures of the two species. It is applicable to a wider range of conditions than the Spycher and Pruess model. In aqueous sodium chloride (NaCl) mixtures, we show that Duan and Sun’s model yields accurate results for the partial molar enthalpy of CO2. It can be combined with another model for the brine enthalpy to calculate the molar enthalpy of H2O-CO2-NaCl mixtures. We conclude by explaining how the CPA equation of state may be modified to further improve agreement with experiments. This generalized CPA is the basis of our future work on this topic.

  11. Accurate Mapping of Multilevel Rydberg Atoms on Interacting Spin-1/2 Particles for the Quantum Simulation of Ising Models

    Science.gov (United States)

    de Léséleuc, Sylvain; Weber, Sebastian; Lienhard, Vincent; Barredo, Daniel; Büchler, Hans Peter; Lahaye, Thierry; Browaeys, Antoine

    2018-03-01

    We study a system of atoms that are laser driven to nD3/2 Rydberg states and assess how accurately they can be mapped onto spin-1/2 particles for the quantum simulation of anisotropic Ising magnets. Using nonperturbative calculations of the pair potentials between two atoms in the presence of electric and magnetic fields, we emphasize the importance of a careful selection of experimental parameters in order to maintain the Rydberg blockade and avoid excitation of unwanted Rydberg states. We benchmark these theoretical observations against experiments using two atoms. Finally, we show that in these conditions, the experimental dynamics observed after a quench is in good agreement with numerical simulations of spin-1/2 Ising models in systems with up to 49 spins, for which numerical simulations become intractable.

  12. Mathematical modeling provides kinetic details of the human immune response to vaccination

    Directory of Open Access Journals (Sweden)

    Dustin Le

    2015-01-01

    With major advances in experimental techniques to track antigen-specific immune responses, many basic questions on the kinetics of virus-specific immunity in humans remain unanswered. To gain insights into the kinetics of T and B cell responses in human volunteers, we combine mathematical models and experimental data from recent studies employing vaccines against yellow fever and smallpox. The yellow fever virus-specific CD8 T cell population expanded slowly, with an average doubling time of 2 days, peaking 2.5 weeks post immunization. Interestingly, we found that the peak of the yellow fever-specific CD8 T cell response is determined by the rate of T cell proliferation and not by the precursor frequency of antigen-specific cells, as has been suggested in several studies in mice. We also found that while the frequency of virus-specific T cells increases slowly, the slow increase can still accurately explain clearance of yellow fever virus in the blood. Our additional mathematical model describes well the kinetics of the virus-specific antibody-secreting cell and antibody response to vaccinia virus in vaccinated individuals, suggesting that most antibodies at 3 months post immunization are derived from the population of circulating antibody-secreting cells. Taken together, our analysis provides novel insights into the mechanisms by which live vaccines induce immunity to viral infections and highlights challenges of applying methods of mathematical modeling to the current, state-of-the-art yet limited immunological data.

  13. Mathematical modeling provides kinetic details of the human immune response to vaccination.

    Science.gov (United States)

    Le, Dustin; Miller, Joseph D; Ganusov, Vitaly V

    2014-01-01

    With major advances in experimental techniques to track antigen-specific immune responses, many basic questions on the kinetics of virus-specific immunity in humans remain unanswered. To gain insights into the kinetics of T and B cell responses in human volunteers, we combined mathematical models and experimental data from recent studies employing vaccines against yellow fever and smallpox. The yellow fever virus-specific CD8 T cell population expanded slowly, with an average doubling time of 2 days, peaking 2.5 weeks post immunization. Interestingly, we found that the peak of the yellow fever-specific CD8 T cell response was determined by the rate of T cell proliferation and not by the precursor frequency of antigen-specific cells, as has been suggested in several studies in mice. We also found that while the frequency of virus-specific T cells increased slowly, the slow increase could still accurately explain clearance of yellow fever virus in the blood. Our additional mathematical model described well the kinetics of the virus-specific antibody-secreting cell and antibody response to vaccinia virus in vaccinated individuals, suggesting that most antibodies at 3 months post immunization were derived from the population of circulating antibody-secreting cells. Taken together, our analysis provided novel insights into the mechanisms by which live vaccines induce immunity to viral infections and highlighted challenges of applying methods of mathematical modeling to the current, state-of-the-art yet limited immunological data.
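The reported expansion kinetics reduce to a simple exponential-growth relation. A toy sketch of the point estimates (the initial frequency and exact peak day are assumed for illustration; the abstract gives only a ~2-day doubling time and a peak ~2.5 weeks post immunization):

```python
def cd8_frequency(t_days, f0=1e-6, doubling_time=2.0, peak_day=17.5):
    """Frequency of virus-specific CD8 T cells under pure exponential
    expansion up to the peak. f0 and peak_day are illustrative assumptions;
    the abstract reports a ~2-day doubling time peaking ~2.5 weeks post
    immunization."""
    t = min(t_days, peak_day)          # expansion stops at the peak
    return f0 * 2.0 ** (t / doubling_time)

fold_expansion_at_peak = cd8_frequency(17.5) / cd8_frequency(0.0)
# 17.5 days / 2-day doubling time = 8.75 doublings, i.e. roughly 430-fold
```

This makes the abstract's point concrete: with a fixed doubling time, the peak height is set by how long proliferation continues, not by the starting precursor frequency f0, which cancels out of the fold expansion.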

  14. Automatic generation of accurate subject-specific bone finite element models to be used in clinical studies.

    Science.gov (United States)

    Viceconti, Marco; Davinelli, Mario; Taddei, Fulvia; Cappello, Angelo

    2004-10-01

    Most of the finite element models of bones used in orthopaedic biomechanics research are based on generic anatomies. However, in many cases it would be useful to generate from CT data a separate finite element model for each subject of a study group. In a recent study, a hexahedral mesh generator based on a grid projection algorithm was found very effective in terms of accuracy and automation. However, so far the use of this method has been documented only on data collected in vitro and only for long bones. The present study was aimed at verifying whether this method provides a procedure for generating finite element models of human bones from data collected in vivo that is robust, accurate, automatic and general enough to be used in clinical studies. Robustness, automation and numerical accuracy of the proposed method were assessed on five femoral CT data sets of patients affected by various pathologies. The generality of the method was verified by processing a femur, an ilium, a phalanx, a proximal femur reconstruction, and the micro-CT of a small sample of spongy bone. The method was found robust enough to cope with the variability of the five femurs, producing meshes with a numerical accuracy and a computational weight comparable to those found in vitro. Even when the method was used to process the other bones, the levels of mesh conditioning remained within acceptable limits. Thus, it may be concluded that the method is general enough to cope with almost any orthopaedic application.

  15. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

    In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems will show the capability to capture fine details and complicated wave patterns in flows having large disparities in fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and an underwater explosion.

  16. Fast and accurate global multiphase arrival tracking: the irregular shortest-path method in a 3-D spherical earth model

    Science.gov (United States)

    Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart

    2013-09-01

    The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
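At its core, shortest-path ray tracing is a Dijkstra-style search over a graph of grid nodes whose edge weights are inter-node traveltimes; the multistage scheme repeats such searches between interfaces. A toy sketch of that core step (the graph below is invented for illustration, not a spherical earth mesh):

```python
import heapq

def dijkstra(graph, source):
    """First-arrival traveltimes from a source node on a weighted graph,
    the basic operation behind shortest-path ray tracing."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy velocity graph: edge weights are traveltimes between nodes (seconds)
graph = {
    "src": [("a", 1.0), ("b", 4.0)],
    "a": [("b", 2.0), ("rcv", 5.0)],
    "b": [("rcv", 1.0)],
}
times = dijkstra(graph, "src")
# times["rcv"] == 4.0 via src -> a -> b -> rcv
```

The paper's contribution lies beyond this first-arrival step: restarting such searches at interfaces to chain reflection/conversion segments, so that stationary minimax-time (later-arriving) paths are also recovered.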

  17. The isotropic local Wigner-Seitz model: An accurate theoretical model for the quasi-free electron energy in fluids

    Science.gov (United States)

    Evans, Cherice; Findley, Gary L.

    The quasi-free electron energy V0 (ρ) is important in understanding electron transport through a fluid, as well as for modeling electron attachment reactions in fluids. Our group has developed an isotropic local Wigner-Seitz model that allows one to successfully calculate the quasi-free electron energy for a variety of atomic and molecular fluids from low density to the density of the triple point liquid with only a single adjustable parameter. This model, when coupled with the quasi-free electron energy data and the thermodynamic data for the fluids, also can yield optimized intermolecular potential parameters and the zero kinetic energy electron scattering length. In this poster, we give a review of the isotropic local Wigner-Seitz model in comparison to previous theoretical models for the quasi-free electron energy. All measurements were performed at the University of Wisconsin Synchrotron Radiation Center. This work was supported by grants from the National Science Foundation (NSF CHE-0956719), the Petroleum Research Fund (45728-B6 and 5-24880), the Louisiana Board of Regents Support Fund (LEQSF(2006-09)-RD-A33), and the Professional Staff Congress of the City University of New York.

  18. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER-positive patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER-negative patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic

  19. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    Science.gov (United States)

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses the existing state of the art on the same dataset, and the new data fusion method is practically applied in our driverless car.
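The first two steps, prediction of raw navigation followed by a consistency check to remove outliers, can be sketched with a single least-squares autoregressive predictor (a toy stand-in for the paper's bank of ARMA models; the model order and rejection threshold are assumptions):

```python
import numpy as np

def ar_predict_and_check(history, measurement, order=3, threshold=3.0):
    """Fit a least-squares AR(order) model to recent 1-D navigation samples,
    predict the next value, and flag the incoming measurement as an outlier
    when it deviates from the prediction by more than threshold * residual
    standard deviation. A toy stand-in for the paper's bank of ARMA models."""
    h = np.asarray(history, dtype=float)
    # Regression rows: features are the `order` previous samples, newest first
    X = np.array([h[t - order:t][::-1] for t in range(order, len(h))])
    y = h[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_std = float(np.std(y - X @ coef)) + 1e-9   # floor avoids a zero band
    pred = float(coef @ h[-order:][::-1])
    is_outlier = abs(measurement - pred) > threshold * resid_std
    return pred, is_outlier

# Smoothly varying position track: the AR model extrapolates it well,
# so a wildly inconsistent GPS fix (e.g. a multipath jump) is rejected
track = [0.5 * t for t in range(20)]
pred, bad = ar_predict_and_check(track, measurement=50.0)
```

The paper additionally cross-checks predictions against occupancy grid constraints and fuses the surviving measurements; this sketch covers only the predict-and-reject idea.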

  20. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    Directory of Open Access Journals (Sweden)

    Shiyao Wang

    2016-02-01

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effect but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we perform extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses the existing state of the art on the same dataset, and the new data fusion method is practically applied in our driverless car.

  1. Combining first-principles and data modeling for the accurate prediction of the refractive index of organic polymers

    Science.gov (United States)

    Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes

    2018-06-01

    Organic materials with a high index of refraction (RI) are attracting considerable interest due to their potential application in optical and optoelectronic devices. However, most of these applications require an RI value of 1.7 or larger, while typical carbon-based polymers only exhibit values in the range of 1.3-1.5. This paper introduces an efficient computational protocol for the accurate prediction of RI values in polymers to facilitate in silico studies that can guide the discovery and design of next-generation high-RI materials. Our protocol is based on the Lorentz-Lorenz equation and is parametrized by the polarizability and number density values of a given candidate compound. In the proposed scheme, we compute the former using first-principles electronic structure theory and the latter using an approximation based on van der Waals volumes. The critical parameter in the number density approximation is the packing fraction of the bulk polymer, for which we have devised a machine learning model. We demonstrate the performance of the proposed RI protocol by testing its predictions against the experimentally known RI values of 112 optical polymers. Our approach to combine first-principles and data modeling emerges as both a successful and a highly economical path to determining the RI values for a wide range of organic polymers.
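Once the polarizability and number density are in hand, the Lorentz-Lorenz relation (n² − 1)/(n² + 2) = (4π/3)·N·α closes the protocol. A minimal sketch of that final inversion (the polarizability and number density values below are invented for illustration, not taken from the paper):

```python
import math

def refractive_index(alpha_cm3, n_density_cm3):
    """Invert the Lorentz-Lorenz relation (n^2 - 1)/(n^2 + 2) = (4*pi/3)*N*alpha
    for the refractive index n (CGS units: alpha in cm^3, N in cm^-3)."""
    L = (4.0 * math.pi / 3.0) * n_density_cm3 * alpha_cm3
    if not 0.0 <= L < 1.0:
        raise ValueError("Lorentz-Lorenz factor must lie in [0, 1)")
    return math.sqrt((1.0 + 2.0 * L) / (1.0 - L))

# Illustrative inputs: repeat-unit polarizability ~1.2e-23 cm^3 at a
# number density of ~5e21 units/cm^3
n = refractive_index(1.2e-23, 5.0e21)   # lands in the typical 1.3-1.5 window
```

In the paper's scheme, α comes from first-principles electronic structure theory and N from van der Waals volumes scaled by a machine-learned packing fraction; this sketch only shows the algebraic last step.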

  2. Biological Model Development as an Opportunity to Provide Content Auditing for the Foundational Model of Anatomy Ontology.

    Science.gov (United States)

    Wang, Lucy L; Grunblatt, Eli; Jung, Hyunggu; Kalet, Ira J; Whipple, Mark E

    2015-01-01

    Constructing a biological model using an established ontology provides a unique opportunity to perform content auditing on the ontology. We built a Markov chain model to study tumor metastasis in the regional lymphatics of patients with head and neck squamous cell carcinoma (HNSCC). The model attempts to determine regions with high likelihood for metastasis, which guides surgeons and radiation oncologists in selecting the boundaries of treatment. To achieve consistent anatomical relationships, the nodes in our model are populated using lymphatic objects extracted from the Foundational Model of Anatomy (FMA) ontology. During this process, we discovered several classes of inconsistencies in the lymphatic representations within the FMA. We were able to use this model building opportunity to audit the entities and connections in this region of interest (ROI). We found five subclasses of errors that are computationally detectable and resolvable, one subclass of errors that is computationally detectable but unresolvable, requiring the assistance of a content expert, and also errors of content, which cannot be detected through computational means. Mathematical descriptions of detectable errors along with expert review were used to discover inconsistencies and suggest concepts for addition and removal. Out of 106 organ and organ parts in the ROI, 8 unique entities were affected, leading to the suggestion of 30 concepts for addition and 4 for removal. Out of 27 lymphatic chain instances, 23 were found to have errors, with a total of 32 concepts suggested for addition and 15 concepts for removal. These content corrections are necessary for the accurate functioning of the FMA and provide benefits for future research and educational uses.
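The node-to-node spread in such a model reduces to repeated multiplication of a state distribution by a transition matrix. A toy sketch (the states, topology, and probabilities are invented for illustration and are not the FMA-derived HNSCC model):

```python
import numpy as np

# Toy Markov chain over hypothetical lymphatic levels; in the paper the
# nodes come from lymphatic objects extracted from the FMA ontology
states = ["primary", "level_II", "level_III", "level_IV"]
P = np.array([
    [0.6, 0.4, 0.0, 0.0],   # primary: stay, or spread to level II
    [0.0, 0.7, 0.3, 0.0],   # level II: stay, or spread to level III
    [0.0, 0.0, 0.8, 0.2],   # level III: stay, or spread to level IV
    [0.0, 0.0, 0.0, 1.0],   # level IV: absorbing
])
dist = np.array([1.0, 0.0, 0.0, 0.0])   # disease starts at the primary site
for _ in range(10):                      # propagate ten transition steps
    dist = dist @ P
# dist[k] now approximates the likelihood of involvement at each level,
# the kind of quantity used to guide treatment boundaries
```

Building the real chain requires anatomically consistent node connections, which is exactly why the FMA inconsistencies described above surfaced during model construction.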

  3. New and Accurate Predictive Model for the Efficacy of Extracorporeal Shock Wave Therapy in Managing Patients With Chronic Plantar Fasciitis.

    Science.gov (United States)

    Yin, Mengchen; Chen, Ni; Huang, Quan; Marla, Anastasia Sulindro; Ma, Junming; Ye, Jie; Mo, Wen

    2017-12-01

    The Youden index was 0.4243, 0.3003, and 0.7189, respectively. The Hosmer-Lemeshow test showed a good fit of the predictive model, with an overall accuracy of 89.6%. This study establishes a new and accurate predictive model for the efficacy of ESWT in managing patients with chronic plantar fasciitis. The use of these parameters, in the form of a predictive model for ESWT efficacy, has the potential to improve decision-making in the application of ESWT. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  4. Accurate Monte Carlo modeling of cyclotrons for optimization of shielding and activation calculations in the biomedical field

    Science.gov (United States)

    Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano

    2015-11-01

    Biomedical cyclotrons for the production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are widespread and established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods of calculation of both shielding and materials activation, in approximate or idealized geometry setups. The availability of Monte Carlo codes with accurate and up-to-date libraries for transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of modern computers, makes systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotrons for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton, and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurement and simulations of 0.99±0.07 was found. Saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended

  5. RCK: accurate and efficient inference of sequence- and structure-based protein-RNA binding models from RNAcompete data.

    Science.gov (United States)

    Orenstein, Yaron; Wang, Yuhao; Berger, Bonnie

    2016-06-15

    Protein-RNA interactions, which play vital roles in many processes, are mediated through both RNA sequence and structure. CLIP-based methods, which measure protein-RNA binding in vivo, suffer from experimental noise and systematic biases, whereas in vitro experiments capture a clearer signal of protein-RNA binding. Among them, RNAcompete provides binding affinities of a specific protein to more than 240,000 unstructured RNA probes in one experiment. The computational challenge is to infer RNA structure- and sequence-based binding models from these data. The state of the art in sequence models, DeepBind, does not model structural preferences. RNAcontext models both sequence and structure preferences, but is outperformed by GraphProt. Unfortunately, GraphProt cannot detect structural preferences from RNAcompete data due to the unstructured nature of the data, as noted by its developers, nor can it be tractably run on the full RNAcompete dataset. We develop RCK, an efficient, scalable algorithm that infers both sequence and structure preferences based on a new k-mer based model. Remarkably, even though RNAcompete data is designed to be unstructured, RCK can still learn structural preferences from it. RCK significantly outperforms both RNAcontext and DeepBind in in vitro binding prediction for 244 RNAcompete experiments. Moreover, RCK is also faster and uses less memory, which enables scalability. While currently on par with existing methods in in vivo binding prediction on a small-scale test, we demonstrate that RCK will increasingly benefit from experimentally measured RNA structure profiles as compared to computationally predicted ones. By running RCK on the entire RNAcompete dataset, we generate and provide as a resource a set of protein-RNA structure-based models on an unprecedented scale. Software and models are freely available at http://rck.csail.mit.edu/. Contact: bab@mit.edu. Supplementary data are available at Bioinformatics online. © The Author 2016.

  6. Thermodynamically accurate modeling of the catalytic cycle of photosynthetic oxygen evolution: a mathematical solution to asymmetric Markov chains.

    Science.gov (United States)

    Vinyard, David J; Zachary, Chase E; Ananyev, Gennady; Dismukes, G Charles

    2013-07-01

    Forty-three years ago, Kok and coworkers introduced a phenomenological model describing period-four oscillations in O2 flash yields during photosynthetic water oxidation (WOC), which had been first reported by Joliot and coworkers. The original two-parameter Kok model was subsequently extended in its level of complexity to better simulate diverse data sets, including intact cells and isolated PSII-WOCs, but at the expense of introducing physically unrealistic assumptions necessary to enable numerical solutions. To date, analytical solutions have been found only for symmetric Kok models (inefficiencies are equally probable for all intermediates, called "S-states"). However, it is widely accepted that S-state reaction steps are not identical and some are not reversible (by thermodynamic restraints) thereby causing asymmetric cycles. We have developed a mathematically more rigorous foundation that eliminates unphysical assumptions known to be in conflict with experiments and adopts a new experimental constraint on solutions. This new algorithm termed STEAMM for S-state Transition Eigenvalues of Asymmetric Markov Models enables solutions to models having fewer adjustable parameters and uses automated fitting to experimental data sets, yielding higher accuracy and precision than the classic Kok or extended Kok models. This new tool provides a general mathematical framework for analyzing damped oscillations arising from any cycle period using any appropriate Markov model, regardless of symmetry. We illustrate applications of STEAMM that better describe the intrinsic inefficiencies for photon-to-charge conversion within PSII-WOCs that are responsible for damped period-four and period-two oscillations of flash O2 yields across diverse species, while using simpler Markov models free from unrealistic assumptions. Copyright © 2013 Elsevier B.V. All rights reserved.
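The symmetric two-parameter Kok model that STEAMM generalizes is itself a small Markov chain that is easy to simulate. A minimal sketch of that classic baseline (the miss and double-hit probabilities and the dark-adapted S-state distribution below are illustrative):

```python
import numpy as np

def kok_o2_yields(n_flashes=16, miss=0.1, double_hit=0.05):
    """Symmetric two-parameter Kok model: the S-state cycle S0->S1->S2->S3->S0
    advances one step per flash with miss probability `miss` and double-hit
    probability `double_hit`; O2 is released on arrival at S0 from S3 (or
    from S2 via a double hit). Parameter values here are illustrative."""
    a, b = miss, double_hit
    hit = 1.0 - a - b
    # Column-stochastic transition matrix: M[i, j] = P(S_j -> S_i) per flash
    M = np.array([
        [a,   0.0, b,   hit],
        [hit, a,   0.0, b  ],
        [b,   hit, a,   0.0],
        [0.0, b,   hit, a  ],
    ])
    s = np.array([0.25, 0.75, 0.0, 0.0])   # dark-adapted: ~25% S0, ~75% S1
    yields = []
    for _ in range(n_flashes):
        yields.append(hit * s[3] + b * s[2])   # O2-producing flux into S0
        s = M @ s
    return yields

y = kok_o2_yields()
# Damped period-four oscillation with the first maximum on the third flash
```

STEAMM's contribution is to solve the asymmetric case analytically, where each S-state transition has its own (possibly irreversible) inefficiency; this sketch shows only the symmetric chain it supersedes.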

  7. Mixture models reveal multiple positional bias types in RNA-Seq data and lead to accurate transcript concentration estimates.

    Directory of Open Access Journals (Sweden)

    Andreas Tuerk

    2017-05-01

    Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (read "mix-square"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC/SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates, with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression, with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM, and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM, and 34% for eXpress. We further observe improved repeatability across laboratory sites, with a relative increase in R2 between 8% and 44% and reduced standard deviation.

  8. Combined endeavor of Neutrosophic Set and Chan-Vese model to extract accurate liver image from CT scan.

    Science.gov (United States)

    Siri, Sangeeta K; Latte, Mrityunjaya V

    2017-11-01

    Many different diseases can occur in the liver, including infections such as hepatitis, as well as cirrhosis, cancer, and damage from medication or toxins. The foremost stage in computer-aided diagnosis of the liver is the identification of the liver region. Liver segmentation algorithms extract the liver image from scan images, which helps in virtual surgery simulation, speeds up diagnosis, and supports accurate investigation and surgery planning. Existing liver segmentation algorithms try to extract the exact liver image from abdominal Computed Tomography (CT) scan images. It is an open problem because of ambiguous boundaries, large variation in intensity distribution, variability of liver geometry from patient to patient and presence of noise. A novel approach is proposed to meet these challenges in extracting the exact liver image from abdominal CT scan images. The proposed approach consists of three phases: (1) pre-processing, (2) CT scan image transformation to a Neutrosophic Set (NS) and (3) post-processing. In pre-processing, noise is removed by a median filter. A "new structure" is designed to transform a CT scan image into the neutrosophic domain, which is expressed using three membership subsets: the True subset (T), the False subset (F) and the Indeterminacy subset (I). This transform approximately extracts the liver image structure. In the post-processing phase, a morphological operation is performed on the indeterminacy subset (I) and the Chan-Vese (C-V) model is applied, with the initial contour detected within the liver without user intervention. This results in liver boundary identification with high accuracy. Experiments show that the proposed method is effective, robust and comparable with existing algorithms for liver segmentation of CT scan images. Copyright © 2017 Elsevier B.V. All rights reserved.
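    The neutrosophic transform described above maps each pixel to True/Indeterminacy/False memberships. A minimal sketch of one common formulation (local mean driving T, local deviation driving I; the paper's "new structure" differs in detail) could look like:

```python
import numpy as np

def neutrosophic_subsets(img, w=3):
    """Map a grayscale image into neutrosophic membership subsets T, I, F.
    Common textbook formulation; the paper's "new structure" differs in detail."""
    img = img.astype(float)
    pad = w // 2
    padded = np.pad(img, pad, mode='reflect')
    # Local mean over a w x w window for every pixel
    windows = np.lib.stride_tricks.sliding_window_view(padded, (w, w))
    local_mean = windows.mean(axis=(-1, -2))
    # Deviation of each pixel from its local mean drives indeterminacy
    delta = np.abs(img - local_mean)
    T = (local_mean - local_mean.min()) / (np.ptp(local_mean) + 1e-12)
    I = (delta - delta.min()) / (np.ptp(delta) + 1e-12)
    F = 1.0 - T
    return T, I, F

# Smooth synthetic "CT slice": an intensity ramp
img = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
T, I, F = neutrosophic_subsets(img)
print(T.shape)
```

    The post-processing step of the paper would then run morphology on I and evolve a Chan-Vese contour on the transformed image.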

  9. ModFOLD6: an accurate web server for the global and local quality estimation of 3D protein models.

    Science.gov (United States)

    Maghrabi, Ali H A; McGuffin, Liam J

    2017-07-03

    Methods that reliably estimate the likely similarity between the predicted and native structures of proteins have become essential for driving the acceptance and adoption of three-dimensional protein models by life scientists. ModFOLD6 is the latest version of our leading resource for Estimates of Model Accuracy (EMA), which uses a pioneering hybrid quasi-single model approach. The ModFOLD6 server integrates scores from three pure-single model methods and three quasi-single model methods using a neural network to estimate local quality scores. Additionally, the server provides three options for producing global score estimates, depending on the requirements of the user: (i) ModFOLD6_rank, which is optimized for ranking/selection, (ii) ModFOLD6_cor, which is optimized for correlations of predicted and observed scores and (iii) ModFOLD6 global for balanced performance. The ModFOLD6 methods rank among the top few for EMA, according to independent blind testing by the CASP12 assessors. The ModFOLD6 server is also continuously automatically evaluated as part of the CAMEO project, where significant performance gains have been observed compared to our previous server and other publicly available servers. The ModFOLD6 server is freely available at: http://www.reading.ac.uk/bioinf/ModFOLD/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. Image charge models for accurate construction of the electrostatic self-energy of 3D layered nanostructure devices

    Science.gov (United States)

    Barker, John R.; Martinez, Antonio

    2018-04-01

    Efficient analytical image charge models are derived for the full spatial variation of the electrostatic self-energy of electrons in semiconductor nanostructures that arises from dielectric mismatch using semi-classical analysis. The methodology provides a fast, compact and physically transparent computation for advanced device modeling. The underlying semi-classical model for the self-energy has been established and validated during recent years and depends on a slight modification of the macroscopic static dielectric constants for individual homogeneous dielectric regions. The model has been validated for point charges as close as one interatomic spacing to a sharp interface. A brief introduction to image charge methodology is followed by a discussion and demonstration of the traditional failure of the methodology to derive the electrostatic potential at arbitrary distances from a source charge. However, the self-energy involves the local limit of the difference between the electrostatic Green functions for the full dielectric heterostructure and the homogeneous equivalent. It is shown that high convergence may be achieved for the image charge method for this local limit. A simple re-normalisation technique is introduced to reduce the number of image terms to a minimum. A number of progressively complex 3D models are evaluated analytically and compared with high precision numerical computations. Accuracies of 1% are demonstrated. Introducing a simple technique for modeling the transition of the self-energy between disparate dielectric structures we generate an analytical model that describes the self-energy as a function of position within the source, drain and gated channel of a silicon wrap round gate field effect transistor on a scale of a few nanometers cross-section. At such scales the self-energies become large (typically up to ~100 meV) close to the interfaces as well as along the channel. The screening of a gated structure is shown to reduce the self
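    For the simplest case, a point charge a distance d from a single planar dielectric interface, the classical image-charge self-energy is W = q²(ε₁−ε₂)/(16π ε₀ ε₁ (ε₁+ε₂) d). A small sketch of that textbook single-interface result (not the paper's multi-interface semi-classical model) reproduces the tens-of-meV scale quoted above:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def image_self_energy(q, d, eps1, eps2):
    """Classical self-energy (J) of a point charge q at distance d (m) from a
    planar interface, with the charge in dielectric eps1 facing eps2.
    Single-interface textbook result; the paper handles full 3D layered stacks."""
    return q ** 2 * (eps1 - eps2) / (
        16.0 * math.pi * EPS0 * eps1 * (eps1 + eps2) * d)

# Electron 1 nm inside silicon (eps ~ 11.7) next to SiO2 (eps ~ 3.9)
e = 1.602176634e-19
W = image_self_energy(e, 1e-9, 11.7, 3.9)
print(W / e * 1000, "meV")  # positive: the charge is repelled from the low-eps side
```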

  11. Accurate measuring of cross-sections for e+e- → hadrons: Testing the Standard Model and applications to QCD

    International Nuclear Information System (INIS)

    Malaescu, B.

    2010-01-01

    The scope of this thesis is to obtain and use accurate data on e+e- annihilation into hadrons at energies of the order of 1 GeV. These data represent a very valuable input for Standard Model tests involving vacuum polarization, such as the comparison of the muon magnetic moment to theory, and for QCD tests and applications. The different parts of this thesis describe four aspects of my work in this context. First, the measurements of cross sections as a function of energy necessitate the unfolding of data spectra from detector effects. I have proposed a new iterative unfolding method for experimental data, with improved capabilities compared to existing tools. Secondly, the experimental core of this thesis is a study of the process e+e- → K+K- from threshold to 5 GeV using the initial state radiation (ISR) method (through the measurement of e+e- → K+K-γ) with the BABAR detector. All relevant efficiencies are measured with experimental data and the absolute normalization comes from the simultaneously measured μμγ process. I have performed the full analysis, which achieves a systematic uncertainty of 0.7% on the dominant φ resonance. Results on e+e- → π+π- from threshold to 3 GeV are also presented. Thirdly, a comparison based on two different ways of predicting the muon magnetic moment, the Standard Model and hadronic tau decay, shows an interesting hint of new physics effects (a 3.2σ effect). Fourthly, QCD sum rules are powerful tools for obtaining precise information on QCD parameters, such as the strong coupling α_S. I have worked on experimental data concerning the spectral functions from τ decays measured by ALEPH. I have discussed in some detail the perturbative QCD prediction obtained with two different methods: fixed-order perturbation theory (FOPT) and contour-improved perturbation theory (CIPT). The corresponding theoretical uncertainties have been studied at the τ and Z mass scales. The CIPT method

  12. Accurate nonlinear modeling for flexible manipulators using mixed finite element formulation in order to obtain maximum allowable load

    International Nuclear Information System (INIS)

    Esfandiar, Habib; KoraYem, Moharam Habibnejad

    2015-01-01

    In this study, the researchers examine nonlinear dynamic analysis and determine the dynamic load carrying capacity (DLCC) of flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To eliminate the risk of shear locking, a new procedure is presented based on a mixed finite element formulation. In the proposed method, shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out for both small and large deformation models using the extended Hamilton method. The system motion equations are obtained using the nonlinear displacement-strain relationship and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of flexible manipulators along a given path, considering the constraints of end-effector accuracy, maximum motor torque and maximum stress in the manipulators. Simulation studies are conducted to evaluate the efficiency of the proposed method, taking two-link flexible and fixed-base manipulators for linear and circular paths into consideration. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and appropriate performance of the proposed method.

  13. StatSTEM: An efficient approach for accurate and precise model-based quantification of atomic resolution electron microscopy images

    Energy Technology Data Exchange (ETDEWEB)

    De Backer, A.; Bos, K.H.W. van den [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Van den Broek, W. [AG Strukturforschung/Elektronenmikroskopie, Institut für Physik, Humboldt-Universität zu Berlin, Newtonstraße 15, 12489 Berlin (Germany); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Van Aert, S., E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)

    2016-12-15

    An efficient model-based estimation algorithm is introduced to quantify the atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns, fully accounting for overlap between neighbouring columns and enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which measurements of the atomic column positions and scattering cross-sections from annular dark field (ADF) STEM images can be estimated has been investigated. The highest attainable precision is reached even for low-dose images. Furthermore, the advantages of the model-based approach taking into account overlap between neighbouring columns are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their distance, and for the estimation of the scattering cross-section, which is compared to the integrated intensity from a Voronoi cell. To provide end-users with this well-established quantification method, a user-friendly program, StatSTEM, has been developed, which is freely available under a GNU public license. - Highlights: • An efficient model-based method for quantitative electron microscopy is introduced. • Images are modelled as a superposition of 2D Gaussian peaks. • Overlap between neighbouring columns is taken into account. • Structure parameters can be obtained with the highest precision and accuracy. • StatSTEM, a user-friendly program (GNU public license), is developed.
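    The contrast drawn above between model-based scattering cross-sections and Voronoi-cell integration can be illustrated with a toy numpy example (assumed peak parameters, not StatSTEM code): when two Gaussian columns overlap, the Voronoi integral mixes in the neighbour's tail while losing part of its own:

```python
import numpy as np

def gauss2d(X, Y, x0, y0, height, width):
    """Isotropic 2D Gaussian peak (toy model of an atomic column)."""
    return height * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * width ** 2))

# Two overlapping columns; all peak parameters are assumed toy values
n = 256
x = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(x, x)
d = 0.6  # column separation comparable to the peak width
img = gauss2d(X, Y, -d / 2, 0.0, 1.0, 0.3) + gauss2d(X, Y, d / 2, 0.0, 0.5, 0.3)

# Voronoi integration: each pixel is assigned to the nearest column and summed
px_area = (x[1] - x[0]) ** 2
voronoi_left = img[X < 0].sum() * px_area
true_left = 2.0 * np.pi * 0.3 ** 2 * 1.0  # analytic volume of the left peak alone
print(voronoi_left, true_left)  # the Voronoi estimate is biased by the overlap
```

    A model-based fit of the two Gaussians jointly, as in StatSTEM, removes this bias by attributing the overlapping intensity to the correct column.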

  14. Accurate nonlinear modeling for flexible manipulators using mixed finite element formulation in order to obtain maximum allowable load

    Energy Technology Data Exchange (ETDEWEB)

    Esfandiar, Habib; KoraYem, Moharam Habibnejad [Islamic Azad University, Tehran (Iran, Islamic Republic of)

    2015-09-15

    In this study, the researchers examine nonlinear dynamic analysis and determine the dynamic load carrying capacity (DLCC) of flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To eliminate the risk of shear locking, a new procedure is presented based on a mixed finite element formulation. In the proposed method, shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out for both small and large deformation models using the extended Hamilton method. The system motion equations are obtained using the nonlinear displacement-strain relationship and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of flexible manipulators along a given path, considering the constraints of end-effector accuracy, maximum motor torque and maximum stress in the manipulators. Simulation studies are conducted to evaluate the efficiency of the proposed method, taking two-link flexible and fixed-base manipulators for linear and circular paths into consideration. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and appropriate performance of the proposed method.

  15. Fast and Accurate Icepak-PSpice Co-Simulation of IGBTs under Short-Circuit with an Advanced PSpice Model

    DEFF Research Database (Denmark)

    Wu, Rui; Iannuzzo, Francesco; Wang, Huai

    2014-01-01

    A basic problem in the IGBT short-circuit failure mechanism study is to obtain realistic temperature distribution inside the chip, which demands accurate electrical simulation to obtain power loss distribution as well as detailed IGBT geometry and material information. This paper describes an unp...

  16. Accurate Theoretical Methane Line Lists in the Infrared up to 3000 K and Quasi-continuum Absorption/Emission Modeling for Astrophysical Applications

    Science.gov (United States)

    Rey, Michael; Nikitin, Andrei V.; Tyuterev, Vladimir G.

    2017-10-01

    Modeling atmospheres of hot exoplanets and brown dwarfs requires high-T databases that include methane as the major hydrocarbon. We report a complete theoretical line list of 12CH4 in the infrared range 0-13,400 cm-1 up to T max = 3000 K computed via a full quantum-mechanical method from ab initio potential energy and dipole moment surfaces. Over 150 billion transitions were generated with the lower rovibrational energy cutoff 33,000 cm-1 and intensity cutoff down to 10-33 cm/molecule to ensure convergent opacity predictions. Empirical corrections for 3.7 million of the strongest transitions permitted line position accuracies of 0.001-0.01 cm-1. Full data are partitioned into two sets. “Light lists” contain strong and medium transitions necessary for an accurate description of sharp features in absorption/emission spectra. For a fast and efficient modeling of quasi-continuum cross sections, billions of tiny lines are compressed in “super-line” libraries according to Rey et al. These combined data will be freely accessible via the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru), which provides a user-friendly interface for simulations of absorption coefficients, cross-sectional transmittance, and radiance. Comparisons with cold, room, and high-T experimental data show that the data reported here represent the first global theoretical methane lists suitable for high-resolution astrophysical applications.

  17. Development of dual stream PCRTM-SOLAR for fast and accurate radiative transfer modeling in the cloudy atmosphere with solar radiation

    Science.gov (United States)

    Yang, Q.; Liu, X.; Wu, W.; Kizer, S.; Baize, R. R.

    2016-12-01

    A fast and accurate radiative transfer model is key for satellite data assimilation and for observation system simulation experiments in numerical weather prediction and climate study applications. We proposed and developed a dual-stream PCRTM-SOLAR model which can simulate radiative transfer in the cloudy atmosphere with solar radiation quickly and accurately. Multiple scattering by multiple layers of clouds/aerosols is included in the model. The root-mean-square errors are usually less than 5×10^-4 mW/(cm^2 sr cm^-1). The computation speed is 3 to 4 orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This model will enable a vast new set of scientific calculations that were previously limited by the computational expense of available radiative transfer models.

  18. Testing the importance of accurate meteorological input fields and parameterizations in atmospheric transport modelling using DREAM - Validation against ETEX-1

    DEFF Research Database (Denmark)

    Brandt, J.; Bastrup-Birk, A.; Christensen, J.H.

    1998-01-01

    A tracer model, the DREAM, which is based on a combination of a near-range Lagrangian model and a long-range Eulerian model, has been developed. The meteorological meso-scale model, MM5V1, is implemented as a meteorological driver for the tracer model. The model system is used for studying...

  19. Development of the Japanese version of an information aid to provide accurate information on prognosis to patients with advanced non-small-cell lung cancer receiving chemotherapy: a pilot study.

    Science.gov (United States)

    Nakano, Kikuo; Kitahara, Yoshihiro; Mito, Mineyo; Seno, Misato; Sunada, Shoji

    2018-02-27

    Without explicit prognostic information, patients may overestimate their life expectancy and make poor choices at the end of life. We sought to design the Japanese version of an information aid (IA) to provide accurate information on prognosis to patients with advanced non-small-cell lung cancer (NSCLC) and to assess the effects of the IA on hope, psychosocial status, and perception of curability. We developed the Japanese version of an IA, which provided information on survival and cure rates as well as numerical survival estimates for patients with metastatic NSCLC receiving first-line chemotherapy. We then assessed the pre- and post-intervention effects of the IA on hope, anxiety, and perception of curability and treatment benefits. A total of 20 (95%) of 21 patients (65% male; median age, 72 years) completed the IA pilot test. Based on the results, scores on the Distress and Impact Thermometer screening tool for adjustment disorders and major depression tended to decrease (from 4.5 to 2.5; p = 0.204), whereas no significant changes were seen in scores for anxiety on the Japanese version of the Support Team Assessment Schedule or in scores on the Herth Hope Index (from 41.9 to 41.5; p = 0.204). The majority of the patients (16/20, 80%) had high expectations regarding the curative effects of chemotherapy. The Japanese version of the IA appeared to help patients with NSCLC maintain hope, and did not increase their anxiety when they were given explicit prognostic information; however, the IA did not appear to help such patients understand the goal of chemotherapy. Further research is needed to test the findings in a larger sample and measure the outcomes of explicit prognostic information on hope, psychological status, and perception of curability.

  20. Accurate Laser Measurements of the Water Vapor Self-Continuum Absorption in Four Near Infrared Atmospheric Windows. a Test of the MT_CKD Model.

    Science.gov (United States)

    Campargue, Alain; Kassi, Samir; Mondelain, Didier; Romanini, Daniele; Lechevallier, Loïc; Vasilchenko, Semyon

    2017-06-01

    The semi-empirical MT_CKD model of the absorption continuum of water vapor is widely used in atmospheric radiative transfer codes for the atmospheres of the Earth and exoplanets, but lacks experimental validation in the atmospheric windows. Recent laboratory measurements by Fourier transform spectroscopy have led to self-continuum cross-sections much larger than the MT_CKD values in the near-infrared transparency windows. In the present work, we report accurate water vapor absorption continuum measurements by Cavity Ring Down Spectroscopy (CRDS) and Optical Feedback Cavity Enhanced Absorption Spectroscopy (OF-CEAS) at selected spectral points of the transparency windows centered around 4.0, 2.1 and 1.25 μm. The temperature dependence of the absorption continuum at 4.38 μm and 3.32 μm is measured in the 23-39 °C range. The self-continuum water vapor absorption is derived either from the baseline variation of spectra recorded for a series of pressure values over a small spectral interval, or from baseline monitoring at fixed laser frequency during pressure ramps. In order to avoid possible bias when approaching the water saturation pressure, the maximum pressure value was limited to about 16 Torr, corresponding to 75% relative humidity. After subtraction of the local water monomer line contribution, self-continuum cross-sections, C_S, were determined with a few percent accuracy from the pressure-squared dependence of the spectral baseline level. Together with our previous CRDS and OF-CEAS measurements in the 2.1 and 1.6 μm windows, the derived water vapor self-continuum provides a unique set of self-continuum cross-sections for a test of the MT_CKD model in four transparency windows. Although showing some important deviations in absolute values (up to a factor of 4 at the center of the 2.1 μm window), our accurate measurements validate the overall frequency dependence of the MT_CKD 2.8 model.
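    The record notes that C_S is determined from the pressure-squared dependence of the baseline level. A hedged numpy sketch of that extraction (synthetic ramp data and an assumed units convention, α = C_S·N·P, not the paper's actual measurements):

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

# Synthetic baseline absorption recorded during a pressure ramp.
# Units convention (assumption): alpha [1/cm] = C_S [cm^2/(molecule atm)] * N [1/cm^3] * P [atm]
C_S_true = 1.0e-23                        # assumed toy value
T = 296.0                                 # K
P_torr = np.linspace(1.0, 16.0, 30)       # stay well below saturation
P_atm = P_torr / 760.0
N = P_torr * 133.322 / (KB * T) * 1e-6    # water number density, molecule/cm^3
rng = np.random.default_rng(0)
alpha = C_S_true * N * P_atm + rng.normal(0.0, 1e-10, P_torr.size)

# Since N is proportional to P, alpha grows quadratically with pressure;
# a linear fit of alpha against N*P recovers the self-continuum cross-section
C_S_fit = np.polyfit(N * P_atm, alpha, 1)[0]
print(C_S_fit)
```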

  1. The type IIP supernova 2012aw in M95: Hydrodynamical modeling of the photospheric phase from accurate spectrophotometric monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Dall'Ora, M.; Botticella, M. T.; Della Valle, M. [INAF, Osservatorio Astronomico di Capodimonte, Napoli (Italy); Pumo, M. L.; Zampieri, L.; Tomasella, L.; Cappellaro, E.; Benetti, S. [INAF, Osservatorio Astronomico di Padova, I-35122 Padova (Italy); Pignata, G.; Bufano, F. [Departamento de Ciencias Fisicas, Universidad Andres Bello, Avda. Republica 252, Santiago (Chile); Bayless, A. J. [Southwest Research Institute, Department of Space Science, 6220 Culebra Road, San Antonio, TX 78238 (United States); Pritchard, T. A. [Department of Astronomy and Astrophysics, Penn State University, 525 Davey Lab, University Park, PA 16802 (United States); Taubenberger, S.; Benitez, S. [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85741 Garching (Germany); Kotak, R.; Inserra, C.; Fraser, M. [Astrophysics Research Centre, School of Mathematics and Physics, Queen's University Belfast, Belfast, BT7 1NN (United Kingdom); Elias-Rosa, N. [Institut de Ciències de l'Espai (CSIC-IEEC), Campus UAB, Torre C5, 2a planta, E-08193 Bellaterra, Barcelona (Spain); Haislip, J. B. [Department of Physics and Astronomy, University of North Carolina at Chapel Hill, 120 E. Cameron Ave., Chapel Hill, NC 27599 (United States); Harutyunyan, A. [Fundación Galileo Galilei - Telescopio Nazionale Galileo, Rambla José Ana Fernández Pérez 7, E-38712 Breña Baja, TF (Spain); and others

    2014-06-01

    We present an extensive optical and near-infrared photometric and spectroscopic campaign of the Type IIP supernova SN 2012aw. The data set densely covers the evolution of SN 2012aw shortly after the explosion through the end of the photospheric phase, with two additional photometric observations collected during the nebular phase, to fit the radioactive tail and estimate the 56Ni mass. Also included in our analysis is the previously published Swift UV data, therefore providing a complete view of the ultraviolet-optical-infrared evolution of the photospheric phase. On the basis of our data set, we estimate all the relevant physical parameters of SN 2012aw with our radiation-hydrodynamics code: envelope mass M_env ∼ 20 M_☉, progenitor radius R ∼ 3 × 10^13 cm (∼430 R_☉), explosion energy E ∼ 1.5 foe, and initial 56Ni mass ∼0.06 M_☉. These mass and radius values are reasonably well supported by independent evolutionary models of the progenitor, and may suggest a progenitor mass higher than the observational limit of 16.5 ± 1.5 M_☉ for Type IIP events.

  2. Temperature Field Accurate Modeling and Cooling Performance Evaluation of Direct-Drive Outer-Rotor Air-Cooling In-Wheel Motor

    Directory of Open Access Journals (Sweden)

    Feng Chai

    2016-10-01

    Full Text Available High power density outer-rotor motors commonly use water or oil cooling. A reasonable thermal design for outer-rotor air-cooling motors can effectively enhance the power density without a fluid circulating device. Research on the heat dissipation mechanism of an outer-rotor air-cooling motor can provide guidelines for the selection of a suitable cooling mode and the design of the cooling structure. This study investigates the temperature field of the motor through computational fluid dynamics (CFD) and presents a method to overcome the difficulties in building an accurate temperature field model. The proposed method mainly includes two aspects: a new method for calculating the equivalent thermal conductivity (ETC) of the air-gap in the laminar state, and an equivalent treatment of the thermal circuit that comprises the hub, shaft, and bearings. Using an outer-rotor air-cooling in-wheel motor as an example, the temperature field of this motor is calculated numerically using the proposed method; the results are experimentally verified. The heat transfer rate (HTR) of each cooling path is obtained using the numerical results and analytic formulas. The influences of the structural parameters on temperature increases and on the HTR of each cooling path are analyzed. Thereafter, the overload capability of the motor is analyzed under various overload conditions.

  3. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter.

    Science.gov (United States)

    Chowdhury, Amor; Sarjaš, Andrej

    2016-09-15

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.
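    At the core of the Unscented Kalman Filter used above is the unscented transform, which propagates a small set of deterministically chosen sigma points through the nonlinear model. A minimal generic numpy sketch (standard Wan-van der Merwe weights, not the paper's magnetic-suspension dynamics):

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through a nonlinearity f using 2n+1 sigma points."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    # Sigma points: the mean plus/minus scaled covariance square-root columns
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    # Transform the points and recombine into output mean and covariance
    y = np.array([f(p) for p in sigma])
    y_mean = wm @ y
    diff = y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# For a linear map the transform is exact: it must reproduce A m and A P A^T
A = np.array([[1.0, 0.1], [0.0, 1.0]])
m, P = np.zeros(2), np.eye(2)
ym, yP = unscented_transform(m, P, lambda x: A @ x)
print(ym, yP)
```

    In the full UKF, this transform is applied once for the process model and once for the measurement model per time step, followed by a standard Kalman gain update.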

  4. Application of anatomically accurate, patient-specific 3D printed models from MRI data in urological oncology

    International Nuclear Information System (INIS)

    Wake, N.; Chandarana, H.; Huang, W.C.; Taneja, S.S.; Rosenkrantz, A.B.

    2016-01-01

    Highlights: • We examine 3D printing in the context of urologic oncology. • Patient-specific 3D printed kidney and prostate tumor models were created. • 3D printed models extend the current capabilities of conventional 3D visualization. • 3D printed models may be used for surgical planning and intraoperative guidance.

  5. Accurate Theoretical Methane Line Lists in the Infrared up to 3000 K and Quasi-continuum Absorption/Emission Modeling for Astrophysical Applications

    Energy Technology Data Exchange (ETDEWEB)

    Rey, Michael; Tyuterev, Vladimir G. [Groupe de Spectrométrie Moléculaire et Atmosphérique, UMR CNRS 7331, BP 1039, F-51687, Reims Cedex 2 (France); Nikitin, Andrei V., E-mail: michael.rey@univ-reims.fr [Laboratory of Theoretical Spectroscopy, Institute of Atmospheric Optics, SB RAS, 634055 Tomsk (Russian Federation)

    2017-10-01

    Modeling atmospheres of hot exoplanets and brown dwarfs requires high-T databases that include methane as the major hydrocarbon. We report a complete theoretical line list of 12CH4 in the infrared range 0–13,400 cm−1 up to T_max = 3000 K computed via a full quantum-mechanical method from ab initio potential energy and dipole moment surfaces. Over 150 billion transitions were generated with the lower rovibrational energy cutoff 33,000 cm−1 and intensity cutoff down to 10−33 cm/molecule to ensure convergent opacity predictions. Empirical corrections for 3.7 million of the strongest transitions permitted line position accuracies of 0.001–0.01 cm−1. Full data are partitioned into two sets. “Light lists” contain strong and medium transitions necessary for an accurate description of sharp features in absorption/emission spectra. For a fast and efficient modeling of quasi-continuum cross sections, billions of tiny lines are compressed in “super-line” libraries according to Rey et al. These combined data will be freely accessible via the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru), which provides a user-friendly interface for simulations of absorption coefficients, cross-sectional transmittance, and radiance. Comparisons with cold, room, and high-T experimental data show that the data reported here represent the first global theoretical methane lists suitable for high-resolution astrophysical applications.

  6. 3D reconstruction of coronary arteries from 2D angiographic projections using non-uniform rational basis splines (NURBS) for accurate modelling of coronary stenoses.

    Directory of Open Access Journals (Sweden)

    Francesca Galassi

    Full Text Available Assessment of coronary stenosis severity is crucial in clinical practice. This study proposes a novel method to generate 3D models of stenotic coronary arteries, directly from 2D coronary images, suitable for immediate assessment of stenosis severity. From multiple 2D X-ray coronary arteriogram projections, 2D vessels were extracted. A 3D centreline was reconstructed as the intersection of surfaces from corresponding branches. Next, 3D luminal contours were generated in a two-step process: first, a Non-Uniform Rational B-Spline (NURBS) circular contour was designed and, second, its control points were adjusted to interpolate computed 3D boundary points. Finally, a 3D surface was generated as an interpolation across the control points of the contours and used in the analysis of the severity of a lesion. To evaluate the method, we compared 3D reconstructed lesions with Optical Coherence Tomography (OCT), an invasive imaging modality that enables high-resolution endoluminal visualization of lesion anatomy. Validation was performed on routine clinical data. Analysis of paired cross-sectional area discrepancies indicated that the proposed method more closely represented OCT contours than conventional approaches to luminal surface reconstruction, with overall root-mean-square errors ranging from 0.213 mm2 to 1.013 mm2, and a maximum error of 1.837 mm2. Comparison of the volume reduction due to a lesion with the corresponding FFR measurement suggests that the method may help in estimating the physiological significance of a lesion. The algorithm accurately reconstructed 3D models of lesioned arteries and enabled quantitative assessment of stenoses. The proposed method has the potential to allow immediate analysis of stenoses in clinical practice, thereby providing incremental diagnostic and prognostic information to guide treatments in real time and without the need for invasive techniques.

  7. The enhanced local pressure model for the accurate analysis of fluid pressure driven fracture in porous materials

    NARCIS (Netherlands)

    Remij, E.W.; Remmers, J.J.C.; Huyghe, J.M.R.J.; Smeulders, D.M.J.

    2015-01-01

    In this paper, we present an enhanced local pressure model for modelling fluid pressure driven fractures in porous saturated materials. Using the partition-of-unity property of finite element shape functions, we describe the displacement and pressure fields across the fracture as a strong discontinuity.

  8. On the more accurate channel model and positioning based on time-of-arrival for visible light localization

    Science.gov (United States)

    Amini, Changeez; Taherpour, Abbas; Khattab, Tamer; Gazor, Saeed

    2017-01-01

    This paper presents an improved propagation channel model for visible light in indoor environments. We employ this model to derive an enhanced positioning algorithm based on the relation between the time-of-arrivals (TOAs) and the distances, for two cases in which the vertical distance between transmitter and receiver is either known or unknown. We propose two estimators, namely the maximum likelihood estimator and an estimator based on the method of moments. To provide an evaluation baseline for these methods, we calculate the Cramer-Rao lower bound (CRLB) on the performance of the estimators. We show that the proposed model and estimators yield superior positioning performance when the transmitter and receiver are perfectly synchronized, in comparison to the existing state-of-the-art counterparts. Moreover, the corresponding CRLB of the proposed model represents roughly a 20 dB reduction in the localization error bound compared with the previous model for some practical scenarios.
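    TOA-based positioning of this kind can be illustrated with a toy multilateration: ideal TOAs give 3D distances to known LED anchors, and the receiver's horizontal position is recovered by least squares. The anchor layout and the known vertical distance are hypothetical, and plain gradient descent stands in for the paper's ML and method-of-moments estimators:

```python
import math

# Hypothetical LED anchor positions (x, y) and a known vertical
# transmitter-receiver distance h, as in the "known distance" case.
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
h = 2.5  # metres

def toa_distances(pos):
    """3D distances implied by ideal (noise-free) TOAs for a receiver at pos."""
    return [math.sqrt((pos[0] - ax) ** 2 + (pos[1] - ay) ** 2 + h * h)
            for ax, ay in anchors]

def locate(dists, guess=(1.0, 1.0), iters=200, step=0.1):
    """Least-squares fit of the horizontal position by gradient descent
    on the distance residuals (a simple stand-in for the ML estimator)."""
    x, y = guess
    for _ in range(iters):
        gx = gy = 0.0
        for (ax, ay), d in zip(anchors, dists):
            r = math.sqrt((x - ax) ** 2 + (y - ay) ** 2 + h * h)
            gx += (r - d) * (x - ax) / r
            gy += (r - d) * (y - ay) / r
        x -= step * gx
        y -= step * gy
    return x, y

est = locate(toa_distances((2.5, 1.2)))
print(round(est[0], 3), round(est[1], 3))
```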

  9. New Provider Models for Sweden and Spain: Public, Private or Non-profit? Comment on "Governance, Government, and the Search for New Provider Models".

    Science.gov (United States)

    Jeurissen, Patrick P T; Maarse, Hans

    2016-06-29

    Sweden and Spain experiment with different provider models to reform healthcare provision. Both models have in common that they extend the role of the for-profit sector in healthcare. As the analysis of Saltman and Duran demonstrates, privatisation is an ambiguous and contested strategy that is used for quite different purposes. In our comment, we emphasize that their analysis leaves open questions about the consequences of privatisation for the performance of healthcare and the role of the public sector in healthcare provision. Furthermore, we briefly address the absence of the option of healthcare provision by not-for-profit providers in the privatisation strategies of Sweden and Spain. © 2016 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

  10. An improved mixing model providing joint statistics of scalar and scalar dissipation

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Daniel W. [Department of Energy Resources Engineering, Stanford University, Stanford, CA (United States); Jenny, Patrick [Institute of Fluid Dynamics, ETH Zurich (Switzerland)

    2008-11-15

    For the calculation of nonpremixed turbulent flames with thin reaction zones, the joint probability density function (PDF) of the mixture fraction and its dissipation rate plays an important role. The corresponding PDF transport equation involves a mixing model for the closure of the molecular mixing term. Here, the parameterized scalar profile (PSP) mixing model is extended to provide the required joint statistics. Model predictions are validated using direct numerical simulation (DNS) data of passive scalar mixing in a statistically homogeneous turbulent flow. Comparisons between the DNS and the model predictions are provided for different initial scalar-field lengthscales. (author)

  11. Improving optimal control of grid-connected lithium-ion batteries through more accurate battery and degradation modelling

    Science.gov (United States)

    Reniers, Jorn M.; Mulder, Grietus; Ober-Blöbaum, Sina; Howey, David A.

    2018-03-01

    The increased deployment of intermittent renewable energy generators opens up opportunities for grid-connected energy storage. Batteries offer significant flexibility but are relatively expensive at present. Battery lifetime is a key factor in the business case, and it depends on usage, but most techno-economic analyses do not account for this. For the first time, this paper quantifies the annual benefits of grid-connected batteries including realistic physical dynamics and nonlinear electrochemical degradation. Three lithium-ion battery models of increasing realism are formulated, and the predicted degradation of each is compared with a large-scale experimental degradation data set (Mat4Bat). Increasing the model accuracy improves the RMS capacity prediction error from 11% to 5%. The three models are then used within an optimal control algorithm to perform price arbitrage over one year, including degradation. Results show that revenue can be increased substantially, and degradation reduced, by using more realistic models. The estimated best-case profit using the most sophisticated model is a 175% improvement compared with the simplest model. This illustrates that using a simplistic battery model in a techno-economic assessment of grid-connected batteries might substantially underestimate the business case and lead to erroneous conclusions.

  12. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    Science.gov (United States)

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method has been developed that models the observed intensities as the sum of an exponentially distributed signal and normally distributed noise. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would provide a better model of the signal density. Hence, the normal-exponential model may not be appropriate for Illumina data, and background corrections derived from it may lead to erroneous estimates. We propose a more flexible model based on a gamma-distributed signal and normally distributed background noise, and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities; on the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma model. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement

  13. A deformation model of flexible, HAMR objects for accurate propagation under perturbations and the self-shadowing effects

    Science.gov (United States)

    Channumsin, Sittiporn; Ceriotti, Matteo; Radice, Gianmarco

    2018-02-01

    A new type of space debris in near-geosynchronous orbit (GEO) was recently discovered and later identified as exhibiting characteristics unique to high area-to-mass ratio (HAMR) objects, such as high rotation rates and high reflectivity. Observations have shown that this debris type is very sensitive to environmental disturbances, particularly solar radiation pressure, because its motion depends on the actual effective area, the orientation of that effective area, and the reflection properties, and because the area-to-mass ratio of the object is not stable over time. Previous investigations have modelled this type of debris as a rigid body (constant area-to-mass ratio) or as a discrete deformed body; however, these simplifications lead to inaccurate long-term orbital predictions. This paper proposes a simple yet reliable model of a thin, deformable membrane based on multibody dynamics. The membrane is modelled as a series of flat plates, connected through joints, representing the flexibility of the membrane itself. The mass of the membrane, albeit low, is taken into account through lumped masses at the joints. The attitude and orbital motion of this flexible membrane model is then propagated near GEO to predict its orbital evolution under the perturbations of solar radiation pressure, Earth's gravity field (J2), third-body gravitational fields (the Sun and Moon) and self-shadowing. These results are then compared to those obtained for two rigid-body models (cannonball and flat rigid plate). In addition, Monte Carlo simulations of the flexible model, varying the initial attitude and deformation angle (i.e. shape), are investigated and compared with the two rigid models over a period of 100 days. The numerical results demonstrate that the cannonball and flat rigid plate models are not appropriate for capturing the true dynamical evolution of these objects, although the flexible model comes at the cost of increased computational time.
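    The gap between the cannonball and flat-plate assumptions can be seen in a small solar radiation pressure (SRP) sketch: the cannonball acceleration is attitude-independent, while a flat plate's depends strongly on the Sun incidence angle. The reflectivity, area-to-mass ratio and plate force law (absorption plus specular reflection) used here are illustrative assumptions, not values from the paper:

```python
import math

P = 4.56e-6  # solar radiation pressure at 1 AU, N/m^2

def srp_cannonball(area_to_mass, cr=1.2):
    """Cannonball model: acceleration magnitude independent of attitude."""
    return cr * P * area_to_mass

def srp_flat_plate(area_to_mass, incidence, rho=0.2):
    """Flat plate with absorption and specular reflection (magnitude only):
    F/(P*A) = cos(i) * [(1 - rho) * s_hat + 2 * rho * cos(i) * n_hat]."""
    c = math.cos(incidence)
    a, b = (1.0 - rho), 2.0 * rho * c        # components along s_hat and n_hat
    mag = c * math.sqrt(a * a + b * b + 2.0 * a * b * c)   # s_hat . n_hat = cos(i)
    return P * area_to_mass * mag

am = 20.0  # m^2/kg, a HAMR object
print(srp_cannonball(am))
print(srp_flat_plate(am, 0.0), srp_flat_plate(am, math.radians(60.0)))
```

At normal incidence the two models agree (with cr = 1 + rho); as the plate tilts, its acceleration drops, which is why attitude dynamics matter for HAMR propagation.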

  14. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    Full Text Available This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnect in high-speed CMOS circuits for ramp inputs. The metric is based on Burr's distribution function, which is used to characterize the normalized homogeneous portion of the step response. We use the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is justified by comparison with SPICE simulations.
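    Under a Burr XII fit F(t) = 1 − (1 + (t/τ)^c)^(−k) of the normalized step response, the 50% delay is simply the distribution median, which is available in closed form. A sketch with hypothetical fitted parameters (the actual fit from moments is not reproduced here):

```python
def burr_cdf(t, tau, c, k):
    """Burr XII CDF, used to model the normalized step response."""
    return 1.0 - (1.0 + (t / tau) ** c) ** (-k)

def burr_delay50(tau, c, k):
    """50% delay = median of the Burr XII distribution, in closed form:
    solving F(t) = 0.5 gives t = tau * (2**(1/k) - 1)**(1/c)."""
    return tau * (2.0 ** (1.0 / k) - 1.0) ** (1.0 / c)

# Hypothetical shape/scale parameters, as if fitted from an RC line's moments
tau, c, k = 1.0, 2.0, 1.5
t50 = burr_delay50(tau, c, k)
print(t50, burr_cdf(t50, tau, c, k))
```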

  15. Effectiveness of Video Modeling Provided by Mothers in Teaching Play Skills to Children with Autism

    Science.gov (United States)

    Besler, Fatma; Kurt, Onur

    2016-01-01

    Video modeling is an evidence-based practice that can be used to provide instruction to individuals with autism. Studies show that this instructional practice is effective in teaching many types of skills such as self-help skills, social skills, and academic skills. However, in previous studies, videos used in the video modeling process were…

  16. HPC Institutional Computing Project: W15_lesreactiveflow KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David Bradley [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Waters, Jiajia [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-05

    KIVA-hpFE is high-performance computer software for solving the physics of multi-species, multiphase turbulent reactive flow in complex geometries with immersed moving parts. The code is written in Fortran 90/95 and can be used on any computer platform with any popular compiler. The code comes in two versions: a serial version and a parallel version utilizing an MPICH2-type Message Passing Interface (MPI or Intel MPI) for solving distributed domains. The parallel version is at least 30x faster than the serial version and many times faster than our previous generation of parallel engine modeling software. The 5th-generation algorithm construction is a Galerkin-type Finite Element Method (FEM) solving conservative momentum, species, and energy transport equations along with a two-equation k-ω Reynolds-Averaged Navier-Stokes (RANS) turbulence model and a Vreman-type dynamic Large Eddy Simulation (LES) method. The LES method is capable of modeling transitional flow from laminar to fully turbulent; therefore, it does not require special hybrid or blending treatment near walls. The FEM projection method also uses Petrov-Galerkin (P-G) stabilization along with pressure stabilization. We employ hierarchical basis sets, constructed on the fly, with enrichment in areas of relatively larger error as determined by error estimation methods. In addition, when not using the hp-adaptive module, the code employs Lagrangian basis or shape functions. The shape functions are constructed for hexahedral, prismatic and tetrahedral elements. The software is designed to solve many types of reactive flow problems, from burners to internal combustion engines and turbines. In addition, the formulation allows for direct integration of solid bodies (conjugate heat transfer), as in heat transfer through housings, parts, and cylinders. It can also easily be extended to stress modeling of solids, used in fluid-structure interaction problems, solidification, porous media

  17. Development of accurate UWB dielectric properties dispersion at CST simulation tool for modeling microwave interactions with numerical breast phantoms

    International Nuclear Information System (INIS)

    Maher, A.; Quboa, K. M.

    2011-01-01

    In this paper, a reformulation of recently published dielectric-property dispersion models of breast tissues is carried out for use with the CST simulation tool. The reformulation includes tabulating the real and imaginary parts of these models versus frequency over the ultra-wideband (UWB) range using MATLAB programs. The tables are imported into the CST simulation tool and fitted to second- or first-order general equations. The results show good agreement between the original and the imported data. The MATLAB programs are included in the appendix.
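    A single-pole Debye dispersion is a common closed form for tabulating tissue permittivity over the UWB band; a sketch of the tabulation step, with illustrative parameter values rather than those of the cited models:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def debye_eps(f_hz, eps_inf, delta_eps, tau, sigma_s):
    """Single-pole Debye model with static conductivity:
    eps(w) = eps_inf + delta_eps / (1 + j*w*tau) - j*sigma_s / (w*eps0).
    Returns the complex relative permittivity."""
    w = 2.0 * math.pi * f_hz
    return eps_inf + delta_eps / (1.0 + 1j * w * tau) - 1j * sigma_s / (w * EPS0)

# Tabulate real part and loss (-imag) over part of the UWB band,
# using hypothetical tissue parameters
for f in (1e9, 3e9, 6e9, 10e9):
    e = debye_eps(f, eps_inf=7.0, delta_eps=40.0, tau=10e-12, sigma_s=0.5)
    print(f / 1e9, round(e.real, 2), round(-e.imag, 2))
```

A table generated this way can then be imported and fitted by the simulation tool, as the record describes.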

  18. Low- and high-order accurate boundary conditions: From Stokes to Darcy porous flow modeled with standard and improved Brinkman lattice Boltzmann schemes

    International Nuclear Information System (INIS)

    Silva, Goncalo; Talon, Laurent; Ginzburg, Irina

    2017-01-01

    and FEM is thoroughly evaluated in three benchmark tests, which are run across three distinctive permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which introduces the new challenge of the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise due to a deficient accommodation of the bulk solution by the low-accuracy boundary scheme. The third problem considers porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of grid resolution and the TRT free collision parameter on the accuracy and quality of the velocity field, spanning from Stokes to Darcy permeability regimes. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing body-fitted meshes.

  19. Low- and high-order accurate boundary conditions: From Stokes to Darcy porous flow modeled with standard and improved Brinkman lattice Boltzmann schemes

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Goncalo, E-mail: goncalo.nuno.silva@gmail.com [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France); Talon, Laurent, E-mail: talon@fast.u-psud.fr [CNRS (UMR 7608), Laboratoire FAST, Batiment 502, Campus University, 91405 Orsay (France); Ginzburg, Irina, E-mail: irina.ginzburg@irstea.fr [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France)

    2017-04-15

    and FEM is thoroughly evaluated in three benchmark tests, which are run across three distinctive permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which introduces the new challenge of the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise due to a deficient accommodation of the bulk solution by the low-accuracy boundary scheme. The third problem considers porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of grid resolution and the TRT free collision parameter on the accuracy and quality of the velocity field, spanning from Stokes to Darcy permeability regimes. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing body-fitted meshes.

  20. How accurate are estimates of glacier ice thickness? Results from ITMIX, the Ice Thickness Models Intercomparison eXperiment

    DEFF Research Database (Denmark)

    Farinotti, Daniel; Brinkerhoff, Douglas J.; Clarke, Garry K. C.

    2017-01-01

    Knowledge of the ice thickness distribution of glaciers and ice caps is an important prerequisite for many glaciological and hydrological investigations. A wealth of approaches has recently been presented for inferring ice thickness from characteristics of the surface. With the Ice Thickness Models...

  1. Accurate hardening modeling as basis for the realistic simulation of sheet forming processes with complex strain-path changes

    International Nuclear Information System (INIS)

    Levkovitch, Vladislav; Svendsen, Bob

    2007-01-01

    Sheet metal forming involves large strains and severe strain-path changes. Large plastic strains lead, in many metals, to the development of persistent dislocation structures resulting in strong flow anisotropy. This induced anisotropic behavior manifests itself, in the case of a strain-path change, through very different stress-strain responses depending on the type of the strain-path change. While many metals exhibit a drop of the yield stress (Bauschinger effect) after a load reversal, some metals show an increase of the yield stress after an orthogonal strain-path change (so-called cross hardening). To model the Bauschinger effect, kinematic hardening has been used successfully for years. However, kinematic hardening automatically produces a drop of the yield stress after an orthogonal strain-path change, contradicting tests that exhibit the cross-hardening effect. Another effect not accounted for in classical elasto-plasticity is the difference between tensile and compressive strength exhibited, e.g., by some steel materials. In this work we present a phenomenological material model whose structure is motivated by polycrystalline modeling that takes into account the evolution of polarized dislocation structures on the grain level - the main cause of the induced flow anisotropy on the macroscopic level. Besides the movement of the yield surface and its proportional expansion, as in conventional plasticity, the model also considers changes of the yield surface shape (distortional hardening) and accounts for the pressure dependence of the flow stress. All these additional attributes turn out to be essential for modeling the stress-strain response of dual-phase high-strength steels subjected to non-proportional loading.
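    The Bauschinger effect that motivates kinematic hardening can be reproduced with a few lines of 1D return-mapping plasticity. This is a textbook linear-kinematic-hardening sketch with illustrative material constants, not the distortional-hardening model of the paper:

```python
def simulate_1d(strain_path, E=200e3, sy=300.0, Hk=20e3):
    """1D rate-independent plasticity with linear kinematic hardening
    (return mapping). The back stress X translates the elastic range,
    so yielding after a load reversal starts below the initial yield
    stress: the Bauschinger effect. Stresses in MPa."""
    eps_p = X = 0.0
    stresses = []
    for eps in strain_path:
        trial = E * (eps - eps_p)
        f = abs(trial - X) - sy
        if f > 0.0:                      # plastic step: return mapping
            sign = 1.0 if trial > X else -1.0
            dgamma = f / (E + Hk)
            eps_p += dgamma * sign
            X += Hk * dgamma * sign
        stresses.append(E * (eps - eps_p))
    return stresses

# Load to 0.5% strain, then reverse to 0.2%, where reverse yielding resumes
path = ([0.005 * i / 50 for i in range(51)]
        + [0.005 - 0.003 * i / 30 for i in range(1, 31)])
s = simulate_1d(path)
print(round(max(s), 1), round(s[-1], 1))  # forward peak vs. stress after reversal
```

The stress magnitude at which plastic flow resumes on reversal is well below the forward peak, which is exactly the asymmetry kinematic hardening captures and pure isotropic hardening cannot.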

  2. Accurate Hardening Modeling As Basis For The Realistic Simulation Of Sheet Forming Processes With Complex Strain-Path Changes

    International Nuclear Information System (INIS)

    Levkovitch, Vladislav; Svendsen, Bob

    2007-01-01

    Sheet metal forming involves large strains and severe strain-path changes. Large plastic strains lead, in many metals, to the development of persistent dislocation structures resulting in strong flow anisotropy. This induced anisotropic behavior manifests itself, in the case of a strain-path change, through very different stress-strain responses depending on the type of the strain-path change. While many metals exhibit a drop of the yield stress (Bauschinger effect) after a load reversal, some metals show an increase of the yield stress after an orthogonal strain-path change (so-called cross hardening). To model the Bauschinger effect, kinematic hardening has been used successfully for years. However, kinematic hardening automatically produces a drop of the yield stress after an orthogonal strain-path change, contradicting tests that exhibit the cross-hardening effect. Another effect not accounted for in classical elasto-plasticity is the difference between tensile and compressive strength exhibited, e.g., by some steel materials. In this work we present a phenomenological material model whose structure is motivated by polycrystalline modeling that takes into account the evolution of polarized dislocation structures on the grain level - the main cause of the induced flow anisotropy on the macroscopic level. Besides the movement of the yield surface and its proportional expansion, as in conventional plasticity, the model also considers changes of the yield surface shape (distortional hardening) and accounts for the pressure dependence of the flow stress. All these additional attributes turn out to be essential for modeling the stress-strain response of dual-phase high-strength steels subjected to non-proportional loading.

  3. Video Modeling Training Effects on Types of Attention Delivered by Educational Care-Providers.

    Science.gov (United States)

    Taber, Traci A; Lambright, Nathan; Luiselli, James K

    2017-06-01

    We evaluated the effects of abbreviated (i.e., one-session) video modeling on the delivery of student-preferred attention by educational care-providers. The video depicted a novel care-provider interacting with and delivering attention to the student. Within a concurrent multiple-baseline design, video modeling increased delivery of the targeted attention for all participants, as well as their delivery of another type of attention that was not trained, although these effects were variable within and between care-providers. We discuss the clinical and training implications of these findings.

  4. Testing models of basin inversion in the eastern North Sea using exceptionally accurate thermal and maturity data

    DEFF Research Database (Denmark)

    Nielsen, S.B.; Clausen, O.R.; Gallagher, Kerry

    2011-01-01

    the thermal history information contained in high quality thermal maturity data comprising temperature profiles, vitrinite reflectance and apatite fission track data. Having remained open for experimental purposes, the data of two of the deep wells (Aars-1 and Farsoe-1) are of exceptionally high quality. Here...... about the magnitude of deposition and erosion during this hiatus. We use Markov Chain Monte Carlo with a transient one-dimensional thermal model to explore the parameter space of potential thermal history solutions, using the different available data as constraints. The variable parameters comprise...... inversion of the STZ. This is in agreement with numerical rheological models of inversion zone dynamics, which explain how marginal trough subsidence occurred as a consequence of late Cretaceous compressional inversion and erosion along the inversion axis (Nielsen et al. 2005, 2007). Following this, the in-plane...

  5. Development of an Anatomically Accurate Finite Element Human Ocular Globe Model for Blast-Related Fluid-Structure Interaction Studies

    Science.gov (United States)

    2017-02-01

    [Extraction fragments of figure captions: the lens nucleus (green) is attached to the shell of the eye via the zonule fibers (orange) and the ciliary body (pink); scleral thickness in the model varies following contours with thickness values from Norman et al. (Fig. 13), based on images from normal human subjects.]

  6. Establishing magnetic resonance imaging as an accurate and reliable tool to diagnose and monitor esophageal cancer in a rat model.

    Directory of Open Access Journals (Sweden)

    Juliann E Kosovec

    Full Text Available OBJECTIVE: To assess the reliability of magnetic resonance imaging (MRI) for detection of esophageal cancer in the Levrat model of end-to-side esophagojejunostomy. BACKGROUND: The Levrat model has proven utility in terms of its ability to replicate Barrett's carcinogenesis by inducing gastroduodenoesophageal reflux (GDER). Due to the lack of data on the utility of non-invasive methods for detection of esophageal cancer, treatment efficacy studies have been limited, as adenocarcinoma histology has only been validated post-mortem. It would therefore be of great value if the validity and reliability of MRI could be established in this setting. METHODS: Chronic GDER was induced in 19 male Sprague-Dawley rats using the modified Levrat model. At 40 weeks post-surgery, all animals underwent endoscopy, MRI scanning, and post-mortem histological analysis of the esophagus and anastomosis. With post-mortem histology serving as the gold standard, assessment of the presence of esophageal cancer was made by five esophageal specialists on endoscopy and by five radiologists on MRI. RESULTS: The accuracy of MRI and endoscopic analysis in correctly identifying cancer vs. no cancer was 85.3% and 50.5%, respectively. ROC curves demonstrated that the MRI rating had an AUC of 0.966 (p<0.001) and the endoscopy rating had an AUC of 0.534 (p = 0.804). The sensitivity and specificity of MRI for identifying cancer vs. no cancer were 89.1% and 80%, respectively, compared to 45.5% and 57.5% for endoscopy. False-positive rates of MRI and endoscopy were 20% and 42.5%, respectively. CONCLUSIONS: MRI is a more reliable diagnostic method than endoscopy in the Levrat model. The non-invasiveness of the tool and its potential to volumetrically quantify the size and number of tumors likely make it even more useful in evaluating novel agents and their efficacy in treatment studies of esophageal cancer.
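    The reported accuracy, sensitivity, specificity and false-positive rates all follow from a 2×2 confusion matrix. A sketch with hypothetical rating counts chosen only to mirror the reported MRI percentages (not the study's raw data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic test metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "false_positive_rate": fp / (fp + tn),
    }

# Hypothetical counts: 95 ratings (5 raters x 19 animals), cancer-positive
# histology in 55 of them
m = diagnostic_metrics(tp=49, fp=8, tn=32, fn=6)
for name, value in m.items():
    print(name, round(value, 3))
```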

  7. Rotating Arc Jet Test Model: Time-Accurate Trajectory Heat Flux Replication in a Ground Test Environment

    Science.gov (United States)

    Laub, Bernard; Grinstead, Jay; Dyakonov, Artem; Venkatapathy, Ethiraj

    2011-01-01

    Though arc jet testing has been the proven method for development testing and certification of TPS and TPS instrumentation, the operational aspects of arc jets limit testing to selected, but constant, conditions. Flight, on the other hand, produces time-varying entry conditions in which the heat flux increases, peaks, and recedes as a vehicle descends through an atmosphere. As a result, we are unable to "test as we fly." Attempts to replicate the time-dependent aerothermal environment of atmospheric entry by varying the arc jet facility operating conditions during a test have proven difficult, expensive, and only partially successful. A promising alternative is to rotate a test model exposed to a constant-condition arc jet flow to yield a time-varying test condition at a point on the test article (Fig. 1). The model shape and rotation rate can be engineered so that the heat flux at a point on the model replicates the predicted profile for a particular point on a flight vehicle. This simple concept will enable, for example, calibration of the TPS sensors on the Mars Science Laboratory (MSL) aeroshell for anticipated flight environments.

  8. An accurate tangential force-displacement model for granular-flow simulations: Contacting spheres with plastic deformation, force-driven formulation

    International Nuclear Information System (INIS)

    Vu-Quoc, L.; Lesburg, L.; Zhang, X.

    2004-01-01

    An elasto-plastic frictional tangential force-displacement (TFD) model for spheres in contact for accurate and efficient granular-flow simulations is presented in this paper; the present TFD is consistent with the elasto-plastic normal force-displacement (NFD) model presented in [ASME Journal of Applied Mechanics 67 (2) (2000) 363; Proceedings of the Royal Society of London, Series A 455 (1991) (1999) 4013]. The proposed elasto-plastic frictional TFD model is accurate, and is validated against non-linear finite-element analyses involving plastic flows under both loading and unloading conditions. The novelty of the present TFD model lies in (i) the additive decomposition of the elasto-plastic contact area radius into an elastic part and a plastic part, (ii) the correction of the particles' radii at the contact point, and (iii) the correction of the particles' elastic moduli. The correction of the contact-area radius represents an effect of plastic deformation in colliding particles; the correction of the radius of curvature represents a permanent indentation after impact; the correction of the elastic moduli represents a softening of the material due to plastic flow. The construction of both the present elasto-plastic frictional TFD model and its consistent companion, the elasto-plastic NFD model, parallels the formalism of the continuum theory of elasto-plasticity. Both NFD and TFD models form a coherent set of force-displacement (FD) models not available hitherto for granular-flow simulations, and are consistent with the Hertz, Cattaneo, Mindlin, Deresiewicz contact mechanics theory. Together, these FD models will allow for efficient simulations of granular flows (or granular gases) involving a large number of particles

  9. Charging and discharging tests for obtaining an accurate dynamic electro-thermal model of high power lithium-ion pack system for hybrid and EV applications

    DEFF Research Database (Denmark)

    Mihet-Popa, Lucian; Camacho, Oscar Mauricio Forero; Nørgård, Per Bromand

    2013-01-01

    This paper presents a battery test platform including two Li-ion batteries designed for hybrid and EV applications, and charging/discharging tests under different operating conditions carried out to develop an accurate dynamic electro-thermal model of a high-power Li-ion battery pack system. The aim of the tests has been to study the impact of battery degradation and to find the dynamic characteristics of the cells, including the nonlinear open-circuit voltage, series resistance and parallel transient circuit at different charge/discharge currents and cell temperatures. An equivalent circuit model, based on the runtime battery model and the Thevenin circuit model, with parameters obtained from the tests and depending on SOC, current and temperature, has been implemented in MATLAB/Simulink and Power Factory. A good alignment between simulations and measurements has been found.
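    The runtime/Thevenin structure described above can be sketched as a first-order equivalent circuit: terminal voltage = OCV(SOC) − I·R0 − V1, with one RC pair capturing the transient and SOC tracked by coulomb counting. All parameter values and the OCV curve are illustrative placeholders, not the fitted values from these tests:

```python
def ocv(soc):
    """Hypothetical linear open-circuit voltage curve (V) vs. state of charge."""
    return 3.0 + 1.2 * soc

def simulate(current, dt, capacity_ah=2.3, r0=0.01, r1=0.015, c1=2000.0):
    """First-order Thevenin model, explicit Euler in time.
    current: list of currents (A, positive = discharge); dt: step (s).
    Returns the terminal voltage history."""
    soc, v1 = 1.0, 0.0
    volts = []
    for i in current:
        soc -= i * dt / (capacity_ah * 3600.0)   # coulomb counting
        v1 += dt * (-v1 / (r1 * c1) + i / c1)    # RC transient branch
        volts.append(ocv(soc) - i * r0 - v1)
    return volts

# 1C discharge (2.3 A) for 60 s at 1 s steps
v = simulate([2.3] * 60, dt=1.0)
print(round(v[0], 3), round(v[-1], 3))
```

In the full model the parameters R0, R1, C1 and the OCV curve would themselves be lookup functions of SOC, current and temperature, as fitted from the charging/discharging tests.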

  10. Structural equation modelling of determinants of customer satisfaction of mobile network providers: Case of Kolkata, India

    Directory of Open Access Journals (Sweden)

    Shibashish Chakraborty

    2014-12-01

    Full Text Available The Indian market of mobile network providers is growing rapidly. India is the second largest market of mobile network providers in the world, and there is intense competition among existing players. In such a competitive market, customer satisfaction becomes a key issue. The objective of this paper is to develop a customer satisfaction model of mobile network providers in Kolkata. The results indicate that generic requirements (an aggregation of output quality and perceived value), flexibility, and price are the determinants of customer satisfaction. This study offers insights for mobile network providers to understand the determinants of customer satisfaction.

  11. Advanced Mechanistic 3D Spatial Modeling and Analysis Methods to Accurately Represent Nuclear Facility External Event Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Sezen, Halil [The Ohio State Univ., Columbus, OH (United States). Dept. of Civil, Environmental and Geodetic Engineering; Aldemir, Tunc [The Ohio State Univ., Columbus, OH (United States). College of Engineering, Nuclear Engineering Program, Dept. of Mechanical and Aerospace Engineering; Denning, R. [The Ohio State Univ., Columbus, OH (United States); Vaidya, N. [Rizzo Associates, Pittsburgh, PA (United States)

    2017-12-29

    Probabilistic risk assessment of nuclear power plants initially focused on events initiated by internal faults at the plant, rather than external hazards including earthquakes and flooding. Although the importance of external hazards risk analysis is now well recognized, the methods for analyzing low probability external hazards rely heavily on subjective judgment of specialists, often resulting in substantial conservatism. This research developed a framework to integrate the risk of seismic and flooding events using realistic structural models and simulation of response of nuclear structures. The results of four application case studies are presented.

  12. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving Schrödinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values, rather than the wavefunctions, to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to high accuracy.
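
The core idea of Richardson's extrapolation is to combine two finite-difference estimates at step sizes h and h/2 so that the leading error term cancels. A minimal sketch on a first-derivative stencil (a toy illustration, not the paper's Schrödinger solver):

```python
import math

def central_diff(f, x, h):
    """Second-order central finite difference for f'(x); error ~ O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """One Richardson step: combine the h and h/2 estimates so the O(h^2)
    error term cancels, leaving an O(h^4) result."""
    d_h, d_h2 = central_diff(f, x, h), central_diff(f, x, h / 2)
    return (4 * d_h2 - d_h) / 3

exact = math.cos(1.0)                       # true derivative of sin at x = 1
plain = central_diff(math.sin, 1.0, 0.1)    # crude-mesh estimate
extrap = richardson(math.sin, 1.0, 0.1)     # extrapolated estimate
```

On this example the extrapolated value is several orders of magnitude more accurate than the crude-mesh estimate, which is the same mechanism the abstract exploits for expectation values.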

  13. Efficient and accurate simulations of two-dimensional electronic photon-echo signals: Illustration for a simple model of the Fenna-Matthews-Olson complex

    International Nuclear Information System (INIS)

    Sharp, Leah Z.; Egorova, Dassia; Domcke, Wolfgang

    2010-01-01

    Two-dimensional (2D) photon-echo spectra of a single subunit of the Fenna-Matthews-Olson (FMO) bacteriochlorophyll trimer of Chlorobium tepidum are simulated, employing the equation-of-motion phase-matching approach (EOM-PMA). We consider a slightly extended version of the previously proposed Frenkel exciton model, which explicitly accounts for exciton coherences in the secular approximation. The study is motivated by a recent experiment reporting long-lived coherent oscillations in 2D transients [Engel et al., Nature 446, 782 (2007)] and aims primarily at accurate simulations of the spectroscopic signals, with the focus on oscillations of 2D peak intensities with population time. The EOM-PMA accurately accounts for finite pulse durations as well as pulse-overlap effects and does not invoke approximations apart from the weak-field limit for a given material system. The population relaxation parameters of the exciton model are taken from the literature. The effects of various dephasing mechanisms on coherence lifetimes are thoroughly studied. It is found that the experimentally detected multiple frequencies in peak oscillations cannot be reproduced by the employed FMO model, which calls for the development of a more sophisticated exciton model of the FMO complex.

  14. A new expression of Ns versus Ef to an accurate control charge model for AlGaAs/GaAs

    Science.gov (United States)

    Bouneb, I.; Kerrour, F.

    2016-03-01

    Semiconductor components have become the privileged support of information and communication, particularly thanks to the development of the internet. Today, MOS transistors on silicon largely dominate the semiconductor market; however, reducing the transistor gate length is not enough to enhance performance and keep pace with Moore's law, particularly for broadband telecommunications systems, where faster components are required. For this reason, alternative structures such as IV-IV or III-V heterostructures [1] have been proposed. The most effective components in this area are High Electron Mobility Transistors (HEMTs) on III-V substrates. This work contributes to the development of a numerical model based on physical and numerical modelling of the potential at the AlGaAs/GaAs heterostructure interface. We have developed a calculation using projective methods that allows integration of the Hamiltonian using Green functions in the Schrodinger equation, for a rigorous self-consistent resolution with the Poisson equation. A simple analytical approach for charge control in the quantum-well region of an AlGaAs/GaAs HEMT structure is presented. A charge-control equation, accounting for a variable average distance of the 2-DEG from the interface, is introduced. Our approach, which aims to obtain ns-Vg characteristics, is mainly based on a new linear expression for the Fermi-level variation with the two-dimensional electron gas density, on the notion of effective doping, and on a new expression for ΔEc.
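
The textbook form of such a charge-control relation, ns = (ε/(q·d))·(Vg − Voff − Ef) with a linear Fermi-level model Ef ≈ a·ns, can be solved in closed form. The sketch below uses illustrative constants, not the paper's fitted expressions, and folds the Fermi-level offset into Voff:

```python
# Standard charge-control sketch for the 2DEG in an AlGaAs/GaAs HEMT.
# All constants are illustrative assumptions.
Q = 1.602e-19            # elementary charge (C)
EPS = 12.2 * 8.854e-12   # AlGaAs permittivity (F/m), assumed
D = 30e-9                # effective barrier + 2DEG distance (m), assumed
VOFF = -0.5              # threshold voltage (V), assumed
A = 1.25e-17             # linear Fermi-level coefficient (V m^2), assumed
C = EPS / (Q * D)        # capacitive prefactor (V^-1 m^-2)

def sheet_density(vg):
    """Sheet carrier density ns (m^-2): solve ns = C*(vg - VOFF - A*ns),
    clamped at zero below threshold."""
    ns = C * (vg - VOFF) / (1.0 + C * A)
    return max(ns, 0.0)

densities = [sheet_density(v) for v in (0.0, 0.2, 0.4)]
```

With these assumed values the model gives densities around 10^16 m^-2 (10^12 cm^-2), the typical order of magnitude for a 2DEG, and ns increases linearly with gate voltage above threshold.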

  15. Mineral-associated organic matter: are we now on the right path to accurately measuring and modelling it?

    Science.gov (United States)

    Cotrufo, M. F.

    2017-12-01

    Mineral-associated organic matter (MAOM) is the largest and most persistent pool of carbon in soil. Understanding and correctly modeling its dynamics is key to suggesting management practices that can augment soil carbon storage for climate change mitigation, as well as increase soil organic matter (SOM) stocks to support soil health over the long term. In the Microbial Efficiency-Matrix Stabilization (MEMS) framework we proposed that, contrary to what was originally thought, this form of persistent SOM is derived from the labile components of plant inputs, through their efficient microbial processing. I will present results from several experiments using dual isotope labeling of plant inputs that largely confirm this view, and point to the key role of dissolved organic matter in MAOM formation and to the dynamic nature of the outer layer of MAOM. I will also show how we are incorporating this understanding into a new SOM model, which uses physically defined, measurable pools rather than turnover-defined pools to forecast C cycling in soil.
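
The efficiency-based transfer idea can be caricatured with a two-pool model in which a fraction of decomposed litter, set by microbial carbon-use efficiency, is stabilized as MAOM. This is a hedged sketch with assumed rate constants, not the model described in the abstract:

```python
# Minimal two-pool sketch inspired by efficiency-based SOM frameworks.
# Rate constants and the carbon-use efficiency (cue) are illustrative assumptions.
def simulate_som(litter0=100.0, k_litter=0.5, k_maom=0.01, cue=0.4, years=50, dt=0.1):
    """Forward-Euler integration; returns (litter, maom) carbon stocks after `years`."""
    litter, maom = litter0, 0.0
    for _ in range(int(years / dt)):
        decomp = k_litter * litter * dt  # litter carbon decomposed this step
        litter -= decomp
        maom += cue * decomp             # microbially processed C stabilized on minerals
        maom -= k_maom * maom * dt       # slow MAOM turnover
    return litter, maom

litter_c, maom_c = simulate_som()
```

The labile litter pool is consumed quickly, while a persistent MAOM stock builds up in proportion to the assumed microbial efficiency, mirroring the qualitative behavior the framework describes.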

  16. Social models provide a norm of appropriate food intake for young women.

    Directory of Open Access Journals (Sweden)

    Lenny R Vartanian

    Full Text Available It is often assumed that social models influence people's eating behavior by providing a norm of appropriate food intake, but this hypothesis has not been directly tested. In three experiments, female participants were exposed to a low-intake model, a high-intake model, or no model (control condition). Experiments 1 and 2 used a remote-confederate manipulation and were conducted in the context of a cookie taste test. Experiment 3 used a live confederate and was conducted in the context of a task during which participants were given incidental access to food. Participants also rated the extent to which their food intake was influenced by a variety of factors (e.g., hunger, taste, how much others ate). In all three experiments, participants in the low-intake conditions ate less than did participants in the high-intake conditions, and also reported a lower perceived norm of appropriate intake. Furthermore, perceived norms of appropriate intake mediated the effects of the social model on participants' food intake. Despite the observed effects of the social models, participants were much more likely to indicate that their food intake was influenced by taste and hunger than by the behavior of the social models. Thus, social models appear to influence food intake by providing a norm of appropriate eating behavior, but people may be unaware of the influence of a social model on their behavior.

  17. CLASH-VLT: INSIGHTS ON THE MASS SUBSTRUCTURES IN THE FRONTIER FIELDS CLUSTER MACS J0416.1–2403 THROUGH ACCURATE STRONG LENS MODELING

    International Nuclear Information System (INIS)

    Grillo, C.; Suyu, S. H.; Umetsu, K.; Rosati, P.; Caminha, G. B.; Mercurio, A.; Balestra, I.; Munari, E.; Nonino, M.; De Lucia, G.; Borgani, S.; Biviano, A.; Girardi, M.; Lombardi, M.; Gobat, R.; Coe, D.; Koekemoer, A. M.; Postman, M.; Zitrin, A.; Halkola, A.

    2015-01-01

    We present a detailed mass reconstruction and a novel study on the substructure properties in the core of the Cluster Lensing And Supernova survey with Hubble (CLASH) and Frontier Fields galaxy cluster MACS J0416.1–2403. We show and employ our extensive spectroscopic data set taken with the VIsible Multi-Object Spectrograph instrument as part of our CLASH-VLT program, to confirm spectroscopically 10 strong lensing systems and to select a sample of 175 plausible cluster members to a limiting stellar mass of log (M * /M ☉ ) ≅ 8.6. We reproduce the measured positions of a set of 30 multiple images with a remarkable median offset of only 0.''3 by means of a comprehensive strong lensing model comprised of two cluster dark-matter halos, represented by cored elliptical pseudo-isothermal mass distributions, and the cluster member components, parameterized with dual pseudo-isothermal total mass profiles. The latter have total mass-to-light ratios increasing with the galaxy HST/WFC3 near-IR (F160W) luminosities. The measurement of the total enclosed mass within the Einstein radius is accurate to ∼5%, including the systematic uncertainties estimated from six distinct mass models. We emphasize that the use of multiple-image systems with spectroscopic redshifts and knowledge of cluster membership based on extensive spectroscopic information is key to constructing robust high-resolution mass maps. We also produce magnification maps over the central area that is covered with HST observations. We investigate the galaxy contribution, both in terms of total and stellar mass, to the total mass budget of the cluster. When compared with the outcomes of cosmological N-body simulations, our results point to a lack of massive subhalos in the inner regions of simulated clusters with total masses similar to that of MACS J0416.1–2403. Our findings of the location and shape of the cluster dark-matter halo density profiles and on the cluster substructures provide intriguing

  19. Value-added strategy models to provide quality services in senior health business.

    Science.gov (United States)

    Yang, Ya-Ting; Lin, Neng-Pai; Su, Shyi; Chen, Ya-Mei; Chang, Yao-Mao; Handa, Yujiro; Khan, Hafsah Arshed Ali; Elsa Hsu, Yi-Hsin

    2017-06-20

    Rapid population aging is now a global issue. The increase in the elderly population will impact the health care industry and health enterprises; various senior needs will promote the growth of the senior health industry. Most senior health studies focus on the demand side and scarcely on supply. Our study selected quality enterprises focused on aging health and analyzed the different strategies they use to provide excellent quality services. We selected 33 quality senior health enterprises in Taiwan and investigated their quality service strategies through face-to-face semi-structured in-depth interviews with the CEO and managers of each enterprise in 2013. Setting: 33 senior health enterprises in Taiwan. Participants: 65 CEOs and managers of the 33 enterprises, interviewed individually. Intervention(s): None. Main Outcome Measure(s): core values and vision, organization structure, quality services provided, and strategies for quality services. The results indicated four types of value-added strategy models adopted by senior enterprises to offer quality services: (i) residential care and co-residence model, (ii) home care and living in place model, (iii) community e-business experience model and (iv) virtual and physical portable device model. The common feature of these four strategy models is that the services provided are elderly-centered. These models offer virtual and physical integration, and also offer total solutions for the elderly and their caregivers. Through investigation of successful strategy models for providing quality services to seniors, we identified opportunities to develop innovative service models and their successful characteristics; policy implications were also summarized. The observations from this study will serve as a primary evidence base for enterprises developing their senior market, and for promoting value co-creation through dialogue between customers and those that deliver services. © The Author 2017. Published by Oxford

  20. MJO-Related Tropical Convection Anomalies Lead to More Accurate Stratospheric Vortex Variability in Subseasonal Forecast Models.

    Science.gov (United States)

    Garfinkel, C I; Schwartz, C

    2017-10-16

    The effect of the Madden-Julian Oscillation (MJO) on the Northern Hemisphere wintertime stratospheric polar vortex in the period preceding stratospheric sudden warmings is evaluated in operational subseasonal forecasting models. Reforecasts which simulate stronger MJO-related convection in the Tropical West Pacific also simulate enhanced heat flux in the lowermost stratosphere and a more realistic vortex evolution. The time scale on which vortex predictability is enhanced lies between 2 and 4 weeks for nearly all cases. Those stratospheric sudden warmings that were preceded by a strong MJO event are more predictable at ∼20 day leads than those not preceded by an MJO event. Hence, knowledge of the MJO can contribute to enhanced predictability, at least in a probabilistic sense, of the Northern Hemisphere polar stratosphere.

  1. Accurate quantum chemical calculations

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  2. Accurate model of photon beams as a tool for commissioning and quality assurance of treatment planning calculations

    International Nuclear Information System (INIS)

    Linares Rosales, Haydee M.; Lara Mas, Elier; Alfonso Laguardia, Rodolfo

    2015-01-01

    Simulation of a linear accelerator (linac) head requires determining the parameters that characterize the primary electron beam striking the target, a step that plays a vital role in the accuracy of Monte Carlo calculations. In this work, the commissioning of photon beams (6 MV and 15 MV) of an Elekta Precise accelerator, using the Monte Carlo code EGSnrc, was performed. The influence of the primary electron beam characteristics on the absorbed dose distribution for the two photon qualities was studied. Using different combinations of mean energy and radial FWHM of the primary electron beam, deposited doses were calculated in a water phantom for different field sizes. Based on the deposited dose in the phantom, depth dose curves and lateral dose profiles were constructed and compared with experimental values measured in an arrangement similar to the simulation. Taking into account the main differences between calculations and measurements, acceptability criteria based on confidence limits were implemented. As expected, the lateral dose profiles for small field sizes were strongly influenced by the radial distribution (FWHM). The combinations of energy/FWHM that best reproduced the experimental results were used to generate the phase spaces, in order to obtain a model with the motorized wedge included and to calculate output factors. A good agreement was obtained between simulations and measurements for a wide range of field sizes, with all results within the tolerance range. (author)

  3. Accurate Masses, Radii, and Temperatures for the Eclipsing Binary V2154 Cyg, and Tests of Stellar Evolution Models

    Science.gov (United States)

    Bright, Jane; Torres, Guillermo

    2018-01-01

    We report new spectroscopic observations of the F-type triple system V2154 Cyg, in which two of the stars form an eclipsing binary with a period of 2.6306303 ± 0.0000038 days. We combine the results from our spectroscopic analysis with published light curves in the uvby Strömgren passbands to derive the first reported absolute dimensions of the stars in the eclipsing binary. The masses and radii are measured to better than 1.5% precision. For the primary and secondary respectively, we find that the masses are 1.269 ± 0.017 M⊙ and 0.7542 ± 0.0059 M⊙, the radii are 1.477 ± 0.012 R⊙ and 0.7232 ± 0.0091 R⊙, and the temperatures are 6770 ± 150 K and 5020 ± 150 K. Current models of stellar evolution agree with the measured properties of the primary, but the secondary is larger than predicted. This may be due to activity in the secondary, as has been shown for other systems with a star of similar mass exhibiting this same discrepancy. The SAO REU program is funded by the National Science Foundation REU and Department of Defense ASSURE programs under NSF Grant AST-1659473, and by the Smithsonian Institution. GT acknowledges partial support for this work from NSF grant AST-1509375.

  4. Provide a model to improve the performance of intrusion detection systems in the cloud

    OpenAIRE

    Foroogh Sedighi

    2016-01-01

    The high availability of tools and service providers in cloud computing, and the fact that cloud computing services are provided over the internet and deal with the public, have caused important challenges for this new computing model. Cloud computing faces problems and challenges such as user privacy, data security, data ownership, availability of services, recovery after failure, performance, scalability, and programmability. So far, many different methods are presented for detection of intrusion in clou...

  5. Wind farms providing secondary frequency regulation: evaluating the performance of model-based receding horizon control

    Directory of Open Access Journals (Sweden)

    C. R. Shapiro

    2018-01-01

    Full Text Available This paper is an extended version of our paper presented at the 2016 TORQUE conference (Shapiro et al., 2016). We investigate the use of wind farms to provide secondary frequency regulation for a power grid using a model-based receding horizon control framework. In order to enable real-time implementation, the control actions are computed based on a time-varying one-dimensional wake model. This model describes wake advection and wake interactions, both of which play an important role in wind farm power production. In order to test the control strategy, it is implemented in a large-eddy simulation (LES) model of an 84-turbine wind farm using the actuator disk turbine representation. Rotor-averaged velocity measurements at each turbine are used to provide feedback for error correction. The importance of including the dynamics of wake advection in the underlying wake model is tested by comparing the performance of this dynamic-model control approach to a comparable static-model control approach that relies on a modified Jensen model. We compare the performance of both control approaches using two types of regulation signals, RegA and RegD, which are used by PJM, an independent system operator in the eastern United States. The poor performance of the static-model control relative to the dynamic-model control demonstrates that modeling the dynamics of wake advection is key to providing the proposed type of model-based coordinated control of large wind farms. We further explore the performance of the dynamic-model control via composite performance scores used by PJM to qualify plants for regulation services or markets. Our results demonstrate that the dynamic-model-controlled wind farm consistently performs well, passing the qualification threshold for all fast-acting RegD signals. For the RegA signal, which changes over slower timescales, the dynamic-model control leads to average performance that surpasses the qualification threshold, but further
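
The static baseline mentioned above, the Jensen model, estimates wake losses from a linearly expanding top-hat velocity deficit. A minimal sketch for a single upstream turbine follows; the thrust coefficient, wake decay constant, and wind speed are illustrative assumptions:

```python
# Jensen (top-hat) wake model sketch: an upstream turbine's wake reduces the
# wind speed seen by a downstream turbine. Parameter values are illustrative.
def jensen_deficit(x, rotor_d, ct=0.8, k=0.05):
    """Fractional centerline velocity deficit at downstream distance x (m)."""
    a = 0.5 * (1.0 - (1.0 - ct) ** 0.5)        # axial induction from thrust coefficient
    r0 = rotor_d / 2.0
    return 2.0 * a * (r0 / (r0 + k * x)) ** 2  # deficit decays as the wake expands linearly

U_INF = 8.0      # freestream wind speed (m/s)
ROTOR_D = 100.0  # rotor diameter (m)
u_down = U_INF * (1.0 - jensen_deficit(7 * ROTOR_D, ROTOR_D))

# Turbine power scales with the cube of wind speed, so wake losses are amplified:
power_ratio = (u_down / U_INF) ** 3
```

Because this deficit is purely a function of distance, the static model cannot represent the time it takes a wake to advect between turbines, which is the dynamic effect the paper's control framework captures.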

  6. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction II: Nonplanar Molecules.

    Science.gov (United States)

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-11-14

    The crystal structure prediction (CSP) of a given compound from its molecular diagram is a fundamental challenge in computational chemistry with implications in relevant technological fields. A key component of CSP is the method to calculate the lattice energy of a crystal, which allows the ranking of candidate structures. This work is the second part of our investigation to assess the potential of the exchange-hole dipole moment (XDM) dispersion model for crystal structure prediction. In this article, we study the relatively large, nonplanar, mostly flexible molecules in the first five blind tests held by the Cambridge Crystallographic Data Centre. Four of the seven experimental structures are predicted as the energy minimum, and thermal effects are demonstrated to have a large impact on the ranking of at least another compound. As in the first part of this series, delocalization error affects the results for a single crystal (compound X), in this case by detrimentally overstabilizing the π-conjugated conformation of the monomer. Overall, B86bPBE-XDM correctly predicts 16 of the 21 compounds in the five blind tests, a result similar to the one obtained using the best CSP method available to date (dispersion-corrected PW91 by Neumann et al.). Perhaps more importantly, the systems for which B86bPBE-XDM fails to predict the experimental structure as the energy minimum are mostly the same as with Neumann's method, which suggests that similar difficulties (absence of vibrational free energy corrections, delocalization error,...) are not limited to B86bPBE-XDM but affect GGA-based DFT-methods in general. Our work confirms B86bPBE-XDM as an excellent option for crystal energy ranking in CSP and offers a guide to identify crystals (organic salts, conjugated flexible systems) where difficulties may appear.

  7. On the Accurate Determination of Shock Wave Time-Pressure Profile in the Experimental Models of Blast-Induced Neurotrauma

    Directory of Open Access Journals (Sweden)

    Maciej Skotak

    2018-02-01

    Full Text Available Measurement issues leading to the acquisition of artifact-free shock wave pressure-time profiles are discussed. We address the importance of in-house sensor calibration and data acquisition sampling rate. Sensor calibration takes into account possible differences between the calibration methodology in a manufacturing facility and that used in the specific laboratory. We found in-house calibration factors of brand new sensors differ by less than 10% from their manufacturer-supplied data. Larger differences were noticeable for sensors that have been used for hundreds of experiments, and were as high as 30% for sensors close to the end of their useful lifetime. These observations were despite the fact that typical overpressures in our experiments do not exceed 50 psi for sensors that are rated at 1,000 psi maximum pressure. We demonstrate that a sampling rate of 1,000 kHz is necessary to capture correct rise time values, but there were no statistically significant differences between peak overpressure and impulse values for low-intensity shock waves (Mach number <2) at lower rates. We discuss two sources of experimental error, originating from mechanical vibration and electromagnetic interference, affecting the quality of a waveform recorded using state-of-the-art high-frequency pressure sensors. The implementation of preventive measures, pressure acquisition artifacts, and data interpretation with examples are provided in this paper to help the community at large avoid these mistakes. In order to facilitate inter-laboratory data comparison, common reporting standards should be developed by the blast TBI research community. We noticed that the majority of published literature on the subject limits reporting to peak overpressure, with much less attention directed toward other important parameters, i.e., duration, impulse, and dynamic pressure. These parameters should be included as a mandatory requirement in publications so the results can be properly
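
The sampling-rate effect can be illustrated by sampling an idealized blast waveform at two rates and comparing the measured peaks. The waveform shape and all parameter values below are illustrative assumptions, not the paper's data:

```python
import math

# Idealized blast waveform: 2 microsecond linear rise, then exponential decay.
def pressure(t, peak=50.0, rise=2e-6, tau=50e-6):
    """Overpressure (psi) at time t (s)."""
    if t < 0.0:
        return 0.0
    if t < rise:
        return peak * t / rise                  # fast linear rise to the peak
    return peak * math.exp(-(t - rise) / tau)   # exponential decay after the peak

def sampled_peak(rate_hz, duration=1e-3):
    """Largest overpressure seen when sampling at the given rate."""
    n = int(duration * rate_hz)
    return max(pressure(i / rate_hz) for i in range(n))

peak_fast = sampled_peak(1_000_000)  # 1000 kHz resolves the rise and catches the true peak
peak_slow = sampled_peak(50_000)     # 50 kHz first samples the waveform well after the peak
```

The undersampled trace misses the true peak entirely, which is why rise-time and peak-pressure reporting depends so strongly on the acquisition rate.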

  8. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions.

    Science.gov (United States)

    Zuñiga, Cristal; Li, Chien-Ting; Huelsman, Tyler; Levering, Jennifer; Zielinski, Daniel C; McConnell, Brian O; Long, Christopher P; Knoshaug, Eric P; Guarnieri, Michael T; Antoniewicz, Maciek R; Betenbaugh, Michael J; Zengler, Karsten

    2016-09-01

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. © 2016 American Society of Plant Biologists. All rights reserved.

  9. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions

    Science.gov (United States)

    Zuñiga, Cristal; Li, Chien-Ting; Zielinski, Daniel C.; Guarnieri, Michael T.; Antoniewicz, Maciek R.; Zengler, Karsten

    2016-01-01

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. PMID:27372244

  10. Comparisons of aerosol optical depth provided by seviri satellite observations and CAMx air quality modelling

    Science.gov (United States)

    Fernandes, A.; Riffler, M.; Ferreira, J.; Wunderle, S.; Borrego, C.; Tchepel, O.

    2015-04-01

    Satellite data provide high spatial coverage and characterization of atmospheric components over the vertical column. Additionally, the use of air pollution modelling in combination with satellite data opens the challenging perspective of analysing the contribution of different pollution sources and transport processes. The main objective of this work is to study the aerosol optical depth (AOD) over Portugal using satellite observations in combination with air pollution modelling. For this purpose, satellite AOD data at 550 nm provided by the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) on board the geostationary Meteosat-9 satellite and modelling results from the Chemical Transport Model (CAMx - Comprehensive Air quality Model) were analysed. The study period was May 2011 and the aim was to analyse the spatial variations of AOD over Portugal. In this study, a multi-temporal technique to retrieve AOD over land from SEVIRI was used. The proposed method takes advantage of SEVIRI's high temporal resolution of 15 minutes and its high spatial resolution. CAMx provides the size distribution of each aerosol constituent among a number of fixed size sections. For post-processing, CAMx output species per size bin have been grouped into total particulate sulphate (PSO4), total primary and secondary organic aerosols (POA + SOA), total primary elemental carbon (PEC), and primary inert material per size bin (CRST_1 to CRST_4) to be used in the AOD quantification. The AOD was calculated by integrating the aerosol extinction coefficient (Qext) over the vertical column. The results were analysed in terms of temporal and spatial variations. The analysis shows that the implemented methodology provides good spatial agreement between modelling results and satellite observations for the dust outbreak studied (10-17 May 2011), with a correlation coefficient of r = 0.79 between the two datasets. This work provides relevant background for the integration of these two types of data.
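The column AOD computation described above, integrating the extinction coefficient over the vertical column, can be sketched as a layer-by-layer sum; the layer thicknesses and extinction values below are illustrative, not CAMx output.

```python
import numpy as np

# Hypothetical model-layer thicknesses (m) and aerosol extinction
# coefficients (1/m) for one column; values are invented for illustration.
layer_thickness_m = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
extinction_per_m = np.array([2e-4, 1.5e-4, 1e-4, 5e-5, 1e-5])

def column_aod(ext, dz):
    """AOD is the vertical integral of extinction, approximated here as
    sum(ext_i * dz_i) over the model layers (a dimensionless quantity)."""
    return float(np.sum(ext * dz))

aod = column_aod(extinction_per_m, layer_thickness_m)
print(round(aod, 4))  # 0.073
```

The same sum, performed per size bin and per species group (PSO4, POA + SOA, PEC, CRST), and then totaled, yields the model AOD that is compared against the SEVIRI retrieval.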

  11. The dynamic information architecture system: a simulation framework to provide interoperability for process models

    International Nuclear Information System (INIS)

    Hummel, J. R.; Christiansen, J. H.

    2002-01-01

    As modeling and simulation becomes a more important part of the day-to-day activities in industry and government, organizations are being faced with the vexing problem of how to integrate a growing suite of heterogeneous models both within their own organizations and between organizations. The Argonne National Laboratory, which is operated by the University of Chicago for the United States Department of Energy, has developed the Dynamic Information Architecture System (DIAS) to address such problems. DIAS is an object-oriented, subject domain independent framework that is used to integrate legacy or custom-built models and applications. In this paper we will give an overview of the features of DIAS and give examples of how it has been used to integrate models in a number of applications. We shall also describe some of the key supporting DIAS tools that provide seamless interoperability between models and applications

  12. Power and type I error results for a bias-correction approach recently shown to provide accurate odds ratios of genetic variants for the secondary phenotypes associated with primary diseases.

    Science.gov (United States)

    Wang, Jian; Shete, Sanjay

    2011-11-01

    We recently proposed a bias correction approach to evaluate accurate estimation of the odds ratio (OR) of genetic variants associated with a secondary phenotype, in which the secondary phenotype is associated with the primary disease, based on the original case-control data collected for the purpose of studying the primary disease. As reported in this communication, we further investigated the type I error probabilities and powers of the proposed approach, and compared the results to those obtained from logistic regression analysis (with or without adjustment for the primary disease status). We performed a simulation study based on a frequency-matching case-control study with respect to the secondary phenotype of interest. We examined the empirical distribution of the natural logarithm of the corrected OR obtained from the bias correction approach and found it to be normally distributed under the null hypothesis. On the basis of the simulation study results, we found that the logistic regression approaches that adjust or do not adjust for the primary disease status had low power for detecting secondary phenotype associated variants and highly inflated type I error probabilities, whereas our approach was more powerful for identifying the SNP-secondary phenotype associations and had better-controlled type I error probabilities. © 2011 Wiley Periodicals, Inc.
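The analysis above works on the natural logarithm of the corrected odds ratio, which the simulations found to be normally distributed under the null. As a reminder of the basic quantities involved, here is a minimal sketch computing an OR and a Wald confidence interval on the log scale from a 2x2 table; the counts are invented for illustration, and this is not the paper's bias-correction procedure.

```python
import math

# Illustrative 2x2 counts: genotype carrier status x secondary phenotype.
a, b, c, d = 40, 60, 25, 75  # hypothetical cell counts

or_hat = (a * d) / (b * c)             # odds ratio estimate
log_or = math.log(or_hat)              # work on the log scale, as in the paper
se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Wald standard error of log(OR)
ci = (math.exp(log_or - 1.96 * se),    # back-transform the 95% CI
      math.exp(log_or + 1.96 * se))
print(round(or_hat, 3))  # 2.0
```

Normality of log(OR) under the null is what licenses Wald-type intervals and tests of this form; the paper's contribution is showing that the *corrected* estimator retains this property.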

  13. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Energy Technology Data Exchange (ETDEWEB)

    Rybynok, V O; Kyriacou, P A [City University, London (United Kingdom)

    2007-10-15

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that the prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from complex biological media.

  14. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    Science.gov (United States)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that the prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from complex biological media.

  15. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    International Nuclear Information System (INIS)

    Rybynok, V O; Kyriacou, P A

    2007-01-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean that the prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from complex biological media.
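In its simplest single-wavelength form, the spectroscopic estimation problem described in the records above reduces to inverting the Beer-Lambert law, A = ε·c·l. A minimal sketch follows; the absorptivity, path length, and absorbance values are invented for illustration and are not from the paper.

```python
# Beer-Lambert law: absorbance A = epsilon * c * l.
# Solving for concentration c is the simplest single-wavelength case of
# the spectroscopic estimation problem; real tissue requires the kind of
# adaptive light-tissue interaction model the paper proposes.
def concentration(absorbance, epsilon, path_length_cm):
    return absorbance / (epsilon * path_length_cm)

# Hypothetical values: absorptivity 0.25 L/(mmol*cm), 1 cm path length.
c = concentration(absorbance=0.5, epsilon=0.25, path_length_cm=1.0)
print(c)  # 2.0 (mmol/L)
```

In vivo, scattering and overlapping absorbers break this simple inversion, which is why a calibration-free scheme needs an adaptive model of light-tissue interaction rather than a fixed linear law.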

  16. Comparing consumer-directed and agency models for providing supportive services at home.

    Science.gov (United States)

    Benjamin, A E; Matthias, R; Franke, T M

    2000-04-01

    To examine the service experiences and outcomes of low-income Medicaid beneficiaries with disabilities under two different models for organizing home-based personal assistance services: agency-directed and consumer-directed. A survey of a random sample of 1,095 clients, age 18 and over, who receive services in California's In-Home Supportive Services (IHSS) program, funded primarily by Medicaid. Other data were obtained from the California Management and Payrolling System (CMIPS). The sample was stratified by service model (agency-directed or consumer-directed), client age (over or under age 65), and severity. Data were collected on client demographics, condition/functional status, and supportive service experience. Outcome measures were developed in three areas: safety, unmet need, and service satisfaction. Factor analysis was used to reduce multiple outcome measures to nine dimensions. Multiple regression analysis was used to assess the effect of service model on each outcome dimension, taking into account the client-provider relationship, client demographics, and case mix. Recipients of IHSS services as of mid-1996 were interviewed by telephone. The survey was conducted in late 1996 and early 1997. On various outcomes, recipients in the consumer-directed model report more positive outcomes than those in the agency model, or they report no difference. Statistically significant differences emerge on recipient safety, unmet needs, and service satisfaction. A family member present as a paid provider is also associated with more positive reported outcomes within the consumer-directed model, but model differences persist even when this is taken into account. Although both models have strengths and weaknesses, from a recipient perspective the consumer-directed model is associated with more positive outcomes. Health professionals have expressed concerns about the capacity of consumer direction to assure quality, particularly with respect to safety and meeting unmet needs.

  17. MODEL OF PROVIDING WITH DEVELOPMENT STRATEGY FOR INFORMATION TECHNOLOGIES IN AN ORGANIZATION

    Directory of Open Access Journals (Sweden)

    A. A. Kuzkin

    2015-03-01

    Subject of research. The paper presents research and instructional tools for assessing how well an organization's information technology development strategy is being carried out. Method. An assessment model is developed that takes into account the equilibrium of IT processes with respect to selected efficiency factors of information technology application. Basic results. The distinctive feature of the model is its use of neuro-fuzzy approximators, in which conclusions are drawn by fuzzy logic and membership functions are tuned by neural networks. To test the adequacy of the suggested model, a due-diligence analysis was carried out for the IT strategy executed in the "Navigator" group of companies at the stage of implementing and supporting new technologies and production methods. A circle diagram is used to visualize and compare the analysis results. The adequacy of the chosen model is supported by the agreement between predictive estimates of IT-strategy performance targets, derived with the fuzzy cognitive model over a 12-month planning horizon, and the actual values of those targets at the end of the planning term. Practical significance. The developed model makes it possible to assess the sustainability of attaining the required level of IT-strategy realization based on fuzzy cognitive map analysis, and to reveal trends in an organization's IT objectives over the stated planning interval.
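The neuro-fuzzy approximators mentioned above combine fuzzy membership functions with neural-network tuning of their parameters. As a minimal illustration of the fuzzy-logic side only, here is a triangular membership function; the shape parameters are invented for illustration and are not from the assessment model.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: rises linearly from a to the peak b,
    falls linearly from b to c, and is zero outside [a, c]. In a
    neuro-fuzzy system, a, b and c would be tuned by a neural network."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical "medium" membership for an efficiency factor scored 0-10.
print(tri_membership(5.0, 0.0, 5.0, 10.0))  # 1.0 (full membership at the peak)
```

A fuzzy rule base evaluates such memberships for each efficiency factor and aggregates them into the kind of sustainability assessment the abstract describes.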

  18. Electron scattering data as the basis for kinetic models -- what can we realistically provide, and how?

    Science.gov (United States)

    Buckman, Stephen

    2009-10-01

    It is unlikely that anyone would dispute the important role that the availability of accurate data can play in the modeling and simulation of low temperature plasmas. Fundamental measurements of collision processes, from the relatively simple (e.g., elastic scattering) to the complex (e.g., molecular dissociation), are critical to developing an understanding of discharge and plasma behaviour. While there has been a healthy relationship between the data users and data gatherers at meetings such as GEC for many years, there are often misunderstandings about the capabilities that reside in each of these areas, and how best to maintain and strengthen the communication between them. This paper will attempt to summarise those electron-driven processes that are accessible, in a quantitative sense, in modern scattering experiments. Advances in treating reactive and excited species will also be discussed, as will the potential to push our measurement technologies further. An inescapable conclusion is that the collision community can best contribute through a strategic alliance between experiment and theory. Theory should be benchmarked against experiment for those processes and targets that are accessible, and used wisely for those processes where experiment cannot contribute.

  19. Simulation model for transcervical laryngeal injection providing real-time feedback.

    Science.gov (United States)

    Ainsworth, Tiffiny A; Kobler, James B; Loan, Gregory J; Burns, James A

    2014-12-01

    This study aimed to develop and evaluate a model for teaching transcervical laryngeal injections. A 3-dimensional printer was used to create a laryngotracheal framework based on de-identified computed tomography images of a human larynx. The arytenoid cartilages and intrinsic laryngeal musculature were created in silicone from clay casts and thermoplastic molds. The thyroarytenoid (TA) muscle was created with electrically conductive silicone using metallic filaments embedded in silicone. Wires connected TA muscles to an electrical circuit incorporating a cell phone and speaker. A needle electrode completed the circuit when inserted in the TA during simulated injection, providing real-time feedback of successful needle placement by producing an audible sound. Face validation by the senior author confirmed appropriate tactile feedback and anatomical realism. Otolaryngologists pilot tested the model and completed presimulation and postsimulation questionnaires. The high-fidelity simulation model provided tactile and audio feedback during needle placement, simulating transcervical vocal fold injections. Otolaryngology residents demonstrated higher comfort levels with transcervical thyroarytenoid injection on postsimulation questionnaires. This is the first study to describe a simulator for developing transcervical vocal fold injection skills. The model provides real-time tactile and auditory feedback that aids in skill acquisition. Otolaryngologists reported increased confidence with transcervical injection after using the simulator. © The Author(s) 2014.

  20. Non-Model-Based Control of a Wheeled Vehicle Pulling Two Trailers to Provide Early Powered Mobility and Driving Experiences.

    Science.gov (United States)

    Sanders, David A

    2018-01-01

    Non-model-based control of a wheeled vehicle pulling two trailers is proposed. The vehicle is a fun train for disabled children, consisting of a locomotive and two carriages. The fun train has afforded opportunities for both disabled and able-bodied young people to share an activity and has provided early driving experiences for disabled children; it has introduced them to assistive and powered mobility. The train is a nonlinear system subject to nonholonomic kinematic constraints, so that position and state depend on the path taken to get there. The train is described, and then a robust control algorithm using proportional-derivative filtered errors is proposed to control the locomotive. The controller does not depend on an accurate model of the train, because the mass of the vehicle and two carriages changes depending on the number, size, and shape of children and wheelchair seats on the train. The controller is robust and stable under uncertainty. Results are presented to show the effectiveness of the approach, and the suggested control algorithm is shown to be acceptable without knowing the exact plant dynamics.
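A proportional-derivative control law of the general kind referred to above can be sketched as follows; the gains, the simple integrator plant, and the setpoint are all invented for illustration and are not the paper's train model or tuning.

```python
# Minimal proportional-derivative (PD) controller acting on the tracking
# error; model-free in the sense that no plant parameters appear in the law.
def pd_step(error, prev_error, dt, kp=2.0, kd=0.5):
    derivative = (error - prev_error) / dt
    return kp * error + kd * derivative

# Drive a toy first-order plant (pure integrator) toward a setpoint.
x, setpoint, dt = 0.0, 1.0, 0.1
prev_e = setpoint - x
for _ in range(100):
    e = setpoint - x
    u = pd_step(e, prev_e, dt)
    x += u * dt          # integrator plant: x' = u
    prev_e = e
print(abs(setpoint - x) < 0.05)  # True: the error has converged
```

The point of such a law is exactly the robustness claimed in the abstract: the same gains keep working when the plant's mass changes, because no mass term appears in the controller.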

  1. Evolving provider payment models and patient access to innovative medical technology.

    Science.gov (United States)

    Long, Genia; Mortimer, Richard; Sanzenbacher, Geoffrey

    2014-12-01

    Objective: To investigate the evolving use and expected impact of pay-for-performance (P4P) and risk-based provider reimbursement on patient access to innovative medical technology. Methods: Structured interviews with leading private payers representing over 110 million commercially-insured lives, exploring current and planned use of P4P provider payment models, evidence requirements for technology assessment and new technology coverage, and the evolving relationship between the two topics. Results: Respondents reported rapid increases in the use of P4P and risk-sharing programs, with roughly half of commercial lives affected 3 years ago, just under two-thirds today, and an expected three-quarters in 3 years. All reported well-established systems for evaluating new technology coverage. Five of nine reported becoming more selective in the past 3 years in approving new technologies; four anticipated that in the next 3 years there will be a higher evidence requirement for new technology access. Similarly, four expected it will become more difficult for clinically appropriate but costly technologies to gain coverage. All reported planning to rely more on these types of provider payment incentives to control costs, but did not see them as a substitute for payer technology reviews and coverage limitations; each has a role to play. Limitations: Interviews were limited to nine leading payers with models in place; data were self-reported. Conclusions: Likely implications include a more uncertain payment environment for providers, and indirectly for innovative medical technology and future investment; greater reliance on quality and financial metrics; and increased evidence requirements for favorable coverage and utilization decisions. Increasing provider financial risk may challenge the traditional technology adoption paradigm, in which payers assumed a 'gatekeeping' role and providers a countervailing patient-advocacy role with regard to access to new technology.

  2. Improvement of AEP Predictions Using Diurnal CFD Modelling with Site-Specific Stability Weightings Provided from Mesoscale Simulation

    International Nuclear Information System (INIS)

    Hristov, Y; Oxley, G; Žagar, M

    2014-01-01

    The Bolund measurement campaign, performed by the Technical University of Denmark (DTU) Wind Energy Department (also known as RISØ), provided significant insight into wind flow modelling over complex terrain. In the blind comparison study several modelling solutions were submitted, with the vast majority being steady-state Computational Fluid Dynamics (CFD) approaches with two-equation k-ε turbulence closure. This approach yielded the most accurate results and was identified as the state-of-the-art tool for wind turbine generator (WTG) micro-siting. Based on the findings from Bolund, further comparison between CFD and field measurement data has been deemed essential in order to improve simulation accuracy for turbine load and long-term Annual Energy Production (AEP) estimations. Vestas Wind Systems A/S is a major WTG original equipment manufacturer (OEM) with an installed base of over 60 GW in over 70 countries, accounting for 19% of the global installed base. The Vestas Performance and Diagnostic Centre (VPDC) provides online live data for more than 47 GW of these turbines, allowing a comprehensive comparison between modelled and real-world energy production data. In previous studies, multiple sites have been simulated with a steady neutral CFD formulation for the atmospheric surface layer (ASL), and wind resource (RSF) files have been generated as a base for long-term AEP predictions, showing significant improvement over predictions performed with the industry-standard linear WAsP tool. In this study, further improvements to wind resource file generation with CFD are examined using an unsteady diurnal-cycle approach with a full atmospheric boundary layer (ABL) formulation, with the unique stratifications throughout the cycle weighted according to mesoscale-simulated sector-wise stability frequencies.
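Weighting the CFD stratifications by their mesoscale-simulated stability frequencies amounts, at its simplest, to a frequency-weighted average of per-class energy estimates. The sketch below uses invented stability classes and numbers purely to illustrate the arithmetic; it is not Vestas' procedure.

```python
# Hypothetical frequencies of atmospheric stability classes from a
# mesoscale simulation, and per-class AEP estimates from CFD runs.
stability_freq = {"stable": 0.3, "neutral": 0.5, "unstable": 0.2}
aep_gwh = {"stable": 10.0, "neutral": 12.0, "unstable": 11.0}

# Site AEP as the frequency-weighted combination of the per-class results.
weighted_aep = sum(stability_freq[s] * aep_gwh[s] for s in stability_freq)
print(weighted_aep)  # ~11.2 GWh
```

A neutral-only formulation corresponds to taking the "neutral" entry alone; the diurnal approach replaces that single value with the weighted sum over all simulated stratifications.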

  3. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Sargent, T.O.

    1981-01-01

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  4. Cutting Edge PBPK Models and Analyses: Providing the Basis for Future Modeling Efforts and Bridges to Emerging Toxicology Paradigms

    Directory of Open Access Journals (Sweden)

    Jane C. Caldwell

    2012-01-01

    Physiologically based pharmacokinetic (PBPK) models are used for predictions of internal or target dose from environmental and pharmacologic chemical exposures. Their use in human risk assessment is dependent on the nature of the databases (animal or human) used to develop and test them, and includes extrapolations across species and experimental paradigms, and determination of variability of response within human populations. Integration of state-of-the-science PBPK modeling with emerging computational toxicology models is critical for extrapolation between in vitro exposures, in vivo physiologic exposure, whole-organism responses, and long-term health outcomes. This special issue contains papers that can provide the basis for future modeling efforts and provide bridges to emerging toxicology paradigms. In this overview paper, we present an overview of the field and an introduction to these papers, including discussions of model development, best practices, risk-assessment applications of PBPK models, and limitations and bridges of modeling approaches for future applications. Specifically, issues addressed include: (a) increased understanding of human variability of pharmacokinetics and pharmacodynamics in the population, (b) exploration of mode-of-action hypotheses (MOA), (c) application of biological modeling in the risk assessment of individual chemicals and chemical mixtures, and (d) identification and discussion of uncertainties in the modeling process.
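At their core, PBPK models are systems of compartmental mass-balance equations; the simplest building block is one-compartment, first-order elimination, where concentration decays exponentially. The sketch below uses illustrative parameters and is not any specific model from this issue.

```python
import math

# One-compartment, first-order elimination: dC/dt = -k*C, so
# C(t) = C0 * exp(-k*t). Full PBPK models couple many such
# compartments (blood, liver, fat, ...) with flow and partitioning terms.
def concentration(t_h, c0=100.0, k_per_h=0.2):
    return c0 * math.exp(-k_per_h * t_h)

half_life = math.log(2) / 0.2   # t_1/2 = ln(2)/k = ~3.47 h
print(round(concentration(half_life), 1))  # 50.0, half the initial value
```

Species extrapolation in PBPK practice largely comes down to replacing parameters like k with physiologically scaled values, which is why parameter variability and uncertainty dominate the issues (a)-(d) listed above.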

  5. Reliability constrained decision model for energy service provider incorporating demand response programs

    International Nuclear Information System (INIS)

    Mahboubi-Moghaddam, Esmaeil; Nayeripour, Majid; Aghaei, Jamshid

    2016-01-01

    Highlights: • The operation of Energy Service Providers (ESPs) in electricity markets is modeled. • Demand response as the cost-effective solution is used for energy service provider. • The market price uncertainty is modeled using the robust optimization technique. • The reliability of the distribution network is embedded into the framework. • The simulation results demonstrate the benefits of robust framework for ESPs. - Abstract: Demand response (DR) programs are becoming a critical concept for the efficiency of current electric power industries. Therefore, its various capabilities and barriers have to be investigated. In this paper, an effective decision model is presented for the strategic behavior of energy service providers (ESPs) to demonstrate how to participate in the day-ahead electricity market and how to allocate demand in the smart distribution network. Since market price affects DR and vice versa, a new two-step sequential framework is proposed, in which unit commitment problem (UC) is solved to forecast the expected locational marginal prices (LMPs), and successively DR program is applied to optimize the total cost of providing energy for the distribution network customers. This total cost includes the cost of purchased power from the market and distributed generation (DG) units, incentive cost paid to the customers, and compensation cost of power interruptions. To obtain compensation cost, the reliability evaluation of the distribution network is embedded into the framework using some innovative constraints. Furthermore, to consider the unexpected behaviors of the other market participants, the LMP prices are modeled as the uncertainty parameters using the robust optimization technique, which is more practical compared to the conventional stochastic approach. The simulation results demonstrate the significant benefits of the presented framework for the strategic performance of ESPs.
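The total cost the framework above minimizes has four named components: purchased market power, distributed generation, customer incentives, and interruption compensation. A toy comparison of two candidate demand-response plans using those components is sketched below; all prices and quantities are invented for illustration, and the real model is a robust optimization, not this enumeration.

```python
# Toy ESP cost: market purchase + DG generation + DR incentives paid
# + compensation for power interruptions (all monetary units illustrative).
def total_cost(market_price, market_qty, dg_cost, dg_qty, incentive, comp):
    return market_price * market_qty + dg_cost * dg_qty + incentive + comp

plans = {
    # Without DR: buy more from the market, pay more interruption compensation.
    "no_dr":   total_cost(60.0, 100.0, 45.0, 20.0, 0.0, 150.0),
    # With DR: less market energy, but incentives are paid to customers.
    "with_dr": total_cost(60.0, 80.0, 45.0, 20.0, 300.0, 50.0),
}
best = min(plans, key=plans.get)
print(best, plans[best])  # with_dr 6050.0
```

The interesting regime is exactly the one shown: DR pays off when the avoided market purchases and interruption costs exceed the incentives, and price uncertainty (handled by robust optimization in the paper) shifts where that break-even point lies.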

  6. Learning a weighted sequence model of the nucleosome core and linker yields more accurate predictions in Saccharomyces cerevisiae and Homo sapiens.

    Directory of Open Access Journals (Sweden)

    Sheila M Reynolds

    2010-07-01

    DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono-, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has previously been used to predict nucleosome positioning from sequence (301 base pairs, centered at the position to be scored) with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone.

  7. Learning a weighted sequence model of the nucleosome core and linker yields more accurate predictions in Saccharomyces cerevisiae and Homo sapiens.

    Science.gov (United States)

    Reynolds, Sheila M; Bilmes, Jeff A; Noble, William Stafford

    2010-07-08

    DNA in eukaryotes is packaged into a chromatin complex, the most basic element of which is the nucleosome. The precise positioning of the nucleosome cores allows for selective access to the DNA, and the mechanisms that control this positioning are important pieces of the gene expression puzzle. We describe a large-scale nucleosome pattern that jointly characterizes the nucleosome core and the adjacent linkers and is predominantly characterized by long-range oscillations in the mono-, di- and tri-nucleotide content of the DNA sequence, and we show that this pattern can be used to predict nucleosome positions in both Homo sapiens and Saccharomyces cerevisiae more accurately than previously published methods. Surprisingly, in both H. sapiens and S. cerevisiae, the most informative individual features are the mono-nucleotide patterns, although the inclusion of di- and tri-nucleotide features results in improved performance. Our approach combines a much longer pattern than has been previously used to predict nucleosome positioning from sequence (301 base pairs, centered at the position to be scored) with a novel discriminative classification approach that selectively weights the contributions from each of the input features. The resulting scores are relatively insensitive to local AT-content and can be used to accurately discriminate putative dyad positions from adjacent linker regions without requiring an additional dynamic programming step and without the attendant edge effects and assumptions about linker length modeling and overall nucleosome density. Our approach produces the best dyad-linker classification results published to date in H. sapiens, and outperforms two recently published models on a large set of S. cerevisiae nucleosome positions. Our results suggest that in both genomes, a comparable and relatively small fraction of nucleosomes are well-positioned and that these positions are predictable based on sequence alone.
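
The feature-extraction step described above can be sketched in a few lines; the Python below is a hypothetical illustration (the 301-bp window size follows the abstract, but the function name, random test sequence, and normalization choices are ours, and the authors' weighted discriminative classifier itself is not reproduced):

```python
import random
from itertools import product

def kmer_features(seq, center, flank=150, ks=(1, 2, 3)):
    """Mono-, di- and tri-nucleotide frequencies in a (2*flank + 1)-bp
    window centered on a candidate dyad position."""
    window = seq[center - flank: center + flank + 1]
    feats = {}
    for k in ks:
        total = len(window) - k + 1
        counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
        for i in range(total):
            kmer = window[i:i + k]
            if kmer in counts:
                counts[kmer] += 1
        # normalize to frequencies so equal-length windows are comparable
        feats.update({kmer: c / total for kmer, c in counts.items()})
    return feats

random.seed(0)
seq = "".join(random.choice("ACGT") for _ in range(500))
features = kmer_features(seq, center=200)  # 4 + 16 + 64 = 84 features
```

A classifier would then assign a learned weight to each of these 84 features when scoring a candidate dyad position against adjacent linker positions.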

  9. Avoiding fractional electrons in subsystem DFT based ab-initio molecular dynamics yields accurate models for liquid water and solvated OH radical.

    Science.gov (United States)

    Genova, Alessandro; Ceresoli, Davide; Pavanello, Michele

    2016-06-21

    In this work we achieve three milestones: (1) we present a subsystem DFT method capable of running ab-initio molecular dynamics simulations accurately and efficiently. (2) In order to rid the simulations of inter-molecular self-interaction error, we exploit the ability of semilocal frozen density embedding formulation of subsystem DFT to represent the total electron density as a sum of localized subsystem electron densities that are constrained to integrate to a preset, constant number of electrons; the success of the method relies on the fact that employed semilocal nonadditive kinetic energy functionals effectively cancel out errors in semilocal exchange-correlation potentials that are linked to static correlation effects and self-interaction. (3) We demonstrate this concept by simulating liquid water and solvated OH(•) radical. While the bulk of our simulations have been performed on a periodic box containing 64 independent water molecules for 52 ps, we also simulated a box containing 256 water molecules for 22 ps. The results show that, provided one employs an accurate nonadditive kinetic energy functional, the dynamics of liquid water and OH(•) radical are in semiquantitative agreement with experimental results or higher-level electronic structure calculations. Our assessments are based upon comparisons of radial and angular distribution functions as well as the diffusion coefficient of the liquid.
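
The radial distribution functions used in such assessments can be computed with the standard minimum-image histogram estimator; the numpy sketch below uses uniform random points as a stand-in for a real MD trajectory (box size, particle count, and bin count are arbitrary choices, not the paper's setup), for which g(r) should hover near 1:

```python
import numpy as np

def radial_distribution(positions, box, r_max, n_bins=100):
    """g(r) for particles in a cubic periodic box via the standard
    minimum-image pair-distance histogram."""
    n = len(positions)
    rho = n / box**3                       # number density
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)           # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    shells = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    g = 2.0 * hist / (n * shells * rho)    # each pair counted once -> factor 2
    return 0.5 * (edges[1:] + edges[:-1]), g

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 10.0, size=(500, 3))
r_mid, g = radial_distribution(positions, box=10.0, r_max=5.0)
```

For a structured liquid such as water, the same estimator applied to oxygen-oxygen distances would instead show the familiar peaks used to compare against experiment.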

  11. A Global Remote Laboratory Experimentation Network and the Experiment Service Provider Business Model and Plans

    Directory of Open Access Journals (Sweden)

    Tor Ivar Eikaas

    2003-07-01

    Full Text Available This paper presents results from the IST KAII Trial project ReLAX - Remote LAboratory eXperimentation trial (IST 1999-20827, and contributes a framework for a global remote laboratory experimentation network supported by a new business model. The paper presents this new Experiment Service Provider business model that aims at bringing physical experimentation back into the learning arena, where remotely operable laboratory experiments used in advanced education and training schemes are made available to a global education and training market in industry and academia. The business model is based on an approach where individual experiment owners offer remote access to their high-quality laboratory facilities to users around the world. The usage can be for research, education, on-the-job training etc. The access to these facilities is offered via an independent operating company - the Experiment Service Provider. The Experiment Service Provider offers eCommerce services like booking, access control, invoicing, dispute resolution, quality control, customer evaluation services and a unified Lab Portal.

  12. Providing comprehensive and consistent access to astronomical observatory archive data: the NASA archive model

    Science.gov (United States)

    McGlynn, Thomas; Fabbiano, Giuseppina; Accomazzi, Alberto; Smale, Alan; White, Richard L.; Donaldson, Thomas; Aloisi, Alessandra; Dower, Theresa; Mazzerella, Joseph M.; Ebert, Rick; Pevunova, Olga; Imel, David; Berriman, Graham B.; Teplitz, Harry I.; Groom, Steve L.; Desai, Vandana R.; Landry, Walter

    2016-07-01

    Since the turn of the millennium, astronomical archives have been providing data to the public through standardized protocols, unifying data from disparate physical sources and wavebands across the electromagnetic spectrum into an astronomical virtual observatory (VO). In October 2014, NASA began support for the NASA Astronomical Virtual Observatories (NAVO) program to coordinate the efforts of NASA astronomy archives in providing data to users through implementation of protocols agreed within the International Virtual Observatory Alliance (IVOA). A major goal of the NAVO collaboration has been to step back from a piecemeal implementation of IVOA standards and define what the appropriate presence for the US and NASA astronomy archives in the VO should be. This includes evaluating what optional capabilities in the standards need to be supported, the specific versions of standards that should be used, and returning feedback to the IVOA, to support modifications as needed. We discuss a standard archive model developed by the NAVO for data archive presence in the virtual observatory built upon a consistent framework of standards defined by the IVOA. Our standard model provides for discovery of resources through the VO registries, access to observation and object data, downloads of image and spectral data and general access to archival datasets. It defines specific protocol versions, minimum capabilities, and all dependencies. The model will evolve as the capabilities of the virtual observatory and needs of the community change.
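
One of the IVOA protocols covered by such a model, Simple Cone Search, is just an HTTP GET with RA, DEC and SR parameters in decimal degrees; a minimal sketch (the endpoint URL is hypothetical, since real services are discovered through the VO registries):

```python
from urllib.parse import urlencode

def cone_search_url(base, ra_deg, dec_deg, radius_deg):
    """Build an IVOA Simple Cone Search GET request: RA/DEC/SR in
    decimal degrees. The base endpoint here is a placeholder; real
    endpoints are discovered through the VO registries."""
    sep = "&" if "?" in base else "?"
    return base + sep + urlencode({"RA": ra_deg, "DEC": dec_deg,
                                   "SR": radius_deg})

# 0.1-degree cone around the Crab Nebula
url = cone_search_url("https://archive.example.org/scs", 83.633, 22.014, 0.1)
```

A compliant service answers such a request with a VOTable listing the sources inside the cone.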

  13. Wind farms providing secondary frequency regulation: Evaluating the performance of model-based receding horizon control

    International Nuclear Information System (INIS)

    Shapiro, Carl R.; Meneveau, Charles; Gayme, Dennice F.; Meyers, Johan

    2016-01-01

    We investigate the use of wind farms to provide secondary frequency regulation for a power grid. Our approach uses model-based receding horizon control of a wind farm that is tested using a large eddy simulation (LES) framework. In order to enable real-time implementation, the control actions are computed based on a time-varying one-dimensional wake model. This model describes wake advection and interactions, both of which play an important role in wind farm power production. This controller is implemented in an LES model of an 84-turbine wind farm represented by actuator disk turbine models. Differences between the velocities at each turbine predicted by the wake model and measured in LES are used for closed-loop feedback. The controller is tested on two types of regulation signals, “RegA” and “RegD”, obtained from PJM, an independent system operator in the eastern United States. Composite performance scores, which are used by PJM to qualify plants for regulation, are used to evaluate the performance of the controlled wind farm. Our results demonstrate that the controlled wind farm consistently performs well, passing the qualification threshold for all fast-acting RegD signals. For the RegA signal, which changes over slower time scales, the controlled wind farm's average performance surpasses the threshold, but further work is needed to enable the controlled system to achieve qualifying performance all of the time. (paper)
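
The receding-horizon idea, computing a plan over a short lookahead window with a cheap model but applying only its first action, can be sketched on a toy first-order plant (the plant, action grid, and horizon below are illustrative choices, not the paper's wake model or PJM's signals):

```python
import itertools

def receding_horizon(x0, ref, a=0.8, b=1.0, horizon=3,
                     actions=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """One control step: exhaustively search action sequences over the
    horizon for the toy plant x[t+1] = a*x[t] + b*u[t], and return only
    the first action of the cheapest sequence (receding-horizon idea)."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        x, cost = x0, 0.0
        for t, u in enumerate(seq):
            x = a * x + b * u
            target = ref[t] if t < len(ref) else ref[-1]
            cost += (x - target) ** 2
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

# Track a step change in a toy regulation signal.
signal = [0.0, 0.0] + [1.0] * 8
x, hist = 0.0, []
for t in range(len(signal)):
    u = receding_horizon(x, signal[t:t + 3])
    x = 0.8 * x + 1.0 * u   # the real plant evolves with the applied action
    hist.append(x)
```

In the paper's setting the exhaustive search is replaced by gradient-based optimization over the one-dimensional wake model, and the feedback correction comes from LES-measured turbine velocities.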

  14. Agent-based organizational modelling for analysis of safety culture at an air navigation service provider

    International Nuclear Information System (INIS)

    Stroeve, Sybert H.; Sharpanskykh, Alexei; Kirwan, Barry

    2011-01-01

    Assessment of safety culture is done predominantly by questionnaire-based studies, which tend to reveal attitudes on immaterial characteristics (values, beliefs, norms). There is a need for a better understanding of the implications of the material aspects of an organization (structures, processes, etc.) for safety culture and their interactions with the immaterial characteristics. This paper presents a new agent-based organizational modelling approach for integrated and systematic evaluation of material and immaterial characteristics of socio-technical organizations in safety culture analysis. It uniquely considers both the formal organization and the value- and belief-driven behaviour of individuals in the organization. Results are presented of a model for safety occurrence reporting at an air navigation service provider. Model predictions consistent with questionnaire-based results are achieved. A sensitivity analysis provides insight into organizational factors that strongly influence safety culture indicators. The modelling approach can be used in combination with attitude-focused safety culture research, towards an integrated evaluation of material and immaterial characteristics of socio-technical organizations. By using this approach an organization is able to gain a deeper understanding of causes of diverse problems and inefficiencies both in the formal organization and in the behaviour of organizational agents, and to systematically identify and evaluate improvement options.

  15. Pharmacists providing care in the outpatient setting through telemedicine models: a narrative review

    Directory of Open Access Journals (Sweden)

    Littauer SL

    2017-12-01

    Full Text Available Telemedicine refers to the delivery of clinical services using technology that allows two-way, real time, interactive communication between the patient and the clinician at a distant site. Commonly, telemedicine is used to improve access to general and specialty care for patients in rural areas. This review aims to provide an overview of existing telemedicine models involving the delivery of care by pharmacists via telemedicine (including telemonitoring and video, but excluding follow-up telephone calls) and to highlight the main areas of chronic-disease management where these models have been applied. Studies within the areas of hypertension, diabetes, asthma, anticoagulation and depression were identified, but only two randomized controlled trials with adequate sample size demonstrating the positive impact of telemonitoring combined with pharmacist care in hypertension were identified. The evidence for the impact of pharmacist-based telemedicine models is sparse and weak, with the studies conducted presenting serious threats to internal and external validity. Therefore, no definitive conclusions about the impact of pharmacist-led telemedicine models can be made at this time. In the United States, the increasing shortage of primary care providers and specialists represents an opportunity for pharmacists to assume a more prominent role managing patients with chronic disease in the ambulatory care setting. However, lack of reimbursement may pose a barrier to the provision of care by pharmacists using telemedicine.

  16. Modeling fMRI signals can provide insights into neural processing in the cerebral cortex.

    Science.gov (United States)

    Vanni, Simo; Sharifian, Fariba; Heikkinen, Hanna; Vigário, Ricardo

    2015-08-01

    Every stimulus or task activates multiple areas in the mammalian cortex. These distributed activations can be measured with functional magnetic resonance imaging (fMRI), which has the best spatial resolution among the noninvasive brain imaging methods. Unfortunately, the relationship between the fMRI activations and distributed cortical processing has remained unclear, both because the coupling between neural and fMRI activations has remained poorly understood and because fMRI voxels are too large to directly sense the local neural events. To get an idea of the local processing given the macroscopic data, we need models to simulate the neural activity and to provide output that can be compared with fMRI data. Such models can describe neural mechanisms as mathematical functions between input and output in a specific system, with little correspondence to physiological mechanisms. Alternatively, models can be biomimetic, including biological details with straightforward correspondence to experimental data. After careful balancing between complexity, computational efficiency, and realism, a biomimetic simulation should be able to provide insight into how biological structures or functions contribute to actual data processing as well as to promote theory-driven neuroscience experiments. This review analyzes the requirements for validating system-level computational models with fMRI. In particular, we study mesoscopic biomimetic models, which include a limited set of details from real-life networks and enable system-level simulations of neural mass action. In addition, we discuss how recent developments in neurophysiology and biophysics may significantly advance the modelling of fMRI signals. Copyright © 2015 the American Physiological Society.
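
A common building block in such models is predicting the BOLD signal by convolving a simulated neural time course with a canonical haemodynamic response function; a minimal numpy sketch (the double-gamma parameters are the conventional defaults, and the impulse stimulus is illustrative):

```python
import numpy as np
from math import gamma

def double_gamma_hrf(dt=0.1, duration=30.0):
    """Canonical double-gamma haemodynamic response function,
    normalized to unit area; peaks near 5 s with a late undershoot."""
    t = np.arange(0.0, duration, dt)
    peak = t ** 5 * np.exp(-t) / gamma(6)
    undershoot = t ** 15 * np.exp(-t) / gamma(16)
    h = peak - undershoot / 6.0
    return h / h.sum()

def predict_bold(neural, dt=0.1):
    """Predicted BOLD response: neural time course convolved with HRF."""
    h = double_gamma_hrf(dt)
    return np.convolve(neural, h)[: len(neural)]

dt = 0.1
neural = np.zeros(400)
neural[50] = 1.0            # brief neural event at t = 5 s
bold = predict_bold(neural, dt)
```

A biomimetic simulation would replace the impulse here with the mass action of a simulated neural population, and the comparison with measured fMRI data is then made on the convolved output.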

  17. Testing a Nested Skills Model of the Relations among Invented Spelling, Accurate Spelling, and Word Reading, from Kindergarten to Grade 1

    Science.gov (United States)

    Sénéchal, Monique

    2017-01-01

    The goal was to assess the role of invented spelling to subsequent reading and spelling as proposed by the Nested Skills Model of Early Literacy Acquisition. 107 English-speaking children were tested at the beginning of kindergarten and grade 1, and at the end of grade 1. The findings provided support for the proposed model. First, the role played…

  18. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    Science.gov (United States)

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC(0-t) (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC(0-t) of saroglitazar. Only models with regression coefficients (R²) > 0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R² > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were prediction of saroglitazar. The same models, when applied to the AUC(0-t) prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error model predicts the exposure of
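
The mechanics of a 3-point limited sampling model, fitting a linear regression from three concentration-time points to AUC on a training split and then checking R², mean prediction error, mean absolute prediction error, and root mean square error on a held-out split, can be sketched as follows (all data here are synthetic and the coefficients are invented; this does not reproduce the saroglitazar dataset):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: concentrations at the 0.5, 2 and 8 h time
# points for 50 subjects, with AUC constructed to depend linearly on
# them (coefficients invented for illustration).
C = rng.lognormal(mean=0.0, sigma=0.3, size=(50, 3))
auc = 2.0 + C @ np.array([1.5, 4.0, 9.0]) + rng.normal(0.0, 0.1, 50)

train, held_out = slice(0, 25), slice(25, 50)   # develop on 25, predict 25
X_train = np.column_stack([np.ones(25), C[train]])
beta, *_ = np.linalg.lstsq(X_train, auc[train], rcond=None)

pred = np.column_stack([np.ones(25), C[held_out]]) @ beta
obs = auc[held_out]
pe = (pred - obs) / obs * 100.0
mpe = pe.mean()                              # mean prediction error (%)
mape = np.abs(pe).mean()                     # mean absolute prediction error (%)
rmse = np.sqrt(((pred - obs) ** 2).mean())   # root mean square error
r2 = 1.0 - ((obs - pred) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
```

On real data the candidate time-point triples would be screened by training-set R² first, exactly as in the abstract, before any held-out validation.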

  19. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    Science.gov (United States)

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of the Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also only the applied filtering process can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.
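
For reference, the Gaussian filter in question and the leading-order (tensor-diffusivity) expansion of the subgrid stress that gives rise to such cross-derivative terms can be written as follows (standard forms from the LES literature, with Δ the filter width; this is a sketch of the setting, not the paper's derivation):

```latex
% Gaussian filter of width \Delta (unit area, second moment \Delta^2/12):
\bar{u}_i(x) \;=\; \int G(x-\xi)\, u_i(\xi)\, d\xi ,
\qquad
G(x) \;=\; \sqrt{\frac{6}{\pi\Delta^{2}}}\,
\exp\!\left(-\frac{6x^{2}}{\Delta^{2}}\right).

% Leading-order (tensor-diffusivity) expansion of the subgrid stress,
% whose cross-derivative components are the terms at issue:
\tau_{ij} \;=\; \overline{u_i u_j} - \bar{u}_i \bar{u}_j
\;\approx\; \frac{\Delta^{2}}{12}\,
\frac{\partial \bar{u}_i}{\partial x_k}\,
\frac{\partial \bar{u}_j}{\partial x_k}.
```

The mixed-derivative components of the expansion (for example the term coupling the wall-normal gradient of streamwise velocity with the streamwise gradient of wall-normal velocity in channel flow) are the ones the paper identifies as a potential source of instability.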

  20. The climate4impact platform: Providing, tailoring and facilitating climate model data access

    Science.gov (United States)

    Pagé, Christian; Pagani, Andrea; Plieger, Maarten; Som de Cerff, Wim; Mihajlovski, Andrej; de Vreede, Ernst; Spinuso, Alessandro; Hutjes, Ronald; de Jong, Fokke; Bärring, Lars; Vega, Manuel; Cofiño, Antonio; d'Anca, Alessandro; Fiore, Sandro; Kolax, Michael

    2017-04-01

    One of the main objectives of climate4impact is to provide standardized web services and tools that are reusable in other portals. These services include web processing services, web coverage services and web mapping services (WPS, WCS and WMS). Tailored portals can be targeted to specific communities and/or countries/regions while making use of those services. Easier access to climate data is very important for the climate change impact communities. To fulfill this objective, the climate4impact (http://climate4impact.eu/) web portal and services have been developed, targeting climate change impact modellers, impact and adaptation consultants, as well as other experts using climate change data. It provides to users harmonized access to climate model data through tailored services. It features static and dynamic documentation, Use Cases and best practice examples, an advanced search interface, an integrated authentication and authorization system with the Earth System Grid Federation (ESGF), a visualization interface with ADAGUC web mapping tools. In the latest version, statistical downscaling services, provided by the Santander Meteorology Group Downscaling Portal, were integrated. An innovative interface to integrate statistical downscaling services will be released in the upcoming version. The latter will be a big step in bridging the gap between climate scientists and the climate change impact communities. The climate4impact portal builds on the infrastructure of an international distributed database that has been set up to disseminate the global climate model results of the Coupled Model Intercomparison Project Phase 5 (CMIP5). This database, the ESGF, is an international collaboration that develops, deploys and maintains software infrastructure for the management, dissemination, and analysis of climate model data. The European FP7 project IS-ENES, Infrastructure for the European Network for Earth System modelling, supports the European

  1. runjags: An R Package Providing Interface Utilities, Model Templates, Parallel Computing Methods and Additional Distributions for MCMC Models in JAGS

    Directory of Open Access Journals (Sweden)

    Matthew J. Denwood

    2016-07-01

    Full Text Available The runjags package provides a set of interface functions to facilitate running Markov chain Monte Carlo models in JAGS from within R. Automated calculation of appropriate convergence and sample length diagnostics, user-friendly access to commonly used graphical outputs and summary statistics, and parallelized methods of running JAGS are provided. Template model specifications can be generated using a standard lme4-style formula interface to assist users less familiar with the BUGS syntax. Automated simulation study functions are implemented to facilitate model performance assessment, as well as drop-k type cross-validation studies, using high performance computing clusters such as those provided by parallel. A module extension for JAGS is also included within runjags, providing the Pareto family of distributions and a series of minimally-informative priors including the DuMouchel and half-Cauchy priors. This paper outlines the primary functions of this package, and gives an illustration of a simulation study to assess the sensitivity of two equivalent model formulations to different prior distributions.
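
Among the diagnostics such interface packages automate is the Gelman-Rubin potential scale reduction factor computed across parallel chains; a minimal Python sketch of that statistic (the simulated chains stand in for real JAGS output, and this is the textbook form, not runjags' exact implementation):

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor (R-hat) for an
    (m chains x n samples) array; values near 1 suggest convergence."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * means.var(ddof=1)               # between-chain variance
    var_plus = (n - 1) / n * W + B / n      # pooled variance estimate
    return float(np.sqrt(var_plus / W))

rng = np.random.default_rng(0)
# Four well-mixed chains sampling the same distribution -> R-hat near 1.
rhat_good = gelman_rubin(rng.normal(size=(4, 1000)))
# Two chains stuck in different regions -> R-hat far above 1.
rhat_bad = gelman_rubin(np.stack([rng.normal(0.0, 1.0, 1000),
                                  rng.normal(5.0, 1.0, 1000)]))
```

In runjags this check, along with effective-sample-size and autocorrelation diagnostics, is run automatically and used to decide whether the chains need extending.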

  2. Modelling intentions to provide smoking cessation support among mental health professionals in the Netherlands.

    Science.gov (United States)

    Blankers, Matthijs; Buisman, Renate; Hopman, Petra; van Gool, Ronald; van Laar, Margriet

    2016-01-01

    Tobacco use prevalence is elevated among people with mental illnesses, leading to elevated rates of premature smoking-related mortality. Opportunities to encourage smoking cessation among them are currently underused by mental health professionals. In this paper, we aim to explore mechanisms to invigorate professionals' intentions to help patients stop smoking. Data stem from a recent staff survey on the provision of smoking cessation support to patients with mental illnesses in the Netherlands. Items and underlying constructs were based on the theory of planned behaviour and literature on habitual behaviour. Data were weighted and only data from staff members with regular patient contact (n = 506) were included. Descriptive statistics of the survey items are presented and in a second step using structural equation modelling (SEM), we regressed the latent variables attitudes, subjective norms (SN), perceived behavioural control (PBC), past cessation support behaviour (PB) and current smoking behaviour on intentions to provide support. In optimisation steps, models comprising a subset of this initial model were evaluated. A sample of 506 mental health workers who had direct contact with patients completed the survey. The majority of them were females (70.0 %), respondents had an average age of 42.5 years (SD = 12.0). Seventy-five percent had at least a BSc educational background. Of the respondents, 76 % indicated that patients should be encouraged more to quit smoking. Respondents were supportive to train their direct colleagues to provide cessation support more often (71 %) and also supported the involvement of mental health care facilities in providing cessation support to patients (69 %). The majority of the respondents feels capable to provide cessation support (66 %). Two thirds of the respondents wants to provide support, however only a minority (35 %) intends to actually do so during the coming year. Next, using SEM an acceptable fit was

  3. Texture-based characterization of subskin features by specified laser speckle effects at λ = 650 nm region for more accurate parametric 'skin age' modelling.

    Science.gov (United States)

    Orun, A B; Seker, H; Uslan, V; Goodyer, E; Smith, G

    2017-06-01

    The textural structure of 'skin age'-related subskin components enables us to identify and analyse their unique characteristics, thus making substantial progress towards establishing an accurate skin age model. This is achieved by a two-stage process. First by the application of textural analysis using laser speckle imaging, which is sensitive to textural effects within the λ = 650 nm spectral band region. In the second stage, a Bayesian inference method is used to select attributes from which a predictive model is built. This technique enables us to contrast different skin age models, such as the laser speckle effect against the more widely used normal light (LED) imaging method, whereby it is shown that our laser speckle-based technique yields better results. The method introduced here is non-invasive, low cost and capable of operating in real time; having the potential to compete against high-cost instrumentation such as confocal microscopy or similar imaging devices used for skin age identification purposes. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  4. Improved predictive modeling of white LEDs with accurate luminescence simulation and practical inputs with TracePro opto-mechanical design software

    Science.gov (United States)

    Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda

    2009-02-01

    The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomena, such as luminescence, promise to yield designs that are more predictive, giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented where first, a phosphor formulation and excitation source are optimized for a white light. The phosphor formulation, the excitation source and other LED components are optically and mechanically modeled and ray traced. Finally, its performance is analyzed. A blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength dependent scatter coefficients, anisotropy and bulk absorption coefficient.
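
The phosphor conversion budget underlying such models can be illustrated with a crude spectral power balance: unabsorbed blue light plus phosphor emission scaled by quantum efficiency and the Stokes shift (all numbers below are illustrative placeholders, not measured characterization data, and real tools ray-trace the geometry rather than simply adding spectra):

```python
import numpy as np

wl = np.arange(400, 701)                  # wavelength grid, nm

def gaussian_band(center_nm, fwhm_nm):
    """Unit-area Gaussian emission band on the wavelength grid."""
    s = fwhm_nm / 2.3548
    g = np.exp(-0.5 * ((wl - center_nm) / s) ** 2)
    return g / g.sum()

# Illustrative placeholders: a 450 nm blue die, YAG:Ce-like emission
# near 560 nm, 80% of blue absorbed, 90% quantum efficiency, and the
# Stokes energy loss given by the wavelength ratio.
blue = gaussian_band(450.0, 20.0)
phosphor = gaussian_band(560.0, 100.0)
absorbed, qe, stokes = 0.8, 0.9, 450.0 / 560.0
spd = (1.0 - absorbed) * blue + absorbed * qe * stokes * phosphor
total_out = spd.sum()                     # fraction of input power emitted
```

The balance of narrow blue leakage against broad yellow emission is exactly what the phosphor-formulation optimization in the case study tunes to hit a target white point.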

  5. Low-level radiation: how does the linear no-threshold model ensure the safety of Canadians

    International Nuclear Information System (INIS)

    Anon.

    2010-01-01

    The linear no-threshold (LNT) model is a risk model used worldwide by most health organizations involved in nuclear regulation to establish dose limits for workers and the public. It is at the heart of the approach adopted by the Canadian Nuclear Safety Commission (C.C.S.N.) in matters of radiation protection. The LNT model prudently assumes a direct link between radiation exposure and cancer rate. There is no scientific evidence that chronic exposure to radiation doses below 100 millisieverts (mSv) leads to harmful health effects. Several scientific reports have highlighted evidence suggesting that low-level radiation is less harmful than the LNT model predicts. Because the LNT model assumes that any radiation exposure carries some risk, the ALARA principle obliges licensees to keep radiation exposure at the lowest level reasonably achievable, social and economic factors taken into account. The ALARA principle is a cornerstone of the C.C.S.N. approach to radiation protection. In radiation protection matters, the C.C.S.N. takes a prudent approach that protects the health and safety of Canadians and their environment. (N.C.)
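
    The proportionality assumed by the linear no-threshold model can be written in two lines. A minimal sketch; the risk coefficient below is an illustrative nominal value, not a C.C.S.N. figure:

```python
def lnt_excess_risk(dose_msv, risk_per_msv=5.5e-5):
    """Linear no-threshold model: predicted excess cancer risk is
    directly proportional to dose, with no threshold below which
    the predicted risk is zero."""
    if dose_msv < 0:
        raise ValueError("dose cannot be negative")
    return dose_msv * risk_per_msv
```

    Because any nonzero dose yields a nonzero predicted risk, ALARA-style optimization follows naturally: halving an exposure halves its modeled risk.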

  6. State and Alternative Fuel Provider Fleets - Fleet Compliance Annual Report: Model Year 2015, Fiscal Year 2016

    Energy Technology Data Exchange (ETDEWEB)

    2016-12-01

    The U.S. Department of Energy (DOE) regulates covered state government and alternative fuel provider fleets, pursuant to the Energy Policy Act of 1992 (EPAct), as amended. Covered fleets may meet their EPAct requirements through one of two compliance methods: Standard Compliance or Alternative Compliance. For model year (MY) 2015, the compliance rate for the 301 reporting fleets was 100%. In all, 294 fleets used Standard Compliance and exceeded their aggregate MY 2015 acquisition requirements by 8% through acquisitions alone. The seven covered fleets that used Alternative Compliance exceeded their aggregate MY 2015 petroleum-use reduction requirements by 46%.

  7. TRANSIT: model for providing generic transportation input for preliminary siting analysis

    International Nuclear Information System (INIS)

    McNair, G.W.; Cashwell, J.W.

    1985-02-01

    To assist the US Department of Energy's efforts in potential facility site screening in the nuclear waste management program, a computerized model, TRANSIT, is being developed. Utilizing existing data on the location and inventory characteristics of spent nuclear fuel at reactor sites, TRANSIT derives isopleths of transportation mileage, costs, risks and fleet requirements for shipments to storage sites and/or repository sites. This technique provides a graphic, first-order method for use by the Department in future site screening efforts. 2 refs

  8. Quantitative Hydraulic Models Of Early Land Plants Provide Insight Into Middle Paleozoic Terrestrial Paleoenvironmental Conditions

    Science.gov (United States)

    Wilson, J. P.; Fischer, W. W.

    2010-12-01

    Fossil plants provide useful proxies of Earth’s climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves--even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 / MPa * s, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant’s vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon’s tracheids suggests that environmental conditions of reduced relative
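
    The conductivity figures quoted above can be approximated from conduit geometry alone. A minimal sketch under the Hagen-Poiseuille idealization of a tracheid as a smooth tube; this is a simplification for illustration, not the authors' full hydraulic model:

```python
import math

def tracheid_conductivity(diameter_m, viscosity_mpa_s=1.002e-9):
    """Area-specific hydraulic conductivity of an idealized conduit.
    Hagen-Poiseuille volumetric conductivity pi*D^4 / (128*eta),
    divided by the lumen area pi*D^2 / 4, reduces to D^2 / (32*eta)
    in m^2 / (MPa*s). Water at 20 C: eta ~= 1.002e-9 MPa*s."""
    volumetric = math.pi * diameter_m ** 4 / (128.0 * viscosity_mpa_s)
    lumen_area = math.pi * diameter_m ** 2 / 4.0
    return volumetric / lumen_area
```

    Under this idealization a conduit roughly 22 um in diameter already exceeds the 0.015 m^2/(MPa*s) threshold quoted above, illustrating the strong dependence of transport capacity on lumen diameter (real tracheids also incur wall and pit resistances that this sketch ignores).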

  9. Spectrally accurate contour dynamics

    International Nuclear Information System (INIS)

    Van Buskirk, R.D.; Marcus, P.S.

    1994-01-01

    We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise-constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use
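
    The key idea, representing a contour as a truncated Fourier series whose coefficients decay exponentially for smooth contours, can be sketched in a few lines. This illustrates the spectral representation only, not the authors' desingularized Biot-Savart solver:

```python
import cmath

def contour_point(coeffs, theta):
    """Evaluate a closed contour z(theta) = sum_k c_k * exp(i*k*theta)
    from its spectral (Fourier) coefficients, given as a dict {k: c_k}.
    For smooth contours the coefficients decay exponentially, so a
    short truncated sum is already extremely accurate."""
    return sum(c * cmath.exp(1j * k * theta) for k, c in coeffs.items())

# A circle of radius 2.0 is represented exactly by a single coefficient:
circle = {1: 2.0}
```

    Evaluating the sum at any angle returns a point on the contour; derivatives needed by the Biot-Savart integral are obtained by multiplying each coefficient by i*k, which is where the spectral accuracy comes from.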

  10. Determination of the structure of γ-alumina from interatomic potential and first-principles calculations: The requirement of significant numbers of nonspinel positions to achieve an accurate structural model

    International Nuclear Information System (INIS)

    Paglia, Gianluca; Rohl, Andrew L.; Gale, Julian D.; Buckley, Craig E.

    2005-01-01

    We have performed an extensive computational study of γ-Al2O3, beginning with the geometric analysis of approximately 1.47 billion spinel-based structural candidates, followed by derivative method energy minimization calculations of approximately 122 000 structures. Optimization of the spinel-based structural models demonstrated that structures exhibiting nonspinel site occupancy after simulation were more energetically favorable, as suggested in other computational studies. More importantly, none of the spinel structures exhibited simulated diffraction patterns that were characteristic of γ-Al2O3. This suggests that cations of γ-Al2O3 are not exclusively held in spinel positions, that the spinel model of γ-Al2O3 does not accurately reflect its structure, and that a representative structure cannot be achieved from molecular modeling when the spinel representation is used as the starting structure. The latter two of these three findings are extremely important when trying to accurately model the structure. A second set of starting models were generated with a large number of cations occupying c symmetry positions, based on the findings from recent experiments. Optimization of the new c symmetry-based structural models resulted in simulated diffraction patterns that were characteristic of γ-Al2O3. The modeling, conducted using supercells, yields a more accurate and complete determination of the defect structure of γ-Al2O3 than can be achieved with current experimental techniques. The results show that on average over 40% of the cations in the structure occupy nonspinel positions, and approximately two-thirds of these occupy c symmetry positions. The structures exhibit variable occupancy in the site positions that follow local symmetry exclusion rules. This variation was predominantly represented by a migration of cations away from a symmetry positions to other tetrahedral site positions during optimization which were found not to affect the

  11. Early Prostate Cancer: Hedonic Prices Model of Provider-Patient Interactions and Decisions

    International Nuclear Information System (INIS)

    Jani, Ashesh B.; Hellman, Samuel

    2008-01-01

    Purpose: To determine the relative influence of treatment features and treatment availabilities on final treatment decisions in early prostate cancer. Methods and Materials: We describe and apply a model, based on hedonic prices, to understand provider-patient interactions in prostate cancer. This model included four treatments (observation, external beam radiotherapy, brachytherapy, and prostatectomy) and five treatment features (one efficacy and four treatment complication features). We performed a literature search to estimate (1) the intersections of the 'bid' functions and 'offer' functions with the price function along different treatment feature axes, and (2) the treatments actually rendered in different patient subgroups based on age. We performed regressions to determine the relative weight of each feature in the overall interaction and the relative availability of each treatment modality to explain differences between observed vs. predicted use of different modalities in different patient subpopulations. Results: Treatment efficacy and potency preservation are the major factors influencing decisions for young patients, whereas preservation of urinary and rectal function is much more important for very elderly patients. Referral patterns seem to be responsible for most of the deviations of observed use of different treatments from those predicted by idealized provider-patient interactions. Specifically, prostatectomy is used far more commonly in young patients and radiotherapy and observation used far more commonly in elderly patients than predicted by a uniform referral pattern. Conclusions: The hedonic prices approach facilitated identifying the relative importance of treatment features and quantification of the impact of the prevailing referral pattern on prostate cancer treatment decisions
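
    The core bookkeeping of a hedonic prices model, decomposing each treatment's overall value into implicit prices for its features, can be sketched as follows. The feature names echo the abstract, but the numeric weights and levels are invented for illustration:

```python
def hedonic_value(feature_levels, implicit_prices):
    """Hedonic decomposition: a treatment's overall value is the sum of
    each feature level times that feature's implicit price (its weight
    in the provider-patient interaction)."""
    return sum(implicit_prices[f] * level
               for f, level in feature_levels.items())

# Hypothetical implicit prices for a young patient, where efficacy and
# potency preservation dominate the decision:
young = {"efficacy": 1.0, "potency": 0.8, "urinary": 0.3, "rectal": 0.3}

# Hypothetical feature levels (0-1) of two treatments:
prostatectomy = {"efficacy": 0.9, "potency": 0.5, "urinary": 0.6, "rectal": 0.9}
observation = {"efficacy": 0.2, "potency": 1.0, "urinary": 1.0, "rectal": 1.0}
```

    With these made-up numbers the efficacy-heavy weighting makes prostatectomy the predicted choice for the young patient; repeating the comparison with elderly-patient weights, where urinary and rectal function dominate, reverses the ranking, which is the pattern the model formalizes.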

  12. Providing Context for Complexity: Using Infographics and Conceptual Models to Teach Global Change Processes

    Science.gov (United States)

    Bean, J. R.; White, L. D.

    2015-12-01

    Understanding modern and historical global changes requires interdisciplinary knowledge of the physical and life sciences. The Understanding Global Change website from the UC Museum of Paleontology will use a focal infographic that unifies diverse content often taught in separate K-12 science units. This visualization tool provides scientists with a structure for presenting research within the broad context of global change, and supports educators with a framework for teaching and assessing student understanding of complex global change processes. This new approach to teaching the science of global change is currently being piloted and refined based on feedback from educators and scientists in anticipation of a 2016 website launch. Global change concepts are categorized within the infographic as causes of global change (e.g., burning of fossil fuels, volcanism), ongoing Earth system processes (e.g., ocean circulation, the greenhouse effect), and the changes scientists measure in Earth's physical and biological systems (e.g., temperature, extinctions/radiations). The infographic will appear on all website content pages and provides a template for the creation of flowcharts, which are conceptual models that allow teachers and students to visualize the interdependencies and feedbacks among processes in the atmosphere, hydrosphere, biosphere, and geosphere. The development of this resource is timely given that the newly adopted Next Generation Science Standards emphasize cross-cutting concepts, including model building, and Earth system science. Flowchart activities will be available on the website to scaffold inquiry-based lessons, determine student preconceptions, and assess student content knowledge. The infographic has already served as a learning and evaluation tool during professional development workshops at UC Berkeley, Stanford University, and the Smithsonian National Museum of Natural History. At these workshops, scientists and educators used the infographic

  13. Neuropsychologists as primary care providers of cognitive health: A novel comprehensive cognitive wellness service delivery model.

    Science.gov (United States)

    Pimental, Patricia A; O'Hara, John B; Jandak, Jessica L

    2018-01-01

    By virtue of their extensive knowledge base and specialized training in brain-behavior relationships, neuropsychologists are especially poised to execute a unique broad-based approach to overall cognitive wellness and should be viewed as primary care providers of cognitive health. This article will describe a novel comprehensive cognitive wellness service delivery model including cognitive health, anti-aging, lifelong wellness, and longevity-oriented practices. These practice areas include brain-based cognitive wellness, emotional and spiritually centric exploration, and related multimodality health interventions. As experts in mind-body connections, neuropsychologists can provide a variety of evidence-based treatment options, empowering patients with a sense of value and purpose. Multiple areas of clinical therapy skill-based learning, tailor-made to fit individual needs, will be discussed including: brain stimulating activities, restorative techniques, automatic negative thoughts and maladaptive thinking reduction, inflammation and pain management techniques, nutrition and culinary focused cognitive wellness, spirituality based practices and mindfulness, movement and exercise, alternative/complementary therapies, relationship restoration/social engagement, and trauma healing/meaning. Cognitive health rests upon the foundation of counteracting mind-body connection disruptions from multiple etiologies including inflammation, chronic stress, metabolic issues, cardiac conditions, autoimmune disease, neurological disorders, infectious diseases, and allergy spectrum disorders. Superimposed on these issues are lifestyle patterns and negative health behaviors that develop as ill-fated compensatory mechanisms used to cope with life stressors and aging. The brain and body are electrical systems that can "short circuit." The therapy practices inherent in the proposed cognitive wellness service delivery model can provide preventative insulation and circuit breaking against

  14. Measuring the Quality of Services Provided for Outpatients in Kowsar Clinic in Ardebil City Based on the SERVQUAL Model

    Directory of Open Access Journals (Sweden)

    Hasan Ghobadi

    2014-12-01

    Full Text Available Background & objectives: Today, the concept of quality of services is particularly important in health care, and customer satisfaction can be defined by comparing the expectations of the services with the perception of the provided services. The aim of this study was to evaluate the quality of services provided for outpatients in a clinic of Ardebil city based on the SERVQUAL model. Methods: This descriptive study was conducted on 650 patients referred to the outpatient clinic from July to September 2013 using a standardized SERVQUAL questionnaire (1988) with confirmed reliability and validity. The paired t-test and Friedman test were used for analysis of data by SPSS software. Results: 56.1% of respondents were male and 43.9% were female. The mean age of patients was 33 ± 11.91; 68.9% of patients were from Ardebil and 27.3% of them had a bachelor's degree or higher. The results showed that there is a significant difference between the perceptions and expectations of the patients about the five dimensions of service quality (tangibility, reliability, assurance, responsiveness, and empathy) in the studied clinic (p < 0.001). The highest mean gap and the minimum gap were related to empathy and assurance, respectively. Conclusion: Regarding the observed differences in quality, managers and planners have to evaluate their performance more accurately in order to plan better for future actions. In fact, any effort to reduce the gap between the expectations and perceptions of patients results in greater satisfaction, loyalty and further visits to the organization.
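
    The SERVQUAL gap computation underlying the result, perception minus expectation per dimension, is simple to state. A minimal sketch with made-up ratings, not the study's data:

```python
from statistics import mean

def servqual_gaps(perceptions, expectations):
    """SERVQUAL gap score per dimension: mean perception rating minus
    mean expectation rating. A negative gap means the service falls
    short of what patients expect."""
    return {dim: mean(perceptions[dim]) - mean(expectations[dim])
            for dim in perceptions}

# Hypothetical 1-5 ratings for two of the five dimensions:
p = {"empathy": [3, 2, 3], "assurance": [4, 4, 5]}
e = {"empathy": [5, 5, 4], "assurance": [5, 4, 5]}
gaps = servqual_gaps(p, e)
```

    In this toy example empathy shows the largest (most negative) gap and assurance the smallest, mirroring the ordering the study reports; a paired t-test on the per-patient differences then checks whether each gap is significant.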

  15. Centromeric DNA characterization in the model grass Brachypodium distachyon provides insights on the evolution of the genus.

    Science.gov (United States)

    Li, Yinjia; Zuo, Sheng; Zhang, Zhiliang; Li, Zhanjie; Han, Jinlei; Chu, Zhaoqing; Hasterok, Robert; Wang, Kai

    2018-03-01

    Brachypodium distachyon is a well-established model monocot plant, and its small and compact genome has been used as an accurate reference for the much larger and often polyploid genomes of cereals such as Avena sativa (oats), Hordeum vulgare (barley) and Triticum aestivum (wheat). Centromeres are indispensable functional units of chromosomes and they play a core role in genome polyploidization events during evolution. As the Brachypodium genus contains about 20 species that differ significantly in terms of their basic chromosome numbers, genome size, ploidy levels and life strategies, studying their centromeres may provide important insight into the structure and evolution of the genome in this interesting and important genus. In this study, we isolated the centromeric DNA of the B. distachyon reference line Bd21 and characterized its composition via the chromatin immunoprecipitation of the nucleosomes that contain the centromere-specific histone CENH3. We revealed that the centromeres of Bd21 have the features of typical multicellular eukaryotic centromeres. Strikingly, these centromeres contain relatively few centromeric satellite DNAs; in particular, the centromere of chromosome 5 (Bd5) consists of only ~40 kb. Moreover, the centromeric retrotransposons in B. distachyon (CRBds) are evolutionarily young. These transposable elements are located both within and adjacent to the CENH3 binding domains, and have similar compositions. Moreover, based on the presence of CRBds in the centromeres, the species in this study can be grouped into two distinct lineages. This may provide new evidence regarding the phylogenetic relationships within the Brachypodium genus. © 2018 The Authors The Plant Journal © 2018 John Wiley & Sons Ltd.

  16. MODEL REQUEST FOR PROPOSALS TO PROVIDE ENERGY AND OTHER ATTRIBUTES FROM AN OFFSHORE WIND POWER PROJECT

    Energy Technology Data Exchange (ETDEWEB)

    Jeremy Firestone; Dawn Kurtz Crompton

    2011-10-22

    This document provides a model RFP for new generation. The 'base' RFP is for a single-source offshore wind RFP. Required modifications are noted should a state or utility seek multi-source bids (e.g., all renewables or all sources). The model is premised on proposals meeting threshold requirements (e.g., a MW range of generating capacity and a range in terms of years), RFP issuer preferences (e.g., likelihood of commercial operation by a date certain, price certainty, and reduction in congestion), and evaluation criteria, along with a series of plans (e.g., site, environmental effects, construction, community outreach, interconnection, etc.). The Model RFP places the most weight on project risk (45%), followed by project economics (35%), and environmental and social considerations (20%). However, if a multi-source RFP is put forward, the sponsor would need to either add per-MWh technology-specific, life-cycle climate (CO2), environmental and health impact costs to bid prices under the 'Project Economics' category or it should increase the weight given to the 'Environmental and Social Considerations' category.
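
    The 45/35/20 weighting above translates directly into a scoring function. A minimal sketch; the weights come from the Model RFP, but the 0-100 category scale and the bid values are assumptions for illustration:

```python
# Category weights from the Model RFP: project risk 45%,
# project economics 35%, environmental and social 20%.
WEIGHTS = {"project_risk": 0.45,
           "project_economics": 0.35,
           "environmental_social": 0.20}

def score_bid(category_scores, weights=WEIGHTS):
    """Weighted total for one proposal; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[cat] * category_scores[cat] for cat in weights)

# Hypothetical bids scored 0-100 per category:
bid_a = {"project_risk": 90, "project_economics": 70, "environmental_social": 60}
bid_b = {"project_risk": 60, "project_economics": 90, "environmental_social": 80}
```

    With these numbers bid A wins despite weaker economics and environmental scores, showing how the 45% risk weight dominates; a multi-source RFP would adjust the weights or add per-MWh life-cycle costs as the document describes.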

  17. Creating a market: an economic analysis of the purchaser-provider model.

    Science.gov (United States)

    Shackley, P; Healey, A

    1993-09-01

    The focus of this paper is the extent to which the purchaser-provider split and the creation of a market in the provision of health care can be expected to bring about greater efficiency within the new NHS. The starting point is a theoretical discussion of markets and competition. In particular, emphasis is placed upon the economic model of perfect competition. It is argued that because of the existence of externalities, uncertainty and a lack of perfect information, an unregulated market in health care will almost certainly fail. In view of this, the imperfect provider markets of monopoly and contestable markets, which are of particular relevance to health care, are discussed. A description of the new health care market and the principal actors within it is followed by an evaluation of the new health care market. It is argued that in view of the restrictions to competition that exist between providers, some form of price regulation will be necessary to prevent monopolistic behaviour in the hospital sector. Regulation of purchasers is also suggested as a means of improving efficiency. It is concluded that competition may be a necessary condition for increased efficiency in health care provision, but is not sufficient in itself. Other incentives in the hospital sector are necessary to assist the market process and to enhance its impact on efficiency.

  18. Little Evidence Exists To Support The Expectation That Providers Would Consolidate To Enter New Payment Models.

    Science.gov (United States)

    Neprash, Hannah T; Chernew, Michael E; McWilliams, J Michael

    2017-02-01

    Provider consolidation has been associated with higher health care prices and spending. The prevailing wisdom is that payment reform will accelerate consolidation, especially between physicians and hospitals and among physician groups, as providers position themselves to bear financial risk for the full continuum of patient care. Drawing on data from a number of sources from 2008 onward, we examined the relationship between Medicare's accountable care organization (ACO) programs and provider consolidation. We found that consolidation was under way in the period 2008-10, before the Affordable Care Act (ACA) established the ACO programs. While the number of hospital mergers and the size of specialty-oriented physician groups increased after the ACA was passed, we found minimal evidence that consolidation was associated with ACO penetration at the market level or with physicians' participation in ACOs within markets. We conclude that payment reform has been associated with little acceleration in consolidation in addition to trends already under way, but there is evidence of potential defensive consolidation in response to new payment models. Project HOPE—The People-to-People Health Foundation, Inc.

  19. The Nordic welfare model providing energy transition? A political geography approach to the EU RES directive

    International Nuclear Information System (INIS)

    Westholm, Erik; Beland Lindahl, Karin

    2012-01-01

    The EU Renewable Energy Strategy (RES) Directive requires that each member state obtain 20% of its energy supply from renewable sources by 2020. If fully implemented, this implies major changes in institutions, infrastructure, land use, and natural resource flows. This study applies a political geography perspective to explore the transition to renewable energy use in the heating and cooling segment of the Swedish energy system, 1980–2010. The Nordic welfare model, which developed mainly after the Second World War, required relatively uniform, standardized local and regional authorities functioning as implementation agents for national politics. Since 1980, the welfare orientation has gradually been complemented by competition politics promoting technological change, innovation, and entrepreneurship. This combination of welfare state organization and competition politics provided the dynamics necessary for energy transition, which occurred in a semi-public sphere of actors at various geographical scales. However, our analysis suggests that this was partly an unintended policy outcome, since it was based on a welfare model with no significant energy aims. Our case study suggests that state organization plays a significant role, and that the EU RES Directive implementation will be uneven across Europe, reflecting various welfare models with different institutional pre-requisites for energy transition. - Highlights: ► We explore the energy transition in the heating/cooling sector in Sweden 1980–2000. ► The role of the state is studied from a political geography perspective. ► The changing welfare model offered the necessary institutional framework. ► Institutional arrangements stand out as central to explain the relative success. ► The use of renewables in EU member states will continue to vary significantly.

  20. Ecosystem Services Provided by Agricultural Land as Modeled by Broad Scale Geospatial Analysis

    Science.gov (United States)

    Kokkinidis, Ioannis

    Agricultural ecosystems provide multiple services including food and fiber provision, nutrient cycling, soil retention and water regulation. Objectives of the study were to identify and quantify a selection of ecosystem services provided by agricultural land, using existing geospatial tools and preferably free and open source data, such as the Virginia Land Use Evaluation System (VALUES), the North Carolina Realistic Yield Expectations (RYE) database, and the land cover datasets NLCD and CDL. Furthermore I sought to model tradeoffs between provisioning and other services. First I assessed the accuracy of agricultural land in NLCD and CDL over a four county area in eastern Virginia using cadastral parcels. I uncovered issues concerning the definition of agricultural land. The area and location of agriculture saw little change in the 19 years studied. Furthermore all datasets have significant errors of omission (11.3 to 95.1%) and commission (0 to 71.3%). Location of agriculture was used with spatial crop yield databases I created and combined with models I adapted to calculate baseline values for plant biomass, nutrient composition and requirements, land suitability for and potential production of biofuels and the economic impact of agriculture for the four counties. The study area was then broadened to cover 97 counties in eastern Virginia and North Carolina, investigating the potential for increased regional grain production through intensification and extensification of agriculture. Predicted yield from geospatial crop models was compared with produced yield from the NASS Survey of Agriculture. Area of most crops in CDL was similar to that in the Survey of Agriculture, but a yield gap is present for most years, partially due to weather, thus indicating potential for yield increase through intensification. Using simple criteria I quantified the potential to extend agriculture in high yield land in other uses and modeled the changes in erosion and runoff should
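
    The yield-gap comparison described above, between model-predicted and survey-reported yields, reduces to a simple ratio. A minimal sketch; the numbers are illustrative, not the study's data:

```python
def yield_gap_fraction(predicted, produced):
    """Fraction of the model-predicted yield that was not realized.
    A positive gap signals room for intensification; a negative gap
    means actual production exceeded the model prediction (e.g., in
    an unusually favorable weather year)."""
    if predicted <= 0:
        raise ValueError("predicted yield must be positive")
    return (predicted - produced) / predicted
```

    Applied county by county, this statistic separates areas where intensification could raise production from areas already near their modeled potential.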

  1. IAEA technical meeting: Assess and co-ordinate modelling needs and data providers. Summary report

    International Nuclear Information System (INIS)

    Clark, R.E.H.

    2004-05-01

    This report briefly describes the proceedings, conclusions and recommendations of the Technical Meeting to 'Assess and Co-ordinate Modelling Needs and Data Providers', held on 4-5 December 2003. Eight international experts on atomic and molecular data related to fusion energy research activities participated in the meeting. Each participant reviewed the current status of their own speciality and current lines of research as well as anticipated needs in new data for nuclear fusion energy research. Current CRPs on related topics were reviewed. In light of current research activities and anticipated data needs for fusion, a detailed set of tasks appropriate for a new CRP was developed. This meeting completely fulfilled the specified goals. (author)

  2. A model to compare a defined benefit pension fund with a defined contribution provident fund

    Directory of Open Access Journals (Sweden)

    J.M. Nevin

    2003-12-01

    Full Text Available During 1994 universities and certain other institutions were given the option of setting up private retirement funds as an alternative to the AIPF. Because of the underfunding of the AIPF, only a substantially reduced Actuarial Reserve Value could be transferred to the new fund on behalf of each member. Employees at these institutions had to make the difficult decision of whether to remain members of the AIPF or to join a new fund. Several institutions created defined contribution funds as an alternative to the AIPF. In such funds the member carries the investment risk, and most institutions felt the need to provide some form of top-up of the Transfer Value. A simple mathematical model is formulated to aid in the comparison of expected retirement benefits under the AIPF and a private fund, and to investigate the management problem of distributing additional top-up funds fairly amongst the various age groups within the fund.
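
    The comparison such a model makes can be sketched at its simplest: a defined benefit pension is a salary-and-service formula with the employer bearing the investment risk, while a defined contribution fund accumulates contributions at an uncertain return borne by the member. The accrual rate and returns below are illustrative assumptions, not the AIPF's actual rules:

```python
def db_annual_pension(final_salary, years_service, accrual=1.0 / 60.0):
    """Defined benefit: the pension is fixed by formula and does not
    depend on investment performance."""
    return final_salary * years_service * accrual

def dc_fund_value(annual_contribution, years, annual_return):
    """Defined contribution: contributions accumulate with investment
    returns; the member carries the investment risk."""
    value = 0.0
    for _ in range(years):
        value = (value + annual_contribution) * (1.0 + annual_return)
    return value
```

    Running the defined contribution accumulation over a range of assumed returns, and converting the fund into an equivalent annuity, is the kind of comparison the model supports when deciding how to distribute top-up funds across age groups.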

  3. Accurate x-ray spectroscopy

    International Nuclear Information System (INIS)

    Deslattes, R.D.

    1987-01-01

    Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Ions having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in the measurement of these spectra. This report aims to provide background about these spectroscopic limitations and to discuss how accelerator operations may be selected to permit attaining intrinsically limited data

  4. A stochastic simulation model for reliable PV system sizing providing for solar radiation fluctuations

    International Nuclear Information System (INIS)

    Kaplani, E.; Kaplanis, S.

    2012-01-01

    Highlights: ► Solar radiation data for European cities follow the Extreme Value or Weibull distribution. ► Simulation model for the sizing of SAPV systems based on energy balance and stochastic analysis. ► Simulation of PV Generator-Loads-Battery Storage System performance for all months. ► Minimum peak power and battery capacity required for reliable SAPV sizing for various European cities. ► Peak power and battery capacity reduced by more than 30% for operation at a 95% success rate. -- Abstract: The large fluctuations observed in daily solar radiation profiles strongly affect the reliability of PV system sizing. Increasing the reliability of the PV system requires higher installed peak power (Pm) and larger battery storage capacity (CL). This leads to increased costs, and makes PV technology less competitive. This research paper presents a new stochastic simulation model for stand-alone PV systems, developed to determine the minimum installed Pm and CL for the PV system to be energy independent. The stochastic simulation model developed makes use of knowledge acquired from an in-depth statistical analysis of the solar radiation data for the site, and simulates the energy delivered, the excess energy burnt, the load profiles and the state of charge of the battery system for the month the sizing is applied, and the PV system performance for the entire year. The simulation model provides the user with values for the autonomy factor d, simulating PV performance in order to determine the minimum Pm and CL depending on the requirements of the application, i.e. operation with critical or non-critical loads. The model makes use of NASA’s Surface meteorology and Solar Energy database for the years 1990–2004 for various cities in Europe with a different climate. The results obtained with this new methodology indicate a substantial reduction in installed peak power and battery capacity, both for critical and non-critical operation, when compared to
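
    The day-by-day energy balance at the heart of such a simulation can be sketched as follows. This is a simplified illustration that ignores inverter and battery efficiencies a full model would include:

```python
def simulate_soc(daily_pv_kwh, daily_load_kwh, capacity_kwh, initial_soc_kwh):
    """Daily energy balance of a stand-alone PV system: a surplus
    charges the battery (energy beyond capacity is burnt), a deficit
    discharges it. Returns the state-of-charge trace and the number
    of days on which the load could not be fully served."""
    soc, trace, failures = initial_soc_kwh, [], 0
    for pv, load in zip(daily_pv_kwh, daily_load_kwh):
        soc += pv - load
        if soc > capacity_kwh:
            soc = capacity_kwh      # excess energy burnt
        if soc < 0.0:
            failures += 1           # load not met on this day
            soc = 0.0
        trace.append(soc)
    return trace, failures
```

    Feeding this balance with daily radiation values drawn from the fitted Extreme Value or Weibull distribution, and searching for the smallest peak power and capacity that keep the failure count within the target (e.g., a 95% success rate), is the sizing loop the abstract describes.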

  5. Cameroon mid-level providers offer a promising public health dentistry model

    Directory of Open Access Journals (Sweden)

    Achembong Leo

    2012-11-01

    Full Text Available Background: Oral health services are inadequate and unevenly distributed in many developing countries, particularly those in sub-Saharan Africa. Rural areas in these countries, and poorer sections of the population in urban areas, often lack access to oral health services, mainly because of a significant shortage of dentists and the high cost of care. We reviewed Cameroon's experience with deploying a mid-level cadre of oral health professionals and the feasibility of establishing a more formal and predictable role for these health workers. We anticipate that a task-shifting approach to the provision of dental care will significantly improve the uneven distribution of oral health services, particularly in the rural areas of Cameroon, which are currently served by only 3% of the country's dentists. Methods: The setting of this study was the Cameroon Baptist Convention Health Board (BCHB), which has four dentists and 42 mid-level providers. De-identified data were collected manually from the registries of 10 Baptist Convention clinics located in six of Cameroon's 10 regions, entered into Excel and then imported into STATA. A retrospective abstraction of all patient-visit entries was performed, starting from October 2010 and going back in time until 1500 visits had been extracted from each clinic. Results: This study showed that mid-level providers in BCHB clinics offer a full scope of dental work across the 10 clinics, with the exception of treatment for major facial injuries. Mid-level providers alone performed 93.5% of all extractions, 87.5% of all fillings, 96.5% of all root canals, 97.5% of all cleanings, and 98.1% of all dentures. The dentists also typically played a teaching role in training the mid-level providers. Conclusions: The Ministry of Health in Cameroon has an opportunity to learn from the BCHB model to expand access to oral health care across the country. This study shows the benefits of using a simple, workable, low

  6. OpenClimateGIS - A Web Service Providing Climate Model Data in Commonly Used Geospatial Formats

    Science.gov (United States)

    Erickson, T. A.; Koziol, B. W.; Rood, R. B.

    2011-12-01

    The goal of the OpenClimateGIS project is to make climate model datasets readily available in the commonly used, modern geospatial formats used by GIS software, browser-based mapping tools, and virtual globes. The climate modeling community typically stores climate data in multidimensional gridded formats capable of efficiently storing large volumes of data (such as netCDF and GRIB), while the geospatial community typically uses flexible vector and raster formats that are suited to storing small volumes of data (relative to the multidimensional gridded formats). OpenClimateGIS addresses this difference in data formats by clipping climate data to user-specified vector geometries (i.e. areas of interest) and translating the gridded data on-the-fly into multiple vector formats. The OpenClimateGIS system does not store climate data archives locally, but rather works in conjunction with external climate archives that expose climate data via the OPeNDAP protocol. OpenClimateGIS provides a RESTful web service API for accessing climate data resources via HTTP, allowing a wide range of applications to access the climate data. The OpenClimateGIS system has been developed using open source development practices, and the source code is publicly available. The project integrates libraries from several other open source projects (including Django, PostGIS, numpy, Shapely, and netcdf4-python). OpenClimateGIS development is supported by a grant from NOAA's Climate Program Office.
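    The core operation the abstract describes, clipping a gridded field to a vector area of interest, amounts to keeping only the grid cells whose centers fall inside the polygon. The sketch below illustrates that idea in pure Python (it is not the OpenClimateGIS API; the grid, values and polygon are made up):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def clip_grid(lats, lons, values, poly):
    """Replace grid cells whose centers fall outside `poly` with None."""
    return [[values[i][j] if point_in_polygon(lon, lat, poly) else None
             for j, lon in enumerate(lons)]
            for i, lat in enumerate(lats)]

# Hypothetical 2x3 grid of temperatures and a rectangular area of interest
lats = [40.0, 41.0]
lons = [-3.0, -2.0, -1.0]
temps = [[281.0, 282.0, 283.0],
         [284.0, 285.0, 286.0]]
aoi = [(-3.5, 39.5), (-1.5, 39.5), (-1.5, 41.5), (-3.5, 41.5)]  # lon/lat vertices
print(clip_grid(lats, lons, temps, aoi))
```

    A production system would of course operate on netCDF arrays and real geometries (e.g. via Shapely), but the clipping logic is the same: a spatial mask derived from the user's geometry applied to the gridded field before format translation.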

  7. A Novel Stackelberg-Bertrand Game Model for Pricing Content Provider

    Directory of Open Access Journals (Sweden)

    Cheng Zhang

    2015-11-01

    Full Text Available With the popularity of smart devices such as smartphones and tablets, content that traditionally would be viewed on a personal computer can now also be viewed on these devices. The demand for content is thus increasing year by year, which brings content providers (CPs) substantial revenue from either users' subscriptions or advertising. On the other hand, Internet service providers (ISPs), who keep investing in network technology and capacity to support the huge traffic generated by content, do not benefit directly from the content traffic. One option for ISPs is to charge CPs in order to share the revenue from this huge content traffic; ISPs then have sufficient incentive to invest in network infrastructure to improve quality of service (QoS), which eventually benefits CPs and users. This paper presents a novel economic model, formulated as a Stackelberg-Bertrand game, to capture the interactions and competition among ISPs, CPs and users when ISPs charge CPs. A generic user demand function is assumed to capture the sensitivity of demand to the prices of ISPs and CPs. The numerical results show that the price elasticity of the ISP and the CP plays an important part in the payoffs of the ISP and the CP.
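    The flavor of such a leader-follower pricing game can be shown with a deliberately simplified single-ISP, single-CP version under linear demand; this is an illustration, not the paper's model, and the demand form and parameter values are assumptions. The ISP (leader) sets the per-unit charge q to the CP; the CP (follower) then sets the user price p; backward induction gives the CP's best response first, then the ISP's optimum:

```python
def cp_best_response(q, d0, alpha, beta):
    """CP picks user price p to maximize (p - q) * D with D = d0 - alpha*p - beta*q.
    First-order condition gives p* = (d0 + (alpha - beta) * q) / (2 * alpha)."""
    return (d0 + (alpha - beta) * q) / (2 * alpha)

def isp_profit(q, d0, alpha, beta):
    """ISP earns q per unit of demand, anticipating the CP's best response."""
    p = cp_best_response(q, d0, alpha, beta)
    demand = d0 - alpha * p - beta * q
    return q * demand

# Hypothetical demand parameters
d0, alpha, beta = 100.0, 2.0, 1.0

# Leader optimum found by a coarse grid search over q ...
q_grid = [i * 0.01 for i in range(5000)]
q_star = max(q_grid, key=lambda q: isp_profit(q, d0, alpha, beta))

# ... which matches the closed form q* = d0 / (2 * (alpha + beta))
print(round(q_star, 2), round(d0 / (2 * (alpha + beta)), 2))
```

    The paper's actual model layers Bertrand competition among multiple ISPs and CPs on top of this leader-follower structure, but the backward-induction solution method is the same.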

  8. Molecular modeling of human neutral sphingomyelinase provides insight into its molecular interactions.

    Science.gov (United States)

    Dinesh; Goswami, Angshumala; Suresh, Panneer Selvam; Thirunavukkarasu, Chinnasamy; Weiergräber, Oliver H; Kumar, Muthuvel Suresh

    2011-01-01

    The neutral sphingomyelinase (N-SMase) is considered a major candidate for mediating the stress-induced production of ceramide, and it plays an important role in cell-cycle arrest, apoptosis, inflammation, and eukaryotic stress responses. Recent studies have identified a small region at the very N-terminus of the 55 kDa tumour necrosis factor receptor (TNF-R55), designated the neutral sphingomyelinase activating domain (NSD) that is responsible for the TNF-induced activation of N-SMase. There is no direct association between TNF-R55 NSD and N-SMase; instead, a protein named factor associated with N-SMase activation (FAN) has been reported to couple the TNF-R55 NSD to N-SMase. Since the three-dimensional fold of N-SMase is still unknown, we have modeled the structure using the protein fold recognition and threading method. Moreover, we propose models for the TNF-R55 NSD as well as the FAN protein in order to study the structural basis of N-SMase activation and regulation. Protein-protein interaction studies suggest that FAN is crucially involved in mediating TNF-induced activation of the N-SMase pathway, which in turn regulates mitogenic and proinflammatory responses. Inhibition of N-SMase may lead to reduction of ceramide levels and hence may provide a novel therapeutic strategy for inflammation and autoimmune diseases. Molecular dynamics (MD) simulations were performed to check the stability of the predicted model and protein-protein complex; indeed, stable RMS deviations were obtained throughout the simulation. Furthermore, in silico docking of low molecular mass ligands into the active site of N-SMase suggests that His135, Glu48, Asp177, and Asn179 residues play crucial roles in this interaction. Based on our results, these ligands are proposed to be potent and selective N-SMase inhibitors, which may ultimately prove useful as lead compounds for drug development.

  9. Accurate and rapid modeling of iron-bleomycin-induced DNA damage using tethered duplex oligonucleotides and electrospray ionization ion trap mass spectrometric analysis.

    Science.gov (United States)

    Harsch, A; Marzilli, L A; Bunt, R C; Stubbe, J; Vouros, P

    2000-05-01

    Bleomycin B(2) (BLM) in the presence of iron [Fe(II)] and O(2) catalyzes single-stranded (ss) and double-stranded (ds) cleavage of DNA. Electrospray ionization ion trap mass spectrometry was used to monitor these cleavage processes. Two duplex oligonucleotides containing an ethylene oxide tether between both strands were used in this investigation, allowing facile monitoring of all ss and ds cleavage events. A sequence for site-specific binding and cleavage by Fe-BLM was incorporated into each analyte. One of these core sequences, GTAC, is a known hot-spot for ds cleavage, while the other sequence, GGCC, is a hot-spot for ss cleavage. Incubation of each oligonucleotide under anaerobic conditions with Fe(II)-BLM allowed detection of the non-covalent ternary Fe-BLM/oligonucleotide complex in the gas phase. Cleavage studies were then performed utilizing O(2)-activated Fe(II)-BLM. No work-up or separation steps were required, and direct MS and MS/MS analyses of the crude reaction mixtures confirmed sequence-specific Fe-BLM-induced cleavage. Comparison of the cleavage patterns for both oligonucleotides revealed sequence-dependent preferences for ss and ds cleavages in accordance with previously established gel electrophoresis analysis of hairpin oligonucleotides. This novel methodology allowed direct, rapid and accurate determination of cleavage profiles of model duplex oligonucleotides after exposure to activated Fe-BLM.

  10. Using a Time-Driven Activity-Based Costing Model To Determine the Actual Cost of Services Provided by a Transgenic Core.

    Science.gov (United States)

    Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J

    2018-03-01

    Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
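    The time-driven ABC calculation described above needs only two parameters per activity: the minutes it takes and the labor cost per minute. The sketch below reproduces that arithmetic; the per-minute cost and the split of minutes across services are hypothetical, while the totals of 10,645 supplied vs. 8,400 available minutes per week come from the abstract.

```python
def service_cost(minutes, cost_per_minute):
    """Time-driven ABC: cost of a service = time required x unit labor cost."""
    return minutes * cost_per_minute

# Hypothetical unit cost: fully loaded salary divided by paid minutes
cost_per_minute = 0.80  # USD per labor minute (assumed)

# Hypothetical weekly split of the labor the services demand
weekly_minutes = {
    "DNA microinjection": 3200,
    "ES-cell microinjection": 2800,
    "embryo transfer": 2400,
    "in vitro fertilization": 2245,
}

total_supplied = sum(weekly_minutes.values())  # 10,645 min/wk (from the abstract)
practical_capacity = 8400                      # min/wk (from the abstract)

for name, minutes in weekly_minutes.items():
    print(f"{name}: ${service_cost(minutes, cost_per_minute):,.2f}/wk")
print("overloaded:", total_supplied > practical_capacity)
```

    Comparing `total_supplied` against `practical_capacity` is exactly the capacity check the authors used to conclude that the team was overloaded and additional labor was needed.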

  11. 9: ADAPTATION OF PREGNANCY RISK ASSESSMENT MONITORING SYSTEM (PRAMS) AND PROVIDE A MODEL ON IT

    Science.gov (United States)

    Kharaghani, Roghieh; Shariati, Mohammad; Keramat, Afsaneh; Yunesian, Masud; Moghisi, Alireza

    2017-01-01

    Background and aims: A surveillance system helps to detect epidemics and the pattern of problem incidence in the community, and is an essential part of the evidence-based decision-making process. This study aimed to adapt PRAMS and to propose a model based on it. Methods: This study was performed in 7 steps, as follows. Surveillance systems in pregnancy were reviewed and an appropriate system was selected for Iran by the nominal group technique. Two comparative studies were conducted to determine the similarities and differences between Iran and the selected community. The PRAMS method and system were adapted based on the results of the comparative studies and expert opinions. The study tool was adapted. A field trial was conducted to assess the feasibility of the adapted PRAMS based on the TELOS (technical, economic, legal, operational, and schedule) model in the city of Shahriar, located in the west of Tehran, and to compare data collection methods. Then, based on the results and consultation with the relevant executive managers, the final model of PRAMS was suggested for the Iranian health system. Results: The review of surveillance systems in pregnancy identified six models. The results of the nominal group technique showed that the appropriate model for Iran is PRAMS. Based on the comparative studies and expert opinions, the appropriate system and method for the program were as follows: the sampling frame was composed of data in thyroid screening forms and hospital records, the sampling method was systematic, data collection methods were home- and phone-based surveys, and participants were women within 2 to 6 months postpartum who had had a live or still birth. The study tool was adapted. Thirty-seven health volunteers collected the data in this study (technical feasibility). Each completed home-based questionnaire cost 2.45 USD and each phone-based questionnaire 1.89 USD. Many indices were achieved from the study, which were worth much more than the expenses (economic feasibility). The project was consistent with legal requirements

  12. How Accurately Do Maize Crop Models Simulate the Interactions of Atmospheric CO2 Concentration Levels With Limited Water Supply on Water Use and Yield?

    Science.gov (United States)

    Durand, Jean-Louis; Delusca, Kenel; Boote, Ken; Lizaso, Jon; Manderscheid, Remy; Weigel, Hans Johachim; Ruane, Alexander Clark; Rosenzweig, Cynthia E.; Jones, Jim; Ahuja, Laj

    2017-01-01

    This study assesses the ability of 21 crop models to capture the impact of elevated CO2 concentration [CO2] on maize yield and water use, as measured in a 2-year Free Air Carbon dioxide Enrichment experiment conducted at the Thunen Institute in Braunschweig, Germany (Manderscheid et al. 2014). Data for the ambient-[CO2] and irrigated treatments were provided to the 21 models for calibrating plant traits, including weather, soil and management data as well as yield, grain number, above-ground biomass, leaf area index, nitrogen concentration in biomass and grain, water use and soil water content. The models differed in their representation of carbon assimilation and evapotranspiration processes. The models reproduced the absence of a yield response to elevated [CO2] under well-watered conditions, as well as the impact of water deficit at ambient [CO2], with 50 percent of models within a range of ±1 Mg ha^-1 around the mean. The bias of the median of the 21 models was less than 1 Mg ha^-1. However, under water deficit in one of the two years, the models captured only 30 percent of the exceptionally high yield enhancement by elevated [CO2] that was observed. Furthermore, the ensemble of models was unable to simulate the very low soil water content at anthesis and the increase in soil water and grain number brought about by elevated [CO2] under dry conditions. Overall, we found that models with explicit stomatal control on transpiration tended to perform better. Our results highlight the need for model improvement with respect to simulating transpirational water use and its impact on water status during the kernel-set phase.

  13. Integrated model for providing tactical emergency medicine support (TEMS): analysis of 120 tactical situations.

    Science.gov (United States)

    Vainionpää, T; Peräjoki, K; Hiltunen, T; Porthan, K; Taskinen, A; Boyd, J; Kuisma, M

    2012-02-01

    Various models for organising tactical emergency medicine support (TEMS) in law enforcement operations exist. In Helsinki, TEMS is organised as an integral part of emergency medical service (EMS) and applied in hostage, siege, bomb threat and crowd control situations and in other tactical situations after police request. Our aim was to analyse TEMS operations, patient profile, and the level of on-site care provided. We conducted a retrospective cohort study of TEMS operations in Helsinki from 2004 to 2009. Data were retrieved from EMS, hospital and dispatching centre files and from TEMS reports. One hundred twenty TEMS operations were analysed. Median time from dispatching to arrival on scene was 10 min [Interquartile Range (IQR) 7-14]. Median duration of operations was 41 min (IQR 19-63). Standby was the only activity in 72 operations, four patients were dead on arrival, 16 requests were called off en route and patient examination or care was needed in 28 operations. Twenty-eight patients (records retrieved) were alive on arrival and were classified as trauma (n = 12) or medical (n = 16). Of traumas, two sustained a gunshot wound, one sustained a penetrating abdominal wound, three sustained medium severity injuries and nine sustained minor injuries. There was neither on-scene nor in-hospital mortality among patients who were alive on arrival. The level of on-site care performed was basic life support in all cases. The results showed that TEMS integrated to daily EMS services including safe zone working only was a feasible, rapid and efficient way to provide medical support to law enforcement operations. © 2011 The Authors Acta Anaesthesiologica Scandinavica © 2011 The Acta Anaesthesiologica Scandinavica Foundation.

  14. Estimated Nutritive Value of Low-Price Model Lunch Sets Provided to Garment Workers in Cambodia

    Directory of Open Access Journals (Sweden)

    Jan Makurat

    2017-07-01

    Full Text Available Background: The establishment of staff canteens is expected to improve the nutritional situation of Cambodian garment workers. The objective of this study is to assess the nutritive value of low-price model lunch sets provided at a garment factory in Phnom Penh, Cambodia. Methods: Exemplary lunch sets were served to female workers through a temporary canteen at a garment factory in Phnom Penh. Dish samples were collected repeatedly to examine mean serving sizes of individual ingredients. Food composition tables and NutriSurvey software were used to assess mean amounts and contributions to recommended dietary allowances (RDAs or adequate intake of energy, macronutrients, dietary fiber, vitamin C (VitC, iron, vitamin A (VitA, folate and vitamin B12 (VitB12. Results: On average, lunch sets provided roughly one third of RDA or adequate intake of energy, carbohydrates, fat and dietary fiber. Contribution to RDA of protein was high (46% RDA. The sets contained a high mean share of VitC (159% RDA, VitA (66% RDA, and folate (44% RDA, but were low in VitB12 (29% RDA and iron (20% RDA. Conclusions: Overall, lunches satisfied recommendations of caloric content and macronutrient composition. Sets on average contained a beneficial amount of VitC, VitA and folate. Adjustments are needed for a higher iron content. Alternative iron-rich foods are expected to be better suited, compared to increasing portions of costly meat/fish components. Lunch provision at Cambodian garment factories holds the potential to improve food security of workers, approximately at costs of <1 USD/person/day at large scale. Data on quantitative total dietary intake as well as physical activity among workers are needed to further optimize the concept of staff canteens.
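    The nutrient assessment reported above is a percent-of-RDA calculation per lunch set. The sketch below reproduces that arithmetic; the serving amounts and RDA values are hypothetical placeholders chosen only to roughly echo the magnitudes in the abstract (e.g. protein near 46% RDA, iron near 20% RDA), not the study's data.

```python
def pct_rda(amount, rda):
    """Contribution of a serving to the recommended dietary allowance, in percent."""
    return 100.0 * amount / rda

# Hypothetical content of one lunch set and hypothetical RDA values
serving = {"energy_kcal": 700, "protein_g": 21, "iron_mg": 3.6, "vitC_mg": 120}
rda = {"energy_kcal": 2100, "protein_g": 46, "iron_mg": 18.0, "vitC_mg": 75}

for nutrient in serving:
    print(f"{nutrient}: {pct_rda(serving[nutrient], rda[nutrient]):.0f}% RDA")
```

    In practice such figures come from weighed serving sizes combined with food composition tables, as the authors did with NutriSurvey; the percentage calculation itself is the simple ratio above.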

  15. An integrated Biophysical CGE model to provide Sustainable Development Goal insights

    Science.gov (United States)

    Sanchez, Marko; Cicowiez, Martin; Howells, Mark; Zepeda, Eduardo

    2016-04-01

    Future projected changes in the energy system will inevitably result in changes to the level of appropriation of environmental resources, particularly land and water, and this will have wider implications for environmental sustainability, and may affect other sectors of the economy. An integrated climate, land, energy and water (CLEW) system will provide useful insights, particularly with regard to the environmental sustainability. However, it will require adequate integration with other tools to detect economic impacts and broaden the scope for policy analysis. A computable general equilibrium (CGE) model is a well suited tool to channel impacts, as detected in a CLEW analysis, onto all sectors of the economy, and evaluate trade-offs and synergies, including those of possible policy responses. This paper will show an application of such integration in a single-country CGE model with the following key characteristics. Climate is partly exogenous (as proxied by temperature and rainfall) and partly endogenous (as proxied by emissions generated by different sectors) and has an impact on endogenous variables such as land productivity and labor productivity. Land is a factor of production used in agricultural and forestry activities which can be of various types if land use alternatives (e.g., deforestation) are to be considered. Energy is an input to the production process of all economic sectors and a consumption good for households. Because it is possible to allow for substitution among different energy sources (e.g. renewable vs non-renewable) in the generation of electricity, the production process of energy products can consider the use of natural resources such as oil and water. Water, data permitting, can be considered as an input into the production process of agricultural sectors, which is particularly relevant in case of irrigation. It can also be considered as a determinant of total factor productivity in hydro-power generation. The integration of a CLEW

  16. A simple and accurate rule-based modeling framework for simulation of autocrine/paracrine stimulation of glioblastoma cell motility and proliferation by L1CAM in 2-D culture.

    Science.gov (United States)

    Caccavale, Justin; Fiumara, David; Stapf, Michael; Sweitzer, Liedeke; Anderson, Hannah J; Gorky, Jonathan; Dhurjati, Prasad; Galileo, Deni S

    2017-12-11

    Glioblastoma multiforme (GBM) is a devastating brain cancer for which there is no known cure. Its malignancy is due to rapid cell division along with high motility and invasiveness of cells into the brain tissue. Simple 2-dimensional laboratory assays (e.g., a scratch assay) commonly are used to measure the effects of various experimental perturbations, such as treatment with chemical inhibitors. Several mathematical models have been developed to aid the understanding of the motile behavior and proliferation of GBM cells. However, many are mathematically complicated, look at multiple interdependent phenomena, and/or use modeling software not freely available to the research community. These attributes make the adoption of models and simulations of even simple 2-dimensional cell behavior an uncommon practice by cancer cell biologists. Herein, we developed an accurate, yet simple, rule-based modeling framework to describe the in vitro behavior of GBM cells that are stimulated by the L1CAM protein using freely available NetLogo software. In our model L1CAM is released by cells to act through two cell surface receptors and a point of signaling convergence to increase cell motility and proliferation. A simple graphical interface is provided so that changes can be made easily to several parameters controlling cell behavior, and behavior of the cells is viewed both pictorially and with dedicated graphs. We fully describe the hierarchical rule-based modeling framework, show simulation results under several settings, describe the accuracy compared to experimental data, and discuss the potential usefulness for predicting future experimental outcomes and for use as a teaching tool for cell biology students. It is concluded that this simple modeling framework and its simulations accurately reflect much of the GBM cell motility behavior observed experimentally in vitro in the laboratory. 
Our framework can be modified easily to suit the needs of investigators interested in other

  17. An algorithm to provide UK global radiation for use with models

    International Nuclear Information System (INIS)

    Hamer, P.J.C.

    1999-01-01

    Decision support systems which include crop growth models require long-term average values of global radiation to simulate future expected growth. Global radiation is rarely available, as there are relatively few meteorological stations with long-term records, and so interpolation between sites is difficult. Global radiation data across a good geographical spread throughout the UK were obtained and sub-divided into ‘coastal’ and ‘inland’ sites. Monthly means of global radiation (S) were extracted and analysed in relation to the irradiance in the absence of atmosphere (So) calculated from site latitude and the time of year. The ratio S/So was fitted to the month of the year (t) and site latitude using a nonlinear fit function in which 90% of the variance was accounted for. An algorithm is presented which provides long-term daily values of global radiation from information on latitude, time of year and whether the site is inland or close to the coast. (author)
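    The quantity So, the irradiation that would arrive in the absence of an atmosphere, follows from standard astronomical formulas for solar declination, sunset hour angle and Sun-Earth distance. The sketch below computes a daily So from latitude and day of year and applies an assumed clearness ratio S/So; the paper's fitted ratio function is not reproduced here, so the 0.45 value is purely illustrative.

```python
import math

def extraterrestrial_daily(lat_deg, day_of_year):
    """Daily solar irradiation on a horizontal plane at the top of the
    atmosphere (MJ/m^2/day), from the standard astronomical formulas."""
    gsc = 1367.0  # solar constant, W/m^2
    phi = math.radians(lat_deg)
    # Solar declination (Cooper's formula)
    delta = math.radians(23.45) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    # Sunset hour angle (clamped for polar day/night)
    ws = math.acos(max(-1.0, min(1.0, -math.tan(phi) * math.tan(delta))))
    # Eccentricity correction for Sun-Earth distance
    e0 = 1 + 0.033 * math.cos(2 * math.pi * day_of_year / 365)
    h0 = (24 * 3600 / math.pi) * gsc * e0 * (
        math.cos(phi) * math.cos(delta) * math.sin(ws)
        + ws * math.sin(phi) * math.sin(delta))
    return h0 / 1e6  # J/m^2 -> MJ/m^2

# Long-term global radiation then follows as S = ratio * So, where S/So is the
# fitted function of month and latitude (an assumed 0.45 is used here).
so = extraterrestrial_daily(52.0, 172)  # near the summer solstice at 52 N
print(round(0.45 * so, 1), "MJ/m^2/day")
```

    The algorithm in the paper replaces the fixed 0.45 with the fitted S/So surface over month and latitude, with separate behavior for coastal and inland sites.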

  18. Accurate control testing for clay liner permeability

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R J

    1991-08-01

    Two series of centrifuge tests were carried out to evaluate the use of centrifuge modelling as a method of accurate control testing of clay liner permeability. The first series used a large 3 m radius geotechnical centrifuge and the second series a small 0.5 m radius machine built specifically for research on clay liners. Two permeability cells were fabricated in order to provide direct data comparisons between the two methods of permeability testing. In both cases, the centrifuge method proved to be effective and efficient, and was found to be free of both the technical difficulties and the leakage risks normally associated with laboratory permeability testing of fine-grained soils. Two materials were tested: a consolidated kaolin clay having an average permeability coefficient of 1.2×10^-9 m/s and a compacted illite clay having a permeability coefficient of 2.0×10^-11 m/s. Four additional tests were carried out to demonstrate that the 0.5 m radius centrifuge could be used for liner performance modelling to evaluate factors such as volumetric water content, compaction method and density, leachate compatibility and other construction effects on liner leakage. The main advantages of centrifuge testing of clay liners are rapid and accurate evaluation of hydraulic properties and realistic stress modelling for performance evaluations. 8 refs., 12 figs., 7 tabs.

  19. Spdef null mice lack conjunctival goblet cells and provide a model of dry eye.

    Science.gov (United States)

    Marko, Christina K; Menon, Balaraj B; Chen, Gang; Whitsett, Jeffrey A; Clevers, Hans; Gipson, Ilene K

    2013-07-01

    Goblet cell numbers decrease within the conjunctival epithelium in drying and cicatrizing ocular surface diseases. Factors regulating goblet cell differentiation in conjunctival epithelium are unknown. Recent data indicate that the transcription factor SAM-pointed domain epithelial-specific transcription factor (Spdef) is essential for goblet cell differentiation in tracheobronchial and gastrointestinal epithelium of mice. Using Spdef(-/-) mice, we determined that Spdef is required for conjunctival goblet cell differentiation and that Spdef(-/-) mice, which lack conjunctival goblet cells, have significantly increased corneal surface fluorescein staining and tear volume, a phenotype consistent with dry eye. Microarray analysis of conjunctival epithelium in Spdef(-/-) mice revealed down-regulation of goblet cell-specific genes (Muc5ac, Tff1, Gcnt3). Up-regulated genes included epithelial cell differentiation/keratinization genes (Sprr2h, Tgm1) and proinflammatory genes (Il1-α, Il-1β, Tnf-α), all of which are up-regulated in dry eye. Interestingly, four Wnt pathway genes were down-regulated. SPDEF expression was significantly decreased in the conjunctival epithelium of Sjögren syndrome patients with dry eye and decreased goblet cell mucin expression. These data demonstrate that Spdef is required for conjunctival goblet cell differentiation and down-regulation of SPDEF may play a role in human dry eye with goblet cell loss. Spdef(-/-) mice have an ocular surface phenotype similar to that in moderate dry eye, providing a new, more convenient model for the disease. Copyright © 2013 American Society for Investigative Pathology. Published by Elsevier Inc. All rights reserved.

  20. The EZ diffusion model provides a powerful test of simple empirical effects.

    Science.gov (United States)

    van Ravenzwaaij, Don; Donkin, Chris; Vandekerckhove, Joachim

    2017-04-01

    Over the last four decades, sequential accumulation models for choice response times have spread through cognitive psychology like wildfire. The most popular style of accumulator model is the diffusion model (Ratcliff Psychological Review, 85, 59-108, 1978), which has been shown to account for data from a wide range of paradigms, including perceptual discrimination, letter identification, lexical decision, recognition memory, and signal detection. Since its original inception, the model has become increasingly complex in order to account for subtle, but reliable, data patterns. The additional complexity of the diffusion model renders it a tool that is only for experts. In response, Wagenmakers et al. (Psychonomic Bulletin & Review, 14, 3-22, 2007) proposed that researchers could use a more basic version of the diffusion model, the EZ diffusion. Here, we simulate experimental effects on data generated from the full diffusion model and compare the power of the full diffusion model and EZ diffusion to detect those effects. We show that the EZ diffusion model, by virtue of its relative simplicity, will sometimes be better able to detect experimental effects than the data-generating full diffusion model.
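    The appeal of EZ diffusion is that it recovers the three core diffusion parameters in closed form from just three summary statistics: proportion correct, the variance of correct response times, and the mean correct response time. A sketch of those closed-form equations (following Wagenmakers et al., 2007) is below; the input values are illustrative.

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """EZ diffusion closed forms (Wagenmakers et al., 2007): recover drift
    rate v, boundary separation a, and nondecision time Ter from proportion
    correct (pc), correct-RT variance (vrt, s^2) and mean correct RT (mrt, s)."""
    if pc in (0.0, 0.5, 1.0):
        raise ValueError("pc must not be 0, 0.5 or 1 (apply an edge correction)")
    L = math.log(pc / (1 - pc))                     # logit of accuracy
    x = L * (L * pc ** 2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x ** 0.25  # drift rate
    a = s ** 2 * L / v                                # boundary separation
    y = -v * a / s ** 2
    mdt = (a / (2 * v)) * (1 - math.exp(y)) / (1 + math.exp(y))  # mean decision time
    ter = mrt - mdt                                   # nondecision time
    return v, a, ter

# Illustrative inputs: 80.2% correct, RT variance 0.112 s^2, mean RT 0.723 s
v, a, ter = ez_diffusion(0.802, 0.112, 0.723)
print(round(v, 3), round(a, 3), round(ter, 3))
```

    Because the mapping is deterministic, an experimental effect on accuracy or RT summaries translates directly into an effect on v, a, or Ter, which is what makes EZ a convenient test statistic in the power comparisons the abstract describes.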

  1. Economic model of a cloud provider operating in a federated cloud

    OpenAIRE

    Goiri Presa, Íñigo; Guitart Fernández, Jordi; Torres Viñals, Jordi

    2012-01-01

    Resource provisioning in Cloud providers is a challenge because of the high variability of load over time. On the one hand, providers can serve most of the requests owning only a restricted amount of resources, but this forces them to reject customers during peak hours. On the other hand, valley hours incur under-utilization of the resources, which forces the providers to increase their prices to be profitable. Federation overcomes these limitations and allows pro...

  2. Activity-based funding model provides foundation for province-wide best practices in renal care.

    Science.gov (United States)

    Levin, Adeera; Lo, Clifford; Noel, Kevin; Djurdjev, Ogjnenka; Amano, Erlyn C

    2013-01-01

    British Columbia has a unique funding model for renal care in Canada. Patient care is delivered through six health authorities, while funding is administered by the Provincial Renal Agency using an activity-based funding model. The model allocates funding based on a schedule of costs for every element of renal care, excluding physician fees. Accountability, transparency of allocation and tracking of outcomes are key features that ensure successful implementation. The model supports province-wide best practices and equitable care and fosters innovation. Since its introduction, the outpatient renal services budget has grown more slowly than the population, while clinical outcomes have been maintained or improved. Copyright © 2013 Longwoods Publishing.

  3. A model of sustainable development of scientific research health institutions, providing high-tech medical care

    Directory of Open Access Journals (Sweden)

    I. Yu. Bedoreva

    2017-01-01

    Full Text Available The issue of sustainability is relevant for all types of businesses and organizations, and long-term development has always been, and remains, one of the most difficult tasks they face. Implementation of the provisions of the international ISO 9000 series of standards has proven effective: these standards concentrate the global experience of achieving the sustained success of organizations and incorporate the rational knowledge and practice accumulated in this field. They not only eliminate technical barriers to collaboration by establishing standardized approaches, but also serve as a valuable source of international experience and ready management solutions, and they have become a practical guide for creating management systems for sustainable development in organizations across different spheres of activity. Problem and purpose. The article presents the author's approach to the problem of the sustainable development of a healthcare organization. The purpose of this article is to examine approaches to managing for the sustained success of organizations and to describe a model of sustainable development applied in scientific research healthcare institutions providing high-tech medical care. Methodology. The study used general scientific methods of empirical and theoretical knowledge, general logical methods and techniques, methods of system analysis, comparison, analogy and generalization, and materials on the development of medical organizations. The main results of our work are, first, the development of a technique for the complex estimation of the activity of scientific research institutions of health care and, second, the deployment of key elements of the management system, which allows the maturity level of the institution's management system to be established in order to identify its strengths and weaknesses, to identify areas for improvement and innovation, and to set priorities for determining the sequence of action when

  4. Supportive Accountability: A model for providing human support for internet and ehealth interventions

    NARCIS (Netherlands)

    Mohr, D.C.; Cuijpers, P.; Lehman, K.A.

    2011-01-01

    The effectiveness of and adherence to eHealth interventions is enhanced by human support. However, human support has largely not been manualized and has usually not been guided by clear models. The objective of this paper is to develop a clear theoretical model, based on relevant empirical literature, that can guide research into human support components of eHealth interventions.

  5. A Distance Education Model for Training Substance Abuse Treatment Providers in Cognitive-Behavioral Therapy

    Science.gov (United States)

    Watson, Donnie W.; Rawson, Richard R.; Rataemane, Solomon; Shafer, Michael S.; Obert, Jeanne; Bisesi, Lorrie; Tanamly, Susie

    2003-01-01

    This paper presents a rationale for the use of a distance education approach in the clinical training of community substance abuse treatment providers. Developing and testing new approaches to the clinical training and supervision of providers is important in the substance abuse treatment field where new information is always available. A…

  6. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Science.gov (United States)

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework. PMID:28522983

  7. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation.

    Science.gov (United States)

    Kim, Ji Chul

    2017-01-01

    Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.

  8. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation

    Directory of Open Access Journals (Sweden)

    Ji Chul Kim

    2017-05-01

    Full Text Available Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.
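    To make the oscillator-network idea concrete, the toy sketch below drives a tonotopic bank of Hopf-type nonlinear oscillators with a two-tone signal and reads out amplitude as a crude salience profile. It is an illustration only, not the gradient frequency neural network used in the paper: the parameter values, the omission of lateral inhibition and inter-oscillator coupling, and the integration scheme are all assumptions.

```python
import numpy as np

fs = 4000.0                                # sample rate (Hz)
dt = 1.0 / fs
t = np.arange(0.0, 1.0, dt)
stimulus = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 330 * t)

freqs = np.linspace(100.0, 500.0, 81)      # tonotopic axis of natural frequencies (Hz)
z = np.zeros(freqs.shape, dtype=complex)   # oscillator states
alpha, beta, forcing = -1.0, -1.0, 0.5     # damping, saturation, drive (assumed values)

for x in stimulus:
    # Exact update for the linear part (stable), Euler step for the rest:
    # dz/dt = z * (alpha + i*2*pi*f + beta*|z|^2) + forcing * x(t)
    z = z * np.exp(dt * (alpha + 2j * np.pi * freqs)) \
        + dt * (beta * np.abs(z) ** 2 * z + forcing * x)

salience = np.abs(z)   # final amplitude as a crude pitch-salience profile
```

    Oscillators tuned near the stimulus frequencies (220 and 330 Hz) resonate and dominate the salience profile, which is the selective-enhancement effect the abstract describes.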

  9. 75 FR 2562 - Publication of Model Notices for Health Care Continuation Coverage Provided Pursuant to the...

    Science.gov (United States)

    2010-01-15

    ... DEPARTMENT OF LABOR Employee Benefits Security Administration Publication of Model Notices for... AGENCY: Employee Benefits Security Administration, Department of Labor. ACTION: Notice of the..., contact the Department's Employee Benefits Security Administration's Benefits Advisors at 1-866-444-3272...

  10. An Analytical Model That Provides Insights into Various C2 Issues

    National Research Council Canada - National Science Library

    Taylor, James G; Neta, Beny; Shugart, Peter A

    2004-01-01

    .... A Lanchester-type model of force-on-force combat that reflects C2 architecture at the platform level is developed through a detailed analysis of the target-engagement cycle for a single typical firer...

  11. An open-loop, physiologic model-based decision support system can provide appropriate ventilator settings

    DEFF Research Database (Denmark)

    Karbing, Dan Stieper; Spadaro, Savino; Dey, Nilanjan

    2018-01-01

    OBJECTIVES: To evaluate the physiologic effects of applying advice on mechanical ventilation by an open-loop, physiologic model-based clinical decision support system. DESIGN: Prospective, observational study. SETTING: University and Regional Hospitals' ICUs. PATIENTS: Varied adult ICU population...

  12. Equipment upgrade - Accurate positioning of ion chambers

    International Nuclear Information System (INIS)

    Doane, Harry J.; Nelson, George W.

    1990-01-01

    Five adjustable clamps were made to firmly support and accurately position the ion chambers that provide signals to the power channels of the University of Arizona TRIGA reactor. The design requirements, fabrication procedure, and installation are described.

  13. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    International Nuclear Information System (INIS)

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-01-01

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the lower
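    The "randomly sampled with trilinear interpolation" step can be sketched as follows. The grid settings and the pO2 table here are placeholders, not values from the study; only the interpolation mechanics are illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3x3 table of simulated mean tumor pO2 (mmHg), indexed by
# blood velocity, vessel proximity, and inflowing pO2 (three settings each).
vel    = np.array([0.5, 1.0, 2.0])    # mm/s  (assumed settings)
prox   = np.array([40.0, 80.0, 160.0])  # um  (assumed settings)
po2_in = np.array([20.0, 40.0, 60.0])   # mmHg (assumed settings)
table  = rng.uniform(2.0, 40.0, size=(3, 3, 3))  # placeholder simulation output

def trilinear(point, axes, values):
    """Trilinearly interpolate `values` at `point` on the rectilinear grid `axes`."""
    idx, w = [], []
    for x, ax in zip(point, axes):
        i = int(np.clip(np.searchsorted(ax, x) - 1, 0, len(ax) - 2))
        idx.append(i)
        w.append((x - ax[i]) / (ax[i + 1] - ax[i]))  # fractional position in cell
    out = 0.0
    for di in (0, 1):                 # blend the 8 corners of the enclosing cell
        for dj in (0, 1):
            for dk in (0, 1):
                weight = ((w[0] if di else 1 - w[0]) *
                          (w[1] if dj else 1 - w[1]) *
                          (w[2] if dk else 1 - w[2]))
                out += weight * values[idx[0] + di, idx[1] + dj, idx[2] + dk]
    return out

# Monte Carlo draw: sample a random parameter triple, interpolate its pO2.
sample = (rng.uniform(0.5, 2.0), rng.uniform(40.0, 160.0), rng.uniform(20.0, 60.0))
po2 = trilinear(sample, (vel, prox, po2_in), table)
```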

  14. Application service provider (ASP) financial models for off-site PACS archiving

    Science.gov (United States)

    Ratib, Osman M.; Liu, Brent J.; McCoy, J. Michael; Enzmann, Dieter R.

    2003-05-01

    For the replacement of its legacy Picture Archiving and Communication Systems (approximate annual workload of 300,000 procedures), UCLA Medical Center has evaluated and adopted an off-site data-warehousing solution based on an ASP financial model with a one-time single payment per study archived. Different financial models for long-term data archive services were compared to the traditional capital/operational costs of on-site digital archives. Total cost of ownership (TCO), including direct and indirect expenses and savings, was compared for each model, and the logistic and operational advantages and disadvantages of ASP models versus traditional archiving systems were considered alongside the financial parameters. Our initial analysis demonstrated that the traditional linear ASP business model for data storage is unsuitable for large institutions: the overall cost markedly exceeds the TCO of an in-house archive infrastructure (when support and maintenance costs are included). We demonstrated, however, that non-linear ASP pricing models can be cost-effective alternatives for large-scale data storage, particularly if they are based on a scalable off-site data-warehousing service and the prices are adapted to the specific size of a given institution. The added value of ASP is that it does not require iterative data migrations from legacy media to new storage media at regular intervals.
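    The core arithmetic behind this conclusion, that a linear per-study ASP fee scales with volume while in-house TCO largely does not, can be sketched with purely illustrative figures (the fee, capital, and support costs below are hypothetical, not UCLA's):

```python
# Hypothetical cost comparison of a linear per-study ASP fee vs. the total
# cost of ownership (TCO) of an in-house archive. All dollar figures are
# illustrative assumptions used only to show why the linear model breaks
# down at high study volumes.
def asp_linear_cost(studies_per_year, fee_per_study=5.0, years=5):
    """Cumulative ASP cost: grows linearly with archived volume."""
    return studies_per_year * fee_per_study * years

def inhouse_tco(capital=1_500_000, support_per_year=200_000, years=5):
    """Cumulative in-house cost: one-time capital plus flat support."""
    return capital + support_per_year * years

annual_volume = 300_000                      # workload cited for the institution
print(asp_linear_cost(annual_volume))        # 7,500,000 over 5 years
print(inhouse_tco())                         # 2,500,000 over 5 years
```

    At this (assumed) fee the linear ASP cost is roughly triple the in-house TCO, which is why a non-linear, volume-discounted pricing schedule is needed for an institution of this size.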

  15. The proposed 'concordance-statistic for benefit' provided a useful metric when modeling heterogeneous treatment effects.

    Science.gov (United States)

    van Klaveren, David; Steyerberg, Ewout W; Serruys, Patrick W; Kent, David M

    2018-02-01

    Clinical prediction models that support treatment decisions are usually evaluated for their ability to predict the risk of an outcome rather than treatment benefit-the difference between outcome risk with vs. without therapy. We aimed to define performance metrics for a model's ability to predict treatment benefit. We analyzed data of the Synergy between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery (SYNTAX) trial and of three recombinant tissue plasminogen activator trials. We assessed alternative prediction models with a conventional risk concordance-statistic (c-statistic) and a novel c-statistic for benefit. We defined observed treatment benefit by the outcomes in pairs of patients matched on predicted benefit but discordant for treatment assignment. The 'c-for-benefit' represents the probability that from two randomly chosen matched patient pairs with unequal observed benefit, the pair with greater observed benefit also has a higher predicted benefit. Compared to a model without treatment interactions, the SYNTAX score II had improved ability to discriminate treatment benefit (c-for-benefit 0.590 vs. 0.552), despite having similar risk discrimination (c-statistic 0.725 vs. 0.719). However, for the simplified stroke-thrombolytic predictive instrument (TPI) vs. the original stroke-TPI, the c-for-benefit (0.584 vs. 0.578) was similar. The proposed methodology has the potential to measure a model's ability to predict treatment benefit not captured with conventional performance metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
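    A minimal sketch of the c-for-benefit computation is given below, assuming one simple matching rule (rank-matching the two arms on predicted benefit) and ignoring ties in predicted benefit; the original paper's implementation details may differ.

```python
import numpy as np

def c_for_benefit(pred_benefit, treated, outcome):
    """Sketch of the c-for-benefit (after van Klaveren et al., 2018).

    pred_benefit : model-predicted risk reduction from treatment, per patient
    treated      : 1 if treated, 0 if control
    outcome      : 1 if the (adverse) outcome occurred
    """
    # Match patients across arms by rank of predicted benefit.
    order_t = np.argsort(-pred_benefit[treated == 1])
    order_c = np.argsort(-pred_benefit[treated == 0])
    yt = outcome[treated == 1][order_t]
    yc = outcome[treated == 0][order_c]
    pt = pred_benefit[treated == 1][order_t]
    pc = pred_benefit[treated == 0][order_c]
    n = min(len(yt), len(yc))
    obs = yc[:n] - yt[:n]           # observed pair benefit: -1, 0, or 1
    pred = (pt[:n] + pc[:n]) / 2    # predicted pair benefit
    conc = disc = 0
    for i in range(n):              # compare all pairs of matched pairs
        for j in range(i + 1, n):
            if obs[i] == obs[j]:
                continue            # only pairs with unequal observed benefit count
            s = (pred[i] - pred[j]) * (obs[i] - obs[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return conc / (conc + disc) if conc + disc else 0.5
```

    The statistic is 1.0 when greater predicted benefit always coincides with greater observed benefit across matched pairs, and 0.5 when the model is uninformative about benefit.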

  16. Petrographic characterization to build an accurate rock model using micro-CT: Case study on low-permeable to tight turbidite sandstone from Eocene Shahejie Formation.

    Science.gov (United States)

    Munawar, Muhammad Jawad; Lin, Chengyan; Cnudde, Veerle; Bultreys, Tom; Dong, Chunmei; Zhang, Xianguo; De Boever, Wesley; Zahid, Muhammad Aleem; Wu, Yuqi

    2018-03-26

    Pore-scale flow simulations depend heavily on the petrographic characterization and modeling of reservoir rocks. Mineral phase segmentation and pore network modeling are crucial stages in micro-CT based rock modeling. The success of the pore network model (PNM) in predicting petrophysical properties relies on image segmentation, image resolution and, most importantly, the nature of the rock (homogeneous, complex or microporous). Pore network modeling has seen extensive research and development during the last decade; however, the application of these models to the variety of naturally heterogeneous reservoir rocks is still a challenge. In this paper, four samples from a low-permeable to tight sandstone reservoir were used to characterize their petrographic and petrophysical properties using high-resolution micro-CT imaging. The phase segmentation analysis from micro-CT images shows that 5-6% microporous regions are present in the kaolinite-rich sandstones (E3 and E4), while 1.7-1.8% are present in the illite-rich sandstones (E1 and E2). The pore system percolates without micropores in E1 and E2, while it does not percolate without micropores in E3 and E4. In E1 and E2, the total MICP porosity is equal to the volume percent of macropores determined from micro-CT images, which indicates that the macropores are well connected and the micropores do not play any role in the non-wetting fluid (mercury) displacement process. In E3 and E4, by contrast, the volume percent of micropores is far less (almost 50%) than the total MICP porosity, which means that almost half of the pore space was not detected by the micro-CT scan. The PNM behaved well in E1 and E2, where good agreement exists between the PNM and MICP measurements, whereas E3 and E4 exhibit a multiscale pore space that cannot be addressed with a single-scale PNM method; a multiscale approach is needed to characterize such complex rocks. This study provides helpful insights towards the application of existing micro-CT based petrographic characterization methodology

  17. An integrated decision making model for the selection of sustainable forward and reverse logistic providers

    DEFF Research Database (Denmark)

    Govindan, Kannan; Agarwal, Vernika; Darbari, Jyoti Dhingra

    2017-01-01

    Due to rising concerns for environmental sustainability, the Indian electronic industry faces immense pressure to incorporate effective sustainable practices into the supply chain (SC) planning. Consequently, manufacturing enterprises (ME) are exploring the option of re-examining their SC...... strategies and taking a formalized approach towards a sustainable partnership with logistics providers. To begin with, it is imperative to associate with sustainable forward and reverse logistics providers to manage effectively the upward and downstream flows simultaneously. In this context, this paper...... improve the sustainable performance value of the SC network and secure reasonable profits. The managerial implications drawn from the result analysis provide a sustainable framework to the ME for enhancing its corporate image....

  18. Integrating Modeling and Monitoring to Provide Long-Term Control of Contaminants

    International Nuclear Information System (INIS)

    Fogwell, Th.

    2009-01-01

    An introduction is presented to the types of problems that exist for long-term control of radionuclides at DOE sites, with a breakdown of the contaminant distributions at specific sites and the associated difficulties. A paradigm for remediation showing the integration of monitoring with modeling is presented. It is based on a feedback system in which the monitoring acts as the principal sensors of a control system. Currently, very prescriptive monitoring programs are established that lack a mechanism for improving the models and improving control of the contaminants. The resulting feedback system can be optimized to improve performance, and optimizing monitoring automatically entails linking the monitoring with modeling: if monitoring designs were required to be more efficient, and thus optimized, the monitoring would automatically become linked to the modeling. Records of decision could be written to accommodate revisions in monitoring as better modeling evolves. The technical pieces of the required paradigm are already available; they need only be implemented and applied to solve the long-term control of the contaminants. An integration of the various parts of the system is presented: each part is described, examples are given, and references are given to other projects that bring together similar elements in systems for the control of contaminants. Trends are given for the development of the technical features of a robust system. Examples of monitoring methods for specific sites are used to illustrate how such a system would work, and examples of technology needs are presented. Finally, other examples of integrated modeling-monitoring approaches are presented. (authors)

  19. A conceptual model: Redesigning how we provide palliative care for patients with chronic obstructive pulmonary disease.

    Science.gov (United States)

    Philip, Jennifer; Crawford, Gregory; Brand, Caroline; Gold, Michelle; Miller, Belinda; Hudson, Peter; Smallwood, Natasha; Lau, Rosalind; Sundararajan, Vijaya

    2017-05-31

    Despite significant needs, patients with chronic obstructive pulmonary disease (COPD) make limited use of palliative care, in part because the current models of palliative care do not address their key concerns. Our aim was to develop a tailored model of palliative care for patients with COPD and their family caregivers. Based on information gathered within a program of studies (qualitative research exploring experiences, a cohort study examining service use), an expert advisory committee evaluated and integrated data, developed responses, formulated principles to inform care, and made recommendations for practice. The informing studies were conducted in two Australian states: Victoria and South Australia. A series of principles underpinning the model were developed, including that it must be: (1) focused on patient and caregiver; (2) equitable, enabling access to components of palliative care for a group with significant needs; (3) accessible; and (4) less resource-intensive than expansion of usual palliative care service delivery. The recommended conceptual model was to have the following features: (a) entry to palliative care occurs routinely triggered by clinical transitions in care; (b) care is embedded in routine ambulatory respiratory care, ensuring that it is regarded as "usual" care by patients and clinicians alike; (c) the tasks include screening for physical and psychological symptoms, social and community support, provision of information, and discussions around goals and preferences for care; and (d) transition to usual palliative care services is facilitated as the patient nears death. Our proposed innovative and conceptual model for provision of palliative care requires future formal testing using rigorous mixed-methods approaches to determine if theoretical propositions translate into effectiveness, feasibility, and benefits (including economic benefits). There is reason to consider adaptation of the model for the palliative care of patients with

  20. Supportive Accountability: A Model for Providing Human Support to Enhance Adherence to eHealth Interventions

    Science.gov (United States)

    2011-01-01

    The effectiveness of and adherence to eHealth interventions is enhanced by human support. However, human support has largely not been manualized and has usually not been guided by clear models. The objective of this paper is to develop a clear theoretical model, based on relevant empirical literature, that can guide research into human support components of eHealth interventions. A review of the literature revealed little relevant information from clinical sciences. Applicable literature was drawn primarily from organizational psychology, motivation theory, and computer-mediated communication (CMC) research. We have developed a model, referred to as “Supportive Accountability.” We argue that human support increases adherence through accountability to a coach who is seen as trustworthy, benevolent, and having expertise. Accountability should involve clear, process-oriented expectations that the patient is involved in determining. Reciprocity in the relationship, through which the patient derives clear benefits, should be explicit. The effect of accountability may be moderated by patient motivation. The more intrinsically motivated patients are, the less support they likely require. The process of support is also mediated by the communications medium (eg, telephone, instant messaging, email). Different communications media each have their own potential benefits and disadvantages. We discuss the specific components of accountability, motivation, and CMC medium in detail. The proposed model is a first step toward understanding how human support enhances adherence to eHealth interventions. Each component of the proposed model is a testable hypothesis. As we develop viable human support models, these should be manualized to facilitate dissemination. PMID:21393123

  1. The Roy Adaptation Model: A Theoretical Framework for Nurses Providing Care to Individuals With Anorexia Nervosa.

    Science.gov (United States)

    Jennings, Karen M

    Using a nursing theoretical framework to understand, elucidate, and propose nursing research is fundamental to knowledge development. This article presents the Roy Adaptation Model as a theoretical framework for better understanding individuals with anorexia nervosa during acute treatment, and the role of nursing assessments and interventions in the promotion of weight restoration. Nursing assessments and interventions situated within the Roy Adaptation Model take into consideration that weight restoration does not occur in isolation but rather reflects an adaptive process within external and internal environments, and they have the potential to support more holistic care.

  2. The "P2P" Educational Model Providing Innovative Learning by Linking Technology, Business and Research

    Science.gov (United States)

    Dickinson, Paul Gordon

    2017-01-01

    This paper evaluates the effect and potential of a new educational learning model called Peer to Peer (P2P). The study focused on Laurea's Hyvinkää campus in Finland and its response to bridging the gap between traditional educational methods and working reality, where modern technology plays an important role. The study describes and evaluates…

  3. A Context-Aware Model to Provide Positioning in Disaster Relief Scenarios

    Directory of Open Access Journals (Sweden)

    Daniel Moreno

    2015-09-01

    Full Text Available The effectiveness of the work performed during disaster relief efforts is highly dependent on the coordination of activities conducted by the first responders deployed in the affected area. Such coordination, in turn, depends on an appropriate management of geo-referenced information. Therefore, enabling first responders to count on positioning capabilities during these activities is vital to increase the effectiveness of the response process. The positioning methods used in this scenario must assume a lack of infrastructure-based communication and electrical energy, which usually characterizes affected areas. Although positioning systems such as the Global Positioning System (GPS have been shown to be useful, we cannot assume that all devices deployed in the area (or most of them will have positioning capabilities by themselves. Typically, many first responders carry devices that are not capable of performing positioning on their own, but that require such a service. In order to help increase the positioning capability of first responders in disaster-affected areas, this paper presents a context-aware positioning model that allows mobile devices to estimate their position based on information gathered from their surroundings. The performance of the proposed model was evaluated using simulations, and the obtained results show that mobile devices without positioning capabilities were able to use the model to estimate their position. Moreover, the accuracy of the positioning model has been shown to be suitable for conducting most first response activities.

  4. The Strategic Thinking and Learning Community: An Innovative Model for Providing Academic Assistance

    Science.gov (United States)

    Commander, Nannette Evans; Valeri-Gold, Maria; Darnell, Kim

    2004-01-01

    Today, academic assistance efforts are frequently geared to all students, not just the underprepared, with study skills offered in various formats. In this article, the authors describe a learning community model with the theme, "Strategic Thinking and Learning" (STL). Results of data analysis indicate that participants of the STL…

  5. Models Provide Specificity: Testing a Proposed Mechanism of Visual Working Memory Capacity Development

    Science.gov (United States)

    Simmering, Vanessa R.; Patterson, Rebecca

    2012-01-01

    Numerous studies have established that visual working memory has a limited capacity that increases during childhood. However, debate continues over the source of capacity limits and its developmental increase. Simmering (2008) adapted a computational model of spatial cognitive development, the Dynamic Field Theory, to explain not only the source…

  6. The Evolution of Software Pricing: From Box Licenses to Application Service Provider Models.

    Science.gov (United States)

    Bontis, Nick; Chung, Honsan

    2000-01-01

    Describes three different pricing models for software. Findings of this case study support the proposition that software pricing is a complex and subjective process. The key determinant of alignment between vendor and user is the nature of value in the software to the buyer. This value proposition may range from increased cost reduction to…

  7. MODELING QUEUING SYSTEM OF INTERACTION BETWEEN TERMINAL DEVICES AND SERVICES PROVIDERS IN THE BANK

    Directory of Open Access Journals (Sweden)

    Ivan A. Mnatsakanyan

    2014-01-01

    Full Text Available The article focuses on the development of mathematical models and tools to optimize the queuing system at a bank. It discusses the mathematical aspects that make it possible to redistribute the flow of transactions, reduce the time a request spends in the queue, increase the bank's profit and gain competitive advantage.
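    Abstracts of this kind typically rest on standard queuing theory. As an illustration of the sort of model involved (not the authors' actual model), an M/M/c sketch using the Erlang C formula gives the probability of waiting and the mean queue delay for a bank with c tellers or terminals:

```python
from math import factorial

def mmc_metrics(lam, mu, c):
    """Steady-state metrics for an M/M/c queue.

    lam : arrival rate (customers per unit time)
    mu  : service rate per server
    c   : number of servers (tellers/terminal devices)
    Returns (P_wait, Wq): probability an arrival waits, mean wait in queue.
    """
    a = lam / mu                 # offered load (Erlangs)
    rho = a / c                  # server utilization, must be < 1
    assert rho < 1, "queue is unstable"
    tail = (a ** c / factorial(c)) / (1 - rho)           # Erlang C numerator
    head = sum(a ** k / factorial(k) for k in range(c))
    p_wait = tail / (head + tail)                        # Erlang C formula
    wq = p_wait / (c * mu - lam)                         # mean queue delay
    return p_wait, wq

# e.g. 2 customers/min arriving, 1 customer/min per teller, 3 tellers
p_wait, wq = mmc_metrics(lam=2.0, mu=1.0, c=3)
```

    Redistributing transaction flow between terminals and tellers amounts to changing lam and c per service point and recomputing these metrics.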

  8. Can "Ubuntu" Provide a Model for Citizenship Education in African Democracies?

    Science.gov (United States)

    Enslin, Penny; Horsthemke, Kai

    2004-01-01

    Some proponents of Africanism argue that African traditional education and the principles of "ubuntu" should provide the framework for citizenship education. While conceding that understandable concerns lie behind defences of "ubuntu" as underpinning African democracy, we argue that the Africanist perspective faces various problems and makes…

  9. The anti-human trafficking collaboration model and serving victims: Providers' perspectives on the impact and experience.

    Science.gov (United States)

    Kim, Hea-Won; Park, Taekyung; Quiring, Stephanie; Barrett, Diana

    2018-01-01

    A coalition model is often used to serve victims of human trafficking, but little is known about whether the model adequately meets the victims' needs. The purpose of this study was to examine an anti-human trafficking collaboration model in terms of its impact and the collaborative experience, including challenges and lessons learned, from the service providers' perspective. A mixed-methods study was conducted to evaluate the impact of a citywide anti-trafficking coalition model from the providers' perspectives: a web-based survey was administered to service providers (n = 32), and focus groups were conducted with Core Group members (n = 10). Providers reported that the coalition model has made important impacts in the community by increasing coordination among the key agencies, law enforcement, and service providers and by improving the quality of service provision. Providers identified the improved and expanded partnerships among coalition members as the key contributing factor to the success of the coalition model. Several key strategies were suggested to improve the coalition model: improved referral tracking, key partner and protocol development, and information sharing.

  10. An agent-based simulation model of patient choice of health care providers in accountable care organizations.

    Science.gov (United States)

    Alibrahim, Abdullah; Wu, Shinyi

    2018-03-01

    Accountable care organizations (ACOs) in the United States show promise in controlling health care costs while preserving patients' choice of providers. Understanding the effects of patient choice is critical in novel payment and delivery models like the ACO that depend on continuity of care and accountability. The financial, utilization, and behavioral implications of a patient's decision to forgo local health care providers for more distant ones to access higher-quality care remain unknown. To study this question, we used an agent-based simulation model of a health care market composed of providers able to form ACOs and serve patients, and embedded in it a conditional logit decision model through which patients choose their care providers. The simulation focuses on Medicare beneficiaries and their congestive heart failure (CHF) outcomes. We place the patient agents in an ACO delivery system model in which provider agents decide whether to remain in an ACO and whether to perform a quality-improving CHF disease management intervention. Illustrative results show that allowing patients to choose their providers reduces the yearly payment per CHF patient by $320, reduces mortality rates by 0.12 percentage points and hospitalization rates by 0.44 percentage points, and marginally increases provider participation in ACOs. This study demonstrates a model capable of quantifying the effects of patient choice in a theoretical ACO system and provides a potential tool for policymakers to understand the implications of patient choice and assess potential policy controls.
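
    The conditional logit mechanism named in this record has a standard closed form: each provider i gets a utility V_i and is chosen with probability exp(V_i) / Σ_j exp(V_j). The sketch below uses a hypothetical utility with distance and quality terms and invented coefficients; the paper's actual specification and estimates are not reproduced here.

```python
from math import exp

def choice_probabilities(providers, beta_dist=-0.1, beta_qual=2.0):
    """Conditional logit: P(i) = exp(V_i) / sum_j exp(V_j), with an assumed
    utility V_i = beta_dist * distance_i + beta_qual * quality_i.
    Coefficients are illustrative, not estimated values from the paper."""
    utilities = [beta_dist * d + beta_qual * q for d, q in providers]
    denom = sum(exp(v) for v in utilities)
    return [exp(v) / denom for v in utilities]

# (distance in miles, quality score in [0, 1]) for three hypothetical providers
providers = [(2.0, 0.5), (10.0, 0.9), (25.0, 0.95)]
probs = choice_probabilities(providers)
print(probs)
```

    With these toy numbers the nearby medium-quality provider and the farther high-quality one tie exactly (V = 0.8 for both), showing how distance and quality trade off in the patient's choice.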

  11. Using Model-Based System Engineering to Provide Artifacts for NASA Project Life-Cycle and Technical Reviews Presentation

    Science.gov (United States)

    Parrott, Edith L.; Weiland, Karen J.

    2017-01-01

    This is the presentation for the AIAA Space conference in September 2017. It highlights key information from Using Model-Based Systems Engineering to Provide Artifacts for NASA Project Life-cycle and Technical Reviews paper.

  12. Programming with models: modularity and abstraction provide powerful capabilities for systems biology.

    Science.gov (United States)

    Mallavarapu, Aneil; Thomson, Matthew; Ullian, Benjamin; Gunawardena, Jeremy

    2009-03-06

    Mathematical models are increasingly used to understand how phenotypes emerge from systems of molecular interactions. However, their current construction as monolithic sets of equations presents a fundamental barrier to progress. Overcoming this requires modularity, enabling sub-systems to be specified independently and combined incrementally, and abstraction, enabling generic properties of biological processes to be specified independently of specific instances. These, in turn, require models to be represented as programs rather than as datatypes. Programmable modularity and abstraction enable libraries of modules to be created, which can be instantiated and reused repeatedly in different contexts with different components. We have developed a computational infrastructure that accomplishes this. We show here why such capabilities are needed, what is required to implement them and what can be accomplished with them that could not be done previously.

  13. RN Diabetes Virtual Case Management: A New Model for Providing Chronic Care Management.

    Science.gov (United States)

    Brown, Nancy N; Carrara, Barbara E; Watts, Sharon A; Lucatorto, Michelle A

    2016-01-01

    The U.S. chronic disease health care system has substantial gaps in delivery of services. New models of care change traditional delivery of care and explore new settings for care. This article describes a new model of diabetes chronic care delivery: nurse-delivered care that includes protocol-based insulin titration and patient education delivered solely in a virtual environment. In phase 1, the clinical outcome was time to achievement of glycated hemoglobin (A1C) goals. An RN-managed insulin titration protocol with individualized A1C goals produced a significant improvement. Safety was demonstrated by the absence of hypoglycemia related to RN protocol adjustment. There were no admissions or emergency room (ER) visits for hypoglycemia. This study demonstrates the safety and efficacy of RN virtual chronic disease management for an older population of patients with long-standing diabetes.

  14. Churn Forecasting Model for South African Pre-Paid Service Providers

    OpenAIRE

    Olusola Gbenga Olufemi; Trudie Strydom

    2018-01-01

    Telecommunication companies globally confront rising problems of customer agitation. Inadequate services, poorly delivered products, and many other causes contribute to the difficulties telecoms face. These problems have further degenerated into customers moving from one network provider to another in a quest for improved satisfaction. Churn is the term used to describe this resultant customer movement, driven by agitation over inadequate operations. The Republic of ...

  15. Providing a New Model for Discovering Cloud Services Based on Ontology

    Directory of Open Access Journals (Sweden)

    B. Heydari

    2017-12-01

    Full Text Available Due to its efficient, flexible, and dynamic substructure in information technology and service quality parameter estimation, cloud computing has become one of the most important issues in the computing world. Discovering cloud services has been posed as a fundamental issue in reaching high efficiency. In order to carry out operations in the cloud, any user needs to request several different services, either simultaneously or according to a working routine. These services can be presented by different cloud producers or under different decision-making policies. Therefore, service management is one of the important and challenging issues in cloud computing. With the advent of the semantic web and, accordingly, of practical services in the cloud computing space, access to different kinds of applications has become possible. Ontology is the core of the semantic web and can be used to ease the process of discovering services. A new model based on ontology has been proposed in this paper. The results indicate that the proposed model discovers cloud services matching user searches in less time than other models.

  16. Model of a multiverse providing the dark energy of our universe

    Science.gov (United States)

    Rebhan, E.

    2017-09-01

    It is shown that the dark energy presently observed in our universe can be regarded as the energy of a scalar field driving an inflation-like expansion of a multiverse, with ours being a subuniverse among other parallel universes. A simple model of this multiverse is elaborated: Assuming closed space geometry, the origin of the multiverse can be explained by quantum tunneling from nothing; subuniverses are supposed to emerge from local fluctuations of separate inflation fields. The standard concept of tunneling from nothing is extended to the effect that, in addition to an inflationary scalar field, matter is also generated, and that the tunneling leads to an (unstable) equilibrium state. The cosmological principle is assumed to hold from the origin of the multiverse until the first subuniverses emerge. With increasing age of the multiverse, its spatial curvature decays exponentially, so fast that, due to sharing the same space, the flatness problem of our universe resolves by itself. The dark energy density imprinted by the multiverse on our universe is time-dependent, but such that the equation-of-state ratio w = p/(ϱc²) of its pressure to its mass density (times c²) is time-independent and assumes a value −1 + ε with arbitrary ε > 0. ε can be chosen so small that the dark energy model of this paper can be fitted to the current observational data as well as the cosmological constant model.

  17. Docosahexaenoic Acid (DHA) Provides Neuroprotection in Traumatic Brain Injury Models via Activating Nrf2-ARE Signaling.

    Science.gov (United States)

    Zhu, Wei; Ding, Yuexia; Kong, Wei; Li, Tuo; Chen, Hongguang

    2018-04-16

    In this study, we explored the neuroprotective effects of docosahexaenoic acid (DHA) in traumatic brain injury (TBI) models. We first confirmed that DHA was neuroprotective against TBI via the NSS test and the Morris water maze experiment. Western blot was conducted to identify the expression of Bax, caspase-3, and Bcl-2, and cell apoptosis in the TBI models was validated by TUNEL staining. Relationships between nuclear factor erythroid 2-related factor 2-antioxidant response element (Nrf2-ARE) pathway-related genes and DHA were explored by RT-PCR and Western blot. Rats in the DHA group performed remarkably better than those in the TBI group in both the NSS test and the water maze experiment. DHA conspicuously promoted the expression of Bcl-2 and diminished that of cleaved caspase-3 and Bax, indicating an anti-apoptotic role for DHA. Superoxide dismutase (SOD) activity, cortical malondialdehyde content, and glutathione peroxidase (GPx) activity were restored in rats receiving DHA treatment, implying that the neuroprotective influence of DHA derived from alleviating the oxidative stress caused by TBI. Moreover, immunofluorescence and Western blot experiments revealed that DHA facilitated the translocation of Nrf2 to the nucleus. DHA administration also notably increased the expression of the downstream factors NAD(P)H:quinone oxidoreductase (NQO-1) and heme oxygenase 1 (HO-1). DHA exerted a neuroprotective influence on the TBI models, potentially through activating the Nrf2-ARE pathway.

  18. Do NHS walk-in centres in England provide a model of integrated care?

    Directory of Open Access Journals (Sweden)

    C. Salisbury

    2003-08-01

    Full Text Available Purpose: To undertake a comprehensive evaluation of NHS walk-in centres against criteria of improved access, quality, user satisfaction and efficiency. Context: Forty NHS walk-in centres have been opened in England as part of the UK government's agenda to modernise the NHS. They are intended to improve access to primary care, provide high-quality treatment at convenient times, and reduce inappropriate demand on other NHS providers. Care is provided by nurses rather than doctors, using computerised algorithms, and nurses use protocols to supply treatments previously only available from doctors. Data sources: Several linked studies were conducted using different sources of data and methodologies. These included routinely collected data, site visits, patient interviews, a survey of users of walk-in centres, a study using simulated patients to assess quality of care, analysis of consultation rates in NHS services near to walk-in centres, and an audit of compliance with protocols. Conclusion & discussion: The findings illustrate many of the issues described in a recent WHO reflective paper on Integrated Care, including tensions between professional judgement and the use of protocols, problems with incompatible IT systems, balancing users' demands and needs, the importance of understanding health professionals' roles, and issues of technical versus allocative efficiency.

  19. RESEARCH OF PROBLEMS OF DESIGN OF COMPLEX TECHNICAL PROVIDING AND THE GENERALIZED MODEL OF THEIR DECISION

    Directory of Open Access Journals (Sweden)

    A. V. Skrypnikov

    2015-01-01

    Full Text Available Summary. This work develops the general ideas of V. I. Skurikhin's method, taking the specified features into account, and considers in more detail the analysis and synthesis of a complex of technical means, bringing them to a level suitable for use in the engineering practice of designing information management systems. A general system approach to selecting the technical means of an information management system is formulated, and a general technique is developed for the system analysis and synthesis of the complex of technical means and its subsystems, one that achieves an extreme value of the criterion of efficiency of functioning of the technical complex of the information management system. The main attention is paid to the applied side of system research into complex technical support: in particular, to defining criteria for the quality of functioning of a technical complex, to developing methods for analysing the information base of the information management system and defining requirements for technical means, and to methods for the structural synthesis of the main subsystems of complex technical support. Thus, the purpose is to research the complex technical support of the information management system on the basis of the system approach, and to develop a number of methods of analysis and synthesis of complex technical support suitable for use in the engineering practice of system design. A well-known paradox in the development of management information systems is that the parameters of the system, and consequently the requirements for the hardware complex, cannot be strictly justified before the development of algorithms and programs, and vice versa.
    A possible way of overcoming these difficulties is forecasting the structure and parameters of the hardware complex for particular management information systems at the early stages of development, with subsequent clarification and

  20. A decision tree model to estimate the value of information provided by a groundwater quality monitoring network

    Science.gov (United States)

    Khader, A. I.; Rosenberg, D. E.; McKee, M.

    2013-05-01

    Groundwater contaminated with nitrate poses a serious health risk to infants when this contaminated water is used for culinary purposes. To avoid this health risk, people need to know whether their culinary water is contaminated or not. Therefore, there is a need to design an effective groundwater monitoring network, acquire information on groundwater conditions, and use acquired information to inform management options. These actions require time, money, and effort. This paper presents a method to estimate the value of information (VOI) provided by a groundwater quality monitoring network located in an aquifer whose water poses a spatially heterogeneous and uncertain health risk. A decision tree model describes the structure of the decision alternatives facing the decision-maker and the expected outcomes from these alternatives. The alternatives include (i) ignore the health risk of nitrate-contaminated water, (ii) switch to alternative water sources such as bottled water, or (iii) implement a previously designed groundwater quality monitoring network that takes into account uncertainties in aquifer properties, contaminant transport processes, and climate (Khader, 2012). The VOI is estimated as the difference between the expected costs of implementing the monitoring network and the lowest-cost uninformed alternative. We illustrate the method for the Eocene Aquifer, West Bank, Palestine, where methemoglobinemia (blue baby syndrome) is the main health problem associated with the principal contaminant nitrate. The expected cost of each alternative is estimated as the weighted sum of the costs and probabilities (likelihoods) associated with the uncertain outcomes resulting from the alternative. Uncertain outcomes include actual nitrate concentrations in the aquifer, concentrations reported by the monitoring system, whether people abide by manager recommendations to use/not use aquifer water, and whether people get sick from drinking contaminated water. 
Outcome costs
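
    The expected-cost arithmetic described in this record (each alternative's cost is the probability-weighted sum over its uncertain outcomes, and VOI is the difference between the cheapest uninformed alternative and the monitoring alternative) can be sketched in a few lines. All probabilities and dollar figures below are invented placeholders, not values from the Eocene Aquifer study:

```python
def expected_cost(outcomes):
    """Expected cost of an alternative: sum of probability * cost over its outcomes."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "outcome probabilities must sum to 1"
    return sum(p * c for p, c in outcomes)

# Hypothetical (probability, cost in $/household/yr) leaves of the decision tree
alternatives = {
    "ignore risk":   [(0.7, 0.0), (0.3, 900.0)],   # chance of illness-related costs
    "bottled water": [(1.0, 350.0)],               # certain substitution cost
    "monitoring":    [(0.8, 120.0), (0.2, 420.0)], # network cost, +/- follow-up actions
}
costs = {name: expected_cost(o) for name, o in alternatives.items()}
cheapest_uninformed = min(costs["ignore risk"], costs["bottled water"])
voi = cheapest_uninformed - costs["monitoring"]
print(costs, voi)  # positive VOI means the information is worth acquiring
```

    With these placeholder numbers the monitoring network's expected cost ($180) undercuts the best uninformed alternative ($270), so the information it provides is worth $90 per household per year; the study performs the same comparison with probabilities derived from aquifer, transport, and behavioral uncertainties.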

  1. An artificial pancreas provided a novel model of blood glucose level variability in beagles.

    Science.gov (United States)

    Munekage, Masaya; Yatabe, Tomoaki; Kitagawa, Hiroyuki; Takezaki, Yuka; Tamura, Takahiko; Namikawa, Tsutomu; Hanazaki, Kazuhiro

    2015-12-01

    Although the effects of blood glucose level variability on prognosis have gained increasing attention, it is unclear whether blood glucose level variability itself worsens prognosis or whether it is a manifestation of the pathological conditions that do. Moreover, no models of perioperative blood glucose level variability have previously been reported. The aim of this study was to establish a novel variability model of blood glucose concentration using an artificial pancreas. We maintained six healthy male beagles. After anesthesia induction, a 20-G venous catheter was inserted into the right femoral vein and an artificial pancreas (STG-22, Nikkiso Co. Ltd., Tokyo, Japan) was connected for continuous blood glucose monitoring and glucose management. After achieving muscle relaxation, total pancreatectomy was performed. After 1 h of stabilization, automatic blood glucose control was initiated using the artificial pancreas. The blood glucose level was varied for 8 h, alternating between target values of 170 and 70 mg/dL. Eight hours later, the experiment was concluded. Total pancreatectomy took 62 ± 13 min. Blood glucose swings were achieved 9.8 ± 2.3 times. The average blood glucose level was 128.1 ± 5.1 mg/dL with an SD of 44.6 ± 3.9 mg/dL. The potassium levels after stabilization and at the end of the experiment were 3.5 ± 0.3 and 3.1 ± 0.5 mmol/L, respectively. In conclusion, the present study demonstrated that an artificial pancreas enabled the establishment of a novel variability model of blood glucose levels in beagles.

  2. User modeling and adaptation for daily routines providing assistance to people with special needs

    CERN Document Server

    Martín, Estefanía; Carro, Rosa M

    2013-01-01

    User Modeling and Adaptation for Daily Routines is motivated by the need to bring attention to how people with special needs can benefit from adaptive methods and techniques in their everyday lives. Assistive technologies, adaptive systems and context-aware applications are three well-established research fields. There is, in fact, a vast amount of literature that covers HCI-related issues in each area separately. However, the contributions in the intersection of these areas have been less visible, despite the fact that such synergies may have a great impact on improving daily living. Presentin

  3. fiReproxies: A computational model providing insight into heat-affected archaeological lithic assemblages.

    Science.gov (United States)

    Sorensen, Andrew C; Scherjon, Fulco

    2018-01-01

    Evidence for fire use becomes increasingly sparse the further back in time one looks. This is especially true for Palaeolithic assemblages. Primary evidence of fire use in the form of hearth features tends to give way to clusters or sparse scatters of more durable heated stone fragments. In the absence of intact fireplaces, these thermally altered lithic remains have been used as a proxy for discerning relative degrees of fire use between archaeological layers and deposits. While previous experimental studies have demonstrated the physical effects of heat on stony artefacts, the mechanisms influencing the proportion of fire proxy evidence within archaeological layers remain understudied. This fundamental study is the first to apply a computer-based model (fiReproxies) in an attempt to simulate and quantify the complex interplay of factors that ultimately determine when and in what proportions lithic artefacts are heated by (anthropogenic) fires. As an illustrative example, we apply our model to two hypothetical archaeological layers that reflect glacial and interglacial conditions during the late Middle Palaeolithic within a generic simulated cave site to demonstrate how different environmental, behavioural and depositional factors like site surface area, sedimentation rate, occupation frequency, and fire size and intensity can, independently or together, significantly influence the visibility of archaeological fire signals.
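
    The fiReproxies model itself is not reproduced in this record; purely as a loose illustration of the kind of simulation it describes, the toy one-dimensional Monte Carlo below (all parameters invented) shows how one factor named in the abstract, fire frequency, changes the proportion of lithic artefacts registering heat:

```python
import random

def heated_fraction(n_artefacts=1000, site_area=100.0, fire_area=2.0,
                    n_fires=5, seed=42):
    """Toy Monte Carlo (not the fiReproxies model itself): artefacts and fires
    are dropped uniformly on a 1-D site transect; an artefact counts as heated
    if any fire footprint covers its position. All parameters are invented."""
    rng = random.Random(seed)
    fires = [rng.uniform(0.0, site_area) for _ in range(n_fires)]
    heated = 0
    for _ in range(n_artefacts):
        x = rng.uniform(0.0, site_area)
        if any(abs(x - f) <= fire_area / 2 for f in fires):
            heated += 1
    return heated / n_artefacts

print(heated_fraction(n_fires=2))   # sparse occupation: weak fire signal
print(heated_fraction(n_fires=20))  # frequent fires: stronger signal
```

    Even this crude sketch shows the proportion of heated pieces rising with occupation frequency; the actual model additionally couples site surface area, sedimentation rate, and fire size and intensity.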

  4. Remote sensing and modeling. A tool to provide the spatial information for biomass production potential

    Energy Technology Data Exchange (ETDEWEB)

    Guenther, K.P.; Wisskirchen, K.; Schroedter-Homscheidt, M. [DLR, Wessling (Germany). German Remote Sensing Data Center; Borg, E.; Fichtelmann, B. [DLR, Neustrelitz (Germany). German Remote Sensing Data Center

    2006-07-01

    Earth observation from space has been successfully demonstrated over a wide range of monitoring activities, mostly with the aim of measuring the spatial and temporal distribution of biophysical and geophysical parameters such as the Normalized Difference Vegetation Index (NDVI), the land surface temperature (LST) or the land use classification (LCC). With the growing need for more reliable information on global biomass activity in the context of climate change, the identification and quantification of carbon sinks and sources has gained importance. The goal of our activities is to use time series of remote sensing data and carbon modeling to assess the biomass of large regions. Future activities discussed include reprocessing archived time series (e.g. 30 years) of remote sensing data to be used as input to biomass modeling, improving the spatial resolution of local, historic land use maps by processing archived Landsat data (30m), using an innovative classification processor for deriving current multi-temporal land use maps based on MERIS data (300m), and delivering a biomass equivalent indicator as a productivity indicator. (orig.)

  5. Penerapan Model Multidimensional Scaling dalam Pemetaan Brand Positioning Internet Service Provider

    Directory of Open Access Journals (Sweden)

    Robertus Tang Herman

    2010-03-01

    Full Text Available In this high-tech era, there have been tremendous advances in technology-based products and services. The internet is one of them, opening the world's eyes to a new borderless marketplace. High competition among internet service providers has pushed companies to create competitive advantages and brilliant marketing strategies. They undertake positioning mapping to describe a product or service's position among many competitors. The right positioning strategy becomes a powerful weapon to win the battle. This research is designed to create a positioning map based on perceptual mapping. The researcher uses Multidimensional Scaling and image mapping to achieve this research goal. Sampling used a non-probability method in Jakarta. Based on a non-attribute approach, the research findings show that there is similarity between two different brands; thus, both brands are competing against one another. On the other hand, the CBN and Netzap providers show some differences from the others, and some brands require improvement in terms of network reliability.

  6. JSBML 1.0: providing a smorgasbord of options to encode systems biology models

    DEFF Research Database (Denmark)

    Rodriguez, Nicolas; Thomas, Alex; Watanabe, Leandro

    2015-01-01

    JSBML, the official pure Java programming library for the Systems Biology Markup Language (SBML) format, has evolved with the advent of different modeling formalisms in systems biology and their ability to be exchanged and represented via extensions of SBML. JSBML has matured into a major, active open-source project with contributions from a growing, international team of developers who not only maintain compatibility with SBML, but also drive steady improvements to the Java interface and promote ease-of-use with end users. Source code, binaries and documentation for JSBML can be freely obtained under the terms of the LGPL 2.1 from the website http://sbml.org/Software/JSBML. More information about JSBML can be found in the user guide at http://sbml.org/Software/JSBML/docs/. Contact: jsbml-development@googlegroups.com or andraeger@eng.ucsd.edu. Supplementary data are available at Bioinformatics online.

  7. Pharmacological targeting of GSK-3 and NRF2 provides neuroprotection in a preclinical model of tauopathy

    Directory of Open Access Journals (Sweden)

    Antonio Cuadrado

    2018-04-01

    Full Text Available Tauopathies are a group of neurodegenerative disorders in which TAU protein is present as aggregates or is abnormally phosphorylated, leading to alterations of axonal transport, neuronal death and neuroinflammation. Currently, there is no treatment to slow the progression of these diseases. Here, we have investigated whether dimethyl fumarate (DMF), an inducer of the transcription factor NRF2, could mitigate tauopathy in a mouse model. The signaling pathways modulated by DMF were also studied in mouse embryonic fibroblasts (MEFs) from wild type or KEAP1-deficient mice. The effect of DMF on neurodegeneration and on astrocyte and microglial activation was examined in Nrf2+/+ and Nrf2−/− mice stereotaxically injected in the right hippocampus with an adeno-associated vector expressing human TAUP301L and treated daily with DMF (100 mg/kg, i.g.) for three weeks. DMF induces the NRF2 transcriptional signature through a mechanism that involves KEAP1 but also PI3K/AKT/GSK-3-dependent pathways. DMF modulates GSK-3β activity in mouse hippocampi. Furthermore, DMF modulates TAU phosphorylation, neuronal impairment measured by calbindin-D28K and BDNF expression, and inflammatory processes involved in astrogliosis, microgliosis and pro-inflammatory cytokine production. This study reveals neuroprotective effects of DMF beyond disruption of the KEAP1/NRF2 axis, by inhibiting GSK-3 in a mouse model of tauopathy. Our results support repurposing of this drug for treatment of these diseases. Keywords: DMF, Inflammation, Neurodegeneration, NRF2, Oxidative stress, TAU/GSK-3

  8. Blood-brain barrier alterations provide evidence of subacute diaschisis in an ischemic stroke rat model.

    Directory of Open Access Journals (Sweden)

    Svitlana Garbuzova-Davis

    Full Text Available Comprehensive stroke studies reveal diaschisis, a loss of function due to pathological deficits in brain areas remote from the initial ischemic lesion. However, blood-brain barrier (BBB) competence in subacute diaschisis is uncertain. The present study investigated subacute diaschisis in a focal ischemic stroke rat model, focusing specifically on BBB integrity and related pathogenic processes in contralateral brain areas. In the ipsilateral hemisphere 7 days after transient middle cerebral artery occlusion (tMCAO), significant BBB alterations characterized by large Evans Blue (EB) parenchymal extravasation, autophagosome accumulation, increased reactive astrocytes and activated microglia, demyelination, and neuronal damage were detected in the striatum and in the motor and somatosensory cortices. Vascular damage identified by ultrastructural and immunohistochemical analyses also occurred in the contralateral hemisphere. In the contralateral striatum and motor cortex, major ultrastructural BBB changes included: swollen and vacuolated endothelial cells containing numerous autophagosomes, pericyte degeneration, and perivascular edema. Additionally, prominent EB extravasation, increased endothelial autophagosome formation, rampant astrogliosis, activated microglia, widespread neuronal pyknosis and decreased myelin were observed in the contralateral striatum and in the motor and somatosensory cortices. These results demonstrate focal ischemic stroke-induced pathological disturbances in ipsilateral as well as contralateral brain areas, which were shown to be closely associated with BBB breakdown in remote brain microvessels and endothelial autophagosome accumulation. This microvascular damage in the subacute phase likely reveals ischemic diaschisis and should be considered in the development of treatment strategies for stroke.

  9. Assistance dogs provide a useful behavioral model to enrich communicative skills of assistance robots.

    Science.gov (United States)

    Gácsi, Márta; Szakadát, Sára; Miklósi, Adám

    2013-01-01

    These studies are part of a project aiming to reveal relevant aspects of human-dog interactions that could serve as a model for designing successful human-robot interactions. Presently there are no successfully commercialized assistance robots; however, assistance dogs work efficiently as partners for persons with disabilities. In Study 1, we analyzed the cooperation of 32 assistance dog-owner dyads performing a carrying task. We revealed typical behavior sequences and also differences depending on the dyads' experience and on whether the owner was a wheelchair user. In Study 2, we investigated dogs' responses to unforeseen difficulties during a retrieving task in two contexts. Dogs displayed specific communicative and displacement behaviors, and a strong commitment to executing the insoluble task. Questionnaire data from Study 3 confirmed that these behaviors could successfully attenuate owners' disappointment. Although owners anticipated the technical competence of future assistance robots to be moderate to high, they could not imagine robots as emotional companions, which negatively affected their acceptance ratings of future robotic assistants. We propose that assistance dogs' cooperative behaviors and problem-solving strategies should inspire the development of the relevant functions and social behaviors of assistance robots with limited manual and verbal skills.

  10. Directed evolution of a model primordial enzyme provides insights into the development of the genetic code.

    Directory of Open Access Journals (Sweden)

    Manuel M Müller

    Full Text Available The contemporary proteinogenic repertoire contains 20 amino acids with diverse functional groups and side chain geometries. Primordial proteins, in contrast, were presumably constructed from a subset of these building blocks. Subsequent expansion of the proteinogenic alphabet would have enhanced their capabilities, fostering the metabolic prowess and organismal fitness of early living systems. While the addition of amino acids bearing innovative functional groups directly enhances the chemical repertoire of proteomes, the inclusion of chemically redundant monomers is difficult to rationalize. Here, we studied how a simplified chorismate mutase evolves upon expanding its amino acid alphabet from nine to potentially 20 letters. Continuous evolution provided an enhanced enzyme variant that has only two point mutations, both of which extend the alphabet and jointly improve protein stability by >4 kcal/mol and catalytic activity tenfold. The same, seemingly innocuous substitutions (Ile→Thr, Leu→Val) occurred in several independent evolutionary trajectories. The increase in fitness they confer indicates that building blocks with very similar side chain structures are highly beneficial for fine-tuning protein structure and function.

  11. TFP5/TP5 peptide provides neuroprotection in the MPTP model of Parkinson′s disease

    Directory of Open Access Journals (Sweden)

    B K Binukumar

    2016-01-01

    Full Text Available Cyclin-dependent kinase 5 (Cdk5) is a member of the serine-threonine kinase family of cyclin-dependent kinases. Cdk5 is critical to normal mammalian nervous system development and plays important regulatory roles in multiple cellular functions. Recent evidence indicates that Cdk5 is inappropriately activated in several neurodegenerative conditions, including Parkinson's disease (PD). PD is a chronic neurodegenerative disorder characterized by the loss of dopamine neurons in the substantia nigra, decreased striatal dopamine levels, and consequent extrapyramidal motor dysfunction. During neurotoxicity, p35 is cleaved to form p25. Binding of p25 to Cdk5 leads to deregulation of Cdk5, resulting in a number of neurodegenerative pathologies. To date, strategies to specifically inhibit Cdk5 hyperactivity have not been successful without affecting normal Cdk5 activity. Here we show that inhibition of p25/Cdk5 hyperactivation through TFP5/TP5, a truncated 24-aa peptide derived from the Cdk5 activator p35, rescues nigrostriatal dopaminergic neurodegeneration induced by 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP/MPP+) in a mouse model of PD. TP5 peptide treatment also blocked dopamine depletion in the striatum and improved gait dysfunction after MPTP administration. The neuroprotective effect of TFP5/TP5 peptide is also associated with a marked reduction in neuroinflammation and apoptosis. We thus show inhibition of Cdk5/p25 hyperactivation by TFP5/TP5 peptide, identifying Cdk5/p25 as a potential therapeutic target to reduce neurodegeneration in PD.

  12. TELMISARTAN PROVIDES BETTER RENAL PROTECTION THAN VALSARTAN IN A RAT MODEL OF METABOLIC SYNDROME

    Science.gov (United States)

    Khan, Abdul Hye; Imig, John D.

    2013-01-01

    BACKGROUND Angiotensin receptor blockers (ARBs) telmisartan and valsartan were compared for renal protection in spontaneously hypertensive rats (SHR) fed a high-fat diet. We hypothesized that in cardiometabolic syndrome, telmisartan, an ARB with PPAR-γ activity, would offer better renal protection. METHODS SHR were fed either a normal (SHR-NF, 7% fat) or high-fat (SHR-HF, 36% fat) diet and treated with an ARB for 10 weeks. RESULTS Blood pressure was similar between SHR-NF (190±3 mmHg) and SHR-HF (192±4 mmHg) at the end of the 10-week period. Telmisartan and valsartan decreased blood pressure to similar extents in the SHR-NF and SHR-HF groups. Body weight was significantly higher in SHR-HF (368±5 g) than in SHR-NF (328±7 g). Telmisartan, but not valsartan, significantly reduced body weight gain in SHR-HF. Telmisartan was also more effective than valsartan in improving glycemic and lipid status in SHR-HF. Monocyte chemoattractant protein-1 (MCP-1), an inflammatory marker, was higher in SHR-HF (24±2 ng/d) than in SHR-NF (14±5 ng/d). Telmisartan reduced MCP-1 excretion in both SHR-HF and SHR-NF to a greater extent than valsartan. An indicator of renal injury, urinary albumin excretion, increased to 85±8 mg/d in SHR-HF compared with 54±9 mg/d in SHR-NF. Telmisartan (23±5 mg/d) was more effective than valsartan (45±3 mg/d) in lowering urinary albumin excretion in SHR-HF. Moreover, telmisartan reduced glomerular damage to a greater extent than valsartan in the SHR-HF group. CONCLUSIONS Collectively, our data demonstrate that telmisartan was more effective than valsartan in reducing body weight gain, renal inflammation, and renal injury in a rat model of cardiometabolic syndrome. PMID:21415842

  13. Overlapping gene expression profiles of model compounds provide opportunities for immunotoxicity screening

    International Nuclear Information System (INIS)

    Baken, Kirsten A.; Pennings, Jeroen L.A.; Jonker, Martijs J.; Schaap, Mirjam M.; Vries, Annemieke de; Steeg, Harry van; Breit, Timo M.; Loveren, Henk van

    2008-01-01

    In order to investigate immunotoxic effects of a set of model compounds in mice, a toxicogenomics approach was combined with information on macroscopical and histopathological effects on spleens and on modulation of immune function. Bis(tri-n-butyltin)oxide (TBTO), cyclosporin A (CsA), and benzo[a]pyrene (B[a]P) were administered to C57BL/6 mice at immunosuppressive dose levels. Acetaminophen (APAP) was included in the study since indications of immunomodulating properties of this compound have appeared in the literature. TBTO exposure caused the most pronounced effect on gene expression and also resulted in the most severe reduction of body weight gain and induction of splenic irregularities. All compounds caused inhibition of cell division in the spleen, as shown by microarray analysis as well as by suppression of lymphocyte proliferation after application of a contact sensitizer, as demonstrated in an immune function assay adapted from the local lymph node assay. The immunotoxicogenomics approach applied in this study thus pointed to immunosuppression through cell cycle arrest as a common mechanism of action of immunotoxicants, including APAP. Genes related to cell division such as Ccna2, Brca1, Birc5, Incenp, and Cdkn1a (p21) were identified as candidate genes to indicate anti-proliferative effects of xenobiotics in immune cells in future screening assays. The results of our experiments also show the value of group-wise pathway analysis for detection of more subtle transcriptional effects and the power of evaluating effects in the spleen to demonstrate immunotoxicity.

  14. Accurate and efficient gp120 V3 loop structure based models for the determination of HIV-1 co-receptor usage

    Directory of Open Access Journals (Sweden)

    Vaisman Iosif I

    2010-10-01

    Full Text Available Abstract Background HIV-1 targets human cells expressing both the CD4 receptor, which binds the viral envelope glycoprotein gp120, as well as either the CCR5 (R5) or CXCR4 (X4) co-receptors, which interact primarily with the third hypervariable (V3) loop of gp120. Determination of HIV-1 affinity for either the R5 or X4 co-receptor on host cells facilitates the inclusion of co-receptor antagonists as a part of patient treatment strategies. A dataset of 1193 distinct gp120 V3 loop peptide sequences (989 R5-utilizing, 204 X4-capable) is utilized to train predictive classifiers based on implementations of random forest, support vector machine, boosted decision tree, and neural network machine learning algorithms. An in silico mutagenesis procedure employing multibody statistical potentials, computational geometry, and threading of variant V3 sequences onto an experimental structure, is used to generate a feature vector representation for each variant whose components measure environmental perturbations at corresponding structural positions. Results Classifier performance is evaluated based on stratified 10-fold cross-validation, stratified dataset splits (2/3 training, 1/3 validation), and leave-one-out cross-validation. Best reported values of sensitivity (85%), specificity (100%), and precision (98%) for predicting X4-capable HIV-1 virus, overall accuracy (97%), Matthews correlation coefficient (89%), balanced error rate (0.08), and ROC area (0.97) all reach critical thresholds, suggesting that the models outperform six other state-of-the-art methods and come closer to competing with phenotype assays. Conclusions The trained classifiers provide instantaneous and reliable predictions regarding HIV-1 co-receptor usage, requiring only translated V3 loop genotypes as input. Furthermore, the novelty of these computational mutagenesis based predictor attributes distinguishes the models as orthogonal and complementary to previous methods that utilize sequence
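
The classifier performance figures quoted in this record (sensitivity, specificity, precision, accuracy, and the Matthews correlation coefficient) all derive from a binary confusion matrix. A minimal sketch of how they are computed, with invented labels rather than data from the study:

```python
import math

def binary_metrics(y_true, y_pred):
    """Confusion-matrix metrics for binary labels (here, 1 = X4-capable)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn)                  # sensitivity (recall)
    spec = tn / (tn + fp)                  # specificity
    prec = tp / (tp + fp)                  # precision
    acc = (tp + tn) / len(y_true)          # overall accuracy
    mcc = (tp * tn - fp * fn) / math.sqrt( # Matthews correlation coefficient
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sens, spec, prec, acc, mcc

# Toy example: 10 sequences, 4 truly X4-capable
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
print(binary_metrics(y_true, y_pred))
```

A balanced metric such as MCC matters here because the dataset is skewed (989 R5 vs. 204 X4), so raw accuracy alone can look deceptively high.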

  15. Accurate Wavelength Measurements and Modeling of Fe XV to Fe XIX Spectra Recorded in High-Density Plasmas between 13.5 and 17 Å

    Science.gov (United States)

    May, M. J.; Beiersdorfer, P.; Dunn, J.; Jordan, N.; Hansen, S. B.; Osterheld, A. L.; Faenov, A. Ya.; Pikuz, T. A.; Skobelev, I. Yu.; Flora, F.; Bollanti, S.; Di Lazzaro, P.; Murra, D.; Reale, A.; Reale, L.; Tomassetti, G.; Ritucci, A.; Francucci, M.; Martellucci, S.; Petrocelli, G.

    2005-06-01

    Iron spectra have been recorded from plasmas created at three different laser plasma facilities: the Tor Vergata University laser in Rome (Italy), the Hercules laser at ENEA in Frascati (Italy), and the Compact Multipulse Terawatt (COMET) laser at LLNL in California (USA). The measurements provide a means of identifying dielectronic satellite lines from Fe XVI and Fe XV in the vicinity of the strong 2p→3d transitions of Fe XVII. About 80 Δn≥1 lines of Fe XV (Mg-like) to Fe XIX (O-like) were recorded between 13.8 and 17.1 Å with a high spectral resolution (λ/Δλ~4000); about 30 of these lines are from Fe XVI and Fe XV. The laser-produced plasmas had electron temperatures between 100 and 500 eV and electron densities between 10²⁰ and 10²² cm⁻³. The Hebrew University Lawrence Livermore Atomic Code (HULLAC) was used to calculate the atomic structure and atomic rates for Fe XV-XIX. HULLAC was used to calculate synthetic line intensities at Te=200 eV and ne=10²¹ cm⁻³ for three different conditions to illustrate the role of opacity: optically thin plasmas with no excitation-autoionization/dielectronic recombination (EA/DR) contributions to the line intensities, optically thin plasmas that included EA/DR contributions to the line intensities, and optically thick plasmas (optical depth ~200 μm) that included EA/DR contributions to the line intensities. The optically thick simulation best reproduced the recorded spectrum from the Hercules laser. However, some discrepancies between the modeling and the recorded spectra remain.

  16. Accurate thickness measurement of graphene

    International Nuclear Information System (INIS)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-01-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. (paper)

  17. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle

    OpenAIRE

    Laaksonen, Pekka

    2011-01-01

    Laaksonen, Pekka. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle. Jyväskylä: University of Jyväskylä, 2011, 42 pp. Information Systems Science, bachelor's thesis. Supervisor(s): Käkölä, Timo. This bachelor's thesis examined how the practices of the eSourcing Capability Model for Service Providers relate to the four processes of knowledge management: knowledge creation, storage/retrieval, sharing...

  18. A simple simulation model as a tool to assess alternative health care provider payment reform options in Vietnam.

    Science.gov (United States)

    Cashin, Cheryl; Phuong, Nguyen Khanh; Shain, Ryan; Oanh, Tran Thi Mai; Thuy, Nguyen Thi

    2015-01-01

    Vietnam is currently considering a revision of its 2008 Health Insurance Law, including the regulation of provider payment methods. This study uses a simple spreadsheet-based, micro-simulation model to analyse the potential impacts of different provider payment reform scenarios on resource allocation across health care providers in three provinces in Vietnam, as well as on the total expenditure of the provincial branches of the public health insurance agency (Provincial Social Security [PSS]). The results show that currently more than 50% of PSS spending is concentrated at the provincial level with less than half at the district level. There is also a high degree of financial risk on district hospitals with the current fund-holding arrangement. Results of the simulation model show that several alternative scenarios for provider payment reform could improve the current payment system by reducing the high financial risk currently borne by district hospitals without dramatically shifting the current level and distribution of PSS expenditure. The results of the simulation analysis provided an empirical basis for health policy-makers in Vietnam to assess different provider payment reform options and make decisions about new models to support health system objectives.
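
As a rough illustration of what such a spreadsheet-style micro-simulation does, the sketch below compares provider revenue under fee-for-service and a budget-neutral capitation scenario. All provider names and figures are invented for illustration, not data from the Vietnamese study:

```python
# Hypothetical micro-simulation: provider revenue under fee-for-service
# (fee per visit x utilization) versus capitation (fixed rate x enrollment).
providers = [
    # (name, enrolled population, visits per year, fee per visit)
    ("district_hospital", 50_000, 120_000, 2.0),
    ("provincial_hospital", 20_000, 90_000, 5.0),
]

def fee_for_service(providers):
    return {name: visits * fee for name, _pop, visits, fee in providers}

def capitation(providers, rate_per_person):
    return {name: pop * rate_per_person for name, pop, _v, _f in providers}

ffs = fee_for_service(providers)
total = sum(ffs.values())
# Budget-neutral capitation rate: same total spend, redistributed by enrollment
rate = total / sum(pop for _n, pop, _v, _f in providers)
cap = capitation(providers, rate)
print(ffs, cap)
```

With these made-up numbers, total insurer expenditure is unchanged, but the capitation scenario shifts money toward the provider with the larger enrolled population; this is the kind of reallocation effect the model lets policy-makers inspect before reform.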

  19. An accurate mobility model for the I-V characteristics of n-channel enhancement-mode MOSFETs with single-channel boron implantation

    International Nuclear Information System (INIS)

    Chingyuan Wu; Yeongwen Daih

    1985-01-01

    In this paper, an analytical mobility model is developed for the I-V characteristics of n-channel enhancement-mode MOSFETs, in which the effects of the two-dimensional electric fields in the surface inversion channel and the parasitic resistances due to contact and interconnection are included. Most importantly, the developed mobility model easily takes the device structure and process into consideration. In order to demonstrate the capabilities of the developed model, the structure- and process-oriented parameters in the present mobility model are calculated explicitly for an n-channel enhancement-mode MOSFET with single-channel boron implantation. Moreover, n-channel MOSFETs with different channel lengths fabricated in a production line by using a set of test keys have been characterized, and the measured mobilities have been compared to the model. Excellent agreement has been obtained for all ranges of the fabricated channel lengths, which strongly supports the accuracy of the model. (author)
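
The abstract does not reproduce the model's equations; as a generic illustration of the kind of gate-field mobility degradation such compact models capture, a common textbook approximation reduces the low-field mobility with overdrive voltage through a fitting parameter θ. All parameter values below are hypothetical, not from the paper:

```python
def effective_mobility(mu0, theta, vgs, vt):
    """Textbook mobility degradation: mu_eff = mu0 / (1 + theta*(VGS - VT))."""
    return mu0 / (1.0 + theta * (vgs - vt))

def id_linear(mu_eff, cox, w_over_l, vgs, vt, vds):
    """Long-channel linear-region drain current with the degraded mobility."""
    return mu_eff * cox * w_over_l * ((vgs - vt) * vds - 0.5 * vds**2)

# Hypothetical values: mu0 in cm^2/V/s, theta in 1/V, VT in V
mu0, theta, vt = 600.0, 0.1, 0.7
mobilities = [effective_mobility(mu0, theta, v, vt) for v in (1.0, 2.0, 3.0)]
print(mobilities)
```

The monotonic fall of mobility with gate bias is the qualitative behavior any such I-V model must reproduce; the paper's contribution is tying the corresponding parameters to device structure and process rather than pure curve fitting.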

  20. Creation of a Collaborative Disaster Preparedness Video for Daycare Providers: Use of the Delphi Model for the Creation of a Comprehensive Disaster Preparedness Video for Daycare Providers.

    Science.gov (United States)

    Mar, Pamela; Spears, Robert; Reeb, Jeffrey; Thompson, Sarah B; Myers, Paul; Burke, Rita V

    2018-02-22

    Eight million American children under the age of 5 attend daycare, and more than 50 million more American children are in school or daycare settings. Emergency planning requirements for daycare licensing vary by state. Expert opinions were used to create a disaster preparedness video designed for daycare providers to cover a broad spectrum of scenarios. Seventeen stakeholders devised the outline for an educational pre-disaster video for child daycare providers using the Delphi technique. Fleiss κ values were obtained for consensus data. A 20-minute video was created, addressing the physical, psychological, and legal needs of children during and after a disaster. Viewers completed an anonymous survey to evaluate topic comprehension. A consensus was attempted on all topics, ranging from elements for inclusion to presentation format. A Fleiss κ value of 0.07 was obtained. Fifty-seven of the total 168 video viewers completed the 10-question survey, with comprehension scores ranging from 72% to 100%. Evaluation of caregivers who viewed our video supports understanding of the video contents. Ultimately, the technique used to create and disseminate the resources may serve as a template for others providing pre-disaster planning education. (Disaster Med Public Health Preparedness. 2018;page 1 of 5).

  1. Investigating Effective Components of Higher Education Marketing and Providing a Marketing Model for Iranian Private Higher Education Institutions

    Science.gov (United States)

    Kasmaee, Roya Babaee; Nadi, Mohammad Ali; Shahtalebi, Badri

    2016-01-01

    Purpose: The purpose of this paper is to study and identify the effective components of higher education marketing and providing a marketing model for Iranian higher education private sector institutions. Design/methodology/approach: This study is a qualitative research. For identifying the effective components of higher education marketing and…

  2. Improving sexual health communication between older women and their providers: how the integrative model of behavioral prediction can help.

    Science.gov (United States)

    Hughes, Anne K; Rostant, Ola S; Curran, Paul G

    2014-07-01

    Talking about sexual health can be a challenge for some older women. This project was initiated to identify key factors that improve communication between aging women and their primary care providers. A sample of women (aged 60+) completed an online survey regarding their intent to communicate with a provider about sexual health. Using the integrative model of behavioral prediction as a guide, the survey instrument captured data on attitudes, perceived norms, self-efficacy, and intent to communicate with a provider about sexual health. Data were analyzed using structural equation modeling. Self-efficacy and perceived norms were the most important factors predicting intent to communicate for this sample of women. Intent did not vary with race, but mean scores of the predictors of intent varied for African American and White women. Results can guide practice and intervention with ethnically diverse older women who may be struggling to communicate about their sexual health concerns. © The Author(s) 2013.

  3. Process informed accurate compact modelling of 14-nm FinFET variability and application to statistical 6T-SRAM simulations

    OpenAIRE

    Wang, Xingsheng; Reid, Dave; Wang, Liping; Millar, Campbell; Burenkov, Alex; Evanschitzky, Peter; Baer, Eberhard; Lorenz, Juergen; Asenov, Asen

    2016-01-01

    This paper presents a TCAD-based design technology co-optimization (DTCO) process for 14-nm SOI FinFET based SRAM, which employs an enhanced variability-aware compact modeling approach that fully takes process and lithography simulations and their impact on 6T-SRAM layout into account. Realistic double-patterned gates and fins and their impacts are taken into account in the development of the variability-aware compact model. Finally, global process-induced variability and local statistical var...

  4. BAD knockout provides metabolic seizure resistance in a genetic model of epilepsy with sudden unexplained death in epilepsy.

    Science.gov (United States)

    Foley, Jeannine; Burnham, Veronica; Tedoldi, Meghan; Danial, Nika N; Yellen, Gary

    2018-01-01

    Metabolic alteration, either through the ketogenic diet (KD) or by genetic alteration of the BAD protein, can produce seizure protection in acute chemoconvulsant models of epilepsy. To assess the seizure-protective role of knocking out (KO) the Bad gene in a chronic epilepsy model, we used the Kcna1 -/- model of epilepsy, which displays progressively increased seizure severity and recapitulates the early death seen in sudden unexplained death in epilepsy (SUDEP). Beginning on postnatal day 24 (P24), we continuously video monitored Kcna1 -/- and Kcna1 -/- Bad -/- double knockout mice to assess survival and seizure severity. We found that Kcna1 -/- Bad -/- mice outlived Kcna1 -/- mice by approximately 2 weeks. Kcna1 -/- Bad -/- mice also spent significantly less time in seizure than Kcna1 -/- mice on P24 and the day of death, showing that Bad KO provides seizure resistance in a genetic model of chronic epilepsy. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  5. A random forest based risk model for reliable and accurate prediction of receipt of transfusion in patients undergoing percutaneous coronary intervention.

    Directory of Open Access Journals (Sweden)

    Hitinder S Gurm

    Full Text Available BACKGROUND: Transfusion is a common complication of percutaneous coronary intervention (PCI) and is associated with adverse short- and long-term outcomes. There is no risk model for identifying patients most likely to receive transfusion after PCI. The objective of our study was to develop and validate a tool for predicting receipt of blood transfusion in patients undergoing contemporary PCI. METHODS: Random forest models were developed utilizing 45 pre-procedural clinical and laboratory variables to estimate the receipt of transfusion in patients undergoing PCI. The most influential variables were selected for inclusion in an abbreviated model. Model performance estimating transfusion was evaluated in an independent validation dataset using area under the ROC curve (AUC), with net reclassification improvement (NRI) used to compare full and reduced model prediction after grouping into low, intermediate, and high risk categories. The impact of procedural anticoagulation on observed versus predicted transfusion rates was assessed for the different risk categories. RESULTS: Our study cohort was comprised of 103,294 PCI procedures performed at 46 hospitals between July 2009 and December 2012 in Michigan, of which 72,328 (70%) were randomly selected for training the models and 30,966 (30%) for validation. The models demonstrated excellent calibration and discrimination (AUC: full model = 0.888 [95% CI 0.877-0.899], reduced model = 0.880 [95% CI 0.868-0.892], p for difference 0.003; NRI = 2.77%, p = 0.007). Procedural anticoagulation and radial access significantly influenced transfusion rates in the intermediate and high risk patients, but no clinically relevant impact was noted in low risk patients, who made up 70% of the total cohort. CONCLUSIONS: The risk of transfusion among patients undergoing PCI can be reliably calculated using a novel easy-to-use computational tool (https://bmc2.org/calculators/transfusion). This risk prediction
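
The AUC values reported above can be computed without plotting an ROC curve at all, via the Mann-Whitney rank identity: AUC equals the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case. A minimal sketch with invented risk scores (not study data):

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of ROC area: P(score_pos > score_neg), ties count 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted transfusion risks: 4 transfused, 6 non-transfused patients
transfused = [0.9, 0.8, 0.7, 0.4]
not_transfused = [0.6, 0.5, 0.3, 0.2, 0.2, 0.1]
print(auc(transfused, not_transfused))
```

This pairwise form is O(n·m) and only suitable for small examples; production implementations rank the pooled scores instead, but the value computed is the same.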

  6. Multi-port network and 3D finite-element models for accurate transformer calculations: Single-phase load-loss test

    Energy Technology Data Exchange (ETDEWEB)

    Escarela-Perez, R. [Departamento de Energia, Universidad Autonoma Metropolitana, Av. San Pablo 180, Col. Reynosa, C.P. 02200, Mexico D.F. (Mexico); Kulkarni, S.V. [Electrical Engineering Department, Indian Institute of Technology, Bombay (India); Melgoza, E. [Instituto Tecnologico de Morelia, Av. Tecnologico 1500, Morelia, Mich., C.P. 58120 (Mexico)

    2008-11-15

    A six-port impedance network for a three-phase transformer is obtained from a 3D time-harmonic finite-element (FE) model. The network model properly captures the eddy current effects of the transformer tank and frame. All theorems and tools of passive linear networks can be used with the multi-port model to simulate several important operating conditions without resorting to computationally expensive 3D FE simulations. The results of the network model are of the same quality as those produced by the FE program. Although the passive network may seem limited by the assumption of linearity, many important transformer operating conditions imply unsaturated states. Single-phase load-loss measurements are employed to demonstrate the effectiveness of the network model and to understand phenomena that could not be explained with conventional equivalent circuits. In addition, formal deduction of novel closed-form formulae is presented for the calculation of the leakage impedance measured at the high and low voltage sides of the transformer. (author)
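
One standard use of such a multi-port impedance description is recovering a leakage impedance from a short-circuit (load-loss) condition. For a single winding pair, elementary network theory gives the driving-point impedance at port 1 with port 2 shorted directly from the Z-matrix; the sketch below uses made-up values, not the paper's transformer data:

```python
# Driving-point impedance of port 1 with port 2 short-circuited,
# from a 2x2 impedance matrix: Z_sc = Z11 - Z12*Z21/Z22.
def short_circuit_impedance(z11, z12, z21, z22):
    return z11 - z12 * z21 / z22

# Hypothetical transformer Z-parameters (ohms): large magnetizing reactance
# with strong mutual coupling leaves a small leakage impedance at the terminals.
z11 = z22 = 10j
z12 = z21 = 9.5j
print(short_circuit_impedance(z11, z12, z21, z22))
```

The same elimination generalizes to the six-port case by partitioning the 6x6 Z-matrix and reducing the shorted ports, which is how the network model reproduces a single-phase load-loss test without rerunning the FE solver.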

  7. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.
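
The abstract does not list which pushers APT implements; as one example of the volume-preserving geometric integrators such codes depend on for long-term accuracy, the standard Boris rotation advances a charged particle while conserving kinetic energy exactly in a pure magnetic field. A self-contained sketch (normalized units, hypothetical parameters):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def boris_push(v, e_field, b_field, q_over_m, dt):
    """One Boris step: half electric kick, magnetic rotation, half electric kick."""
    h = 0.5 * q_over_m * dt
    v_minus = tuple(vi + h * ei for vi, ei in zip(v, e_field))
    t = tuple(h * bi for bi in b_field)                 # rotation vector
    s_denom = 1.0 + sum(ti * ti for ti in t)
    s = tuple(2.0 * ti / s_denom for ti in t)
    v_prime = tuple(vm + c for vm, c in zip(v_minus, cross(v_minus, t)))
    v_plus = tuple(vm + c for vm, c in zip(v_minus, cross(v_prime, s)))
    return tuple(vp + h * ei for vp, ei in zip(v_plus, e_field))

# Gyration in a uniform B_z with E = 0: the speed should be conserved exactly.
v = (1.0, 0.0, 0.2)
for _ in range(1000):
    v = boris_push(v, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), q_over_m=1.0, dt=0.1)
speed2 = sum(vi * vi for vi in v)
print(speed2)
```

Unlike a generic Runge-Kutta step, the Boris rotation introduces no secular energy drift, which is why this family of methods suits the multi-orbit, long-time runs described in the abstract.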

  8. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    Science.gov (United States)

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  9. [Barriers to the normalization of telemedicine in a healthcare system model based on purchasing of healthcare services using providers' contracts].

    Science.gov (United States)

    Roig, Francesc; Saigí, Francesc

    2011-01-01

    Despite the clear political will to promote telemedicine and the large number of initiatives, the incorporation of this modality in clinical practice remains limited. The objective of this study was to identify the barriers perceived by key professionals who actively participate in the design and implementation of telemedicine in a healthcare system model based on purchasing of healthcare services using providers' contracts. We performed a qualitative study based on data from semi-structured interviews with 17 key informants belonging to distinct Catalan health organizations. The barriers identified were grouped in four areas: technological, organizational, human and economic. The main barriers identified were changes in the healthcare model caused by telemedicine, problems with strategic alignment, resistance to change in the (re)definition of roles, responsibilities and new skills, and lack of a business model that incorporates telemedicine in the services portfolio to ensure its sustainability. In addition to suitable management of change and of the necessary strategic alignment, the definitive normalization of telemedicine in a mixed healthcare model based on purchasing of healthcare services using providers' contracts requires a clear and stable business model that incorporates this modality in the services portfolio and allows healthcare organizations to obtain reimbursement from the payer. 2010 SESPAS. Published by Elsevier Espana. All rights reserved.

  10. Accurate Measurement of the Optical Constants n and k for a Series of 57 Inorganic and Organic Liquids for Optical Modeling and Detection.

    Science.gov (United States)

    Myers, Tanya L; Tonkyn, Russell G; Danby, Tyler O; Taubman, Matthew S; Bernacki, Bruce E; Birnbaum, Jerome C; Sharpe, Steven W; Johnson, Timothy J

    2018-04-01

    For optical modeling and other purposes, we have created a library of 57 liquids for which we have measured the complex optical constants n and k. These liquids vary in their nature, ranging in properties that include chemical structure, optical band strength, volatility, and viscosity. By obtaining the optical constants, one can model most optical phenomena in media and at interfaces including reflection, refraction, and dispersion. Based on the works of others, we have developed improved protocols using multiple path lengths to determine the optical constants n/k for dozens of liquids, including inorganic, organic, and organophosphorus compounds. Detailed descriptions of the measurement and data reduction protocols are discussed; agreement of the derived optical constant n and k values with literature values is presented. We also present results using the n/k values as applied to an optical modeling scenario whereby the derived data are presented and tested for models of 1 µm and 100 µm layers for dimethyl methylphosphonate (DMMP) on both metal (aluminum) and dielectric (soda lime glass) substrates to show substantial differences between the reflected signal from highly reflective substrates and less-reflective substrates.
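
Given n and k, the simplest reflection quantity one can model is the normal-incidence reflectance at an interface, from the complex Fresnel amplitude coefficient. The values below are generic illustrations, not entries from the 57-liquid library:

```python
def normal_reflectance(n1, k1, n2, k2):
    """Normal-incidence power reflectance between media with complex indices n + ik."""
    m1 = complex(n1, k1)
    m2 = complex(n2, k2)
    r = (m1 - m2) / (m1 + m2)   # complex Fresnel amplitude coefficient
    return abs(r) ** 2

# Air to a non-absorbing glass-like medium (n = 1.5, k = 0): the familiar 4% reflection
print(normal_reflectance(1.0, 0.0, 1.5, 0.0))
# An absorbing band (n = 1.4, k = 0.3) reflects more than the same n with k = 0
print(normal_reflectance(1.0, 0.0, 1.4, 0.3) > normal_reflectance(1.0, 0.0, 1.4, 0.0))
```

Thin-layer scenarios like the 1 µm and 100 µm DMMP films add interference between the film's two interfaces, so a full transfer-matrix treatment is needed there; the single-interface formula above is the building block.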

  11. Providing the meta-model of development of competency using the meta-ethnography approach: Part 2. Synthesis of the available competency development models

    Directory of Open Access Journals (Sweden)

    Shahram Yazdani

    2016-12-01

    Full Text Available Background and Purpose: Considering the importance and necessity of competency-based education at a global level, and with respect to globalization and the requirement of minimum competencies in medical fields, medical education communities and organizations worldwide have tried to determine competencies and to present frameworks and education models that ensure the ability of all graduates. In the literature, we observed numerous competency development models that refer to the same issues with different terminologies. It seems that evaluation and synthesis of all these models can finally result in designing a comprehensive meta-model for competency development. Methods: Meta-ethnography is a useful method for the synthesis of qualitative research that is used to develop models interpreting the results of several studies. Since the aim of this study is to ultimately provide a competency development meta-model, in the previous part of the study a literature review was conducted to obtain competency development models. The models obtained through the search were studied in detail; in this part, the key concepts of the models and overarching concepts were extracted, the models' concepts were reciprocally translated, and the available competency development models were synthesized. Results: A competency development meta-model is presented, along with a redefinition of the Dreyfus brothers' model. Conclusions: Given the importance of competency-based education at a global level and the need to review curricula and competency-based curriculum design, a competency development meta-model is required as a basis for curriculum development. As a variety of competency development models are available, this study attempted to synthesize them. Keywords: Meta-ethnography, Competency development, Meta-model, Qualitative synthesis

  12. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas