WorldWideScience

Sample records for absolute error mae

  1. Assessing energy forecasting inaccuracy by simultaneously considering temporal and absolute errors

    International Nuclear Information System (INIS)

    Frías-Paredes, Laura; Mallor, Fermín; Gastón-Romeo, Martín; León, Teresa

    2017-01-01

    Highlights: • A new method to match time series is defined to assess energy forecasting accuracy. • This method relies on a new family of step patterns that optimizes the MAE. • A new definition of the Temporal Distortion Index between two series is provided. • A parametric extension controls both the temporal distortion index and the MAE. • Pareto optimal transformations of the forecast series are obtained for both indexes. - Abstract: Recent years have seen a growing trend in wind and solar energy generation globally, and it is expected that an important percentage of total energy production will come from these energy sources. However, they present inherent variability that implies fluctuations in energy generation that are difficult to forecast. Thus, forecasting errors have a considerable role in the impacts and costs of renewable energy integration, management, and commercialization. This study presents an important advance in the task of analyzing prediction models, in particular, in the timing component of prediction error, which improves previous pioneering results. A new method to match time series is defined in order to assess energy forecasting accuracy. This method relies on a new family of step patterns, an essential component of the algorithm to evaluate the temporal distortion index (TDI). This family minimizes the mean absolute error (MAE) of the transformation with respect to the reference series (the real energy series) and also allows detailed control of the temporal distortion entailed in the prediction series. The simultaneous consideration of temporal and absolute errors allows the use of Pareto frontiers as characteristic error curves. Real examples of wind energy forecasts are used to illustrate the results.

  2. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important tasks for an organization. Selecting an appropriate forecasting method matters, but quantifying the percentage error of a method matters even more if decision makers are to act on the results. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least square method resulted in a percentage error of 9.77%, and it was concluded that the least square method is suitable for time series and trend data.
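
    To make the two measures concrete, here is a minimal sketch (not taken from the paper; the demand series and the linear least-squares trend fit are invented for illustration) that computes the Mean Absolute Deviation/Error and the Mean Absolute Percentage Error of a least-squares trend forecast:

```python
import numpy as np

def mad(actual, predicted):
    """Mean Absolute Deviation / Mean Absolute Error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted))

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Illustrative monthly demand series (hypothetical numbers, not the paper's data)
y = np.array([112.0, 118.0, 132.0, 129.0, 141.0, 135.0, 148.0, 152.0])
t = np.arange(len(y))

# Least-squares linear trend fit, in the spirit of the least square method above
slope, intercept = np.polyfit(t, y, deg=1)
y_hat = intercept + slope * t

print(f"MAD/MAE: {mad(y, y_hat):.2f}")
print(f"MAPE:    {mape(y, y_hat):.2f}%")
```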

  3. Study of errors in absolute flux density measurements of Cassiopeia A

    International Nuclear Information System (INIS)

    Kanda, M.

    1975-10-01

    An error analysis for absolute flux density measurements of Cassiopeia A is discussed. The lower-bound quadrature-accumulation error for state-of-the-art measurements of the absolute flux density of Cas A around 7 GHz is estimated to be 1.71% for 3 sigma limits. The corresponding practicable error for the careful but not state-of-the-art measurement is estimated to be 4.46% for 3 sigma limits

  4. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    Science.gov (United States)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can introduce errors at the nanometer scale, has become a main problem limiting the accuracy of absolute distance measurement. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed. This eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus, the main cause of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, is eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  5. MAE measurements and studies of magnetic domains by electron microscopy

    International Nuclear Information System (INIS)

    Lo, C.C.H.

    1998-01-01

    There is a pressing need for non-destructive testing (NDT) methods for monitoring steel microstructures as they determine the mechanical properties of steel products. Magnetoacoustic emission (MAE) has potential for this application since it is sensitive to steel microstructure. The aim of this project is to study systematically the dependence of MAE upon steel microstructure, and to apply the technique to examine the industrial steel components which have complicated microstructures. Studies of MAE and Barkhausen emission (BE) were made on several systems including fully pearlitic, fully ferritic, ferritic/pearlitic and spheroidized steels. Results suggest that there is a correlation between the microstructural parameters and the MAE and BE profiles. The study of fully pearlitic steel shows that both MAE and BE are sensitive to the interlamellar spacing of pearlite. Low-carbon ferritic steel samples give different MAE and BE profiles which are dependent on ferrite grain size. Lorentz microscopy reveals that there are differences in domain structures and magnetization processes between fully ferritic and fully pearlitic samples. Study of ferritic/pearlitic samples indicates that both MAE and BE depend on the ferrite content. In the case of spheroidized steel samples MAE and BE profiles were found to be sensitive to the changes in the morphology and size of carbides. Samples of industrial steel products including pearlitic rail steel and decarburized billet were investigated. The MAE profiles obtained from the rail are consistent with those measured from the fully pearlitic rod samples. This suggests that MAE can be used for monitoring the microstructure of large steel components, provided that another technique such as BE is also used to complement the MAE measurements. In the study of the billet samples, MAE and BE were found to be dependent on the decarburization depth. The results are discussed in the context of the change in ferrite content of the surface layer

  6. Systematic literature review of hospital medication administration errors in children

    Directory of Open Access Journals (Sweden)

    Ameer A

    2015-11-01

    Full Text Available Ahmed Ameer,1 Soraya Dhillon,1 Mark J Peters,2 Maisoon Ghaleb1 1Department of Pharmacy, School of Life and Medical Sciences, University of Hertfordshire, Hatfield, UK; 2Paediatric Intensive Care Unit, Great Ormond Street Hospital, London, UK Objective: Medication administration is the last step in the medication process. It can act as a safety net to prevent unintended harm to patients if detected. However, medication administration errors (MAEs) during this process have been documented and thought to be preventable. In pediatric medicine, doses are usually administered based on the child's weight or body surface area. This in turn increases the risk of drug miscalculations and therefore MAEs. The aim of this review is to report MAEs occurring in pediatric inpatients. Methods: Twelve bibliographic databases were searched for studies published between January 2000 and February 2015 using “medication administration errors”, “hospital”, and “children” related terminologies. Handsearching of relevant publications was also carried out. A second reviewer screened articles for eligibility and quality in accordance with the inclusion/exclusion criteria. Key findings: A total of 44 studies were systematically reviewed. MAEs were generally defined as a deviation of dose given from that prescribed; this included omitted doses and administration at the wrong time. Hospital MAEs in children accounted for a mean of 50% of all reported medication error reports (n=12,588). It was also identified in a mean of 29% of doses observed (n=8,894). The most prevalent type of MAEs related to preparation, infusion rate, dose, and time. This review has identified five types of interventions to reduce hospital MAEs in children: barcode medicine administration, electronic prescribing, education, use of smart pumps, and standard concentration. Conclusion: This review has identified a wide variation in the prevalence of hospital MAEs in children. This is attributed to

  7. Causes of medication administration errors in hospitals: a systematic review of quantitative and qualitative evidence.

    Science.gov (United States)

    Keers, Richard N; Williams, Steven D; Cooke, Jonathan; Ashcroft, Darren M

    2013-11-01

    Underlying systems factors have been seen to be crucial contributors to the occurrence of medication errors. By understanding the causes of these errors, the most appropriate interventions can be designed and implemented to minimise their occurrence. This study aimed to systematically review and appraise empirical evidence relating to the causes of medication administration errors (MAEs) in hospital settings. Nine electronic databases (MEDLINE, EMBASE, International Pharmaceutical Abstracts, ASSIA, PsycINFO, British Nursing Index, CINAHL, Health Management Information Consortium and Social Science Citations Index) were searched between 1985 and May 2013. Inclusion and exclusion criteria were applied to identify eligible publications through title analysis followed by abstract and then full text examination. English language publications reporting empirical data on causes of MAEs were included. Reference lists of included articles and relevant review papers were hand searched for additional studies. Studies were excluded if they did not report data on specific MAEs, used accounts from individuals not directly involved in the MAE concerned or were presented as conference abstracts with insufficient detail. A total of 54 unique studies were included. Causes of MAEs were categorised according to Reason's model of accident causation. Studies were assessed to determine relevance to the research question and how likely the results were to reflect the potential underlying causes of MAEs based on the method(s) used. Slips and lapses were the most commonly reported unsafe acts, followed by knowledge-based mistakes and deliberate violations. Error-provoking conditions influencing administration errors included inadequate written communication (prescriptions, documentation, transcription), problems with medicines supply and storage (pharmacy dispensing errors and ward stock management), high perceived workload, problems with ward-based equipment (access, functionality

  8. Assessment of the possibility of using data mining methods to predict sorption isotherms of selected organic compounds on activated carbon

    Directory of Open Access Journals (Sweden)

    Dąbek Lidia

    2017-01-01

    Full Text Available The paper analyses the use of four data mining methods (Support Vector Machines, Cascade Neural Networks, Random Forests and Boosted Trees) to predict sorption on activated carbons. The input data for statistical models included the activated carbon parameters, organic substances and equilibrium concentrations in the solution. The assessment of the predictive abilities of the developed models was made with the use of mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean squared error (RMSE). The computations proved that methods of data mining considered in the study can be applied to predict sorption of selected organic compounds on activated carbon. The lowest values of sorption prediction errors were obtained with the Cascade Neural Networks method (MAE = 1.23 g/g; MAPE = 7.90% and RMSE = 1.81 g/g), while the highest error values were produced by the Boosted Trees method (MAE = 14.31 g/g; MAPE = 39.43% and RMSE = 27.76 g/g).

  9. Designing and evaluating an automated system for real-time medication administration error detection in a neonatal intensive care unit.

    Science.gov (United States)

    Ni, Yizhao; Lingren, Todd; Hall, Eric S; Leonard, Matthew; Melton, Kristin; Kirkendall, Eric S

    2018-05-01

    Timely identification of medication administration errors (MAEs) promises great benefits for mitigating medication errors and associated harm. Despite previous efforts utilizing computerized methods to monitor medication errors, sustaining effective and accurate detection of MAEs remains challenging. In this study, we developed a real-time MAE detection system and evaluated its performance prior to system integration into institutional workflows. Our prospective observational study included automated MAE detection of 10 high-risk medications and fluids for patients admitted to the neonatal intensive care unit at Cincinnati Children's Hospital Medical Center during a 4-month period. The automated system extracted real-time medication use information from the institutional electronic health records and identified MAEs using logic-based rules and natural language processing techniques. The MAE summary was delivered via a real-time messaging platform to promote reduction of patient exposure to potential harm. System performance was validated using a physician-generated gold standard of MAE events, and results were compared with those of current practice (incident reporting and trigger tools). Physicians identified 116 MAEs from 10 104 medication administrations during the study period. Compared to current practice, the sensitivity with automated MAE detection was improved significantly from 4.3% to 85.3% (P = .009), with a positive predictive value of 78.0%. Furthermore, the system showed potential to reduce patient exposure to potential harm following MAE events, from 256 min to 35 min.

  10. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
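
    The bias-variance-noise decomposition discussed in the abstract can be probed numerically. The following sketch is an illustrative Monte Carlo experiment, not the author's derivation: the data-generating process, noise level, and polynomial model are all assumptions chosen only to show how the additive decomposition holds for squared error but has to be estimated directly for absolute error.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                     # "true" process (assumption for illustration)
    return np.sin(x)

sigma = 0.3                   # observation noise level (assumption)
x0 = 1.0                      # single model evaluation point
n_train, n_runs = 30, 5000

preds = np.empty(n_runs)
for i in range(n_runs):
    # Each run draws a fresh training set and refits a deliberately simple model
    x = rng.uniform(0.0, 3.0, n_train)
    y = f(x) + rng.normal(0.0, sigma, n_train)
    coeffs = np.polyfit(x, y, deg=2)
    preds[i] = np.polyval(coeffs, x0)

obs0 = f(x0) + rng.normal(0.0, sigma, n_runs)  # fresh observations at x0

bias     = preds.mean() - f(x0)                # systematic error
variance = preds.var()                         # model sensitivity to the sample
noise    = sigma**2                            # observation instability

# Classical additive decomposition holds for squared error...
print("E[SQ error] ~", np.mean((obs0 - preds)**2))
print("bias^2 + variance + noise =", bias**2 + variance + noise)

# ...but not for absolute error, which must be estimated directly
print("E[ABS error] ~", np.mean(np.abs(obs0 - preds)))
```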

  11. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
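
    The two proposed statistics are straightforward to read off a benchmark's list of errors. The sketch below is a generic illustration (the error sample is simulated, not taken from the paper): it estimates the probability that an absolute error falls below a chosen threshold and the error amplitude not exceeded at a 95% confidence level, both derived from the empirical cumulative distribution function of unsigned errors.

```python
import numpy as np

def ecdf_statistics(errors, threshold, confidence=0.95):
    """Empirical-CDF-based error statistics for a benchmark set.

    Returns (1) the fraction of absolute errors below `threshold` and
    (2) the absolute-error amplitude not exceeded with probability `confidence`.
    """
    abs_err = np.abs(np.asarray(errors, dtype=float))
    p_below = np.mean(abs_err < threshold)
    q_conf = np.quantile(abs_err, confidence)
    return p_below, q_conf

# Hypothetical benchmark errors (e.g. in kcal/mol); skewed and not zero-centered
rng = np.random.default_rng(42)
errors = rng.normal(loc=0.5, scale=1.0, size=200) + rng.exponential(0.5, size=200)

p, q95 = ecdf_statistics(errors, threshold=1.0)
print(f"P(|error| < 1.0) = {p:.2f}")
print(f"95% of absolute errors are below {q95:.2f}")
```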

  12. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension

  13. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    Science.gov (United States)

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…

  14. Error Budget for a Calibration Demonstration System for the Reflected Solar Instrument for the Climate Absolute Radiance and Refractivity Observatory

    Science.gov (United States)

    Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan

    2013-01-01

    A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe high-accuracy, long-term climate change trends over decadal time scales. The key to such a goal is improving the accuracy of SI traceable absolute calibration across infrared and reflected solar wavelengths, allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy to allow climate change observations to survive data gaps exist at NIST in the laboratory, but it still needs to be demonstrated that these advances can move successfully from the laboratory to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.

  15. A review on Black-Scholes model in pricing warrants in Bursa Malaysia

    Science.gov (United States)

    Gunawan, Nur Izzaty Ilmiah Indra; Ibrahim, Siti Nur Iqmal; Rahim, Norhuda Abdul

    2017-01-01

    This paper studies the accuracy of the Black-Scholes (BS) model and the dilution-adjusted Black-Scholes (DABS) model in pricing selected warrants traded on the Malaysian market. Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) are used to compare the two models. Results show that the DABS model is more accurate than the BS model for the selected data.
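
    As a rough illustration of the comparison performed in the paper, the sketch below prices a call-style warrant with the standard Black-Scholes formula and scores it against market quotes using MAE and MAPE. The parameters and market prices are invented, and the dilution-adjusted variant is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Standard Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Hypothetical warrant parameters and observed market prices (illustrative only)
S = np.array([1.50, 1.80, 2.10])        # underlying prices
K, T, r, sigma = 1.60, 0.75, 0.03, 0.40 # strike, maturity (yr), rate, volatility
market = np.array([0.18, 0.35, 0.58])   # invented market quotes

model = bs_call(S, K, T, r, sigma)
mae  = np.mean(np.abs(market - model))
mape = 100 * np.mean(np.abs((market - model) / market))
print(f"MAE  = {mae:.4f}")
print(f"MAPE = {mape:.2f}%")
```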

  16. A large-area, spatially continuous assessment of land cover map error and its impact on downstream analyses.

    Science.gov (United States)

    Estes, Lyndon; Chen, Peng; Debats, Stephanie; Evans, Tom; Ferreira, Stefanus; Kuemmerle, Tobias; Ragazzo, Gabrielle; Sheffield, Justin; Wolf, Adam; Wood, Eric; Caylor, Kelly

    2018-01-01

    Land cover maps increasingly underlie research into socioeconomic and environmental patterns and processes, including global change. It is known that map errors impact our understanding of these phenomena, but quantifying these impacts is difficult because many areas lack adequate reference data. We used a highly accurate, high-resolution map of South African cropland to assess (1) the magnitude of error in several current generation land cover maps, and (2) how these errors propagate in downstream studies. We first quantified pixel-wise errors in the cropland classes of four widely used land cover maps at resolutions ranging from 1 to 100 km, and then calculated errors in several representative "downstream" (map-based) analyses, including assessments of vegetative carbon stocks, evapotranspiration, crop production, and household food security. We also evaluated maps' spatial accuracy based on how precisely they could be used to locate specific landscape features. We found that cropland maps can have substantial biases and poor accuracy at all resolutions (e.g., at 1 km resolution, up to ∼45% underestimates of cropland (bias) and nearly 50% mean absolute error (MAE, describing accuracy); at 100 km, up to 15% underestimates and nearly 20% MAE). National-scale maps derived from higher-resolution imagery were most accurate, followed by multi-map fusion products. Constraining mapped values to match survey statistics may be effective at minimizing bias (provided the statistics are accurate). Errors in downstream analyses could be substantially amplified or muted, depending on the values ascribed to cropland-adjacent covers (e.g., with forest as adjacent cover, carbon map error was 200%-500% greater than in input cropland maps, but ∼40% less for sparse cover types). The average locational error was 6 km (600%). These findings provide deeper insight into the causes and potential consequences of land cover map error, and suggest several recommendations for land

  17. Medication Administration Errors Involving Paediatric In-Patients in a ...

    African Journals Online (AJOL)

    Erah

    In-Patients in a Hospital in Ethiopia. Yemisirach Feleke. Purpose: To assess the type and frequency of medication administration errors (MAEs) in the paediatric ward of a hospital in Ethiopia.

  18. Enterprise Mobile Tracking and Reminder System: MAE

    Directory of Open Access Journals (Sweden)

    Cheah Huei Yoong

    2012-07-01

    Full Text Available Mobile phones have evolved significantly, from providing voice communication to offering advanced features such as camera, GPS, Wi-Fi, SMS, voice recognition, Internet surfing, and touch screen. This paper presents an enterprise mobile tracking and reminder system (MAE) that enables the elderly to have a better elder-care experience. The high-level architecture and major software algorithms, especially the tracking in Android phones and the SMS functions in the server, are described. The analysis of the data captured and a performance study of the server are discussed. In order to show the effectiveness of MAE, a pilot test was carried out with a retirement village in Singapore and the feedback from the elderly was evaluated. Generally, most comments received from the elderly were positive.

  19. Carers' Medication Administration Errors in the Domiciliary Setting: A Systematic Review.

    Directory of Open Access Journals (Sweden)

    Anam Parand

    Full Text Available Medications are mostly taken in patients' own homes, increasingly administered by carers, yet studies of medication safety have been largely conducted in the hospital setting. We aimed to review studies of how carers cause and/or prevent medication administration errors (MAEs) within the patient's home; to identify types, prevalence and causes of these MAEs and any interventions to prevent them. A narrative systematic review of literature published between 1 Jan 1946 and 23 Sep 2013 was carried out across the databases EMBASE, MEDLINE, PSYCHINFO, COCHRANE and CINAHL. Empirical studies were included where carers were responsible for preventing/causing MAEs in the home, and standardised tools were used for data extraction and quality assessment. Thirty-six papers met the criteria for narrative review, 33 of which included parents caring for children, two predominantly comprised adult children and spouses caring for older parents/partners, and one focused on paid carers mostly looking after older adults. The carer administration error rate ranged from 1.9 to 33% of medications administered and from 12 to 92.7% of carers administering medication. These included dosage errors, omitted administration, wrong medication and wrong time or route of administration. Contributory factors included individual carer factors (e.g. carer age), environmental factors (e.g. storage), medication factors (e.g. number of medicines), prescription communication factors (e.g. comprehensibility of instructions), psychosocial factors (e.g. carer-to-carer communication), and care-recipient factors (e.g. recipient age). The few interventions effective in preventing MAEs involved carer training and tailored equipment. This review shows that home medication administration errors made by carers are a potentially serious patient safety issue. Carers made similar errors to those made by professionals in other contexts and a wide variety of contributory factors were identified. The home care

  20. 31 CFR 354.5 - Obligations of Sallie Mae; no adverse claims.

    Science.gov (United States)

    2010-07-01

    ...-ENTRY SECURITIES OF THE STUDENT LOAN MARKETING ASSOCIATION (SALLIE MAE) § 354.5 Obligations of Sallie... a Federal Reserve Bank or otherwise as provided in § 354.4(c)(1), for the purposes of this part 354, Sallie Mae and the Federal Reserve Banks shall treat the Participant to whose Securities Account an...

  1. Medication Administration Errors in an Adult Emergency Department of a Tertiary Health Care Facility in Ghana.

    Science.gov (United States)

    Acheampong, Franklin; Tetteh, Ashalley Raymond; Anto, Berko Panyin

    2016-12-01

    This study determined the incidence, types, clinical significance, and potential causes of medication administration errors (MAEs) at the emergency department (ED) of a tertiary health care facility in Ghana. This study used a cross-sectional nonparticipant observational technique. Study participants (nurses) were observed preparing and administering medication at the ED of a 2000-bed tertiary care hospital in Accra, Ghana. The observations were then compared with patients' medication charts, and identified errors were clarified with staff for possible causes. Of the 1332 observations made, involving 338 patients and 49 nurses, 362 had errors, representing 27.2%. However, the error rate excluding "lack of drug availability" fell to 12.8%. Without wrong time error, the error rate was 22.8%. The 2 most frequent error types were omission (n = 281, 77.6%) and wrong time (n = 58, 16%) errors. Omission error was mainly due to unavailability of medicine, 48.9% (n = 177). Although only one of the errors was potentially fatal, 26.7% were definitely clinically severe. The common themes that dominated the probable causes of MAEs were unavailability, staff factors, patient factors, prescription, and communication problems. This study gives credence to similar studies in different settings that MAEs occur frequently in the ED of hospitals. Most of the errors identified were not potentially fatal; however, preventive strategies need to be used to make life-saving processes such as drug administration in such specialized units error-free.

  2. Antibacterial activity of a modified unfilled resin containing a novel polymerizable quaternary ammonium salt MAE-HB.

    Science.gov (United States)

    Huang, Li; Yu, Fan; Sun, Xiang; Dong, Yan; Lin, Ping-Ting; Yu, Hao-Han; Xiao, Yu-Hong; Chai, Zhi-Guo; Xing, Xiao-Dong; Chen, Ji-Hua

    2016-09-23

    Resins with strong and long-lasting antibacterial properties are critical for the prevention of secondary dental caries. In this study, we evaluated the antibacterial effect and the underlying mechanism of action of an unfilled resin incorporating 2-methacryloxylethyl hexadecyl methyl ammonium bromide (MAE-HB) against Streptococcus mutans UA159 (S. mutans UA159). MAE-HB was added into unfilled resin at 10 mass%, and unfilled resin without MAE-HB served as the control. Bacterial growth was inhibited on 10%-MAE-HB unfilled resin compared with the control at 1 d, 7 d, 30 d, or 180 d (P < 0.05). No significant differences in the antibacterial activities of eluents from control versus 10%-MAE-HB unfilled resins were observed at any time point (P > 0.05). The number of bacteria attached to 10%-MAE-HB unfilled resin was considerably lower than that to control. FE-SEM and CLSM showed that 10%-MAE-HB unfilled resin disturbed the integrity of bacterial cells. Expression of the bacterial glucosyltransferases, gtfB and gtfC, was lower on 10%-MAE-HB unfilled resin compared to that on control (P < 0.05). These results indicate that MAE-HB confers unfilled resin with strong and long-lasting antibacterial effects against S. mutans.

  3. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    Science.gov (United States)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss Raman duration and frequency step-size-dependent magnetic field measurement uncertainty, present vector light shift and tensor light shift induced magnetic field measurement offset, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atomic interferometer gravimeters.

  4. An effective collaborative movie recommender system with cuckoo search

    Directory of Open Access Journals (Sweden)

    Rahul Katarya

    2017-07-01

    Full Text Available Recommender systems are information filtering tools that aspire to predict ratings for users and items, predominantly from big data, in order to recommend items users will like. Movie recommendation systems provide a mechanism to assist users in classifying users with similar interests. This makes recommender systems essentially a central part of websites and e-commerce applications. This article focuses on movie recommendation systems whose primary objective is to suggest a recommender system through data clustering and computational intelligence. In this research article, a novel recommender system is discussed which makes use of k-means clustering with the cuckoo search optimization algorithm, applied to the Movielens dataset. Our approach is explained systematically, and the subsequent results are discussed. It is also compared with existing approaches, and the results are analyzed and interpreted. Evaluation with metrics such as mean absolute error (MAE), standard deviation (SD), root mean square error (RMSE) and t-value shows that our approach delivers better results, offering lower values of mean absolute error, standard deviation, and root mean square error. The experimental results obtained on the Movielens dataset indicate that the proposed approach may provide high performance regarding reliability and efficiency, and delivers accurate personalized movie recommendations when compared with existing methods. Our proposed system (K-mean Cuckoo) has an MAE of 0.68, which is superior to existing work (MAE 0.78) [1] and also improves on our previous work (MAE 0.75) [2].

  5. 78 FR 21393 - Notice of Submission of Proposed Information Collection to OMB Ginnie Mae Multiclass Securities...

    Science.gov (United States)

    2013-04-10

    ..., Ginnie Mae has already guaranteed the collateral for the multiclass instruments. The Ginnie Mae... mortgage market and to attract new sources of capital for federally insured or guaranteed loans. Under this... guaranteed the collateral for the multiclass instruments. The Ginnie Mae Multiclass Securities Program...

  6. Optimization of microwave-assisted extraction with saponification (MAES) for the determination of polybrominated flame retardants in aquaculture samples.

    Science.gov (United States)

    Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R

    2008-08-01

    The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for simultaneous optimization of the operational parameters used in both MAES and MAE processes. Application of MAES to this group of contaminants in aquaculture samples, which had not been previously applied to this type of analytes, was shown to be superior to MAE in terms of extraction efficiency, extraction time and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron capture detection (GC-μECD). The quantification limits for the analytes were 40-750 pg g⁻¹ (except for BB-15, which was 1.43 ng g⁻¹). Precision for MAES-GC-μECD (%RSD < 11%) was significantly better than for MAE-GC-μECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of appropriate certified reference material (CRM), WMF-01.

  7. Quality improvements in decreasing medication administration errors made by nursing staff in an academic medical center hospital: a trend analysis during the journey to Joint Commission International accreditation and in the post-accreditation era.

    Science.gov (United States)

    Wang, Hua-Fen; Jin, Jing-Fen; Feng, Xiu-Qin; Huang, Xin; Zhu, Ling-Ling; Zhao, Xiao-Ying; Zhou, Quan

    2015-01-01

    Medication errors may occur during prescribing, transcribing, prescription auditing, preparing, dispensing, administration, and monitoring. Medication administration errors (MAEs) are those that actually reach patients and remain a threat to patient safety. The Joint Commission International (JCI) advocates medication error prevention, but experience in reducing MAEs during the period of before and after JCI accreditation has not been reported. An intervention study, aimed at reducing MAEs in hospitalized patients, was performed in the Second Affiliated Hospital of Zhejiang University, Hangzhou, People's Republic of China, during the journey to JCI accreditation and in the post-JCI accreditation era (first half-year of 2011 to first half-year of 2014). Comprehensive interventions included organizational, information technology, educational, and process optimization-based measures. Data mining was performed on MAEs derived from a compulsory electronic reporting system. The number of MAEs continuously decreased from 143 (first half-year of 2012) to 64 (first half-year of 2014), with a decrease in occurrence rate by 60.9% (0.338% versus 0.132%, P<0.05). The number of MAEs related to high-alert medications decreased from 32 (the second half-year of 2011) to 16 (the first half-year of 2014), with a decrease in occurrence rate by 57.9% (0.0787% versus 0.0331%, P<0.05). Omission was the top type of MAE during the first half-year of 2011 to the first half-year of 2014, with a decrease by 50% (40 cases versus 20 cases). Intravenous administration error was the top type of error regarding administration route, but it continuously decreased from 64 (first half-year of 2012) to 27 (first half-year of 2014). More experienced registered nurses made fewer medication errors. The number of MAEs in surgical wards was twice that in medicinal wards. Compared with non-intensive care units, the intensive care units exhibited higher occurrence rates of MAEs (1.81% versus 0.24%, P<0

  8. Toward reduced transport errors in a high resolution urban CO2 inversion system

    Directory of Open Access Journals (Sweden)

    Aijun Deng

    2017-05-01

    Full Text Available We present a high-resolution atmospheric inversion system combining a Lagrangian Particle Dispersion Model (LPDM) and the Weather Research and Forecasting model (WRF), and test the impact of assimilating meteorological observations on transport accuracy. A Four Dimensional Data Assimilation (FDDA) technique continuously assimilates meteorological observations from various observing systems into the transport modeling system, and is coupled to the high resolution CO2 emission product Hestia to simulate the atmospheric mole fractions of CO2. For the Indianapolis Flux Experiment (INFLUX) project, we evaluated the impact of assimilating different meteorological observation systems on the linearized adjoint solutions and the CO2 inverse fluxes estimated using observed CO2 mole fractions from 11 out of 12 communications towers over Indianapolis for the Sep.-Nov. 2013 period. While assimilating WMO surface measurements improved the simulated wind speed and direction, their impact on the planetary boundary layer (PBL) was limited. Simulated PBL wind statistics improved significantly when assimilating upper-air observations from the commercial airline program Aircraft Communications Addressing and Reporting System (ACARS) and continuous ground-based Doppler lidar wind observations. Wind direction mean absolute error (MAE) decreased from 26 to 14 degrees and the wind speed MAE decreased from 2.0 to 1.2 m s⁻¹, while the bias remains small in all configurations (< 6 degrees and 0.2 m s⁻¹). Wind speed MAE and ME are larger in daytime than in nighttime. PBL depth MAE is reduced by ~10%, with little bias reduction. The inverse results indicate that the spatial distribution of CO2 inverse fluxes was affected by the model performance while the overall flux estimates changed little across WRF simulations when aggregated over the entire domain. Our results show that PBL wind observations are a potent tool for increasing the precision of urban meteorological reanalyses

  9. Sallie Mae Eyes Expansion beyond Its Charter.

    Science.gov (United States)

    Zook, Jim

    1995-01-01

    The Student Loan Marketing Association (Sallie Mae) and the Clinton Administration are preparing legislation to transform the federally sponsored corporation into a private business but must negotiate complex political and financial issues. Destabilization of the private student-loan industry and conflict over direct-lending policies are central…

  10. Quality improvements in decreasing medication administration errors made by nursing staff in an academic medical center hospital: a trend analysis during the journey to Joint Commission International accreditation and in the post-accreditation era

    Directory of Open Access Journals (Sweden)

    Wang HF

    2015-03-01

    Full Text Available Hua-fen Wang,1 Jing-fen Jin,1 Xiu-qin Feng,1 Xin Huang,1 Ling-ling Zhu,2 Xiao-ying Zhao,3 Quan Zhou4 1Division of Nursing, 2Geriatric VIP Ward, Division of Nursing, 3Office of Quality Administration, 4Department of Pharmacy, the Second Affiliated Hospital of Zhejiang University, School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, People’s Republic of China Background: Medication errors may occur during prescribing, transcribing, prescription auditing, preparing, dispensing, administration, and monitoring. Medication administration errors (MAEs) are those that actually reach patients and remain a threat to patient safety. The Joint Commission International (JCI) advocates medication error prevention, but experience in reducing MAEs during the period of before and after JCI accreditation has not been reported. Methods: An intervention study, aimed at reducing MAEs in hospitalized patients, was performed in the Second Affiliated Hospital of Zhejiang University, Hangzhou, People’s Republic of China, during the journey to JCI accreditation and in the post-JCI accreditation era (first half-year of 2011 to first half-year of 2014). Comprehensive interventions included organizational, information technology, educational, and process optimization-based measures. Data mining was performed on MAEs derived from a compulsory electronic reporting system. Results: The number of MAEs continuously decreased from 143 (first half-year of 2012) to 64 (first half-year of 2014), with a decrease in occurrence rate by 60.9% (0.338% versus 0.132%, P<0.05). The number of MAEs related to high-alert medications decreased from 32 (the second half-year of 2011) to 16 (the first half-year of 2014), with a decrease in occurrence rate by 57.9% (0.0787% versus 0.0331%, P<0.05). Omission was the top type of MAE during the first half-year of 2011 to the first half-year of 2014, with a decrease by 50% (40 cases versus 20 cases). Intravenous administration error was the

  11. Demand forecasting of electricity in Indonesia with limited historical data

    Science.gov (United States)

    Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif

    2018-03-01

    Demand forecasting of electricity is an important activity for electrical agents, giving them a picture of future electricity demand. Prediction of electricity demand can be done using time series models. In this paper, the double moving average model, Holt’s exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The result shows that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
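
    The grey model GM(1,1) referenced above is compact enough to sketch. The implementation below follows the standard GM(1,1) construction (accumulated series, background values, least-squares estimation of the development and grey coefficients); the demand figures are hypothetical and are not the Indonesian data used in the paper.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=1):
    """Grey model GM(1,1) fit and forecast for a short, positive time series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack((-z1, np.ones(len(z1))))    # design matrix
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]     # development and grey coefficients
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])     # de-accumulate
    x0_hat[0] = x0[0]
    return x0_hat

# Hypothetical annual electricity demand (TWh) with very few observations
demand = [155.0, 163.2, 172.8, 183.4, 195.1]
fit_and_forecast = gm11_forecast(demand, n_ahead=2)
mae = np.mean(np.abs(np.array(demand) - fit_and_forecast[:len(demand)]))
print("Forecasts:", fit_and_forecast[len(demand):])
print(f"In-sample MAE: {mae:.2f} TWh")
```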

  12. A Multi-Approach Evaluation System (MA-ES) of Organic Rankine Cycles (ORC) used in waste heat utilization

    International Nuclear Information System (INIS)

    Shu, Gequn; Yu, Guopeng; Tian, Hua; Wei, Haiqiao; Liang, Xingyu

    2014-01-01

    Highlights: • The MA-ES provides comprehensive valuations on ORC used for waste heat utilization. • The MA-ES covers energetic, exergetic and economic evaluations of typical ORCs. • The MA-ES is a general assessing method without restriction to specific ORC condition. • Two ORC cases of ICE waste-heat-recovery are exemplified applying the MA-ES. - Abstract: A Multi-Approach Evaluation System (MA-ES) is established in this paper providing comprehensive evaluations on Organic Rankine Cycles (ORC) used for waste heat utilization. The MA-ES covers three main aspects of typical ORC performance: basic evaluations of energy distribution and system efficiency based on the 1st law of thermodynamics; evaluations of exergy distribution and exergy efficiency based on the 2nd law of thermodynamics; economic evaluations based on calculations of equipment capacity, investment and cost recovery. The MA-ES is reasonably organized aiming at providing a general method of ORC performance assessment, without restrictions to system configurations, operation modes, applications, working fluid types, equipment conditions, process parameters and so on. Two ORC cases of internal combustion engines’ (ICEs) waste-heat-recovery are exemplified to illustrate the applications of the evaluation system. The results clearly revealed the performance comparisons among ORC configurations and working fluids referred. The comparisons will provide credible guidance for ORC design, equipment selection and system construction

  13. Errors of absolute methods of reactor neutron activation analysis caused by non-1/E epithermal neutron spectra

    International Nuclear Information System (INIS)

    Erdtmann, G.

    1993-08-01

    A sufficiently accurate characterization of the neutron flux and spectrum, i.e. the determination of the thermal flux, the flux ratio and the epithermal flux spectrum shape factor, α, is a prerequisite for all types of absolute and monostandard methods of reactor neutron activation analysis. A convenient method for these measurements is the bare triple monitor method. However, the results of this method are very imprecise, because there are high error propagation factors from the counting errors of the monitor activities. Procedures are described to calculate the errors of the flux parameters, the α-dependent cross-section ratios, and of the analytical results from the errors of the activities of the monitor isotopes. They are included in FORTRAN programs which also allow a graphical representation of the results. A great number of examples were calculated for ten different irradiation facilities in four reactors and for 28 elements. Plots of the results are presented and discussed. (orig./HP)

  14. Provider risk factors for medication administration error alerts: analyses of a large-scale closed-loop medication administration system using RFID and barcode.

    Science.gov (United States)

    Hwang, Yeonsoo; Yoon, Dukyong; Ahn, Eun Kyoung; Hwang, Hee; Park, Rae Woong

    2016-12-01

    To determine the risk factors and rate of medication administration error (MAE) alerts by analyzing large-scale medication administration data and related error logs automatically recorded in a closed-loop medication administration system using radio-frequency identification and barcodes. The subject hospital adopted a closed-loop medication administration system. All medication administrations in the general wards were automatically recorded in real-time using radio-frequency identification, barcodes, and hand-held point-of-care devices. MAE alert logs were recorded during the full year of 2012. We evaluated risk factors for MAE alerts including administration time, order type, medication route, the number of medication doses administered, and factors associated with nurse practices by logistic regression analysis. A total of 2 874 539 medication dose records from 30 232 patients (882.6 patient-years) were included in 2012. We identified 35 082 MAE alerts (1.22% of total medication doses). The MAE alerts were significantly related to administration at non-standard time [odds ratio (OR) 1.559, 95% confidence interval (CI) 1.515-1.604], emergency order (OR 1.527, 95%CI 1.464-1.594), and the number of medication doses administered (OR 0.993, 95%CI 0.992-0.993). Medication route, nurse's employment duration, and working schedule were also significantly related. The MAE alert rate was 1.22% over the 1-year observation period in the hospital examined in this study. The MAE alerts were significantly related to administration time, order type, medication route, the number of medication doses administered, nurse's employment duration, and working schedule. The real-time closed-loop medication administration system contributed to improving patient safety by preventing potential MAEs. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Analysis of the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) in Assessing Rounding Model

    Science.gov (United States)

    Wang, Weijie; Lu, Yanmin

    2018-03-01

    Most existing Collaborative Filtering (CF) algorithms predict a rating as the preference of an active user toward a given item, which is usually a decimal fraction, whereas the actual ratings in most data sets are integers. In this paper, we discuss and demonstrate why rounding influences these two metrics differently; we show that rounding the predicted ratings in post-processing is necessary, as it eliminates model prediction bias and improves prediction accuracy. In addition, we also propose two new rounding approaches based on the predicted rating probability distribution, which can be used to round the predicted rating to an optimal integer rating and obtain better prediction accuracy compared to the Basic Rounding approach. Extensive experiments on different data sets validate the correctness of our analysis and the effectiveness of our proposed rounding approaches.
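
    A minimal illustration of the effect being analyzed: rounding fractional predicted ratings to the integer scale and comparing MAE and RMSE before and after. The ratings below are invented, and only the Basic Rounding approach is shown, not the probability-distribution-based approaches proposed in the paper.

```python
import numpy as np

def mae(a, p):  return np.mean(np.abs(np.asarray(a) - np.asarray(p)))
def rmse(a, p): return np.sqrt(np.mean((np.asarray(a) - np.asarray(p)) ** 2))

# Hypothetical CF output: true integer ratings and fractional predictions
actual    = np.array([4, 3, 5, 2, 4, 1, 5, 3])
predicted = np.array([3.6, 3.4, 4.4, 2.5, 4.2, 1.8, 4.6, 2.6])

# Basic Rounding: round to the nearest integer and clip to the 1-5 scale
rounded = np.clip(np.rint(predicted), 1, 5)

print("MAE  raw vs rounded :", mae(actual, predicted),  mae(actual, rounded))
print("RMSE raw vs rounded :", rmse(actual, predicted), rmse(actual, rounded))
```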

  16. Optimization of microwave-assisted extraction (MAE) of coriander phenolic antioxidants - response surface methodology approach.

    Science.gov (United States)

    Zeković, Zoran; Vladić, Jelena; Vidović, Senka; Adamović, Dušan; Pavlić, Branimir

    2016-10-01

    Microwave-assisted extraction (MAE) of polyphenols from coriander seeds was optimized by simultaneous maximization of total phenolic (TP) and total flavonoid (TF) yields, as well as maximized antioxidant activity determined by 1,1-diphenyl-2-picrylhydrazyl and reducing power assays. Box-Behnken experimental design with response surface methodology (RSM) was used for optimization of MAE. Extraction time (X1, 15-35 min), ethanol concentration (X2, 50-90% w/w) and irradiation power (X3, 400-800 W) were investigated as independent variables. Experimentally obtained values of investigated responses were fitted to a second-order polynomial model, and multiple regression analysis and analysis of variance were used to determine fitness of the model and optimal conditions. The optimal MAE conditions for simultaneous maximization of polyphenol yield and increased antioxidant activity were an extraction time of 19 min, an ethanol concentration of 63% and an irradiation power of 570 W, while predicted values of TP, TF, IC50 and EC50 at optimal MAE conditions were 311.23 mg gallic acid equivalent per 100 g dry weight (DW), 213.66 mg catechin equivalent per 100 g DW, 0.0315 mg mL⁻¹ and 0.1311 mg mL⁻¹ respectively. RSM was successfully used for multi-response optimization of coriander seed polyphenols. Comparison of optimized MAE with conventional extraction techniques confirmed that MAE provides significantly higher polyphenol yields and extracts with increased antioxidant activity. © 2016 Society of Chemical Industry.

  17. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable as compared to single biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown and is expensive to estimate through replication both in monetary cost and need for greater amount of sample which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data in which a subset of serum biomarkers are re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.

  18. Comparison of Two Hybrid Models for Forecasting the Incidence of Hemorrhagic Fever with Renal Syndrome in Jiangsu Province, China.

    Directory of Open Access Journals (Sweden)

    Wei Wu

    Full Text Available Cases of hemorrhagic fever with renal syndrome (HFRS) are widely distributed in eastern Asia, especially in China, Russia, and Korea. It has proved to be a difficult task to eliminate HFRS completely because of the diverse animal reservoirs and the effects of global warming. Reliable forecasting is useful for the prevention and control of HFRS. Two hybrid models, one composed of a nonlinear autoregressive neural network (NARNN) and an autoregressive integrated moving average (ARIMA) model, the other composed of a generalized regression neural network (GRNN) and ARIMA, were constructed to predict the incidence of HFRS in the following year. Performances of the two hybrid models were compared with the ARIMA model. The ARIMA, ARIMA-NARNN and ARIMA-GRNN models fitted and predicted the seasonal fluctuation well. Among the three models, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN hybrid model were the lowest in both the modeling stage and the forecasting stage. As for the ARIMA-GRNN hybrid model, the MSE, MAE and MAPE of the modeling performance and the MSE and MAE of the forecasting performance were less than those of the ARIMA model, but the MAPE of the forecasting performance did not improve. Developing and applying the ARIMA-NARNN hybrid model is an effective method to better understand the epidemic characteristics of HFRS and could be helpful for the prevention and control of HFRS.
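
    The general ARIMA-plus-neural-network hybrid idea (fit a linear ARIMA model, then model its residuals with a small neural network on lagged residuals) can be sketched as follows. This is not the authors' exact ARIMA-NARNN or ARIMA-GRNN configuration; the synthetic incidence series, lag order, ARIMA order, and network settings are all assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic monthly incidence with trend, seasonality and noise (illustrative only)
t = np.arange(120)
y = 50 + 0.1 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)
train, test = y[:108], y[108:]

# Stage 1: linear component with ARIMA
arima = ARIMA(train, order=(2, 1, 1)).fit()
linear_fc = arima.forecast(steps=len(test))
residuals = train - arima.fittedvalues

# Stage 2: nonlinear component, a small NN trained on lagged residuals
lags = 12
X = np.column_stack([residuals[i:len(residuals) - lags + i] for i in range(lags)])
target = residuals[lags:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, target)

# Roll the residual corrections forward recursively over the forecast horizon
hist = list(residuals[-lags:])
corrections = []
for _ in range(len(test)):
    r_hat = nn.predict(np.array(hist[-lags:]).reshape(1, -1))[0]
    corrections.append(r_hat)
    hist.append(r_hat)

hybrid_fc = linear_fc + np.array(corrections)
print(f"Hybrid forecast MAE: {np.mean(np.abs(test - hybrid_fc)):.2f}")
```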

  19. Comparison of Parametric and Nonparametric Methods for Analyzing the Bias of a Numerical Model

    Directory of Open Access Journals (Sweden)

    Isaac Mugume

    2016-01-01

    Full Text Available Numerical models are presently applied in many fields for simulation and prediction, operation, or research. The output from these models normally has both systematic and random errors. The study compared January 2015 temperature data for Uganda as simulated using the Weather Research and Forecast model with actual observed station temperature data to analyze the bias using parametric (the root mean square error (RMSE), the mean absolute error (MAE), mean error (ME), skewness, and the bias easy estimate (BES)) and nonparametric (the sign test, STM) methods. The RMSE normally overestimates the error compared to MAE. The RMSE and MAE are not sensitive to direction of bias. The ME gives both direction and magnitude of bias but can be distorted by extreme values while the BES is insensitive to extreme values. The STM is robust for giving the direction of bias; it is not sensitive to extreme values but it does not give the magnitude of bias. The graphical tools (such as time series and cumulative curves) show the performance of the model with time. It is recommended to integrate parametric and nonparametric methods along with graphical methods for a comprehensive analysis of bias of a numerical model.
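
    A compact sketch of the parametric measures (ME, MAE, RMSE) alongside a nonparametric sign test on the bias direction, applied to hypothetical station-versus-model temperatures (the numbers are invented, not the Uganda data):

```python
import numpy as np
from scipy.stats import binomtest

def bias_metrics(model, observed):
    """Parametric bias/accuracy measures plus a sign test on the bias direction."""
    e = np.asarray(model, float) - np.asarray(observed, float)
    me   = e.mean()                      # mean error: direction and magnitude of bias
    mae  = np.abs(e).mean()              # insensitive to direction of bias
    rmse = np.sqrt((e ** 2).mean())      # >= MAE, inflated by large errors
    # Sign test: are positive errors significantly more frequent than negative ones?
    pos, nonzero = int((e > 0).sum()), int((e != 0).sum())
    p_value = binomtest(pos, nonzero, 0.5).pvalue
    return me, mae, rmse, p_value

# Hypothetical observed station temperatures vs model output (degrees C)
obs   = np.array([22.1, 23.4, 21.8, 24.0, 22.7, 23.9, 21.5, 22.9])
model = np.array([23.0, 24.1, 22.5, 24.2, 23.5, 24.6, 22.0, 23.1])

me, mae, rmse, p = bias_metrics(model, obs)
print(f"ME={me:.2f}  MAE={mae:.2f}  RMSE={rmse:.2f}  sign-test p={p:.3f}")
```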

  20. Comparative study of four time series methods in forecasting typhoid fever incidence in China.

    Science.gov (United States)

    Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A; Li, Xiaosong

    2013-01-01

    Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely, back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN) were compared. The differences, as well as the advantages and disadvantages, among the SARIMA model and the neural networks were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models ranked in descending order were: RBFNN, ERNN, BPNN and the SARIMA model.

  1. Comparative study of four time series methods in forecasting typhoid fever incidence in China.

    Directory of Open Access Journals (Sweden)

    Xingyu Zhang

    Full Text Available Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely, back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN) were compared. The differences, as well as the advantages and disadvantages, among the SARIMA model and the neural networks were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models ranked in descending order were: RBFNN, ERNN, BPNN and the SARIMA model.

  2. Errors and limits in the determination of plasma electron density by measuring the absolute values of the emitted continuum radiation intensity

    International Nuclear Information System (INIS)

    Bilbao, L.; Bruzzone, H.; Grondona, D.

    1994-01-01

    The reliable determination of a plasma electron structure requires a good knowledge of the errors affecting the employed technique. A technique based on the measurements of the absolute light intensity emitted by travelling plasma structures in plasma focus devices has been used, but it can be easily modified to other geometries and even to stationary plasma structures with time-varying plasma densities. The purpose of this work is to discuss in some detail the errors and limits of this technique. Three separate errors are shown: the minimum size of the density structure that can be resolved, an overall error in the measurements themselves, and an uncertainty in the shape of the density profile. (author)

  3. An Ensemble Learning for Predicting Breakdown Field Strength of Polyimide Nanocomposite Films

    Directory of Open Access Journals (Sweden)

    Hai Guo

    2015-01-01

    Full Text Available Using the method of Stochastic Gradient Boosting, ten SMO-SVR are constructed into a strong prediction model (SGBS model) that is efficient in predicting the breakdown field strength. Adopting the method of in situ polymerization, thirty-two samples of nanocomposite films with different percentage compositions, components, and thicknesses are prepared. Then, the breakdown field strength is tested by using voltage test equipment. From the test results, the correlation coefficient (CC), the mean absolute error (MAE), the root mean squared error (RMSE), the relative absolute error (RAE), and the root relative squared error (RRSE) are 0.9664, 14.2598, 19.684, 22.26%, and 25.01% with the SGBS model. The result indicates that the predicted values fit well with the measured ones. Comparisons between models such as linear regression, BP, GRNN, SVR, and SMO-SVR have also been made under the same conditions. They show that the CC of the SGBS model is higher than those of other models. Nevertheless, the MAE, RMSE, RAE, and RRSE of the SGBS model are lower than those of other models. This demonstrates that the SGBS model is better than other models in predicting the breakdown field strength of polyimide nanocomposite films.
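
    The relative measures quoted in this record (RAE and RRSE) scale the absolute and squared errors by those of a naive predictor that always outputs the mean observed value; a small illustration with made-up numbers follows.

        # RAE and RRSE relate prediction error to a mean-predicting baseline;
        # MAE and RMSE are shown alongside. All numbers are illustrative only.
        import numpy as np

        y_true = np.array([210.0, 245.0, 198.0, 260.0, 233.0])   # hypothetical measurements
        y_pred = np.array([218.0, 240.0, 205.0, 251.0, 238.0])   # hypothetical model output

        abs_err = np.abs(y_pred - y_true)
        sq_err = (y_pred - y_true) ** 2
        baseline = np.abs(y_true - y_true.mean())

        mae = abs_err.mean()
        rmse = np.sqrt(sq_err.mean())
        rae = 100 * abs_err.sum() / baseline.sum()                   # in percent
        rrse = 100 * np.sqrt(sq_err.sum() / (baseline ** 2).sum())   # in percent

        print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  RAE={rae:.1f}%  RRSE={rrse:.1f}%")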

  4. The impact of a closed-loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before-and-after study.

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-08-01

    To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards. Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; chi-squared test). A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased.

  5. An affordable cuff-less blood pressure estimation solution.

    Science.gov (United States)

    Jain, Monika; Kumar, Niranjan; Deb, Sujay

    2016-08-01

    This paper presents a cuff-less hypertension pre-screening device that non-invasively monitors Blood Pressure (BP) and Heart Rate (HR) continuously. The proposed device simultaneously records two clinically significant and highly correlated biomedical signals, viz., the Electrocardiogram (ECG) and Photoplethysmogram (PPG). The device provides a common data acquisition platform that can interface with a PC/laptop, smartphone/tablet, Raspberry Pi, etc. The hardware stores and processes the recorded ECG and PPG in order to extract real-time BP and HR using a kernel regression approach. The BP and HR estimation error is measured in terms of normalized mean square error, Error Standard Deviation (ESD) and Mean Absolute Error (MAE), with respect to a clinically proven digital BP monitor (OMRON HBP1300). The computed error falls under the maximum allowable error specified by the Association for the Advancement of Medical Instrumentation, making the proposed device a low-cost home- and clinic-based solution for continuous health monitoring.
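
    The record only names a "kernel regression approach"; a generic Nadaraya-Watson kernel regression like the sketch below is one plausible reading, with a hypothetical pulse-transit-time-like feature and invented values, not the authors' actual estimator.

        # Minimal Nadaraya-Watson kernel regression as a generic stand-in for the
        # "kernel regression approach" mentioned in the record. All values are
        # illustrative; the feature is a hypothetical pulse-transit-time quantity.
        import numpy as np

        def kernel_regression(x_train, y_train, x_query, bandwidth=0.05):
            """Predict y at x_query as a Gaussian-kernel weighted mean of y_train."""
            w = np.exp(-0.5 * ((x_query - x_train) / bandwidth) ** 2)
            return np.sum(w * y_train) / np.sum(w)

        ptt = np.array([0.21, 0.23, 0.25, 0.27, 0.30])       # hypothetical feature (s)
        sbp = np.array([138.0, 132.0, 126.0, 121.0, 115.0])  # hypothetical systolic BP (mmHg)

        print("estimated SBP at PTT=0.24 s:", round(kernel_regression(ptt, sbp, 0.24), 1))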

  6. Microwave-assisted extraction (MAE) of bioactive saponin from mahogany seed (Swietenia mahogany Jacq)

    Science.gov (United States)

    Waziiroh, E.; Harijono; Kamilia, K.

    2018-03-01

    Mahogany is frequently used in medicines for cancer, tumors, and diabetes, as it contains saponins and flavonoids. Saponin is a complex glycosidic compound consisting of triterpenoids or steroids. Saponin can be extracted from plant material by solvent extraction. Microwave-Assisted Extraction (MAE) is a non-conventional extraction method that uses microwaves in the process. This research used a completely randomized design with two factors, extraction time (120, 150, and 180 seconds) and solvent ratio (10:1, 15:1, and 20:1 v/w). The best MAE treatment was a solvent ratio of 15:1 (v/w) for 180 seconds, which resulted in a crude saponin extract yield of 41.46%, containing 11.53% total saponins and showing 49.17% antioxidant activity. Meanwhile, the best maceration treatment was a solvent ratio of 20:1 (v/w) for 48 hours, yielding 39.86% crude saponin extract, 9.26% total saponins and 56.23% antioxidant activity. The results showed that MAE was more efficient (shorter extraction time and less solvent) than the maceration method.

  7. Biological indices for classification of water quality around Mae Moh power plant, Thailand

    Directory of Open Access Journals (Sweden)

    Pongsarun Junshum and Siripen Traichaiyaporn

    2007-12-01

    Full Text Available The algal communities and water quality were monitored at eight sampling sites around Mae Moh power plant during January-December 2003. Three biological indices, viz. algal genus pollution index, saprobic index, and Shannon-Weaver index, were adopted to classify the water quality around the power plant in comparison with the measured physico-chemical water quality. The result shows that the Shannon-Weaver diversity index appears to be much more applicable and interpretable for the classification of water quality around the Mae Moh power plant than the algal genus pollution index and the saprobic index.

  8. Accuracy evaluation of Fourier series analysis and singular spectrum analysis for predicting the volume of motorcycle sales in Indonesia

    Science.gov (United States)

    Sasmita, Yoga; Darmawan, Gumgum

    2017-08-01

    This research aims to evaluate the forecasting performance of Fourier Series Analysis (FSA) and Singular Spectrum Analysis (SSA), which are more explorative and do not require parametric assumptions. Both methods are applied to predicting the volume of motorcycle sales in Indonesia from January 2005 to December 2016 (monthly), and both are suitable for data with seasonal and trend components. Technically, FSA describes the series as the combination of trend and seasonal components at different frequencies, which are difficult to identify in a time domain analysis. With a hidden period of 2.918 ≈ 3 and a significant model order of 3, the FSA model is used to predict the testing data. Meanwhile, SSA has two main processes, decomposition and reconstruction. SSA decomposes the time series data into different components, and the reconstruction process starts with grouping the decomposition results based on the similarity period of each component in the trajectory matrix. With the optimal window length (L = 53) and grouping effect (r = 4), SSA predicts the testing data. Forecasting accuracy is evaluated with the Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The results show that for the next 12 months, SSA has MAPE = 13.54 percent, MAE = 61,168.43 and RMSE = 75,244.92, while FSA has MAPE = 28.19 percent, MAE = 119,718.43 and RMSE = 142,511.17. Therefore, the SSA method, which has the better accuracy, should be used to predict the volume of motorcycle sales in the next period.
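
    The SSA decomposition-reconstruction step mentioned above can be sketched in a few lines: embed the series in a trajectory matrix, decompose it by SVD, and reconstruct from a few leading components by anti-diagonal averaging. The window length and number of components below are illustrative, not the L = 53 and r = 4 used in the record.

        # Bare-bones singular spectrum analysis (SSA) reconstruction.
        import numpy as np

        def ssa_reconstruct(series, window, n_components):
            n = len(series)
            k = n - window + 1
            X = np.column_stack([series[i:i + window] for i in range(k)])  # trajectory matrix
            u, s, vt = np.linalg.svd(X, full_matrices=False)
            X_hat = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
            # Anti-diagonal (Hankel) averaging back to a 1-D series
            recon = np.zeros(n)
            counts = np.zeros(n)
            for col in range(k):
                recon[col:col + window] += X_hat[:, col]
                counts[col:col + window] += 1
            return recon / counts

        t = np.arange(120)
        y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + np.random.default_rng(3).normal(0, 0.3, 120)
        trend_plus_season = ssa_reconstruct(y, window=24, n_components=4)
        print("last reconstructed value:", round(trend_plus_season[-1], 2))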

  9. Differences in vertical jumping and mae-geri kicking velocity between international and national level karateka

    Directory of Open Access Journals (Sweden)

    Carlos Balsalobre-Fernández

    2013-04-01

    Full Text Available Aim: Lower limb explosive strength and mae-geri kicking velocity are fundamental in karate competition, although it is unclear whether these variables can differentiate high-level athletes. The objective of this research is to analyze the differences in mae-geri kicking velocity and the counter-movement jump (CMJ) between a group of international top-level karateka and another group of national-level karateka. Methods: Thirteen international-level karateka and eleven national-level karateka participated in the study. After a standard warm-up, CMJ height (in cm) and mae-geri kicking velocity (in m/s) were measured using an IR platform and a high-speed camera, respectively. Results: Using MANCOVA to analyze the differences between groups while controlling for the effect of age, the results show that the international-level karateka demonstrated significantly higher CMJ than the national-level competitors (+22.1%, F = 9.47, p = 0.006, η2 = 0.311). There were no significant differences between groups in mae-geri kicking velocity (+5.7%, F = 0.80; p = 0.38; η2 = 0.03). Conclusion: Our data show, first, the importance of CMJ assessment as a tool to detect talent in karate and, second, that to achieve international level in karate it may be important to increase CMJ to values similar to those reported here.

  10. African-American Soul Force: Dance, Music and Vera Mae Green.

    Science.gov (United States)

    Bolles, A. Lynn

    1986-01-01

    The Black anthropologist, Vera Mae Green, is featured in this analysis of the concept of soul as applied to African-Americans. Music and dance are used to express soul in cultural context. But soul is also a force, an energy which encompasses the Black experience and makes Black culture persevere. (VM)

  11. Quiebra de Fannie Mae y Freddie Mac desde la experiencia latinoamericana

    Directory of Open Access Journals (Sweden)

    Wesley Marshall

    2008-09-01

    Full Text Available As the financial position of the semi-public banks Fannie Mae and Freddie Mac has deteriorated in recent months, the debate over their current viability and their future in the US financial system has intensified. Although the debate has spanned diverse positions within financial and academic circles in the United States, it has not taken into account the historical experiences of Latin American public banking during financial crises, which appear to share many elements with the dynamics behind the current debate over Fannie Mae and Freddie Mac. In Latin America, such moments have offered the opportunity to transfer financial assets of declining value from the private sector to public banks, and also to drastically reduce the activity of public banking, thereby allowing the expansion of private-sector actors at the expense of public banks. These experiences are particularly relevant to the future of Fannie Mae and Freddie Mac, given that the same groups that managed those crises are currently managing the financial crisis in the United States. As will be argued, the same strategies used in Latin America to minimize the role of the state in the financial sector are currently being employed in the United States.

  12. The impact of a closed‐loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before‐and‐after study

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-01-01

    Objectives To assess the impact of a closed‐loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants Before‐and‐after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention Closed‐loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Results Prescribing errors were identified in 3.8% of 2450 medication orders pre‐intervention and 2.0% of 2353 orders afterwards. Medical staff required 15 s to prescribe a regular inpatient drug pre‐intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre‐intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions A closed‐loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication‐related tasks increased. PMID:17693676

  13. Effect of gases and particulate matter from electricity generation process on the radial growth of teak plantations surrounding Mae Moh power plant, Lampang province

    Directory of Open Access Journals (Sweden)

    Narapong Sangram

    2016-03-01

    Full Text Available The objectives of this study were to investigate radial growth patterns and influences of polluting gases and particulate matter on the radial growth of teak plantations surrounding the Mae Moh Power Plant. Twenty-four 32-year-old teak trees were selected from the Mae Jang and Mae Moh plantations, which were 5 km and 15 km from the Mae Moh power plant, respectively. Forty-eight sample cores were collected from the 24 trees (two cores per tree). The growth patterns of all the cores were analyzed following the standard methods of dendrochronology. The relationships between the growth pattern and the amounts of sulfur dioxide, nitrogen dioxide, carbon monoxide and particulate matter, measured as average daily rates, were then analyzed. The study showed that the best-fit model for the relationship between the radial current annual increment at breast height (CAIdbh) and time (Y) was an exponential equation. The fitted equations were: CAIdbh = 10.657e^(−0.031Y) for the Mae Moh plantation and CAIdbh = 12.518e^(−0.032Y) for the Mae Jang plantation. The coefficients of determination for the fitted equations were 0.410 and 0.423 for the Mae Moh and Mae Jang plantations, respectively. Moreover, carbon monoxide (CO) and sulfur dioxide (SO2) had a statistically significant effect on radial teak growth (RT) in the Mae Jang plantation, with a coefficient of determination of 0.69 (RTmj = 0.571 + 0.429(CO) − 0.023(SO2)).

  14. Association Between Workarounds and Medication Administration Errors in Bar Code-Assisted Medication Administration : Protocol of a Multicenter Study

    NARCIS (Netherlands)

    van der Veen, Willem; van den Bemt, Patricia Mla; Bijlsma, Maarten; de Gier, Han J; Taxis, Katja

    2017-01-01

    BACKGROUND: Information technology-based methods such as bar code-assisted medication administration (BCMA) systems have the potential to reduce medication administration errors (MAEs) in hospitalized patients. In practice, however, systems are often not used as intended, leading to workarounds.

  15. Role of dispersion corrected hybrid GGA class in accurately calculating the bond dissociation energy of carbon halogen bond: A benchmark study

    Science.gov (United States)

    Kosar, Naveen; Mahmood, Tariq; Ayub, Khurshid

    2017-12-01

    A benchmark study has been carried out to find a cost-effective and accurate method for the bond dissociation energy (BDE) of the carbon–halogen (C–X) bond. The BDE of the C–X bond plays a vital role in chemical reactions, particularly for kinetic barriers and thermochemistry. The compounds (1-16, Fig. 1) with C–X bonds used for the current benchmark study are important reactants in organic, inorganic and bioorganic chemistry. Experimental data for C–X bond dissociation energies are compared with theoretical results. Statistical analysis tools such as root mean square deviation (RMSD), standard deviation (SD), Pearson's correlation (R) and mean absolute error (MAE) are used for comparison. Overall, thirty-one density functionals from eight different classes of density functional theory (DFT), along with Pople and Dunning basis sets, are evaluated. Among the different classes of DFT, the dispersion-corrected range-separated hybrid GGA class along with the 6-31G(d), 6-311G(d), aug-cc-pVDZ and aug-cc-pVTZ basis sets performed best for bond dissociation energy calculation of the C–X bond. ωB97XD shows the best performance, with smaller deviations (RMSD, SD), a smaller mean absolute error (MAE) and a significant Pearson's correlation (R) when compared to experimental data. ωB97XD along with the Pople basis set 6-311G(d) has RMSD, SD, R and MAE of 3.14 kcal mol-1, 3.05 kcal mol-1, 0.97 and -1.07 kcal mol-1, respectively.

  16. [Aquatic Ecological Index based on freshwater (ICE(RN-MAE)) for the Rio Negro watershed, Colombia].

    Science.gov (United States)

    Forero, Laura Cristina; Longo, Magnolia; John Jairo, Ramirez; Guillermo, Chalar

    2014-04-01

    Aquatic Ecological Index based on freshwater macroinvertebrates (ICE(RN-MAE)) for the Rio Negro watershed, Colombia. Available indices to assess the ecological status of rivers in Colombia are mostly based on subjective hypotheses about macroinvertebrate tolerance to pollution, which have important limitations. Here we present the application of a method to establish an index of ecological quality for lotic systems in Colombia. The index, based on macroinvertebrate abundance and physicochemical variables, was developed as an alternative to the BMWP-Col index. The method consists of determining an environmental gradient from correlations between physicochemical variables and abundance. The scores obtained at each sampling point are used in a standardized correlation for a weighted averages (WA) model. In the WA model abundances are also weighted to estimate the optimum and tolerance values of each taxon; using this information we estimated the index of ecological quality based on macroinvertebrate abundance (ICE(RN-MAE)) at each sampling site. Subsequently, we classified all sites using the index and concentrations of total phosphorus (TP) in a cluster analysis. Using the mean, maximum, minimum and standard deviation of TP and ICE(RN-MAE), we defined threshold values corresponding to three categories of ecological status: good, fair and critical.

  17. Neural network cloud top pressure and height for MODIS

    Science.gov (United States)

    Håkansson, Nina; Adok, Claudia; Thoss, Anke; Scheirer, Ronald; Hörnquist, Sara

    2018-06-01

    Cloud top height retrieval from imager instruments is important for nowcasting and for satellite climate data records. A neural network approach for cloud top height retrieval from the imager instrument MODIS (Moderate Resolution Imaging Spectroradiometer) is presented. The neural networks are trained using cloud top layer pressure data from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) dataset. Results are compared with two operational reference algorithms for cloud top height: the MODIS Collection 6 Level 2 height product and the cloud top temperature and height algorithm in the 2014 version of the NWC SAF (EUMETSAT (European Organization for the Exploitation of Meteorological Satellites) Satellite Application Facility on Support to Nowcasting and Very Short Range Forecasting) PPS (Polar Platform System). All three techniques are evaluated using both CALIOP and CPR (Cloud Profiling Radar for CloudSat (CLOUD SATellite)) height. Instruments like AVHRR (Advanced Very High Resolution Radiometer) and VIIRS (Visible Infrared Imaging Radiometer Suite) contain fewer channels useful for cloud top height retrievals than MODIS, therefore several different neural networks are investigated to test how infrared channel selection influences retrieval performance. Also a network with only channels available for the AVHRR1 instrument is trained and evaluated. To examine the contribution of different variables, networks with fewer variables are trained. It is shown that variables containing imager information for neighboring pixels are very important. The error distributions of the involved cloud top height algorithms are found to be non-Gaussian. Different descriptive statistic measures are presented and it is exemplified that bias and SD (standard deviation) can be misleading for non-Gaussian distributions. The median and mode are found to better describe the tendency of the error distributions and IQR (interquartile range) and MAE (mean absolute error) are found

  18. Using Gait Dynamics to Estimate Load from a Body-Worn Accelerometer

    Science.gov (United States)

    2016-02-05

    dynamics, ambulation, correlation structure, musculoskeletal injury. Military personnel commonly engage in training and operational... according to their load estimation accuracy, which is defined by the Pearson correlation, r, of its load estimates with the true loads (see Tables... method. Table IV shows the mean absolute error, MAE, and Pearson correlation, r, of the load estimates using estimates from GS alone, PLS alone

  19. Comparison of artificial intelligence methods and empirical equations to estimate daily solar radiation

    Science.gov (United States)

    Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan

    2016-08-01

    In the present research, three artificial intelligence methods, including Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10, 12 and 26 equations were temperature-based, sunshine-based and meteorological parameters-based, respectively) were used to estimate daily solar radiation in Kerman, Iran in the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs in the mentioned intelligent methods. To compare the accuracy of the empirical equations and the intelligent models, root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R2) indices were used. The results showed that, in general, the sunshine-based and meteorological parameters-based scenarios in the ANN and ANFIS models presented higher accuracy than the mentioned empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R2 indices for this model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.

  20. High-resolution spatial databases of monthly climate variables (1961-2010) over a complex terrain region in southwestern China

    Science.gov (United States)

    Wu, Wei; Xu, An-Ding; Liu, Hong-Bin

    2015-01-01

    Climate data in gridded format are critical for understanding climate change and its impact on the eco-environment. The aim of the current study is to develop spatial databases for three climate variables (maximum temperature, minimum temperature, and relative humidity) over a large region with complex topography in southwestern China. Five widely used approaches, including inverse distance weighting, ordinary kriging, universal kriging, co-kriging, and thin-plate smoothing spline, were tested. Root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) showed that thin-plate smoothing spline with latitude, longitude, and elevation outperformed the other models. Average RMSE, MAE, and MAPE of the best models were 1.16 °C, 0.74 °C, and 7.38 % for maximum temperature; 0.826 °C, 0.58 °C, and 6.41 % for minimum temperature; and 3.44, 2.28, and 3.21 % for relative humidity, respectively. Spatial datasets of annual and monthly climate variables with 1-km resolution covering the period 1961-2010 were then obtained using the best performing methods. A comparative study showed that the current outcomes were in good agreement with public datasets. Based on the gridded datasets, changes in temperature variables were investigated across the study area. Future study might be needed to capture the uncertainty induced by environmental conditions through remote sensing and knowledge-based methods.

  1. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations

    Science.gov (United States)

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028
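
    One common way to define the relative mean absolute error (rMAE) used in this record is the MAE of the aggregated-input simulation relative to the mean of the high-resolution reference simulation; the paper's exact definition may differ, and the yields below are invented.

        # Relative MAE of aggregated-input yields against a high-resolution reference.
        import numpy as np

        yield_highres = np.array([7.8, 8.1, 6.9, 7.5, 8.4])   # t/ha, hypothetical reference run
        yield_agg = np.array([8.3, 7.6, 7.4, 7.9, 8.0])       # t/ha, aggregated-input run

        mae = np.mean(np.abs(yield_agg - yield_highres))
        rmae = 100 * mae / np.mean(yield_highres)
        print(f"MAE = {mae:.2f} t/ha, rMAE = {rmae:.1f}%")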

  2. Comparison of INAR(1)-Poisson model and Markov prediction model in forecasting the number of DHF patients in west java Indonesia

    Science.gov (United States)

    Ahdika, Atina; Lusiyana, Novyan

    2017-02-01

    The World Health Organization (WHO) has noted Indonesia as the country with the highest number of dengue hemorrhagic fever (DHF) cases in Southeast Asia. There is no vaccine or specific treatment for DHF. One of the efforts that can be made by both the government and residents is prevention. In statistics, there are several methods to predict the number of DHF cases that can be used as a reference for prevention. In this paper, a discrete time series model, specifically the INAR(1)-Poisson model, and the Markov prediction model are used to predict the number of DHF patients in West Java, Indonesia. The result shows that MPM is the best model since it has the smallest values of MAE (mean absolute error) and MAPE (mean absolute percentage error).
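
    For context, an INAR(1) process with Poisson innovations models each count as a binomial "survival" of the previous count plus new Poisson arrivals; the parameters below are illustrative assumptions, not values fitted to the DHF data of the record.

        # Simulate an INAR(1)-Poisson count series.
        import numpy as np

        rng = np.random.default_rng(4)
        alpha, lam, n = 0.6, 5.0, 60       # thinning parameter, innovation mean, length
        x = np.zeros(n, dtype=int)
        x[0] = rng.poisson(lam / (1 - alpha))            # start near the stationary mean
        for t in range(1, n):
            x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
        print("simulated monthly case counts:", x[:12])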

  3. [Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].

    Science.gov (United States)

    Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang

    2016-07-12

    To explore the effect of the autoregressive integrated moving average-nonlinear autoregressive neural network (ARIMA-NARNN) model on predicting schistosomiasis infection rates in the population. The ARIMA model, the NARNN model and the ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and the NARNN model, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the smallest, with values of 0.0111, 0.0900 and 0.2824, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates in the population, which might have great application value for the prevention and control of schistosomiasis.

  4. Measuring teachers' knowledge of attention deficit hyperactivity disorder: the MAE-TDAH Questionnaire.

    Science.gov (United States)

    Soroa, Marian; Balluerka, Nekane; Gorostiaga, Arantxa

    2014-10-28

    A lack of methodological rigor is frequent in most instruments developed to assess teachers' knowledge of Attention Deficit Hyperactivity Disorder (ADHD). The aim of this study was to develop a questionnaire, the Questionnaire for the Evaluation of Teachers' Knowledge of ADHD (MAE-TDAH), for measuring the level of knowledge about ADHD of infant and primary school teachers. A random sample of 526 teachers from 57 schools in the Autonomous Community of the Basque Country and Navarre was used for the analysis of the psychometric properties of the instrument. The participating teachers' ages ranged between 22 and 65 (M = 42.59; SD = 10.89), and there were both generalist and specialized teachers. The measure showed a four-factor structure (Etiology of ADHD, Symptoms/Diagnosis of ADHD, General information about ADHD and Treatment of ADHD) with adequate internal consistency (Omega values ranged between .83 and .91) and temporal stability indices (Spearman's Rho correlation values ranged between .62 and .79). Furthermore, evidence of convergent and external validity was obtained. Results suggest that the MAE-TDAH is a valid and reliable measure for evaluating teachers' level of knowledge of ADHD.

  5. Increasing the applicability of density functional theory. V. X-ray absorption spectra with ionization potential corrected exchange and correlation potentials

    Energy Technology Data Exchange (ETDEWEB)

    Verma, Prakash; Bartlett, Rodney J., E-mail: bartlett@qtp.ufl.edu [Quantum Theory Project, University of Florida, Gainesville, Florida 32611 (United States)

    2016-07-21

    Core excitation energies are computed with time-dependent density functional theory (TD-DFT) using the ionization energy corrected exchange and correlation potential QTP(0,0). QTP(0,0) provides C, N, and O K-edge spectra to about an electron volt. A mean absolute error (MAE) of 0.77 and a maximum error of 2.6 eV is observed for QTP(0,0) for many small molecules. TD-DFT based on QTP (0,0) is then used to describe the core-excitation spectra of the 22 amino acids. TD-DFT with conventional functionals greatly underestimates core excitation energies, largely due to the significant error in the Kohn-Sham occupied eigenvalues. To the contrary, the ionization energy corrected potential, QTP(0,0), provides excellent approximations (MAE of 0.53 eV) for core ionization energies as eigenvalues of the Kohn-Sham equations. As a consequence, core excitation energies are accurately described with QTP(0,0), as are the core ionization energies important in X-ray photoionization spectra or electron spectroscopy for chemical analysis.

  6. Increasing the applicability of density functional theory. V. X-ray absorption spectra with ionization potential corrected exchange and correlation potentials.

    Science.gov (United States)

    Verma, Prakash; Bartlett, Rodney J

    2016-07-21

    Core excitation energies are computed with time-dependent density functional theory (TD-DFT) using the ionization energy corrected exchange and correlation potential QTP(0,0). QTP(0,0) provides C, N, and O K-edge spectra to about an electron volt. A mean absolute error (MAE) of 0.77 and a maximum error of 2.6 eV is observed for QTP(0,0) for many small molecules. TD-DFT based on QTP (0,0) is then used to describe the core-excitation spectra of the 22 amino acids. TD-DFT with conventional functionals greatly underestimates core excitation energies, largely due to the significant error in the Kohn-Sham occupied eigenvalues. To the contrary, the ionization energy corrected potential, QTP(0,0), provides excellent approximations (MAE of 0.53 eV) for core ionization energies as eigenvalues of the Kohn-Sham equations. As a consequence, core excitation energies are accurately described with QTP(0,0), as are the core ionization energies important in X-ray photoionization spectra or electron spectroscopy for chemical analysis.

  7. Effect of mineral oxides on slag formation tendency of Mae Moh lignites

    Directory of Open Access Journals (Sweden)

    Anuwat Luxsanayotin

    2010-08-01

    Full Text Available Slagging is one of major ash deposition problems experienced in the boilers of coal-fired power plants, especially the plants that use lignite, like Mae Moh lignites. The occurrence of slag is a complex phenomenon depending on several factors such as ash properties, furnace operating conditions, and coal properties. The main objective of this work is to study the effect of mineral components in Mae Moh lignite on ash fusion temperatures (AFTs), which is commonly used as a key indicator for slag formation tendency under pulverized combustion conditions. Two Mae Moh lignites from the coal seams planned to be used in the future were selected for the study to represent low CaO and high CaO lignite. The two lignites, namely K1 and K3, have 3.6 and 40.4 wt% CaO in ash, respectively. The AFT characterization shows that their initial deformation temperatures (ITs) were almost identical and considered as low for the typical flue gas temperature in the radiation section of Mae Moh boilers, i.e. 1050-1100°C. These observed similar ITs were rather unexpected, especially for K1 considering its sufficiently low base to acid (B/A) ratios. The X-ray diffraction analyses evidently show the presence of illite, pyrite and anhydrite in K1, which explains the observed lower IT of the sample. Anhydrite, which is known to lower the ITs, is also the most abundant mineral in K3. Washing the lignite samples with HCl can significantly reduce CaO, MgO, and SO3 content in the ash but not Fe2O3 as it is present in the form of pyrite. The addition of Al2O3, SiO2 and Fe2O3 can help increase AFTs of the studied samples. The Al2O3 addition gives the strongest effect on increasing AFTs, especially for the sample with low Al2O3 content. When the CaO is added to the low CaO samples, the fluxing effect will initially occur. However, when the CaO content is higher than a critical value (i.e. CaO > 38%), the effect of its high melting point will dominate, hence the AFTs increased. Ternary phase diagrams

  8. Modeling rainfall-runoff process using soft computing techniques

    Science.gov (United States)

    Kisi, Ozgur; Shiri, Jalal; Tombul, Mustafa

    2013-02-01

    Rainfall-runoff process was modeled for a small catchment in Turkey, using 4 years (1987-1991) of measurements of independent variables of rainfall and runoff values. The models used in the study were Artificial Neural Networks (ANNs), Adaptive Neuro-Fuzzy Inference System (ANFIS) and Gene Expression Programming (GEP) which are Artificial Intelligence (AI) approaches. The applied models were trained and tested using various combinations of the independent variables. The goodness of fit for the model was evaluated in terms of the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and scatter index (SI). A comparison was also made between these models and traditional Multi Linear Regression (MLR) model. The study provides evidence that GEP (with RMSE=17.82 l/s, MAE=6.61 l/s, CE=0.72 and R2=0.978) is capable of modeling rainfall-runoff process and is a viable alternative to other applied artificial intelligence and MLR time-series methods.
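
    For reference, the goodness-of-fit measures listed above can be computed as in the sketch below, assuming the coefficient of efficiency (CE) is the Nash-Sutcliffe efficiency and the scatter index (SI) is RMSE divided by the observed mean; the runoff values are invented.

        # Goodness-of-fit measures for observed vs simulated runoff (toy data).
        import numpy as np

        q_obs = np.array([12.0, 35.0, 80.0, 55.0, 20.0, 15.0])   # observed runoff (l/s)
        q_sim = np.array([15.0, 30.0, 72.0, 60.0, 24.0, 13.0])   # simulated runoff (l/s)

        err = q_sim - q_obs
        rmse = np.sqrt(np.mean(err ** 2))
        mae = np.mean(np.abs(err))
        ce = 1 - np.sum(err ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)   # Nash-Sutcliffe
        si = rmse / q_obs.mean()
        r2 = np.corrcoef(q_obs, q_sim)[0, 1] ** 2

        print(f"R2={r2:.3f}  RMSE={rmse:.2f}  MAE={mae:.2f}  CE={ce:.3f}  SI={si:.3f}")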

  9. Prediction of Missing Streamflow Data using Principle of Information Entropy

    Directory of Open Access Journals (Sweden)

    Santosa, B.

    2014-01-01

    Full Text Available Incomplete (missing) streamflow data often occur. This can be caused by non-continuous data recording or poor storage. In this study, missing consecutive streamflow data are predicted using the principle of information entropy. Predictions are performed using the complete monthly streamflow information from nearby rivers. Data on average monthly streamflow used as a simulation sample are taken from the observation stations Katulampa, Batubeulah, and Genteng, which are in the upstream Ciliwung-Cisadane river areas. The simulated prediction of missing streamflow data in 2002 and 2003 at Katulampa Station is based on information from Genteng Station and Batubeulah Station. The mean absolute error (MAE) averages obtained were 0.20 and 0.21 in 2002, and the MAE averages in 2003 were 0.12 and 0.16. Based on the error values and the pattern of filled gaps, this method has the potential to be developed further.

  10. Rapid extraction of PCDD/Fs from soil and fly ash samples. Pressurized fluid extraction (PFE) and microwave-assisted extraction (MAE)

    Energy Technology Data Exchange (ETDEWEB)

    Sanz, P.; Fabrellas, B. [Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), Madrid (Spain)

    2004-09-15

    The main reference extraction method in the analysis of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) is still Soxhlet extraction. But it requires long extraction times (up to 24 h), large volumes of hazardous organic solvents (100-300 ml), and its automation is limited. Pressurized Fluid Extraction (PFE) and Microwave-Assisted Extraction (MAE) are two relatively new extraction techniques that reduce the time and the volume of solvent required for extraction. However, very different PFE extraction conditions are found in the literature for the same environmental matrices. MAE is not yet widely applied for the analysis of PCDD/Fs, although it is used for the determination of other organic compounds, such as PCBs and PAHs. In this study, PFE and MAE extraction conditions were optimized to determine PCDDs and PCDFs in fly ash and soil/sediment samples. Conventional Soxhlet extraction with toluene was used to compare the extraction efficiency of both techniques.

  11. Additional Burden of Diseases Associated with Cadmium Exposure: A Case Study of Cadmium Contaminated Rice Fields in Mae Sot District, Tak Province, Thailand

    Directory of Open Access Journals (Sweden)

    Nisarat Songprasert

    2015-08-01

    Full Text Available The cadmium (Cd) contaminated rice fields in Mae Sot District, Tak Province, Thailand have been one of the major environmental problems in Thailand for the last 10 years. We used disability adjusted life years (DALYs) to estimate the burden of disease attributable to Cd in terms of additional DALYs of Mae Sot residents. Cd exposure data included Cd and β2-microglobulin (β2-MG) in urine (as an internal exposure dose) and estimated cadmium daily intake (as an external exposure dose). Compared to the general Thai population, Mae Sot residents gained 10%–86% DALYs from nephrosis/nephritis, heart diseases, osteoporosis and cancer depending on their Cd exposure type and exposure level. The results for urinary Cd and dietary Cd intake varied according to the studies used for risk estimation. The ceiling effect was observed in results using dietary Cd intake because of the high Cd content in rice grown in the Mae Sot area. The results from β2-MG were more robust, with additional DALYs ranging from 36%–86% for heart failure, cerebral infarction, and nephrosis/nephritis. Additional DALYs are a useful approach for assessing the magnitude of environmental Cd exposure. The Mae Sot population lost more healthy life compared to populations living in a non- or less Cd polluted area. This method should be applicable to various types of environmental contamination problems if exposure assessment information is available.

  12. Relaciones entre el salto vertical y la velocidad de mae-geri en karatecas de nivel internacional, especialidad kata

    Directory of Open Access Journals (Sweden)

    Víctor Martínez-Majolero

    2013-12-01

    Full Text Available El presente trabajo persiguió dos objetivos: (1 describir la capacidad de salto vertical y la velocidad y el tiempo de ejecución de la técnica de pierna frontal mae-geri en karatecas de nivel internacional y (2 analizar el grado de covariación entre dichas variables. Los participantes fueron 13 karatecas españoles masculinos de nivel internacional, estilo shito-ryu y especialidad de katas. El estudio siguió un diseño descriptivo y correlacional. Las variables analizadas fueron: salto vertical CMJ, medido con una plataforma de infrarrojos Optojump, y velocidad y tiempo de ejecución de patada mae-geri, medida con una cámara de alta velocidad (Casio EXFC-100. Los datos registrados fueron: altura media de salto de 48,7 ± 0,12 cm; velocidad media de mae-geri de 19,8 ± 1,9 km/h y de 19,6 ± 1,4 km/h, y tiempo de ejecución de dicha patada de 264,85 ± 28,14 ms y de 274,69 ± 18,4 ms, pierna dominante y no dominante respectivamente. Las intensidades de correlación se situaron entre r = 0,72 y r = –0,80. El salto vertical mantuvo una relación alta y estadísticamente significativa con la velocidad y el tiempo de ejecución de la patada mae-geri, técnica de gran importancia en las katas de competición en karate. Esta información puede ser valiosa tanto para planificar el entrenamiento mediante pruebas simples y de bajo coste como para detectar talentos.

  13. Characteristics of Mae Moh lignite: Hardgrove grindability index and approximate work index

    OpenAIRE

    Wutthiphong Tara; Chairoj Rattanakawin

    2012-01-01

    The purpose of this research was to preliminarily study Mae Moh lignite grindability tests, emphasizing Hardgrove grindability and approximate work index determination, respectively. Firstly, the lignite samples were collected, prepared and analyzed for calorific value, total sulfur content, and proximate analysis. After that the Hardgrove grindability test using a ball-race test mill was performed. Knowing the Hardgrove indices, the Bond work indices of some samples were estimated using the A...

  14. A Comparative Study Of Stock Price Forecasting Using Nonlinear Models

    Directory of Open Access Journals (Sweden)

    Diteboho Xaba

    2017-03-01

    Full Text Available This study compared the in-sample forecasting accuracy of three nonlinear forecasting models, namely the Smooth Transition Regression (STR) model, the Threshold Autoregressive (TAR) model and the Markov-switching Autoregressive (MS-AR) model. Nonlinearity tests were used to confirm the validity of the assumptions of the study. The study used the model selection criterion SBC to select the optimal lag order and the appropriate models. The Mean Square Error (MSE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) served as the error measures in evaluating the forecasting ability of the models. The MS-AR models proved to perform well, with lower error measures compared to the LSTR and TAR models in most cases.

  15. An exact algorithm for optimal MAE stack filter design.

    Science.gov (United States)

    Dellamonica, Domingos; Silva, Paulo J S; Humes, Carlos; Hirata, Nina S T; Barrera, Junior

    2007-02-01

    We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network Simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter problem with window size up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem was ever solved exactly.

  16. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations.

    Science.gov (United States)

    Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P; Rötter, Reimund P; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank

    2016-01-01

    We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations.

  17. Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations.

    Directory of Open Access Journals (Sweden)

    Holger Hoffmann

    Full Text Available We show the error in water-limited yields simulated by crop models which is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, varying largely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range or larger than the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations.

  18. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Science.gov (United States)

    Gupta, Rajarshi

    2016-05-01

    Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from PTB Diagnostic ECG data ptbdb, all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
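
    The core of PCA-based beat compression can be sketched as follows: centre the extracted beats, keep a few principal components, and reconstruct from the component scores. The quality-control loop, quantization and Huffman coding of the record are not reproduced, and the beats below are synthetic.

        # PCA compression of ECG beats via SVD: keep k component scores per beat.
        import numpy as np

        rng = np.random.default_rng(2)
        t = np.linspace(0, 1, 200)
        beats = np.array([np.exp(-((t - 0.5) ** 2) / 0.002) + 0.02 * rng.normal(size=t.size)
                          for _ in range(50)])                       # 50 synthetic beats

        mean_beat = beats.mean(axis=0)
        centered = beats - mean_beat
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        k = 3                                                        # retained components
        scores = centered @ vt[:k].T                                 # compressed representation
        reconstructed = scores @ vt[:k] + mean_beat

        prd = 100 * np.linalg.norm(beats - reconstructed) / np.linalg.norm(beats - beats.mean())
        max_abs_err = np.abs(beats - reconstructed).max()
        print(f"PRDN ≈ {prd:.2f}%  max abs error ≈ {max_abs_err:.4f}")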

  19. Comparison of artificial intelligence techniques for prediction of soil temperatures in Turkey

    Science.gov (United States)

    Citakoglu, Hatice

    2017-10-01

    Soil temperature is a meteorological variable directly affecting the formation and development of plants of all kinds. Soil temperatures are usually estimated with various models, including artificial neural networks (ANNs), the adaptive neuro-fuzzy inference system (ANFIS), and multiple linear regression (MLR) models. Soil temperatures along with other climate data are recorded by the Turkish State Meteorological Service (MGM) at specific locations all over Turkey. Soil temperatures are commonly measured at 5-, 10-, 20-, 50-, and 100-cm depths below the soil surface. In this study, monthly soil temperature data measured at 261 stations in Turkey having records of at least 20 years were used to develop the relevant models. Different input combinations were tested in the ANN and ANFIS models to estimate soil temperatures, and the best combination of significant explanatory variables turns out to be monthly minimum and maximum air temperatures, calendar month number, depth of soil, and monthly precipitation. Next, three standard error terms (mean absolute error (MAE, °C), root mean squared error (RMSE, °C), and determination coefficient (R2)) were employed to check the reliability of the test data results obtained through the ANN, ANFIS, and MLR models. ANFIS (RMSE 1.99; MAE 1.09; R2 0.98) is found to outperform both ANN and MLR (RMSE 5.80, 8.89; MAE 1.89, 2.36; R2 0.93, 0.91) in estimating soil temperature in Turkey.

  20. Partial sums of arithmetical functions with absolutely convergent ...

    Indian Academy of Sciences (India)

    For an arithmetical function f with an absolutely convergent Ramanujan expansion, we derive an asymptotic formula for the sum $\sum_{n \le N} f(n)$ with an explicit error term. As a corollary we obtain new results about sum-of-divisors functions and Jordan's totient functions.

  1. Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data

    Science.gov (United States)

    Young, Alistair A.; Li, Xiaosong

    2014-01-01

    Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic events demonstrates their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and the decomposition methods in most cases. PMID:24505382
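
    A minimal sketch of the three evaluation metrics (MAE, MAPE, MSE) applied to a holdout forecasting year is given below. The case counts and model forecasts are invented stand-ins, not the Chinese surveillance data, and MAPE is only defined when no actual value is zero.

```python
import numpy as np

def forecast_scores(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    err = actual - forecast
    return {
        "MAE": np.mean(np.abs(err)),
        "MAPE": 100.0 * np.mean(np.abs(err / actual)),   # undefined if any actual == 0
        "MSE": np.mean(err ** 2),
    }

# illustrative monthly case counts for a 2012 holdout year (not the surveillance data)
actual = np.array([120, 135, 160, 180, 210, 260, 300, 280, 230, 190, 150, 130], dtype=float)
forecasts = {
    "ARIMA": actual * 1.05,                                                   # stand-in forecasts
    "SVM":   actual + np.array([5, -8, 12, -4, 9, -15, 20, -10, 6, -7, 4, -3]),
}
for name, fc in forecasts.items():
    print(name, forecast_scores(actual, fc))
```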

  2. A Hierarchical Approach Using Machine Learning Methods in Solar Photovoltaic Energy Production Forecasting

    Directory of Open Access Journals (Sweden)

    Zhaoxuan Li

    2016-01-01

    We evaluate and compare two common methods, artificial neural networks (ANN) and support vector regression (SVR), for predicting energy production from a solar photovoltaic (PV) system in Florida 15 min, 1 h and 24 h ahead of time. A hierarchical approach is proposed based on the machine learning algorithms tested. The production data used in this work correspond to 15 min averaged power measurements collected during 2014. The accuracy of the models is determined by computing error statistics such as mean bias error (MBE), mean absolute error (MAE), root mean square error (RMSE), relative MBE (rMBE), mean percentage error (MPE) and relative RMSE (rRMSE). This work provides findings on how forecasts from individual inverters can improve the total solar power generation forecast of the PV system.
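
    The error statistics listed above can be assembled as follows. This is a sketch under the assumption that the relative measures (rMBE, rRMSE) are normalized by the mean measured production, which the abstract does not state explicitly; the power values are illustrative, not the Florida PV data.

```python
import numpy as np

def pv_error_stats(measured, predicted):
    m, p = np.asarray(measured, float), np.asarray(predicted, float)
    err = p - m                      # forecast minus measurement
    mean_m = m.mean()
    mbe  = err.mean()
    mae  = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    return {
        "MBE": mbe, "MAE": mae, "RMSE": rmse,
        "rMBE (%)": 100 * mbe / mean_m,          # normalization by mean production assumed
        "MPE (%)":  100 * np.mean(err / m),      # requires measured > 0
        "rRMSE (%)": 100 * rmse / mean_m,
    }

# illustrative 15-min PV power averages in kW (not the study's data set)
measured  = np.array([0.8, 1.4, 2.2, 2.9, 3.1, 2.7, 1.9, 1.1])
predicted = np.array([0.9, 1.3, 2.4, 2.7, 3.3, 2.5, 2.0, 1.0])
print(pv_error_stats(measured, predicted))
```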

  3. Optimization of microwave assisted extraction (MAE) and soxhlet extraction of phenolic compound from licorice root.

    Science.gov (United States)

    Karami, Zohreh; Emam-Djomeh, Zahra; Mirzaee, Habib Allah; Khomeiri, Morteza; Mahoonak, Alireza Sadeghi; Aydani, Emad

    2015-06-01

    In the present study, response surface methodology was used to optimize the extraction conditions of phenolic compounds from licorice root by microwave application. The investigated factors were solvent (ethanol 80 %, methanol 80 % and water), liquid/solid ratio (10:1-25:1) and time (2-6 min). Experiments were designed according to a central composite rotatable design. The results showed that the extraction conditions had a significant effect on the extraction yield of phenolic compounds and antioxidant capacities. The optimal conditions for the microwave-assisted method were ethanol 80 % as solvent, an extraction time of 5-6 min and a liquid/solid ratio of 12.7:1. The results were compared with those obtained by Soxhlet extraction, for which the optimum condition was an extraction time of 6 h with ethanol 80 % as solvent. The phenolic compound content and extraction yield of licorice root were 47.47 mg/g and 16.38 % for microwave-assisted extraction (MAE) and 41.709 mg/g and 14.49 % for Soxhlet extraction, respectively. These results imply that MAE is a more efficient extraction method than Soxhlet extraction.

  4. Quantifying Stream Flux of Carbon Nitrogen and Phosphorus in a Tropical Watershed, Mae Sa Thailand

    Science.gov (United States)

    Benner, S. G.; Ziegler, A. D.; Tantasirin, C.; Lu, X.; Giambelluca, T.

    2006-12-01

    Nutrient loading to rivers and, ultimately, oceans is of global concern with implications for both immediate aquatic health and as a longer term feedback system for global elemental cycling and climate change. Southeast Asia has been identified as a global hotspot for high yields of C, N, and P. We present initial results from a nutrient flux study from the Mae Sa watershed in northern Thailand. The Mae Sa is a 74 ha basin at the headwaters of the Chao Praya River and is characterized by diverse land use extending from forest reserves to intensive agriculture. The watershed receives the majority of its annual 1300-2000 mm of precipitation during the monsoonal season with runoff characterized by high intensity, short-duration flooding, typically produced by localized, sub-watershed, precipitation events. Dissolved (1 mm) fractions were sampled across a series of storm hydrograph events and each fraction was analyzed for total organic C, N, and P contents. Initial results indicate that a significant fraction of the nutrient flux during these events is found in the >1 mm size fraction.

  5. In vitro antibacterial activity of a novel resin-based pulp capping material containing the quaternary ammonium salt MAE-DB and Portland cement.

    Science.gov (United States)

    Yang, Yanwei; Huang, Li; Dong, Yan; Zhang, Hongchen; Zhou, Wei; Ban, Jinghao; Wei, Jingjing; Liu, Yan; Gao, Jing; Chen, Jihua

    2014-01-01

    Vital pulp preservation in the treatment of deep caries is challenging due to bacterial infection. The objectives of this study were to synthesize a novel, light-cured composite material containing bioactive calcium-silicate (Portland cement, PC) and the antimicrobial quaternary ammonium salt monomer 2-methacryloxylethyl dodecyl methyl ammonium bromide (MAE-DB) and to evaluate its effects on Streptococcus mutans growth in vitro. The experimental material was prepared from a 2 : 1 ratio of PC mixed with a resin of 2-hydroxyethylmethacrylate, bisphenol glycerolate dimethacrylate, and triethylene glycol dimethacrylate (4 : 3 : 1) containing 5 wt% MAE-DB. Cured resin containing 5% MAE-DB without PC served as the positive control material, and resin without MAE-DB or PC served as the negative control material. Mineral trioxide aggregate (MTA) and calcium hydroxide (Dycal) served as commercial controls. S. mutans biofilm formation on material surfaces and growth in the culture medium were tested according to colony-forming units (CFUs) and metabolic activity after 24 h incubation over freshly prepared samples or samples aged in water for 6 months. Biofilm formation was also assessed by Live/Dead staining and scanning electron microscopy. S. mutans biofilm formation on the experimental material was significantly inhibited, with CFU counts, metabolic activity, viability staining, and morphology similar to those of biofilms on the positive control material. None of the materials affected bacterial growth in solution. Contact-inhibition of biofilm formation was retained by the aged experimental material. Significant biofilm formation was observed on MTA and Dycal. The synthesized material containing HEMA-BisGMA-TEGDMA resin with MAE-DB as the antimicrobial agent and PC to support mineralized tissue formation inhibited S. mutans biofilm formation even after aging in water for 6 months, but had no inhibitory effect on bacteria in solution. Therefore, this material shows

  6. Characteristics of Mae Moh lignite: Hardgrove grindability index and approximate work index

    Directory of Open Access Journals (Sweden)

    Wutthiphong Tara

    2012-02-01

    The purpose of this research was to preliminarily study Mae Moh lignite grindability, emphasizing Hardgrove grindability and approximate work index determination, respectively. First, the lignite samples were collected, prepared and analyzed for calorific value, total sulfur content, and proximate analysis. After that, the Hardgrove grindability test using a ball-race test mill was performed. Knowing the Hardgrove indices, the Bond work indices of some samples were estimated using Aplan's formula. The approximate work indices were determined by running a batch dry-grinding test using a laboratory ball mill. Finally, the work indices obtained from both methods were compared. It was found that all samples could be ranked as lignite B, using the heating value as the criterion, if the content of mineral matter is neglected. Similarly, all samples can be classified as lignite, with Hardgrove grindability indices ranging from about 40 to 50. However, there is a significant difference between the work indices derived from the Hardgrove and the simplified Bond grindability tests. This may be due to differences in the variability of lignite properties and in the test procedures. To obtain more accurate values of the lignite work index, the time-consuming Bond procedure should be performed with a number of corrections for different milling conditions. With the Hardgrove grindability indices and the work indices calculated from Aplan's formula, the capacity of the roller-race pulverizer and the grindability of the Mae Moh lignite should be investigated in further detail.

  7. Generalized regression neural network (GRNN)-based approach for colored dissolved organic matter (CDOM) retrieval: case study of Connecticut River at Middle Haddam Station, USA.

    Science.gov (United States)

    Heddam, Salim

    2014-11-01

    The prediction of colored dissolved organic matter (CDOM) using artificial neural network approaches has received little attention in the past few decades. In this study, CDOM was modeled using generalized regression neural network (GRNN) and multiple linear regression (MLR) models as a function of water temperature (TE), pH, specific conductance (SC), and turbidity (TU). Evaluation of the prediction accuracy of the models is based on the root mean square error (RMSE), mean absolute error (MAE), coefficient of correlation (CC), and Willmott's index of agreement (d). The results indicated that GRNN can be applied successfully for the prediction of colored dissolved organic matter (CDOM).
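
    Willmott's index of agreement (d) is the least standard of the four criteria, so a hedged sketch of its usual definition, d = 1 − Σ(P−O)² / Σ(|P−Ō| + |O−Ō|)², is given below together with the correlation coefficient. The CDOM values are illustrative, not the Middle Haddam record.

```python
import numpy as np

def willmott_d(obs, pred):
    """Willmott's index of agreement, d in [0, 1] (1 = perfect agreement)."""
    o, p = np.asarray(obs, float), np.asarray(pred, float)
    num = np.sum((p - o) ** 2)
    den = np.sum((np.abs(p - o.mean()) + np.abs(o - o.mean())) ** 2)
    return 1.0 - num / den

def correlation(obs, pred):
    """Pearson coefficient of correlation (CC)."""
    return np.corrcoef(obs, pred)[0, 1]

# illustrative CDOM values (arbitrary units), not the Connecticut River record
obs  = np.array([3.1, 2.8, 4.0, 5.2, 4.6, 3.9, 2.5])
pred = np.array([3.0, 3.0, 3.8, 5.0, 4.9, 3.7, 2.8])
print("d =", round(willmott_d(obs, pred), 3), " CC =", round(correlation(obs, pred), 3))
```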

  8. Intranasal Pharmacokinetic Data for Triptans Such as Sumatriptan and Zolmitriptan Can Render Area Under the Curve (AUC) Predictions for the Oral Route: Strategy Development and Application

    DEFF Research Database (Denmark)

    Srinivas, Nuggehally R.; Syed, Muzeeb

    2016-01-01

    Limited pharmacokinetic sampling strategy may be useful for predicting the area under the curve (AUC) for triptans and may have clinical utility as a prospective tool for prediction. Using appropriate intranasal pharmacokinetic data, a Cmax vs. AUC relationship was established by linear regression models for sumatriptan and zolmitriptan. The predictions of the AUC values were performed using published mean/median Cmax data and appropriate regression lines. The quotient of observed and predicted values rendered fold-difference calculation. The mean absolute error (MAE), mean positive error (MPE), mean negative error (MNE), root mean square error (RMSE), correlation coefficient (r), and the goodness of the AUC fold prediction were used to evaluate the two triptans. Also, data from the mean concentration profiles at time points of 1 hour (sumatriptan) and 3 hours (zolmitriptan) were used…
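
    The workflow described (fit a Cmax-to-AUC regression line, predict AUC from published Cmax values, then express the observed/predicted quotient as a fold difference) can be sketched as below. All numbers are hypothetical placeholders, not pharmacokinetic data from the cited studies.

```python
import numpy as np

# hypothetical calibration pairs (Cmax, AUC) from intranasal PK studies; values are illustrative only
cmax_cal = np.array([10.0, 14.5, 20.3, 25.8, 31.2])      # e.g. ng/mL
auc_cal  = np.array([55.0, 80.1, 118.4, 150.9, 182.7])    # e.g. ng*h/mL

slope, intercept = np.polyfit(cmax_cal, auc_cal, 1)       # Cmax vs AUC regression line

def predict_auc(cmax):
    return slope * cmax + intercept

# evaluate predictions against observed AUCs from independent (here invented) data points
cmax_new = np.array([12.0, 28.0])
auc_obs  = np.array([70.0, 160.0])
auc_pred = predict_auc(cmax_new)

fold_difference = auc_obs / auc_pred                      # quotient of observed and predicted AUC
mae  = np.mean(np.abs(auc_obs - auc_pred))
rmse = np.sqrt(np.mean((auc_obs - auc_pred) ** 2))
print("fold differences:", np.round(fold_difference, 2), " MAE:", round(mae, 2), " RMSE:", round(rmse, 2))
```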

  9. Improved modification for the density-functional theory calculation of thermodynamic properties for C-H-O composite compounds.

    Science.gov (United States)

    Liu, Min Hsien; Chen, Cheng; Hong, Yaw Shun

    2005-02-08

    A three-parametric modification equation and the least-squares approach are adopted to calibrate hybrid density-functional theory energies of C1–C10 straight-chain aldehydes, alcohols, and alkoxides to accurate enthalpies of formation ΔHf and Gibbs free energies of formation ΔGf, respectively. All calculated energies of the C-H-O composite compounds were obtained from B3LYP/6-311++G(3df,2pd) single-point energies and the related thermal corrections of B3LYP/6-31G(d,p) optimized geometries. This investigation revealed that all compounds had a 0.05% average absolute relative error (ARE) for the atomization energies, with a mean absolute error (MAE) of just 2.1 kJ/mol (0.5 kcal/mol) for the ΔHf and 2.4 kJ/mol (0.6 kcal/mol) for the ΔGf of formation.

  10. Analysis of dextromethorphan and dextrorphan in decomposed skeletal tissues by microwave assisted extraction, microplate solid-phase extraction and gas chromatography- mass spectrometry (MAE-MPSPE-GCMS).

    Science.gov (United States)

    Fraser, Candice D; Cornthwaite, Heather M; Watterson, James H

    2015-08-01

    Analysis of decomposed skeletal tissues for dextromethorphan (DXM) and dextrorphan (DXT) using microwave assisted extraction (MAE), microplate solid-phase extraction (MPSPE) and gas chromatography-mass spectrometry (GC-MS) is described. Rats (n = 3) received 100 mg/kg DXM (i.p.) and were euthanized by CO2 asphyxiation roughly 20 min post-dose. Remains decomposed to skeleton outdoors and vertebral bones were recovered, cleaned, and pulverized. Pulverized bone underwent MAE using methanol as an extraction solvent in a closed microwave system, followed by MPSPE and GC-MS. Analyte stability under MAE conditions was assessed and found to be stable for at least 60 min irradiation time. The majority (>90%) of each analyte was recovered after 15 min. The MPSPE-GCMS method was fit to a quadratic response (R² > 0.99) over the concentration range 10-10 000 ng·mL⁻¹, with coefficients of variation <20% in triplicate analysis. The MPSPE-GCMS method displayed a limit of detection of 10 ng·mL⁻¹ for both analytes. Following MAE for 60 min (80 °C, 1200 W), MPSPE-GCMS analysis of vertebral bone of DXM-exposed rats detected both analytes in all samples (DXM: 0.9-1.5 µg·g⁻¹; DXT: 0.5-1.8 µg·g⁻¹). Copyright © 2014 John Wiley & Sons, Ltd.

  11. Modeling solar radiation of Mediterranean region in Turkey by using fuzzy genetic approach

    International Nuclear Information System (INIS)

    Kisi, Ozgur

    2014-01-01

    The study investigates the ability of the FG (fuzzy genetic) approach in modeling solar radiation of seven cities from the Mediterranean region of Anatolia, Turkey. Latitude, longitude, altitude and month-of-the-year data from the Adana, K. Maras, Mersin, Antalya, Isparta, Burdur and Antakya cities are used as inputs to the FG model to estimate one-month-ahead solar radiation. The FG model is compared with ANN (artificial neural network) and ANFIS (adaptive neuro-fuzzy inference system) models with respect to RMSE (root mean square error), MAE (mean absolute error) and determination coefficient (R²) statistics. Comparison results indicate that the FG model performs better than the ANN and ANFIS models. It is found that the FG model can be successfully used for estimating solar radiation by using latitude, longitude, altitude and month-of-the-year information. The FG model, with RMSE = 6.29 MJ/m², MAE = 4.69 MJ/m² and R² = 0.905 in the test stage, was found to be superior to the optimal ANN model with RMSE = 7.17 MJ/m², MAE = 5.29 MJ/m² and R² = 0.876 and the ANFIS model with RMSE = 6.75 MJ/m², MAE = 5.10 MJ/m² and R² = 0.892 in estimating solar radiation. - Highlights: • SR (solar radiation) of seven cities from the Mediterranean region of Turkey is predicted. • FG (fuzzy genetic) models are developed for accurate estimation of SR. • The ability of the FG models used in the study is found to be satisfactory. • FG models are compared with commonly used ANNs (artificial neural networks). • FG models are found to perform better than the ANN models

  12. Womanism and Black Feminism in the Work of Carrie Mae Weems

    Directory of Open Access Journals (Sweden)

    Christiane Stephens

    2016-04-01

    This article examines the liberatory aspects of Womanism and Black Feminism in the work of artist Carrie Mae Weems. Weems, artist and anthropologist, creates artwork that highlights issues of oppression and gives voice to worldwide issues. Under the theoretical lens of Womanism, the article utilizes Arts-Based Educational Research (ABER), a non-traditional methodology, which aligns with Womanism to provide insight into past and present issues of liberation and equity. Womanism, Black women's feminism, and ABER have the potential to bring issues of equity and social justice out of the academies and into the everyday world for those most in need of liberation.

  13. Comparison of the WSA-ENLIL model with three CME cone types

    Science.gov (United States)

    Jang, Soojeong; Moon, Y.; Na, H.

    2013-07-01

    We have made a comparison of the CME-associated shock propagation based on the WSA-ENLIL model with three cone types using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters as well as their associated interplanetary (IP) shocks. For this study we consider three different cone types (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine 3-D CME parameters (radial velocity, angular width and source location), which are the input values of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the asymmetric cone model is 10.6 hours, which is about 1 hour smaller than those of the other models. Their ensemble average of MAE is 9.5 hours. However, this value is still larger than that (8.7 hours) of the empirical model of Kim et al. (2007). We will compare their IP shock velocities and densities with those from ACE in-situ measurements and discuss them in terms of the prediction of geomagnetic storms.

  14. Analysis of corrosion products in some metallic statuettes of the Museum of Archaeology and Ethnology (MAE-USP)

    International Nuclear Information System (INIS)

    Rizzutto, Marcia A.; Tabacniks, Manfredo H.; Added, Nemitala; Barbosa, Marcel D.L.; Lima, Silvia Cunha; Melo, Hercilio G.; Neiva, Augusto C.

    2005-01-01

    The recent acquisition of a sealed chamber with controlled humidity by the Museum of Archaeology and Ethnology of the University of Sao Paulo (MAE-USP) requires new methods for conservation and restoration of metallic objects in its collection. To establish new procedures for the identification of corrosion mechanisms and agents in the exhibition environment, and to set up new standards for conservation of the museum's collection, Proton Induced X-Ray Emission (PIXE) elementary analysis of some metallic objects is in progress, using the external beam facility at LAMFI. The first analysis involved metallic objects from the collection of MAE, two African statuettes 'male Edans' from the Ogboni Secret Society, of the Ilobu-Ioruba ethnic group, one pectoral adornment from the Chimu culture, Peru and one anthropomorphic pendant from the Tairona culture, Colombia. The in air non destructive PIXE analysis allowed identifying major and some secondary components in the alloys and in the corrosion products on the samples, data that were used to identify the corrosion sources and to set up the exhibition environment. (author)

  15. An Artificial Neural Networks Approach to Estimate Occupational Accident: A National Perspective for Turkey

    Directory of Open Access Journals (Sweden)

    Hüseyin Ceylan

    2014-01-01

    Occupational accident estimation models were developed using artificial neural networks (ANNs) for Turkey. Using these models, the number of occupational accidents and the numbers of deaths and permanent incapacities resulting from occupational accidents were estimated for Turkey up to the year 2025 under three different scenarios. In the development of the models, insured workers, workplaces, occupational accidents, deaths, and permanent incapacities were used as model parameters, with data between 1970 and 2012. A 2-5-1 neural network architecture was selected as the best network architecture. A sigmoid function was used in the hidden layer and a linear function at the output layer. The feed-forward back-propagation algorithm was used to train the network. In order to obtain a useful model, the network was trained on data from 1970 to 1999 to estimate the values from 2000 to 2012. The results were compared with the real values and found applicable for this aim. The performances of all developed models were evaluated using mean absolute percentage error (MAPE), mean absolute error (MAE), and root mean square error (RMSE).

  16. Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques

    Science.gov (United States)

    Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi

    2017-08-01

    The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), the crude oil price and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and direction accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results compared to the multi-layer perceptron and Holt-Winters exponential smoothing methods.

  17. Hybrid empirical mode decomposition- ARIMA for forecasting exchange rates

    Science.gov (United States)

    Abadan, Siti Sarah; Shabri, Ani; Ismail, Shuhaida

    2015-02-01

    This paper studied the forecasting of monthly Malaysian Ringgit (MYR)/United States Dollar (USD) exchange rates using a hybrid of two methods: empirical mode decomposition (EMD) and the autoregressive integrated moving average (ARIMA). The MYR was pegged to the USD during the Asian financial crisis, so the exchange rate was fixed at 3.800 from 2 September 1998 until 21 July 2005. Thus, the data chosen in this paper are the post-July 2005 data, from August 2005 to July 2010. The comparative study using root mean square error (RMSE) and mean absolute error (MAE) showed that the EMD-ARIMA outperformed the single ARIMA and the random walk benchmark model.

  18. Predictive ability of machine learning methods for massive crop yield prediction

    Directory of Open Access Journals (Sweden)

    Alberto Gonzalez-Sanchez

    2014-04-01

    An important issue for agricultural planning purposes is accurate yield estimation for the numerous crops involved in the planning. Machine learning (ML) is an essential approach for achieving practical and effective solutions for this problem. Many comparisons of ML methods for yield prediction have been made, seeking the most accurate technique. Generally, the number of evaluated crops and techniques is too low and does not provide enough information for agricultural planning purposes. This paper compares the predictive accuracy of ML and linear regression techniques for crop yield prediction in ten crop datasets. Multiple linear regression, M5-Prime regression trees, perceptron multilayer neural networks, support vector regression and k-nearest neighbor methods were ranked. Four accuracy metrics were used to validate the models: the root mean square error (RMSE), root relative square error (RRSE), normalized mean absolute error (MAE), and correlation factor (R). Real data from an irrigation zone of Mexico were used for building the models. Models were tested with samples of two consecutive years. The results show that the M5-Prime and k-nearest neighbor techniques obtain the lowest average RMSE errors (5.14 and 4.91), the lowest RRSE errors (79.46% and 79.78%), the lowest average MAE errors (18.12% and 19.42%), and the highest average correlation factors (0.41 and 0.42). Since M5-Prime achieves the largest number of crop yield models with the lowest errors, it is a very suitable tool for massive crop yield prediction in agricultural planning.
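
    Of the four metrics used in this ranking, RRSE is the least common; a hedged sketch of it and of a mean-normalized MAE (the abstract says "normalized" without giving the normalization, so the mean observed yield is assumed here) follows. The yields are illustrative, not the Mexican irrigation-zone data.

```python
import numpy as np

def rrse(obs, pred):
    """Root relative squared error: error relative to always predicting the observed mean."""
    o, p = np.asarray(obs, float), np.asarray(pred, float)
    return np.sqrt(np.sum((p - o) ** 2) / np.sum((o - o.mean()) ** 2))

def relative_mae(obs, pred):
    """MAE expressed as a percentage of the mean observed yield (assumed normalization)."""
    o, p = np.asarray(obs, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs(p - o)) / o.mean()

# illustrative crop yields (t/ha) for a held-out year; not the study's data
obs  = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5])
pred = np.array([4.0, 5.4, 4.1, 5.6, 5.2, 5.3])
print("RRSE = %.2f%%" % (100 * rrse(obs, pred)),
      " rel. MAE = %.2f%%" % relative_mae(obs, pred),
      " R = %.2f" % np.corrcoef(obs, pred)[0, 1])
```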

  19. Comparison of Gene Expression Programming with neuro-fuzzy and neural network computing techniques in estimating daily incoming solar radiation in the Basque Country (Northern Spain)

    International Nuclear Information System (INIS)

    Landeras, Gorka; López, José Javier; Kisi, Ozgur; Shiri, Jalal

    2012-01-01

    Highlights: ► Solar radiation estimation based on Gene Expression Programming is unexplored. ► This approach is evaluated for the first time in this study. ► Other artificial intelligence models (ANN and ANFIS) are also included in the study. ► New alternatives for solar radiation estimation based on temperatures are provided. - Abstract: Surface incoming solar radiation is a key variable for many agricultural, meteorological and solar energy conversion related applications. In the absence of the required meteorological sensors for the detection of global solar radiation it is necessary to estimate this variable. Temperature-based modeling procedures are reported in this study for estimating daily incoming solar radiation by using Gene Expression Programming (GEP) for the first time, and other artificial intelligence models such as Artificial Neural Networks (ANNs) and the Adaptive Neuro-Fuzzy Inference System (ANFIS). A comparison was also made among these techniques and traditional temperature-based global solar radiation estimation equations. Root mean square error (RMSE), mean absolute error (MAE), RMSE-based skill score (SS_RMSE), MAE-based skill score (SS_MAE) and the r² Nash–Sutcliffe criterion were used to assess the models' performances. An ANN (a four-input multilayer perceptron with 10 neurons in the hidden layer) presented the best performance among the studied models (2.93 MJ m⁻² d⁻¹ of RMSE). The ability of the GEP approach to model global solar radiation based on daily atmospheric variables was found to be satisfactory.

  20. INTERACTION’S EFFECT OF ORGANIC MATERIAL AND AGGREGATION ON EXTRACTION EFFICIENCY OF TPHS FROM PETROLEUM CONTAMINATED SOILS WITH MAE

    Directory of Open Access Journals (Sweden)

    H. Ganjidoust and Gh. Naghizadeh

    2005-10-01

    Microwave-Assisted Extraction (MAE) is a type of low-temperature thermal desorption process whose numerous advantages have led to its widespread use. Microwave heating is a potentially attractive technique as it provides a volumetric heating process to improve heating efficiency compared with conventional techniques. The ability to rapidly heat the sample-solvent mixture is inherent to MAE and is the main advantage of this technique. Presently, MAE has been shown to be one of the best technologies for removing environmental pollutants, especially PAHs, phenols and PCBs, from soils and sediments. Five different mixtures and types of aggregation (sand, top soil, kaolinite) and three concentrations of crude oil as a contaminant (1000, 5000 and 10000 mg/L) were considered. The results indicated that, regardless of aggregation, the presence of a humus component in soil reduces the efficiency. Minimum and maximum efficiencies were obtained for sandy soil (containing organic components) and kaolinite (without any organic content), respectively. According to the results of this research, when some amount of humus and organic material is present in the matrix, the extraction efficiency behaves as a function of the humus material alone rather than of aggregation. Increasing the concentration of crude oil reduced the efficiency, with a steeper decline at higher concentrations (5000-10000 mg/L) and a gentler decline at lower concentrations (1000-5000 mg/L). The concentration of the contaminant acts as an independent factor together with the extraction time and aggregation factors. An extraction period of 10 min can be suggested as the optimum extraction time in FMAE for PAH-contaminated soils.

  1. Definition of correcting factors for absolute radon content measurement formula

    International Nuclear Information System (INIS)

    Ji Changsong; Xiao Ziyun; Yang Jianfeng

    1992-01-01

    The absolute method of radon content measurement is based on the Thomas radon measurement formula. Experiments showed that a systematic error existed in radon content measurements made by means of the Thomas formula. Through analysis of the behaviour of radon daughters, five factors, including filter efficiency, detector construction factor, self-absorbance, energy spectrum factor, and gravity factor, were introduced into the Thomas formula, so that the systematic error was eliminated. The measuring methods for the five factors are given.

  2. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.

  3. Predicting online ratings based on the opinion spreading process

    Science.gov (United States)

    He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo

    2015-10-01

    Predicting users' online ratings is always a challenging issue and has drawn much attention. In this paper, we present a rating prediction method that combines the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method could produce a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both the opinion sender and receiver. The numerical results for the Movielens and Netflix data sets show that this algorithm has better accuracy than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method could further boost the prediction accuracy when using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as measurements. In the optimal cases, on the Movielens and Netflix data sets, the corresponding algorithmic accuracies (MAE and RMSE) are improved by 11.26% and 8.84%, and by 13.49% and 10.52%, respectively, compared to the item average method.
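
    The opinion-spreading similarity and the parameter λ are specific to this paper and not fully specified in the abstract, so the sketch below implements only the standard user-based collaborative filtering baseline that the method is compared against, on a toy rating matrix; MAE and RMSE would then be computed over held-out ratings.

```python
import numpy as np

# toy user-item rating matrix (0 = unrated); not MovieLens/Netflix data
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

def cosine_similarity(u, v):
    mask = (u > 0) & (v > 0)                 # use co-rated items only
    if not mask.any():
        return 0.0
    return float(u[mask] @ v[mask] / (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask])))

def predict(R, user, item):
    """Standard user-based CF prediction (the baseline the paper compares against)."""
    num = den = 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        s = cosine_similarity(R[user], R[other])
        num += s * R[other, item]
        den += abs(s)
    return num / den if den else R[R > 0].mean()

print("predicted rating:", round(predict(R, user=0, item=2), 2))
# with held-out true ratings r_true: MAE = mean(|r_true - r_pred|), RMSE = sqrt(mean((r_true - r_pred)**2))
```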

  4. Spatially explicit estimation of aboveground boreal forest biomass in the Yukon River Basin, Alaska

    Science.gov (United States)

    Ji, Lei; Wylie, Bruce K.; Brown, Dana R. N.; Peterson, Birgit E.; Alexander, Heather D.; Mack, Michelle C.; Rover, Jennifer R.; Waldrop, Mark P.; McFarland, Jack W.; Chen, Xuexia; Pastick, Neal J.

    2015-01-01

    Quantification of aboveground biomass (AGB) in Alaska's boreal forest is essential to the accurate evaluation of terrestrial carbon stocks and dynamics in northern high-latitude ecosystems. Our goal was to map AGB at 30 m resolution for the boreal forest in the Yukon River Basin of Alaska using Landsat data and ground measurements. We acquired Landsat images to generate a 3-year (2008–2010) composite of top-of-atmosphere reflectance for six bands as well as the brightness temperature (BT). We constructed a multiple regression model using field-observed AGB and Landsat-derived reflectance, BT, and vegetation indices. A basin-wide boreal forest AGB map at 30 m resolution was generated by applying the regression model to the Landsat composite. The fivefold cross-validation with field measurements had a mean absolute error (MAE) of 25.7 Mg ha⁻¹ (relative MAE 47.5%) and a mean bias error (MBE) of 4.3 Mg ha⁻¹ (relative MBE 7.9%). The boreal forest AGB product was compared with lidar-based vegetation height data; the comparison indicated that there was a significant correlation between the two data sets.
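
    The fivefold cross-validated MAE/MBE reported above can be reproduced in outline with scikit-learn, as in the hedged sketch below. The predictors and AGB values are synthetic stand-ins for the Landsat bands, brightness temperature and field plots.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(42)
# synthetic stand-ins: 6 reflectance bands + brightness temperature vs. field AGB (Mg/ha)
X = rng.uniform(0, 1, size=(120, 7))
y = 20 + 80 * X[:, 3] - 30 * X[:, 6] + rng.normal(0, 10, 120)

maes, mbes = [], []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train], y[train])
    err = model.predict(X[test]) - y[test]
    maes.append(np.mean(np.abs(err)))   # fold MAE
    mbes.append(np.mean(err))           # fold mean bias error

mae, mbe = np.mean(maes), np.mean(mbes)
print(f"MAE = {mae:.1f} Mg/ha ({100 * mae / y.mean():.1f}% relative)")
print(f"MBE = {mbe:.1f} Mg/ha ({100 * mbe / y.mean():.1f}% relative)")
```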

  5. Absolute GPS Positioning Using Genetic Algorithms

    Science.gov (United States)

    Ramillien, G.

    A new inverse approach for restoring the absolute coordinates of a ground-based station from three or four observed GPS pseudo-ranges is proposed. This stochastic method is based on simulations of natural evolution named genetic algorithms (GA). These iterative procedures provide fairly good and robust estimates of the absolute positions in the Earth's geocentric reference system. For comparison/validation, GA results are compared to the ones obtained using the classical linearized least-squares scheme for the determination of the XYZ location proposed by Bancroft (1985), which is strongly limited by the number of available observations (i.e. here, the number of input pseudo-ranges must be four). The r.m.s. accuracy of the non-linear cost function reached by this latter method is typically ~10⁻⁴ m², corresponding to ~300-500-m accuracies for each geocentric coordinate. However, GA can provide more acceptable solutions (r.m.s. errors < 10⁻⁵ m²), even when only three instantaneous pseudo-ranges are used, such as after a loss of lock during a GPS survey. Tuned GA parameters used in different simulations are N=1000 starting individuals, as well as Pc=60-70% and Pm=30-40% for the crossover probability and mutation rate, respectively. Statistical tests on the ability of GA to recover acceptable coordinates in the presence of significant levels of noise are made by simulating nearly 3000 random samples of erroneous pseudo-ranges. Here, two main sources of measurement errors are considered in the inversion: (1) typical satellite-clock errors and/or 300-metre variance atmospheric delays, and (2) Geometrical Dilution of Precision (GDOP) due to the particular GPS satellite configuration at the time of acquisition. Extracting valuable information even from low-quality starting range observations, GA offer an interesting alternative for high-precision GPS positioning.

  6. Bone mineral density at distal forearm in men over 40 years of age in Mae Chaem district, Chiang Mai Province, Thailand: a pilot study.

    Science.gov (United States)

    Tungjai, Montree; Kaewjaeng, Siriprapa; Jumpee, Chayanit; Sriburee, Sompong; Hongsriti, Pongsiri; Tapanya, Monruedee; Maghanemi, Utumma; Ratanasthien, Kwanchai; Kothan, Suchart

    2017-09-01

    To study the bone mineral density (BMD) and the prevalence of osteoporosis in the distal forearm among Thai men over 40 years of age in Mae Chaem District, Chiang Mai Province, Thailand. The subjects in this study were 194 Thai men, aged between 40 and 87 years, who resided in Mae Chaem District, Chiang Mai Province, Thailand. Self-administered questionnaires were used to collect demographic information. BMD was measured by peripheral dual energy X-ray absorptiometry at the nondominant distal forearm in all men. The BMD was highest in the age-group 40-49 years and lowest in the age-group 70-87 years. The average T-score at the distal forearm was also highest in the age-group 40-49 years and lowest in the age-group 70-87 years. The BMD decreased as a function of age-group (p < .05). The percentages of osteopenia and osteoporosis increased as a function of age-group, while that of normal bone density decreased. We found osteoporosis to be prevalent in men who resided in Mae Chaem District, Chiang Mai Province, Thailand.

  7. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    Science.gov (United States)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  8. Systematic errors of EIT systems determined by easily-scalable resistive phantoms

    International Nuclear Information System (INIS)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-01-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design

  9. Absolute measurement of 152Eu

    International Nuclear Information System (INIS)

    Baba, Hiroshi; Baba, Sumiko; Ichikawa, Shinichi; Sekine, Toshiaki; Ishikawa, Isamu

    1981-08-01

    A new method of the absolute measurement for 152 Eu was established based on the 4πβ-γ spectroscopic anti-coincidence method. It is a coincidence counting method consisting of a 4πβ-counter and a Ge(Li) γ-ray detector, in which the effective counting efficiencies of the 4πβ-counter for β-rays, conversion electrons, and Auger electrons were obtained by taking the intensity ratios for certain γ-rays between the single spectrum and the spectrum coincident with the pulses from the 4πβ-counter. First, in order to verify the method, three different methods of the absolute measurement were performed with a prepared 60 Co source to find excellent agreement among the results deduced by them. Next, the 4πβ-γ spectroscopic coincidence measurement was applied to 152 Eu sources prepared by irradiating an enriched 151 Eu target in a reactor. The result was compared with that obtained by the γ-ray spectrometry using a 152 Eu standard source supplied by LMRI. They agreed with each other within the error of 2%. (author)

  10. Online absolute pose compensation and steering control of industrial robot based on six degrees of freedom laser measurement

    Science.gov (United States)

    Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu

    2017-03-01

    In-situ intelligent manufacturing for large-volume equipment requires industrial robots with absolute high-accuracy positioning and orientation steering control. Conventional robots mainly employ an offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly. It is not possible to acquire a robot's actual parameters and control the absolute pose of the robot with a high accuracy within a large workspace by offline calibration in real-time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six degrees of freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately achieving the position and orientation of the robot end-tool, mapping the computed Jacobian matrix of the joint variable and correcting the joint variable, the real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible, and the online absolute accuracy of a robot is sufficiently enhanced.

  11. Pseudo-absolute quantitative analysis using gas chromatography – Vacuum ultraviolet spectroscopy – A tutorial

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Ling [Department of Chemistry & Biochemistry, The University of Texas at Arlington, Arlington, TX (United States); Smuts, Jonathan; Walsh, Phillip [VUV Analytics, Inc., Cedar Park, TX (United States); Qiu, Changling [Department of Chemistry & Biochemistry, The University of Texas at Arlington, Arlington, TX (United States); McNair, Harold M. [Department of Chemistry, Virginia Tech, Blacksburg, VA (United States); Schug, Kevin A., E-mail: kschug@uta.edu [Department of Chemistry & Biochemistry, The University of Texas at Arlington, Arlington, TX (United States)

    2017-02-08

    The vacuum ultraviolet detector (VUV) is a new non-destructive mass sensitive detector for gas chromatography that continuously and rapidly collects full wavelength range absorption between 120 and 240 nm. In addition to conventional methods of quantification (internal and external standard), gas chromatography - vacuum ultraviolet spectroscopy has the potential for pseudo-absolute quantification of analytes based on pre-recorded cross sections (well-defined absorptivity across the 120–240 nm wavelength range recorded by the detector) without the need for traditional calibration. The pseudo-absolute method was used in this research to experimentally evaluate the sources of sample loss and gain associated with sample introduction into a typical gas chromatograph. Standard samples of benzene and natural gas were used to assess precision and accuracy for the analysis of liquid and gaseous samples, respectively, based on the amount of analyte loaded on-column. Results indicate that injection volume, split ratio, and sampling times for splitless analysis can all contribute to inaccurate, yet precise sample introduction. For instance, an autosampler can very reproducibly inject a designated volume, but there are significant systematic errors (here, a consistently larger volume than that designated) in the actual volume introduced. The pseudo-absolute quantification capability of the vacuum ultraviolet detector provides a new means for carrying out system performance checks and potentially for solving challenging quantitative analytical problems. For practical purposes, an internal standardized approach to normalize systematic errors can be used to perform quantitative analysis with the pseudo-absolute method. - Highlights: • Gas chromatography diagnostics and quantification using VUV detector. • Absorption cross-sections for molecules enable pseudo-absolute quantitation. • Injection diagnostics reveal systematic errors in hardware settings. • Internal

  12. Pseudo-absolute quantitative analysis using gas chromatography – Vacuum ultraviolet spectroscopy – A tutorial

    International Nuclear Information System (INIS)

    Bai, Ling; Smuts, Jonathan; Walsh, Phillip; Qiu, Changling; McNair, Harold M.; Schug, Kevin A.

    2017-01-01

    The vacuum ultraviolet detector (VUV) is a new non-destructive mass sensitive detector for gas chromatography that continuously and rapidly collects full wavelength range absorption between 120 and 240 nm. In addition to conventional methods of quantification (internal and external standard), gas chromatography - vacuum ultraviolet spectroscopy has the potential for pseudo-absolute quantification of analytes based on pre-recorded cross sections (well-defined absorptivity across the 120–240 nm wavelength range recorded by the detector) without the need for traditional calibration. The pseudo-absolute method was used in this research to experimentally evaluate the sources of sample loss and gain associated with sample introduction into a typical gas chromatograph. Standard samples of benzene and natural gas were used to assess precision and accuracy for the analysis of liquid and gaseous samples, respectively, based on the amount of analyte loaded on-column. Results indicate that injection volume, split ratio, and sampling times for splitless analysis can all contribute to inaccurate, yet precise sample introduction. For instance, an autosampler can very reproducibly inject a designated volume, but there are significant systematic errors (here, a consistently larger volume than that designated) in the actual volume introduced. The pseudo-absolute quantification capability of the vacuum ultraviolet detector provides a new means for carrying out system performance checks and potentially for solving challenging quantitative analytical problems. For practical purposes, an internal standardized approach to normalize systematic errors can be used to perform quantitative analysis with the pseudo-absolute method. - Highlights: • Gas chromatography diagnostics and quantification using VUV detector. • Absorption cross-sections for molecules enable pseudo-absolute quantitation. • Injection diagnostics reveal systematic errors in hardware settings. • Internal

  13. Examination of Spectral Transformations on Spectral Mixture Analysis

    Science.gov (United States)

    Deng, Y.; Wu, C.

    2018-04-01

    While many spectral transformation techniques have been applied to spectral mixture analysis (SMA), few studies have examined their necessity and applicability. This paper focuses on exploring the differences between spectrally transformed schemes and the untransformed scheme to find out which transformed scheme performs better in SMA. In particular, nine spectrally transformed schemes as well as the untransformed scheme were examined in two study areas. Each transformed scheme was tested 100 times using different endmember classes' spectra under the endmember model of vegetation-high albedo impervious surface area-low albedo impervious surface area-soil (V-ISAh-ISAl-S). Performance of each scheme was assessed based on the mean absolute error (MAE). The statistical technique of the paired-samples t test was applied to test the significance of the difference in mean MAEs between transformed and untransformed schemes. The results demonstrated that only NSMA could exceed the untransformed scheme in all study areas. Some transformed schemes showed unstable performance since they outperformed the untransformed scheme in one area but weakened the SMA result in another region.
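
    As background, a minimal linear SMA sketch is given below: each pixel is unmixed into non-negative, sum-to-one endmember fractions (here via non-negative least squares with an appended sum-to-one row, one common trick among several), and MAE is computed against the known fractions of a synthetic pixel. The paper's exact MAE definition and endmember spectra are not restated in the abstract, so everything here is illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, weight=1e3):
    """Fully constrained (non-negative, approximately sum-to-one) linear unmixing of one pixel.
    endmembers has shape (n_bands, n_endmembers); the sum-to-one constraint is enforced
    softly via an appended, heavily weighted row of ones."""
    E = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)
    fractions, _ = nnls(E, b)
    return fractions

# toy 6-band example with a V-ISAh-ISAl-S endmember set (illustrative spectra, not real data)
E = np.array([[0.05, 0.60, 0.10, 0.25],
              [0.08, 0.62, 0.11, 0.28],
              [0.06, 0.65, 0.12, 0.30],
              [0.45, 0.68, 0.13, 0.35],
              [0.30, 0.70, 0.15, 0.40],
              [0.15, 0.72, 0.16, 0.45]])
true_f = np.array([0.4, 0.2, 0.1, 0.3])
pixel = E @ true_f + 0.01                       # synthetic mixed pixel with a small offset
f = unmix(pixel, E)
mae = np.mean(np.abs(f - true_f))               # MAE vs reference fractions (one possible definition)
print("fractions:", np.round(f, 3), " MAE:", round(mae, 4))
```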

  14. Machine Learning Methods to Predict Density Functional Theory B3LYP Energies of HOMO and LUMO Orbitals.

    Science.gov (United States)

    Pereira, Florbela; Xiao, Kaixia; Latino, Diogo A R S; Wu, Chengcheng; Zhang, Qingyou; Aires-de-Sousa, Joao

    2017-01-23

    Machine learning algorithms were explored for the fast estimation of HOMO and LUMO orbital energies calculated by DFT B3LYP, on the basis of molecular descriptors exclusively based on connectivity. The whole project involved the retrieval and generation of molecular structures, quantum chemical calculations for a database with >111 000 structures, development of new molecular descriptors, and training/validation of machine learning models. Several machine learning algorithms were screened, and an applicability domain was defined based on Euclidean distances to the training set. Random forest models predicted an external test set of 9989 compounds, achieving a mean absolute error (MAE) of up to 0.15 and 0.16 eV for the HOMO and LUMO orbitals, respectively. The impact of the quantum chemical calculation protocol was assessed with a subset of compounds. Inclusion of the orbital energy calculated by PM7 as an additional descriptor significantly improved the quality of the estimations (reducing the MAE by >30%).
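
    A hedged sketch of the modeling recipe (a random forest regressor evaluated by MAE on an external test set, with a simple Euclidean-distance applicability domain) is shown below. The descriptors, energies and the 95th-percentile distance cutoff are all invented placeholders, not the study's connectivity descriptors or DFT data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# stand-in connectivity descriptors and HOMO energies (eV); the real study used >111k DFT records
X = rng.normal(size=(2000, 30))
y = -6.0 + 0.4 * X[:, 0] - 0.3 * X[:, 5] + rng.normal(0, 0.15, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test MAE (eV):", round(mean_absolute_error(y_te, rf.predict(X_te)), 3))

# simple applicability domain: distance to the nearest training example
d_nearest = np.array([np.min(np.linalg.norm(X_tr - x, axis=1)) for x in X_te])
inside = d_nearest <= np.percentile(d_nearest, 95)      # assumed cutoff, e.g. 95th percentile
print("MAE inside domain:", round(mean_absolute_error(y_te[inside], rf.predict(X_te[inside])), 3))
```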

  15. Prediction of Backbreak in Open-Pit Blasting Operations Using the Machine Learning Method

    Science.gov (United States)

    Khandelwal, Manoj; Monjezi, M.

    2013-03-01

    Backbreak is an undesirable phenomenon in blasting operations. It can cause instability of mine walls, falling down of machinery, improper fragmentation, reduced efficiency of drilling, etc. The existence of various effective parameters and their unknown relationships are the main reasons for inaccuracy of the empirical models. Presently, the application of new approaches such as artificial intelligence is highly recommended. In this paper, an attempt has been made to predict backbreak in blasting operations of Soungun iron mine, Iran, incorporating rock properties and blast design parameters using the support vector machine (SVM) method. To investigate the suitability of this approach, the predictions by SVM have been compared with multivariate regression analysis (MVRA). The coefficient of determination (CoD) and the mean absolute error (MAE) were taken as performance measures. It was found that the CoD between measured and predicted backbreak was 0.987 and 0.89 by SVM and MVRA, respectively, whereas the MAE was 0.29 and 1.07 by SVM and MVRA, respectively.

  16. Preliminary investigation on the prevalence of malaria and HIV co-infection in Mae Sot District, Tak Province of Thailand

    Directory of Open Access Journals (Sweden)

    Siwalee Rattanapunya

    2015-05-01

    Conclusions: The increasing trend in the prevalence of malaria and HIV co-infection in Mae Sot, Tak Province is of great concern from both the pharmacodynamic and pharmacokinetic points of view. A study in a larger number of malaria patients in different endemic areas throughout the country and over different time periods is underway.

  17. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    Science.gov (United States)

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with the existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, thus making it more reliable and avoiding the procedure of search in detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.

  18. Absolute beam-charge measurement for single-bunch electron beams

    International Nuclear Information System (INIS)

    Suwada, Tsuyoshi; Ohsawa, Satoshi; Furukawa, Kazuro; Akasaka, Nobumasa

    2000-01-01

    The absolute beam charge of a single-bunch electron beam with a pulse width of 10 ps and that of a short-pulsed electron beam with a pulse width of 1 ns were measured with a Faraday cup in a beam test for the KEK B-Factory (KEKB) injector linac. It is strongly desired to obtain a precise beam-injection rate to the KEKB rings, and to estimate the amount of beam loss. A wall-current monitor was also recalibrated within an error of ±2%. This report describes the new results for an absolute beam-charge measurement for single-bunch and short-pulsed electron beams, and recalibration of the wall-current monitors in detail. (author)

  19. Absolute method of measuring magnetic susceptibility

    Science.gov (United States)

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.

  20. Prediction of monthly average global solar radiation based on statistical distribution of clearness index

    International Nuclear Information System (INIS)

    Ayodele, T.R.; Ogunjuyigbe, A.S.O.

    2015-01-01

    In this paper, a probability distribution of the clearness index is proposed for the prediction of global solar radiation. First, the clearness index is obtained from past data of global solar radiation; then, the parameters of the appropriate distribution that best fits the clearness index are determined. The global solar radiation is thereafter predicted from the clearness index using the inverse transformation of the cumulative distribution function. To validate the proposed method, eight years of global solar radiation data (2000–2007) for Ibadan, Nigeria are used to determine the parameters of the appropriate probability distribution for the clearness index. The calculated parameters are then used to predict the future monthly average global solar radiation for the following year (2008). The predicted values are compared with the measured values using four statistical tests: the Root Mean Square Error (RMSE), MAE (Mean Absolute Error), MAPE (Mean Absolute Percentage Error) and the coefficient of determination (R²). The proposed method is also compared to existing regression models. The results show that the logistic distribution provides the best fit for the clearness index of Ibadan and the proposed method is effective in predicting the monthly average global solar radiation with an overall RMSE of 0.383 MJ/m²/day, MAE of 0.295 MJ/m²/day, MAPE of 2% and R² of 0.967. - Highlights: • Distribution of clearness index is proposed for prediction of global solar radiation. • The clearness index is obtained from the past data of global solar radiation. • The parameters of distribution that best fit the clearness index are determined. • Solar radiation is predicted from the clearness index using inverse transformation. • The method is effective in predicting the monthly average global solar radiation.
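
    The core of the method (fit a distribution to historical clearness indices, then map uniform draws through the inverse CDF and scale by extraterrestrial radiation) can be sketched with SciPy as below. The clearness-index sample, the logistic fit and the H0 value are illustrative assumptions, not the Ibadan data or the paper's exact workflow.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# stand-in historical clearness-index values for one calendar month (not the Ibadan record)
kt_hist = np.clip(rng.normal(0.52, 0.08, 8 * 30), 0.05, 0.85)

loc, scale = stats.logistic.fit(kt_hist)           # parameters of the fitted logistic distribution

# inverse transformation of the CDF: map uniform draws to clearness-index values
u = rng.uniform(size=1000)
kt_pred = np.clip(stats.logistic.ppf(u, loc=loc, scale=scale), 0.0, 1.0)

H0 = 35.0                                          # assumed monthly-mean extraterrestrial radiation, MJ/m^2/day
H_pred = kt_pred * H0                              # global solar radiation = clearness index * H0
print("predicted monthly mean H: %.2f MJ/m^2/day" % H_pred.mean())
```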

  1. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
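
    For concreteness, a small sketch of the kind of measures discussed in this record, computed against a naïve benchmark forecast. The UMBRAE expression below follows the unscaling of the mean bounded relative absolute error as described in the cited paper, but the exact definition (and its treatment of zero-error ties) should be checked against the original text; all names are illustrative.

        import numpy as np

        def mae(y, f):
            return float(np.mean(np.abs(y - f)))

        def smape(y, f):
            """Symmetric mean absolute percentage error, in percent."""
            return float(100.0 * np.mean(2.0 * np.abs(y - f) / (np.abs(y) + np.abs(f))))

        def umbrae(y, f, f_benchmark):
            """Unscaled mean bounded relative absolute error against a benchmark forecast.
            Cases where both forecasts have zero error would need special handling."""
            e, e_b = np.abs(y - f), np.abs(y - f_benchmark)
            mbrae = np.mean(e / (e + e_b))          # bounded relative absolute error, averaged
            return float(mbrae / (1.0 - mbrae))     # unscaling step, per the cited paper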

  2. [Comparison of three daily global solar radiation models].

    Science.gov (United States)

    Yang, Jin-Ming; Fan, Wen-Yi; Zhao, Ying-Hui

    2014-08-01

    Three daily global solar radiation estimation models (the Å-P model, the Thornton-Running model and the model provided by Liu Ke-qun et al.) were analyzed and compared using data of 13 weather stations from 1982 to 2012 from three northeastern provinces and eastern Inner Mongolia. After cross-validation analysis, the results showed that the mean absolute error (MAE) for each model was 1.71, 2.83 and 1.68 MJ·m⁻²·d⁻¹, respectively, showing that the Å-P model and the model provided by Liu Ke-qun et al., which use the percentage of sunshine, had an advantage over the Thornton-Running model, which does not. The model provided by Liu Ke-qun et al. performed well under conditions without sunshine, and its MAE and bias percentage were 18.5% and 33.8% smaller than those of the Å-P model, respectively. High-precision results could be obtained by using the simple linear Å-P model. The Å-P model, the Thornton-Running model and the model provided by Liu Ke-qun et al. overvalued daily global solar radiation by 12.2%, 19.2% and 9.9%, respectively. MAE for each station varied little with the spatial change of location, and annual MAE decreased with the advance of years; the reason for this might be the change in observation accuracy caused by the replacement of the radiation instrument in 1993. MAEs for rainy days, non-sunshine days and warm seasons of the three models were greater than those for days without rain, sunshine days and cold seasons, respectively, showing that different methods should be used for different weather conditions when estimating solar radiation from meteorological elements.
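
    The Å-P (Ångström-Prescott) model mentioned above is the simple linear relation H/H0 = a + b·(n/N), where H is daily global radiation, H0 extraterrestrial radiation and n/N the relative sunshine duration. A hedged sketch of fitting the site-specific coefficients by least squares follows; the variable names are illustrative and the coefficients are not those calibrated in the study.

        import numpy as np

        def fit_angstrom_prescott(h, h0, n, big_n):
            """Fit H/H0 = a + b*(n/N) by ordinary least squares; a and b are site-specific."""
            clearness = h / h0                  # H/H0
            sunshine_fraction = n / big_n       # n/N
            b, a = np.polyfit(sunshine_fraction, clearness, 1)   # slope, intercept
            return a, b

        def estimate_radiation(a, b, h0, n, big_n):
            """Daily global radiation from extraterrestrial radiation and sunshine fraction."""
            return h0 * (a + b * n / big_n)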

  3. Deep learning methods for protein torsion angle prediction.

    Science.gov (United States)

    Li, Haiou; Hou, Jie; Adhikari, Badri; Lyu, Qiang; Cheng, Jianlin

    2017-09-18

    Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to the field of bioinformatics in 2012, it has achieved success in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. We designed four different deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN) and a deep recurrent restricted Boltzmann machine (DReRBM), since protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four deep learning architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of the phi and psi angles predicted by DRNN, DReRBM, DRBM and DNN is about 20-21° and 29-30° on an independent dataset. The MAE of the phi angle is comparable to that of existing methods, but the MAE of the psi angle is 29°, 2° lower than that of existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. Our experiment demonstrates that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent network architecture performs slightly better than the deep feed-forward architecture, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
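
    A generic, hedged sketch of a recurrent model for this task in PyTorch; the specific architectures named in the record (DRBM, DReRBM, etc.) and its feature set are not reproduced, and encoding each angle as a (sin, cos) pair with an angular MAE in degrees is a common convention assumed here rather than a detail taken from the abstract.

        import math
        import torch
        import torch.nn as nn

        class TorsionRNN(nn.Module):
            """Bidirectional LSTM mapping per-residue features to (sin, cos) of phi and psi."""
            def __init__(self, n_features, hidden=64):
                super().__init__()
                self.rnn = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, 4)      # sin/cos for phi, sin/cos for psi

            def forward(self, x):                          # x: (batch, seq_len, n_features)
                h, _ = self.rnn(x)
                return torch.tanh(self.head(h))            # outputs constrained to [-1, 1]

        def angular_mae_deg(pred_sincos, true_sincos):
            """Mean absolute error in degrees for one angle, accounting for wrap-around."""
            pred = torch.atan2(pred_sincos[..., 0], pred_sincos[..., 1])
            true = torch.atan2(true_sincos[..., 0], true_sincos[..., 1])
            diff = torch.remainder(pred - true + math.pi, 2 * math.pi) - math.pi
            return torch.rad2deg(diff.abs().mean())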

  4. Smartphone application for mechanical quality assurance of medical linear accelerators

    Science.gov (United States)

    Kim, Hwiyoung; Lee, Hyunseok; In Park, Jong; Choi, Chang Heon; Park, So-Yeon; Kim, Hee Jung; Kim, Young Suk; Ye, Sung-Joon

    2017-06-01

    Mechanical quality assurance (QA) of medical linear accelerators consists of time-consuming and human-error-prone procedures. We developed a smartphone application system for mechanical QA. The system consists of two smartphones: one attached to a gantry for obtaining real-time information on the mechanical parameters of the medical linear accelerator, and another displaying real-time information via a Bluetooth connection with the former. Motion sensors embedded in the smartphone were used to measure gantry and collimator rotations. Images taken by the smartphone’s high-resolution camera were processed to evaluate accuracies of jaw-positioning, crosshair centering and source-to-surface distance (SSD). The application was developed using Android software development kit and OpenCV library. The accuracy and precision of the system was validated against an optical rotation stage and digital calipers, prior to routine QA measurements of five medical linear accelerators. The system accuracy and precision in measuring angles and lengths were determined to be 0.05  ±  0.05° and 0.25  ±  0.14 mm, respectively. The mean absolute errors (MAEs) in QA measurements of gantry and collimator rotation were 0.05  ±  0.04° and 0.05  ±  0.04°, respectively. The MAE in QA measurements of light field was 0.39  ±  0.36 mm. The MAEs in QA measurements of crosshair centering and SSD were 0.40  ±  0.35 mm and 0.41  ±  0.32 mm, respectively. In conclusion, most routine mechanical QA procedures could be performed using the smartphone application system with improved precision and within a shorter time-frame, while eliminating potential human errors.

  5. Smartphone application for mechanical quality assurance of medical linear accelerators.

    Science.gov (United States)

    Kim, Hwiyoung; Lee, Hyunseok; Park, Jong In; Choi, Chang Heon; Park, So-Yeon; Kim, Hee Jung; Kim, Young Suk; Ye, Sung-Joon

    2017-06-07

    Mechanical quality assurance (QA) of medical linear accelerators consists of time-consuming and human-error-prone procedures. We developed a smartphone application system for mechanical QA. The system consists of two smartphones: one attached to a gantry for obtaining real-time information on the mechanical parameters of the medical linear accelerator, and another displaying real-time information via a Bluetooth connection with the former. Motion sensors embedded in the smartphone were used to measure gantry and collimator rotations. Images taken by the smartphone's high-resolution camera were processed to evaluate accuracies of jaw-positioning, crosshair centering and source-to-surface distance (SSD). The application was developed using Android software development kit and OpenCV library. The accuracy and precision of the system was validated against an optical rotation stage and digital calipers, prior to routine QA measurements of five medical linear accelerators. The system accuracy and precision in measuring angles and lengths were determined to be 0.05  ±  0.05° and 0.25  ±  0.14 mm, respectively. The mean absolute errors (MAEs) in QA measurements of gantry and collimator rotation were 0.05  ±  0.04° and 0.05  ±  0.04°, respectively. The MAE in QA measurements of light field was 0.39  ±  0.36 mm. The MAEs in QA measurements of crosshair centering and SSD were 0.40  ±  0.35 mm and 0.41  ±  0.32 mm, respectively. In conclusion, most routine mechanical QA procedures could be performed using the smartphone application system with improved precision and within a shorter time-frame, while eliminating potential human errors.

  6. Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing

    Science.gov (United States)

    Williams, McKay D.

    coarse scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine scale pixel. We compared simple rectangular aggregation to coarse sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean adjusted MAE (MA-MAE), and reference data confidence interval adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope = 0.85, intercept = 0.04, and R2 = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.

  7. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    The scheme is presented for calculating the errors of dry matter values which occur during approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Further formulae are shown which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oat and maize plants are given. A critical analysis of the estimation of the obtained results is carried out, and the usefulness of jointly applying statistical methods and error calculus in plant growth analysis is demonstrated.
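
    The growth characteristics listed above have standard definitions in classical growth analysis; for reference, the usual forms are summarized below in LaTeX, with W the dry matter, t the time and L_A the leaf area. The exact parameterization of the logistic and Richards curves used in the paper may differ, so these expressions are illustrative rather than a reproduction of its formulae.

        % Logistic and Richards growth curves (standard forms, assumed here)
        W(t) = \frac{A}{1 + e^{\,b - kt}}, \qquad
        W(t) = \frac{A}{\left(1 + e^{\,b - kt}\right)^{1/\nu}}

        % Growth characteristics whose absolute errors the paper derives
        \mathrm{GR} = \frac{dW}{dt}, \quad
        \mathrm{RGR} = \frac{1}{W}\,\frac{dW}{dt}, \quad
        \mathrm{ULR} = \frac{1}{L_A}\,\frac{dW}{dt}, \quad
        \mathrm{LAR} = \frac{L_A}{W}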

  8. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    Science.gov (United States)

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

    Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported; however, none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly. In conclusion, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Left-hemisphere activation is associated with enhanced vocal pitch error detection in musicians with absolute pitch

    Science.gov (United States)

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A.; Larson, Charles R.

    2014-01-01

    The ability to process auditory feedback for vocal pitch control is crucial during speaking and singing. Previous studies have suggested that musicians with absolute pitch (AP) develop specialized left-hemisphere mechanisms for pitch processing. The present study adopted an auditory feedback pitch perturbation paradigm combined with ERP recordings to test the hypothesis whether the neural mechanisms of the left-hemisphere enhance vocal pitch error detection and control in AP musicians compared with relative pitch (RP) musicians and non-musicians (NM). Results showed a stronger N1 response to pitch-shifted voice feedback in the right-hemisphere for both AP and RP musicians compared with the NM group. However, the left-hemisphere P2 component activation was greater in AP and RP musicians compared with NMs and also for the AP compared with RP musicians. The NM group was slower in generating compensatory vocal reactions to feedback pitch perturbation compared with musicians, and they failed to re-adjust their vocal pitch after the feedback perturbation was removed. These findings suggest that in the earlier stages of cortical neural processing, the right hemisphere is more active in musicians for detecting pitch changes in voice feedback. In the later stages, the left-hemisphere is more active during the processing of auditory feedback for vocal motor control and seems to involve specialized mechanisms that facilitate pitch processing in the AP compared with RP musicians. These findings indicate that the left hemisphere mechanisms of AP ability are associated with improved auditory feedback pitch processing during vocal pitch control in tasks such as speaking or singing. PMID:24355545

  10. STAR barrel electromagnetic calorimeter absolute calibration using 'minimum ionizing particles' from collisions at RHIC

    International Nuclear Information System (INIS)

    Cormier, T.M.; Pavlinov, A.I.; Rykov, M.V.; Rykov, V.L.; Shestermanov, K.E.

    2002-01-01

    The procedure for the STAR Barrel Electromagnetic Calorimeter (BEMC) absolute calibrations, using penetrating charged particle hits (MIP-hits) from physics events at RHIC, is presented. Its systematic and statistical errors are evaluated. It is shown that, using this technique, the equalization and transfer of the absolute scale from the test beam can be done to a percent level accuracy in a reasonable amount of time for the entire STAR BEMC. MIP-hits would also be an effective tool for continuously monitoring the variations of the BEMC tower's gains, virtually without interference to STAR's main physics program. The method does not rely on simulations for anything other than geometric and some other small corrections, and also for estimations of the systematic errors. It directly transfers measured test beam responses to operations at RHIC

  11. A hybrid model for dissolved oxygen prediction in aquaculture based on multi-scale features

    Directory of Open Access Journals (Sweden)

    Chen Li

    2018-03-01

    To increase prediction accuracy of dissolved oxygen (DO) in aquaculture, a hybrid model based on multi-scale features using ensemble empirical mode decomposition (EEMD) is proposed. Firstly, original DO datasets are decomposed by EEMD and we get several components. Secondly, these components are used to reconstruct four terms including high frequency term, intermediate frequency term, low frequency term and trend term. Thirdly, according to the characteristics of high and intermediate frequency terms, which fluctuate violently, the least squares support vector machine (LSSVR) is used to predict the two terms. The fluctuation of low frequency term is gentle and periodic, so it can be modeled by BP neural network with an optimal mind evolutionary computation (MEC-BP). Then, the trend term is predicted using grey model (GM) because it is nearly linear. Finally, the prediction values of DO datasets are calculated by the sum of the forecasting values of all terms. The experimental results demonstrate that our hybrid model outperforms EEMD-ELM (extreme learning machine based on EEMD), EEMD-BP and MEC-BP models based on the mean absolute error (MAE), mean absolute percentage error (MAPE), mean square error (MSE) and root mean square error (RMSE). Our hybrid model is proven to be an effective approach to predict aquaculture DO.

  12. Artificial evolutionary approaches to produce smoother surface in magnetic abrasive finishing of hardened AISI 52100 steel

    Energy Technology Data Exchange (ETDEWEB)

    Teimouri, Reza; Baseri, Hamid [Babol University of Technology, Babol (Iran, Islamic Republic of)

    2013-02-15

    In this work, two models, a feed-forward back-propagation neural network (FFBP-NN) and an adaptive neuro-fuzzy inference system (ANFIS), have been developed to predict the performance of the magnetic abrasive finishing process, based on experimental data from the literature. The input parameters of the process are the electromagnet voltage, mesh number of abrasive particles, pole rotational speed and weight percent of abrasive particles, and the output is the percentage of surface roughness variation. In order to select the best model, a comparison between the developed models has been made based on their mean absolute error (MAE) and root mean square error (RMSE). Moreover, optimization methods based on simulated annealing (SA) and particle swarm optimization (PSO) algorithms were used to maximize the percentage of surface roughness variation and select the optimal process parameters. Results indicated that the artificial intelligence models predict much more precise values than the predictive regression model developed in the original study. Also, the ANFIS model had the lowest MAE and RMSE of all the models, so it was used as the objective function for maximizing the surface roughness variation with SA and PSO. Comparison of the obtained optimal solutions with the results in the original study indicated that SA and PSO could find the optimal answers reliably and precisely.

  13. An absolute distance interferometer with two external cavity diode lasers

    International Nuclear Information System (INIS)

    Hartmann, L; Meiners-Hagen, K; Abou-Zeid, A

    2008-01-01

    An absolute interferometer for length measurements in the range of several metres has been developed. The use of two external cavity diode lasers allows the implementation of a two-step procedure which combines the length measurement with a variable synthetic wavelength and its interpolation with a fixed synthetic wavelength. This synthetic wavelength is obtained at ≈42 µm by a modulation-free stabilization of both lasers to Doppler-reduced rubidium absorption lines. A stable reference interferometer is used as length standard. Different contributions to the total measurement uncertainty are discussed. It is shown that the measurement uncertainty can considerably be reduced by correcting the influence of vibrations on the measurement result and by applying linear regression to the quadrature signals of the absolute interferometer and the reference interferometer. The comparison of the absolute interferometer with a counting interferometer for distances up to 2 m results in a linearity error of 0.4 µm in good agreement with an estimation of the measurement uncertainty

  14. Globular Clusters: Absolute Proper Motions and Galactic Orbits

    Science.gov (United States)

    Chemel, A. A.; Glushkova, E. V.; Dambis, A. K.; Rastorguev, A. S.; Yalyalieva, L. N.; Klinichev, A. D.

    2018-04-01

    We cross-match objects from several different astronomical catalogs to determine the absolute proper motions of stars within the 30-arcmin radius fields of 115 Milky-Way globular clusters with an accuracy of 1-2 mas yr⁻¹. The proper motions are based on positional data recovered from the USNO-B1, 2MASS, URAT1, ALLWISE, UCAC5, and Gaia DR1 surveys with up to ten positions spanning an epoch difference of up to about 65 years, and reduced to the Gaia DR1 TGAS frame using UCAC5 as the reference catalog. Cluster members are photometrically identified by selecting horizontal- and red-giant branch stars on color-magnitude diagrams, and the mean absolute proper motions of the clusters with a typical formal error of about 0.4 mas yr⁻¹ are computed by averaging the proper motions of selected members. The inferred absolute proper motions of clusters are combined with available radial-velocity data and heliocentric distance estimates to compute the cluster orbits in terms of the Galactic potential models based on the Miyamoto and Nagai disk, Hernquist spheroid, and modified isothermal dark-matter halo (axisymmetric model without a bar) and the same model + rotating Ferrers bar (non-axisymmetric). Five distant clusters have higher-than-escape velocities, most likely due to large errors of the computed transversal velocities, whereas the computed orbits of all other clusters remain bound to the Galaxy. Unlike previously published results, we find the bar to affect substantially the orbits of most of the clusters, even those at large Galactocentric distances, bringing appreciable chaotization, especially in the portions of the orbits close to the Galactic center, and stretching out the orbits of some of the thick-disk clusters.

  15. Artificial neural network and response surface methodology modeling in mass transfer parameters predictions during osmotic dehydration of Carica papaya L.

    Directory of Open Access Journals (Sweden)

    J. Prakash Maran

    2013-09-01

    In this study, a comparative approach was made between an artificial neural network (ANN) and response surface methodology (RSM) to predict the mass transfer parameters of osmotic dehydration of papaya. The effects of process variables such as temperature, osmotic solution concentration and agitation speed on water loss, weight reduction, and solid gain during osmotic dehydration were investigated using a three-level three-factor Box-Behnken experimental design. The same design was utilized to train a feed-forward multilayered perceptron (MLP) ANN with the back-propagation algorithm. The predictive capabilities of the two methodologies were compared in terms of root mean square error (RMSE), mean absolute error (MAE), standard error of prediction (SEP), model predictive error (MPE), chi-square statistic (χ²), and coefficient of determination (R²) based on the validation data set. The results showed that a properly trained ANN model is more accurate in prediction than the RSM model.

  16. Mapping MOS-HIV to HUI3 and EQ-5D-3L in Patients With HIV

    Directory of Open Access Journals (Sweden)

    Vilija R. Joyce MS

    2017-07-01

    Objectives: The Medical Outcomes Study HIV Health Survey (MOS-HIV) is frequently used in HIV clinical trials; however, scores generated from the MOS-HIV are not suited for cost-effectiveness analyses as they do not assign utility values to health states. Our objective was to estimate and externally validate several mapping algorithms to predict Health Utilities Index Mark 3 (HUI3) and EQ-5D-3L utility values from the MOS-HIV. Methods: We developed and validated mapping algorithms using data from two HIV clinical trials. Data from the first trial (n = 367) formed the estimation data set for the HUI3 (4,610 observations) and EQ-5D-3L (4,662 observations) mapping algorithms; data from the second trial (n = 168) formed the HUI3 (1,135 observations) and EQ-5D-3L (1,152 observations) external validation data set. We compared ordinary least squares (OLS) models of increasing complexity with the more flexible two-part, beta regression, and finite mixture models. We assessed model performance using mean absolute error (MAE) and mean squared error (MSE). Results: The OLS model that used MOS-HIV dimension scores along with squared terms gave the best HUI3 predictions (mean observed 0.84; mean predicted 0.80; MAE 0.0961); the finite mixture model gave the best EQ-5D-3L predictions (mean observed 0.90; mean predicted 0.88; MAE 0.0567). All models produced higher prediction errors at the lower end of the HUI3 and EQ-5D-3L score ranges (<0.40). Conclusions: The proposed mapping algorithms can be used to predict HUI3 and EQ-5D-3L utility values from the MOS-HIV, although greater error may pose a problem in samples where a substantial proportion of patients are in poor health. These algorithms may be useful for estimating utility values from the MOS-HIV for cost-effectiveness studies when HUI3 or EQ-5D-3L data are not available.

  17. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that identified PDF models are valid and superior to Gaussian models used so far in the literature. The proposed methodology allows to derive realistic models of SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
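
    A hedged sketch of the zone-wise fitting idea described above: split the errors at a glucose threshold (the 100 value below is only a placeholder, not the zone boundary identified in the paper), fit a skew-normal model to each zone by maximum likelihood with SciPy, and check the fit with a Kolmogorov-Smirnov test; the separate exponential model for outliers is omitted for brevity and all names are illustrative.

        import numpy as np
        from scipy import stats

        def fit_zone_error_models(reference, measured, threshold=100.0):
            """Fit skew-normal PDFs to SMBG errors: absolute error below the glucose
            threshold, relative error above it (threshold value is illustrative)."""
            reference = np.asarray(reference, float)
            measured = np.asarray(measured, float)
            zone1 = (measured - reference)[reference <= threshold]               # constant-SD absolute error
            zone2 = ((measured - reference) / reference)[reference > threshold]  # constant-SD relative error

            models = {}
            for name, sample in (("zone1_absolute", zone1), ("zone2_relative", zone2)):
                a, loc, scale = stats.skewnorm.fit(sample)                        # maximum-likelihood fit
                ks = stats.kstest(sample, stats.skewnorm(a, loc, scale).cdf)      # goodness-of-fit check
                models[name] = {"params": (a, loc, scale), "ks_pvalue": float(ks.pvalue)}
            return models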

  18. TU-AB-BRA-03: Atlas-Based Algorithms with Local Registration-Goodness Weighting for MRI-Driven Electron Density Mapping

    International Nuclear Information System (INIS)

    Farjam, R; Tyagi, N; Veeraraghavan, H; Apte, A; Zakian, K; Deasy, J; Hunt, M

    2016-01-01

    Purpose: To develop image-analysis algorithms to synthesize CT with accurate electron densities for MR-only radiotherapy of head & neck (H&N) and pelvis anatomies. Methods: CT and 3T-MRI (Philips, mDixon sequence) scans were randomly selected from a pool of H&N (n=11) and pelvis (n=12) anatomies to form an atlas. All MRIs were pre-processed to eliminate scanner and patient-induced intensity inhomogeneities and standardize their intensity histograms. CT and MRI for each patient were then co-registered to construct CT-MRI atlases. For more accurate CT-MR fusion, bone intensities in CT were suppressed to improve the similarity between CT and MRI. For a new patient, all CT-MRI atlases are deformed onto the new patient’s MRI initially. A newly-developed generalized registration error (GRE) metric was then calculated as a measure of local registration accuracy. The synthetic CT value at each point is a 1/GRE-weighted average of CTs from all CT-MR atlases. For evaluation, the mean absolute error (MAE) between the original and synthetic CT (generated in a leave-one-out scheme) was computed. The planning dose from the original and synthetic CT was also compared. Results: For H&N patients, MAE was 67±9, 114±22, and 116±9 HU over the entire-CT, air and bone regions, respectively. For pelvis anatomy, MAE was 47±5 and 146±14 for the entire and bone regions. In comparison with MIRADA medical, an FDA-approved registration tool, we found that our proposed registration strategy reduces MAE by ∼30% and ∼50% over the entire and bone regions, respectively. GRE-weighted strategy further lowers MAE by ∼15% to ∼40%. Our primary dose calculation also showed highly consistent results between the original and synthetic CT. Conclusion: We’ve developed a novel image-analysis technique to synthesize CT for H&N and pelvis anatomies. Our proposed image fusion strategy and GRE metric help generate more accurate synthetic CT using locally more similar atlases (Support: Philips
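
    The weighting step described in this record reduces, for each voxel, to a normalized inverse-GRE average of the deformed atlas CTs. A minimal NumPy sketch of that step is given below; the deformable registration and the GRE metric itself are outside its scope, and the array names are illustrative.

        import numpy as np

        def synthesize_ct(deformed_cts, gre_maps, eps=1e-6):
            """Voxel-wise 1/GRE-weighted average of deformed atlas CTs.

            deformed_cts : array (n_atlases, *volume_shape), atlas CTs warped onto the new MRI
            gre_maps     : array of the same shape, local generalized registration error
            """
            weights = 1.0 / (np.asarray(gre_maps, float) + eps)    # lower GRE -> higher weight
            weights /= weights.sum(axis=0, keepdims=True)           # normalize across atlases per voxel
            return np.sum(weights * np.asarray(deformed_cts, float), axis=0)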

  19. Improving Accuracy Estimation of Forest Aboveground Biomass Based on Incorporation of ALOS-2 PALSAR-2 and Sentinel-2A Imagery and Machine Learning: A Case Study of the Hyrcanian Forest Area (Iran

    Directory of Open Access Journals (Sweden)

    Sasan Vafaei

    2018-01-01

    The main objective of this research is to investigate the potential combination of Sentinel-2A and ALOS-2 PALSAR-2 (Advanced Land Observing Satellite-2 Phased Array type L-band Synthetic Aperture Radar-2) imagery for improving the accuracy of Aboveground Biomass (AGB) measurement. According to the current literature, this kind of investigation has rarely been conducted. The Hyrcanian forest area (Iran) is selected as the case study. For this purpose, a total of 149 sample plots for the study area were documented through fieldwork. Using the imagery, three datasets were generated including the Sentinel-2A dataset, the ALOS-2 PALSAR-2 dataset, and the combination of the Sentinel-2A dataset and the ALOS-2 PALSAR-2 dataset (Sentinel-ALOS). Because the accuracy of the AGB estimation is dependent on the method used, in this research, four machine learning techniques were selected and compared, namely Random Forests (RF), Support Vector Regression (SVR), Multi-Layer Perceptron Neural Networks (MLP Neural Nets), and Gaussian Processes (GP). The performance of these AGB models was assessed using the coefficient of determination (R²), the root-mean-square error (RMSE), and the mean absolute error (MAE). The results showed that the AGB models derived from the combination of the Sentinel-2A and the ALOS-2 PALSAR-2 data had the highest accuracy, followed by models using the Sentinel-2A dataset and the ALOS-2 PALSAR-2 dataset. Among the four machine learning models, the SVR model (R² = 0.73, RMSE = 38.68, and MAE = 32.28) had the highest prediction accuracy, followed by the GP model (R² = 0.69, RMSE = 40.11, and MAE = 33.69), the RF model (R² = 0.62, RMSE = 43.13, and MAE = 35.83), and the MLP Neural Nets model (R² = 0.44, RMSE = 64.33, and MAE = 53.74). Overall, the Sentinel-2A imagery provides a reasonable result while the ALOS-2 PALSAR-2 imagery provides a poor result for the forest AGB estimation. The combination of the Sentinel-2A imagery and the ALOS-2 PALSAR-2

  20. TU-AB-BRA-03: Atlas-Based Algorithms with Local Registration-Goodness Weighting for MRI-Driven Electron Density Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Farjam, R; Tyagi, N [Memorial Sloan-Kettering Cancer Center, New York, NY (United States); Veeraraghavan, H; Apte, A; Zakian, K; Deasy, J [Memorial Sloan Kettering Cancer Center, New York, NY (United States); Hunt, M [Mem Sloan-Kettering Cancer Center, New York, NY (United States)

    2016-06-15

    Purpose: To develop image-analysis algorithms to synthesize CT with accurate electron densities for MR-only radiotherapy of head & neck (H&N) and pelvis anatomies. Methods: CT and 3T-MRI (Philips, mDixon sequence) scans were randomly selected from a pool of H&N (n=11) and pelvis (n=12) anatomies to form an atlas. All MRIs were pre-processed to eliminate scanner and patient-induced intensity inhomogeneities and standardize their intensity histograms. CT and MRI for each patient were then co-registered to construct CT-MRI atlases. For more accurate CT-MR fusion, bone intensities in CT were suppressed to improve the similarity between CT and MRI. For a new patient, all CT-MRI atlases are deformed onto the new patient’s MRI initially. A newly-developed generalized registration error (GRE) metric was then calculated as a measure of local registration accuracy. The synthetic CT value at each point is a 1/GRE-weighted average of CTs from all CT-MR atlases. For evaluation, the mean absolute error (MAE) between the original and synthetic CT (generated in a leave-one-out scheme) was computed. The planning dose from the original and synthetic CT was also compared. Results: For H&N patients, MAE was 67±9, 114±22, and 116±9 HU over the entire-CT, air and bone regions, respectively. For pelvis anatomy, MAE was 47±5 and 146±14 for the entire and bone regions. In comparison with MIRADA medical, an FDA-approved registration tool, we found that our proposed registration strategy reduces MAE by ∼30% and ∼50% over the entire and bone regions, respectively. GRE-weighted strategy further lowers MAE by ∼15% to ∼40%. Our primary dose calculation also showed highly consistent results between the original and synthetic CT. Conclusion: We’ve developed a novel image-analysis technique to synthesize CT for H&N and pelvis anatomies. Our proposed image fusion strategy and GRE metric help generate more accurate synthetic CT using locally more similar atlases (Support: Philips

  1. Genomic DNA-based absolute quantification of gene expression in Vitis.

    Science.gov (United States)

    Gambetta, Gregory A; McElrone, Andrew J; Matthews, Mark A

    2013-07-01

    Many studies in which gene expression is quantified by polymerase chain reaction represent the expression of a gene of interest (GOI) relative to that of a reference gene (RG). Relative expression is founded on the assumptions that RG expression is stable across samples, treatments, organs, etc., and that reaction efficiencies of the GOI and RG are equal; assumptions which are often faulty. The true variability in RG expression and actual reaction efficiencies are seldom determined experimentally. Here we present a rapid and robust method for absolute quantification of expression in Vitis where varying concentrations of genomic DNA were used to construct GOI standard curves. This methodology was utilized to absolutely quantify and determine the variability of the previously validated RG ubiquitin (VvUbi) across three test studies in three different tissues (roots, leaves and berries). In addition, in each study a GOI was absolutely quantified. Data sets resulting from relative and absolute methods of quantification were compared and the differences were striking. VvUbi expression was significantly different in magnitude between test studies and variable among individual samples. Absolute quantification consistently reduced the coefficients of variation of the GOIs by more than half, often resulting in differences in statistical significance and in some cases even changing the fundamental nature of the result. Utilizing genomic DNA-based absolute quantification is fast and efficient. Through eliminating error introduced by assuming RG stability and equal reaction efficiencies between the RG and GOI this methodology produces less variation, increased accuracy and greater statistical power. © 2012 Scandinavian Plant Physiology Society.

  2. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Assessments of a controlled clinical trial require the interpretation of some key parameters, such as the controlled event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat, when the effect of the treatment is a dichotomous variable. Defined as the difference in the event rate between treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk for an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Methods comparison uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
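
    A small sketch of the asymptotic (Wald) interval referred to above, for a trial summarized by event counts in the control and experimental arms; the paper's ADAC and ADAC1 methods are not reproduced here, and the 1.96 quantile assumes a 95% interval.

        import math

        def arr_wald_ci(events_control, n_control, events_treated, n_treated, z=1.96):
            """Absolute risk reduction with its asymptotic (Wald) confidence interval."""
            cer = events_control / n_control        # control event rate
            eer = events_treated / n_treated        # experimental event rate
            arr = cer - eer
            se = math.sqrt(cer * (1 - cer) / n_control + eer * (1 - eer) / n_treated)
            nnt = math.inf if arr == 0 else 1.0 / arr   # number needed to treat
            return arr, (arr - z * se, arr + z * se), nnt

    For example, 20 events out of 100 in the control arm versus 10 out of 100 in the treated arm gives ARR = 0.10 and a number needed to treat of 10.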

  3. Absolute risk, absolute risk reduction and relative risk

    Directory of Open Access Journals (Sweden)

    Jose Andres Calvache

    2012-12-01

    This article illustrates the epidemiological concepts of absolute risk, absolute risk reduction and relative risk through a clinical example. In addition, it emphasizes the usefulness of these concepts in clinical practice, clinical research and health decision-making process.

  4. Artificial Neural Network to Predict Vine Water Status Spatial Variability Using Multispectral Information Obtained from an Unmanned Aerial Vehicle (UAV).

    Science.gov (United States)

    Poblete, Tomas; Ortega-Farías, Samuel; Moreno, Miguel Angel; Bardeen, Matthew

    2017-10-30

    Water stress, which affects yield and wine quality, is often evaluated using the midday stem water potential (Ψstem). However, this measurement is acquired on a per-plant basis and does not account for the spatial variability of vine water status. The use of multispectral cameras mounted on an unmanned aerial vehicle (UAV) makes it possible to capture the variability of vine water stress in a whole-field scenario. It has been reported that conventional multispectral indices (CMI) that use information between 500-800 nm do not accurately predict plant water status since they are not sensitive to water content. The objective of this study was to develop artificial neural network (ANN) models derived from multispectral images to predict the Ψstem spatial variability of a drip-irrigated Carménère vineyard in Talca, Maule Region, Chile. The coefficient of determination (R²) obtained between ANN outputs and ground-truth measurements of Ψstem was between 0.56 and 0.87, with the best performance observed for the model that included the bands 550, 570, 670, 700 and 800 nm. Validation analysis indicated that the ANN model could estimate Ψstem with a mean absolute error (MAE) of 0.1 MPa, root mean square error (RMSE) of 0.12 MPa, and relative error (RE) of −9.1%. For the validation of the CMI, the MAE, RMSE and RE values were between 0.26 and 0.27 MPa, 0.32 and 0.34 MPa, and −24.2% and −25.6%, respectively.

  5. Hybrid artificial intelligence approach based on neural fuzzy inference model and metaheuristic optimization for flood susceptibility modeling in a high-frequency tropical cyclone area using GIS

    Science.gov (United States)

    Tien Bui, Dieu; Pradhan, Biswajeet; Nampak, Haleh; Bui, Quang-Thanh; Tran, Quynh-An; Nguyen, Quoc-Phi

    2016-09-01

    This paper proposes a new artificial intelligence approach based on a neural fuzzy inference system and metaheuristic optimization for flood susceptibility modeling, namely MONF. In the new approach, the neural fuzzy inference system was used to create an initial flood susceptibility model and then the model was optimized using two metaheuristic algorithms, Evolutionary Genetic and Particle Swarm Optimization. A high-frequency tropical cyclone area of the Tuong Duong district in Central Vietnam was used as a case study. First, a GIS database for the study area was constructed. The database, which includes 76 historical flood-inundated areas and ten flood-influencing factors, was used to develop and validate the proposed model. Root Mean Square Error (RMSE), Mean Absolute Error (MAE), the Receiver Operating Characteristic (ROC) curve, and the area under the ROC curve (AUC) were used to assess the model performance and its prediction capability. Experimental results showed that the proposed model has high performance on both the training (RMSE = 0.306, MAE = 0.094, AUC = 0.962) and validation datasets (RMSE = 0.362, MAE = 0.130, AUC = 0.911). The usability of the proposed model was evaluated by comparison with results obtained from state-of-the-art benchmark soft computing techniques such as the J48 Decision Tree, Random Forest, Multi-layer Perceptron Neural Network, Support Vector Machine, and Adaptive Neuro Fuzzy Inference System. The results show that the proposed MONF model outperforms the above benchmark models; we conclude that the MONF model is a new alternative tool that should be used in flood susceptibility mapping. The results of this study are useful for planners and decision makers for sustainable management of flood-prone areas.

  6. Orbital-optimized coupled-electron pair theory and its analytic gradients: Accurate equilibrium geometries, harmonic vibrational frequencies, and hydrogen transfer reactions

    Science.gov (United States)

    Bozkaya, Uǧur; Sherrill, C. David

    2013-08-01

    Orbital-optimized coupled-electron pair theory [or simply "optimized CEPA(0)," OCEPA(0), for short] and its analytic energy gradients are presented. For variational optimization of the molecular orbitals for the OCEPA(0) method, a Lagrangian-based approach is used along with an orbital direct inversion of the iterative subspace algorithm. The cost of the method is comparable to that of CCSD [O(N⁶) scaling] for energy computations. However, for analytic gradient computations the OCEPA(0) method is only half as expensive as CCSD since there is no need to solve the λ₂-amplitude equation for OCEPA(0). The performance of the OCEPA(0) method is compared with that of the canonical MP2, CEPA(0), CCSD, and CCSD(T) methods, for equilibrium geometries, harmonic vibrational frequencies, and hydrogen transfer reactions between radicals. For bond lengths of both closed and open-shell molecules, the OCEPA(0) method improves upon CEPA(0) and CCSD by 25%-43% and 38%-53%, respectively, with Dunning's cc-pCVQZ basis set. Especially for the open-shell test set, the performance of OCEPA(0) is comparable with that of CCSD(T) (ΔR is 0.0003 Å on average). For harmonic vibrational frequencies of closed-shell molecules, the OCEPA(0) method again outperforms CEPA(0) and CCSD by 33%-79% and 53%-79%, respectively. For harmonic vibrational frequencies of open-shell molecules, the mean absolute error (MAE) of the OCEPA(0) method (39 cm⁻¹) is fortuitously even better than that of CCSD(T) (50 cm⁻¹), while the MAEs of CEPA(0) (184 cm⁻¹) and CCSD (84 cm⁻¹) are considerably higher. For complete basis set estimates of hydrogen transfer reaction energies, the OCEPA(0) method again exhibits a substantially better performance than CEPA(0), providing a mean absolute error of 0.7 kcal mol⁻¹, which is more than 6 times lower than that of CEPA(0) (4.6 kcal mol⁻¹), and comparing to MP2 (7.7 kcal mol⁻¹) there is a more than 10-fold reduction in errors. Whereas the MAE for the CCSD method is only 0.1 kcal

  7. Evaluations on Profiles of the Eddy Diffusion Coefficients through Simulations of Super Typhoons in the Northwestern Pacific

    Directory of Open Access Journals (Sweden)

    Jimmy Chi Hung Fung

    2016-01-01

    The modeling of the eddy diffusion coefficients (also known as eddy diffusivity) in the first-order turbulence closure schemes is important for the typhoon simulations, since the coefficients control the magnitude of the sensible heat flux and the latent heat flux, which are energy sources for the typhoon intensification. Profiles of the eddy diffusion coefficients in the YSU planetary boundary layer (PBL) scheme are evaluated in the advanced research WRF (ARW) system. Three versions of the YSU scheme (original, K025, and K200) are included in this study. The simulation results are compared with the observational data from track, center sea-level pressure (CSLP), and maximum surface wind speed (MWSP). Comparing with the original version, the K200 improves the averaged mean absolute errors (MAE) of track, CSLP, and MWSP by 6.0%, 3.7%, and 23.1%, respectively, while the K025 deteriorates the averaged MAEs of track, CSLP, and MWSP by 25.1%, 19.0%, and 95.0%, respectively. Our results suggest that the enlarged eddy diffusion coefficients may be more suitable for super typhoon simulations.

  8. Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.

    Science.gov (United States)

    Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang

    2017-07-01

    It is possible to recover a signal below the Nyquist sampling limit using a compressive sensing technique in ultrasound imaging. However, the reconstruction enabled by common sparse transform approaches does not achieve satisfactory results. Considering the ultrasound echo signal's features of attenuation, repetition, and superposition, a sparse dictionary with the emission pulse signal is proposed. Sparse coefficients in the proposed dictionary have high sparsity. Images reconstructed with this dictionary were compared with those obtained with the three other common transforms, namely, discrete Fourier transform, discrete cosine transform, and discrete wavelet transform. The performance of the proposed dictionary was analyzed via a simulation and experimental data. The mean absolute error (MAE) was used to quantify the quality of the reconstructions. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the reconstruction time required was the shortest, and the lateral resolution and contrast of the reconstructed images were also the closest to the original images. The proposed sparse dictionary performed better than the other three sparse transforms. With the same sampling rate, the proposed dictionary achieved excellent reconstruction quality.
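
    A hedged sketch of the general recipe behind this record: build a dictionary whose atoms are time-shifted copies of the emission pulse, take random compressive measurements of the echo, and recover the sparse coefficients with orthogonal matching pursuit from scikit-learn. This is a generic compressive-sensing illustration, not the authors' exact dictionary construction (their atoms also model attenuation and superposition), and all names are illustrative.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def pulse_dictionary(pulse, signal_len):
            """Dictionary whose columns are time-shifted copies of the emission pulse."""
            atoms = np.zeros((signal_len, signal_len))
            for shift in range(signal_len):
                end = min(signal_len, shift + len(pulse))
                atoms[shift:end, shift] = pulse[: end - shift]
            return atoms

        def compress_and_recover(echo, pulse, n_measurements, n_nonzero, seed=None):
            rng = np.random.default_rng(seed)
            d = pulse_dictionary(np.asarray(pulse, float), len(echo))
            phi = rng.standard_normal((n_measurements, len(echo)))        # random measurement matrix
            y = phi @ np.asarray(echo, float)                             # sub-Nyquist samples
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
            omp.fit(phi @ d, y)                                           # solve y ~= (Phi D) x with sparse x
            return d @ omp.coef_                                          # reconstructed echo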

  9. Comparison of the biometric formulas used for applanation A-scan ultrasound biometry.

    Science.gov (United States)

    Özcura, Fatih; Aktaş, Serdar; Sağdık, Hacı Murat; Tetikoğlu, Mehmet

    2016-10-01

    The purpose of the study was to compare the accuracy of various biometric formulas for predicting postoperative refraction determined using applanation A-scan ultrasound. This retrospective comparative study included 485 eyes that underwent uneventful phacoemulsification with intraocular lens (IOL) implantation. Applanation A-scan ultrasound biometry and postoperative manifest refraction were obtained in all eyes. Biometric data were entered into each of the five IOL power calculation formulas: SRK-II, SRK/T, Holladay I, Hoffer Q, and Binkhorst II. All eyes were divided into three groups according to axial length: short (≤22.0 mm), average (22.0-25.0 mm), and long (≥25.0 mm) eyes. The postoperative spherical equivalent was calculated and compared with the predicted refractive error using each biometric formula. The results showed that all formulas had significantly lower mean absolute error (MAE) in comparison with Binkhorst II formula (P < 0.01). The lowest MAE was obtained with the SRK-II for average (0.49 ± 0.40 D) and short (0.67 ± 0.54 D) eyes and the SRK/T for long (0.61 ± 0.50 D) eyes. The highest postoperative hyperopic shift was seen with the SRK-II for average (46.8 %), short (28.1 %), and long (48.4 %) eyes. The highest postoperative myopic shift was seen with the Holladay I for average (66.4 %) and long (71.0 %) eyes and the SRK/T for short eyes (80.6 %). In conclusion, the SRK-II formula produced the lowest MAE in average and short eyes and the SRK/T formula produced the lowest MAE in long eyes. The SRK-II has the highest postoperative hyperopic shift in all eyes. The highest postoperative myopic shift is with the Holladay I for average and long eyes and SRK/T for short eyes.
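
    For context, the SRK family of formulas compared above is built on the regression P = A − 0.9K − 2.5L (IOL power P, A-constant A, mean keratometry K in diopters, axial length L in mm), with SRK II adjusting the A-constant by axial-length band. The sketch below uses the commonly quoted band adjustments, which should be verified against the primary formula publications; it is illustrative only and not intended for clinical use.

        def srk2_iol_power(axial_length_mm, mean_k_diopters, a_constant):
            """SRK II style calculation: P = A1 - 0.9*K - 2.5*L, where A1 is the
            axial-length-adjusted A-constant (band values as commonly quoted)."""
            l, k = axial_length_mm, mean_k_diopters
            if l < 20.0:
                a1 = a_constant + 3.0
            elif l < 21.0:
                a1 = a_constant + 2.0
            elif l < 22.0:
                a1 = a_constant + 1.0
            elif l <= 24.5:
                a1 = a_constant
            else:
                a1 = a_constant - 0.5
            return a1 - 0.9 * k - 2.5 * l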

  10. The AFGL (Air Force Geophysics Laboratory) Absolute Gravity System’s Error Budget Revisted.

    Science.gov (United States)

    1985-05-08

    also be induced by equipment not associated with the system. A systematic bias of 68 µgal was observed by the Istituto di Metrologia "G. Colonnetti" (IMGC, Torino, Italy). Measurements were made with three Model D and three Model G LaCoste-Romberg gravity meters. These instruments were operated by the following agencies

  11. Informing the Human Plasma Protein Binding of ...

    Science.gov (United States)

    The free fraction of a xenobiotic in plasma (Fub) is an important determinant of chemical absorption, distribution, metabolism, elimination, and toxicity, yet experimental plasma protein binding data is scarce for environmentally relevant chemicals. The presented work explores the merit of utilizing available pharmaceutical data to predict Fub for environmentally relevant chemicals via machine learning techniques. Quantitative structure-activity relationship (QSAR) models were constructed with k nearest neighbors (kNN), support vector machines (SVM), and random forest (RF) machine learning algorithms from a training set of 1045 pharmaceuticals. The models were then evaluated with independent test sets of pharmaceuticals (200 compounds) and environmentally relevant ToxCast chemicals (406 total, in two groups of 238 and 168 compounds). The selection of a minimal feature set of 10-15 2D molecular descriptors allowed for both informative feature interpretation and practical applicability domain assessment via a bounded box of descriptor ranges and principal component analysis. The diverse pharmaceutical and environmental chemical sets exhibit similarities in terms of chemical space (99-82% overlap), as well as comparable bias and variance in constructed learning curves. All the models exhibit significant predictability with mean absolute errors (MAE) in the range of 0.10-0.18 Fub. The models performed best for highly bound chemicals (MAE 0.07-0.12), neutrals (MAE 0
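
    A minimal scikit-learn sketch of the workflow summarized above: train a random-forest regressor on 2D molecular descriptors from pharmaceutical data, keep a simple bounding-box applicability domain, and report the MAE on an external set. Descriptor calculation (e.g., with a cheminformatics toolkit) is outside the sketch, and all names and settings are illustrative rather than those of the study.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_absolute_error

        def train_fub_model(x_train, y_train):
            """x_train: 2D molecular descriptors; y_train: experimental fraction unbound (Fub)."""
            model = RandomForestRegressor(n_estimators=500, random_state=0)
            model.fit(x_train, y_train)
            bounds = (x_train.min(axis=0), x_train.max(axis=0))    # bounding-box applicability domain
            return model, bounds

        def evaluate_fub_model(model, bounds, x_test, y_test):
            lower, upper = bounds
            in_domain = np.all((x_test >= lower) & (x_test <= upper), axis=1)
            pred = np.clip(model.predict(x_test), 0.0, 1.0)         # Fub is a fraction in [0, 1]
            return mean_absolute_error(y_test, pred), float(in_domain.mean())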

  12. Wind power application research on the fusion of the determination and ensemble prediction

    Science.gov (United States)

    Lan, Shi; Lina, Xu; Yuzhu, Hao

    2017-07-01

    The fused product of wind speed for the wind farm is designed through the use of wind speed products of ensemble prediction from the European Centre for Medium-Range Weather Forecasts (ECMWF) and professional numerical model products on wind power based on Mesoscale Model 5 (MM5) and the Beijing Rapid Update Cycle (BJ-RUC), which are suitable for short-term wind power forecasting and electric dispatch. The single-valued forecast is formed by calculating different ensemble statistics of the Bayesian probabilistic forecast, which represents the uncertainty of the ECMWF ensemble prediction. An autoregressive integrated moving average (ARIMA) model is used to improve the time resolution of the single-valued forecast, and based on Bayesian model averaging (BMA) and the deterministic numerical model prediction, the optimal wind speed forecasting curve and its confidence interval are provided. The results show that the fusion forecast clearly improves accuracy relative to the existing numerical forecasting products. Compared with the existing 0-24 h deterministic forecast in the validation period, the mean absolute error (MAE) is decreased by 24.3% and the correlation coefficient (R) is increased by 12.5%. In comparison with the ECMWF ensemble forecast, the MAE is reduced by 11.7% and R is increased by 14.5%. Additionally, the MAE did not increase with the forecast lead time.

  13. Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements

    Science.gov (United States)

    Anthony, Robert E.; Ringler, Adam; Wilson, David

    2018-01-01

    The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.

  14. Simplified fringe order correction for absolute phase maps recovered with multiple-spatial-frequency fringe projections

    International Nuclear Information System (INIS)

    Ding, Yi; Peng, Kai; Lu, Lei; Zhong, Kai; Zhu, Ziqi

    2017-01-01

    Various kinds of fringe order errors may occur in the absolute phase maps recovered with multi-spatial-frequency fringe projections. In existing methods, multiple successive pixels corrupted by fringe order errors are detected and corrected pixel by pixel with repeated searches, which is inefficient for applications. To improve the efficiency of correcting multiple successive fringe order errors, in this paper we propose a method that simplifies error detection and correction by exploiting the stepwise increasing property of the fringe order. In the proposed method, the numbers of pixels in each step are estimated to find the possible true fringe order values, so that repeating the search when detecting multiple successive errors can be avoided for efficient error correction. The effectiveness of our proposed method is validated by experimental results. (paper)

  15. Sea surface temperature predictions using a multi-ocean analysis ensemble scheme

    Science.gov (United States)

    Zhang, Ying; Zhu, Jieshun; Li, Zhongxian; Chen, Haishan; Zeng, Gang

    2017-08-01

    This study examined the global sea surface temperature (SST) predictions by a so-called multiple-ocean analysis ensemble (MAE) initialization method which was applied in the National Centers for Environmental Prediction (NCEP) Climate Forecast System Version 2 (CFSv2). Different from most operational climate prediction practices which are initialized by a specific ocean analysis system, the MAE method is based on multiple ocean analyses. In the paper, the MAE method was first justified by analyzing the ocean temperature variability in four ocean analyses which all are/were applied for operational climate predictions either at the European Centre for Medium-range Weather Forecasts or at NCEP. It was found that these systems exhibit substantial uncertainties in estimating the ocean states, especially at the deep layers. Further, a set of MAE hindcasts was conducted based on the four ocean analyses with CFSv2, starting from each April during 1982-2007. The MAE hindcasts were verified against a subset of hindcasts from the NCEP CFS Reanalysis and Reforecast (CFSRR) Project. Comparisons suggested that MAE shows better SST predictions than CFSRR over most regions where ocean dynamics plays a vital role in SST evolutions, such as the El Niño and Atlantic Niño regions. Furthermore, significant improvements were also found in summer precipitation predictions over the equatorial eastern Pacific and Atlantic oceans, for which the local SST prediction improvements should be responsible. The prediction improvements by MAE imply a problem for most current climate predictions which are based on a specific ocean analysis system. That is, their predictions would drift towards states biased by errors inherent in their ocean initialization system, and thus have large prediction errors. In contrast, MAE arguably has an advantage by sampling such structural uncertainties, and could efficiently cancel these errors out in their predictions.

  16. Medication errors in chemotherapy preparation and administration: a survey conducted among oncology nurses in Turkey.

    Science.gov (United States)

    Ulas, Arife; Silay, Kamile; Akinci, Sema; Dede, Didem Sener; Akinci, Muhammed Bulent; Sendur, Mehmet Ali Nahit; Cubukcu, Erdem; Coskun, Hasan Senol; Degirmenci, Mustafa; Utkan, Gungor; Ozdemir, Nuriye; Isikdogan, Abdurrahman; Buyukcelik, Abdullah; Inanc, Mevlude; Bilici, Ahmet; Odabasi, Hatice; Cihan, Sener; Avci, Nilufer; Yalcin, Bulent

    2015-01-01

    Medication errors in oncology may cause severe clinical problems due to low therapeutic indices and high toxicity of chemotherapeutic agents. We aimed to investigate unintentional medication errors and underlying factors during chemotherapy preparation and administration based on a systematic survey conducted to reflect oncology nurses' experience. This study was conducted in 18 adult chemotherapy units with the volunteer participation of 206 nurses. A survey was developed by the primary investigators, and medication errors (MAEs) were defined as preventable errors during prescription of medication, ordering, preparation or administration. The survey consisted of 4 parts: demographic features of nurses; workload of chemotherapy units; errors and their estimated monthly number during chemotherapy preparation and administration; and evaluation of the possible factors responsible for MAEs. The survey was conducted by face-to-face interview, and data analyses were performed with descriptive statistics. Chi-square or Fisher exact tests were used for a comparative analysis of categorical data. Some 83.4% of the 210 nurses reported one or more errors during chemotherapy preparation and administration. Prescribing or ordering wrong doses by physicians (65.7%) and noncompliance with administration sequences during chemotherapy administration (50.5%) were the most common errors. The most common estimated average monthly error was not following the administration sequence of the chemotherapeutic agents (4.1 times/month, range 1-20). The most important underlying reasons for medication errors were heavy workload (49.7%) and insufficient number of staff (36.5%). Our findings suggest that the probability of medication error is very high during chemotherapy preparation and administration, with the most common involving prescribing and ordering errors. Further studies must address strategies to minimize medication errors in patients receiving chemotherapy, determine sufficient protective measures

  17. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    Energy Technology Data Exchange (ETDEWEB)

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly

  18. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    International Nuclear Information System (INIS)

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D

    2015-01-01

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly

  19. Absolute measurement of the $\\beta\\alpha$ decay of $^{16}$N

    CERN Multimedia

    We propose to study the $\\beta$-decay of $^{16}$N at ISOLDE with the aim of determining the branching ratio for $\\beta\\alpha$ decay on an absolute scale. There are indications that the previously measured branching ratio is in error by an amount significantly larger than the quoted uncertainty. This limits the precision with which the S-factor of the astrophysically important $^{12}$C($\\alpha, \\gamma)^{16}$O reaction can be determined.

  20. Recursive wind speed forecasting based on Hammerstein Auto-Regressive model

    International Nuclear Information System (INIS)

    Ait Maatallah, Othman; Achuthan, Ajit; Janoyan, Kerop; Marzocca, Pier

    2015-01-01

    Highlights: • Developed a new recursive WSF model for 1–24 h horizon based on Hammerstein model. • Nonlinear HAR model successfully captured chaotic dynamics of wind speed time series. • Recursive WSF intrinsic error accumulation corrected by applying rotation. • Model verified for real wind speed data from two sites with different characteristics. • HAR model outperformed both ARIMA and ANN models in terms of accuracy of prediction. - Abstract: A new Wind Speed Forecasting (WSF) model, suitable for a short term 1–24 h forecast horizon, is developed by adapting the Hammerstein model to an autoregressive approach. The model is applied to real data collected for a period of three years (2004–2006) from two different sites. The performance of the HAR model is evaluated by comparing its prediction with the classical Autoregressive Integrated Moving Average (ARIMA) model and a multi-layer perceptron Artificial Neural Network (ANN). Results show that the HAR model outperforms both the ARIMA model and the ANN model in terms of root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). When compared to the conventional models, the new HAR model can better capture various wind speed characteristics, including the asymmetric (non-Gaussian) wind speed distribution, the non-stationary time series profile, and the chaotic dynamics. The new model is beneficial for various applications in the renewable energy area, particularly for power scheduling
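
    For reference, the three accuracy measures used above to rank the HAR, ARIMA and ANN forecasts are standard point-forecast error metrics. A minimal sketch of how they are typically computed follows; the series values are illustrative and are not data from the study.

```python
import numpy as np

def forecast_errors(actual, predicted):
    """Return RMSE, MAE and MAPE for paired actual/predicted series."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    rmse = np.sqrt(np.mean(err ** 2))             # root mean square error
    mae = np.mean(np.abs(err))                    # mean absolute error
    mape = 100.0 * np.mean(np.abs(err / actual))  # mean absolute percentage error (%)
    return rmse, mae, mape

# Illustrative wind speeds (m/s); not data from the study.
observed = [5.2, 6.1, 4.8, 7.3, 6.5]
forecast = [5.0, 6.4, 5.1, 7.0, 6.2]
print(forecast_errors(observed, forecast))
```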

  1. Forecasting Energy CO2 Emissions Using a Quantum Harmony Search Algorithm-Based DMSFE Combination Model

    Directory of Open Access Journals (Sweden)

    Xingsheng Gu

    2013-03-01

    Full Text Available The accurate forecasting of carbon dioxide (CO2) emissions from fossil fuel energy consumption is a key requirement for making energy policy and environmental strategy. In this paper, a novel quantum harmony search (QHS) algorithm-based discounted mean square forecast error (DMSFE) combination model is proposed. In the DMSFE combination forecasting model, almost all investigations assign the discounting factor (β) arbitrarily, since β varies between 0 and 1, and adopt one value for all individual models and forecasting periods. The original method does not consider the influences of the individual model and the forecasting period. This work contributes by changing β from one value to a matrix that takes the different models and forecasting periods into consideration, and by presenting a way of searching for the optimal β values with the QHS algorithm through optimizing the mean absolute percent error (MAPE) objective function. The QHS algorithm-based optimized DMSFE combination forecasting model is established and tested by forecasting the CO2 emissions of the world's top-5 CO2 emitters. Evaluation indexes such as MAPE, root mean squared error (RMSE) and mean absolute error (MAE) are employed to test the performance of the presented approach. The empirical analyses confirm the validity of the presented method and show that the forecasting accuracy can be increased to a certain degree.
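
    For context, the classical DMSFE combination assigns each individual model a weight inversely proportional to its discounted squared forecast errors. The sketch below illustrates that weighting with a single scalar β; the paper's contribution of replacing the scalar with a β matrix tuned by the QHS algorithm is not reproduced here, and the error values and model count are illustrative assumptions.

```python
import numpy as np

def dmsfe_weights(errors, beta):
    """Discounted MSFE combination weights.

    errors: array of shape (n_models, n_periods) with forecast errors e_{i,t}
    beta:   discounting factor in (0, 1]; recent periods receive more weight.
    """
    n_models, n_periods = errors.shape
    # Discount factors beta^(T - t), so the most recent period gets beta^0 = 1.
    discounts = beta ** np.arange(n_periods - 1, -1, -1)
    dmsfe = (discounts * errors ** 2).sum(axis=1)   # discounted MSFE per model
    inv = 1.0 / dmsfe
    return inv / inv.sum()                          # normalized combination weights

# Illustrative errors for three individual CO2-emission forecasting models.
e = np.array([[0.8, 0.5, 0.6],
              [1.2, 1.0, 0.9],
              [0.4, 0.7, 0.5]])
w = dmsfe_weights(e, beta=0.9)
print(w, w.sum())   # weights sum to 1; combined forecast = w @ individual forecasts
```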

  2. Encasing the Absolutes

    Directory of Open Access Journals (Sweden)

    Uroš Martinčič

    2014-05-01

    Full Text Available The paper explores the issue of structure and case in English absolute constructions, whose subjects are deduced by several descriptive grammars as being in the nominative case due to its supposed neutrality in terms of register. This deduction is countered by systematic accounts presented within the framework of the Minimalist Program which relate the case of absolute constructions to specific grammatical factors. Each proposal is shown as an attempt of analysing absolute constructions as basic predication structures, either full clauses or small clauses. I argue in favour of the small clause approach due to its minimal reliance on transformations and unique stipulations. Furthermore, I propose that small clauses project a singular category, and show that the use of two cases in English absolute constructions can be accounted for if they are analysed as depictive phrases, possibly selected by prepositions. The case of the subject in absolutes is shown to be a result of syntactic and non-syntactic factors. I thus argue in accordance with Minimalist goals that syntactic case does not exist, attributing its role in absolutes to other mechanisms.

  3. Micro ionization chamber dosimetry in IMRT verification: Clinical implications of dosimetric errors in the PTV

    International Nuclear Information System (INIS)

    Sanchez-Doblado, Francisco; Capote, Roberto; Rosello, Joan V.; Leal, Antonio; Lagares, Juan I.; Arrans, Rafael; Hartmann, Guenther H.

    2005-01-01

    Background and purpose: Absolute dose measurements for Intensity Modulated Radiotherapy (IMRT) beamlets are difficult due to the lack of lateral electron equilibrium. Recently we found that the absolute dosimetry in the penumbra region of the IMRT beamlet can suffer from significant errors (Capote et al., Med Phys 31 (2004) 2416-2422). This work aims to estimate the error made when measuring the Planning Target Volume (PTV) absolute dose with a micro ion chamber (μIC) in a typical IMRT treatment. The dose error comes from the assumption that the dosimetric parameters determining the absolute dose are the same as for the reference conditions. Materials and Methods: Two IMRT treatment plans for a common prostate carcinoma case, derived by forward and inverse optimisation, were considered. Detailed geometrical simulation of the μIC and the dose verification set-up was performed. The Monte Carlo (MC) simulation allows us to calculate the delivered dose to water and the dose delivered to the active volume of the ion chamber. However, the measured dose in water is usually derived from chamber readings assuming reference conditions. The MC simulation provides the needed correction factors for ion chamber dosimetry in non-reference conditions. Results: Dose calculations were carried out for some representative beamlets, a combination of segments and for the delivered IMRT treatments. We observe that the largest dose errors (i.e. the largest correction factors) correspond to the IMRT beamlets contributing least to the total dose delivered to the ionization chamber within the PTV. Conclusion: The clinical impact of the calculated dose error in the PTV measured dose was found to be negligible for the studied IMRT treatments

  4. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

    Science.gov (United States)

    Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

    2018-02-01

    1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end of intravenous infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed using published Cmax data by application of the regression equations. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE)/root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference), and the models predicted AUCinf with a RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single time point strategy of using Cmax (i.e. end of the 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
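
    A minimal sketch of the single-time-point idea: fit linear, log-linear and power relationships between Cmax and AUCinf on paired data, then predict AUCinf from a new Cmax. The paired values below are invented for illustration and are not the study's 21 subject pairs.

```python
import numpy as np

# Illustrative paired data (not from the study): Cmax (ug/mL) and AUCinf (ug*h/mL).
cmax = np.array([250.0, 280.0, 300.0, 320.0, 350.0, 400.0])
auc  = np.array([18000., 20500., 21800., 23500., 25800., 29500.])

# Linear:     AUC = a*Cmax + b
a_lin, b_lin = np.polyfit(cmax, auc, 1)
# Log-linear: ln(AUC) = a*Cmax + b
a_log, b_log = np.polyfit(cmax, np.log(auc), 1)
# Power:      AUC = c*Cmax**k, fitted as ln(AUC) = k*ln(Cmax) + ln(c)
k_pow, ln_c  = np.polyfit(np.log(cmax), np.log(auc), 1)

new_cmax = 310.0
predictions = {
    "linear":     a_lin * new_cmax + b_lin,
    "log-linear": np.exp(a_log * new_cmax + b_log),
    "power":      np.exp(ln_c) * new_cmax ** k_pow,
}
print(predictions)   # fold difference would then be observed AUC / predicted AUC
```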

  5. A novel capacitive absolute positioning sensor based on time grating with nanometer resolution

    Science.gov (United States)

    Pu, Hongji; Liu, Hongzhong; Liu, Xiaokang; Peng, Kai; Yu, Zhicheng

    2018-05-01

    The present work proposes a novel capacitive absolute positioning sensor based on time grating. The sensor includes a fine incremental-displacement measurement component combined with a coarse absolute-position measurement component to obtain high-resolution absolute positioning measurements. A single row type sensor was proposed to achieve fine displacement measurement, which combines the two electrode rows of a previously proposed double-row type capacitive displacement sensor based on time grating into a single row. To achieve absolute positioning measurement, the coarse measurement component is designed as a single-row type displacement sensor employing a single spatial period over the entire measurement range. In addition, this component employs a rectangular induction electrode and four groups of orthogonal discrete excitation electrodes with half-sinusoidal envelope shapes, which were formed by alternately extending the rectangular electrodes of the fine measurement component. The fine and coarse measurement components are tightly integrated to form a compact absolute positioning sensor. A prototype sensor was manufactured using printed circuit board technology for testing and optimization of the design in conjunction with simulations. Experimental results show that the prototype sensor achieves a ±300 nm measurement accuracy with a 1 nm resolution over a displacement range of 200 mm when employing error compensation. The proposed sensor is an excellent alternative to presently available long-range absolute nanometrology sensors owing to its low cost, simple structure, and ease of manufacturing.

  6. [Medication adverse events: Impact of pharmaceutical consultations during the hospitalization of patients].

    Science.gov (United States)

    Santucci, R; Levêque, D; Herbrecht, R; Fischbach, M; Gérout, A C; Untereiner, C; Bouayad-Agha, K; Couturier, F

    2014-11-01

    Medication-related iatrogenic events are responsible for nearly one in five iatrogenic events. The main purpose of this prospective multicenter study was to determine the effect of pharmaceutical consultations on the occurrence of medication adverse events during hospitalization (MAE). The other objectives were to study the impact of age, of the number of medications, and of pharmaceutical consultations on the risk of MAE. The pharmaceutical consultation is associated with a complete reassessment, done by both a physician and a pharmacist, of the home medication, the hospital treatment (3 days after admission), the treatment during chemotherapy, and/or the treatment when the patient goes back home. Each MAE prompted advice for the patient, additional clinical-biological monitoring and/or prescription changes. Among the 318 patients, 217 (68%) had 1 or more clinically important MAE (89% drug-drug interaction, 8% dosing error, 2% indication error, 1% risk behavior). The patients received 1121 pharmaceutical consultations (3.2±1.4/patient). The pharmaceutical consultations divided the risk of MAE by 2.34 (unadjusted incidence ratio, P≤0.05). Each consultation decreased the risk of MAE by 24%. Moreover, adding one medication increased the risk of MAE by 14 to 30% in this population. Pharmaceutical consultations during the hospital stay could significantly reduce the number of medication adverse events. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  7. Absolute advantage

    NARCIS (Netherlands)

    J.G.M. van Marrewijk (Charles)

    2008-01-01

    textabstractA country is said to have an absolute advantage over another country in the production of a good or service if it can produce that good or service using fewer real resources. Equivalently, using the same inputs, the country can produce more output. The concept of absolute advantage can

  8. The Drag-based Ensemble Model (DBEM) for Coronal Mass Ejection Propagation

    Science.gov (United States)

    Dumbović, Mateja; Čalogović, Jaša; Vršnak, Bojan; Temmer, Manuela; Mays, M. Leila; Veronig, Astrid; Piantschitsch, Isabell

    2018-02-01

    The drag-based model for heliospheric propagation of coronal mass ejections (CMEs) is a widely used analytical model that can predict CME arrival time and speed at a given heliospheric location. It is based on the assumption that the propagation of CMEs in interplanetary space is solely under the influence of magnetohydrodynamical drag, where CME propagation is determined based on CME initial properties as well as the properties of the ambient solar wind. We present an upgraded version, the drag-based ensemble model (DBEM), that covers ensemble modeling to produce a distribution of possible ICME arrival times and speeds. Multiple runs using uncertainty ranges for the input values can be performed in almost real-time, within a few minutes. This allows us to define the most likely ICME arrival times and speeds, quantify prediction uncertainties, and determine forecast confidence. The performance of the DBEM is evaluated and compared to that of ensemble WSA-ENLIL+Cone model (ENLIL) using the same sample of events. It is found that the mean error is ME = ‑9.7 hr, mean absolute error MAE = 14.3 hr, and root mean square error RMSE = 16.7 hr, which is somewhat higher than, but comparable to ENLIL errors (ME = ‑6.1 hr, MAE = 12.8 hr and RMSE = 14.4 hr). Overall, DBEM and ENLIL show a similar performance. Furthermore, we find that in both models fast CMEs are predicted to arrive earlier than observed, most likely owing to the physical limitations of models, but possibly also related to an overestimation of the CME initial speed for fast CMEs.
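
    The deterministic core of the drag-based model has a closed-form solution; the DBEM wraps many such runs with perturbed inputs. The sketch below follows the usual drag-based-model assumptions (constant drag parameter γ and constant solar wind speed w); the take-off distance, input values and their spreads are illustrative assumptions, not a real event.

```python
import numpy as np

AU_KM = 1.496e8            # 1 astronomical unit in km
R_START = 20 * 6.96e5      # assumed take-off distance of ~20 solar radii, in km

def dbm_arrival(v0, w, gamma, r0=R_START, r_target=AU_KM):
    """Closed-form drag-based model: arrival time (hours) and speed (km/s) at r_target."""
    s = 1.0 if v0 >= w else -1.0
    def r(t):  # heliocentric distance after t seconds
        return r0 + w * t + (s / gamma) * np.log(1.0 + s * gamma * (v0 - w) * t)
    t, dt = 0.0, 600.0
    while r(t) < r_target:         # coarse time march on the analytic solution
        t += dt
    v = w + (v0 - w) / (1.0 + s * gamma * (v0 - w) * t)
    return t / 3600.0, v

# Ensemble runs with perturbed inputs (all values illustrative).
rng = np.random.default_rng(0)
arrival_times = []
for _ in range(500):
    t_arr, _v_arr = dbm_arrival(v0=rng.normal(900.0, 100.0),              # CME speed, km/s
                                w=rng.normal(400.0, 50.0),                # solar wind speed, km/s
                                gamma=max(rng.normal(2e-8, 5e-9), 1e-9))  # drag parameter, km^-1
    arrival_times.append(t_arr)
print(np.mean(arrival_times), np.std(arrival_times))   # likely arrival time (h) and spread
```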

  9. Comparison of intraocular lens power prediction using immersion ultrasound and optical biometry with and without formula optimization.

    Science.gov (United States)

    Nemeth, Gabor; Nagy, Attila; Berta, Andras; Modis, Laszlo

    2012-09-01

    Comparison of postoperative refraction results using ultrasound biometry with a closed immersion shell and optical biometry. Three hundred and sixty-four eyes of 306 patients (age: 70.6 ± 12.8 years) underwent cataract surgery where intraocular lenses calculated by the SRK/T formula were implanted. In 159 cases immersion ultrasonic biometry was used, and in 205 eyes optical biometry. Differences between predicted and actual postoperative refractions were calculated both prior to and after optimization with the SRK/T formula, after which we analysed the same data with the Holladay, Haigis, and Hoffer-Q formulas. Mean absolute error (MAE) and the percentage rate of patients within ±0.5 and ±1.0 D difference in the predicted error were calculated with these four formulas. MAE was 0.5-0.7 D for both methods with the SRK/T, Holladay, and Hoffer-Q formulas, but higher with the Haigis formula. With no optimization, 60-65 % of the patients were under 0.5 D error in the immersion group (except for the Haigis formula). Using the optical method, this value was slightly higher (62-67 %); however, in this case, the Haigis formula also did not perform well (45 %). Refraction results significantly improved with the Holladay, Hoffer-Q, and Haigis formulas in both groups. The rate of patients under 0.5 D error increased to 65 % with the immersion technique, and up to 80 % with the optical one. According to our results, optical biometry offers only slightly better outcomes compared to those of the immersion shell when non-optimized formulas are used. However, with new generation formulas and both methods, the optimization of IOL constants gives significantly better results.

  10. A digital, constant-frequency pulsed phase-locked-loop instrument for real-time, absolute ultrasonic phase measurements

    Science.gov (United States)

    Haldren, H. A.; Perey, D. F.; Yost, W. T.; Cramer, K. E.; Gupta, M. C.

    2018-05-01

    A digitally controlled instrument for conducting single-frequency and swept-frequency ultrasonic phase measurements has been developed based on a constant-frequency pulsed phase-locked-loop (CFPPLL) design. This instrument uses a pair of direct digital synthesizers to generate an ultrasonically transceived tone-burst and an internal reference wave for phase comparison. Real-time, constant-frequency phase tracking in an interrogated specimen is possible with a resolution of 0.000 38 rad (0.022°), and swept-frequency phase measurements can be obtained. Using phase measurements, an absolute thickness in borosilicate glass is presented to show the instrument's efficacy, and these results are compared to conventional ultrasonic pulse-echo time-of-flight (ToF) measurements. The newly developed instrument predicted the thickness with a mean error of -0.04 μm and a standard deviation of error of 1.35 μm. Additionally, the CFPPLL instrument shows a lower measured phase error in the absence of changing temperature and couplant thickness than high-resolution cross-correlation ToF measurements at a similar signal-to-noise ratio. By showing higher accuracy and precision than conventional pulse-echo ToF measurements and lower phase errors than cross-correlation ToF measurements, the new digitally controlled CFPPLL instrument provides high-resolution absolute ultrasonic velocity or path-length measurements in solids or liquids, as well as tracking of material property changes with high sensitivity. The ability to obtain absolute phase measurements allows for many more applications than were possible with previous ultrasonic pulsed phase-locked loop instruments. In addition to improved resolution, swept-frequency phase measurements add useful capability in measuring properties of layered structures, such as bonded joints, or materials which exhibit non-linear frequency-dependent behavior, such as dispersive media.

  11. A New Method for a Piezoelectric Energy Harvesting System Using a Backtracking Search Algorithm-Based PI Voltage Controller

    Directory of Open Access Journals (Sweden)

    Mahidur R. Sarker

    2016-09-01

    Full Text Available This paper presents a new method for a vibration-based piezoelectric energy harvesting system using a backtracking search algorithm (BSA)-based proportional-integral (PI) voltage controller. This technique eliminates the exhaustive conventional trial-and-error procedure for obtaining optimized parameter values of the proportional gain (Kp) and integral gain (Ki) for PI voltage controllers. The generated estimated values of Kp and Ki are used in the PI voltage controller that is developed through the BSA optimization technique. In this study, the mean absolute error (MAE) is used as an objective function to minimize the output error of a piezoelectric energy harvesting system (PEHS). The model for the PEHS is designed and analyzed using the BSA optimization technique. The BSA-based PI voltage controller of the PEHS produces a significant improvement in minimizing the output error of the converter and a robust, regulated pulse-width modulation (PWM) signal to drive a MOSFET switch, with the best response in terms of rise time and settling time under various load conditions.
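
    To make the objective concrete, the sketch below evaluates the MAE of the tracking error for one candidate (Kp, Ki) pair on a simple first-order plant; this plant model, its parameters and the candidate gains are illustrative assumptions, and the BSA search itself (which would repeatedly call such an objective) is not reproduced.

```python
def pi_tracking_mae(kp, ki, setpoint=5.0, n_steps=300, dt=0.01):
    """MAE of the tracking error for a discrete PI controller on an assumed
    first-order plant dy/dt = -a*y + b*u (stand-in for the converter stage)."""
    a, b = 5.0, 4.0
    y, integral, abs_err = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        e = setpoint - y
        integral += e * dt
        u = kp * e + ki * integral      # PI control law
        y += dt * (-a * y + b * u)      # forward-Euler plant update
        abs_err += abs(e)
    return abs_err / n_steps            # objective a search algorithm would minimize

# Candidate gain pairs that an optimizer such as BSA might propose.
for kp, ki in [(1.0, 2.0), (2.0, 5.0), (4.0, 10.0)]:
    print(kp, ki, pi_tracking_mae(kp, ki))
```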

  12. Predictive Models for Photovoltaic Electricity Production in Hot Weather Conditions

    Directory of Open Access Journals (Sweden)

    Jabar H. Yousif

    2017-07-01

    Full Text Available The process of finding a correct forecast equation for photovoltaic electricity production from renewable sources is an important matter, since knowing the factors affecting the increase in the proportion of renewable energy production and reducing the cost of the product has economic and scientific benefits. This paper proposes a mathematical model for forecasting energy production in photovoltaic (PV) panels based on a self-organizing feature map (SOFM) model. The proposed model is compared with other models, including the multi-layer perceptron (MLP) and support vector machine (SVM) models. Moreover, a mathematical model based on a polynomial function for fitting the desired output is proposed. Different practical measurement methods are used to validate the findings of the proposed neural and mathematical models such as mean square error (MSE), mean absolute error (MAE), correlation (R), and coefficient of determination (R2). The proposed SOFM model achieved a final MSE of 0.0007 in the training phase and 0.0005 in the cross-validation phase. In contrast, the SVM model resulted in a small MSE value equal to 0.0058, while the MLP model achieved a final MSE of 0.026 with a correlation coefficient of 0.9989, which indicates a strong relationship between input and output variables. The proposed SOFM model closely fits the desired results based on the R2 value, which is equal to 0.9555. Finally, the comparison results of MAE for the three models show that the SOFM model achieved a best result of 0.36156, whereas the SVM and MLP models yielded 4.53761 and 3.63927, respectively. A small MAE value indicates that the output of the SOFM model closely fits the actual results and predicts the desired output.

  13. Mapping health assessment questionnaire disability index (HAQ-DI) score, pain visual analog scale (VAS), and disease activity score in 28 joints (DAS28) onto the EuroQol-5D (EQ-5D) utility score with the KORean Observational study Network for Arthritis (KORONA) registry data.

    Science.gov (United States)

    Kim, Hye-Lin; Kim, Dam; Jang, Eun Jin; Lee, Min-Young; Song, Hyun Jin; Park, Sun-Young; Cho, Soo-Kyung; Sung, Yoon-Kyoung; Choi, Chan-Bum; Won, Soyoung; Bang, So-Young; Cha, Hoon-Suk; Choe, Jung-Yoon; Chung, Won Tae; Hong, Seung-Jae; Jun, Jae-Bum; Kim, Jinseok; Kim, Seong-Kyu; Kim, Tae-Hwan; Kim, Tae-Jong; Koh, Eunmi; Lee, Hwajeong; Lee, Hye-Soon; Lee, Jisoo; Lee, Shin-Seok; Lee, Sung Won; Park, Sung-Hoon; Shim, Seung-Cheol; Yoo, Dae-Hyun; Yoon, Bo Young; Bae, Sang-Cheol; Lee, Eui-Kyung

    2016-04-01

    The aim of this study was to estimate the mapping model for EuroQol-5D (EQ-5D) utility values using the health assessment questionnaire disability index (HAQ-DI), pain visual analog scale (VAS), and disease activity score in 28 joints (DAS28) in a large, nationwide cohort of rheumatoid arthritis (RA) patients in Korea. The KORean Observational study Network for Arthritis (KORONA) registry data on 3557 patients with RA were used. Data were randomly divided into a modeling set (80 % of the data) and a validation set (20 % of the data). The ordinary least squares (OLS), Tobit, and two-part model methods were employed to construct a model to map to the EQ-5D index. Using a combination of HAQ-DI, pain VAS, and DAS28, four model versions were examined. To evaluate the predictive accuracy of the models, the root-mean-square error (RMSE) and mean absolute error (MAE) were calculated using the validation dataset. A model that included HAQ-DI, pain VAS, and DAS28 produced the highest adjusted R2 as well as the lowest Akaike information criterion, RMSE, and MAE, regardless of the statistical method used, in the modeling set. The mapping equation of the OLS method is given as EQ-5D = 0.95 - 0.21 × HAQ-DI - 0.24 × pain VAS/100 - 0.01 × DAS28 (adjusted R2 = 57.6 %, RMSE = 0.1654 and MAE = 0.1222). Also in the validation set, the RMSE and MAE were shown to be the smallest. The model with HAQ-DI, pain VAS, and DAS28 showed the best performance, and this mapping model enables the estimation of an EQ-5D value for RA patients in whom utility values have not been measured.
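
    Applying the reported OLS mapping equation is straightforward; the sketch below uses the coefficients quoted in the record and evaluates RMSE and MAE against observed utilities, where the patient values shown are invented for illustration only.

```python
import numpy as np

def eq5d_from_ols(haq_di, pain_vas, das28):
    """OLS mapping reported in the record:
    EQ-5D = 0.95 - 0.21*HAQ-DI - 0.24*painVAS/100 - 0.01*DAS28."""
    return 0.95 - 0.21 * haq_di - 0.24 * pain_vas / 100.0 - 0.01 * das28

# Hypothetical validation patients: (HAQ-DI, pain VAS 0-100, DAS28, observed EQ-5D).
patients = np.array([
    [0.50, 30.0, 3.2, 0.78],
    [1.25, 55.0, 4.5, 0.52],
    [2.00, 70.0, 5.8, 0.35],
])
pred = eq5d_from_ols(patients[:, 0], patients[:, 1], patients[:, 2])
obs = patients[:, 3]
rmse = np.sqrt(np.mean((obs - pred) ** 2))   # root-mean-square error of the mapping
mae = np.mean(np.abs(obs - pred))            # mean absolute error of the mapping
print(pred, rmse, mae)
```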

  14. The Absolute Stability Analysis in Fuzzy Control Systems with Parametric Uncertainties and Reference Inputs

    Science.gov (United States)

    Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei

    This study analyzes the absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. Stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibriums of error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. In our work, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant with the parametric robust Popov criterion unlike previous works. Moreover, a fuzzy current controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is specified with various equilibrium points of view in the simulation example. Finally, the comparisons are also given to show the effectiveness of the analysis method.

  15. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    Science.gov (United States)

    Brunke, Heinz-Peter; Matzka, Jürgen

    2018-01-01

    At geomagnetic observatories the absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements establishes routines in magnetic observatories. The traditional measuring schema uses a fixed number of eight orientations (Jankowski et al., 1996). We present a numerical method allowing for the evaluation of an arbitrary number (minimum of five, as there are five independent parameters) of telescope orientations. Our method provides D, I and Z base values and their calculated error bars. A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method to also ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method. Based on the alternative evaluation method, a new, faster and less error-prone measuring schema is presented. It avoids needing to calculate the magnetic meridian prior to the inclination measurements. Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular. The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).

  16. Numerical evaluation of magnetic absolute measurements with arbitrarily distributed DI-fluxgate theodolite orientations

    Directory of Open Access Journals (Sweden)

    H.-P. Brunke

    2018-01-01

    Full Text Available At geomagnetic observatories the absolute measurements are needed to determine the calibration parameters of the continuously recording vector magnetometer (variometer). Absolute measurements are indispensable for determining the vector of the geomagnetic field over long periods of time. A standard DI (declination, inclination) measuring scheme for absolute measurements establishes routines in magnetic observatories. The traditional measuring schema uses a fixed number of eight orientations (Jankowski et al., 1996). We present a numerical method allowing for the evaluation of an arbitrary number (minimum of five, as there are five independent parameters) of telescope orientations. Our method provides D, I and Z base values and their calculated error bars. A general approach has significant advantages. Additional measurements may be seamlessly incorporated for higher accuracy. Individual erroneous readings are identified and can be discarded without invalidating the entire data set. A priori information can be incorporated. We expect the general method to also ease requirements for automated DI-flux measurements. The method can reveal certain properties of the DI theodolite which are not captured by the conventional method. Based on the alternative evaluation method, a new, faster and less error-prone measuring schema is presented. It avoids needing to calculate the magnetic meridian prior to the inclination measurements. Measurements in the vicinity of the magnetic equator are possible with theodolites and without a zenith ocular. The implementation of the method in MATLAB is available as source code at the GFZ Data Center (Brunke, 2017).

  17. A novel alkaloid isolated from Crotalaria paulina and identified by NMR and DFT calculations

    Science.gov (United States)

    Oliveira, Ramon Prata; Demuner, Antonio Jacinto; Alvarenga, Elson Santiago; Barbosa, Luiz Claudio Almeida; de Melo Silva, Thiago

    2018-01-01

    Pyrrolizidine alkaloids (PAs) are secondary metabolites found in Crotalaria genus and are known to have several biological activities. A novel macrocycle bislactone alkaloid, coined ethylcrotaline, was isolated and purified from the aerial parts of Crotalaria paulina. The novel macrocycle was identified with the aid of high resolution mass spectrometry and advanced nuclear magnetic resonance techniques. The relative stereochemistry of the alkaloid was defined by comparing the calculated quantum mechanical hydrogen and carbon chemical shifts of eight candidate structures with the experimental NMR data. The best fit between the eight candidate structures and the experimental NMR chemical shifts was defined by the DP4 statistical analyses and the Mean Absolute Error (MAE) calculations.

  18. Artificial Neural Network to Predict Vine Water Status Spatial Variability Using Multispectral Information Obtained from an Unmanned Aerial Vehicle (UAV

    Directory of Open Access Journals (Sweden)

    Tomas Poblete

    2017-10-01

    Full Text Available Water stress, which affects yield and wine quality, is often evaluated using the midday stem water potential (Ψstem). However, this measurement is acquired on a per-plant basis and does not account for the assessment of vine water status spatial variability. The use of multispectral cameras mounted on an unmanned aerial vehicle (UAV) makes it possible to capture the variability of vine water stress in a whole-field scenario. It has been reported that conventional multispectral indices (CMI) that use information between 500–800 nm do not accurately predict plant water status since they are not sensitive to water content. The objective of this study was to develop artificial neural network (ANN) models derived from multispectral images to predict the Ψstem spatial variability of a drip-irrigated Carménère vineyard in Talca, Maule Region, Chile. The coefficients of determination (R2) obtained between ANN outputs and ground-truth measurements of Ψstem were between 0.56–0.87, with the best performance observed for the model that included the bands 550, 570, 670, 700 and 800 nm. Validation analysis indicated that the ANN model could estimate Ψstem with a mean absolute error (MAE) of 0.1 MPa, root mean square error (RMSE) of 0.12 MPa, and relative error (RE) of −9.1%. For the validation of the CMI, the MAE, RMSE and RE values were between 0.26–0.27 MPa, 0.32–0.34 MPa and −24.2–25.6%, respectively.

  19. Absolutely relative or relatively absolute: violations of value invariance in human decision making.

    Science.gov (United States)

    Teodorescu, Andrei R; Moran, Rani; Usher, Marius

    2016-02-01

    Making decisions based on relative rather than absolute information processing is tied to choice optimality via the accumulation of evidence differences and to canonical neural processing via accumulation of evidence ratios. These theoretical frameworks predict invariance of decision latencies to absolute intensities that maintain differences and ratios, respectively. While information about the absolute values of the choice alternatives is not necessary for choosing the best alternative, it may nevertheless hold valuable information about the context of the decision. To test the sensitivity of human decision making to absolute values, we manipulated the intensities of brightness stimuli pairs while preserving either their differences or their ratios. Although asked to choose the brighter alternative relative to the other, participants responded faster to higher absolute values. Thus, our results provide empirical evidence for human sensitivity to task irrelevant absolute values indicating a hard-wired mechanism that precedes executive control. Computational investigations of several modelling architectures reveal two alternative accounts for this phenomenon, which combine absolute and relative processing. One account involves accumulation of differences with activation dependent processing noise and the other emerges from accumulation of absolute values subject to the temporal dynamics of lateral inhibition. The potential adaptive role of such choice mechanisms is discussed.

  20. ABNORMAL RETURN AND TRADING VOLUME ACTIVITY AROUND THE COLLAPSE OF FANNIE MAE AND FREDDIE MAC

    Directory of Open Access Journals (Sweden)

    Dyah Ani Pangastuti

    2017-03-01

    Full Text Available The global economic crisis was a disaster for all nations in the world because its impact hampered national economies. This event study examined whether there was an effect of the global economic crisis, preceded by the U.S. financial crisis triggered by the collapse of Fannie Mae and Freddie Mac in the property business (subprime mortgages) on September 7th, 2008. The study used a sample of publicly traded companies listed on the Indonesia Stock Exchange and included in the LQ-45 index in 2008. Hypothesis testing used a t-test on the average abnormal return and the average trading volume activity. Test results for the average abnormal return showed no significant differences before and after the subprime mortgage event. The test results for the average trading volume activity indicated the presence of a significant difference before and after the subprime mortgage event.

  1. Standard Error Computations for Uncertainty Quantification in Inverse Problems: Asymptotic Theory vs. Bootstrapping.

    Science.gov (United States)

    Banks, H T; Holm, Kathleen; Robbins, Danielle

    2010-11-01

    We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. We consider both constant variance absolute error data and relative error which produces non-constant variance data in our parameter estimation formulations. We compare and contrast parameter estimates, standard errors, confidence intervals, and computational times for both bootstrapping and asymptotic theory methods.
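
    As a concrete point of comparison between the two approaches, a residual-bootstrap standard error for a fitted parameter can be obtained by refitting on resampled residuals. The sketch below does this for a simple linear model with constant-variance absolute error rather than the nonlinear dynamical systems treated in the paper; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from y = 2 + 0.5*x + noise (constant-variance absolute error).
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, size=x.size)

X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat

# Asymptotic-theory standard errors from the usual covariance estimate sigma^2 (X'X)^-1.
dof = x.size - 2
sigma2 = residuals @ residuals / dof
se_asymptotic = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

# Residual bootstrap: resample residuals, rebuild y, refit, and collect estimates.
boot = np.empty((1000, 2))
for b in range(1000):
    y_star = X @ beta_hat + rng.choice(residuals, size=residuals.size, replace=True)
    boot[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)
se_bootstrap = boot.std(axis=0, ddof=1)

print(se_asymptotic, se_bootstrap)   # the two approaches should roughly agree here
```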

  2. Wavelet regression model in forecasting crude oil price

    Science.gov (United States)

    Hamid, Mohd Helmie; Shabri, Ani

    2017-05-01

    This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series has been used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), Autoregressive Integrated Moving Average (ARIMA) and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models using root mean square errors (RMSE) and mean absolute errors (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting techniques tested in this study.
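
    A minimal sketch of the WMLR idea follows, using a simple one-level Haar-style decomposition in place of the study's unspecified DWT choice: decompose the series, use the lagged sub-series as regressors in an ordinary least-squares fit, and forecast one step ahead. Everything here, including the lag structure and the synthetic price series, is an illustrative assumption.

```python
import numpy as np

def haar_components(series):
    """One-level Haar-style decomposition returning smooth (approximation) and
    fluctuation (detail) components aligned with the original series length."""
    s = np.asarray(series, dtype=float)
    approx = np.empty_like(s)
    detail = np.empty_like(s)
    approx[0], detail[0] = s[0], 0.0
    approx[1:] = 0.5 * (s[1:] + s[:-1])   # smooth component
    detail[1:] = 0.5 * (s[1:] - s[:-1])   # fluctuation component
    return approx, detail

# Illustrative "price" series (not WTI data).
rng = np.random.default_rng(2)
price = 60 + np.cumsum(rng.normal(0, 0.5, size=200))

a, d = haar_components(price)
lag = 1
# Regress price[t+1] on the lagged approximation and detail components (WMLR-style inputs).
X = np.column_stack([np.ones(price.size - lag), a[:-lag], d[:-lag]])
y = price[lag:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead forecast from the most recent decomposed values.
forecast = coef @ np.array([1.0, a[-1], d[-1]])
mae = np.mean(np.abs(y - X @ coef))      # in-sample MAE of the fit
print(forecast, mae)
```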

  3. Numerical simulation of shower cooling tower based on artificial neural network

    International Nuclear Information System (INIS)

    Qi Xiaoni; Liu Zhenyan; Li Dandan

    2008-01-01

    This study was prompted by the need to design towers for applications in which, due to salt deposition on the packing and subsequent blockage, the use of tower packing is not practical. The cooling tower analyzed in this study is void of fill and is named the shower cooling tower (SCT). However, existing studies focus mostly on experimental investigation of the SCT, and no systematic numerical method is available. In this paper, we first developed a one-dimensional model and analyzed the heat and mass transfer processes of the SCT; then we used the concept of an artificial neural network (ANN) to propose a computer design tool that can help the designer evaluate the outlet water temperature from a given set of experimentally obtained data. For comparison purposes and accurate evaluation of the predictions, part of the experimental data was used to train the neural network and the remainder to test the model. The results predicted by the ANN model were compared with those of the standard model and the experimental data. The ANN model predicted the outlet water temperature with a MAE (mean absolute error) of 1.31%, whereas the standard one-dimensional model showed a MAE of 9.42%

  4. Improving gridded snow water equivalent products in British Columbia, Canada: multi-source data fusion by neural network models

    Science.gov (United States)

    Snauffer, Andrew M.; Hsieh, William W.; Cannon, Alex J.; Schnorbus, Markus A.

    2018-03-01

    Estimates of surface snow water equivalent (SWE) in mixed alpine environments with seasonal melts are particularly difficult in areas of high vegetation density, topographic relief, and snow accumulations. These three confounding factors dominate much of the province of British Columbia (BC), Canada. An artificial neural network (ANN) was created using as predictors six gridded SWE products previously evaluated for BC. Relevant spatiotemporal covariates were also included as predictors, and observations from manual snow surveys at stations located throughout BC were used as target data. Mean absolute errors (MAEs) and interannual correlations for April surveys were found using cross-validation. The ANN using the three best-performing SWE products (ANN3) had the lowest mean station MAE across the province. ANN3 outperformed each product as well as product means and multiple linear regression (MLR) models in all of BC's five physiographic regions except for the BC Plains. Subsequent comparisons with predictions generated by the Variable Infiltration Capacity (VIC) hydrologic model found ANN3 to better estimate SWE over the VIC domain and within most regions. The superior performance of ANN3 over the individual products, product means, MLR, and VIC was found to be statistically significant across the province.

  5. A kinetic-based sigmoidal model for the polymerase chain reaction and its application to high-capacity absolute quantitative real-time PCR

    Directory of Open Access Journals (Sweden)

    Stewart Don

    2008-05-01

    Full Text Available Abstract Background Based upon defining a common reference point, current real-time quantitative PCR technologies compare relative differences in amplification profile position. As such, absolute quantification requires construction of target-specific standard curves that are highly resource intensive and prone to introducing quantitative errors. Sigmoidal modeling using nonlinear regression has previously demonstrated that absolute quantification can be accomplished without standard curves; however, quantitative errors caused by distortions within the plateau phase have impeded effective implementation of this alternative approach. Results Recognition that amplification rate is linearly correlated to amplicon quantity led to the derivation of two sigmoid functions that allow target quantification via linear regression analysis. In addition to circumventing quantitative errors produced by plateau distortions, this approach allows the amplification efficiency within individual amplification reactions to be determined. Absolute quantification is accomplished by first converting individual fluorescence readings into target quantity expressed in fluorescence units, followed by conversion into the number of target molecules via optical calibration. Founded upon expressing reaction fluorescence in relation to amplicon DNA mass, a seminal element of this study was to implement optical calibration using lambda gDNA as a universal quantitative standard. Not only does this eliminate the need to prepare target-specific quantitative standards, it relegates establishment of quantitative scale to a single, highly defined entity. The quantitative competency of this approach was assessed by exploiting "limiting dilution assay" for absolute quantification, which provided an independent gold standard from which to verify quantitative accuracy. This yielded substantive corroborating evidence that absolute accuracies of ± 25% can be routinely achieved. Comparison
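
    For orientation only: the generic sigmoidal-modeling idea behind such methods is to fit a sigmoid to the amplification profile and read the initial target fluorescence from the fitted curve, before converting it to molecule numbers via an optical calibration standard such as lambda gDNA. The sketch below uses a standard four-parameter logistic fitted by nonlinear least squares on synthetic data; it is not the paper's two derived sigmoid functions or its linear-regression formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(c, f0, fmax, c_half, k):
    """Four-parameter logistic amplification profile: baseline plus plateau."""
    return f0 + fmax / (1.0 + np.exp(-(c - c_half) / k))

# Synthetic amplification profile (cycle number vs fluorescence), illustration only.
cycles = np.arange(1, 41, dtype=float)
true = sigmoid(cycles, f0=0.05, fmax=3.0, c_half=24.0, k=1.6)
rng = np.random.default_rng(3)
fluor = true + rng.normal(0, 0.02, size=cycles.size)

params, _cov = curve_fit(sigmoid, cycles, fluor, p0=[0.0, 3.0, 20.0, 2.0])
f0, fmax, c_half, k = params
print(params)

# Target quantity in fluorescence units, read from the fitted curve at cycle 0;
# an optical calibration factor would then convert it to a number of molecules.
f_target0 = sigmoid(0.0, *params) - f0
print(f_target0)
```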

  6. Fluctuation theorems in feedback-controlled open quantum systems: Quantum coherence and absolute irreversibility

    Science.gov (United States)

    Murashita, Yûto; Gong, Zongping; Ashida, Yuto; Ueda, Masahito

    2017-10-01

    The thermodynamics of quantum coherence has attracted growing attention recently, where the thermodynamic advantage of quantum superposition is characterized in terms of quantum thermodynamics. We investigate the thermodynamic effects of quantum coherent driving in the context of the fluctuation theorem. We adopt a quantum-trajectory approach to investigate open quantum systems under feedback control. In these systems, the measurement backaction in the forward process plays a key role, and therefore the corresponding time-reversed quantum measurement and postselection must be considered in the backward process, in sharp contrast to the classical case. The state reduction associated with quantum measurement, in general, creates a zero-probability region in the space of quantum trajectories of the forward process, which causes singularly strong irreversibility with divergent entropy production (i.e., absolute irreversibility) and hence makes the ordinary fluctuation theorem break down. In the classical case, the error-free measurement ordinarily leads to absolute irreversibility, because the measurement restricts classical paths to the region compatible with the measurement outcome. In contrast, in open quantum systems, absolute irreversibility is suppressed even in the presence of the projective measurement due to those quantum rare events that go through the classically forbidden region with the aid of quantum coherent driving. This suppression of absolute irreversibility exemplifies the thermodynamic advantage of quantum coherent driving. Absolute irreversibility is shown to emerge in the absence of coherent driving after the measurement, especially in systems under time-delayed feedback control. We show that absolute irreversibility is mitigated by increasing the duration of quantum coherent driving or decreasing the delay time of feedback control.

  7. Application of Holt exponential smoothing and ARIMA method for data population in West Java

    Science.gov (United States)

    Supriatna, A.; Susanti, D.; Hertini, E.

    2017-01-01

    One time series method that is often used to predict data containing a trend is Holt's method. Holt's method applies different smoothing parameters to the original data in order to smooth the trend value. In addition to Holt's method, the ARIMA method can be used on a wide variety of data, including data containing a trend pattern. The actual population data from 1998-2015 contain a trend, so Holt's and ARIMA methods can be applied to obtain predicted values for several periods. The best method is selected based on the smallest MAPE and MAE errors. The result using Holt's method is 47,205,749 people in 2016, 47,535,324 people in 2017, and 48,041,672 people in 2018, with a MAPE of 0.469744 and an MAE of 189,731. The result using the ARIMA method is 46,964,682 people in 2016, 47,342,189 in 2017, and 47,899,696 in 2018, with a MAPE of 0.4380 and an MAE of 176,626.
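
    Holt's linear (double) exponential smoothing uses a level recursion and a trend recursion. A minimal sketch follows with assumed smoothing constants (the record does not report the α, β values actually used) and an illustrative trending series rather than the West Java data.

```python
def holt_forecast(series, alpha, beta, horizon):
    """Holt's linear exponential smoothing: forecasts for `horizon` steps ahead."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)          # smoothed level
        trend = beta * (level - prev_level) + (1 - beta) * trend   # smoothed trend
    return [level + h * trend for h in range(1, horizon + 1)]

# Illustrative annual population counts (not the actual West Java data).
population = [41_000_000, 41_600_000, 42_100_000, 42_800_000, 43_300_000,
              44_000_000, 44_600_000, 45_100_000, 45_800_000, 46_400_000]
print(holt_forecast(population, alpha=0.8, beta=0.2, horizon=3))
```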

  8. Taxi trips distribution modeling based on Entropy-Maximizing theory: A case study in Harbin city-China

    Science.gov (United States)

    Tang, Jinjun; Zhang, Shen; Chen, Xinqiang; Liu, Fang; Zou, Yajie

    2018-03-01

    Understanding Origin-Destination distribution of taxi trips is very important for improving effects of transportation planning and enhancing quality of taxi services. This study proposes a new method based on Entropy-Maximizing theory to model OD distribution in Harbin city using large-scale taxi GPS trajectories. Firstly, a K-means clustering method is utilized to partition raw pick-up and drop-off location into different zones, and trips are assumed to start from and end at zone centers. A generalized cost function is further defined by considering travel distance, time and fee between each OD pair. GPS data collected from more than 1000 taxis at an interval of 30 s during one month are divided into two parts: data from first twenty days is treated as training dataset and last ten days is taken as testing dataset. The training dataset is used to calibrate model while testing dataset is used to validate model. Furthermore, three indicators, mean absolute error (MAE), root mean square error (RMSE) and mean percentage absolute error (MPAE), are applied to evaluate training and testing performance of Entropy-Maximizing model versus Gravity model. The results demonstrate Entropy-Maximizing model is superior to Gravity model. Findings of the study are used to validate the feasibility of OD distribution from taxi GPS data in urban system.
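
    An entropy-maximizing trip distribution with a cost deterrence function reduces to a doubly constrained gravity-type model solved by iterative (Furness) balancing; a small sketch with made-up zone totals and generalized costs is shown below. The exponential deterrence form and all numbers are illustrative assumptions, not the calibrated Harbin model.

```python
import numpy as np

def entropy_maximizing_od(origins, destinations, cost, beta, n_iter=100):
    """Doubly constrained entropy-maximizing model:
    T_ij = A_i * O_i * B_j * D_j * exp(-beta * c_ij)."""
    O, D = np.asarray(origins, float), np.asarray(destinations, float)
    f = np.exp(-beta * np.asarray(cost, float))   # deterrence from generalized cost
    A = np.ones_like(O)
    B = np.ones_like(D)
    for _ in range(n_iter):                       # iterative (Furness) balancing
        A = 1.0 / (f @ (B * D))
        B = 1.0 / (f.T @ (A * O))
    return (A * O)[:, None] * (B * D)[None, :] * f

# Illustrative 3-zone example: trip productions, attractions and generalized costs.
O = [120, 80, 100]
D = [90, 110, 100]
C = [[1.0, 3.0, 4.0],
     [3.0, 1.0, 2.0],
     [4.0, 2.0, 1.0]]
T = entropy_maximizing_od(O, D, C, beta=0.5)
print(T.round(1), T.sum(axis=1), T.sum(axis=0))   # row/column sums approach O and D
```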

  9. Characterization of kinesiological patterns of the frontal kick, mae-geri, in karate experts and non-karate practitioners

    Directory of Open Access Journals (Sweden)

    António M. VencesBrito

    2014-02-01

    Full Text Available Presently, coaches and researchers need to have a better comprehension of the kinesiological parameters that should be an important tool to support teaching methodologies and to improve skills performance in sports. The aim of this study was to (i identify the kinematic and neuromuscular control patterns of the front kick (mae-geri to a fixed target performed by 14 experienced karate practitioners, and (ii compare it with the execution of 16 participants without any karate experience, allowing the use of those references in the analysis of the training and learning process. Results showed that the kinematic and neuromuscular activity during the kick performance occurs within 600 ms. Muscle activity and kinematic analysis demonstrated a sequence of activation bracing a proximal-to-distal direction, with the muscles presenting two distinct periods of activity (1, 2, where the karateka group has a greater intensity of activation – root mean square (RMS and electromyography (EMG peak – in the first period on Rectus Femoris (RF1 and  Vastus Lateralis (VL1 and a lower duration of co-contraction in both periods on Rectus Femoris-Biceps Femoris and Vastus Lateralis-Biceps Femoris (RF-BF; VL-BF. In the skill performance, the hip flexion, the knee extension and the ankle plantar flexion movements were executed with smaller difference in the range of action (ROA in the karateka group, reflecting different positions of the segments. In conclusion, it was observed a general kinesiological pattern, which was similar in karateka and non-karateka practitioners. However, in the karateka group, the training induces a specialization in the muscle activity reflected in EMG and kinematic data, which leads to a better ballistic performance in the execution of the mae-geri kick, associated with a maximum speed of the distal segments, reached closer to the impact moment, possibly representing more power in the contact.

  10. Estimation of Energy Balance Components over a Drip-Irrigated Olive Orchard Using Thermal and Multispectral Cameras Placed on a Helicopter-Based Unmanned Aerial Vehicle (UAV

    Directory of Open Access Journals (Sweden)

    Samuel Ortega-Farías

    2016-08-01

    Full Text Available A field experiment was carried out to implement a remote sensing energy balance (RSEB) algorithm for estimating the incoming solar radiation (Rsi), net radiation (Rn), sensible heat flux (H), soil heat flux (G) and latent heat flux (LE) over a drip-irrigated olive (cv. Arbequina) orchard located in the Pencahue Valley, Maule Region, Chile (35°25′S; 71°44′W; 90 m above sea level). For this study, a helicopter-based unmanned aerial vehicle (UAV) was equipped with multispectral and infrared thermal cameras to obtain simultaneously the normalized difference vegetation index (NDVI) and surface temperature (Tsurface) at very high resolution (6 cm × 6 cm). Meteorological variables and surface energy balance components were measured at the time of the UAV overpass (near solar noon). The performance of the RSEB algorithm was evaluated using measurements of H and LE obtained from an eddy correlation system. In addition, estimated values of Rsi and Rn were compared with ground-truth measurements from a four-way net radiometer, while those of G were compared with soil heat fluxes based on flux plates. Results indicated that the RSEB algorithm estimated LE and H with errors of 7% and 5%, respectively. Values of the root mean squared error (RMSE) and mean absolute error (MAE) for LE were 50 and 43 W m−2 while those for H were 56 and 46 W m−2, respectively. Finally, the RSEB algorithm computed Rsi, Rn and G with errors of less than 5% and with values of RMSE and MAE of less than 38 W m−2. Results demonstrated that multispectral and thermal cameras placed on a UAV can provide an excellent tool to evaluate the intra-orchard spatial variability of Rn, G, H, LE, NDVI and Tsurface over the tree canopy and soil surface between rows.
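
    Many remote-sensing energy-balance schemes obtain LE as the residual of the surface energy balance once Rn, G and H are known. The sketch below illustrates that residual step and the RMSE/MAE comparison against eddy-covariance measurements; it is a simplified outline under that assumption, not the authors' full RSEB algorithm, and all flux values are made up.

        import numpy as np

        def latent_heat_residual(rn, g, h):
            """Closure of the surface energy balance: LE = Rn - G - H (W m-2)."""
            return np.asarray(rn) - np.asarray(g) - np.asarray(h)

        def rmse_mae(estimated, measured):
            diff = np.asarray(estimated) - np.asarray(measured)
            return np.sqrt(np.mean(diff ** 2)), np.mean(np.abs(diff))

        # illustrative fluxes (W m-2) at a few overpass times
        rn = np.array([620.0, 580.0, 640.0])
        g = np.array([90.0, 85.0, 95.0])
        h = np.array([210.0, 190.0, 230.0])
        le_eddy = np.array([310.0, 300.0, 320.0])   # eddy-covariance reference

        le_est = latent_heat_residual(rn, g, h)
        print(rmse_mae(le_est, le_eddy))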

  11. Evaluation of three semi-empirical approaches to estimate the net radiation over a drip-irrigated olive orchard

    Directory of Open Access Journals (Sweden)

    Rafael López-Olivari

    2015-09-01

    Full Text Available The use of actual evapotranspiration (ETα) models requires an appropriate parameterization of the available energy, of which the net radiation (Rn) is the most important component. Thus, a study was carried out to calibrate and evaluate three semi-empirical approaches to estimate net radiation (Rn) over a drip-irrigated olive (Olea europaea L. 'Arbequina') orchard during the 2009/2010 and 2010/2011 seasons. The orchard was planted in 2005 at high density in the Pencahue Valley, Maule Region, Chile. The evaluated models were calculated using the balance between long- and short-wave radiation. To achieve this objective it was assumed that Ts = Tα for Model 1, Ts = Tv for Model 2 and Ts = Tr for Model 3 (Ts is surface temperature; Tα is air temperature; Tv is the temperature inside the tree canopy; and Tr is radiometric temperature). For the three models, Brutsaert's empirical coefficient (Φ) was calibrated using the incoming long-wave radiation equation with the database of the 2009/2010 season. The calibration indicated that Φ was equal to 1.75. Using the database from the 2010/2011 season, the validation indicated that the three models were able to predict Rn at a 30-min interval with errors lower than 6%, root mean square error (RMSE) between 26 and 39 W m-2 and mean absolute error (MAE) between 20 and 31 W m-2. On daily time intervals, validation indicated that the models presented errors, RMSE and MAE between 2% and 3%, 1.22-1.54 MJ m-2 d-1 and 1.04-1.35 MJ m-2 d-1, respectively. The three Rn models could be evaluated and used under other Mediterranean conditions according to the availability of data to estimate net radiation over a drip-irrigated olive orchard planted at high density.
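
    A common way to write such semi-empirical models is as a balance of net short-wave and net long-wave radiation, with the incoming long-wave term parameterised through Brutsaert's clear-sky emissivity. The sketch below follows that textbook form for the Model-1 assumption Ts = Tα; the albedo, surface emissivity and vapour pressure are illustrative placeholders, not the calibrated model of the paper.

        import math

        SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4

        def net_radiation(rsi, t_air_k, ea_hpa, albedo=0.15, emis_surf=0.98, phi=1.24):
            """Semi-empirical net radiation (W m-2) with Ts assumed equal to Ta.

            phi is the Brutsaert coefficient; 1.24 is the textbook clear-sky value
            (the abstract reports a site-calibrated 1.75 for its own formulation).
            """
            emis_atm = phi * (ea_hpa / t_air_k) ** (1.0 / 7.0)  # Brutsaert emissivity
            lw_in = emis_atm * SIGMA * t_air_k ** 4             # incoming long-wave
            lw_out = emis_surf * SIGMA * t_air_k ** 4           # outgoing long-wave (Ts = Ta)
            return (1.0 - albedo) * rsi + lw_in - lw_out

        print(net_radiation(rsi=750.0, t_air_k=298.0, ea_hpa=14.0))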

  12. Adaptive Surface Modeling of Soil Properties in Complex Landforms

    Directory of Open Access Journals (Sweden)

    Wei Liu

    2017-06-01

    Full Text Available Spatial discontinuity often causes poor accuracy when a single model is used for the surface modeling of soil properties in complex geomorphic areas. Here we present a method for adaptive surface modeling of combined secondary variables to improve prediction accuracy during the interpolation of soil properties (ASM-SP). Using various secondary variables and multiple base interpolation models, ASM-SP was used to interpolate soil K+ in a typical complex geomorphic area (Qinghai Lake Basin, China). Five methods, including inverse distance weighting (IDW), ordinary kriging (OK), and OK combined with different secondary variables (e.g., OK-Landuse, OK-Geology, and OK-Soil), were used to validate the proposed method. The mean error (ME), mean absolute error (MAE), root mean square error (RMSE), mean relative error (MRE), and accuracy (AC) were used as evaluation indicators. Results showed that: (1) The OK interpolation result is spatially smooth and has a weak bull's-eye effect, while the IDW result has a comparatively stronger bull's-eye effect; both have obvious deficiencies in depicting the spatial variability of soil K+. (2) The methods incorporating combinations of different secondary variables (e.g., ASM-SP, OK-Landuse, OK-Geology, and OK-Soil) were associated with lower estimation bias. Compared with IDW, OK, OK-Landuse, OK-Geology, and OK-Soil, the accuracy of ASM-SP increased by 13.63%, 10.85%, 9.98%, 8.32%, and 7.66%, respectively. Furthermore, ASM-SP was more stable, with lower MEs, MAEs, RMSEs, and MREs. (3) ASM-SP presents more detail than the other methods at abrupt boundaries, which keeps the result consistent with the true secondary variables. In conclusion, ASM-SP can not only consider the nonlinear relationship between secondary variables and soil properties, but can also adaptively combine the advantages of multiple models, which contributes to making the spatial interpolation of soil K+ more reasonable.
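
    The accuracy comparison described above is typically carried out by cross-validation: each sample is withheld in turn, predicted from the remaining samples, and the errors are aggregated into ME, MAE and RMSE. The sketch below shows that workflow with a plain nearest-neighbour predictor standing in for the interpolators; it is a schematic illustration, not the ASM-SP implementation, and the soil K+ values are invented.

        import numpy as np

        def loo_errors(coords, values, predict):
            """Leave-one-out cross-validation returning ME, MAE and RMSE.

            predict(train_xy, train_v, target_xy) -> predicted value at target_xy
            """
            coords = np.asarray(coords, float)
            values = np.asarray(values, float)
            preds = []
            for i in range(len(values)):
                mask = np.arange(len(values)) != i
                preds.append(predict(coords[mask], values[mask], coords[i]))
            err = np.asarray(preds) - values
            return err.mean(), np.abs(err).mean(), np.sqrt((err ** 2).mean())

        def nearest_neighbour(train_xy, train_v, target_xy):
            # simplest possible stand-in for an interpolation model
            d = np.linalg.norm(train_xy - target_xy, axis=1)
            return train_v[np.argmin(d)]

        xy = [[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]]
        k = [2.1, 2.4, 1.9, 2.6, 2.2]   # soil K+ samples (illustrative units)
        print(loo_errors(xy, k, nearest_neighbour))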

  13. Effective Acceleration Model for the Arrival Time of Interplanetary Shocks driven by Coronal Mass Ejections

    Science.gov (United States)

    Paouris, Evangelos; Mavromichalaki, Helen

    2017-12-01

    In a previous work (Paouris and Mavromichalaki in Solar Phys. 292, 30, 2017), we presented a total of 266 interplanetary coronal mass ejections (ICMEs) with as much information as possible. From this analysis we developed a new empirical model for estimating the acceleration of these events in the interplanetary medium. In this work, we present a new approach to the effective acceleration model (EAM) for predicting the arrival time of the shock that precedes a CME, using data from a total of 214 ICMEs. For the first time, the projection effects of the linear speed of CMEs are taken into account in this empirical model, which significantly improves the prediction of the arrival time of the shock. In particular, the mean value of the time difference between the observed time of the shock and the predicted time was equal to +3.03 hours with a mean absolute error (MAE) of 18.58 hours and a root mean squared error (RMSE) of 22.47 hours. After the improvement of this model, the mean value of the time difference decreased to -0.28 hours with an MAE of 17.65 hours and an RMSE of 21.55 hours. This improved version was applied to a set of three recent Earth-directed CMEs reported in May, June, and July of 2017, and we compared our results with the values predicted by other related models.
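
    The idea behind an effective-acceleration model can be illustrated with simple kinematics: given a CME's de-projected initial speed and a constant effective acceleration acting over the Sun-Earth distance, the travel time follows from s = v0*t + a*t^2/2. The sketch below solves that quadratic; the constant-acceleration assumption and the sample numbers are illustrative, and the paper's empirical relations for the acceleration are not reproduced.

        import math

        AU_KM = 1.496e8  # Sun-Earth distance in km

        def shock_travel_time_hours(v0_km_s, a_km_s2, distance_km=AU_KM):
            """Travel time (hours) under constant effective acceleration."""
            if abs(a_km_s2) < 1e-12:
                return distance_km / v0_km_s / 3600.0
            disc = v0_km_s ** 2 + 2.0 * a_km_s2 * distance_km
            if disc < 0:
                raise ValueError("CME would stall before covering the distance")
            # positive root of 0.5*a*t^2 + v0*t - s = 0
            t = (-v0_km_s + math.sqrt(disc)) / a_km_s2
            return t / 3600.0

        # e.g. a 900 km/s CME decelerating at 2 m/s^2 (illustrative values)
        print(shock_travel_time_hours(900.0, -2.0e-3))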

  14. A Novel Quantum-Behaved Lightning Search Algorithm Approach to Improve the Fuzzy Logic Speed Controller for an Induction Motor Drive

    Directory of Open Access Journals (Sweden)

    Jamal Abd Ali

    2015-11-01

    Full Text Available This paper presents a novel lightning search algorithm (LSA) using quantum mechanics theories to generate a quantum-inspired LSA (QLSA). The QLSA improves the searching of each step leader to obtain the best position for a projectile. To evaluate the reliability and efficiency of the proposed algorithm, the QLSA is tested using eighteen benchmark functions with various characteristics. The QLSA is applied to improve the design of the fuzzy logic controller (FLC) for controlling the speed response of an induction motor drive. The proposed algorithm avoids the exhaustive conventional trial-and-error procedure for obtaining membership functions (MFs). The generated adaptive input and output MFs are implemented in the fuzzy speed controller design to formulate the objective functions. The mean absolute error (MAE) of the rotor speed is the objective function of the optimization controller. An optimal QLSA-based FLC (QLSAF) optimization controller is employed to tune and minimize the MAE, thereby improving the performance of the induction motor with changes in speed and mechanical load. To validate the performance of the developed controller, the results obtained with the QLSAF are compared to the results obtained with the LSA, the backtracking search algorithm (BSA), the gravitational search algorithm (GSA), particle swarm optimization (PSO) and proportional integral derivative (PID) controllers, respectively. Results show that the QLSAF outperforms the other control methods in all of the tested cases in terms of damping capability and transient response under different mechanical loads and speeds.

  15. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry.

    Science.gov (United States)

    Wang, Guochao; Tan, Lilong; Yan, Shuhua

    2018-02-07

    We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
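
    The principle of extending the non-ambiguous range with a synthetic wavelength can be summarised in a few lines: two optical wavelengths define a synthetic wavelength Λ = λ1·λ2/|λ1 − λ2|, and the measured synthetic phase fixes a coarse distance that the finer single-wavelength phases then refine. The sketch below shows only that coarse step with invented wavelengths and phase; it is not the instrument's actual processing chain.

        import math

        def synthetic_wavelength(lam1, lam2):
            """Synthetic wavelength (same units as lam1 and lam2)."""
            return lam1 * lam2 / abs(lam1 - lam2)

        def coarse_distance(phase_rad, lam_synth):
            """Distance from the synthetic phase; the factor 1/2 accounts for
            the double pass of the interferometer arm."""
            return (phase_rad / (2.0 * math.pi)) * lam_synth / 2.0

        lam_s = synthetic_wavelength(1550.0e-9, 1550.8e-9)  # metres
        print(lam_s)                        # ~3 mm synthetic wavelength
        print(coarse_distance(2.3, lam_s))  # distance for a 2.3 rad synthetic phase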

  16. Lower-Middle Jurassic paleomagnetic data from the Mae Sot area (Thailand): Paleogeographic evolution and deformation history of Southeastern Asia

    Science.gov (United States)

    Yang, Z. Y.; Besse, J.; Sutheetorn, V.; Bassoullet, J. P.; Fontaine, H.; Buffetaut, E.

    1995-12-01

    We have carried out a paleomagnetic study (12 sites, 85 samples) of Early-Middle Jurassic limestones and sandstones from the Mae Sot area of western Thailand. This area is part of the Shan-Thai-Malay (STM) block, and its geological characteristics have led some authors to suggest a Late Jurassic accretion of this region against the rest of Indochina along the Changning-Menglian zone, the latter sometimes being interpreted as a Mesozoic suture. The high-temperature (or high-coercivity) component isolated yields a paleodirection at D = 359.8°, I = 31.4° (α95 = 5.0°). The primary nature of the magnetization acquisition is ascertained at a site with reversed polarity and a positive fold test (at the 95% confidence level). Comparison of the Mae Sot paleolatitude and another one from the STM with those recently published for the Simao and Khorat blocks shows no significant difference at the 95% level, indicating that the STM was situated close to, or had already accreted with, the Simao or Khorat blocks in the Early-Middle Jurassic. Comparison of the latitudes from these blocks with those from China indicates a relative southward motion of 8 ± 4° of Indochina as a single entity relative to China. Most rotations of these regions relative to China are found to be clockwise (between 14 and 75°). These rotations, and most prominently the 1200 ± 500 km post-Cretaceous left-lateral motion inferred for the Red River Fault, provide quantitative estimates of the large amount of extrusion of Indochina with respect to the rest of Asia.

  17. ABSOLUTE NEUTRINO MASSES

    DEFF Research Database (Denmark)

    Schechter, J.; Shahid, M. N.

    2012-01-01

    We discuss the possibility of using experiments timing the propagation of neutrino beams over large distances to help determine the absolute masses of the three neutrinos.

  18. Spatial Interpolation of Daily Rainfall Data for Local Climate Impact Assessment over Greater Sydney Region

    Directory of Open Access Journals (Sweden)

    Xihua Yang

    2015-01-01

    Full Text Available This paper presents spatial interpolation techniques to produce finer-scale daily rainfall data from regional climate modeling. Four common interpolation techniques (ANUDEM, Spline, IDW, and Kriging) were compared and assessed against station rainfall data and modeled rainfall. The performance was assessed by the mean absolute error (MAE), mean relative error (MRE), root mean squared error (RMSE), and the spatial and temporal distributions. The results indicate that the Inverse Distance Weighting (IDW) method is slightly better than the other three methods and is also easy to implement in a geographic information system (GIS). The IDW method was then used to produce forty-year (1990–2009 and 2040–2059) time series rainfall data at daily, monthly, and annual time scales at a ground resolution of 100 m for the Greater Sydney Region (GSR). The downscaled daily rainfall data have been further utilized to predict rainfall erosivity and soil erosion risk and their future changes in the GSR to support assessments and planning of climate change impact and adaptation at the local scale.
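
    Of the four techniques compared, IDW is also the simplest to state: each target cell receives a weighted average of the station values, with weights proportional to an inverse power of distance. The sketch below is a minimal, unoptimised version of that idea (power 2, no search radius), not the GIS implementation used in the study; station locations and rainfall amounts are invented.

        import numpy as np

        def idw(station_xy, station_rain, target_xy, power=2.0, eps=1e-12):
            """Inverse Distance Weighting of daily rainfall onto arbitrary points."""
            station_xy = np.asarray(station_xy, float)
            station_rain = np.asarray(station_rain, float)
            out = []
            for p in np.asarray(target_xy, float):
                d = np.linalg.norm(station_xy - p, axis=1)
                if d.min() < eps:                 # target coincides with a station
                    out.append(station_rain[d.argmin()])
                    continue
                w = 1.0 / d ** power
                out.append(np.sum(w * station_rain) / np.sum(w))
            return np.array(out)

        stations = [[0, 0], [10, 0], [0, 10], [10, 10]]   # km
        rain_mm = [12.0, 5.0, 8.0, 20.0]
        print(idw(stations, rain_mm, [[5, 5], [1, 1]]))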

  19. Modeling of surface dust concentrations using neural networks and kriging

    Science.gov (United States)

    Buevich, Alexander G.; Medvedev, Alexander N.; Sergeev, Alexander P.; Tarasov, Dmitry A.; Shichkin, Andrey V.; Sergeeva, Marina V.; Atanasova, T. B.

    2016-12-01

    Creating models which are able to accurately predict the distribution of pollutants based on a limited set of input data is an important task in environmental studies. In this paper two neural approaches (multilayer perceptron (MLP) and generalized regression neural network (GRNN)) and two geostatistical approaches (kriging and cokriging) are used for modeling and forecasting of dust concentrations in snow cover. The area of study is under the influence of dust emissions from a copper quarry and several industrial companies. A comparison of the two groups of approaches is conducted. Three indices are used as indicators of the models' accuracy: the mean absolute error (MAE), root mean square error (RMSE) and relative root mean square error (RRMSE). Models based on artificial neural networks (ANN) have shown better accuracy. Considering all indices, the most precise model was the GRNN, which uses as input parameters for modeling the coordinates of the sampling points and the distance to the probable emission source. The results of this work confirm that a trained ANN may be a more suitable tool for modeling dust concentrations in snow cover.
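
    A GRNN is essentially Nadaraya-Watson kernel regression: the prediction at a new point is a Gaussian-weighted average of the training targets, governed by a single smoothing parameter sigma. The sketch below shows that core computation for inputs built from coordinates plus distance to an assumed emission source; the feature set, sigma and all values are placeholders rather than the configuration used in the study.

        import numpy as np

        def grnn_predict(train_x, train_y, query_x, sigma=1.0):
            """Generalized regression NN (Nadaraya-Watson with a Gaussian kernel)."""
            train_x = np.asarray(train_x, float)
            train_y = np.asarray(train_y, float)
            preds = []
            for q in np.asarray(query_x, float):
                d2 = np.sum((train_x - q) ** 2, axis=1)
                w = np.exp(-d2 / (2.0 * sigma ** 2))
                preds.append(np.sum(w * train_y) / (np.sum(w) + 1e-12))
            return np.array(preds)

        # features: [easting, northing, distance to assumed source] (illustrative)
        X = [[0.0, 0.0, 5.0], [1.0, 0.5, 4.2], [2.0, 2.0, 2.5], [3.0, 2.5, 1.0]]
        y = [30.0, 45.0, 80.0, 140.0]   # dust concentration in snow (made-up units)
        print(grnn_predict(X, y, [[2.5, 2.2, 1.8]], sigma=0.8))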

  20. A new method in prediction of TCP phases formation in superalloys

    International Nuclear Information System (INIS)

    Mousavi Anijdan, S.H.; Bahrami, A.

    2005-01-01

    The purpose of this investigation is to develop a model for the prediction of topologically close-packed (TCP) phase formation in superalloys. In this study, artificial neural networks (ANN), using several different network architectures, were used to investigate the complex relationships between TCP phases and the chemical composition of superalloys. In order to develop an optimum ANN structure, more than 200 experimental data were used to train and test the neural network. The results of this investigation show that a multilayer perceptron (MLP) form of the neural network with one hidden layer and 10 nodes in the hidden layer has the lowest mean absolute error (MAE) and can be accurately used to predict the electron-hole number (Nv) and TCP phase formation in superalloys.
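
    The kind of network described, a single-hidden-layer MLP mapping alloy composition to the electron-hole number Nv, can be prototyped in a few lines with scikit-learn. Only the 10-node single hidden layer follows the abstract; the composition features, target values and remaining hyperparameters below are placeholders.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import mean_absolute_error

        # placeholder training data: element contents (wt.%) -> electron-hole number Nv
        X = np.array([[15.0, 10.0, 5.0, 3.0],
                      [12.0, 8.0, 6.0, 4.0],
                      [18.0, 12.0, 4.0, 2.0],
                      [14.0, 9.0, 5.5, 3.5]])
        y = np.array([2.45, 2.30, 2.60, 2.40])

        model = MLPRegressor(hidden_layer_sizes=(10,),  # one hidden layer, 10 nodes
                             max_iter=5000, random_state=0)
        model.fit(X, y)
        print(mean_absolute_error(y, model.predict(X)))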

  1. Auto-calibration of Systematic Odometry Errors in Mobile Robots

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Andersen, Nils Axel

    1999-01-01

    This paper describes the phenomenon of systematic errors in odometry models in mobile robots and looks at various ways of avoiding it by means of auto-calibration. The systematic errors considered are incorrect knowledge of the wheel base and the gains from encoder readings to wheel displacement. By auto-calibration we mean a standardized procedure which estimates the uncertainties using only on-board equipment such as encoders, an absolute measurement system and filters; no intervention by operator or off-line data processing is necessary. Results are illustrated by a number of simulations and experiments on a mobile robot.
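
    The two systematic error sources named above, the wheel base and the encoder-to-displacement gains, enter the standard differential-drive odometry update directly, which is why they can be estimated by comparing odometry against an absolute measurement system. The sketch below writes out that update with the gains and wheel base as explicit parameters; it is a generic textbook formulation with made-up numbers, not the auto-calibration filter of the paper.

        import math

        def odometry_step(x, y, theta, ticks_l, ticks_r,
                          gain_l=1.0e-4, gain_r=1.0e-4, wheel_base=0.35):
            """One dead-reckoning update for a differential-drive robot.

            gain_l, gain_r : metres of wheel travel per encoder tick (to calibrate)
            wheel_base     : distance between the wheels in metres (to calibrate)
            """
            dl = gain_l * ticks_l                 # left wheel displacement
            dr = gain_r * ticks_r                 # right wheel displacement
            ds = 0.5 * (dl + dr)                  # translation of the robot centre
            dtheta = (dr - dl) / wheel_base       # change of heading
            x += ds * math.cos(theta + 0.5 * dtheta)
            y += ds * math.sin(theta + 0.5 * dtheta)
            return x, y, theta + dtheta

        pose = (0.0, 0.0, 0.0)
        for _ in range(100):                      # drive roughly straight ahead
            pose = odometry_step(*pose, ticks_l=980, ticks_r=1000)
        print(pose)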

  2. 3D Tendon Strain Estimation Using High-frequency Volumetric Ultrasound Images: A Feasibility Study.

    Science.gov (United States)

    Carvalho, Catarina; Slagmolen, Pieter; Bogaerts, Stijn; Scheys, Lennart; D'hooge, Jan; Peers, Koen; Maes, Frederik; Suetens, Paul

    2018-03-01

    Estimation of strain in tendons for tendinopathy assessment is a hot topic within the sports medicine community. It is believed that, if accurately estimated, existing treatment and rehabilitation protocols can be improved and presymptomatic abnormalities can be detected earlier. State-of-the-art studies present inaccurate and highly variable strain estimates, leaving this problem without a solution. Out-of-plane motion, present when acquiring two-dimensional (2D) ultrasound (US) images, is a known problem and may be responsible for such errors. This work investigates the benefit of high-frequency, three-dimensional (3D) US imaging to reduce errors in tendon strain estimation. Volumetric US images were acquired in silico, in vitro, and ex vivo using an innovative acquisition approach that combines the acquisition of 2D high-frequency US images with a mechanically guided system. An affine image registration method was used to estimate global strain. 3D strain estimates were then compared with ground-truth values and with 2D strain estimates. The obtained results for in silico data showed a mean absolute error (MAE) of 0.07%, 0.05%, and 0.27% for 3D estimates along the axial, lateral, and elevation directions, and a respective MAE of 0.21% and 0.29% for 2D strain estimates. Although 3D can outperform 2D, this does not occur in the in vitro and ex vivo settings, likely due to 3D acquisition artifacts. Comparison against state-of-the-art methods showed competitive results. The proposed work shows that 3D strain estimates are more accurate than 2D estimates, but acquisition of appropriate 3D US images remains a challenge.

  3. Mapping the EORTC QLQ-C30 onto the EQ-5D-3L: assessing the external validity of existing mapping algorithms.

    Science.gov (United States)

    Doble, Brett; Lorgelly, Paula

    2016-04-01

    To determine the external validity of existing mapping algorithms for predicting EQ-5D-3L utility values from EORTC QLQ-C30 responses and to establish their generalizability in different types of cancer. A main analysis (pooled) sample of 3560 observations (1727 patients) and two disease severity patient samples (496 and 93 patients) with repeated observations over time from Cancer 2015 were used to validate the existing algorithms. Errors were calculated between observed and predicted EQ-5D-3L utility values using a single pooled sample and ten pooled tumour type-specific samples. Predictive accuracy was assessed using mean absolute error (MAE) and standardized root-mean-squared error (RMSE). The association between observed and predicted EQ-5D utility values and other covariates across the distribution was tested using quantile regression. Quality-adjusted life years (QALYs) were calculated using observed and predicted values to test responsiveness. Ten 'preferred' mapping algorithms were identified. Two algorithms estimated via response mapping and ordinary least-squares regression using dummy variables performed well on number of validation criteria, including accurate prediction of the best and worst QLQ-C30 health states, predicted values within the EQ-5D tariff range, relatively small MAEs and RMSEs, and minimal differences between estimated QALYs. Comparison of predictive accuracy across ten tumour type-specific samples highlighted that algorithms are relatively insensitive to grouping by tumour type and affected more by differences in disease severity. Two of the 'preferred' mapping algorithms suggest more accurate predictions, but limitations exist. We recommend extensive scenario analyses if mapped utilities are used in cost-utility analyses.

  4. Records: Jimmy Sommerville "Manage The Damage". Marc Almond "Open All Night". Mishka "Mishka". Ma$e "Double Up". Various artists "On The Floor At The Botique: mixed by Lo Fidelity Allstars" / Mart Juur

    Index Scriptorium Estoniae

    Juur, Mart, 1964-

    1999-01-01

    On new records: Jimmy Sommerville "Manage The Damage". Marc Almond "Open All Night". Mishka "Mishka". Ma$e "Double Up". Various artists "On The Floor At The Botique: mixed by Lo Fidelity Allstars"

  5. Data error effects on net radiation and evapotranspiration estimation

    International Nuclear Information System (INIS)

    Llasat, M.C.; Snyder, R.L.

    1998-01-01

    The objective of this paper is to evaluate the potential error in estimating the net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating the net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. Then the error propagation analysis is made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, which is located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on the net radiation or reference evapotranspiration. A 5°C error in estimating surface temperature leads to errors as big as 30 W m−2 at high temperature. A 4% solar radiation (Rs) error can cause a net radiation error as big as 26 W m−2 when Rs ≈ 1000 W m−2. However, the error is less when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ETo) equals the product of the net radiation error and the radiation term weighting factor [W = Δ/(Δ + γ)] in the ETo equation. Therefore, the ETo error varies between 65 and 85% of the Rn error as air temperature increases from about 20°C to 40°C. (author)
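
    The statement about the ETo error can be turned into a two-line calculation: with the radiation-term weighting factor W = Δ/(Δ + γ), an error δRn in net radiation propagates into an ETo error of roughly W·δRn in the same energy-flux units. The slope Δ is computed below from the standard saturation-vapour-pressure formula and the psychrometric constant is taken at a typical value, so the numbers are only indicative.

        import math

        def delta_svp(t_c):
            """Slope of the saturation vapour pressure curve (kPa per degC)."""
            es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
            return 4098.0 * es / (t_c + 237.3) ** 2

        def eto_error_from_rn_error(rn_error_w_m2, t_c, gamma=0.066):
            """Approximate ETo error (W m-2 equivalent) caused by a Rn error."""
            w = delta_svp(t_c) / (delta_svp(t_c) + gamma)  # radiation term weight
            return w * rn_error_w_m2

        for t in (20.0, 30.0, 40.0):
            print(t, round(eto_error_from_rn_error(26.0, t), 1))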

  6. Absolute nuclear material assay

    Science.gov (United States)

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  7. Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems.

    Science.gov (United States)

    Sakhre, Vandana; Jain, Sanjeev; Sapkal, Vilas S; Agarwal, Dev P

    2015-01-01

    Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed, for a class of nonlinear dynamical systems. In this process, the weight connecting between the instar and outstar, that is, input-hidden and hidden-output layer, respectively, is adjusted by using Fuzzy Competitive Learning (FCL). FCL paradigm adopts the principle of learning, which is used to calculate Best Matched Node (BMN) which is proposed. This strategy offers a robust control of nonlinear dynamical systems. FCPN is compared with the existing network like Dynamic Network (DN) and Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth. It envisages that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems and multiple input and single output (MISO) and a single input and single output (SISO) gas furnace Box-Jenkins time series data.

  8. Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems

    Directory of Open Access Journals (Sweden)

    Vandana Sakhre

    2015-01-01

    Full Text Available Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed, for a class of nonlinear dynamical systems. In this process, the weight connecting between the instar and outstar, that is, input-hidden and hidden-output layer, respectively, is adjusted by using Fuzzy Competitive Learning (FCL). The FCL paradigm adopts the principle of learning, which is used to calculate the Best Matched Node (BMN), which is proposed. This strategy offers a robust control of nonlinear dynamical systems. FCPN is compared with existing networks like the Dynamic Network (DN) and Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth. It envisages that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems and multiple input and single output (MISO) and a single input and single output (SISO) gas furnace Box-Jenkins time series data.

  9. A New Hybrid Approach for Wind Speed Prediction Using Fast Block Least Mean Square Algorithm and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Ummuhan Basaran Filik

    2016-01-01

    Full Text Available A new hybrid wind speed prediction approach, which uses the fast block least mean square (FBLMS) algorithm and the artificial neural network (ANN) method, is proposed. FBLMS is an adaptive algorithm which has reduced complexity with a very fast convergence rate. A hybrid approach is proposed which uses two powerful methods: the FBLMS and ANN methods. In order to show the efficiency and accuracy of the proposed approach, seven years of real hourly collected wind speed data sets belonging to the Turkish State Meteorological Service for the Bozcaada and Eskisehir regions are used. Two different ANN structures are used for comparison with this approach. The first six years of data are handled as the training set; the remaining one year of hourly data is handled as test data. Mean absolute error (MAE) and root mean square error (RMSE) are used for performance evaluations. It is shown for various cases that the performance of the new hybrid approach gives better results than the different conventional ANN structures.

  10. Real-Time and Meter-Scale Absolute Distance Measurement by Frequency-Comb-Referenced Multi-Wavelength Interferometry

    Directory of Open Access Journals (Sweden)

    Guochao Wang

    2018-02-01

    Full Text Available We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He–Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.

  11. Prediction of matching condition for a microstrip subsystem using artificial neural network and adaptive neuro-fuzzy inference system

    Science.gov (United States)

    Salehi, Mohammad Reza; Noori, Leila; Abiri, Ebrahim

    2016-11-01

    In this paper, a subsystem consisting of a microstrip bandpass filter and a microstrip low noise amplifier (LNA) is designed for WLAN applications. The proposed filter has a small implementation area (49 mm2), small insertion loss (0.08 dB) and wide fractional bandwidth (FBW) (61%). To design the proposed LNA, compact microstrip cells, a field-effect transistor, and only a lumped capacitor are used. It has a low supply voltage and a low return loss (-40 dB) at the operating frequency. The matching condition of the proposed subsystem is predicted using subsystem analysis, an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). To design the proposed filter, the transmission matrix of the proposed resonator is obtained and analysed. The performance of the proposed ANN and ANFIS models is tested against the numerical data using four performance measures, namely the correlation coefficient (CC), the mean absolute error (MAE), the average percentage error (APE) and the root mean square error (RMSE). The obtained results show that these models are in good agreement with the numerical data, and a small error between the predicted values and the numerical solution is obtained.

  12. How accurate are pedotransfer functions for bulk density for Brazilian soils?

    Directory of Open Access Journals (Sweden)

    Raquel Stucchi Boschi

    Full Text Available The aim of this study was to evaluate the performance of pedotransfer functions (PTFs) available in the literature to estimate soil bulk density (ρb) in different regions of Brazil, using different metrics. The predictive capacity of 25 PTFs was evaluated using the mean absolute error (MAE), mean error (ME), root mean squared error (RMSE), coefficient of determination (R2) and the regression error characteristic (REC) curve. The models performed differently when comparing observed and estimated ρb values. In general, the PTFs showed a performance close to the mean value of the bulk density data, considered as the simplest possible estimation of an attribute and used as a parameter to compare the performance of existing models (null model). The models developed by Benites et al. (2007) (BEN-C) and by Manrique and Jones (1991) (M&J-B) presented the best results. The separation of data into two layers according to depth (0-10 cm and 10-30 cm) demonstrated better performance in the 10-30 cm layer. The REC curve allowed for a simple and visual evaluation of the PTFs.

  13. Data-Driven Machine-Learning Model in District Heating System for Heat Load Prediction: A Comparison Study

    Directory of Open Access Journals (Sweden)

    Fisnik Dalipi

    2016-01-01

    Full Text Available We present our data-driven supervised machine-learning (ML) model to predict the heat load for buildings in a district heating system (DHS). Even though ML has been used as an approach to heat load prediction in the literature, it is hard to select an approach that will qualify as a solution for our case, as existing solutions are quite problem specific. For that reason, we compared and evaluated three ML algorithms within a framework on operational data from a DH system in order to generate the required prediction model. The algorithms examined are Support Vector Regression (SVR), Partial Least Squares (PLS), and random forest (RF). We use the data collected from buildings at several locations for a period of 29 weeks. Concerning the accuracy of predicting the heat load, we evaluate the performance of the proposed algorithms using the mean absolute error (MAE), mean absolute percentage error (MAPE), and correlation coefficient. In order to determine which algorithm had the best accuracy, we conducted a performance comparison among these ML algorithms. The comparison of the algorithms indicates that, for DH heat load prediction, the SVR method presented in this paper is the most efficient of the three, also when compared to other methods found in the literature.
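
    As a minimal illustration of the SVR approach mentioned above, the snippet below fits a support vector regressor to a toy heat-load series and reports MAE and MAPE. The feature choice (outdoor temperature and hour of day), the hyperparameters and all numbers are placeholders, not the operational district-heating data.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics import mean_absolute_error, mean_absolute_percentage_error

        # placeholder features: [outdoor temperature (degC), hour of day]
        X = np.array([[-5, 7], [-2, 12], [0, 18], [3, 7],
                      [5, 12], [8, 18], [10, 7], [12, 12]])
        y = np.array([820, 700, 680, 560, 470, 430, 380, 330])  # heat load (kW, made up)

        model = SVR(kernel="rbf", C=100.0, epsilon=5.0)
        model.fit(X, y)

        pred = model.predict(X)
        print("MAE :", mean_absolute_error(y, pred))
        print("MAPE:", mean_absolute_percentage_error(y, pred))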

  14. Temporal Parameters Estimation for Wheelchair Propulsion Using Wearable Sensors

    Directory of Open Access Journals (Sweden)

    Manoela Ojeda

    2014-01-01

    Full Text Available Due to lower limb paralysis, individuals with spinal cord injury (SCI) rely on their upper limbs for mobility. The prevalence of upper extremity pain and injury is high among this population. We evaluated the performance of three triaxial accelerometers placed on the upper arm, wrist, and under the wheelchair, to estimate temporal parameters of wheelchair propulsion. Twenty-six participants with SCI were asked to push their wheelchair equipped with a SMARTWheel. The estimated stroke number was compared with the criterion from video observations and the estimated push frequency was compared with the criterion from the SMARTWheel. Mean absolute errors (MAE) and mean absolute percentage of error (MAPE) were calculated. Intraclass correlation coefficients and Bland-Altman plots were used to assess the agreement. Results showed reasonable accuracies, especially using the accelerometer placed on the upper arm, where the MAPE was 8.0% for stroke number and 12.9% for push frequency. The ICC was 0.994 for stroke number and 0.916 for push frequency. The wrist and seat accelerometers showed lower accuracy, with MAPEs for the stroke number of 10.8% and 13.4% and ICCs of 0.990 and 0.984, respectively. Results suggested that accelerometers could be an option for monitoring temporal parameters of wheelchair propulsion.

  15. Absolute transition probabilities in the NeI 3p-3s fine structure by beam-gas-dye laser spectroscopy

    International Nuclear Information System (INIS)

    Hartmetz, P.; Schmoranzer, H.

    1983-01-01

    The beam-gas-dye laser two-step excitation technique is further developed and applied to the direct measurement of absolute atomic transition probabilities in the NeI 3p-3s fine-structure transition array with a maximum experimental error of 5%. (orig.)

  16. Thermodynamics of negative absolute pressures

    International Nuclear Information System (INIS)

    Lukacs, B.; Martinas, K.

    1984-03-01

    The authors show that the possibility of negative absolute pressure can be incorporated into axiomatic thermodynamics, analogously to negative absolute temperature. There are examples of such systems (GUT, QCD) possessing negative absolute pressure in domains where it can be expected from thermodynamical considerations. (author)

  17. A comparison of multiple indicator kriging and area-to-point Poisson kriging for mapping patterns of herbivore species abundance in Kruger National Park, South Africa.

    Science.gov (United States)

    Kerry, Ruth; Goovaerts, Pierre; Smit, Izak P J; Ingram, Ben R

    Kruger National Park (KNP), South Africa, provides protected habitats for the unique animals of the African savannah. For the past 40 years, annual aerial surveys of herbivores have been conducted to aid management decisions based on (1) the spatial distribution of species throughout the park and (2) total species populations in a year. The surveys are extremely time consuming and costly. For many years, the whole park was surveyed, but in 1998 a transect survey approach was adopted. This is cheaper and less time consuming but leaves gaps in the data spatially. Also the distance method currently employed by the park only gives estimates of total species populations but not their spatial distribution. We compare the ability of multiple indicator kriging and area-to-point Poisson kriging to accurately map species distribution in the park. A leave-one-out cross-validation approach indicates that multiple indicator kriging makes poor estimates of the number of animals, particularly the few large counts, as the indicator variograms for such high thresholds are pure nugget. Poisson kriging was applied to the prediction of two types of abundance data: spatial density and proportion of a given species. Both Poisson approaches had standardized mean absolute errors (St. MAEs) of animal counts at least an order of magnitude lower than multiple indicator kriging. The spatial density, Poisson approach (1), gave the lowest St. MAEs for the most abundant species and the proportion, Poisson approach (2), did for the least abundant species. Incorporating environmental data into Poisson approach (2) further reduced St. MAEs.

  18. The juvenile face as a suitable age indicator in child pornography cases: a pilot study on the reliability of automated and visual estimation approaches.

    Science.gov (United States)

    Ratnayake, M; Obertová, Z; Dose, M; Gabriel, P; Bröker, H M; Brauckmann, M; Barkus, A; Rizgeliene, R; Tutkuviene, J; Ritz-Timme, S; Marasciuolo, L; Gibelli, D; Cattaneo, C

    2014-09-01

    In cases of suspected child pornography, the age of the victim represents a crucial factor for legal prosecution. The conventional methods for age estimation provide unreliable age estimates, particularly if teenage victims are concerned. In this pilot study, the potential of age estimation for screening purposes is explored for juvenile faces. In addition to a visual approach, an automated procedure is introduced, which has the ability to rapidly scan through large numbers of suspicious image data in order to trace juvenile faces. Age estimations were performed by experts, non-experts and the Demonstrator of a developed software on frontal facial images of 50 females aged 10-19 years from Germany, Italy, and Lithuania. To test the accuracy, the mean absolute error (MAE) between the estimates and the real ages was calculated for each examiner and the Demonstrator. The Demonstrator achieved the lowest MAE (1.47 years) for the 50 test images. Decreased image quality had no significant impact on the performance and classification results. The experts delivered slightly less accurate MAE (1.63 years). Throughout the tested age range, both the manual and the automated approach led to reliable age estimates within the limits of natural biological variability. The visual analysis of the face produces reasonably accurate age estimates up to the age of 18 years, which is the legally relevant age threshold for victims in cases of pedo-pornography. This approach can be applied in conjunction with the conventional methods for a preliminary age estimation of juveniles depicted on images.

  19. Neuromuscular function of the quadriceps muscle during isometric maximal, submaximal and submaximal fatiguing voluntary contractions in knee osteoarthrosis patients.

    Directory of Open Access Journals (Sweden)

    Anett Mau-Moeller

    Full Text Available Knee osteoarthrosis (KOA) is commonly associated with a dysfunction of the quadriceps muscle which contributes to alterations in motor performance. The underlying neuromuscular mechanisms of muscle dysfunction are not fully understood. The main objective of this study was to analyze how KOA affects neuromuscular function of the quadriceps muscle during different contraction intensities. The following parameters were assessed in 20 patients and 20 healthy controls: (i) joint position sense, i.e. position control (mean absolute error, MAE) at 30° and 50° of knee flexion, (ii) simple reaction time task performance, (iii) isometric maximal voluntary torque (IMVT) and root mean square of the EMG signal (RMS-EMG), (iv) torque control, i.e. accuracy (MAE), absolute fluctuation (standard deviation, SD), relative fluctuation (coefficient of variation, CV) and periodicity (mean frequency, MNF) of the torque signal at 20%, 40% and 60% IMVT, (v) EMG-torque relationship at 20%, 40% and 60% IMVT and (vi) performance fatigability, i.e. time to task failure (TTF) at 40% IMVT. Compared to the control group, the KOA group displayed: (i) significantly higher MAE of the angle signal at 30° (99.3%; P = 0.027) and 50° (147.9%; P < 0.001), (ii) no significant differences in reaction time, (iii) significantly lower IMVT (-41.6%; P = 0.001) and tendentially lower RMS-EMG of the rectus femoris (-33.7%; P = 0.054), (iv) tendentially higher MAE of the torque signal at 20% IMVT (65.9%; P = 0.068), significantly lower SD of the torque signal at all three torque levels and greater MNF at 60% IMVT (44.8%; P = 0.018), (v) significantly increased RMS-EMG of the vastus lateralis at 20% (70.8%; P = 0.003) and 40% IMVT (33.3%; P = 0.034), significantly lower RMS-EMG of the biceps femoris at 20% (-63.6%; P = 0.044) and 40% IMVT (-41.3%; P = 0.028) and tendentially lower at 60% IMVT (-24.3%; P = 0.075) and (vi) significantly shorter TTF (-51.1%; P = 0.049). KOA is not only associated with a deterioration of IMVT

  20. Magnetic Resonance–Based Automatic Air Segmentation for Generation of Synthetic Computed Tomography Scans in the Head Region

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Weili; Kim, Joshua P. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan (United States); Kadbi, Mo [Philips Healthcare, Cleveland, Ohio (United States); Movsas, Benjamin; Chetty, Indrin J. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan (United States); Glide-Hurst, Carri K., E-mail: churst2@hfhs.org [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan (United States)

    2015-11-01

    Purpose: To incorporate a novel imaging sequence for robust air and tissue segmentation using ultrashort echo time (UTE) phase images and to implement an innovative synthetic CT (synCT) solution as a first step toward MR-only radiation therapy treatment planning for brain cancer. Methods and Materials: Ten brain cancer patients were scanned with a UTE/Dixon sequence and other clinical sequences on a 1.0 T open magnet with simulation capabilities. Bone-enhanced images were generated from a weighted combination of water/fat maps derived from Dixon images and inverted UTE images. Automated air segmentation was performed using unwrapped UTE phase maps. Segmentation accuracy was assessed by calculating segmentation errors (true-positive rate, false-positive rate, and Dice similarity indices) using CT simulation (CT-SIM) as ground truth. The synCTs were generated using a voxel-based, weighted summation method incorporating T2, fluid attenuated inversion recovery (FLAIR), UTE1, and bone-enhanced images. Mean absolute error (MAE) characterized Hounsfield unit (HU) differences between synCT and CT-SIM. A dosimetry study was conducted, and differences were quantified using γ-analysis and dose-volume histogram analysis. Results: On average, true-positive rate and false-positive rate for the CT and MR-derived air masks were 80.8% ± 5.5% and 25.7% ± 6.9%, respectively. Dice similarity indices values were 0.78 ± 0.04 (range, 0.70-0.83). Full field of view MAE between synCT and CT-SIM was 147.5 ± 8.3 HU (range, 138.3-166.2 HU), with the largest errors occurring at bone–air interfaces (MAE 422.5 ± 33.4 HU for bone and 294.53 ± 90.56 HU for air). Gamma analysis revealed pass rates of 99.4% ± 0.04%, with acceptable treatment plan quality for the cohort. Conclusions: A hybrid MRI phase/magnitude UTE image processing technique was introduced that significantly improved bone and air contrast in MRI. Segmented air masks and bone-enhanced images were integrated
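
    The three segmentation scores reported above reduce to simple voxel counting on binary masks. The following sketch computes the true-positive rate, false-positive rate and Dice similarity index for an MR-derived air mask against a CT-derived reference; the tiny arrays stand in for real volumes and are only for illustration.

        import numpy as np

        def segmentation_scores(pred_mask, ref_mask):
            """TPR, FPR and Dice index for two binary masks of equal shape."""
            pred = np.asarray(pred_mask, bool)
            ref = np.asarray(ref_mask, bool)
            tp = np.logical_and(pred, ref).sum()
            fp = np.logical_and(pred, ~ref).sum()
            fn = np.logical_and(~pred, ref).sum()
            tn = np.logical_and(~pred, ~ref).sum()
            tpr = tp / (tp + fn)
            fpr = fp / (fp + tn)
            dice = 2.0 * tp / (2.0 * tp + fp + fn)
            return tpr, fpr, dice

        ct_air = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]])
        mr_air = np.array([[1, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
        print(segmentation_scores(mr_air, ct_air))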

  1. Magnetic Resonance–Based Automatic Air Segmentation for Generation of Synthetic Computed Tomography Scans in the Head Region

    International Nuclear Information System (INIS)

    Zheng, Weili; Kim, Joshua P.; Kadbi, Mo; Movsas, Benjamin; Chetty, Indrin J.; Glide-Hurst, Carri K.

    2015-01-01

    Purpose: To incorporate a novel imaging sequence for robust air and tissue segmentation using ultrashort echo time (UTE) phase images and to implement an innovative synthetic CT (synCT) solution as a first step toward MR-only radiation therapy treatment planning for brain cancer. Methods and Materials: Ten brain cancer patients were scanned with a UTE/Dixon sequence and other clinical sequences on a 1.0 T open magnet with simulation capabilities. Bone-enhanced images were generated from a weighted combination of water/fat maps derived from Dixon images and inverted UTE images. Automated air segmentation was performed using unwrapped UTE phase maps. Segmentation accuracy was assessed by calculating segmentation errors (true-positive rate, false-positive rate, and Dice similarity indices) using CT simulation (CT-SIM) as ground truth. The synCTs were generated using a voxel-based, weighted summation method incorporating T2, fluid attenuated inversion recovery (FLAIR), UTE1, and bone-enhanced images. Mean absolute error (MAE) characterized Hounsfield unit (HU) differences between synCT and CT-SIM. A dosimetry study was conducted, and differences were quantified using γ-analysis and dose-volume histogram analysis. Results: On average, true-positive rate and false-positive rate for the CT and MR-derived air masks were 80.8% ± 5.5% and 25.7% ± 6.9%, respectively. Dice similarity indices values were 0.78 ± 0.04 (range, 0.70-0.83). Full field of view MAE between synCT and CT-SIM was 147.5 ± 8.3 HU (range, 138.3-166.2 HU), with the largest errors occurring at bone–air interfaces (MAE 422.5 ± 33.4 HU for bone and 294.53 ± 90.56 HU for air). Gamma analysis revealed pass rates of 99.4% ± 0.04%, with acceptable treatment plan quality for the cohort. Conclusions: A hybrid MRI phase/magnitude UTE image processing technique was introduced that significantly improved bone and air contrast in MRI. Segmented air masks and bone-enhanced images were integrated

  2. A wavelet-coupled support vector machine model for forecasting global incident solar radiation using limited meteorological dataset

    International Nuclear Information System (INIS)

    Deo, Ravinesh C.; Wen, Xiaohu; Qi, Feng

    2016-01-01

    Highlights: • A forecasting model for short- and long-term global incident solar radiation (Rn) has been developed. • The support vector machine and the discrete wavelet transformation algorithm have been integrated. • The precision of the wavelet-coupled hybrid model is assessed using several prediction score metrics. • The proposed model is an appealing tool for forecasting Rn in the present study region. - Abstract: A solar radiation forecasting model can be utilized as a scientific contrivance for investigating the future viability of solar energy potentials. In this paper, a wavelet-coupled support vector machine (W-SVM) model was adopted to forecast global incident solar radiation based on the sunshine hours (St), minimum temperature (Tmin), maximum temperature (Tmax), wind speed (U), evaporation (E) and precipitation (P) as the predictor variables. To ascertain conclusive results, the merit of the W-SVM was benchmarked against the classical SVM model. For daily forecasting, sixteen months of data (01-March-2014 to 30-June-2015) partitioned into train (65%) and test (35%) sets for the three metropolitan stations (Brisbane City, Cairns Aero and Townsville Aero) were utilized. Data were decomposed into their wavelet sub-series by the discrete wavelet transformation algorithm and summed up to create new series with one approximation and four levels of detail using the Daubechies-2 mother wavelet. For daily forecasting, six model scenarios were formulated where the number of inputs was increased and the forecast was assessed by statistical metrics (correlation coefficient r; Willmott's index d; Nash-Sutcliffe coefficient ENS; peak deviation Pdv), distribution statistics and prediction errors (mean absolute error MAE; root mean square error RMSE; mean absolute percentage error MAPE; relative root mean square error RRMSE). Results for daily forecasts showed that the W-SVM model outperformed the classical SVM model for optimum input combinations. A sensitivity
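
    The decomposition step described above can be reproduced with a discrete wavelet transform: the series is split into one approximation and four detail sub-series using a Daubechies-2 mother wavelet, and the sub-series (or series rebuilt from them) are then passed to the regressor. The sketch below shows only that decomposition with PyWavelets on a synthetic series; the W-SVM predictor set-up itself is not reproduced.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        series = np.sin(np.linspace(0, 20, 256)) + 0.2 * rng.standard_normal(256)

        # one approximation (cA4) and four detail levels (cD4..cD1), Daubechies-2
        coeffs = pywt.wavedec(series, wavelet="db2", level=4)
        cA4, cD4, cD3, cD2, cD1 = coeffs
        print([len(c) for c in coeffs])

        # rebuild, e.g., the approximation component at the original length
        approx_only = [cA4] + [np.zeros_like(c) for c in coeffs[1:]]
        approx_series = pywt.waverec(approx_only, wavelet="db2")[:len(series)]
        print(approx_series.shape)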

  3. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

    In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations regarding 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison measurement results. The distribution of random errors and systematic errors in precipitation measurements is studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference of observations implemented by the operational gauge and pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out only by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.

  4. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    Science.gov (United States)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and the generalized error distribution (GED) with zero mean to fit the distribution of model errors after BC. The BC-GED model can unify all recent distance-based goodness-of-fit indicators, and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that the model errors follow the Gaussian distribution and the Laplace distribution with zero mean, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted by the BC-GED model; e.g., the sensitivity to high flow of goodness-of-fit indicators with a large power of model errors results from the low probability of large model errors in the assumed distribution of these indicators. In order to assess the effect of the parameters of the BC-GED model (i.e., the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and model simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated by the class β > 1 of distance-based goodness-of-fit indicators captures high flow very well but mimics baseflow badly, whereas calibration by the class β ≤ 1 mimics baseflow very well, because the larger the value of β, the greater emphasis is put on
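
    The unification claimed above is easy to see in code: after a Box-Cox transformation of the observed and simulated values, the calibration objective is the mean of |error|^β, which reduces to an MSE-type criterion for β = 2 and an MAE-type criterion for β = 1. The sketch below writes out that objective only; it is a conceptual illustration with invented flows, not the full BC-GED likelihood with its normalisation terms.

        import numpy as np

        def box_cox(y, lam):
            """Box-Cox transformation; natural log in the limit lam -> 0."""
            y = np.asarray(y, float)
            return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

        def bc_ged_objective(observed, simulated, lam=0.3, beta=2.0):
            """Mean |transformed error|**beta; beta=2 ~ MSE-like, beta=1 ~ MAE-like."""
            err = box_cox(simulated, lam) - box_cox(observed, lam)
            return np.mean(np.abs(err) ** beta)

        obs = np.array([0.5, 1.2, 3.4, 10.0, 25.0])   # observed streamflow (made up)
        sim = np.array([0.6, 1.0, 3.9, 9.0, 28.0])    # simulated streamflow (made up)
        for beta in (1.0, 2.0):
            print(beta, bc_ged_objective(obs, sim, beta=beta))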

  5. The correction of vibration in frequency scanning interferometry based absolute distance measurement system for dynamic measurements

    Science.gov (United States)

    Lu, Cheng; Liu, Guodong; Liu, Bingguo; Chen, Fengdong; Zhuang, Zhitao; Xu, Xinke; Gan, Yu

    2015-10-01

    Absolute distance measurement systems are of significant interest in the field of metrology; they could improve the manufacturing efficiency and accuracy of large assemblies in fields such as aircraft construction, automotive engineering, and the production of modern windmill blades. Frequency scanning interferometry demonstrates noticeable advantages as an absolute distance measurement system: it has high precision and does not depend on a cooperative target. In this paper, the influence of inevitable vibration in the frequency scanning interferometry based absolute distance measurement system is analyzed. The distance spectrum is broadened by the Doppler effect caused by vibration, which brings in a measurement error more than 10³ times larger than the change of the optical path difference. In order to decrease the influence of vibration, the changes of the optical path difference are monitored by a frequency stabilized laser, which runs parallel to the frequency scanning interferometry. The experiment has verified the effectiveness of this method.

  6. Use of the D-R model to define trends in the emergence of Ceftazidime-resistant Escherichia coli in China.

    Directory of Open Access Journals (Sweden)

    Fan Ding

    Full Text Available OBJECTIVE: To assess the efficacy of the D-R model for defining trends in the appearance of Ceftazidime-resistant Escherichia coli. METHODS: Actual data related to the manifestation of Ceftazidime-resistant E. coli spanning the years 1996-2009 were collected from the China National Knowledge Internet. These data originated from 430 publications encompassing 1004 citations of resistance. The GM(1,1) and the novel D-R models were used to fit current data and, from this, predict trends in the appearance of the drug-resistant phenotype. The results were evaluated by the Relative Standard Error (RSE), Mean Absolute Deviation (MAD) and Mean Absolute Error (MAE). RESULTS: Results from the D-R model showed a rapid increase in the appearance of Ceftazidime-resistant E. coli in this region of the world. These results were considered accurate based upon the minor values calculated for RSE, MAD and MAE, and were equivalent to or better than those generated by the GM(1,1) model. CONCLUSION: The D-R model, which was originally created to define trends in the transmission of swine viral diseases, can be adapted to evaluating trends in the appearance of Ceftazidime-resistant E. coli. Using only a limited amount of data to initiate the study, our predictions closely mirrored the changes in drug resistance rates, which showed a steady increase through 2005, a decrease between 2005 and 2008, and a dramatic inflection point and abrupt increase beginning in 2008. This is consistent with a resistance profile where changes in drug intervention temporarily delayed the upward trend in the appearance of the resistant phenotype; however, resistance quickly resumed its upward momentum in 2008 and this change was better predicted using the D-R model. Additional work is needed to determine if this pattern of "increase-control-increase" is indicative of Ceftazidime-resistant E. coli or can be generally ascribed to bacteria acquiring resistance to drugs in the absence of alternative

  7. WE-AB-207A-02: John’s Equation Based Consistency Condition and Incomplete Projection Restoration Upon Circular Orbit CBCT

    International Nuclear Information System (INIS)

    Ma, J; Qi, H; Wu, S; Xu, Y; Zhou, L; Yan, H

    2016-01-01

    Purpose: In transmitted X-ray tomography imaging, projections are sometimes incomplete due to a variety of reasons, such as geometry inaccuracy, defective detector cells, etc. To address this issue, we have derived a direct consistency condition based on John's equation, and proposed a method to effectively restore incomplete projections based on this consistency condition. Methods: Through parameter substitutions, we have derived a direct consistency condition equation from John's equation, in which the left side is only the projection derivative with respect to view and the right side contains projection derivatives with respect to the other geometrical parameters. Based on this consistency condition, a projection restoration method is proposed, which includes five steps: 1) forward projecting the reconstructed image and using linear interpolation to estimate the incomplete projections as the initial result; 2) performing a Fourier transform on the projections; 3) restoring the incomplete frequency data using the consistency condition equation; 4) performing an inverse Fourier transform; 5) repeating steps 2)-4) until our criteria are met to terminate the iteration. Results: A beam-blocking-based scatter correction case and a bad-pixel correction case were used to demonstrate the efficacy and robustness of our restoration method. The mean absolute error (MAE), signal-to-noise ratio (SNR) and mean square error (MSE) were employed as our evaluation metrics for the reconstructed images. For the scatter correction case, the MAE is reduced from 63.3% to 71.7% with 4 iterations. Compared with the existing Patch's method, the MAE of our method is further reduced by 8.72%. For the bad-pixel case, the SNR of the reconstructed image obtained by our method is increased from 13.49% to 21.48%, with the MSE being decreased by 45.95%, compared with the linear interpolation method. Conclusion: Our studies have demonstrated that our restoration method based on the new consistency condition can effectively restore incomplete projections

  8. Daily Suspended Sediment Discharge Prediction Using Multiple Linear Regression and Artificial Neural Network

    Science.gov (United States)

    Uca; Toriman, Ekhwan; Jaafar, Othman; Maru, Rosmini; Arfan, Amal; Saleh Ahmar, Ansari

    2018-01-01

    Prediction of suspended sediment discharge in a catchment area is very important because it can be used to evaluate erosion hazard, manage water resources and water quality, support hydrological project management (dams, reservoirs, and irrigation), and determine the extent of damage that has occurred in the catchment. Multiple linear regression analysis and artificial neural networks can be used to predict daily suspended sediment discharge. The regression analysis uses the least-squares method, whereas the artificial neural networks use a Radial Basis Function (RBF) network and a feedforward multilayer perceptron with three learning algorithms, namely Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG) and the Broyden-Fletcher-Goldfarb-Shanno Quasi-Newton method (BFGS). The number of neurons in the hidden layer ranges from three to sixteen, while the output layer has only one neuron because there is only one output target. Evaluated by the mean absolute error (MAE), root mean square error (RMSE), coefficient of determination (R2) and coefficient of efficiency (CE), the multiple linear regression (MLRg) Model 2 (six independent input variables) has the lowest MAE and RMSE (0.0000002 and 13.6039) and the highest R2 and CE (0.9971 and 0.9971). When compared with LM, SCG and RBF, the BFGS model with structure 3-7-1 is the more accurate for predicting suspended sediment discharge in the Jenderam catchment. Its performance in the testing process, with MAE and RMSE of 13.5769 and 17.9011, is the smallest, while its R2 and CE of 0.9999 and 0.9998 are the highest, compared with the other BFGS Quasi-Newton models (6-3-1, 9-10-1 and 12-12-1). Based on the performance statistics, MLRg, LM, SCG, BFGS and RBF are suitable and accurate for prediction by modeling the non-linear, complex behavior of suspended sediment responses to rainfall, water depth and discharge. In the comparison between the artificial neural networks (ANN) and MLRg, the MLRg Model 2 accurately predicts suspended sediment discharge (kg
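
    The four evaluation metrics named above can be computed in a few lines. The following is a minimal sketch (not the authors' code), assuming `obs` and `pred` are NumPy arrays of observed and predicted daily suspended sediment discharge.

      import numpy as np

      def evaluate(obs, pred):
          err = pred - obs
          mae = np.mean(np.abs(err))                      # mean absolute error
          rmse = np.sqrt(np.mean(err ** 2))               # root mean square error
          r2 = np.corrcoef(obs, pred)[0, 1] ** 2          # coefficient of determination
          ce = 1 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe coefficient of efficiency
          return mae, rmse, r2, ce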

  9. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering

    Science.gov (United States)

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293

  10. Stellar Atmospheric Parameterization Based on Deep Learning

    Science.gov (United States)

    Pan, Ru-yang; Li, Xiang-ru

    2017-07-01

    Deep learning is a typical learning method widely studied in the fields of machine learning, pattern recognition, and artificial intelligence. This work investigates the problem of stellar atmospheric parameterization by constructing a deep neural network with five layers, and the node number in each layer of the network is respectively 3821-500-100-50-1. The proposed scheme is verified on both the real spectra measured by the Sloan Digital Sky Survey (SDSS) and the theoretic spectra computed with the Kurucz's New Opacity Distribution Function (NEWODF) model, to make an automatic estimation for three physical parameters: the effective temperature (Teff), surface gravitational acceleration (lg g), and metal abundance [Fe/H]. The results show that the stacked autoencoder deep neural network has a better accuracy for the estimation. On the SDSS spectra, the mean absolute errors (MAEs) are 79.95 for Teff/K, 0.0058 for lg(Teff/K), 0.1706 for lg(g/(cm·s-2)), and 0.1294 dex for [Fe/H], respectively; on the theoretic spectra, the MAEs are 15.34 for Teff/K, 0.0011 for lg(Teff/K), 0.0214 for lg(g/(cm·s-2)), and 0.0121 dex for [Fe/H], respectively.

  11. Computational Depth of Anesthesia via Multiple Vital Signs Based on Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Muammar Sadrawi

    2015-01-01

    Full Text Available This study evaluated a depth of anesthesia (DoA) index using artificial neural networks (ANN) as the modeling technique. Data from 63 patients in total are addressed, 17 for modeling and 46 for testing, respectively. Empirical mode decomposition (EMD) is utilized to separate the electroencephalography (EEG) signal from the noise. The filtered EEG signal is subsequently processed to obtain a sample entropy index for every 5-second segment. This index is then combined with the mean values of other vital signs, that is, electromyography (EMG), heart rate (HR), pulse, systolic blood pressure (SBP), diastolic blood pressure (DBP), and signal quality index (SQI), to form the input for evaluating the DoA index. The scores of 5 doctors are averaged to obtain the output index. The mean absolute error (MAE) is utilized for performance evaluation. 10-fold cross-validation is performed in order to generalize the model. The ANN model is compared with the bispectral index (BIS). The results show that the ANN is able to produce a lower MAE than BIS. For the correlation coefficient, the ANN also has a higher value than BIS when tested on the 46-patient testing data. Sensitivity analysis and cross-validation are applied as well. The results indicate that EMG is the most influential parameter.

  12. Automated absolute activation analysis with californium-252 sources

    International Nuclear Information System (INIS)

    MacMurdo, K.W.; Bowman, W.W.

    1978-09-01

    A 100-mg 252Cf neutron activation analysis facility is used routinely at the Savannah River Laboratory for multielement analysis of many solid and liquid samples. An absolute analysis technique converts counting data directly to elemental concentration without the use of classical comparative standards and flux monitors. With the totally automated pneumatic sample transfer system, cyclic irradiation-decay-count regimes can be pre-selected for up to 40 samples, and samples can be analyzed with the facility unattended. An automatic data control system starts and stops a high-resolution gamma-ray spectrometer and/or a delayed-neutron detector; the system also stores data and controls output modes. Gamma ray data are reduced by three main programs in the IBM 360/195 computer: the 4096-channel spectrum and pertinent experimental timing, counting, and sample data are stored on magnetic tape; the spectrum is then reduced to a list of significant photopeak energies, integrated areas, and their associated statistical errors; and the third program assigns gamma ray photopeaks to the appropriate neutron activation product(s) by comparing photopeak energies to tabulated gamma ray energies. Photopeak areas are then converted to elemental concentration by using experimental timing and sample data, calculated elemental neutron capture rates, absolute detector efficiencies, and absolute spectroscopic decay data. Calculational procedures have been developed so that fissile material can be analyzed by cyclic neutron activation and delayed-neutron counting procedures. These calculations are based on a 6 half-life group model of delayed neutron emission; calculations include corrections for delayed neutron interference from 17O. Detection sensitivities of 239Pu were demonstrated with 15-g samples at a throughput of up to 140 per day. Over 40 elements can be detected at the sub-ppm level.

  13. Danish Towns during Absolutism

    DEFF Research Database (Denmark)

    This anthology, No. 4 in the Danish Urban Studies Series, presents in English recent significant research on Denmark's urban development during the Age of Absolutism, 1660-1848, and features 13 articles written by leading Danish urban historians. The years of Absolutism were marked by a general...

  14. Comparative Time Series Analysis of Aerosol Optical Depth over Sites in United States and China Using ARIMA Modeling

    Science.gov (United States)

    Li, X.; Zhang, C.; Li, W.

    2017-12-01

    Long-term spatiotemporal analysis and modeling of aerosol optical depth (AOD) distribution is of paramount importance to study radiative forcing, climate change, and human health. This study is focused on the trends and variations of AOD over six stations located in the United States and China during 2003 to 2015, using satellite-retrieved Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 retrievals and ground measurements derived from the Aerosol Robotic NETwork (AERONET). An autoregressive integrated moving average (ARIMA) model is applied to simulate and predict AOD values. The R2, adjusted R2, Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Bayesian Information Criterion (BIC) are used as indices to select the best fitted model. Results show that there is a persistent decreasing trend in AOD for both MODIS data and AERONET data over three stations. Monthly and seasonal AOD variations reveal consistent aerosol patterns over stations along mid-latitudes. Regional differences impacted by climatology and land cover types are observed for the selected stations. Statistical validation of time series models indicates that the non-seasonal ARIMA model performs better for AERONET AOD data than for MODIS AOD data over most stations, suggesting the method works better for data with higher quality. By contrast, the seasonal ARIMA model reproduces the seasonal variations of MODIS AOD data much more precisely. Overall, the reasonably predicted results indicate the applicability and feasibility of the stochastic ARIMA modeling technique to forecast future and missing AOD values.
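
    As an illustration of the model-selection step described above (not the authors' code), a seasonal ARIMA order can be chosen for a monthly AOD series by minimising BIC with statsmodels; `aod` is assumed to be a pandas Series with a monthly DatetimeIndex, and the candidate order ranges are assumptions.

      import itertools
      from statsmodels.tsa.arima.model import ARIMA

      best_bic, best_order = float("inf"), None
      for p, d, q in itertools.product(range(3), range(2), range(3)):
          try:
              fit = ARIMA(aod, order=(p, d, q), seasonal_order=(1, 0, 1, 12)).fit()
              if fit.bic < best_bic:
                  best_bic, best_order = fit.bic, (p, d, q)
          except Exception:
              continue  # skip candidate orders that fail to converge
      # refit the winning order and forecast the next 12 months
      forecast = ARIMA(aod, order=best_order, seasonal_order=(1, 0, 1, 12)).fit().forecast(steps=12)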

  15. PM10 Analysis for Three Industrialized Areas using Extreme Value

    International Nuclear Information System (INIS)

    Hasfazilah Ahmat; Ahmad Shukri Yahaya; Nor Azam Ramli; Hasfazilah Ahmat

    2015-01-01

    One of the concerns of air pollution studies is to compute the concentrations of one or more pollutant species in space and time in relation to the independent variables, for instance emissions into the atmosphere, meteorological factors and parameters. One of the most significant statistical disciplines developed for the applied sciences and many other disciplines over the last few decades is the extreme value theory (EVT). This study assesses the use of the extreme value distributions of the two-parameter Gumbel, two- and three-parameter Weibull, Generalized Extreme Value (GEV) and two- and three-parameter Generalized Pareto Distribution (GPD) on the maximum concentration of daily PM10 data recorded in the years 2010-2012 in Pasir Gudang, Johor; Bukit Rambai, Melaka; and Nilai, Negeri Sembilan. Parameters for all distributions are estimated using the Method of Moments (MOM) and the Maximum Likelihood Estimator (MLE). Six performance indicators, namely the accuracy measures, which include predictive accuracy (PA), Coefficient of Determination (R2) and Index of Agreement (IA), and the error measures, which consist of Root Mean Square Error (RMSE), Mean Absolute Error (MAE) and Normalized Absolute Error (NAE), are used to find the goodness-of-fit of the distribution. The best distribution is selected based on the highest accuracy measures and the smallest error measures. The results showed that the GEV is the best fit for the daily maximum concentration of PM10 for all monitoring stations. The analysis also demonstrates that the estimated numbers of days in which the concentration of PM10 exceeded the Malaysian Ambient Air Quality Guidelines (MAAQG) of 150 μg/m3 are between ½ and 1½ days. (author)
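
    A rough sketch of the GEV step described above (not the study's code): fit the distribution to daily-maximum PM10 concentrations by maximum likelihood and estimate the probability of exceeding the 150 μg/m3 guideline. `pm10_max` is assumed to be a 1-D NumPy array of daily maxima.

      import numpy as np
      from scipy.stats import genextreme

      shape, loc, scale = genextreme.fit(pm10_max)        # maximum likelihood parameter estimates
      p_exceed = genextreme.sf(150.0, shape, loc, scale)  # P(PM10 > 150)
      expected_days = p_exceed * 365.0                    # expected exceedance days per year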

  16. Linear and nonlinear dynamic systems in financial time series prediction

    Directory of Open Access Journals (Sweden)

    Salim Lahmiri

    2012-10-01

    Full Text Available Autoregressive moving average (ARMA) process and dynamic neural networks, namely the nonlinear autoregressive moving average with exogenous inputs (NARX), are compared by evaluating their ability to predict financial time series; for instance the S&P500 returns. Two classes of ARMA are considered. The first one is the standard ARMA model which is a linear static system. The second one uses Kalman filter (KF) to estimate and predict ARMA coefficients. This model is a linear dynamic system. The forecasting ability of each system is evaluated by means of mean absolute error (MAE) and mean absolute deviation (MAD) statistics. Simulation results indicate that the ARMA-KF system performs better than the standard ARMA alone. Thus, introducing dynamics into the ARMA process improves the forecasting accuracy. In addition, the ARMA-KF outperformed the NARX. This result may suggest that the linear component found in the S&P500 return series is more dominant than the nonlinear part. In sum, we conclude that introducing dynamics into the ARMA process provides an effective system for S&P500 time series prediction.

  17. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  18. Parameter Optimisation and Uncertainty Analysis in Visual MODFLOW based Flow Model for predicting the groundwater head in an Eastern Indian Aquifer

    Science.gov (United States)

    Mohanty, B.; Jena, S.; Panda, R. K.

    2016-12-01

    The overexploitation of groundwater has led to the abandonment of several shallow tube wells in the study basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is indispensable for the effective planning and management of the water resources. The basic intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW Flex 2014.2 package and successfully calibrate and validate the model using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (McMC) techniques were implemented. Results from the two techniques were compared and their advantages and disadvantages were analysed. The Nash-Sutcliffe coefficient (NSE), Coefficient of Determination (R2), Mean Absolute Error (MAE), Mean Percent Deviation (Dv) and Root Mean Squared Error (RMSE) were adopted as criteria of model evaluation during calibration and validation of the developed model. The NSE, R2, MAE, Dv and RMSE values for the groundwater flow model during calibration and validation were in the acceptable range. Also, the McMC technique was able to provide more reasonable results than SUFI-2. The calibrated and validated model will be useful to identify the aquifer properties, analyse the groundwater flow dynamics and the change in groundwater levels in future forecasts.

  19. Assessing the accuracy of ANFIS, EEMD-GRNN, PCR, and MLR models in predicting PM2.5

    Science.gov (United States)

    Ausati, Shadi; Amanollahi, Jamil

    2016-10-01

    Since Sanandaj is considered one of the polluted cities of Iran, prediction of any type of pollution, especially of the suspended particles of PM2.5, which are the cause of many diseases, could contribute to the health of society through timely announcements before PM2.5 increases. In order to predict the PM2.5 concentration in the Sanandaj air, hybrid models consisting of an ensemble empirical mode decomposition and general regression neural network (EEMD-GRNN), an Adaptive Neuro-Fuzzy Inference System (ANFIS), principal component regression (PCR), and a linear model, namely multiple linear regression (MLR), were used. In these models the data of suspended particles of PM2.5 were the dependent variable, and the data related to air quality, including PM2.5, PM10, SO2, NO2, CO and O3, and meteorological data, including average minimum temperature (Min T), average maximum temperature (Max T), average atmospheric pressure (AP), daily total precipitation (TP), daily relative humidity level of the air (RH) and daily wind speed (WS), for the year 2014 in Sanandaj were the independent variables. Among the used models, the EEMD-GRNN model, with values of R2 = 0.90, root mean square error (RMSE) = 4.9218 and mean absolute error (MAE) = 3.4644 in the training phase and with values of R2 = 0.79, RMSE = 5.0324 and MAE = 3.2565 in the testing phase, exhibited the best performance in predicting this phenomenon. It can be concluded that the hybrid models give more accurate predictions of PM2.5 concentration than the linear model.

  20. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
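
    The two ratio-based metrics recommended above have compact definitions. A hedged sketch, assuming strictly positive observed and predicted values in NumPy arrays `obs` and `pred`:

      import numpy as np

      def median_log_accuracy_ratio(obs, pred):
          # Bias measure: median of ln(pred/obs); zero means no bias.
          return np.median(np.log(pred / obs))

      def median_symmetric_accuracy(obs, pred):
          # Accuracy measure in percent; treats over- and under-prediction symmetrically.
          return 100.0 * (np.exp(np.median(np.abs(np.log(pred / obs)))) - 1.0)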

  1. Machine learning techniques in disease forecasting: a case study on rice blast prediction

    Directory of Open Access Journals (Sweden)

    Kapoor Amar S

    2006-11-01

    Full Text Available Abstract Background Diverse modeling approaches viz. neural networks and multiple regression have been followed to date for disease prediction in plant populations. However, due to their inability to predict value of unknown data points and longer training times, there is need for exploiting new prediction softwares for better understanding of plant-pathogen-environment relationships. Further, there is no online tool available which can help the plant researchers or farmers in timely application of control measures. This paper introduces a new prediction approach based on support vector machines for developing weather-based prediction models of plant diseases. Results Six significant weather variables were selected as predictor variables. Two series of models (cross-location and cross-year) were developed and validated using a five-fold cross validation procedure. For cross-year models, the conventional multiple regression (REG) approach achieved an average correlation coefficient (r) of 0.50, which increased to 0.60 and percent mean absolute error (%MAE) decreased from 65.42 to 52.24 when back-propagation neural network (BPNN) was used. With generalized regression neural network (GRNN), the r increased to 0.70 and %MAE also improved to 46.30, which further increased to r = 0.77 and %MAE = 36.66 when support vector machine (SVM) based method was used. Similarly, cross-location validation achieved r = 0.48, 0.56 and 0.66 using REG, BPNN and GRNN respectively, with their corresponding %MAE as 77.54, 66.11 and 58.26. The SVM-based method outperformed all the three approaches by further increasing r to 0.74 with improvement in %MAE to 44.12. Overall, this SVM-based prediction approach will open new vistas in the area of forecasting plant diseases of various crops. Conclusion Our case study demonstrated that SVM is better than existing machine learning techniques and conventional REG approaches in forecasting plant diseases. In this direction, we have also
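
    In the spirit of the study above, a weather-based SVM regression with five-fold cross-validation might look like the following sketch (not the paper's implementation); `X` (six weather variables) and `y` (disease severity) are assumed NumPy arrays, and the kernel and hyperparameters are illustrative.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_predict

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
      pred = cross_val_predict(model, X, y, cv=5)               # five-fold cross-validated predictions

      r = np.corrcoef(y, pred)[0, 1]                            # correlation coefficient
      pct_mae = 100.0 * np.mean(np.abs(pred - y)) / np.mean(y)  # one way to express %MAE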

  2. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.

  3. DI3 - A New Procedure for Absolute Directional Measurements

    Directory of Open Access Journals (Sweden)

    A Geese

    2011-06-01

    Full Text Available The standard observatory procedure for determining a geomagnetic field's declination and inclination absolutely is the DI-flux measurement. The instrument consists of a non-magnetic theodolite equipped with a single-axis fluxgate magnetometer. Additionally, a scalar magnetometer is needed to provide all three components of the field. Using only 12 measurement steps, all systematic errors can be accounted for, but if only one of the readings is wrong, the whole measurement has to be rejected. We use a three-component sensor on top of the theodolite's telescope. By performing more measurement steps, we gain much better control of the whole procedure: as the magnetometer can be fully calibrated by rotating about two independent directions, every combined reading of magnetometer output and theodolite angles provides the absolute field vector. We predefined a set of angle positions that the observer has to try to achieve. To further simplify the measurement procedure, the observer is guided by a pocket PC, on which he has only to confirm the theodolite position. The magnetic field is then stored automatically, together with the horizontal and vertical angles. The DI3 measurement is periodically performed at the Niemegk Observatory, allowing for a direct comparison with the traditional measurements.

  4. On Selection of the Probability Distribution for Representing the Maximum Annual Wind Speed in East Cairo, Egypt

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh. I.; El-Hemamy, S.T.

    2013-01-01

    The main objective of this paper is to identify an appropriate probability model and the best plotting position formula to represent the maximum annual wind speed in east Cairo. This model can be used to estimate the extreme wind speed and return period at a particular site as well as to determine the radioactive release distribution in case of an accident at a nuclear power plant. Wind speed probabilities can be estimated by using probability distributions. An accurate determination of the probability distribution for maximum wind speed data is very important for estimating the extreme value. The probability plots of the maximum annual wind speed (MAWS) in east Cairo are fitted to six major statistical distributions, namely Gumbel, Weibull, Normal, Log-Normal, Logistic and Log-Logistic, while eight plotting positions of Hosking and Wallis, Hazen, Gringorten, Cunnane, Blom, Filliben, Benard and Weibull are used for determining their exceedance probabilities. A proper probability distribution for representing the MAWS is selected by the statistical test criteria in frequency analysis. Therefore, the best plotting position formula which can be used to select the appropriate probability model representing the MAWS data must be determined. The statistical test criteria, which consist of the probability plot correlation coefficient (PPCC), the root mean square error (RMSE), the relative root mean square error (RRMSE) and the maximum absolute error (MAE), are used to select the appropriate plotting position and distribution. The data obtained show that the maximum annual wind speed in east Cairo varies from 44.3 km/h to 96.1 km/h over a duration of 39 years. The Weibull plotting position combined with the Normal distribution gave the best fit and the most reliable and accurate predictions of the wind speed in the study area, having the highest value of PPCC and the lowest values of RMSE, RRMSE and MAE.
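
    The plotting-position and PPCC steps described above can be sketched as follows (assumed names, not the paper's code), here for the Weibull plotting position i/(n+1) tested against a fitted Normal distribution; `maws` is an array of maximum annual wind speeds.

      import numpy as np
      from scipy import stats

      x = np.sort(maws)
      n = len(x)
      p = np.arange(1, n + 1) / (n + 1.0)                # Weibull plotting position i/(n+1)
      quantiles = stats.norm.ppf(p, *stats.norm.fit(x))  # quantiles of the fitted distribution
      ppcc = np.corrcoef(x, quantiles)[0, 1]             # values near 1 indicate a good fit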

  5. Stepped-wedge cluster randomised controlled trial to assess the effectiveness of an electronic medication management system to reduce medication errors, adverse drug events and average length of stay at two paediatric hospitals: a study protocol.

    Science.gov (United States)

    Westbrook, J I; Li, L; Raban, M Z; Baysari, M T; Mumford, V; Prgomet, M; Georgiou, A; Kim, T; Lake, R; McCullagh, C; Dalla-Pozza, L; Karnon, J; O'Brien, T A; Ambler, G; Day, R; Cowell, C T; Gazarian, M; Worthington, R; Lehmann, C U; White, L; Barbaric, D; Gardo, A; Kelly, M; Kennedy, P

    2016-10-21

    Medication errors are the most frequent cause of preventable harm in hospitals. Medication management in paediatric patients is particularly complex and consequently the potential for harm is greater than in adults. Electronic medication management (eMM) systems are heralded as a highly effective intervention to reduce adverse drug events (ADEs), yet internationally evidence of their effectiveness in paediatric populations is limited. This study will assess the effectiveness of an eMM system to reduce medication errors, ADEs and length of stay (LOS). The study will also investigate system impact on clinical work processes. A stepped-wedge cluster randomised controlled trial (SWCRCT) will measure changes pre-eMM and post-eMM system implementation in prescribing and medication administration error (MAE) rates, potential and actual ADEs, and average LOS. In stage 1, 8 wards within the first paediatric hospital will be randomised to receive the eMM system 1 week apart. In stage 2, the second paediatric hospital will randomise implementation of a modified eMM and outcomes will be assessed. Prescribing errors will be identified through record reviews, and MAEs through direct observation of nurses and record reviews. Actual and potential severity will be assigned. Outcomes will be assessed at the patient level using mixed models, taking into account correlation of admissions within wards and multiple admissions for the same patient, with adjustment for potential confounders. Interviews and direct observation of clinicians will investigate the effects of the system on workflow. Data from site 1 will be used to develop improvements in the eMM, which will be implemented at site 2, where the SWCRCT design will be repeated (stage 2). The research has been approved by the Human Research Ethics Committee of the Sydney Children's Hospitals Network and Macquarie University. Results will be reported through academic journals and seminar and conference presentations. Australian New Zealand

  6. Mapping of Daily Mean Air Temperature in Agricultural Regions Using Daytime and Nighttime Land Surface Temperatures Derived from TERRA and AQUA MODIS Data

    Directory of Open Access Journals (Sweden)

    Ran Huang

    2015-07-01

    Full Text Available Air temperature is one of the most important factors in crop growth monitoring and simulation. In the present study, we estimated and mapped daily mean air temperature using daytime and nighttime land surface temperatures (LSTs) derived from TERRA and AQUA MODIS data. Linear regression models were calibrated using LSTs from 2003 to 2011 and validated using LST data from 2012 to 2013, combined with meteorological station data. The results show that these models can provide a robust estimation of measured daily mean air temperature and that models that only accounted for meteorological data from rural regions performed best. Daily mean air temperature maps were generated from each of four MODIS LST products and merged using different strategies that combined the four MODIS products in different orders when data from one product was unavailable for a pixel. The annual average spatial coverage increased from 20.28% to 55.46% in 2012 and 28.31% to 44.92% in 2013. The root-mean-square and mean absolute errors (RMSE and MAE) for the optimal image merging strategy were 2.41 and 1.84, respectively. Compared with the least-effective strategy, the RMSE and MAE decreased by 17.2% and 17.8%, respectively. The interpolation algorithm uses the available pixels from images with consecutive dates in a sliding-window mode. The most appropriate window size was selected based on the absolute spatial bias in the study area. With an optimal window size of 33 × 33 pixels, this approach increased data coverage by up to 76.99% in 2012 and 89.67% in 2013.
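
    The calibration step described above amounts to an ordinary least-squares fit of station daily mean air temperature against daytime and nighttime LST. A hedged sketch, assuming `lst_day`, `lst_night`, and `t_air` are matched 1-D NumPy arrays in degrees Celsius:

      import numpy as np

      A = np.column_stack([lst_day, lst_night, np.ones_like(lst_day)])
      coef, *_ = np.linalg.lstsq(A, t_air, rcond=None)   # [slope_day, slope_night, intercept]
      t_air_hat = A @ coef                               # fitted daily mean air temperature

      rmse = np.sqrt(np.mean((t_air_hat - t_air) ** 2))
      mae = np.mean(np.abs(t_air_hat - t_air))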

  7. Tinker-OpenMM: Absolute and relative alchemical free energies using AMOEBA on GPUs.

    Science.gov (United States)

    Harger, Matthew; Li, Daniel; Wang, Zhi; Dalby, Kevin; Lagardère, Louis; Piquemal, Jean-Philip; Ponder, Jay; Ren, Pengyu

    2017-09-05

    The capabilities of the polarizable force fields for alchemical free energy calculations have been limited by the high computational cost and complexity of the underlying potential energy functions. In this work, we present a GPU-based general alchemical free energy simulation platform for polarizable potential AMOEBA. Tinker-OpenMM, the OpenMM implementation of the AMOEBA simulation engine has been modified to enable both absolute and relative alchemical simulations on GPUs, which leads to a ∼200-fold improvement in simulation speed over a single CPU core. We show that free energy values calculated using this platform agree with the results of Tinker simulations for the hydration of organic compounds and binding of host-guest systems within the statistical errors. In addition to absolute binding, we designed a relative alchemical approach for computing relative binding affinities of ligands to the same host, where a special path was applied to avoid numerical instability due to polarization between the different ligands that bind to the same site. This scheme is general and does not require ligands to have similar scaffolds. We show that relative hydration and binding free energy calculated using this approach match those computed from the absolute free energy approach. © 2017 Wiley Periodicals, Inc.

  8. Proton spectroscopic imaging of polyacrylamide gel dosimeters for absolute radiation dosimetry

    International Nuclear Information System (INIS)

    Murphy, P.S.; Schwarz, A.J.; Leach, M.O.

    2000-01-01

    Proton spectroscopy has been evaluated as a method for quantifying radiation induced changes in polyacrylamide gel dosimeters. A calibration was first performed using BANG-type gel samples receiving uniform doses of 6 MV photons from 0 to 9 Gy in 1 Gy intervals. The peak integral of the acrylic protons belonging to acrylamide and methylenebisacrylamide normalized to the water signal was plotted against absorbed dose. Response was approximately linear within the range 0-7 Gy. A large gel phantom irradiated with three coplanar 3 x 3 cm square fields to 5.74 Gy at isocentre was then imaged with an echo-filter technique to map the distribution of monomers directly. The image, normalized to the water signal, was converted into an absolute dose map. At the isocentre the measured dose was 5.69 Gy (SD = 0.09) which was in good agreement with the planned dose. The measured dose distribution elsewhere in the sample shows greater errors. A T2-derived dose map demonstrated a better relative distribution but gave an overestimate of the dose at isocentre of 18%. The data indicate that MR measurements of monomer concentration can complement T2-based measurements and can be used to verify absolute dose. Compared with the more usual T2 measurements for assessing gel polymerization, monomer concentration analysis is less sensitive to parameters such as gel pH and temperature, which can cause ambiguous relaxation time measurements and erroneous absolute dose calculations. (author)

  9. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Science.gov (United States)

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.

  10. Optimal design of the absolute positioning sensor for a high-speed maglev train and research on its fault diagnosis.

    Science.gov (United States)

    Zhang, Dapeng; Long, Zhiqiang; Xue, Song; Zhang, Junge

    2012-01-01

    This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish its synchronous traction. It is used to calibrate the error of the relative positioning sensor which is used to provide the magnetic phase signal. On the basis of the analysis for the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and the software for the sensor. In order to enhance the reliability of the sensor, a support vector machine is used to recognize the fault characters, and the signal flow method is used to locate the faulty parts. The diagnosis information not only can be sent to an upper center control computer to evaluate the reliability of the sensors, but also can realize on-line diagnosis for debugging and the quick detection when the maglev train is off-line. The absolute positioning sensor we study has been used in the actual project.

  11. Optimal Design of the Absolute Positioning Sensor for a High-Speed Maglev Train and Research on Its Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Junge Zhang

    2012-08-01

    Full Text Available This paper studies an absolute positioning sensor for a high-speed maglev train and its fault diagnosis method. The absolute positioning sensor is an important sensor for the high-speed maglev train to accomplish its synchronous traction. It is used to calibrate the error of the relative positioning sensor which is used to provide the magnetic phase signal. On the basis of the analysis for the principle of the absolute positioning sensor, the paper describes the design of the sending and receiving coils and realizes the hardware and the software for the sensor. In order to enhance the reliability of the sensor, a support vector machine is used to recognize the fault characters, and the signal flow method is used to locate the faulty parts. The diagnosis information not only can be sent to an upper center control computer to evaluate the reliability of the sensors, but also can realize on-line diagnosis for debugging and the quick detection when the maglev train is off-line. The absolute positioning sensor we study has been used in the actual project.

  12. Is adult gait less susceptible than paediatric gait to hip joint centre regression equation error?

    Science.gov (United States)

    Kiernan, D; Hosking, J; O'Brien, T

    2016-03-01

    Hip joint centre (HJC) regression equation error during paediatric gait has recently been shown to have clinical significance. In relation to adult gait, it has been inferred that comparable errors with children in absolute HJC position may in fact result in less significant kinematic and kinetic error. This study investigated the clinical agreement of three commonly used regression equation sets (Bell et al., Davis et al. and Orthotrak) for adult subjects against the equations of Harrington et al. The relationship between HJC position error and subject size was also investigated for the Davis et al. set. Full 3-dimensional gait analysis was performed on 12 healthy adult subjects with data for each set compared to Harrington et al. The Gait Profile Score, Gait Variable Score and GDI-kinetic were used to assess clinical significance while differences in HJC position between the Davis and Harrington sets were compared to leg length and subject height using regression analysis. A number of statistically significant differences were present in absolute HJC position. However, all sets fell below the clinically significant thresholds (GPS <1.6°, GDI-Kinetic <3.6 points). Linear regression revealed a statistically significant relationship for both increasing leg length and increasing subject height with decreasing error in anterior/posterior and superior/inferior directions. Results confirm a negligible clinical error for adult subjects suggesting that any of the examined sets could be used interchangeably. Decreasing error with both increasing leg length and increasing subject height suggests that the Davis set should be used cautiously on smaller subjects. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. An Empirical Analysis for the Prediction of a Financial Crisis in Turkey through the Use of Forecast Error Measures

    Directory of Open Access Journals (Sweden)

    Seyma Caliskan Cavdar

    2015-08-01

    Full Text Available In this study, we try to examine whether the forecast errors obtained by the ANN models affect the breakout of financial crises. Additionally, we try to investigate how much the asymmetric information and forecast errors are reflected on the output values. In our study, we used the exchange rate of USD/TRY (USD), the Borsa Istanbul 100 Index (BIST), and gold price (GP) as the output variables of our Artificial Neural Network (ANN) models. We observe that the predicted ANN model has a strong explanation capability for the 2001 and 2008 crises. Our calculations of some symmetry measures such as mean absolute percentage error (MAPE), symmetric mean absolute percentage error (sMAPE), and Shannon entropy (SE) clearly demonstrate the degree of asymmetric information and the deterioration of the financial system prior to, during, and after the financial crisis. We found that the asymmetric information prior to crisis is larger as compared to other periods. This situation can be interpreted as early warning signals before the potential crises. This evidence seems to favor an asymmetric information view of financial crises.
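
    For reference, the MAPE and one common variant of sMAPE used above can be computed as in this small sketch (not the authors' code), for NumPy arrays `actual` and `forecast` of equal length with non-zero actual values.

      import numpy as np

      def mape(actual, forecast):
          # mean absolute percentage error
          return 100.0 * np.mean(np.abs((actual - forecast) / actual))

      def smape(actual, forecast):
          # one common symmetric MAPE variant; definitions differ across the literature
          return 100.0 * np.mean(2.0 * np.abs(forecast - actual) / (np.abs(actual) + np.abs(forecast)))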

  14. Wind speed forecasting in three different regions of Mexico, using a hybrid ARIMA-ANN model

    Energy Technology Data Exchange (ETDEWEB)

    Cadenas, Erasmo [Facultad de Ingenieria Mecanica, Universidad Michoacana de San Nicolas de Hidalgo, Santiago Tapia No. 403, Centro (Mexico); Rivera, Wilfrido [Centro de Ivestigacion en Energia, Universidad Nacional Autonoma de Mexico, Apartado Postal 34, Temixco 62580, Morelos (Mexico)

    2010-12-15

    In this paper the wind speed forecasting in the Isla de Cedros in Baja California, in the Cerro de la Virgen in Zacatecas and in Holbox in Quintana Roo is presented. The time series utilized are average hourly wind speed data obtained directly from the measurements realized at the different sites during about one month. In order to forecast wind speed, hybrid models consisting of Autoregressive Integrated Moving Average (ARIMA) models and Artificial Neural Network (ANN) models were developed. The ARIMA models were first used to forecast the wind speed time series, and ANNs were then built on the resulting errors to capture the nonlinear tendencies that the ARIMA technique could not identify, thereby reducing the final errors. Once the hybrid models were developed, 48 out-of-sample data points for each site were used for wind speed forecasting, and the results were compared with the ARIMA and the ANN models working separately. Statistical error measures such as the mean error (ME), the mean square error (MSE) and the mean absolute error (MAE) were calculated to compare the three methods. The results showed that the hybrid models predict the wind velocities with a higher accuracy than the ARIMA and ANN models at the three examined sites. (author)
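
    The hybrid idea described above can be sketched as follows (illustrative only, not the authors' code): an ARIMA model captures the linear structure of the hourly wind-speed series, and a small neural network trained on lagged ARIMA residuals models the remaining nonlinear part. `wind` is assumed to be a 1-D NumPy array of hourly wind speeds; the ARIMA order, lag count and network size are assumptions.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from sklearn.neural_network import MLPRegressor

      fit = ARIMA(wind, order=(2, 0, 1)).fit()
      resid = fit.resid                                   # linear-model residuals

      lags = 3
      X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
      y = resid[lags:]
      ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

      # hybrid one-step forecast = ARIMA forecast + ANN correction from the latest residuals
      hybrid_next = fit.forecast(1)[0] + ann.predict(resid[-lags:].reshape(1, -1))[0]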

  15. Exploring Mean Annual Precipitation Values (2003–2012) in a Specific Area (36°N–43°N, 113°E–120°E) Using Meteorological, Elevational, and the Nearest Distance to Coastline Variables

    Directory of Open Access Journals (Sweden)

    Fushen Zhang

    2016-01-01

    Full Text Available Gathering very accurate spatially explicit data related to the distribution of mean annual precipitation is required when laying the groundwork for the prevention and mitigation of water-related disasters. In this study, four Bayesian maximum entropy (BME) models were compared to estimate the spatial distribution of mean annual precipitation of the selected areas. Meteorological data from 48 meteorological stations were used, and spatial correlations between three meteorological factors and two topological factors were analyzed to improve the mapping results, including annual precipitation, average temperature, average water vapor pressure, elevation, and distance to coastline. Some missing annual precipitation data were estimated based on their historical probability distribution and were assimilated as soft data in the BME method. Based on this, the univariate BME, multivariate BME, univariate BME with soft data, and multivariate BME with soft data analysis methods were compared. The estimation accuracy was assessed by cross-validation with the mean error (ME), mean absolute error (MAE), and root mean square error (RMSE). The results showed that multivariate BME with soft data outperformed the other methods, indicating that adding the spatial correlations between multivariate factors and soft data can help improve the estimation performance.

  16. Prediction of the residual strength of clay using functional networks

    Directory of Open Access Journals (Sweden)

    S.Z. Khan

    2016-01-01

    Full Text Available Landslides are common natural hazards occurring in most parts of the world and have considerable adverse economic effects. Residual shear strength of clay is one of the most important factors in the determination of stability of slopes or landslides. This effect is more pronounced in sensitive clays which show large changes in shear strength from peak to residual states. This study analyses the prediction of the residual strength of clay based on a new prediction model, functional networks (FN), using data available in the literature. The performance of FN was compared with support vector machine (SVM) and artificial neural network (ANN) based on statistical parameters like correlation coefficient (R), Nash-Sutcliffe coefficient of efficiency (E), absolute average error (AAE), maximum average error (MAE) and root mean square error (RMSE). Based on the R and E parameters, FN is found to be a better prediction tool than ANN for the given data. However, the R and E values for FN are less than those for SVM. A prediction equation is presented that can be used by practicing geotechnical engineers. A sensitivity analysis is carried out to ascertain the importance of various inputs in the prediction of the output.

  17. The importance of intra-hospital pharmacovigilance in the detection of medication errors

    Science.gov (United States)

    Villegas, Francisco; Figueroa-Montero, David; Barbero-Becerra, Varenka; Juárez-Hernández, Eva; Uribe, Misael; Chávez-Tapia, Norberto; González-Chon, Octavio

    2018-01-01

    Hospitalized patients are susceptible to medication errors, which represent between the fourth and the sixth leading cause of death. The department of intra-hospital pharmacovigilance intervenes in the entire medication process with the purpose of preventing, repairing and assessing harm. To analyze medication errors reported by the Mexican Fundación Clínica Médica Sur pharmacovigilance system and their impact on patients. Prospective study carried out from 2012 to 2015, in which the medication prescriptions given to patients were recorded. Owing to heterogeneity, data were described as absolute numbers on a logarithmic scale. 292 932 prescriptions of 56 368 patients were analyzed, and medication errors were identified in 8.9%. The treating physician was responsible for 83.32% of medication errors, residents for 6.71% and interns for 0.09%. No error caused permanent damage or death. This is the pharmacovigilance study with the largest sample size reported. Copyright: © 2018 Secretaría de Salud.

  18. Absolute beam current monitoring in endstation c

    International Nuclear Information System (INIS)

    Bochna, C.

    1995-01-01

    The first few experiments at CEBAF require approximately 1% absolute measurements of beam currents expected to range from 10 to 25 μA. This represents errors of 100-250 nA. The initial complement of beam current monitors are of the non-intercepting type. CEBAF accelerator division has provided a stripline monitor and a cavity monitor, and the authors have installed an Unser monitor (parametric current transformer or PCT). After calibrating the Unser monitor with a precision current reference, the authors plan to transfer this calibration using CW beam to the stripline monitors and cavity monitors. It is important that this be done fairly rapidly because while the gain of the Unser monitor is quite stable, the offset may drift on the order of 0.5 μA per hour. A summary of what the authors have learned about the linearity, zero drift, and gain drift of each type of current monitor will be presented.

  19. Near threshold absolute TDCS: First results

    International Nuclear Information System (INIS)

    Roesel, T.; Schlemmer, P.; Roeder, J.; Frost, L.; Jung, K.; Ehrhardt, H.

    1992-01-01

    A new method, and first results for an impact energy 2 eV above the threshold of ionisation of helium, are presented for the measurement of absolute triple differential cross sections (TDCS) in a crossed beam experiment. The method is based upon measurement of beam/target overlap densities using known absolute total ionisation cross sections and of detection efficiencies using known absolute double differential cross sections (DDCS). For the present work the necessary absolute DDCS for 1 eV electrons had also to be measured. Results are presented for several different coplanar kinematics and are compared with recent DWBA calculations. (orig.)

  20. Absolute entropy of ions in methanol

    International Nuclear Information System (INIS)

    Abakshin, V.A.; Kobenin, V.A.; Krestov, G.A.

    1978-01-01

    By measuring the initial thermoelectromotive forces of chains with bromo-silver electrodes in tetraalkylammonium bromide solutions, the absolute entropy of the bromide ion in methanol is determined in the 298.15-318.15 K range. The value S̄0(Br−) = 9.8 entropy units is used for the calculation of the absolute partial molar entropy of alkali metal ions and halogenide ions. It has been found that the absolute entropy of Cs+ is 12.0 entropy units and that of I− is 14.0 entropy units. The obtained absolute ion entropies in methanol at 298.15 K agree with published data within 1-2 entropy units.

  1. Diversity and Distribution of Aquatic Insects in Streams of the Mae Klong Watershed, Western Thailand

    Directory of Open Access Journals (Sweden)

    Witwisitpong Maneechan

    2015-01-01

    Full Text Available The distribution and diversity of aquatic insects and water quality variables were studied in three streams of the Mae Klong Watershed. In each stream, two sites were sampled. Aquatic insects and water quality variables were randomly sampled seven times in February, May, September, and December 2010 and in January, April, and May 2011. Overall, 11,153 individuals belonging to 64 families and nine orders were examined. Among the aquatic insects collected from the three streams, the order Trichoptera was most diverse in number of individuals, followed by Ephemeroptera, Hemiptera, Odonata, Coleoptera, Diptera, Plecoptera, Megaloptera, and Lepidoptera. The highest Shannon diversity indices of 2.934 and 3.2 were recorded in Huai Kayeng stream, and the lowest in Huai Pakkok stream (2.68 and 2.62). The high diversity of insect fauna in streams is an indication of larger microhabitat diversity and better water quality conditions prevailing in the streams. The evenness value was recorded as high at most sites. The high species diversity and evenness at almost all sites indicated good water quality.

  2. A hybrid ARIMA and neural network model applied to forecast catch volumes of Selar crumenophthalmus

    Science.gov (United States)

    Aquino, Ronald L.; Alcantara, Nialle Loui Mar T.; Addawe, Rizavel C.

    2017-11-01

    The Selar crumenophthalmus, with the English name big-eyed scad fish and locally known as matang-baka, is one of the fishes commonly caught along the waters of La Union, Philippines. The study deals with the forecasting of catch volumes of big-eyed scad fish for commercial consumption. The data used are quarterly catch volumes of big-eyed scad fish from 2002 to the first quarter of 2017. These actual data are available from the open stat database published by the Philippine Statistics Authority (PSA), whose task is to collect, compile, analyze and publish information concerning different aspects of the Philippine setting. Autoregressive Integrated Moving Average (ARIMA) models, an Artificial Neural Network (ANN) model and a hybrid model consisting of ARIMA and ANN were developed to forecast catch volumes of big-eyed scad fish. Statistical errors such as Mean Absolute Errors (MAE) and Root Mean Square Errors (RMSE) were computed and compared to choose the most suitable model for forecasting the catch volume for the next few quarters. A comparison of the results of each model and the corresponding statistical errors reveals that the hybrid model, ARIMA-ANN (2,1,2)(6:3:1), is the most suitable model to forecast the catch volumes of the big-eyed scad fish for the next few quarters.

  3. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

    The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes the time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments.

  4. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optical absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the cases of pathlength error >> photometric error (trivial) and various cases in which the pathlength errors and the photometric error are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable.

  5. Comparison of different interpolation methods for spatial distribution of soil organic carbon and some soil properties in the Black Sea backward region of Turkey

    Science.gov (United States)

    Göl, Ceyhun; Bulut, Sinan; Bolat, Ferhat

    2017-10-01

    The purpose of this research is to compare the spatial variability of soil organic carbon (SOC) in four adjacent land uses, including a cultivated area, a grassland area, a plantation area and a natural forest area, in the semi-arid Black Sea backward region of Turkey. Some of the soil properties, including total nitrogen, SOC, soil organic matter, and bulk density, were measured on a grid with a 50 m sampling distance in the topsoil (0-15 cm depth). Accordingly, a total of 120 samples were taken from the four adjacent land uses. Data were analyzed using geostatistical methods. The methods used were: block kriging (BK), co-kriging (CK) with organic matter, total nitrogen and bulk density as auxiliary variables, and inverse distance weighting (IDW) with powers of 1, 2 and 4. The methods were compared using performance criteria that included the root mean square error (RMSE), mean absolute error (MAE) and the coefficient of correlation (r). The one-way ANOVA test showed that the differences between the natural and plantation forest areas (0.6653 ± 0.2901 and 0.7109 ± 0.2729) and the grassland and cultivated areas (1.3964 ± 0.6828 and 1.5851 ± 0.5541) were statistically significant at the 0.05 level (F = 28.462). The best model for describing the spatial variation of SOC was CK, with the lowest error criteria (RMSE = 0.3342, MAE = 0.2292) and the highest coefficient of correlation (r = 0.84). The spatial structure of SOC could be well described by the spherical model. The nugget effect indicated that SOC was moderately spatially dependent in the study area. The error distributions of the model showed that the improved model was unbiased in predicting the spatial distribution of SOC. This study's results revealed that an explanatory variable linked to SOC increased the success of the spatial interpolation methods. In subsequent studies, this should be taken into account to reach more accurate outputs.
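
    One of the interpolators compared above, inverse distance weighting with a chosen power, reduces to a few lines; the sketch below uses assumed names and is not the study's code. `xy` is an (n, 2) array of sample coordinates, `soc` the SOC values, and `grid_xy` an (m, 2) array of prediction locations.

      import numpy as np

      def idw(xy, soc, grid_xy, power=2, eps=1e-12):
          d = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=2)  # (m, n) distance matrix
          w = 1.0 / (d + eps) ** power                                      # inverse-distance weights
          return (w @ soc) / w.sum(axis=1)                                  # weighted average per prediction location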

  6. Pertinence analysis of intensity-modulated radiation therapy dosimetry error and parameters of beams

    International Nuclear Information System (INIS)

    Chi Zifeng; Liu Dan; Cao Yankun; Li Runxiao; Han Chun

    2012-01-01

    Objective: To study the relationships among parameter settings in intensity-modulated radiation therapy (IMRT) planning in order to explore the effect of the parameters on absolute dose verification. Methods: Forty-three esophageal carcinoma cases were optimized with Pinnacle 7.6c by an experienced physicist using appropriate optimization parameters and dose constraints, with a number of iterations to meet the clinical acceptance criteria. The plans were copied to a water phantom, and a 0.13 cc Farmer ion chamber with a DOSE1 dosimeter was used to measure the absolute dose. The statistical data of the beam parameters for the 43 cases were collected, and the relationships among them were analyzed. The statistical data of the dosimetry error were collected, and a comparative analysis was made of the relation between the beam parameters and the ion chamber absolute dose verification results. Results: The beam parameters were correlated with each other. A clear dependence existed between the dose accuracy and the parameter settings. When the beam segment number of an IMRT plan was more than 80, the dose deviation would be greater than 3%; however, if the beam segment number was less than 80, the dose deviation was smaller than 3%. When the beam segment number was more than 100, part of the dose deviation of the plan was greater than 4%. On the contrary, if the beam segment number was less than 100, the dose deviation was definitely smaller than 4%. Conclusions: In order to decrease the absolute dose verification error, fewer beam angles and fewer beam segments are needed, and the beam segment number should be controlled within 80. (authors)

  7. Application of a Combined Model with Autoregressive Integrated Moving Average (ARIMA) and Generalized Regression Neural Network (GRNN) in Forecasting Hepatitis Incidence in Heng County, China.

    Directory of Open Access Journals (Sweden)

    Wudi Wei

    Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several parameters, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare the performance among the three models. The morbidity of hepatitis from Jan 2005 to Dec 2012 has seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)12 model was the most appropriate one, with the residual test showing a white noise sequence. The smoothing factor of the basic GRNN model and the combined model was 1.8 and 0.07, respectively. The four parameters of the hybrid model were lower than those of the two single models in the validation. The parameter values of the GRNN model were the lowest in the fitting of the three models. The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County.
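
    The four comparison metrics named in this record are standard functions of the observed and predicted series. A minimal Python sketch, using illustrative numbers rather than the Heng County incidence data, might look like:

        import numpy as np

        def forecast_errors(observed, predicted):
            """Return MAE, RMSE, MAPE (%) and MSE for paired series."""
            observed = np.asarray(observed, dtype=float)
            predicted = np.asarray(predicted, dtype=float)
            err = predicted - observed
            mae = np.mean(np.abs(err))                      # mean absolute error
            mse = np.mean(err ** 2)                         # mean square error
            rmse = np.sqrt(mse)                             # root mean square error
            mape = 100.0 * np.mean(np.abs(err / observed))  # mean absolute percentage error
            return mae, rmse, mape, mse

        # Hypothetical monthly incidence values for a validation year (not the study data).
        obs = [12.1, 10.4, 11.8, 9.7, 10.2, 11.0]
        pred = [11.5, 10.9, 11.2, 10.1, 9.8, 11.4]
        print(forecast_errors(obs, pred))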

  8. Comparison of the CME-associated shock arrival times at the earth using the WSA-ENLIL model with three cone models

    Science.gov (United States)

    Jang, S.; Moon, Y.; Na, H.

    2012-12-01

    We have made a comparison of CME-associated shock arrival times at the earth based on the WSA-ENLIL model with three cone models, using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters from Michalek et al. (2007) as well as their associated interplanetary (IP) shocks. For this study we consider three different cone models (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine the CME cone parameters (radial velocity, angular width and source location), which are used as input parameters of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the elliptical cone model is 10 hours, which is about 2 hours smaller than those of the other models. However, this value is still larger than that (8.7 hours) of an empirical model by Kim et al. (2007). We are investigating several possible causes of the relatively large errors of the WSA-ENLIL cone model, which may include CME-CME interaction, background solar wind speed, and/or CME density enhancement.

  9. Modeling Spatial Distribution of Some Contamination within the Lower Reaches of Diyala River Using IDW Interpolation

    Directory of Open Access Journals (Sweden)

    Huda M. Madhloom

    2017-12-01

    The aim of this research was to simulate the water quality along the lower course of the Diyala River using Geographic Information Systems (GIS) techniques. For this purpose, samples were taken at 24 sites along the study area. The parameters total dissolved solids (T.D.S), total suspended solids (T.S.S), iron (Fe), copper (Cu), chromium (Cr), and manganese (Mn) were considered. Water samples were collected on a monthly basis for a duration of five years. The adopted analysis approach was tested by calculating the mean absolute error (MAE) and the correlation coefficient (R) between observed water samples and predicted results. The results showed a percentage error of less than 10% and a significant correlation at R > 89% for all pollutant indicators. It was concluded that the accuracy of the applied model in simulating the river pollutants allows the number of monitoring stations to be decreased to 50%. Additionally, a distribution map of the concentration results indicated that many of the major pollution indicators did not satisfy the river water quality standards.
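
    Inverse distance weighting, the interpolation method used in this record, estimates a concentration at an unsampled point as a distance-weighted average of the surrounding observations. A minimal sketch, with made-up coordinates and concentrations rather than the Diyala River data, could be:

        import numpy as np

        def idw(xy_obs, z_obs, xy_new, power=2.0):
            """Inverse-distance-weighted estimate of z at the query point xy_new."""
            d = np.linalg.norm(np.asarray(xy_obs, dtype=float) - np.asarray(xy_new, dtype=float), axis=1)
            if np.any(d == 0):                 # query point coincides with a sample
                return z_obs[np.argmin(d)]
            w = 1.0 / d ** power               # weights fall off with distance
            return np.sum(w * np.asarray(z_obs)) / np.sum(w)

        # Hypothetical monitoring sites (km) and T.D.S concentrations (mg/L).
        sites = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.5), (3.0, 0.2)]
        tds = np.array([850.0, 910.0, 1020.0, 980.0])
        print(idw(sites, tds, (1.5, 1.0), power=2.0))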

  10. Assessing the suitability of soft computing approaches for forest fires prediction

    Directory of Open Access Journals (Sweden)

    Samaher Al_Janabi

    2018-07-01

    Forest fires are one of the main causes of environmental hazards and have many negative consequences in different aspects of life. Therefore, early prediction, fast detection and rapid action are the key elements for controlling this phenomenon and saving lives. In this work, 517 different entries were selected at different times for Montesinho Natural Park (MNP) in Portugal to determine the best predictor with the ability to detect forest fires. Principal component analysis (PCA) was applied to find the critical patterns, and the particle swarm optimization (PSO) technique was used to segment the fire regions (clusters). In the next stage, five soft computing (SC) techniques based on neural networks were used in parallel to identify the technique that would potentially give more accurate and optimal results in predicting forest fires, namely: cascade correlation network (CCN), multilayer perceptron neural network (MPNN), polynomial neural network (PNN), radial basis function (RBF) and support vector machine (SVM). In the final stage, the predictors and their performance were evaluated based on five quality measures, including root mean squared error (RMSE), mean squared error (MSE), relative absolute error (RAE), mean absolute error (MAE) and information gain (IG). The results indicate that the SVM technique was more effective and efficient than the RBF, MPNN, PNN and CCN predictors. The results also show that the SVM algorithm provides more precise predictions compared with the other predictors, with a small estimation error. The obtained results confirm that SVM improves the prediction accuracy and is suitable for forest fire prediction compared to other methods. Keywords: Forest fires, Soft computing, Prediction, Principal component analysis, Particle swarm optimization, Cascade correlation network, Multilayer perceptron neural network, Polynomial neural networks, Radial basis function, Support vector machine

  11. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600-MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 +- 0.95 eV for nominal 600-MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.

  12. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 +- 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV. (auth)

  13. First Absolutely Calibrated Localized Measurements of Ion Velocity in the MST in Locked and Rotating Plasmas

    Science.gov (United States)

    Baltzer, M.; Craig, D.; den Hartog, D. J.; Nornberg, M. D.; Munaretto, S.

    2015-11-01

    An Ion Doppler Spectrometer (IDS) is used on MST for high time-resolution passive and active measurements of impurity ion emission. Absolutely calibrated measurements of flow are difficult because the spectrometer records data within 0.3 nm of the C+5 line of interest, and commercial calibration lamps do not produce lines in this narrow range. A novel optical system was designed to absolutely calibrate the IDS. The device uses a UV LED to produce a broad emission curve in the desired region. A Fabry-Perot etalon filters this light, cutting transmittance peaks into the pattern of the LED emission. An optical train of fused silica lenses focuses the light into the IDS with f/4. A holographic diffuser blurs the light cone to increase homogeneity. Using this light source, the absolute Doppler shift of ion emissions can be measured in MST plasmas. In combination with charge exchange recombination spectroscopy, localized ion velocities can now be measured. Previously, a time-averaged measurement along the chord bisecting the poloidal plane was used to calibrate the IDS; the quality of these central chord calibrations can be characterized with our absolute calibration. Calibration errors may also be quantified and minimized by optimizing the curve-fitting process. Preliminary measurements of toroidal velocity in locked and rotating plasmas will be shown. This work has been supported by the US DOE.

  14. Validation of satellite based precipitation over diverse topography of Pakistan

    Science.gov (United States)

    Iqbal, Muhammad Farooq; Athar, H.

    2018-03-01

    This study evaluates the Tropical Rainfall Measuring Mission (TRMM) Multi-Satellite Precipitation Analysis (TMPA) product data with 0.25° × 0.25° spatial and post-real-time 3 h temporal resolution using point-based Surface Precipitation Gauge (SPG) data from 40 stations, for the period 1998-2013, and using gridded Asian Precipitation ˗ Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE) data abbreviated as APH data with 0.25° × 0.25° spatial and daily temporal resolution for the period 1998-2007, over vulnerable and data sparse regions of Pakistan (24-37° N and 62-75° E). To evaluate the performance of TMPA relative to SPG and APH, four commonly used statistical indicator metrics including Mean Error (ME), Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Correlation Coefficient (CC) are employed on daily, monthly, seasonal as well as on annual timescales. The TMPA slightly overestimated both SPG and APH at daily, monthly, and annual timescales, however close results were obtained between TMPA and SPG as compared to those between TMPA and APH, on the same timescale. The TMPA overestimated both SPG and APH during the Pre-Monsoon and Monsoon seasons, whereas it underestimated during the Post-Monsoon and Winter seasons, with different magnitudes. Agreement between TMPA and SPG was good in plain and medium elevation regions, whereas TMPA overestimated APH in 31 stations. The magnitudes of MAE and RMSE were high at daily timescale as compared to monthly and annual timescales. Relatively large MAE was observed in stations located over high elevation regions, whereas minor MAE was recorded in plain area stations at daily, monthly, and annual timescales. A strong positive linear relationship between TMPA and SPG was established at monthly (0.98), seasonal (0.93 to 0.98) and annual (0.97) timescales. Precipitation increased with the increase of elevation, and not only elevation but latitude also affected the

  15. ACCESS, Absolute Color Calibration Experiment for Standard Stars: Integration, Test, and Ground Performance

    Science.gov (United States)

    Kaiser, Mary Elizabeth; Morris, Matthew; Aldoroty, Lauren; Kurucz, Robert; McCandliss, Stephan; Rauscher, Bernard; Kimble, Randy; Kruk, Jeffrey; Wright, Edward L.; Feldman, Paul; Riess, Adam; Gardner, Jonathon; Bohlin, Ralph; Deustua, Susana; Dixon, Van; Sahnow, David J.; Perlmutter, Saul

    2018-01-01

    Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. Systematic errors associated with astrophysical data used to probe fundamental astrophysical questions, such as SNeIa observations used to constrain dark energy theories, now exceed the statistical errors associated with merged databases of these measurements. ACCESS, “Absolute Color Calibration Experiment for Standard Stars”, is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35‑1.7μm bandpass. To achieve this goal ACCESS (1) observes HST/Calspec stars (2) above the atmosphere to eliminate telluric spectral contaminants (e.g. OH) (3) using a single optical path and (HgCdTe) detector (4) that is calibrated to NIST laboratory standards and (5) monitored on the ground and in-flight using an on-board calibration monitor. The observations are (6) cross-checked and extended through the generation of stellar atmosphere models for the targets. The ACCESS telescope and spectrograph have been designed, fabricated, and integrated. Subsystems have been tested. Performance results for subsystems, operations testing, and the integrated spectrograph will be presented. NASA sounding rocket grant NNX17AC83G supports this work.

  16. Projective absoluteness for Sacks forcing

    NARCIS (Netherlands)

    Ikegami, D.

    2009-01-01

    We show that Σ¹₃-absoluteness for Sacks forcing is equivalent to the nonexistence of a Δ¹₂ Bernstein set. We also show that Sacks forcing is the weakest forcing notion among all of the preorders that add a new real with respect to Σ¹₃ forcing absoluteness.

  17. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System.

    Directory of Open Access Journals (Sweden)

    Jinjun Tang

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).

  18. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System.

    Science.gov (United States)

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).
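
    The first learning stage described in this record (cluster the input samples, then attach a Gaussian membership function to each cluster centre) can be illustrated with a short sketch. The cluster count, the toy detector data and the width heuristic below are illustrative assumptions, not the authors' settings:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Toy detector features: [upstream volume, occupancy, speed] per sample.
        X = rng.normal(loc=[900.0, 0.12, 95.0], scale=[150.0, 0.03, 10.0], size=(200, 3))

        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
        centers = km.cluster_centers_
        # One width per cluster: spread of the samples assigned to it (an assumed heuristic).
        sigmas = np.array([X[km.labels_ == k].std(axis=0).mean() + 1e-6 for k in range(4)])

        def memberships(x):
            """Gaussian membership degree of sample x to each cluster centre."""
            d2 = np.sum((centers - x) ** 2, axis=1)
            return np.exp(-d2 / (2.0 * sigmas ** 2))

        print(memberships(X[0]))

    In the full EFNN these membership degrees would weight the outputs of the Takagi-Sugeno linear rules, whose coefficients the paper fits with a weighted recursive least squares estimator.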

  19. Short-Term Electricity-Load Forecasting Using a TSK-Based Extreme Learning Machine with Knowledge Representation

    Directory of Open Access Journals (Sweden)

    Chan-Uk Yeom

    2017-10-01

    This paper discusses short-term electricity-load forecasting using an extreme learning machine (ELM) with automatic knowledge representation from a given input-output data set. For this purpose, we use a Takagi-Sugeno-Kang (TSK)-based ELM to develop a systematic approach to generating if-then rules, while the conventional ELM operates without knowledge information. The TSK-ELM design includes a two-phase development. First, we generate an initial random-partition matrix and estimate cluster centers for random clustering. The obtained cluster centers are used to determine the premise parameters of the fuzzy if-then rules. Next, the linear weights of the TSK fuzzy type are estimated using the least squares estimate (LSE) method. These linear weights are used as the consequent parameters in the TSK-ELM design. The experiments were performed on short-term electricity-load data for forecasting. The electricity-load data were used to forecast hourly day-ahead loads given temperature forecasts, holiday information, and historical loads from the New England ISO. In order to quantify the performance of the forecaster, we use metrics and statistical characteristics such as root mean squared error (RMSE), mean absolute error (MAE), mean absolute percent error (MAPE), and R-squared. The experimental results revealed that the proposed method showed good performance when compared with a conventional ELM with four activation functions, namely sigmoid, sine, radial basis function, and rectified linear unit (ReLU). It possessed superior prediction performance and knowledge information and a small number of rules.

  20. Best of both worlds: combining pharma data and state of the art modeling technology to improve in Silico pKa prediction.

    Science.gov (United States)

    Fraczkiewicz, Robert; Lobell, Mario; Göller, Andreas H; Krenz, Ursula; Schoenneis, Rolf; Clark, Robert D; Hillisch, Alexander

    2015-02-23

    In a unique collaboration between a software company and a pharmaceutical company, we were able to develop a new in silico pKa prediction tool with outstanding prediction quality. An existing pKa prediction method from Simulations Plus based on artificial neural network ensembles (ANNE), microstates analysis, and literature data was retrained with a large homogeneous data set of drug-like molecules from Bayer. The new model was thus built with curated sets of ∼14,000 literature pKa values (∼11,000 compounds, representing literature chemical space) and ∼19,500 pKa values experimentally determined at Bayer Pharma (∼16,000 compounds, representing industry chemical space). Model validation was performed with several test sets consisting of a total of ∼31,000 new pKa values measured at Bayer. For the largest and most difficult test set with >16,000 pKa values that were not used for training, the original model achieved a mean absolute error (MAE) of 0.72, root-mean-square error (RMSE) of 0.94, and squared correlation coefficient (R²) of 0.87. The new model achieves significantly improved prediction statistics, with MAE = 0.50, RMSE = 0.67, and R² = 0.93. It is commercially available as part of the Simulations Plus ADMET Predictor release 7.0. Good predictions are only of value when delivered effectively to those who can use them. The new pKa prediction model has been integrated into Pipeline Pilot and the PharmacophorInformatics (PIx) platform used by scientists at Bayer Pharma. Different output formats allow customized application by medicinal chemists, physical chemists, and computational chemists.

  1. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    Science.gov (United States)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group, a triazine ester, are cost-effective because of their synthetic simplicity, and offer increased throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).

  2. Estimation of surface air temperature over central and eastern Eurasia from MODIS land surface temperature

    International Nuclear Information System (INIS)

    Shen Suhung; Leptoukh, Gregory G

    2011-01-01

    Surface air temperature (Ta) is a critical variable in the energy and water cycle of the Earth–atmosphere system and is a key input element for hydrology and land surface models. This is a preliminary study to evaluate the estimation of Ta from satellite remotely sensed land surface temperature (Ts) using MODIS-Terra data over two Eurasia regions: northern China and the fUSSR. High correlations are observed in both regions between station-measured Ta and MODIS Ts. The relationships between the maximum Ta and daytime Ts depend significantly on land cover types, but the minimum Ta and nighttime Ts have little dependence on the land cover types. The largest difference between maximum Ta and daytime Ts appears over the barren and sparsely vegetated area during the summer time. Using a linear regression method, the daily maximum Ta was estimated from 1 km resolution MODIS Ts under clear-sky conditions with coefficients calculated based on land cover types, while the minimum Ta was estimated without considering land cover types. The uncertainty, mean absolute error (MAE), of the estimated maximum Ta varies from 2.4 °C over closed shrublands to 3.2 °C over grasslands, and the MAE of the estimated minimum Ta is about 3.0 °C.
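
    The land-cover-specific linear regression described here (station maximum Ta regressed on daytime MODIS Ts, one fit per land cover type) can be sketched briefly; the land-cover classes, sample values and resulting coefficients below are invented for illustration and are not the study's data:

        import numpy as np

        # Hypothetical matched samples: (land_cover, daytime Ts, station max Ta) in deg C.
        samples = [
            ("grassland", 31.2, 27.5), ("grassland", 35.0, 30.1), ("grassland", 28.4, 25.9),
            ("barren",    40.3, 31.0), ("barren",    44.1, 33.2), ("barren",    37.9, 29.8),
        ]

        coeffs = {}
        for cover in {c for c, _, _ in samples}:
            ts = np.array([t for c, t, _ in samples if c == cover])
            ta = np.array([a for c, _, a in samples if c == cover])
            coeffs[cover] = np.polyfit(ts, ta, 1)   # slope and intercept per land-cover type

        def estimate_max_ta(cover, ts):
            """Estimate daily maximum Ta from daytime Ts using the cover-specific fit."""
            slope, intercept = coeffs[cover]
            return slope * ts + intercept

        print(estimate_max_ta("barren", 42.0))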

  3. New principle for measuring arterial blood oxygenation, enabling motion-robust remote monitoring.

    Science.gov (United States)

    van Gastel, Mark; Stuijk, Sander; de Haan, Gerard

    2016-12-07

    Finger-oximeters are ubiquitously used for patient monitoring in hospitals worldwide. Recently, remote measurement of arterial blood oxygenation (SpO2) with a camera has been demonstrated. Both contact and remote measurements, however, require the subject to remain static for accurate SpO2 values. This is due to the use of the common ratio-of-ratios measurement principle that measures the relative pulsatility at different wavelengths. Since the amplitudes are small, they are easily corrupted by motion-induced variations. We introduce a new principle that allows accurate remote measurements even during significant subject motion. We demonstrate the main advantage of the principle, i.e. that the optimal signature remains the same even when the SNR of the PPG signal drops significantly due to motion or limited measurement area. The evaluation uses recordings with breath-holding events, which induce hypoxemia in healthy moving subjects. The events lead to clinically relevant SpO2 levels in the range 80-100%. The new principle is shown to greatly outperform current remote ratio-of-ratios based methods. The mean absolute SpO2 error (MAE) is about 2 percentage points during head movements, where the benchmark method shows an MAE of 24 percentage points.
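
    The record does not detail the authors' new principle, but the conventional ratio-of-ratios benchmark it is compared against can be sketched as follows; the calibration constants and synthetic PPG traces are placeholders, not values from the study:

        import numpy as np

        def spo2_ratio_of_ratios(ppg_red, ppg_ir, a=110.0, b=25.0):
            """Classic ratio-of-ratios SpO2 estimate from red and infrared PPG traces.

            a and b are empirical calibration constants (placeholder values here);
            real oximeters use device-specific calibration curves.
            """
            ac_red, dc_red = np.std(ppg_red), np.mean(ppg_red)
            ac_ir, dc_ir = np.std(ppg_ir), np.mean(ppg_ir)
            r = (ac_red / dc_red) / (ac_ir / dc_ir)   # relative pulsatility ratio
            return a - b * r

        # Synthetic PPG snippets (arbitrary units, ~1.2 Hz pulse).
        t = np.linspace(0.0, 10.0, 500)
        red = 1.0 + 0.010 * np.sin(2 * np.pi * 1.2 * t)
        ir = 1.0 + 0.018 * np.sin(2 * np.pi * 1.2 * t)
        print(spo2_ratio_of_ratios(red, ir))

    Because the pulsatile (AC) amplitudes in the numerators are tiny, any motion-induced distortion of either trace shifts the ratio directly, which is the weakness the paper's new principle is designed to avoid.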

  4. Prediction of Beck Depression Inventory (BDI-II) Score Using Acoustic Measurements in a Sample of Iium Engineering Students

    Science.gov (United States)

    Fikri Zanil, Muhamad; Nur Wahidah Nik Hashim, Nik; Azam, Huda

    2017-11-01

    Psychiatrists currently rely on questionnaires and interviews for psychological assessment. These conservative methods often miss true positives and might lead to death, especially in cases where a patient might be experiencing a suicidal predisposition but was only diagnosed with major depressive disorder (MDD). With modern technology, an assessment tool might aid psychiatrists in making a more accurate diagnosis and thus help to reduce casualties. This project explores the relationship between speech features of a spoken audio signal (reading) in Bahasa Malaysia and the Beck Depression Inventory scores. The speech features used in this project were Power Spectral Density (PSD), Mel-frequency Cepstral Coefficients (MFCC), Transition Parameters, formants and pitch. According to the analysis, the optimum combination of speech features to predict BDI-II scores includes PSD, MFCC and Transition Parameters. A linear regression approach with a sequential forward/backward method was used to predict the BDI-II scores from reading speech. The result showed a mean absolute error (MAE) of 0.4096 for female reading speech. For males, the BDI-II scores were all successfully predicted within a difference of less than 1 point, with an MAE of 0.098437. A prediction system called the Depression Severity Evaluator (DSE) was developed. The DSE managed to predict one out of five subjects correctly. Although the prediction rate was low, the system predicted the scores within a maximum difference of 4.93 for each person. This demonstrates that the scores are not random numbers.

  5. Investigation of Pear Drying Performance by Different Methods and Regression of Convective Heat Transfer Coefficient with Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Mehmet Das

    2018-01-01

    In this study, an air heated solar collector (AHSC) dryer was designed to determine the drying characteristics of the pear. Flat pear slices of 10 mm thickness were used in the experiments. The pears were dried both in the AHSC dryer and under the sun. Panel glass temperature, panel floor temperature, panel inlet temperature, panel outlet temperature, drying cabinet inlet temperature, drying cabinet outlet temperature, drying cabinet temperature, drying cabinet moisture, solar radiation, pear internal temperature, air velocity and mass loss of the pear were measured at 30 min intervals. Experiments were carried out during June 2017 in Elazig, Turkey. The experiments started at 8:00 a.m. and continued till 18:00. The experiments were continued until the weight changes in the pear slices stopped. Wet basis moisture content (MCw), dry basis moisture content (MCd), adjustable moisture ratio (MR), drying rate (DR), and convective heat transfer coefficient (hc) were calculated from the data of both the AHSC dryer and the open sun drying experiments. The values of hc in both drying systems were found to lie in the range 12.4 to 20.8 W/m2 °C. Three different kernel models were used in support vector machine (SVM) regression to construct a predictive model of the calculated hc values for both systems. Mean absolute error (MAE), root mean squared error (RMSE), relative absolute error (RAE) and root relative absolute error (RRAE) analyses were performed to indicate the predictive model's accuracy. As a result, the drying rate of the pear was examined for both systems and it was observed that the pear dried earlier in the AHSC drying system. A predictive model was obtained using SVM regression for the calculated hc values of the pear in the AHSC drying system. The normalized polynomial kernel was determined to be the best kernel model in SVM for estimating the hc values.
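
    A support vector regression of hc against measured drying variables, with a polynomial kernel as in this record, can be sketched as below; the feature choice and the synthetic values are assumptions for illustration, since the measured series are not reproduced in the abstract:

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        # Toy features: [air velocity (m/s), cabinet temperature (deg C), solar radiation (W/m2)].
        X = rng.uniform([0.5, 40.0, 300.0], [2.0, 70.0, 900.0], size=(60, 3))
        hc = 10.0 + 4.0 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.3, 60)   # synthetic target

        model = SVR(kernel="poly", degree=2, C=10.0).fit(X[:45], hc[:45])
        pred = model.predict(X[45:])

        mae = np.mean(np.abs(pred - hc[45:]))                # mean absolute error
        rmse = np.sqrt(np.mean((pred - hc[45:]) ** 2))       # root mean squared error
        print(mae, rmse)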

  6. Very short-term reactive forecasting of the solar ultraviolet index using an extreme learning machine integrated with the solar zenith angle.

    Science.gov (United States)

    Deo, Ravinesh C; Downs, Nathan; Parisi, Alfio V; Adamowski, Jan F; Quilty, John M

    2017-05-01

    Exposure to erythemally-effective solar ultraviolet radiation (UVR) that contributes to malignant keratinocyte cancers and associated health risk is best mitigated through innovative decision-support systems, with global solar UV index (UVI) forecasts necessary to inform real-time sun-protection behaviour recommendations. It follows that UVI forecasting models are useful tools for such decision-making. In this study, a model for computationally-efficient data-driven forecasting of diffuse and global very short-term reactive (VSTR) (10-min lead-time) UVI, enhanced by drawing on solar zenith angle (θs) data, was developed using an extreme learning machine (ELM) algorithm. An ELM algorithm typically serves to address complex and ill-defined forecasting problems. A UV spectroradiometer situated in Toowoomba, Australia measured daily cycles (0500-1700 h) of UVI over the austral summer period. After trialling activation functions based on sine, hard limit, logarithmic and tangent sigmoid, and triangular and radial basis networks for best results, an optimal ELM architecture utilising a logarithmic sigmoid equation in the hidden layer, with lagged combinations of θs as the predictor data, was developed. The ELM's performance was evaluated using statistical metrics: correlation coefficient (r), Willmott's Index (WI), Nash-Sutcliffe efficiency coefficient (ENS), root mean square error (RMSE), and mean absolute error (MAE) between observed and forecasted UVI. Using these metrics, the ELM model's performance was compared to that of existing methods: multivariate adaptive regression splines (MARS), M5 Model Tree, and a semi-empirical (Pro6UV) clear sky model. Based on RMSE and MAE values, the ELM model (0.255, 0.346, respectively) outperformed the MARS (0.310, 0.438) and M5 Model Tree (0.346, 0.466) models. Concurring with these metrics, the Willmott's Index for the ELM, MARS and M5 Model Tree models was 0.966, 0.942 and 0.934, respectively. About 57% of the ELM model
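
    The core of an ELM of the kind described (randomly fixed hidden-layer weights, a logarithmic-sigmoid hidden layer, output weights solved by least squares) can be sketched briefly; the lagged-θs feature construction, the network size and the data below are stand-in assumptions, not the study's series:

        import numpy as np

        rng = np.random.default_rng(42)

        def train_elm(X, y, n_hidden=30):
            """Fit a single-hidden-layer ELM: random hidden weights, least-squares output weights."""
            W = rng.normal(size=(X.shape[1], n_hidden))      # fixed random input weights
            b = rng.normal(size=n_hidden)                    # fixed random biases
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # log-sigmoid hidden-layer outputs
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights by least squares
            return W, b, beta

        def predict_elm(model, X):
            W, b, beta = model
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
            return H @ beta

        # Toy example: predict a 10-min-ahead value from three lagged zenith-angle features.
        X = rng.uniform(0.0, 1.0, size=(300, 3))
        y = 10.0 * np.sin(np.pi * X[:, 0]) * X[:, 1] + rng.normal(0, 0.1, 300)
        model = train_elm(X[:250], y[:250])
        pred = predict_elm(model, X[250:])
        print("MAE:", np.mean(np.abs(pred - y[250:])))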

  7. Evaluation of a Model to Simulate Net Radiation Over a Vineyard cv. Cabernet Sauvignon Evaluación de un Modelo para Simular el Flujo de Radiación Neta Sobre un Viñedo cv. Cabernet Sauvignon

    Directory of Open Access Journals (Sweden)

    Marcos Carrasco

    2008-06-01

    Net radiation (Rn) is the main energy balance component controlling evaporation and transpiration processes. In this regard, this study evaluated two models to estimate Rno above a commercial vineyard (Vitis vinifera cv. Cabernet Sauvignon) located in Pencahue Valley, Maule Region (35º22’ S; 71°47’ W; 75 m.a.s.l.). An automatic meteorological station (AMS) was installed in the central part of the vineyard and used to measure Rn, solar radiation (Rsi), air temperature (Ta), canopy temperature (Tf) and relative humidity (RH). On a 30 min interval, results indicated that model Rne1 (assuming Ta ≠ Tf) and model Rne2 (assuming Ta = Tf) were able to estimate Rn with a mean absolute error (MAE) of less than 40 W m-2 and a root mean square error (RMSE) of less than 61 W m-2. On daily intervals, the two models estimated Rno with MAE and RMSE values of less than 1.68 and 1.75 MJ m-2 d-1, respectively. In global terms, the models presented errors below 9 and 11% on 30 min and daily intervals, respectively. Furthermore, this study indicated that the incorporation of canopy temperature did not improve the Rno estimation substantially, in spite of a temperature gradient (dT = Tf - Ta) between -3 and 4 ºC. These results suggest that the Rne2 model could be used to estimate Rno using Rsi, Ta and RH measurements.

  8. Impact and quantification of the sources of error in DNA pooling designs.

    Science.gov (United States)

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.

  9. Error and objectivity: cognitive illusions and qualitative research.

    Science.gov (United States)

    Paley, John

    2005-07-01

    Psychological research has shown that cognitive illusions, of which visual illusions are just a special case, are systematic and pervasive, raising epistemological questions about how error in all forms of research can be identified and eliminated. The quantitative sciences make use of statistical techniques for this purpose, but it is not clear what the qualitative equivalent is, particularly in view of widespread scepticism about validity and objectivity. I argue that, in the light of cognitive psychology, the 'error question' cannot be dismissed as a positivist obsession, and that the concepts of truth and objectivity are unavoidable. However, they constitute only a 'minimal realism', which does not necessarily bring a commitment to 'absolute' truth, certainty, correspondence, causation, reductionism, or universal laws in its wake. The assumption that it does reflects a misreading of positivism and, ironically, precipitates a 'crisis of legitimation and representation', as described by constructivist authors.

  10. The approach of Bayesian model indicates media awareness of medical errors

    Science.gov (United States)

    Ravichandran, K.; Arulchelvan, S.

    2016-06-01

    This research study brings out the factors behind the increase in medical malpractice in the Indian subcontinent in the present-day environment and the impact of television media awareness on it. Increased media reporting of medical malpractice and errors leads to hospitals taking corrective action and improving the quality of the medical services that they provide. The model of Cultivation Theory can be used to measure the influence of media in creating awareness of medical errors. Patients' perceptions of various errors committed by the medical industry, gathered from different parts of India, were taken up for this study. The Bayesian method was used for data analysis; it gives absolute values that indicate satisfaction of the recommended values. The study also examines the impact of the family doctor maintaining a family's medical records online on reducing medical malpractice, which underlines the importance of service quality in the medical industry through ICT.

  11. Variance computations for functional of absolute risk estimates.

    Science.gov (United States)

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  12. Absolute determination of the deuterium content of heavy water, measurement of absolute density

    International Nuclear Information System (INIS)

    Ceccaldi, M.; Riedinger, M.; Menache, M.

    1975-01-01

    The absolute density of two heavy water samples rich in deuterium (with a grade higher than 99.9%) was determined with the hydrostatic method. The exact isotopic composition of this water (hydrogen and oxygen isotopes) was very carefully studied. A theoretical estimate enabled us to get the absolute density value of isotopically pure D₂¹⁶O. This value was found to be 1104.750 kg·m⁻³ at t₆₈ = 22.3 °C and under the pressure of one atmosphere. (orig.) [de]

  13. Data Fusion of Gridded Snow Products Enhanced with Terrain Covariates and a Simple Snow Model

    Science.gov (United States)

    Snauffer, A. M.; Hsieh, W. W.; Cannon, A. J.

    2017-12-01

    Hydrologic planning requires accurate estimates of regional snow water equivalent (SWE), particularly areas with hydrologic regimes dominated by spring melt. While numerous gridded data products provide such estimates, accurate representations are particularly challenging under conditions of mountainous terrain, heavy forest cover and large snow accumulations, contexts which in many ways define the province of British Columbia (BC), Canada. One promising avenue of improving SWE estimates is a data fusion approach which combines field observations with gridded SWE products and relevant covariates. A base artificial neural network (ANN) was constructed using three of the best performing gridded SWE products over BC (ERA-Interim/Land, MERRA and GLDAS-2) and simple location and time covariates. This base ANN was then enhanced to include terrain covariates (slope, aspect and Terrain Roughness Index, TRI) as well as a simple 1-layer energy balance snow model driven by gridded bias-corrected ANUSPLIN temperature and precipitation values. The ANN enhanced with all aforementioned covariates performed better than the base ANN, but most of the skill improvement was attributable to the snow model with very little contribution from the terrain covariates. The enhanced ANN improved station mean absolute error (MAE) by an average of 53% relative to the composing gridded products over the province. Interannual peak SWE correlation coefficient was found to be 0.78, an improvement of 0.05 to 0.18 over the composing products. This nonlinear approach outperformed a comparable multiple linear regression (MLR) model by 22% in MAE and 0.04 in interannual correlation. The enhanced ANN has also been shown to estimate better than the Variable Infiltration Capacity (VIC) hydrologic model calibrated and run for four BC watersheds, improving MAE by 22% and correlation by 0.05. The performance improvements of the enhanced ANN are statistically significant at the 5% level across the province and

  14. Scheimpflug camera combined with placido-disk corneal topography and optical biometry for intraocular lens power calculation.

    Science.gov (United States)

    Kirgiz, Ahmet; Atalay, Kurşat; Kaldirim, Havva; Cabuk, Kubra Serefoglu; Akdemir, Mehmet Orcun; Taskapili, Muhittin

    2017-08-01

    The purpose of this study was to compare the keratometry (K) values obtained by the Scheimpflug camera combined with placido-disk corneal topography (Sirius) and optical biometry (Lenstar) for intraocular lens (IOL) power calculation before the cataract surgery, and to evaluate the accuracy of postoperative refraction. 50 eyes of 40 patients were scheduled to have phacoemulsification with the implantation of a posterior chamber intraocular lens. The IOL power was calculated using the SRK/T formula with Lenstar K and K readings from Sirius. Simulated K (SimK), K at 3-, 5-, and 7-mm zones from Sirius were compared with Lenstar K readings. The accuracy of these parameters was determined by calculating the mean absolute error (MAE). The mean Lenstar K value was 44.05 diopters (D) ±1.93 (SD) and SimK, K at 3-, 5-, and 7-mm zones were 43.85 ± 1.91, 43.88 ± 1.9, 43.84 ± 1.9, 43.66 ± 1.85 D, respectively. There was no statistically significant difference between the K readings (P = 0.901). When Lenstar was used for the corneal power measurements, MAE was 0.42 ± 0.33 D, but when simK of Sirius was used, it was 0.37 ± 0.32 D (the lowest MAE (0.36 ± 0.32 D) was achieved as a result of 5 mm K measurement), but it was not statistically significant (P = 0.892). Of all the K readings of Sirius and Lenstar, Sirius 5-mm zone K readings were the best in predicting a more precise IOL power. The corneal power measurements with the Scheimpflug camera combined with placido-disk corneal topography can be safely used for IOL power calculation.

  15. Benchmarking Continuum Solvent Models for Keto-Enol Tautomerizations.

    Science.gov (United States)

    McCann, Billy W; McFarland, Stuart; Acevedo, Orlando

    2015-08-13

    Experimental free energies of tautomerization, ΔGT, were used to benchmark the gas-phase predictions of 17 different quantum mechanical methods and eight basis sets for seven keto-enol tautomer pairs dominated by their enolic form. The G4 method and M06/6-31+G(d,p) yielded the most accurate results, with mean absolute errors (MAE's) of 0.95 and 0.71 kcal/mol, respectively. Using these two theory levels, the solution-phase ΔGT values for 23 unique tautomer pairs composed of aliphatic ketones, β-dicarbonyls, and heterocycles were computed in multiple protic and aprotic solvents. The continuum solvation models, namely, polarizable continuum model (PCM), polarizable conductor calculation model (CPCM), and universal solvation model (SMD), gave relatively similar MAE's of ∼1.6-1.7 kcal/mol for G4 and ∼1.9-2.0 kcal/mol with M06/6-31+G(d,p). Partitioning the tautomer pairs into their respective molecular types, that is, aliphatic ketones, β-dicarbonyls, and heterocycles, and separating out the aqueous versus nonaqueous results finds G4/PCM utilizing the UA0 cavity to be the overall most accurate combination. Free energies of activation, ΔG(‡), for the base-catalyzed keto-enol interconversion of 2-nitrocyclohexanone were also computed using six bases and five solvents. The M06/6-31+G(d,p) reproduced the ΔG(‡) with MAE's of 1.5 and 1.8 kcal/mol using CPCM and SMD, respectively, for all combinations of base and solvent. That specific enolization was previously proposed to proceed via a concerted mechanism in less polar solvents but shift to a stepwise mechanism in more polar solvents. However, the current calculations suggest that the stepwise mechanism operates in all solvents.

  16. Absolute Position Sensing Based on a Robust Differential Capacitive Sensor with a Grounded Shield Window

    Directory of Open Access Journals (Sweden)

    Yang Bai

    2016-05-01

    A simple differential capacitive sensor is provided in this paper to measure the absolute positions of length measuring systems. By utilizing a shield window inside the differential capacitor, the measurement range and linearity range of the sensor can reach several millimeters. What is more interesting is that this differential capacitive sensor is only sensitive to movement along one translational degree of freedom (DOF), and is immune to vibration along the other two translational DOFs. In the experiment, we used a novel circuit based on an AC capacitance bridge to directly measure the differential capacitance value. The experimental result shows that this differential capacitive sensor has a sensitivity of 2 × 10−4 pF/μm with 0.08 μm resolution. The measurement range of this differential capacitive sensor is 6 mm, and the linearity error is less than 0.01% over the whole absolute position measurement range.

  17. The absolute environmental performance of buildings

    DEFF Research Database (Denmark)

    Brejnrod, Kathrine Nykjær; Kalbar, Pradip; Petersen, Steffen

    2017-01-01

    Our paper presents a novel approach for absolute sustainability assessment of a building's environmental performance. It is demonstrated how the absolute sustainable share of the earth carrying capacity of a specific building type can be estimated using carrying capacity based normalization factors. A building is considered absolute sustainable if its annual environmental burden is less than its share of the earth environmental carrying capacity. Two case buildings – a standard house and an upcycled single-family house located in Denmark – were assessed according to this approach and both were found to exceed the target values of three (almost four) of the eleven impact categories included in the study. The worst-case excess was for the case building, representing prevalent Danish building practices, which utilized 1563% of the Climate Change carrying capacity. Four paths to reach absolute...

  18. Absolute Summ

    Science.gov (United States)

    Phillips, Alfred, Jr.

    Summ means the entirety of the multiverse. It seems clear, from the inflation theories of A. Guth and others, that the creation of many universes is plausible. We argue that Absolute cosmological ideas, not unlike those of I. Newton, may be consistent with dynamic multiverse creations. As suggested in W. Heisenberg's uncertainty principle, and with the Anthropic Principle defended by S. Hawking, et al., human consciousness, buttressed by findings of neuroscience, may have to be considered in our models. Predictability, as A. Einstein realized with Invariants and General Relativity, may be required for new ideas to be part of physics. We present here a two postulate model geared to an Absolute Summ. The seedbed of this work is part of Akhnaton's philosophy (see S. Freud, Moses and Monotheism). Most important, however, is that the structure of human consciousness, manifest in Kenya's Rift Valley 200,000 years ago as Homo sapiens, who were the culmination of the six million year co-creation process of Hominins and Nature in Africa, allows us to do the physics that we do.

  19. Absolute flux scale for radioastronomy

    International Nuclear Information System (INIS)

    Ivanov, V.P.; Stankevich, K.S.

    1986-01-01

    The authors propose and provide support for a new absolute flux scale for radio astronomy, which is not encumbered with the inadequacies of the previous scales. In constructing it, the method of relative spectra was used (a powerful tool for choosing reference spectra). A review is given of previous flux scales. The authors compare the AIS scale with the scale they propose. Both scales are based on absolute measurements by the ''artificial moon'' method, and they are practically coincident in the range from 0.96 to 6 GHz. At frequencies above 6 GHz and below 0.96 GHz, the AIS scale is overestimated because of incorrect extrapolation of the spectra of the primary and secondary standards. The major results which have emerged from this review of absolute scales in radio astronomy are summarized.

  20. A global algorithm for estimating Absolute Salinity

    Science.gov (United States)

    McDougall, T. J.; Jackett, D. R.; Millero, F. J.; Pawlowicz, R.; Barker, P. M.

    2012-12-01

    The International Thermodynamic Equation of Seawater - 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity. When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg-1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean. To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly, however, are limited by the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).
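
    This TEOS-10 algorithm is implemented in the Gibbs SeaWater (GSW) toolbox; assuming the Python gsw package is installed, a typical call converting Practical Salinity to Absolute Salinity for one sample looks like the sketch below (the sample values are illustrative):

        import gsw  # TEOS-10 Gibbs SeaWater toolbox (assumed installed as the Python "gsw" package)

        # Illustrative sample: Practical Salinity, sea pressure (dbar), longitude, latitude.
        SP, p, lon, lat = 35.0, 1000.0, -150.0, 45.0

        SA = gsw.SA_from_SP(SP, p, lon, lat)     # Absolute Salinity (g/kg) via the global lookup
        SR = SP * 35.16504 / 35.0                # Reference Salinity of a standard-composition sample
        print("Absolute Salinity:", SA, "g/kg; anomaly estimate:", SA - SR, "g/kg")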

  1. Relative and Absolute Reliability of Timed Up and Go Test in Community Dwelling Older Adult and Healthy Young People

    Directory of Open Access Journals (Sweden)

    Farhad Azadi

    2014-01-01

    Objectives: Relative and absolute reliability are psychometric properties of a test on which many clinical decisions are based. In many cases, only relative reliability is taken into consideration, while absolute reliability is also very important. Methods & Materials: Eleven community-dwelling older adults aged 65 years and older (69.64±3.58) and 20 healthy young people in the age range 20 to 35 years (28.80±4.15) were evaluated twice, with an interval of 2 to 5 days, using three versions of the Timed Up and Go test. Results: Generally, when the non-homogeneity of the study population was stratified to increase the Intra-class Correlation Coefficient (ICC), this coefficient was greater in elderly people than in young people and was reduced with a secondary task. In this study, absolute reliability indices computed using different data sources and equations led to more or less similar results. In general, in test-retest situations, a larger change is required in the elderly than in the young for it to be interpreted as a real change rather than a random one. The random error contribution is slightly greater in the elderly than in the young and increases with a secondary task. Heterogeneity appears to moderate the absolute reliability indices. Conclusion: In relative reliability studies, researchers and clinicians should pay attention to factors such as the homogeneity of the population. In addition, absolute reliability, alongside relative reliability, is needed and necessary in clinical decision making.
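
    The record does not state which absolute reliability equations the study applied; a common pair of such indices, the standard error of measurement (SEM) and the minimal detectable change (MDC95), can be sketched as follows with invented TUG times:

        import numpy as np

        def absolute_reliability(test, retest, icc):
            """Common absolute-reliability indices from test-retest scores and an ICC.

            SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM (one widely used form;
            the study's exact equations are not given in the abstract).
            """
            scores = np.concatenate([test, retest])
            sd = np.std(scores, ddof=1)
            sem = sd * np.sqrt(1.0 - icc)          # standard error of measurement
            mdc95 = 1.96 * np.sqrt(2.0) * sem      # minimal detectable change at the 95% level
            return sem, mdc95

        # Hypothetical TUG times (s) for a small group, not the study data.
        t1 = np.array([9.8, 11.2, 10.5, 12.0, 9.4])
        t2 = np.array([10.1, 11.0, 10.9, 11.6, 9.7])
        print(absolute_reliability(t1, t2, icc=0.90))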

  2. EIT Imaging of admittivities with a D-bar method and spatial prior: experimental results for absolute and difference imaging.

    Science.gov (United States)

    Hamilton, S J

    2017-05-22

    Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, which makes the reconstruction challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high-confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute, as well as time-difference imaging provides the user with great flexibility without a high computational cost.

  3. Production of the Bioactive Compounds Violacein and Indolmycin Is Conditional in a maeA Mutant of Pseudoalteromonas luteoviolacea S4054 Lacking the Malic Enzyme

    DEFF Research Database (Denmark)

    Schmidt Thøgersen, Mariane; Delpin, Marina; Melchiorsen, Jette

    2016-01-01

    cluster was not interrupted by the transposon; instead the insertion was located to the maeA gene encoding the malic enzyme. Supernatant of the mutant strain inhibited Vibrio anguillarum and Staphylococcus aureus in well diffusion assays and in MIC assays at the same level as the wild type strain...... of violacein and indolmycin may be metabolically linked and that yet unidentified antibacterial compound(s) may play a role in the antibacterial activity of P. luteoviolacea....

  4. Analysis of multi-scale chaotic characteristics of wind power based on Hilbert–Huang transform and Hurst analysis

    International Nuclear Information System (INIS)

    Liang, Zhengtang; Liang, Jun; Zhang, Li; Wang, Chengfu; Yun, Zhihao; Zhang, Xu

    2015-01-01

    ; the Meso-scale subsequence, which possesses the greatest variance contribution rate and the maximum largest Lyapunov exponent, is the dominant factor driving the fluctuation and dynamic behavior of wind power; (3) the short-term predictions of these three subsequences based on extreme learning machine (ELM) and least-squares support vector machine (LSSVM) models have validated the above analysis results, which show that the number of look-ahead prediction steps follows an ordinal trend in terms of the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), and that the prediction error contribution rate of the Meso-scale subsequence is the maximum. Furthermore, the short-term 6-step-ahead wind power forecast based on the multi-scale analysis is performed by EMD-LSSVM + ELM, and the normalized Mean Absolute Error (nMAE) and normalized Root Mean Square Error (nRMSE) are decreased by 49.45% and 44.30% compared with those of LSSVM, and by 37.96% and 27.12% compared with those of EMD-LSSVM, respectively.
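
    As a minimal illustrative sketch (not the authors' code), the point-forecast error metrics quoted above can be computed as follows; normalising by installed capacity is assumed here as one common convention for nMAE/nRMSE, since the abstract does not state the normalisation:

      import numpy as np

      def forecast_errors(actual, predicted, capacity):
          """Point-forecast error metrics for comparing wind power models."""
          actual = np.asarray(actual, dtype=float)
          predicted = np.asarray(predicted, dtype=float)
          err = predicted - actual
          mae = np.mean(np.abs(err))          # Mean Absolute Error
          rmse = np.sqrt(np.mean(err ** 2))   # Root Mean Square Error
          # Normalised variants (assumed: divide by installed capacity).
          return {"MAE": mae, "RMSE": rmse,
                  "nMAE": mae / capacity, "nRMSE": rmse / capacity}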

  5. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer.

    Science.gov (United States)

    Rendón-Medina, Marco A; Andrade-Delgado, Laura; Telich-Tarriba, Jose E; Fuente-Del-Campo, Antonio; Altamirano-Arcos, Carlos A

    2018-01-01

    Rapid prototyping models (RPMs) had been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPM to few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co), with Open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative difference was 0.65 mm and 1.96%, respectively ( P = 0.96). Low-cost FDM machines and Open Source Software are excellent options to manufacture RPM, with the benefit of low cost and a similar relative error than other more expensive technologies.

  6. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer

    Directory of Open Access Journals (Sweden)

    Marco A. Rendón-Medina

    2018-01-01

    Full Text Available Summary: Rapid prototyping models (RPMs) had been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPM to few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co), with Open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative difference was 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and Open Source Software are excellent options to manufacture RPM, with the benefit of low cost and a similar relative error than other more expensive technologies.

  7. Intranasal Pharmacokinetic Data for Triptans Such as Sumatriptan and Zolmitriptan Can Render Area Under the Curve (AUC) Predictions for the Oral Route: Strategy Development and Application.

    Science.gov (United States)

    Srinivas, Nuggehally R; Syed, Muzeeb

    2016-01-01

    Limited pharmacokinetic sampling strategy may be useful for predicting the area under the curve (AUC) for triptans and may have clinical utility as a prospective prediction tool. Using appropriate intranasal pharmacokinetic data, a Cmax vs. AUC relationship was established by linear regression models for sumatriptan and zolmitriptan. The predictions of the AUC values were performed using published mean/median Cmax data and the appropriate regression lines. The quotient of observed and predicted values rendered the fold-difference calculation. The mean absolute error (MAE), mean positive error (MPE), mean negative error (MNE), root mean square error (RMSE), correlation coefficient (r), and the goodness of the AUC fold prediction were used to evaluate the two triptans. Also, data from the mean concentration profiles at time points of 1 hour (sumatriptan) and 3 hours (zolmitriptan) were used for the AUC prediction. The Cmax vs. AUC models displayed excellent correlation for both sumatriptan (r = .9997) and zolmitriptan. Most of the predicted AUCs (83%-85%) were within a 0.76-1.25-fold difference using the regression model. The prediction of AUC values for sumatriptan or zolmitriptan using the concentration data that reflected the Tmax occurrence was in the proximity of the reported values. In summary, the Cmax vs. AUC models exhibited strong correlations for sumatriptan and zolmitriptan. The usefulness of the prediction of the AUC values was established by a rigorous statistical approach.
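
    The evaluation statistics named in this record (MAE, MPE, MNE, RMSE, r, and the 0.76-1.25-fold criterion) can be reproduced with a short sketch such as the one below; the fold difference is taken as the quotient of observed and predicted AUC, following the abstract, and everything else is a generic illustration rather than the authors' code:

      import numpy as np

      def auc_prediction_stats(observed, predicted):
          """Error statistics for Cmax-based AUC predictions."""
          observed = np.asarray(observed, dtype=float)
          predicted = np.asarray(predicted, dtype=float)
          err = predicted - observed
          mae = np.mean(np.abs(err))                              # mean absolute error
          mpe = err[err > 0].mean() if np.any(err > 0) else 0.0   # mean positive error
          mne = err[err < 0].mean() if np.any(err < 0) else 0.0   # mean negative error
          rmse = np.sqrt(np.mean(err ** 2))
          r = np.corrcoef(observed, predicted)[0, 1]
          fold = observed / predicted                             # fold difference
          pct_within = 100.0 * np.mean((fold >= 0.76) & (fold <= 1.25))
          return mae, mpe, mne, rmse, r, pct_within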

  8. Laboratory Study of Quaternary Sediment Resistivity Related to Groundwater Contamination at Mae-Hia Landfill, Mueang District, Chiang Mai Province

    Science.gov (United States)

    Sichan, N.

    2007-12-01

    This study aimed to understand how the resistivity of the sediment changes when it is contaminated, in order to use this information to resolve ambiguous interpretations in the field. Pilot laboratory experiments were designed to simulate various degrees of contamination and saturation and to observe the resulting changes in resistivity. The study was expected to give a better understanding of how various physical parameters affect the resistivity values in terms of mathematical functions, and to apply the obtained functions to practical quantitative interpretation. The sediment underlying the Mae-Hia Landfill consists of clay-rich material, with interfingerings of colluvium and sandy alluvium. A systematic study identified four kinds of sediment: sand, clayey sand, sandy clay, and clay. Representative sediment and leachate samples were taken from the field and returned to the laboratory. Both the physical and chemical properties of the sediments and leachate were analyzed to delineate the parameters needed in Archie's equation. Sediment samples were mixed with leachate solutions of various concentrations, and the resistivity values were measured at controlled steps of the saturation degree in a well-calibrated six-electrode model resistivity box. The measured resistivity values for sand, clayey sand, and sandy clay when fully and partly saturated were collected, then plotted and fitted to Archie's equation to obtain a mathematical relationship between bulk resistivity, porosity, saturation degree, and resistivity of the pore fluid. The results fit well to Archie's equation, and it was possible to determine all the unknown parameters representative of the sediment samples. For sand, clayey sand, sandy clay, and clay, the formation resistivity factors (F) are 2.90, 5.77, 7.85, and 7.85, and the products of the cementation factor (m) and the pore geometry factor (a) (in terms of -am) are 1.49, -1.63, -1.92, -2
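
    Archie's equation relates bulk resistivity to porosity, saturation and pore-fluid resistivity; a fit of laboratory measurements to that relationship could look like the hedged sketch below, where the numerical arrays are invented placeholder values (the record's actual measurements are not reproduced here):

      import numpy as np
      from scipy.optimize import curve_fit

      def archie(X, a, m, n):
          """Archie's equation: rho_bulk = a * rho_w * phi**(-m) * S**(-n)."""
          phi, S, rho_w = X
          return a * rho_w * phi ** (-m) * S ** (-n)

      # phi: porosity, S: saturation degree (0-1), rho_w: leachate resistivity (ohm m)
      # rho_b: bulk resistivity measured in the six-electrode resistivity box
      phi   = np.array([0.35, 0.35, 0.35, 0.30, 0.30])   # placeholder values only
      S     = np.array([1.00, 0.80, 0.60, 1.00, 0.70])
      rho_w = np.array([5.0, 5.0, 5.0, 5.0, 5.0])
      rho_b = np.array([14.5, 20.3, 31.8, 17.4, 29.9])

      (a, m, n), _ = curve_fit(archie, (phi, S, rho_w), rho_b, p0=(1.0, 1.5, 2.0))
      F = a * phi ** (-m)   # formation resistivity factor for each sample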

  9. A global algorithm for estimating Absolute Salinity

    Directory of Open Access Journals (Sweden)

    T. J. McDougall

    2012-12-01

    Full Text Available The International Thermodynamic Equation of Seawater – 2010 has defined the thermodynamic properties of seawater in terms of a new salinity variable, Absolute Salinity, which takes into account the spatial variation of the composition of seawater. Absolute Salinity more accurately reflects the effects of the dissolved material in seawater on the thermodynamic properties (particularly density) than does Practical Salinity.

    When a seawater sample has standard composition (i.e. the ratios of the constituents of sea salt are the same as those of surface water of the North Atlantic), Practical Salinity can be used to accurately evaluate the thermodynamic properties of seawater. When seawater is not of standard composition, Practical Salinity alone is not sufficient and the Absolute Salinity Anomaly needs to be estimated; this anomaly is as large as 0.025 g kg−1 in the northernmost North Pacific. Here we provide an algorithm for estimating Absolute Salinity Anomaly for any location (x, y, p) in the world ocean.

    To develop this algorithm, we used the Absolute Salinity Anomaly that is found by comparing the density calculated from Practical Salinity to the density measured in the laboratory. These estimates of Absolute Salinity Anomaly however are limited to the number of available observations (namely 811). In order to provide a practical method that can be used at any location in the world ocean, we take advantage of approximate relationships between Absolute Salinity Anomaly and silicate concentrations (which are available globally).

  10. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.

    Science.gov (United States)

    Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep

    2017-06-12

    Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced methodology to a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor. This was also reported in the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.
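
    The headline accuracy figure in this record, the mean absolute relative difference (MARD), is straightforward to compute from time-matched CGM readings and reference blood glucose values; a minimal sketch, not the study's code:

      import numpy as np

      def mard(cgm, bg):
          """Mean Absolute Relative Difference (%) between CGM readings
          and time-matched reference blood glucose samples."""
          cgm = np.asarray(cgm, dtype=float)
          bg = np.asarray(bg, dtype=float)
          return 100.0 * np.mean(np.abs(cgm - bg) / bg)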

  11. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservatism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
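
    The point about correlated versus independent errors in a difference (growth) estimate can be illustrated with a one-line variance propagation; this is a generic sketch under the usual Gaussian assumptions, not the paper's full variance-covariance treatment:

      import numpy as np

      def growth_uncertainty(sigma1, sigma2, rho):
          """Standard deviation of a flaw-growth estimate g = d2 - d1 when the
          two sizing errors have standard deviations sigma1, sigma2 and
          correlation rho: var(g) = s1**2 + s2**2 - 2*rho*s1*s2."""
          return np.sqrt(sigma1 ** 2 + sigma2 ** 2 - 2.0 * rho * sigma1 * sigma2)

      print(growth_uncertainty(0.10, 0.10, 0.0))   # independent errors
      print(growth_uncertainty(0.10, 0.10, 0.8))   # correlated errors give a smaller growth error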

  12. Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment

    International Nuclear Information System (INIS)

    Liu, Ming; Cygler, Joanna; Vandervoort, Eric

    2016-01-01

    The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, for the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.

  13. Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Ming [Carleton University (Canada); Cygler, Joanna [The Ottawa Hospital Cancer Centre, Carleton University, Ottawa University (Canada); Vandervoort, Eric [The Ottawa Hospital Cancer Centre, Ottawa University (Canada)

    2016-08-15

    The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, for the number of treatment fractions and the number of patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.
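
    The summary statistics reported here (mean absolute per-axis error, mean radial error, and the 99th percentile of the radial error) follow directly from the per-axis compensation errors logged at each x-ray pair; a hedged sketch, with array names chosen for illustration only:

      import numpy as np

      def compensation_error_stats(err_xyz):
          """err_xyz: (N, 3) array of total compensation errors (mm) in the
          left/right, anterior/posterior and superior/inferior directions."""
          err_xyz = np.asarray(err_xyz, dtype=float)
          radial = np.linalg.norm(err_xyz, axis=1)   # 3-D radial error per x-ray pair
          return {"mean_radial": radial.mean(),
                  "p99_radial": np.percentile(radial, 99),
                  "mean_abs_per_axis": np.abs(err_xyz).mean(axis=0)}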

  14. Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps.

    Science.gov (United States)

    Liu, Xiaohong; Huang, Shujun; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian

    2017-05-06

    The refractive index of a lens varies for different wavelengths of light, and thus the same incident light with different wavelengths has different outgoing light. This characteristic of lenses causes images captured by a color camera to display chromatic aberration (CA), which seriously reduces image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from the front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and optimum fringe number selection method. CA causes the unwrapped phase of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, blue, and green channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.
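
    The per-channel wrapped phase in a four-step phase-shifting scheme follows the standard arctangent formula; the sketch below assumes phase shifts of 0, pi/2, pi and 3*pi/2 between the four captured fringe images (the unwrapping and optimum-fringe-number steps described in the record are not reproduced):

      import numpy as np

      def four_step_phase(i1, i2, i3, i4):
          """Wrapped phase from four fringe images with shifts 0, pi/2, pi, 3*pi/2.
          Standard four-step formula: phi = atan2(I4 - I2, I1 - I3)."""
          return np.arctan2(i4 - i2, i1 - i3)

      # The per-channel wrapped phases would then be unwrapped and compared, e.g.
      # phi_r = four_step_phase(*red_images); phi_g = four_step_phase(*green_images)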

  15. Invariant and Absolute Invariant Means of Double Sequences

    Directory of Open Access Journals (Sweden)

    Abdullah Alotaibi

    2012-01-01

    Full Text Available We examine some properties of the invariant mean, define the concepts of strong σ-convergence and absolute σ-convergence for double sequences, and determine the associated sublinear functionals. We also define the absolute invariant mean through which the space of absolutely σ-convergent double sequences is characterized.

  16. Forecasting air quality time series using deep learning.

    Science.gov (United States)

    Freeman, Brian S; Taylor, Graham; Gharabaghi, Bahram; Thé, Jesse

    2018-04-13

    This paper presents one of the first applications of deep learning (DL) techniques to predict air pollution time series. Air quality management relies extensively on time series data captured at air monitoring stations as the basis of identifying population exposure to airborne pollutants and determining compliance with local ambient air standards. In this paper, 8-hr averaged surface ozone (O3) concentrations were predicted using deep learning consisting of a recurrent neural network (RNN) with long short-term memory (LSTM). Hourly air quality and meteorological data were used to train and forecast values up to 72 hours with low error rates. The LSTM was able to forecast the duration of continuous O3 exceedances as well. Prior to training the network, the dataset was reviewed for missing data and outliers. Missing data were imputed using a novel technique that averaged gaps less than eight time steps with incremental steps based on first-order differences of neighboring time periods. Data were then used to train decision trees to evaluate input feature importance over different time prediction horizons. The number of features used to train the LSTM model was reduced from 25 features to 5 features, resulting in improved accuracy as measured by Mean Absolute Error (MAE). Parameter sensitivity analysis identified that look-back nodes associated with the RNN were a significant source of error if not aligned with the prediction horizon. Overall, MAEs of less than 2 were calculated for predictions out to 72 hours. Novel deep learning techniques were used to train an 8-hour averaged ozone forecast model. Missing data and outliers within the captured data set were replaced using a new imputation method that generated calculated values closer to the expected value based on the time and season. Decision trees were used to identify input variables with the greatest importance. The methods presented in this paper allow air managers to forecast long range air pollution
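
    A skeleton of an LSTM forecaster of the kind described could look like the following; the layer width, look-back length and training settings here are illustrative assumptions, not the configuration reported in the paper:

      import tensorflow as tf

      LOOK_BACK, N_FEATURES = 24, 5          # 5 selected input features (per the study)

      model = tf.keras.Sequential([
          tf.keras.layers.LSTM(32, input_shape=(LOOK_BACK, N_FEATURES)),
          tf.keras.layers.Dense(1),          # 8-h averaged O3 concentration
      ])
      model.compile(optimizer="adam", loss="mae")   # optimise mean absolute error

      # X_train: (samples, LOOK_BACK, N_FEATURES) hourly windows, y_train: target O3
      # model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)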

  17. The stars: an absolute radiometric reference for the on-orbit calibration of PLEIADES-HR satellites

    Science.gov (United States)

    Meygret, Aimé; Blanchet, Gwendoline; Mounier, Flore; Buil, Christian

    2017-09-01

    The accurate on-orbit radiometric calibration of optical sensors has become a challenge for space agencies, which pool their efforts through international working groups such as CEOS/WGCV or GSICS with the objective to ensure the consistency of space measurements and to reach an absolute accuracy compatible with more and more demanding scientific needs. Different targets are traditionally used for calibration depending on the sensor or spacecraft specificities: from on-board calibration systems to ground targets, they all take advantage of our capacity to characterize and model them. But achieving the in-flight stability of a diffuser panel is always a challenge, while the calibration over ground targets is often limited by their BRDF characterization and the atmosphere variability. Thanks to their agility, some satellites have the capability to view extra-terrestrial targets such as the moon or stars. The moon is widely used for calibration and its albedo is known through the ROLO (RObotic Lunar Observatory) USGS model, but with a poor absolute accuracy limiting its use to sensor drift monitoring or cross-calibration. Although the spectral irradiance of some stars is known with a very high accuracy, it was not really shown that they could provide an absolute reference for remote sensor calibration. This paper shows that high resolution optical sensors can be calibrated with a high absolute accuracy using stars. The agile-body PLEIADES 1A satellite is used for this demonstration. The star based calibration principle is described and the results are provided for different stars, each one being acquired several times. These results are compared to the official calibration provided by ground targets and the main error contributors are discussed.

  18. Timing of metamorphism of the Lansang gneiss and implications for left-lateral motion along the Mae Ping (Wang Chao) strike-slip fault, Thailand

    Science.gov (United States)

    Palin, R. M.; Searle, M. P.; Morley, C. K.; Charusiri, P.; Horstwood, M. S. A.; Roberts, N. M. W.

    2013-10-01

    The Mae Ping fault (MPF), western Thailand, exhibits dominantly left-lateral strike-slip motion and stretches for >600 km, reportedly branching off the right-lateral Sagaing fault in Myanmar and extending southeast towards Cambodia. Previous studies have suggested that the fault assisted the large-scale extrusion of Sundaland that occurred during the Late Eocene-Early Oligocene, with a geological offset of ˜120-150 km estimated from displaced high-grade gneisses and granites of the Chiang Mai-Lincang belt. Exposures of high-grade orthogneiss in the Lansang National Park, part of this belt, locally contain strong mylonitic textures and are bounded by strike-slip ductile shear zones and brittle faults. Geochronological analysis of monazite from a sample of sheared biotite-K-feldspar orthogneiss suggests two episodes of crystallization, with core regions documenting Th-Pb ages between c. 123 and c. 114 Ma and rim regions documenting a significantly younger age range between c. 45-37 Ma. These data are interpreted to represent possible magmatic protolith emplacement for the Lansang orthogneiss during the Early Cretaceous, with a later episode of metamorphism occurring during the Eocene. Textural relationships provided by in situ analysis suggest that ductile shearing along the MPF occurred during the latter stages of, or after, this metamorphic event. In addition, monazite analyzed from an undeformed garnet-two-mica granite dyke intruding metamorphic units at Bhumipol Lake outside of the Mae Ping shear zone produced a Th-Pb age of 66.2 ± 1.6 Ma. This age is interpreted to date the timing of dyke emplacement, implying that the MPF cuts through earlier formed magmatic and high-grade metamorphic rocks. These new data, when combined with regional mapping and earlier geochronological work, show that neither metamorphism, nor regional cooling, was directly related to strike-slip motion.

  19. Estimating nonrigid motion from inconsistent intensity with robust shape features

    International Nuclear Information System (INIS)

    Liu, Wenyang; Ruan, Dan

    2013-01-01

    Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subjected to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided

  20. Modeling the effects of ultrasound power and reactor dimension on the biodiesel production yield: Comparison of prediction abilities between response surface methodology (RSM) and adaptive neuro-fuzzy inference system (ANFIS)

    International Nuclear Information System (INIS)

    Mostafaei, Mostafa; Javadikia, Hossein; Naderloo, Leila

    2016-01-01

    Biodiesel is an alternative petro-diesel fuel produced from renewable resources. The use of novel technologies such as ultrasound technology for biodiesel production intensifies the reaction and reduces the process cost. The present study aims to evaluate and compare the prediction and simulation efficiency of the response surface methodology (RSM) and adaptive neuro-fuzzy inference system (ANFIS) approaches for modeling the transesterification yield achieved in an ultrasonic reactor. The influence of independent variables (reactor diameter, liquid height and ultrasound intensity) on the conversion of fatty acid methyl esters (FAME) was investigated by a Box-Behnken design of RSM and two ANFIS approaches (hybrid and back-propagation optimization methods). All models were compared statistically based on the training and validation data sets by the coefficient of determination (R2), root mean square error (RMSE), mean absolute percentage error (MAPE), mean absolute error (MAE) and mean relative percent deviation (MRPD). The calculated R2 for the RSM and two ANFIS models were 0.9669, 0.9812 and 0.9808, respectively. All models gave good predictions; however, the ANFIS models were more precise than the RSM model, which shows that ANFIS is a powerful tool for modeling and optimizing FAME production in an ultrasound reactor. - Highlights: • The ultrasound assisted FAME conversion was modelled using RSM and ANFIS approaches. • The scatter diagrams indicate the models accurately predicted the reaction yield. • The ANFIS model (hybrid) has a higher R2 (0.9812) compared to the RSM model. • The predicted deviations and residual values are relatively small for the ANFIS model. • The ANFIS model was more accurate for predicting ultrasound assisted FAME conversion.

  1. An absolute calibration system for millimeter-accuracy APOLLO measurements

    Science.gov (United States)

    Adelberger, E. G.; Battat, J. B. R.; Birkmeier, K. J.; Colmenares, N. R.; Davis, R.; Hoyle, C. D.; Huang, L. R.; McMillan, R. J.; Murphy, T. W., Jr.; Schlerman, E.; Skrobol, C.; Stubbs, C. W.; Zach, A.

    2017-12-01

    Lunar laser ranging provides a number of leading experimental tests of gravitation—important in our quest to unify general relativity and the standard model of physics. The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) has for years achieved median range precision at the ∼2 mm level. Yet residuals in model-measurement comparisons are an order-of-magnitude larger, raising the question of whether the ranging data are not nearly as accurate as they are precise, or if the models are incomplete or ill-conditioned. This paper describes a new absolute calibration system (ACS) intended both as a tool for exposing and eliminating sources of systematic error, and also as a means to directly calibrate ranging data in situ. The system consists of a high-repetition-rate (80 MHz) laser emitting short pulses ... motivating continued work on model capabilities. The ACS provides the means to deliver APOLLO data both accurate and precise below the 2 mm level.

  2. Absolute measurement of a tritium standard

    International Nuclear Information System (INIS)

    Hadzisehovic, M.; Mocilnik, I.; Buraei, K.; Pongrac, S.; Milojevic, A.

    1978-01-01

    For the determination of a tritium absolute activity standard, a method of internal gas counting has been used. The procedure involves water reduction by uranium and zinc and, further, the measurement of the absolute disintegration rate of tritium per unit of the effective volume of the counter by a compensation method. Criteria for the choice of methods and procedures concerning the determination and measurement of the gaseous 3H yield, the parameters of gaseous hydrogen, the sample mass of HTO and the absolute disintegration rate of tritium are discussed. In order to obtain gaseous sources of 3H (and 2H), the same reversible chemical reaction was used, namely, the water - uranium hydride - hydrogen system. This reaction was proved to be quantitative above 500 deg C by measuring the yield of the gas obtained and the absolute activity of an HTO standard. A brief description of the measuring apparatus is given, as well as a critical discussion of the brass counter quality and the possibility of obtaining equal working conditions at the counter ends. (T.G.)

  3. Cryogenic, Absolute, High Pressure Sensor

    Science.gov (United States)

    Chapman, John J. (Inventor); Shams. Qamar A. (Inventor); Powers, William T. (Inventor)

    2001-01-01

    A pressure sensor is provided for cryogenic, high pressure applications. A highly doped silicon piezoresistive pressure sensor is bonded to a silicon substrate in an absolute pressure sensing configuration. The absolute pressure sensor is bonded to an aluminum nitride substrate. Aluminum nitride has an appropriate coefficient of thermal expansion for use with highly doped silicon at cryogenic temperatures. A group of sensors, either two sensors on two substrates or four sensors on a single substrate, is packaged in a pressure vessel.

  4. Diabetes and quality of life: Comparing results from utility instruments and Diabetes-39.

    Science.gov (United States)

    Chen, Gang; Iezzi, Angelo; McKie, John; Khan, Munir A; Richardson, Jeff

    2015-08-01

    To compare the Diabetes-39 (D-39) with six multi-attribute utility (MAU) instruments (15D, AQoL-8D, EQ-5D, HUI3, QWB, and SF-6D), and to develop mapping algorithms which could be used to transform the D-39 scores into the MAU scores. Self-reported diabetes sufferers (N=924) and members of the healthy public (N=1760), aged 18 years and over, were recruited from 6 countries (Australia 18%, USA 18%, UK 17%, Canada 16%, Norway 16%, and Germany 15%). Apart from the QWB, which was distributed normally, non-parametric rank tests were used to compare subgroup utilities and D-39 scores. Mapping algorithms were estimated using ordinary least squares (OLS) and generalised linear models (GLM). MAU instruments discriminated between diabetes patients and the healthy public; however, utilities varied between instruments. The 15D, SF-6D, and AQoL-8D had the strongest correlations with the D-39. Except for the HUI3, there were significant differences by gender. Mapping algorithms based on the OLS estimator consistently gave better goodness-of-fit results. The mean absolute error (MAE) values ranged from 0.061 to 0.147, the root mean square error (RMSE) values from 0.083 to 0.198, and the R-square statistics from 0.428 to 0.610. Based on MAE and RMSE values the preferred mapping is D-39 into 15D. R-square statistics and the range of predicted utilities indicate the preferred mapping is D-39 into AQoL-8D. Utilities estimated from different MAU instruments differ significantly and the outcome of a study could depend upon the instrument used. The algorithms reported in this paper enable D-39 data to be mapped into utilities predicted from any of six instruments. This provides choice for those conducting cost-utility analyses. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
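
    As a hedged illustration of the OLS mapping approach, a single-predictor regression from D-39 summary scores to utilities (e.g. 15D or AQoL-8D) is sketched below; the published algorithms may use multiple D-39 dimension scores, so this is a simplification:

      import numpy as np
      import statsmodels.api as sm

      def fit_mapping(d39_scores, utilities):
          """OLS mapping from D-39 scores to MAU utilities, with MAE/RMSE fit statistics."""
          y = np.asarray(utilities, dtype=float)
          X = sm.add_constant(np.asarray(d39_scores, dtype=float))
          model = sm.OLS(y, X).fit()
          pred = model.predict(X)
          mae = np.mean(np.abs(y - pred))
          rmse = np.sqrt(np.mean((y - pred) ** 2))
          return model, mae, rmse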

  5. COORDINATE TRANSFORMATION USING FEATHERSTONE AND VANÍČEK PROPOSED APPROACH - A CASE STUDY OF GHANA GEODETIC REFERENCE NETWORK

    Directory of Open Access Journals (Sweden)

    Yao Yevenyo Ziggah

    2017-03-01

    Full Text Available Most developing countries like Ghana are yet to adopt the geocentric datum for their surveying and mapping purposes. It is well known and documented that non-geocentric datums, based on how they were established, have more distortions in height compared with satellite datums. Most authors have argued that combining such heights with horizontal positions (latitude and longitude) in the transformation process could introduce unwanted distortions to the network. This is because the local geodetic height in most cases is assumed to be determined to a lower accuracy compared with the horizontal positions. In the light of this, a transformation model was proposed by Featherstone and Vaníček (1999) which avoids the use of height in both global and local datums in coordinate transformation. It was confirmed that adopting such a method reduces the effect of distortions caused by geodetic height on the estimated transformation parameters. Therefore, this paper applied the Featherstone and Vaníček (FV) model for the first time to a set of common-point coordinates in the Ghana geodetic reference network. The FV model was used to transform coordinates from the global datum (WGS84) to the local datum (Accra datum). The results obtained, based on the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) in both Eastings and Northings, were satisfactory. Thus, RMSE values of 0.66 m and 0.96 m were obtained for the Eastings and Northings, while 0.76 m and 0.73 m were the MAE values achieved. Also, the FV model attained a transformation accuracy of 0.49 m. Hence, this study will serve as a preliminary investigation in avoiding the use of height in coordinate transformation within Ghana’s geodetic reference network.

  6. A developmental study of latent absolute pitch memory.

    Science.gov (United States)

    Jakubowski, Kelly; Müllensiefen, Daniel; Stewart, Lauren

    2017-03-01

    The ability to recall the absolute pitch level of familiar music (latent absolute pitch memory) is widespread in adults, in contrast to the rare ability to label single pitches without a reference tone (overt absolute pitch memory). The present research investigated the developmental profile of latent absolute pitch (AP) memory and explored individual differences related to this ability. In two experiments, 288 children from 4 to 12 years of age performed significantly above chance at recognizing the absolute pitch level of familiar melodies. No age-related improvement or decline, nor effects of musical training, gender, or familiarity with the stimuli were found in regard to latent AP task performance. These findings suggest that latent AP memory is a stable ability that is developed from as early as age 4 and persists into adulthood.

  7. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    Science.gov (United States)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not previously been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
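
    The per-pixel decomposition itself can be approximated by training an ensemble of regression trees on bootstrap resamples and splitting the squared error at each evaluation pixel into squared bias and variance; a minimal sketch (the study's exact BVD procedure may differ):

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.utils import resample

      def per_pixel_bias_variance(X_train, y_train, X_eval, y_eval, n_boot=50):
          """Approximate per-pixel squared bias and variance via bootstrap ensembles."""
          preds = np.empty((n_boot, len(X_eval)))
          for b in range(n_boot):
              Xb, yb = resample(X_train, y_train, random_state=b)
              preds[b] = DecisionTreeRegressor().fit(Xb, yb).predict(X_eval)
          mean_pred = preds.mean(axis=0)
          bias_sq = (mean_pred - y_eval) ** 2    # per-pixel squared bias
          variance = preds.var(axis=0)           # per-pixel variance
          return bias_sq, variance               # map these back to pixel locations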

  8. A Simultaneously Calibration Approach for Installation and Attitude Errors of an INS/GPS/LDS Target Tracker

    Directory of Open Access Journals (Sweden)

    Jianhua Cheng

    2015-02-01

    Full Text Available To obtain the absolute position of a target is one of the basic topics for non-cooperated target tracking problems. In this paper, we present a simultaneous calibration method for an Inertial navigation system (INS)/Global position system (GPS)/Laser distance scanner (LDS) integrated system based target positioning approach. The INS/GPS integrated system provides the attitude and position of the observer, and the LDS offers the distance between the observer and the target. The two most significant errors are taken into joint consideration and analyzed: (1) the attitude measurement error of INS/GPS; (2) the installation error between the INS/GPS and LDS subsystems. Consequently, an INS/GPS/LDS based target positioning approach considering these two errors is proposed. In order to improve the performance of this approach, a novel calibration method is designed to simultaneously estimate and compensate these two main errors. Finally, simulations are conducted to assess the performance of the proposed target positioning approach and the designed simultaneous calibration method.

  9. A simultaneously calibration approach for installation and attitude errors of an INS/GPS/LDS target tracker.

    Science.gov (United States)

    Cheng, Jianhua; Chen, Daidai; Sun, Xiangyu; Wang, Tongda

    2015-02-04

    To obtain the absolute position of a target is one of the basic topics for non-cooperated target tracking problems. In this paper, we present a simultaneous calibration method for an Inertial navigation system (INS)/Global position system (GPS)/Laser distance scanner (LDS) integrated system based target positioning approach. The INS/GPS integrated system provides the attitude and position of the observer, and the LDS offers the distance between the observer and the target. The two most significant errors are taken into joint consideration and analyzed: (1) the attitude measurement error of INS/GPS; (2) the installation error between the INS/GPS and LDS subsystems. Consequently, an INS/GPS/LDS based target positioning approach considering these two errors is proposed. In order to improve the performance of this approach, a novel calibration method is designed to simultaneously estimate and compensate these two main errors. Finally, simulations are conducted to assess the performance of the proposed target positioning approach and the designed simultaneous calibration method.

  10. Advancing Absolute Calibration for JWST and Other Applications

    Science.gov (United States)

    Rieke, George; Bohlin, Ralph; Boyajian, Tabetha; Carey, Sean; Casagrande, Luca; Deustua, Susana; Gordon, Karl; Kraemer, Kathleen; Marengo, Massimo; Schlawin, Everett; Su, Kate; Sloan, Greg; Volk, Kevin

    2017-10-01

    We propose to exploit the unique optical stability of the Spitzer telescope, along with that of IRAC, to (1) transfer the accurate absolute calibration obtained with MSX on very bright stars directly to two reference stars within the dynamic range of the JWST imagers (and of other modern instrumentation); (2) establish a second accurate absolute calibration based on the absolutely calibrated spectrum of the sun, transferred onto the astronomical system via alpha Cen A; and (3) provide accurate infrared measurements for the 11 (of 15) highest priority stars with no such data but with accurate interferometrically measured diameters, allowing us to optimize determinations of effective temperatures using the infrared flux method and thus to extend the accurate absolute calibration spectrally. This program is integral to plans for an accurate absolute calibration of JWST and will also provide a valuable Spitzer legacy.

  11. Forecasting typhoid fever incidence in the Cordillera administrative region in the Philippines using seasonal ARIMA models

    Science.gov (United States)

    Cawiding, Olive R.; Natividad, Gina May R.; Bato, Crisostomo V.; Addawe, Rizavel C.

    2017-11-01

    The prevalence of typhoid fever in developing countries such as the Philippines calls for accurate forecasting of the disease, which would be of great assistance in strategic disease prevention. This paper presents the development of models that predict the behavior of typhoid fever incidence based on the monthly incidence in the provinces of the Cordillera Administrative Region from 2010 to 2015, using univariate time series analysis. The data used were obtained from the Cordillera Office of the Department of Health (DOH-CAR). Seasonal autoregressive integrated moving average (SARIMA) models were used to incorporate the seasonality of the data. A comparison of the results of the obtained models revealed that the SARIMA(1,1,7)(0,0,1)12 model with a fixed coefficient at the seventh lag produces the smallest root mean square error (RMSE), mean absolute error (MAE), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC). The model suggested that for the year 2016, the number of cases would increase from July to September and drop in December. This was then validated using the data collected from January 2016 to December 2016.
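
    In Python, a model of this family can be fitted with statsmodels; the sketch below specifies the SARIMA(1,1,7)(0,0,1)12 orders from the abstract, while the paper's fixing of the seventh-lag coefficient would need additional handling that is not shown:

      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      def fit_sarima(y: pd.Series):
          """y: monthly typhoid incidence (2010-2015), indexed by month."""
          model = SARIMAX(y, order=(1, 1, 7), seasonal_order=(0, 0, 1, 12))
          result = model.fit(disp=False)
          forecast_2016 = result.get_forecast(steps=12).predicted_mean
          return result, forecast_2016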

  12. Refined Diebold-Mariano Test Methods for the Evaluation of Wind Power Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hao Chen

    2014-07-01

    Full Text Available The scientific evaluation methodology for the forecast accuracy of wind power forecasting models is an important issue in the domain of wind power forecasting. However, traditional forecast evaluation criteria, such as Mean Squared Error (MSE) and Mean Absolute Error (MAE), have limitations in application to some degree. In this paper, a modern evaluation criterion, the Diebold-Mariano (DM) test, is introduced. The DM test can discriminate significant differences in forecasting accuracy between different models based on a quantitative analysis scheme. Furthermore, an augmented DM test with a rolling windows approach is proposed to give a stricter forecasting evaluation. By extending the loss function to an asymmetric structure, an asymmetric DM test is proposed. The case study indicates that evaluation criteria based on the DM test can relieve the influence of random sample disturbance. Moreover, the proposed augmented DM test can provide more evidence when the cost of changing models is high, and the proposed asymmetric DM test can incorporate the asymmetric factor and provide a practical evaluation of wind power forecasting models. It is concluded that the two refined DM tests can serve as a reference for the comprehensive evaluation of wind power forecasting models.
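
    A basic form of the DM test compares the loss differential of two competing forecasts against its long-run variance; the sketch below is a textbook-style implementation under a squared- or absolute-error loss, not the refined (augmented or asymmetric) variants proposed in the paper:

      import numpy as np
      from scipy import stats

      def diebold_mariano(e1, e2, h=1, loss=np.square):
          """Basic DM test for equal forecast accuracy given two forecast-error series."""
          d = loss(np.asarray(e1, dtype=float)) - loss(np.asarray(e2, dtype=float))
          n = len(d)
          # Long-run variance of the loss differential (h-1 autocovariance terms).
          gamma = [d.var(ddof=0) if k == 0 else np.cov(d[k:], d[:n - k])[0, 1]
                   for k in range(h)]
          lr_var = (gamma[0] + 2.0 * sum(gamma[1:])) / n
          dm_stat = d.mean() / np.sqrt(lr_var)
          p_value = 2.0 * (1.0 - stats.norm.cdf(abs(dm_stat)))
          return dm_stat, p_value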

  13. Estimation of Surface Air Temperature Over Central and Eastern Eurasia from MODIS Land Surface Temperature

    Science.gov (United States)

    Shen, Suhung; Leptoukh, Gregory G.

    2011-01-01

    Surface air temperature (T(sub a)) is a critical variable in the energy and water cycle of the Earth-atmosphere system and is a key input element for hydrology and land surface models. This is a preliminary study to evaluate estimation of T(sub a) from satellite remotely sensed land surface temperature (T(sub s)) by using MODIS-Terra data over two Eurasia regions: northern China and the former USSR. High correlations are observed in both regions between station-measured T(sub a) and MODIS T(sub s). The relationships between the maximum T(sub a) and daytime T(sub s) depend significantly on land cover types, but the minimum T(sub a) and nighttime T(sub s) have little dependence on the land cover types. The largest difference between maximum T(sub a) and daytime T(sub s) appears over the barren and sparsely vegetated area during the summer time. Using a linear regression method, the daily maximum T(sub a) were estimated from 1 km resolution MODIS T(sub s) under clear-sky conditions with coefficients calculated based on land cover types, while the minimum T(sub a) were estimated without considering land cover types. The uncertainty, mean absolute error (MAE), of the estimated maximum T(sub a) varies from 2.4 C over closed shrublands to 3.2 C over grasslands, and the MAE of the estimated minimum T(sub a) is about 3.0 C.

  14. Predicting CT Image From MRI Data Through Feature Matching With Learned Nonlinear Local Descriptors.

    Science.gov (United States)

    Yang, Wei; Zhong, Liming; Chen, Yang; Lin, Liyan; Lu, Zhentai; Liu, Shupeng; Wu, Yao; Feng, Qianjin; Chen, Wufan

    2018-04-01

    Attenuation correction for positron-emission tomography (PET)/magnetic resonance (MR) hybrid imaging systems and dose planning for MR-based radiation therapy remain challenging due to insufficient high-energy photon attenuation information. We present a novel approach that uses learned nonlinear local descriptors and feature matching to predict pseudo computed tomography (pCT) images from T1-weighted and T2-weighted magnetic resonance imaging (MRI) data. The nonlinear local descriptors are obtained by projecting the linear descriptors into a nonlinear high-dimensional space using an explicit feature map and low-rank approximation with supervised manifold regularization. The nearest neighbors of each local descriptor in the input MR images are searched in a constrained spatial range of the MR images among the training dataset. Then the pCT patches are estimated through k-nearest neighbor regression. The proposed method for pCT prediction is quantitatively analyzed on a dataset consisting of paired brain MRI and CT images from 13 subjects. Our method generates pCT images with a mean absolute error (MAE) of 75.25 ± 18.05 Hounsfield units, a peak signal-to-noise ratio of 30.87 ± 1.15 dB, a relative MAE of 1.56 ± 0.5% in PET attenuation correction, and a dose relative structure volume difference of 0.055 ± 0.107%, as compared with true CT. The experimental results also show that our method outperforms four state-of-the-art methods.
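
    Once the nonlinear local descriptors are available, the patch estimation step reduces to a k-nearest-neighbour regression from descriptor space to CT patch intensities; a hedged sketch of that final step (the descriptor learning and spatially constrained search of the paper are not reproduced):

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      def predict_pct_patches(train_desc, train_ct_patches, test_desc, k=5):
          """k-NN regression from MR-derived descriptors (N, D) to CT patches (N, P)."""
          knn = KNeighborsRegressor(n_neighbors=k, weights="distance")
          knn.fit(train_desc, train_ct_patches)
          return knn.predict(test_desc)   # distance-weighted average of neighbour patches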

  15. DNA methylation-based forensic age prediction using artificial neural networks and next generation sequencing.

    Science.gov (United States)

    Vidaki, Athina; Ballard, David; Aliferi, Anastasia; Miller, Thomas H; Barron, Leon P; Syndercombe Court, Denise

    2017-05-01

    The ability to estimate the age of the donor from recovered biological material at a crime scene can be of substantial value in forensic investigations. Aging can be complex and is associated with various molecular modifications in cells that accumulate over a person's lifetime, including epigenetic patterns. The aim of this study was to use age-specific DNA methylation patterns to generate an accurate model for the prediction of chronological age using data from whole blood. In total, 45 age-associated CpG sites were selected based on their reported age coefficients in a previous extensive study and investigated using publicly available methylation data obtained from 1156 whole blood samples (aged 2-90 years) analysed with Illumina's genome-wide methylation platforms (27K/450K). Applying stepwise regression for variable selection, 23 of these CpG sites were identified that could significantly contribute to age prediction modelling, and multiple regression analysis carried out with these markers provided an accurate prediction of age (R2 = 0.92, mean absolute error (MAE) = 4.6 years). However, applying machine learning, and more specifically a generalised regression neural network model, the age prediction significantly improved (R2 = 0.96) with a MAE = 3.3 years for the training set and 4.4 years for a blind test set of 231 cases. The machine learning approach used 16 CpG sites, located in 16 different genomic regions, with the top 3 predictors of age belonging to the genes NHLRC1, SCGN and CSNK1D. The proposed model was further tested using independent cohorts of 53 monozygotic twins (MAE = 7.1 years) and a cohort of 1011 disease state individuals (MAE = 7.2 years). Furthermore, we highlighted the age markers' potential applicability in samples other than blood by predicting age with similar accuracy in 265 saliva samples (R2 = 0.96) with a MAE = 3.2 years (training set) and 4.0 years (blind test). In an attempt to create a sensitive and accurate age prediction test, a next
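
    The multiple-regression baseline described above amounts to regressing chronological age on the methylation beta values of the selected CpG sites; a minimal sketch using a linear model (the study's generalised regression neural network, which improved on this baseline, is not reproduced here):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_absolute_error

      def fit_age_model(beta_values, ages):
          """beta_values: (samples, n_cpg) methylation betas for the selected CpG sites."""
          model = LinearRegression().fit(beta_values, ages)
          mae = mean_absolute_error(ages, model.predict(beta_values))   # training-set MAE
          return model, mae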

  16. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. A COMPARISON OF LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR AND PARTIAL LEAST SQUARES ANALYSES (Case Study: Microarray Data)

    Directory of Open Access Journals (Sweden)

    KADEK DWI FARMANI

    2012-09-01

    Full Text Available Linear regression analysis is one of the parametric statistical methods that utilize the relationship between two or more quantitative variables. Linear regression rests on several assumptions: the errors are normally distributed, uncorrelated, and have constant (homogeneous) variance. Several conditions can prevent these assumptions from being met, for example correlation between the independent variables (multicollinearity) and constraints on the number of observations relative to the number of independent variables. When the number of samples is smaller than the number of independent variables, the data are called microarray data. The Least Absolute Shrinkage and Selection Operator (LASSO) and Partial Least Squares (PLS) are statistical methods that can be used to handle microarray data, overfitting, and multicollinearity. This study therefore compares the LASSO and PLS methods. It uses data on coronary heart disease and stroke patients, which form a microarray data set and contain multicollinearity. For these data, in which most independent variables are only weakly correlated with one another, the LASSO method produces a better model than PLS, as judged by the RMSEP.
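
    A small sketch of the type of comparison described above, run on synthetic "microarray-like" data (more predictors than samples) rather than the coronary heart disease and stroke dataset used in the study; RMSEP is computed on a held-out set:

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LassoCV
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # p >> n, as in microarray data.
    X, y = make_regression(n_samples=60, n_features=500, n_informative=10,
                           noise=5.0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    lasso = LassoCV(cv=5).fit(X_tr, y_tr)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)

    def rmsep(model):
        # Root mean squared error of prediction on the held-out samples.
        return np.sqrt(mean_squared_error(y_te, np.ravel(model.predict(X_te))))

    print(f"RMSEP (LASSO) = {rmsep(lasso):.2f}")
    print(f"RMSEP (PLS)   = {rmsep(pls):.2f}")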

  18. Absolute pitch among students at the Shanghai Conservatory of Music: a large-scale direct-test study.

    Science.gov (United States)

    Deutsch, Diana; Li, Xiaonuo; Shen, Jing

    2013-11-01

    This paper reports a large-scale direct-test study of absolute pitch (AP) in students at the Shanghai Conservatory of Music. Overall note-naming scores were very high, with high scores correlating positively with early onset of musical training. Students who had begun training at age ≤5 yr scored 83% correct not allowing for semitone errors and 90% correct allowing for semitone errors. Performance levels were higher for white key pitches than for black key pitches. This effect was greater for orchestral performers than for pianists, indicating that it cannot be attributed to early training on the piano. Rather, accuracy in identifying notes of different names (C, C#, D, etc.) correlated with their frequency of occurrence in a large sample of music taken from the Western tonal repertoire. There was also an effect of pitch range, so that performance on tones in the two-octave range beginning on Middle C was higher than on tones in the octave below Middle C. In addition, semitone errors tended to be on the sharp side. The evidence also ran counter to the hypothesis, previously advanced by others, that the note A plays a special role in pitch identification judgments.

  19. NGS Absolute Gravity Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NGS Absolute Gravity data (78 stations) was received in July 1993. Principal gravity parameters include Gravity Value, Uncertainty, and Vertical Gradient. The...

  20. THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX

    International Nuclear Information System (INIS)

    Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward; Szczygieł, Dorota M.; Gould, Andrew; Sneden, Christopher; Dong, Subo

    2013-01-01

    We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V,RRc = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = –1.59. This is to be compared with previous estimates for RRab stars (M_V,RRab = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V,RRc = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, –209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, –42.0, –27.3) km s⁻¹ relative to the Sun, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.

  1. Absolute isotopic abundances of Ti in meteorites

    International Nuclear Information System (INIS)

    Niederer, F.R.; Papanastassiou, D.A.; Wasserburg, G.J.

    1985-01-01

    The absolute isotope abundance of Ti has been determined in Ca-Al-rich inclusions from the Allende and Leoville meteorites and in samples of whole meteorites. The absolute Ti isotope abundances differ by a significant mass dependent isotope fractionation transformation from the previously reported abundances, which were normalized for fractionation using ⁴⁶Ti/⁴⁸Ti. Therefore, the absolute compositions define distinct nucleosynthetic components from those previously identified or reflect the existence of significant mass dependent isotope fractionation in nature. We provide a general formalism for determining the possible isotope compositions of the exotic Ti from the measured composition, for different values of isotope fractionation in nature and for different mixing ratios of the exotic and normal components. The absolute Ti and Ca isotopic compositions still support the correlation of ⁵⁰Ti and ⁴⁸Ca effects in the FUN inclusions and imply contributions from neutron-rich equilibrium or quasi-equilibrium nucleosynthesis. The present identification of endemic effects at ⁴⁶Ti, for the absolute composition, implies a shortfall of an explosive-oxygen component or reflects significant isotope fractionation. Additional nucleosynthetic components are required by ⁴⁷Ti and ⁴⁹Ti effects. Components are also defined in which ⁴⁸Ti is enhanced. Results are given and discussed. (author)

  2. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a

  3. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    Science.gov (United States)

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.

  4. Fault Location Distance Detection on the 500 kV Cilegon Baru - Cibinong Transmission Line Using an Adaptive Neuro-Fuzzy Inference System (ANFIS)

    Directory of Open Access Journals (Sweden)

    Muhamad Otong

    2017-06-01

    Full Text Available Transmission lines require an accurate and fast fault-location method to reduce search time and thereby speed up repairs. Using a combination of the Park transformation and an Adaptive Neuro-Fuzzy Inference System (ANFIS), the distance to a fault location can be detected immediately after the fault occurs by analysing the travelling waves on the transmission line. When a fault occurs, it gives rise to travelling waves in the form of voltages and currents. These voltages and currents are processed with the Park transformation at both ends of the line to obtain the arrival times of the travelling waves; the arrival times differ at the two line ends because of the different distances involved. These time differences are fed into the ANFIS to obtain the distance to the fault location. By comparing the number of membership functions and the choice of inputs, the best ANFIS design was obtained with five membership functions (MF 5) and the time differences ∆tV and ∆tI (V and I) as inputs, giving a Mean Absolute Error (MAE) of 1.33.
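
    The physical relationship that maps an arrival-time difference to a fault distance can be sketched with the classical two-terminal travelling-wave formula; the study replaces this with an ANFIS learned from the Park-transformed signals, and the line length, wave speed and arrival times below are illustrative values only:

    # Two-terminal travelling-wave fault location (classic formula, not the ANFIS itself).
    line_length = 120.0e3        # m, assumed length of the monitored line
    wave_speed = 2.95e8          # m/s, typical propagation speed on an overhead line

    t_end_a = 205.0e-6           # s, wavefront arrival time at terminal A (assumed)
    t_end_b = 201.6e-6           # s, wavefront arrival time at terminal B (assumed)

    # Fault at distance d from A: t_A = d/v, t_B = (L - d)/v  =>  d = (L + v*(t_A - t_B)) / 2
    distance_from_a = (line_length + wave_speed * (t_end_a - t_end_b)) / 2.0
    print(f"Estimated fault distance from terminal A: {distance_from_a / 1e3:.1f} km")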

  5. Absolute earthquake locations using 3-D versus 1-D velocity models below a local seismic network: example from the Pyrenees

    Science.gov (United States)

    Theunissen, T.; Chevrot, S.; Sylvander, M.; Monteiller, V.; Calvet, M.; Villaseñor, A.; Benahmed, S.; Pauchet, H.; Grimaud, F.

    2018-03-01

    Local seismic networks are usually designed so that earthquakes are located inside them (primary azimuthal gap 180° and distance to the first station higher than 15 km). Errors on velocity models and accuracy of absolute earthquake locations are assessed based on a reference data set made of active seismic, quarry blasts and passive temporary experiments. Solutions and uncertainties are estimated using the probabilistic approach of the NonLinLoc (NLLoc) software based on Equal Differential Time. Some updates have been added to NLLoc to better focus on the final solution (outlier exclusion, multiscale grid search, S-phases weighting). Errors in the probabilistic approach are defined to take into account errors on velocity models and on arrival times. The seismicity in the final 3-D catalogue is located with a horizontal uncertainty of about 2.0 ± 1.9 km and a vertical uncertainty of about 3.0 ± 2.0 km.

  6. Investigating Absolute Value: A Real World Application

    Science.gov (United States)

    Kidd, Margaret; Pagni, David

    2009-01-01

    Making connections between various representations is important in mathematics. In this article, the authors discuss the numeric, algebraic, and graphical representations of sums of absolute values of linear functions. The initial explanations are accessible to all students who have experience graphing and who understand that absolute value simply…

  7. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as

  8. A novel setup for the determination of absolute cross sections for low-energy electron induced strand breaks in oligonucleotides - The effect of the radiosensitizer 5-fluorouracil

    International Nuclear Information System (INIS)

    Rackwitz, J.; Rankovic, M.L.; Milosavljevic, A.R.; Bald, I.

    2017-01-01

    Low-energy electrons (LEEs) play an important role in DNA radiation damage. Here we present a method to quantify LEE induced strand breakage in well-defined oligonucleotide single strands in terms of absolute cross sections. An LEE irradiation setup covering electron energies <500 eV is constructed and optimized to irradiate DNA origami triangles carrying well-defined oligonucleotide target strands. Measurements are presented for 10.0 and 5.5 eV for different oligonucleotide targets. The determination of absolute strand break cross sections is performed by atomic force microscopy analysis. An accurate fluence determination ensures small margins of error of the determined absolute single strand break cross sections σ_SSB. In this way, the influence of sequence modification with the radiosensitive 5-fluorouracil (⁵FU) is studied using an absolute and relative data analysis. We demonstrate an increase in the strand break yields of ⁵FU-containing oligonucleotides by a factor of 1.5 to 1.6 compared with non-modified oligonucleotide sequences when irradiated with 10 eV electrons. (authors)

  9. Approach To Absolute Zero

    Indian Academy of Sciences (India)

    more and more difficult to remove heat as one approaches absolute zero. This is the ... A new and active branch of engineering ... This temperature is called the critical temperature, Tc. For sulfur dioxide the critical ... adsorbent charcoal.

  10. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor Series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) individual as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected individual as a case becomes infinitely large while the cost of misclassifying an affected individual as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
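
    A minimal sketch of how asymptotic power follows from a non-centrality parameter for the Pearson chi-square test; the ncp value below is an illustrative placeholder standing in for the paper's misclassification-adjusted derivation:

    from scipy.stats import chi2, ncx2

    alpha = 0.05
    df = 1          # e.g., a 2x2 test of independence
    ncp = 7.85      # assumed non-centrality parameter (in the paper it depends on sample sizes,
                    # genotype frequencies, prevalence and misclassification probabilities)

    critical_value = chi2.ppf(1.0 - alpha, df)
    power = 1.0 - ncx2.cdf(critical_value, df, ncp)
    print(f"Asymptotic power = {power:.3f}")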

  11. Reducing errors benefits the field-based learning of a fundamental movement skill in children.

    Science.gov (United States)

    Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W

    2013-03-01

    Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. ER program reduced errors by incrementally raising the task difficulty, while the ES program had an incremental lowering of task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reduced performance errors in FMS training resulted in greater learning than a program that did not restrict errors. Reduced cognitive processing costs (effective dual-task performance) associated with such approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.

  12. Absolute spectrophotometry of Nova Cygni 1975

    International Nuclear Information System (INIS)

    Kontizas, E.; Kontizas, M.; Smyth, M.J.

    1976-01-01

    Radiometric photoelectric spectrophotometry of Nova Cygni 1975 was carried out on 1975 August 31, September 2, 3. α Lyr was used as reference star and its absolute spectral energy distribution was used to reduce the spectrophotometry of the nova to absolute units. Emission strengths of Hα, Hβ, Hγ (in W cm⁻²) were derived. The Balmer decrement Hα:Hβ:Hγ was compared with theory, and found to deviate less than had been reported for an earlier nova. (author)

  13. The Pragmatics of "Unruly" Dative Absolutes in Early Slavic

    Directory of Open Access Journals (Sweden)

    Daniel E. Collins

    2011-08-01

    Full Text Available This chapter examines some uses of the dative absolute in Old Church Slavonic and in early recensional Slavonic texts that depart from notions of how Indo-European absolute constructions should behave, either because they have subjects coreferential with the (putative) main-clause subjects or because they function as if they were main clauses in their own right. Such "noncanonical" absolutes have generally been written off as mechanistic translations or as mistakes by scribes who did not understand the proper uses of the construction. In reality, the problem is not with literalistic translators or incompetent scribes but with the definition of the construction itself; it is quite possible to redefine the Early Slavic dative absolute in a way that accounts for the supposedly deviant cases. While the absolute is generally dependent semantically on an adjacent unit of discourse, it should not always be regarded as subordinated syntactically. There are good grounds for viewing some absolutes not as dependent clauses but as independent sentences whose collateral character is an issue not of syntax but of the pragmatics of discourse.

  14. Absolutyzm i pluralizm (ABSOLUTISM AND PLURALISM)

    Directory of Open Access Journals (Sweden)

    Renata Ziemińska

    2005-06-01

    Full Text Available Alethic absolutism is the thesis that propositions cannot be more or less true, that they are true or false forever (if true at all), and that their truth is independent of any circumstances of their assertion. In its negative version, which is easier to defend, alethic absolutism claims that the very same proposition cannot be both true and false relative to the circumstances of its assertion. Simple alethic pluralism is the thesis that we have many concepts of truth. It is a very good way to dissolve the controversy between alethic relativism and absolutism. The many philosophical concepts of truth are the best reason for such pluralism. If a concept is the meaning of a name, we have many concepts of truth because the name 'truth' has been understood in many ways. The variety of meanings, however, can be superficial. Underneath it we can find one idea of truth expressed in the correspondence truism or schema (T). The content of the truism is too poor to be the content of any one concept of truth, so it is usually connected with some picture of the world (an ontology), and we have as many concepts of truth as pictures of the world. The author proposes a hierarchical pluralism with a privileged classical (or, in a weak sense, correspondence) concept of truth as an absolute property.

  15. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  16. Introducing the Mean Absolute Deviation "Effect" Size

    Science.gov (United States)

    Gorard, Stephen

    2015-01-01

    This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…

  17. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n - 1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  18. Absolute gravity measurements at three sites characterized by different environmental conditions using two portable ballistic gravimeters

    Science.gov (United States)

    Greco, Filippo; Biolcati, Emanuele; Pistorio, Antonio; D'Agostino, Giancarlo; Germak, Alessandro; Origlia, Claudio; Del Negro, Ciro

    2015-03-01

    The performance of two absolute gravimeters at three different sites in Italy between 2009 and 2011 is presented. The measurements of the gravity acceleration g were performed using the absolute gravimeters Micro-g LaCoste FG5#238 and the INRiM prototype IMGC-02, which represent the state of the art in ballistic gravimeter technology (relative uncertainty of a few parts in 10⁹). For the comparison, the measured g values were reported at the same height by means of the vertical gravity gradient estimated at each site with relative gravimeters. The consistency and reliability of the gravity observations, as well as the performance and efficiency of the instruments, were assessed by measurements made in sites characterized by different logistics and environmental conditions. Furthermore, the various factors affecting the measurements and their uncertainty were thoroughly investigated. The measurements showed good agreement, with the minimum and maximum differences being 4.0 and 8.3 μGal. The normalized errors are very much lower than 1, ranging between 0.06 and 0.45, confirming the compatibility between the results. This excellent agreement can be attributed to several factors, including the good working order of the gravimeters and the correct setup and use of the instruments in different conditions. These results can contribute to the standardization of absolute gravity surveys, largely for applications in geophysics, volcanology and other branches of geosciences, allowing a good trade-off between uncertainty and efficiency of gravity measurements to be achieved.
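
    The normalized errors quoted above follow the usual degree-of-equivalence form used in instrument comparisons: the difference between the two measured g values divided by their combined uncertainty. A short sketch with illustrative numbers only (not the published station values):

    import math

    def normalized_error(g1, u1, g2, u2):
        """|g1 - g2| divided by the combined uncertainty; values below 1 indicate compatibility."""
        return abs(g1 - g2) / math.sqrt(u1 ** 2 + u2 ** 2)

    # Gravity values and uncertainties in microgal (placeholder numbers).
    print(f"En = {normalized_error(980123456.0, 5.0, 980123460.0, 8.0):.2f}")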

  19. Semi-empirical model for retrieval of soil moisture using RISAT-1 C-Band SAR data over a sub-tropical semi-arid area of Rewari district, Haryana (India)

    Science.gov (United States)

    Rawat, Kishan Singh; Sehgal, Vinay Kumar; Pradhan, Sanatan; Ray, Shibendu S.

    2018-03-01

    We have estimated soil moisture (SM) by using the circular horizontal polarization backscattering coefficient (σ°_RH), the difference of the circular vertical and horizontal backscattering coefficients (σ°_RV − σ°_RH) from FRS-1 data of the Radar Imaging Satellite (RISAT-1), and surface roughness in terms of RMS height (RMS_height). We examined the performance of FRS-1 in retrieving SM under wheat crop at tillering stage. Results revealed that it is possible to develop a good semi-empirical model (SEM) to estimate SM of the upper soil layer using RISAT-1 SAR data rather than using an existing empirical model based on only a single parameter, i.e., σ°. Near-surface SM measurements were related to σ°_RH, σ°_RV − σ°_RH derived using the 5.35 GHz (C-band) image of RISAT-1, and RMS_height. The roughness component derived in terms of RMS_height showed a good positive correlation with σ°_RV − σ°_RH (R² = 0.65). By considering all the major influencing factors (σ°_RH, σ°_RV − σ°_RH, and RMS_height), an SEM was developed in which the predicted (volumetric) SM values depend on σ°_RH, σ°_RV − σ°_RH, and RMS_height. This SEM showed an R² of 0.87 and adjusted R² of 0.85, multiple R = 0.94, and a standard error of 0.05 at the 95% confidence level. Validation of the SM derived from the semi-empirical model against observed measurements (SM_Observed) showed root mean square error (RMSE) = 0.06, relative RMSE (R-RMSE) = 0.18, mean absolute error (MAE) = 0.04, normalized RMSE (NRMSE) = 0.17, Nash-Sutcliffe efficiency (NSE) = 0.91 (≈ 1), index of agreement (d) = 1, coefficient of determination (R²) = 0.87, mean bias error (MBE) = 0.04, standard error of estimate (SEE) = 0.10, volume error (VE) = 0.15, variance of the distribution of differences (S_d²) = 0.004. The developed SEM showed better performance in estimating SM than the Topp empirical model, which is based only on σ°. By using the developed SEM, top soil SM can be estimated with low mean absolute
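
    Several of the agreement statistics listed above can be computed directly from paired predicted and observed values; a compact sketch (the soil-moisture arrays are placeholder values, not the study's measurements):

    import numpy as np

    def agreement_stats(obs, pred):
        obs, pred = np.asarray(obs, float), np.asarray(pred, float)
        err = pred - obs
        dev = np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())
        return {
            "RMSE": np.sqrt(np.mean(err ** 2)),
            "MAE": np.mean(np.abs(err)),
            "MBE": np.mean(err),
            "NSE": 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2),  # Nash-Sutcliffe efficiency
            "d": 1.0 - np.sum(err ** 2) / np.sum(dev ** 2),                   # Willmott index of agreement
        }

    observed = [0.21, 0.18, 0.25, 0.30, 0.27]     # volumetric soil moisture, placeholder values
    predicted = [0.20, 0.20, 0.23, 0.31, 0.25]
    print(agreement_stats(observed, predicted))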

  20. Modelling hourly dissolved oxygen concentration (DO) using dynamic evolving neural-fuzzy inference system (DENFIS)-based approach: case study of Klamath River at Miller Island Boat Ramp, OR, USA.

    Science.gov (United States)

    Heddam, Salim

    2014-01-01

    In this study, we present the application of an artificial intelligence (AI) technique called the dynamic evolving neural-fuzzy inference system (DENFIS), based on an evolving clustering method (ECM), for modelling dissolved oxygen concentration in a river. To demonstrate the forecasting capability of DENFIS, one year (1 January 2009 to 30 December 2009) of hourly experimental water quality data collected by the United States Geological Survey (USGS Station No. 420853121505500) at Klamath River at Miller Island Boat Ramp, OR, USA, was used for model development. Two DENFIS-based models are presented and compared: (1) an offline-based system, DENFIS-OF, and (2) an online-based system, DENFIS-ON. The input variables used for the two models are water pH, temperature, specific conductance, and sensor depth. The performance of the models is evaluated using root mean square error (RMSE), mean absolute error (MAE), Willmott index of agreement (d) and correlation coefficient (CC) statistics. The lowest root mean square error and highest correlation coefficient values were obtained with the DENFIS-ON method. The results obtained with the DENFIS models are compared with linear (multiple linear regression, MLR) and nonlinear (multi-layer perceptron neural network, MLPNN) methods. This study demonstrates that the DENFIS-ON model investigated herein outperforms all the other techniques considered for DO modelling.

  1. Modeling of surface dust concentration in snow cover at industrial area using neural networks and kriging

    Science.gov (United States)

    Sergeev, A. P.; Tarasov, D. A.; Buevich, A. G.; Shichkin, A. V.; Tyagunov, A. G.; Medvedev, A. N.

    2017-06-01

    Modeling of spatial distribution of pollutants in the urbanized territories is difficult, especially if there are multiple emission sources. When monitoring such territories, it is often impossible to arrange the necessary detailed sampling. Because of this, the usual methods of analysis and forecasting based on geostatistics are often less effective. Approaches based on artificial neural networks (ANNs) demonstrate the best results under these circumstances. This study compares two models based on ANNs, which are multilayer perceptron (MLP) and generalized regression neural networks (GRNNs) with the base geostatistical method - kriging. Models of the spatial dust distribution in the snow cover around the existing copper quarry and in the area of emissions of a nickel factory were created. To assess the effectiveness of the models three indices were used: the mean absolute error (MAE), the root-mean-square error (RMSE), and the relative root-mean-square error (RRMSE). Taking into account all indices the model of GRNN proved to be the most accurate which included coordinates of the sampling points and the distance to the likely emission source as input parameters for the modeling. Maps of spatial dust distribution in the snow cover were created in the study area. It has been shown that the models based on ANNs were more accurate than the kriging, particularly in the context of a limited data set.
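
    A generalized regression neural network is, in essence, Nadaraya-Watson kernel regression with a Gaussian kernel over the training points, which is why it copes well with sparse spatial sampling. A minimal sketch (the coordinates, distances to the assumed emission source and dust loads are placeholder values, not the survey data):

    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=1.0):
        """GRNN / Nadaraya-Watson prediction: kernel-weighted average of the training targets."""
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ y_train) / w.sum(axis=1)

    # Inputs: x, y coordinates and distance to the likely emission source (placeholders).
    X_train = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 4.2], [2.0, 1.5, 3.1], [3.0, 2.0, 2.0]])
    y_train = np.array([120.0, 150.0, 210.0, 340.0])   # dust load in snow, assumed units
    X_query = np.array([[1.5, 1.0, 3.6]])

    print(grnn_predict(X_train, y_train, X_query, sigma=1.2))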

  2. Response surface and neural network based predictive models of cutting temperature in hard turning

    Directory of Open Access Journals (Sweden)

    Mozammel Mia

    2016-11-01

    Full Text Available The present study aimed to develop predictive models of the average tool-workpiece interface temperature in hard turning of AISI 1060 steels with a coated carbide insert. The Response Surface Methodology (RSM) and Artificial Neural Network (ANN) were employed to predict the temperature with respect to cutting speed, feed rate and material hardness. The number and orientation of the experimental trials, conducted in both dry and high pressure coolant (HPC) environments, were planned using a full factorial design. The temperature was measured by using the tool-work thermocouple. In the RSM model, two quadratic equations of temperature were derived from the experimental data. The analysis of variance (ANOVA) and mean absolute percentage error (MAPE) were used to verify the adequacy of the models. In the ANN model, 80% of the data were used for training and 20% for testing. As with the RSM, an error analysis was also conducted. The accuracy of the RSM and ANN model was found to be ⩾99%. The ANN models exhibit an error of ∼5% MAE for testing data. The regression coefficient was found to be greater than 99.9% for both dry and HPC. Both these models are acceptable, although the ANN model demonstrated a higher accuracy. These models, if employed, are expected to provide a better control of cutting temperature in turning of hardened steel.
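
    A minimal sketch of fitting a quadratic response-surface model for interface temperature from cutting speed, feed rate and hardness, scored with the mean absolute percentage error (MAPE); the data file and column names are assumptions, not the study's measurements:

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    df = pd.read_csv("turning_trials.csv")                        # assumed file name
    X = df[["cutting_speed", "feed_rate", "hardness"]].values     # assumed column names
    y = df["interface_temperature"].values

    # Quadratic response surface: linear, interaction and squared terms.
    rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
    rsm.fit(X, y)

    pred = rsm.predict(X)
    mape = 100.0 * np.mean(np.abs((y - pred) / y))
    print(f"MAPE = {mape:.2f}%")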

  3. THE ABSOLUTE MAGNITUDE OF RRc VARIABLES FROM STATISTICAL PARALLAX

    Energy Technology Data Exchange (ETDEWEB)

    Kollmeier, Juna A.; Burns, Christopher R.; Thompson, Ian B.; Preston, George W.; Crane, Jeffrey D.; Madore, Barry F.; Morrell, Nidia; Prieto, José L.; Shectman, Stephen; Simon, Joshua D.; Villanueva, Edward [Observatories of the Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); Szczygieł, Dorota M.; Gould, Andrew [Department of Astronomy, The Ohio State University, 4051 McPherson Laboratory, Columbus, OH 43210 (United States); Sneden, Christopher [Department of Astronomy, University of Texas at Austin, TX 78712 (United States); Dong, Subo [Institute for Advanced Study, 500 Einstein Drive, Princeton, NJ 08540 (United States)

    2013-09-20

    We present the first definitive measurement of the absolute magnitude of RR Lyrae c-type variable stars (RRc) determined purely from statistical parallax. We use a sample of 242 RRc variables selected from the All Sky Automated Survey for which high-quality light curves, photometry, and proper motions are available. We obtain high-resolution echelle spectra for these objects to determine radial velocities and abundances as part of the Carnegie RR Lyrae Survey. We find that M_V,RRc = 0.59 ± 0.10 at a mean metallicity of [Fe/H] = –1.59. This is to be compared with previous estimates for RRab stars (M_V,RRab = 0.76 ± 0.12) and the only direct measurement of an RRc absolute magnitude (RZ Cephei, M_V,RRc = 0.27 ± 0.17). We find the bulk velocity of the halo relative to the Sun to be (W_π, W_θ, W_z) = (12.0, –209.9, 3.0) km s⁻¹ in the radial, rotational, and vertical directions, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (150.4, 106.1, 96.0) km s⁻¹. For the disk, we find (W_π, W_θ, W_z) = (13.0, –42.0, –27.3) km s⁻¹ relative to the Sun, with dispersions (σ_Wπ, σ_Wθ, σ_Wz) = (67.7, 59.2, 54.9) km s⁻¹. Finally, as a byproduct of our statistical framework, we are able to demonstrate that UCAC2 proper-motion errors are significantly overestimated, as verified by UCAC4.

  4. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Adaptive neuro-fuzzy inference system for temperature and humidity profile retrieval from microwave radiometer observations

    Science.gov (United States)

    Ramesh, K.; Kesarkar, A. P.; Bhate, J.; Venkat Ratnam, M.; Jayaraman, A.

    2015-01-01

    The retrieval of accurate profiles of temperature and water vapour is important for the study of atmospheric convection. Recent developments in computational techniques motivated us to use adaptive techniques in the retrieval algorithms. In this work, we have used an adaptive neuro-fuzzy inference system (ANFIS) to retrieve profiles of temperature and humidity up to 10 km over the tropical station Gadanki (13.5° N, 79.2° E), India. ANFIS is trained using temperature and humidity observations from a co-located Meisei GPS radiosonde (henceforth referred to as radiosonde) and microwave brightness temperatures observed by a Radiometrics multichannel microwave radiometer MP3000 (MWR). ANFIS is trained by considering these observations during rainy and non-rainy days (ANFIS(RD + NRD)) and during non-rainy days only (ANFIS(NRD)). The comparison of ANFIS(RD + NRD) and ANFIS(NRD) profiles with independent radiosonde observations and profiles retrieved using multivariate linear regression (MVLR: RD + NRD and NRD) and artificial neural network (ANN) indicated that the errors in the ANFIS(RD + NRD) retrievals are smaller than those of the other retrieval methods. The Pearson product-moment correlation coefficient (r) between retrieved and observed profiles is more than 92% for temperature profiles for all techniques and more than 99% for the ANFIS(RD + NRD) technique. This new technique is therefore relatively better for the retrieval of temperature profiles. The comparison of bias, mean absolute error (MAE), RMSE and symmetric mean absolute percentage error (SMAPE) of retrieved temperature and relative humidity (RH) profiles using ANN and ANFIS also indicated that profiles retrieved using ANFIS(RD + NRD) are significantly better compared to the ANN technique. The analysis of profiles concludes that retrieved profiles using ANFIS techniques have improved the temperature retrievals substantially; however, the retrieval of RH by all techniques considered in this paper (ANN, MVLR and

  6. Absolute calibration in vivo measurement systems

    International Nuclear Information System (INIS)

    Kruchten, D.A.; Hickman, D.P.

    1991-02-01

    Lawrence Livermore National Laboratory (LLNL) is currently investigating a new method for obtaining absolute calibration factors for radiation measurement systems used to measure internally deposited radionuclides in vivo. Absolute calibration of in vivo measurement systems will eliminate the need to generate a series of human surrogate structures (i.e., phantoms) for calibrating in vivo measurement systems. The absolute calibration of in vivo measurement systems utilizes magnetic resonance imaging (MRI) to define physiological structure, size, and composition. The MRI image provides a digitized representation of the physiological structure, which allows for any mathematical distribution of radionuclides within the body. Using Monte Carlo transport codes, the emission spectrum from the body is predicted. The in vivo measurement equipment is calibrated using the Monte Carlo code and adjusting for the intrinsic properties of the detection system. The calibration factors are verified using measurements of existing phantoms and previously obtained measurements of human volunteers. 8 refs

  7. Incorrect Weighting of Absolute Performance in Self-Assessment

    Science.gov (United States)

    Jeffrey, Scott A.; Cozzarin, Brian

    Students spend much of their life in an attempt to assess their aptitude for numerous tasks. For example, they expend a great deal of effort to determine their academic standing given a distribution of grades. This research finds that students use their absolute performance, or percentage correct as a yardstick for their self-assessment, even when relative standing is much more informative. An experiment shows that this reliance on absolute performance for self-evaluation causes a misallocation of time and financial resources. Reasons for this inappropriate responsiveness to absolute performance are explored.

  8. Novel approach for streamflow forecasting using a hybrid ANFIS-FFA model

    Science.gov (United States)

    Yaseen, Zaher Mundher; Ebtehaj, Isa; Bonakdari, Hossein; Deo, Ravinesh C.; Danandeh Mehr, Ali; Mohtar, Wan Hanna Melini Wan; Diop, Lamine; El-shafie, Ahmed; Singh, Vijay P.

    2017-11-01

    The present study proposes a new hybrid evolutionary Adaptive Neuro-Fuzzy Inference Systems (ANFIS) approach for monthly streamflow forecasting. The proposed method is a novel combination of the ANFIS model with the firefly algorithm as an optimizer tool to construct a hybrid ANFIS-FFA model. The results of the ANFIS-FFA model are compared with the classical ANFIS model, which utilizes the fuzzy c-means (FCM) clustering method in the Fuzzy Inference Systems (FIS) generation. The historical monthly streamflow data for Pahang River, a major river system in Malaysia characterized by highly stochastic hydrological patterns, are used in the study. Sixteen different input combinations with one to five time-lagged input variables are incorporated into the ANFIS-FFA and ANFIS models to consider the antecedent seasonal variations in historical streamflow data; a sketch of this lagged-input construction is given after this abstract. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) are used to evaluate the forecasting performance of the ANFIS-FFA model. In conjunction with these metrics, the refined Willmott's Index (Drefined), Nash-Sutcliffe coefficient (ENS) and Legates and McCabe's Index (ELM) are also utilized as the normalized goodness-of-fit metrics. Comparison of the results reveals that the FFA is able to improve the forecasting accuracy of the hybrid ANFIS-FFA model (r = 1; RMSE = 0.984; MAE = 0.364; ENS = 1; ELM = 0.988; Drefined = 0.994) applied to monthly streamflow forecasting, in comparison with the traditional ANFIS model (r = 0.998; RMSE = 3.276; MAE = 1.553; ENS = 0.995; ELM = 0.950; Drefined = 0.975). The results also show that the ANFIS-FFA is not only superior to the ANFIS model but also provides a parsimonious modelling framework for streamflow forecasting, requiring fewer input variables to yield comparatively better performance. It is construed that the FFA optimizer can thus surpass the accuracy of the traditional ANFIS model in general
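
    The time-lagged input combinations with one to five antecedent months can be built from the raw streamflow series as simple lag features; a sketch using pandas (the series below is a placeholder, not the Pahang River record):

    import pandas as pd

    # Monthly streamflow, placeholder values only.
    flow = pd.Series([112.0, 98.0, 143.0, 201.0, 187.0, 160.0, 129.0, 118.0, 175.0, 210.0],
                     name="Q")

    # Lag features Q(t-1) ... Q(t-5) as candidate inputs, Q(t) as the forecast target.
    lagged = pd.concat({f"Q_t-{k}": flow.shift(k) for k in range(1, 6)}, axis=1)
    lagged["Q_t"] = flow
    lagged = lagged.dropna()     # drop rows that do not have a full set of lags

    print(lagged.head())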

  9. Local electronic structure at organic–metal interface studied by UPS, MAES, and first-principles calculation

    Energy Technology Data Exchange (ETDEWEB)

    Aoki, M., E-mail: cmaoki@mail.ecc.u-tokyo.ac.jp; Masuda, S.

    2015-10-01

    Understanding and controlling local electronic structures at organic–metal interfaces are crucial for fabricating novel organic-based electronics, as in the case of heterojunctions in semiconductor devices. Here, we report recent studies of valence electronic states at organic–metal interfaces (especially those near the Fermi level of a metal substrate) by the combined analysis of ultraviolet photoemission spectroscopy (UPS), metastable atom electron spectroscopy (MAES), and first-principles calculations. New electronic states in the HOMO (highest occupied molecular orbital)–LUMO (lowest unoccupied molecular orbital) gap formed at an organic–metal interface are classified as a chemisorption-induced gap state (CIGS) and a complex-based gap state (CBGS). The CIGS is further characterized by an asymptotic feature of the metal wave function in the chemisorbed species. The CIGSs in alkanethiolates on Pt(1 1 1) and C₆₀ on Pt(1 1 1) can be regarded as damping and propagating types, respectively. The CBGSs in K-doped dibenzopentacene (DBP) are composed of DBP-derived MOs and K sp states and distributed over the complex film. No metallic structures were found in the K₁DBP and K₃DBP phases, suggesting that they are Mott–Hubbard insulators due to strong electron correlation. The local electronic structures of a pentacene film bridged by Au electrodes under bias voltages were examined by an FET-like specimen. The pentacene-derived bands were steeply shifted at the positively biased electrode, reflecting the p-type character of the film.

  10. Modeling the Biomechanics of Swine Mastication – An Inverse Dynamics Approach

    Science.gov (United States)

    Basafa, Ehsan; Murphy, Ryan J.; Gordon, Chad R.; Armand, Mehran

    2014-01-01

    A novel reconstructive alternative for patients with severe facial structural deformity is Le Fort-based, face-jaw-teeth transplantation (FJTT). To date, however, only ten surgeries have included underlying skeletal and jaw-teeth components, all yielding sub-optimal results and a need for a subsequent revision surgery, due to size mismatch and lack of precise planning. Numerous studies have proven swine to be appropriate candidates for translational studies including pre-operative planning of transplantation. An important aspect of planning FJTT is determining the optimal muscle attachment sites on the recipient’s jaw, which requires a clear understanding of mastication and bite mechanics in relation to the new donated upper and/or lower jaw. A segmented CT scan coupled with data taken from literature defined a biomechanical model of mandible and jaw muscles of a swine. The model was driven using tracked motion and external force data of one cycle of chewing published earlier, and predicted the muscle activation patterns as well as temporomandibular joint (TMJ) reaction forces and condylar motions. Two methods, polynomial and min/max optimization, were used for solving the muscle recruitment problem. Similar performances were observed between the two methods. On average, there was a mean absolute error (MAE) of <0.08 between the predicted and measured activation levels of all muscles, and an MAE of <7N for TMJ reaction forces. Simulated activations qualitatively followed the same patterns as the reference data and there was very good agreement for simulated TMJ forces. The polynomial optimization produced a smoother output, suggesting that it is more suitable for studying such motions. Average MAE for condylar motion was 1.2mm, which reduced to 0.37mm when the input incisor motion was scaled to reflect the possible size mismatch between the current and original swine models. Results support the hypothesis that the model can be used for planning of facial

  11. Modeling the biomechanics of swine mastication--an inverse dynamics approach.

    Science.gov (United States)

    Basafa, Ehsan; Murphy, Ryan J; Gordon, Chad R; Armand, Mehran

    2014-08-22

    A novel reconstructive alternative for patients with severe facial structural deformity is Le Fort-based, face-jaw-teeth transplantation (FJTT). To date, however, only ten surgeries have included underlying skeletal and jaw-teeth components, all yielding sub-optimal results and a need for a subsequent revision surgery, due to size mismatch and lack of precise planning. Numerous studies have proven swine to be appropriate candidates for translational studies including pre-operative planning of transplantation. An important aspect of planning FJTT is determining the optimal muscle attachment sites on the recipient's jaw, which requires a clear understanding of mastication and bite mechanics in relation to the new donated upper and/or lower jaw. A segmented CT scan coupled with data taken from literature defined a biomechanical model of mandible and jaw muscles of a swine. The model was driven using tracked motion and external force data of one cycle of chewing published earlier, and predicted the muscle activation patterns as well as temporomandibular joint (TMJ) reaction forces and condylar motions. Two methods, polynomial and min/max optimization, were used for solving the muscle recruitment problem. Similar performances were observed between the two methods. On average, there was a mean absolute error (MAE) of <0.08 between the predicted and measured activation levels of all muscles, and an MAE of <7 N for TMJ reaction forces. Simulated activations qualitatively followed the same patterns as the reference data and there was very good agreement for simulated TMJ forces. The polynomial optimization produced a smoother output, suggesting that it is more suitable for studying such motions. Average MAE for condylar motion was 1.2mm, which reduced to 0.37 mm when the input incisor motion was scaled to reflect the possible size mismatch between the current and original swine models. Results support the hypothesis that the model can be used for planning of facial
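
    The muscle-recruitment problem at a single instant can be posed with the polynomial criterion mentioned above: minimize the sum of squared activations subject to the muscles reproducing the required joint moment. A sketch with scipy for a generic static case (moment arms, muscle strengths and the external chewing moment are illustrative values, not the swine model's parameters):

    import numpy as np
    from scipy.optimize import minimize

    moment_arms = np.array([0.020, 0.035, 0.015])   # m, assumed moment arms of three muscles
    f_max = np.array([300.0, 450.0, 250.0])         # N, assumed maximum muscle forces
    required_moment = 12.0                          # N*m, assumed external chewing moment

    def cost(a):
        # Polynomial (sum-of-squares) recruitment criterion.
        return np.sum(a ** 2)

    constraints = {"type": "eq",
                   "fun": lambda a: moment_arms @ (a * f_max) - required_moment}
    bounds = [(0.0, 1.0)] * 3    # activations bounded between 0 and 1

    res = minimize(cost, x0=np.full(3, 0.5), bounds=bounds, constraints=constraints)
    print("activations:", np.round(res.x, 3))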

  12. Critical Test of Some Computational Chemistry Methods for Prediction of Gas-Phase Acidities and Basicities.

    Science.gov (United States)

    Toomsalu, Eve; Koppel, Ilmar A; Burk, Peeter

    2013-09-10

    Gas-phase acidities and basicities were calculated for 64 neutral bases (covering the scale from 139.9 kcal/mol to 251.9 kcal/mol) and 53 neutral acids (covering the scale from 299.5 kcal/mol to 411.7 kcal/mol). The following methods were used: AM1, PM3, PM6, PDDG, G2, G2MP2, G3, G3MP2, G4, G4MP2, CBS-QB3, B1B95, B2PLYP, B2PLYPD, B3LYP, B3PW91, B97D, B98, BLYP, BMK, BP86, CAM-B3LYP, HSEh1PBE, M06, M062X, M06HF, M06L, mPW2PLYP, mPW2PLYPD, O3LYP, OLYP, PBE1PBE, PBEPBE, tHCTHhyb, TPSSh, VSXC, X3LYP. The addition of Grimme's empirical dispersion correction (D) to B2PLYP and mPW2PLYP was evaluated, and it was found that adding this correction gave more accurate results when considering acidities. Calculations with B3LYP, B97D, BLYP, B2PLYPD, and PBE1PBE methods were carried out with five basis sets (6-311G**, 6-311+G**, TZVP, cc-pVTZ, and aug-cc-pVTZ) to evaluate the effect of basis sets on the accuracy of calculations. It was found that the best basis sets when considering accuracy of results and computational time were 6-311+G** and TZVP. Among semiempirical methods, AM1 had the best ability to reproduce experimental acidities and basicities (the mean absolute error (mae) was 7.3 kcal/mol). Among DFT methods, the best method considering accuracy, robustness, and computation time was PBE1PBE/6-311+G** (mae = 2.7 kcal/mol). Four Gaussian-type methods (G2, G2MP2, G4, and G4MP2) gave similar results to each other (mae = 2.3 kcal/mol). Gaussian-type methods are quite accurate, but their downside is the relatively long computational time.

  13. Absolute instrumental neutron activation analysis at Lawrence Livermore Laboratory

    International Nuclear Information System (INIS)

    Heft, R.E.

    1977-01-01

    The Environmental Science Division at Lawrence Livermore Laboratory has in use a system of absolute Instrumental Neutron Activation Analysis (INAA). Basically, absolute INAA is dependent upon the absolute measurement of the disintegration rates of the nuclides produced by neutron capture. From such disintegration rate data, the amount of the target element present in the irradiated sample is calculated by dividing the observed disintegration rate for each nuclide by the expected value for the disintegration rate per microgram of the target element that produced the nuclide. In absolute INAA, the expected value for disintegration rate per microgram is calculated from nuclear parameters and from measured values of both thermal and epithermal neutron fluxes which were present during irradiation. Absolute INAA does not depend on the concurrent irradiation of elemental standards but does depend on the values for thermal and epithermal neutron capture cross-sections for the target nuclides. A description of the analytical method is presented
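
    The "expected disintegration rate per microgram" used in absolute INAA can be sketched from the basic activation equation, adding the thermal and epithermal contributions; the cross sections, fluxes and times below are placeholders, and effects such as neutron self-shielding, decay during counting and detector efficiency are deliberately ignored in this simplified illustration:

    import math

    N_A = 6.02214076e23

    def expected_dps_per_microgram(atomic_mass, abundance, sigma_th_barn, i0_barn,
                                   phi_th, phi_epi, half_life_s, t_irr_s, t_decay_s):
        """Disintegrations per second per microgram of target element after irradiation (simplified)."""
        n_atoms = (1.0e-6 / atomic_mass) * N_A * abundance                  # target atoms in 1 ug
        rate = n_atoms * 1.0e-24 * (sigma_th_barn * phi_th + i0_barn * phi_epi)   # captures per second
        lam = math.log(2.0) / half_life_s
        saturation = 1.0 - math.exp(-lam * t_irr_s)                         # build-up during irradiation
        decay = math.exp(-lam * t_decay_s)                                  # decay before counting
        return rate * saturation * decay

    # Placeholder numbers, loosely in the range of a Na-23(n,gamma)Na-24 irradiation.
    print(expected_dps_per_microgram(atomic_mass=22.99, abundance=1.0, sigma_th_barn=0.53,
                                     i0_barn=0.31, phi_th=1.0e13, phi_epi=5.0e11,
                                     half_life_s=14.96 * 3600, t_irr_s=3600, t_decay_s=7200))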

  14. Absolute Navigation Information Estimation for Micro Planetary Rovers

    Directory of Open Access Journals (Sweden)

    Muhammad Ilyas

    2016-03-01

    Full Text Available This paper provides algorithms to estimate absolute navigation information, e.g., absolute attitude and position, by using low power, weight and volume Microelectromechanical Systems-type (MEMS) sensors that are suitable for micro planetary rovers. Planetary rovers appear to be easily navigable robots due to their extremely slow speed and rotation but, unfortunately, the sensor suites available for terrestrial robots are not always available for planetary rover navigation. This makes them difficult to navigate in a completely unexplored, harsh and complex environment. Whereas the relative attitude and position can be tracked in a similar way as for ground robots, absolute navigation information, unlike in terrestrial applications, is difficult to obtain for a remote celestial body, such as Mars or the Moon. In this paper, an algorithm called the EASI algorithm (Estimation of Attitude using Sun sensor and Inclinometer) is presented to estimate the absolute attitude using only a MEMS-type sun sensor and an inclinometer. Moreover, the output of the EASI algorithm is fused with MEMS gyros to produce more accurate and reliable attitude estimates. An absolute position estimation algorithm has also been presented based on these on-board sensors. Experimental results demonstrate the viability of the proposed algorithms and the sensor suite for low-cost and low-weight micro planetary rovers.

  15. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social

  16. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    Directory of Open Access Journals (Sweden)

    Kevin Bradley Clark

    2013-08-01

    Full Text Available Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in

  17. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Mark [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Tuen Mun Hospital, Hong Kong (China); Grehn, Melanie [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Cremers, Florian [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Siebert, Frank-Andre [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Wurster, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Department for Radiation Oncology, University Medicine Greifswald, Greifswald (Germany); Huttenlocher, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Dunst, Jürgen [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Department for Radiation Oncology, University Clinic Copenhagen, Copenhagen (Denmark); Hildebrandt, Guido [Department for Radiation Oncology, University Medicine Rostock, Rostock (Germany); Schweikard, Achim [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Rades, Dirk [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Ernst, Floris [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); and others

    2017-03-15

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  18. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    International Nuclear Information System (INIS)

    Chan, Mark; Grehn, Melanie; Cremers, Florian; Siebert, Frank-Andre; Wurster, Stefan; Huttenlocher, Stefan; Dunst, Jürgen; Hildebrandt, Guido; Schweikard, Achim; Rades, Dirk; Ernst, Floris

    2017-01-01

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  19. Oligomeric models for estimation of polydimethylsiloxane-water partition ratios with COSMO-RS theory: impact of the combinatorial term on absolute error.

    Science.gov (United States)

    Parnis, J Mark; Mackay, Donald

    2017-03-22

    A series of 12 oligomeric models for polydimethylsiloxane (PDMS) were evaluated for their effectiveness in estimating the PDMS-water partition ratio, K_PDMS-w. Models ranging in size and complexity from the -Si(CH3)2-O- model previously published by Goss in 2011 to octadeca-methyloctasiloxane (CH3-(Si(CH3)2-O-)8CH3) were assessed based on their RMS error with 253 experimental measurements of log K_PDMS-w from six published works. The lowest RMS error for log K_PDMS-w (0.40 in log K) was obtained with the cyclic oligomer, decamethyl-cyclo-penta-siloxane (D5), (-Si(CH3)2-O-)5, with the mixing-entropy associated combinatorial term included in the chemical potential calculation. The presence or absence of terminal methyl groups on linear oligomer models is shown to have significant impact only for oligomers containing 1 or 2 -Si(CH3)2-O- units. Removal of the combinatorial term resulted in a significant increase in the RMS error for most models, with the smallest increase associated with the largest oligomer studied. The importance of inclusion of the combinatorial term in the chemical potential for liquid oligomer models is discussed.

  20. Absolute-magnitude distributions of supernovae

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, Dean; Wright, John [Department of Physics, Xavier University of Louisiana, New Orleans, LA 70125 (United States); Jenkins III, Robert L. [Applied Physics Department, Richard Stockton College, Galloway, NJ 08205 (United States); Maddox, Larry, E-mail: drichar7@xula.edu [Department of Chemistry and Physics, Southeastern Louisiana University, Hammond, LA 70402 (United States)

    2014-05-01

    The absolute-magnitude distributions of seven supernova (SN) types are presented. The data used here were primarily taken from the Asiago Supernova Catalogue, but were supplemented with additional data. We accounted for both foreground and host-galaxy extinction. A bootstrap method is used to correct the samples for Malmquist bias. Separately, we generate volume-limited samples, restricted to events within 100 Mpc. We find that the superluminous events (M_B < –21) make up only about 0.1% of all SNe in the bias-corrected sample. The subluminous events (M_B > –15) make up about 3%. The normal Ia distribution was the brightest with a mean absolute blue magnitude of –19.25. The IIP distribution was the dimmest at –16.75.

  1. Calibration with Absolute Shrinkage

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul

    2001-01-01

    In this paper, penalized regression using the L-1 norm on the estimated parameters is proposed for chemometric calibration. The algorithm is of the lasso type, introduced by Tibshirani in 1996 as a linear regression method with a bound on the absolute length of the parameters, but a modification...

  2. ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS

    Energy Technology Data Exchange (ETDEWEB)

    ANDREWS, J.W.

    1998-12-31

    One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are absolutely required, so that the result is overconstrained. This report develops such a method for combining data from two different tests for air leakage in residential duct systems. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
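
    The report's own algorithm is not reproduced in this record; the following is a minimal sketch of one standard way to exploit overconstrained measurements, assuming the redundant leakage estimates come with independent uncertainty estimates and are combined by inverse-variance weighting. The numbers and the CFM unit are illustrative, not values from the report.

```python
import numpy as np

def combine_estimates(values, sigmas):
    """Inverse-variance weighted combination of redundant measurements.

    values, sigmas: estimates of the same quantity (e.g. supply leakage)
    from different test protocols, with their 1-sigma uncertainties.
    Returns the combined estimate and its (smaller) uncertainty.
    """
    values = np.asarray(values, dtype=float)
    weights = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    combined = np.sum(weights * values) / np.sum(weights)
    combined_sigma = np.sqrt(1.0 / np.sum(weights))
    return combined, combined_sigma

# Illustrative numbers only: supply leakage measured by two different tests.
est, err = combine_estimates([120.0, 135.0], [15.0, 25.0])
print(f"combined supply leakage: {est:.1f} +/- {err:.1f} CFM")
```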

  3. CMIP5 downscaling and its uncertainty in China

    Science.gov (United States)

    Yue, TianXiang; Zhao, Na; Fan, ZeMeng; Li, Jing; Chen, ChuanFa; Lu, YiMin; Wang, ChenLiang; Xu, Bing; Wilson, John

    2016-11-01

    A comparison between the Coupled Model Intercomparison Project Phase 5 (CMIP5) data and observations at 735 meteorological stations indicated that mean annual temperature (MAT) was underestimated about 1.8 °C while mean annual precipitation (MAP) was overestimated about 263 mm in general across the whole of China. A statistical analysis of China-CMIP5 data demonstrated that MAT exhibits spatial stationarity, while MAP exhibits spatial non-stationarity. MAT and MAP data from the China-CMIP5 dataset were downscaled by combining statistical approaches with a method for high accuracy surface modeling (HASM). A statistical transfer function (STF) of MAT was formulated using minimized residuals output by HASM with an ordinary least squares (OLS) linear equation that used latitude and elevation as independent variables, abbreviated as HASM-OLS. The STF of MAP under a BOX-COX transformation was derived as a combination of minimized residuals output by HASM with a geographically weight regression (GWR) using latitude, longitude, elevation and impact coefficient of aspect as independent variables, abbreviated as HASM-GB. Cross validation, using observational data from the 735 meteorological stations across China for the period 1976 to 2005, indicates that the largest uncertainty occurred on the Tibet plateau with mean absolute errors (MAEs) of MAT and MAP as high as 4.64 °C and 770.51 mm, respectively. The downscaling processes of HASM-OLS and HASM-GB generated MAEs of MAT and MAP that were 67.16% and 77.43% lower, respectively across the whole of China on average, and 88.48% and 97.09% lower for the Tibet plateau.

  4. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and thus ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  5. Efficacy of intrahepatic absolute alcohol in unresectable hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Farooqi, J.I.; Hameed, K.; Khan, I.U.; Shah, S.

    2001-01-01

    To determine the efficacy of intrahepatic absolute alcohol injection in unresectable hepatocellular carcinoma. A randomized, controlled, experimental and interventional clinical trial. Gastroenterology Department, PGMI, Hayatabad Medical Complex, Peshawar, during the period from June, 1998 to June, 2000. Thirty patients were treated by percutaneous, intrahepatic absolute alcohol injections in repeated sessions; 33 patients did not receive alcohol and served as controls. Both the groups were comparable for age, sex and other baseline characteristics. Absolute alcohol therapy significantly improved quality of life of patients, reduced the tumor size and mortality as well as showed significantly better results regarding survival (P < 0.05) than the control group. We conclude that absolute alcohol is a beneficial and safe palliative treatment measure in advanced hepatocellular carcinoma (HCC). (author)

  6. Absolute and Relative Reliability of the Timed 'Up & Go' Test and '30second Chair-Stand' Test in Hospitalised Patients with Stroke

    DEFF Research Database (Denmark)

    Lyders Johansen, Katrine; Derby Stistrup, Rikke; Skibdal Schjøtt, Camilla

    2016-01-01

    OBJECTIVE: The timed 'Up & Go' test and '30second Chair-Stand' test are simple clinical outcome measures widely used to assess functional performance. The reliability of both tests in hospitalised stroke patients is unknown. The purpose was to investigate the relative and absolute reliability...... of both tests in patients admitted to an acute stroke unit. METHODS: Sixty-two patients (men, n = 41) attended two test sessions separated by one hour of rest. Intraclass correlation coefficients (ICC2,1) were calculated to assess relative reliability. Absolute reliability was expressed as Standard Error...... of Measurement (with 95% certainty-SEM95) and Smallest Real Difference (SRD) and as percentage of their respective means if heteroscedasticity was observed in Bland Altman plots (SEM95% and SRD%). RESULTS: ICC values for interrater reliability were 0.97 and 0.99 for the timed 'Up & Go' test and 0.88 and 0...

  7. Planck absolute entropy of a rotating BTZ black hole

    Science.gov (United States)

    Riaz, S. M. Jawwad

    2018-04-01

    In this paper, the Planck absolute entropy and the Bekenstein-Smarr formula of the rotating Banados-Teitelboim-Zanelli (BTZ) black hole are presented via a complex thermodynamical system contributed by its inner and outer horizons. The redefined entropy approaches zero as the temperature of the rotating BTZ black hole tends to absolute zero, satisfying the Nernst formulation of a black hole. Hence, it can be regarded as the Planck absolute entropy of the rotating BTZ black hole.

  8. Absolute nuclear material assay using count distribution (LAMBDA) space

    Science.gov (United States)

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  9. THE DISKMASS SURVEY. II. ERROR BUDGET

    International Nuclear Information System (INIS)

    Bershady, Matthew A.; Westfall, Kyle B.; Verheijen, Marc A. W.; Martinsson, Thomas; Andersen, David R.; Swaters, Rob A.

    2010-01-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*_disk), and disk maximality (F_*,max^disk ≡ V_*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ∼25%, while survey precision for sample quartiles are reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  10. How many boxes does it take to make a storage room? An experience report on the management of archaeological collections at MAE/UFBA

    Directory of Open Access Journals (Sweden)

    Mara Lúcia Carrett de Vasconcelos

    2017-11-01

    Full Text Available Collection management is fundamental if institutions charged with safeguarding archaeological heritage are to fulfil their functions of research, conservation and outreach. In archaeology museums, storage rooms are perhaps the spaces that have suffered most from the continuous incorporation of large quantities of objects, driven mainly by the increase in the number of preventive archaeology projects and by the institutions' lack of resources. At the Museum of Archaeology and Ethnology of the Federal University of Bahia (MAE/UFBA), the storage room housing the archaeological collection has undergone successive relocations over the last decade and is currently in a provisional and inadequate location, considered unfit to receive new collections. MAE/UFBA's current effort consists of implementing a requalification project for this space, with the aim of minimizing the impact caused by agents of deterioration and ensuring the preservation of the artefacts. The final action under the project will be the construction of the Reference Center in Archaeology, Conservation and Restoration, which will house a storage room and laboratories to be used both by the museum and by other units of the university.

  11. Prediction of the Reference Evapotranspiration Using a Chaotic Approach

    Science.gov (United States)

    Wang, Wei-guang; Zou, Shan; Luo, Zhao-hui; Zhang, Wei; Kong, Jun

    2014-01-01

    Evapotranspiration is one of the most important hydrological variables in the context of water resources management. An attempt was made to understand and predict the dynamics of reference evapotranspiration from a nonlinear dynamical perspective in this study. The reference evapotranspiration data was calculated using the FAO Penman-Monteith equation with the observed daily meteorological data for the period 1966–2005 at four meteorological stations (i.e., Baotou, Zhangbei, Kaifeng, and Shaoguan) representing a wide range of climatic conditions of China. The correlation dimension method was employed to investigate the chaotic behavior of the reference evapotranspiration series. The existence of chaos in the reference evapotranspiration series at the four different locations was proved by the finite and low correlation dimension. A local approximation approach was employed to forecast the daily reference evapotranspiration series. Low root mean square error (RMSE) and mean absolute error (MAE) (for all locations lower than 0.31 and 0.24, resp.), high correlation coefficient (CC), and modified coefficient of efficiency (for all locations larger than 0.97 and 0.8, resp.) indicate that the predicted reference evapotranspiration agrees well with the observed one. The encouraging results indicate the suitability of the chaotic approach for understanding and predicting the dynamics of the reference evapotranspiration. PMID:25133221
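
    In its simplest (zeroth-order) form, the local approximation forecast mentioned here is a nearest-neighbour prediction in a delay-embedded phase space. The sketch below illustrates that idea with NumPy; the embedding dimension, delay, neighbour count and the synthetic series are placeholders, not the settings used in the study.

```python
import numpy as np

def local_approx_forecast(x, dim=3, tau=1, k=5):
    """Zeroth-order local approximation: predict the next value of a series
    by averaging the successors of the k nearest neighbours of the current
    state vector in a delay-embedded phase space."""
    x = np.asarray(x, dtype=float)
    span = (dim - 1) * tau                       # length of one embedding window
    n_vec = len(x) - span - 1                    # vectors whose successor is known
    vecs = np.array([x[i:i + span + 1:tau] for i in range(n_vec)])
    succ = x[span + 1:span + 1 + n_vec]          # value following each vector
    query = x[len(x) - 1 - span::tau]            # current (most recent) state
    nearest = np.argsort(np.linalg.norm(vecs - query, axis=1))[:k]
    return succ[nearest].mean()

# Toy usage on a synthetic quasi-periodic series
t = np.arange(500)
series = np.sin(0.3 * t) + 0.5 * np.sin(0.11 * t)
print(local_approx_forecast(series, dim=4, tau=2, k=10))
```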

  12. Prediction of the Reference Evapotranspiration Using a Chaotic Approach

    Directory of Open Access Journals (Sweden)

    Wei-guang Wang

    2014-01-01

    Full Text Available Evapotranspiration is one of the most important hydrological variables in the context of water resources management. An attempt was made to understand and predict the dynamics of reference evapotranspiration from a nonlinear dynamical perspective in this study. The reference evapotranspiration data was calculated using the FAO Penman-Monteith equation with the observed daily meteorological data for the period 1966–2005 at four meteorological stations (i.e., Baotou, Zhangbei, Kaifeng, and Shaoguan) representing a wide range of climatic conditions of China. The correlation dimension method was employed to investigate the chaotic behavior of the reference evapotranspiration series. The existence of chaos in the reference evapotranspiration series at the four different locations was proved by the finite and low correlation dimension. A local approximation approach was employed to forecast the daily reference evapotranspiration series. Low root mean square error (RMSE) and mean absolute error (MAE) (for all locations lower than 0.31 and 0.24, resp.), high correlation coefficient (CC), and modified coefficient of efficiency (for all locations larger than 0.97 and 0.8, resp.) indicate that the predicted reference evapotranspiration agrees well with the observed one. The encouraging results indicate the suitability of the chaotic approach for understanding and predicting the dynamics of the reference evapotranspiration.

  13. An Electricity Price Forecasting Model by Hybrid Structured Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Ping-Huan Kuo

    2018-04-01

    Full Text Available Electricity price is a key influencer in the electricity market. Electricity market trades by each participant are based on electricity price. The electricity price adjusted with the change in supply and demand relationship can reflect the real value of electricity in the transaction process. However, for the power generating party, bidding strategy determines the level of profit, and the accurate prediction of electricity price could make it possible to determine a more accurate bidding price. This can not only reduce transaction risk, but also seize opportunities in the electricity market. In order to effectively estimate electricity price, this paper proposes an electricity price forecasting system based on the combination of two deep neural networks, the Convolutional Neural Network (CNN) and the Long Short-Term Memory (LSTM) network. In order to compare the overall performance of each algorithm, the Mean Absolute Error (MAE) and Root-Mean-Square Error (RMSE) evaluating measures were applied in the experiments of this paper. Experiment results show that compared with other traditional machine learning methods, the prediction performance of the estimating model proposed in this paper is proven to be the best. By combining the CNN and LSTM models, the feasibility and practicality of electricity price prediction is also confirmed in this paper.
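
    As a reference for the two evaluation measures named in this record, a minimal sketch of how MAE and RMSE are typically computed from paired observations and forecasts (the price values below are illustrative only):

```python
import numpy as np

def mae(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs(actual - predicted))          # Mean Absolute Error

def rmse(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.sqrt(np.mean((actual - predicted) ** 2))  # Root-Mean-Square Error

# Illustrative hourly prices and model forecasts (arbitrary currency/MWh)
actual    = [42.1, 45.8, 50.3, 47.6, 44.0]
predicted = [41.5, 47.0, 49.1, 48.2, 43.2]
print(f"MAE  = {mae(actual, predicted):.2f}")
print(f"RMSE = {rmse(actual, predicted):.2f}")
```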

  14. Short-Term Solar Irradiance Forecasts Using Sky Images and Radiative Transfer Model

    Directory of Open Access Journals (Sweden)

    Juan Du

    2018-05-01

    Full Text Available In this paper, we propose a novel forecast method which addresses the difficulty in short-term solar irradiance forecasting that arises due to rapidly evolving environmental factors over short time periods. This involves the forecasting of Global Horizontal Irradiance (GHI) that combines prediction sky images with a Radiative Transfer Model (RTM). The prediction images (up to 10 min ahead) are produced by a non-local optical flow method, which is used to calculate the cloud motion for each pixel, with consecutive sky images at 1 min intervals. The Direct Normal Irradiance (DNI) and the diffuse radiation intensity field under clear sky and overcast conditions obtained from the RTM are then mapped to the sky images. Through combining the cloud locations on the prediction image with the corresponding instance of image-based DNI and diffuse radiation intensity fields, the GHI can be quantitatively forecasted for time horizons of 1–10 min ahead. The solar forecasts are evaluated in terms of root mean square error (RMSE) and mean absolute error (MAE) in relation to in-situ measurements and compared to the performance of the persistence model. The results of our experiment show that GHI forecasts using the proposed method perform better than the persistence model.
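
    One simple form of the persistence baseline used for comparison assumes the currently observed GHI persists over the forecast horizon; a hedged sketch of that baseline and of a skill score relative to it follows. The horizon, the GHI values and the stand-in model forecast are illustrative, not the study's data.

```python
import numpy as np

def persistence_forecast(ghi, horizon):
    """Forecast GHI at t + horizon as the value observed at t."""
    ghi = np.asarray(ghi, dtype=float)
    return ghi[:-horizon]                     # aligns with observations ghi[horizon:]

def forecast_skill(obs, model_fc, persist_fc):
    """Skill score: 1 - RMSE(model) / RMSE(persistence); > 0 beats persistence."""
    rmse = lambda a, b: np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
    return 1.0 - rmse(obs, model_fc) / rmse(obs, persist_fc)

# Illustrative 1-min GHI measurements (W/m^2) and a 5-min-ahead model forecast
ghi = np.array([620, 640, 655, 630, 590, 575, 600, 615, 625, 640, 650], float)
horizon = 5
obs = ghi[horizon:]
persist = persistence_forecast(ghi, horizon)
model = obs + np.random.default_rng(0).normal(0, 10, obs.size)  # stand-in forecast
print(f"skill vs persistence: {forecast_skill(obs, model, persist):.2f}")
```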

  15. Back to the Future Betas: Empirical Asset Pricing of US and Southeast Asian Markets

    Directory of Open Access Journals (Sweden)

    Jordan French

    2016-07-01

    Full Text Available The study adds an empirical outlook on the predicting power of using data from the future to predict future returns. The crux of the traditional Capital Asset Pricing Model (CAPM) methodology is using historical data in the calculation of the beta coefficient. This study instead uses a battery of Generalized Auto Regressive Conditional Heteroskedasticity (GARCH) models, of differing lag and parameter terms, to forecast the variance of the market used in the denominator of the beta formula. The covariance of the portfolio and market returns is assumed to remain constant in the time-varying beta calculations. The data spans from 3 January 2005 to 29 December 2014. One ten-year, two five-year, and three three-year sample periods were used, for robustness, with ten different portfolios. For out-of-sample forecasts, mean absolute error (MAE) and mean squared forecast error (MSE) were used to compare the forecasting ability of the ex-ante GARCH models, an Artificial Neural Network, and the standard market ex-post model. We find that the time-varying MGARCH and SGARCH betas performed better with out-of-sample testing than the other ex-ante models. Although the simplest approach, constant ex-post beta, performed as well or better within this empirical study.
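
    A sketch of the ex-ante beta idea described here, assuming daily return series and the third-party `arch` package for a GARCH(1,1) variance forecast: following the abstract, the portfolio-market covariance is held constant while the market variance in the denominator of beta is replaced by its one-step-ahead forecast. The model orders, scaling and synthetic returns are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from arch import arch_model  # assumes the `arch` package is installed

def ex_ante_beta(portfolio_ret, market_ret):
    """Time-varying beta = constant covariance / forecast market variance."""
    cov_pm = np.cov(portfolio_ret, market_ret)[0, 1]      # held constant
    am = arch_model(100 * market_ret, vol="Garch", p=1, q=1, mean="Constant")
    res = am.fit(disp="off")
    fc = res.forecast(horizon=1)
    var_next = fc.variance.values[-1, 0] / 100**2         # undo the scaling
    return cov_pm / var_next

# Illustrative synthetic daily returns
rng = np.random.default_rng(1)
market = rng.normal(0, 0.01, 1500)
portfolio = 0.9 * market + rng.normal(0, 0.005, 1500)
print(f"ex-ante beta: {ex_ante_beta(portfolio, market):.2f}")
```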

  16. An application of seasonal ARIMA models on group commodities to forecast Philippine merchandise exports performance

    Science.gov (United States)

    Natividad, Gina May R.; Cawiding, Olive R.; Addawe, Rizavel C.

    2017-11-01

    The increase in the merchandise exports of the country offers information about the Philippines' trading role within the global economy. Merchandise exports statistics are used to monitor the country's overall production that is consumed overseas. This paper investigates the comparison between two models obtained by a) clustering the commodity groups into two based on their proportional contribution to the total exports, and b) treating only the total exports. Different seasonal autoregressive integrated moving average (SARIMA) models were then developed for the clustered commodities and for the total exports based on the monthly merchandise exports of the Philippines from 2011 to 2016. The data set used in this study was retrieved from the Philippine Statistics Authority (PSA), which is the central statistical authority in the country responsible for primary data collection. A test for significance of the difference between means at the 0.05 level of significance was then performed on the forecasts produced. The result indicates that there is a significant difference between the means of the forecasts of the two models. Moreover, upon a comparison of the root mean square error (RMSE) and mean absolute error (MAE) of the models, it was found that the models used for the clustered groups outperform the model for the total exports.
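
    For reference, a hedged sketch of fitting a seasonal ARIMA to a monthly export series with statsmodels; the (p,d,q)(P,D,Q)s orders and the synthetic series below are placeholders, not the models identified in the paper (the real data came from the PSA).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Illustrative monthly export values (USD millions) with a trend and yearly cycle.
idx = pd.date_range("2011-01-01", periods=72, freq="MS")
y = pd.Series(4000 + 30 * np.arange(72)
              + 300 * np.sin(2 * np.pi * np.arange(72) / 12), index=idx)

# Placeholder orders: non-seasonal (1,1,1), seasonal (1,1,1) with period 12.
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)

forecast = fit.forecast(steps=12)   # 12-month-ahead forecasts
mae = np.mean(np.abs(y.iloc[-12:].values - fit.fittedvalues.iloc[-12:].values))
print(forecast.head())
print(f"in-sample MAE over last year: {mae:.1f}")
```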

  17. Role of hybrid forecasting techniques for transportation planning of broiler meat under uncertain demand in thailand

    Directory of Open Access Journals (Sweden)

    Thoranin Sujjaviriyasup

    2014-12-01

    Full Text Available One of the numerous problems encountered in supply chain management is demand: most demand is subject to uncertainty. The broiler meat industry inevitably encounters the same problem. In this research, a hybrid forecasting model of ARIMA and Support Vector Machines (SVMs) is developed to forecast broiler meat exports. In addition, ARIMA, SVMs, and Moving Average (MA) models are chosen for comparing forecasting efficiency. All the forecasting models are tested and validated using data on Brazil's, Canada's, and Thailand's exports. The hybrid model provides forecast accuracies of 98.71%, 97.50%, and 93.01%, respectively. In addition, the hybrid model presents the lowest error on all of MAE, RMSE, and MAPE compared with the other forecasting models. When the forecasted data are applied to transportation planning, the mean absolute percentage error (MAPE) between the optimal values based on forecasted and actual data is 14.53%. The reductions in total transportation cost risk achieved by the hybrid model relative to forecasts using MA(2), MA(3), ARIMA, and SVM are 50.59%, 60.18%, 68.01%, and 46.55%, respectively. The results indicate that the developed forecasting model is recommended for broiler meat industries' supply chain decisions.
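
    The record does not spell out how the ARIMA and SVM components are combined. A common construction (a Zhang-style decomposition, which is an assumption here rather than necessarily the authors' exact scheme) fits ARIMA to the series, trains an SVM regressor on the ARIMA residuals, and adds the two forecasts; the lag count, model orders and synthetic series below are illustrative.

```python
import numpy as np
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

def hybrid_forecast(y, lags=3, order=(1, 1, 1)):
    """ARIMA forecast plus an SVR correction learned on the ARIMA residuals."""
    y = np.asarray(y, dtype=float)
    arima_fit = ARIMA(y, order=order).fit()
    resid = arima_fit.resid

    # Train SVR to predict the next residual from the previous `lags` residuals.
    X = np.array([resid[i:i + lags] for i in range(len(resid) - lags)])
    t = resid[lags:]
    svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, t)

    linear_part = arima_fit.forecast(steps=1)[0]
    nonlinear_part = svr.predict(resid[-lags:].reshape(1, -1))[0]
    return linear_part + nonlinear_part

# Illustrative monthly export series (tonnes); not the actual broiler-meat data.
rng = np.random.default_rng(2)
series = 1000 + 5 * np.arange(60) + rng.normal(0, 20, 60)
print(f"next-period hybrid forecast: {hybrid_forecast(series):.1f}")
```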

  18. Design and construction of miniature artificial ecosystem based on dynamic response optimization

    Science.gov (United States)

    Hu, Dawei; Liu, Hong; Tong, Ling; Li, Ming; Hu, Enzhu

    The miniature artificial ecosystem (MAES) is a combination of man, silkworm, salad and microalgae to partially regenerate O2, sanitary water and food, and simultaneously dispose of CO2 and wastes; therefore it has a fundamental life support function. In order to enhance the safety and reliability of MAES and eliminate the influences of internal variations and external disturbances, it was necessary to configure MAES as a closed-loop control system, and it could be considered as a prototype for a future bioregenerative life support system. However, MAES is a complex system possessing large numbers of parameters, intricate nonlinearities, time-varying factors as well as uncertainties, hence it is difficult to perfectly design and construct a prototype merely by conducting experiments with a trial and error method. Our research presented an effective way to resolve the preceding problem by use of dynamic response optimization. Firstly, the mathematical model of MAES with first-order nonlinear ordinary differential equations including parameters was developed based on relevant mechanisms and experimental data; secondly, a simulation model of MAES was derived on the platform of MatLab/Simulink to perform model validation and further digital simulations; thirdly, reference trajectories of the desired dynamic response of system outputs were specified according to prescribed requirements; and finally, optimization of initial values, tuned parameters and independent parameters was carried out using the genetic algorithm and the advanced direct search method along with parallel computing methods through computer simulations. The result showed that all parameters and configurations of MAES were determined after a series of computer experiments, and its transient response performances and steady characteristics closely matched the reference curves. Since the prototype is a physical system that represents the mathematical model with reasonable accuracy, so the process of designing and

  19. Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry.

    Science.gov (United States)

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frédéric

    2010-04-01

    The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% by inverting the film scanning side on the gamma index for 3%-3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  20. Approach to Absolute Zero

    Indian Academy of Sciences (India)

    Approach to Absolute Zero – Below 10 milli-Kelvin. R Srinivasan. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 10, October 1997, pp. 8-16. Permanent link: https://www.ias.ac.in/article/fulltext/reso/002/10/0008-0016

  1. Absolute total and one and two electron transfer cross sections for Ar8+ on Ar as a function of energy

    International Nuclear Information System (INIS)

    Vancura, J.; Kostroun, V.O.

    1992-01-01

    The absolute total and one and two electron transfer cross sections for Ar8+ on Ar were measured as a function of projectile laboratory energy from 0.090 to 0.550 keV/amu. The effective one electron transfer cross section dominates above 0.32 keV/amu, while below this energy, the effective two electron transfer starts to become appreciable. The total cross section varies by a factor over the energy range explored. The overall error in the cross section measurement is estimated to be ± 15%.

  2. Population-based absolute risk estimation with survey data

    Science.gov (United States)

    Kovalchik, Stephanie A.; Pfeiffer, Ruth M.

    2013-01-01

    Absolute risk is the probability that a cause-specific event occurs in a given time interval in the presence of competing events. We present methods to estimate population-based absolute risk from a complex survey cohort that can accommodate multiple exposure-specific competing risks. The hazard function for each event type consists of an individualized relative risk multiplied by a baseline hazard function, which is modeled nonparametrically or parametrically with a piecewise exponential model. An influence method is used to derive a Taylor-linearized variance estimate for the absolute risk estimates. We introduce novel measures of the cause-specific influences that can guide modeling choices for the competing event components of the model. To illustrate our methodology, we build and validate cause-specific absolute risk models for cardiovascular and cancer deaths using data from the National Health and Nutrition Examination Survey. Our applications demonstrate the usefulness of survey-based risk prediction models for predicting health outcomes and quantifying the potential impact of disease prevention programs at the population level. PMID:23686614

  3. Absolute marine gravimetry with matter-wave interferometry.

    Science.gov (United States)

    Bidel, Y; Zahzam, N; Blanchard, C; Bonnin, A; Cadoret, M; Bresson, A; Rouxel, D; Lequentrec-Lalancette, M F

    2018-02-12

    Measuring gravity from an aircraft or a ship is essential in geodesy, geophysics, mineral and hydrocarbon exploration, and navigation. Today, only relative sensors are available for onboard gravimetry. This is a major drawback because of the calibration and drift estimation procedures which lead to important operational constraints. Atom interferometry is a promising technology to obtain an onboard absolute gravimeter. However, despite the high performance obtained in static conditions, no precise measurements had been reported in dynamic conditions. Here, we present absolute gravity measurements from a ship with a sensor based on atom interferometry. Despite rough sea conditions, we obtained precision below 10^-5 m s^-2. The atom gravimeter was also compared with a commercial spring gravimeter and showed better performance. This demonstration opens the way to the next generation of inertial sensors (accelerometer, gyroscope) based on atom interferometry which should provide high-precision absolute measurements from a moving platform.

  4. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Relative and absolute risk in epidemiology and health physics

    International Nuclear Information System (INIS)

    Goldsmith, R.; Peterson, H.T. Jr.

    1983-01-01

    The health risk from ionizing radiation commonly is expressed in two forms: (1) the relative risk, which is the percentage increase in natural disease rate and (2) the absolute or attributable risk which represents the difference between the natural rate and the rate associated with the agent in question. Relative risk estimates for ionizing radiation generally are higher than those expressed as the absolute risk. This raises the question of which risk estimator is the most appropriate under different conditions. The absolute risk has generally been used for radiation risk assessment, although mathematical combinations such as the arithmetic or geometric mean of both the absolute and relative risks, have also been used. Combinations of the two risk estimators are not valid because the absolute and relative risk are not independent variables. Both human epidemiologic studies and animal experimental data can be found to illustrate the functional relationship between the natural cancer risk and the risk associated with radiation. This implies that the radiation risk estimate derived from one population may not be appropriate for predictions in another population, unless it is adjusted for the difference in the natural disease incidence between the two populations

  6. Redetermination and absolute configuration of atalaphylline

    Directory of Open Access Journals (Sweden)

    Hoong-Kun Fun

    2010-02-01

    Full Text Available The title acridone alkaloid [systematic name: 1,3,5-trihydroxy-2,4-bis(3-methylbut-2-enyl)acridin-9(10H)-one], C23H25NO4, has previously been reported as crystallizing in the chiral orthorhombic space group P212121 [Chantrapromma et al. (2010). Acta Cryst. E66, o81–o82] but the absolute configuration could not be determined from data collected with Mo radiation. The absolute configuration has now been determined by refinement of the Flack parameter with data collected using Cu radiation. All features of the molecule and its crystal packing are similar to those previously described.

  7. Absolute calibration of sniffer probes on Wendelstein 7-X

    International Nuclear Information System (INIS)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-01-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  8. Absolute calibration of sniffer probes on Wendelstein 7-X

    Science.gov (United States)

    Moseev, D.; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.; Oosterbeek, J. W.

    2016-08-01

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  9. Absolute calibration of sniffer probes on Wendelstein 7-X

    Energy Technology Data Exchange (ETDEWEB)

    Moseev, D., E-mail: dmitry.moseev@ipp.mpg.de; Laqua, H. P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V. [Max-Planck-Institut für Plasmaphysik, Greifswald (Germany); Gellert, F. [Max-Planck-Institut für Plasmaphysik, Greifswald (Germany); Ernst-Moritz-Arndt-Universität Greifswald, Greifswald (Germany); Oosterbeek, J. W. [Eindhoven University of Technology, Eindhoven (Netherlands)

    2016-08-15

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of the Wendelstein 7-X empty vacuum vessel. Normalized absolute calibration coefficients agree with the cross-calibration coefficients that are obtained by the direct measurements, indicating that the measured absolute calibration coefficients and stray radiation levels in the vessel are valid. Close to the launcher, the stray radiation in the empty vessel reaches power levels up to 340 kW/m2 per MW injected beam power. Furthest away from the launcher, i.e., half a toroidal turn, still 90 kW/m2 per MW injected beam power is measured.

  10. Absolute magnitudes by statistical parallaxes

    International Nuclear Information System (INIS)

    Heck, A.

    1978-01-01

    The author describes an algorithm for stellar luminosity calibrations (based on the principle of maximum likelihood) which allows the calibration of relations of the type: M_i = Σ_{j=1}^{N} q_j C_ij, i = 1, ..., n, where n is the size of the sample at hand, M_i are the individual absolute magnitudes, C_ij are observational quantities (j = 1, ..., N), and q_j are the coefficients to be determined. If one puts N = 1 and C_iN = 1, one has q_1 = M(mean), the mean absolute magnitude of the sample. As additional output, the algorithm also provides the dispersion in magnitude of the sample sigma_M, the mean solar motion (U, V, W) and the corresponding velocity ellipsoid (sigma_u, sigma_v, sigma_w). The use of this algorithm is illustrated. (Auth.)
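
    The full algorithm is a maximum-likelihood fit that also solves for the solar motion and the velocity ellipsoid; as a simplified illustration of just the linear relation being calibrated, M_i = Σ_j q_j C_ij, the coefficients q_j can be sketched as an ordinary least-squares problem. The observational quantities below are synthetic placeholders, not data from the paper.

```python
import numpy as np

# Synthetic example: n stars, N = 3 observational quantities per star
# (e.g. a period-like index, a colour, and a constant column so q_3 is an offset).
rng = np.random.default_rng(3)
n, N = 200, 3
C = np.column_stack([rng.uniform(0, 1, n), rng.uniform(0.2, 1.5, n), np.ones(n)])
q_true = np.array([-2.5, 1.0, -1.2])
M = C @ q_true + rng.normal(0, 0.3, n)     # "observed" absolute magnitudes

# Least-squares estimate of the calibration coefficients q_j
q_hat, *_ = np.linalg.lstsq(C, M, rcond=None)
sigma_M = np.std(M - C @ q_hat)            # dispersion in magnitude of the sample
print("q_hat =", np.round(q_hat, 2), " sigma_M =", round(float(sigma_M), 2))
```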

  11. Noncircular features in Saturn's rings IV: Absolute radius scale and Saturn's pole direction

    Science.gov (United States)

    French, Richard G.; McGhee-French, Colleen A.; Lonergan, Katherine; Sepersky, Talia; Jacobson, Robert A.; Nicholson, Philip D.; Hedman, Mathew M.; Marouf, Essam A.; Colwell, Joshua E.

    2017-07-01

    We present a comprehensive solution for the geometry of Saturn's ring system, based on orbital fits to an extensive set of occultation observations of 122 individual ring edges and gaps. We begin with a restricted set of very high quality Cassini VIMS, UVIS, and RSS measurements for quasi-circular features in the C and B rings and the Cassini Division, and then successively add suitably weighted additional Cassini and historical occultation measurements (from Voyager, HST and the widely-observed 28 Sgr occultation of 3 Jul 1989) for additional non-circular features, to derive an absolute radius scale applicable across the entire classical ring system. As part of our adopted solution, we determine first-order corrections to the spacecraft trajectories used to determine the geometry of individual occultation chords. We adopt a simple linear model for Saturn's precession, and our favored solution yields a precession rate on the sky of dn_P/dt = 0.207 ± 0.006 arcsec yr^-1, equivalent to an angular rate of polar motion Ω_P = 0.451 ± 0.014 arcsec yr^-1. The 3% formal uncertainty in the fitted precession rate is approaching the point where it can provide a useful constraint on models of Saturn's interior, although realistic errors are likely to be larger, given the linear approximation of the precession model and possible unmodeled systematic errors in the spacecraft ephemerides. Our results are largely consistent with independent estimates of the precession rate based on historical RPX times (Nicholson et al., 1999 AAS/Division for Planetary Sciences Meeting Abstracts #31 31, 44.01) and from theoretical expectations that account for Titan's 700-yr precession period (Vienne and Duriez 1992, Astronomy and Astrophysics 257, 331-352). The fitted precession rate based on Cassini data only is somewhat lower, which may be an indication of unmodeled shorter term contributions to Saturn's polar motion from other satellites, or perhaps the result of inconsistencies in the assumed

  12. Regional absolute conductivity reconstruction using projected current density in MREIT

    International Nuclear Information System (INIS)

    Sajib, Saurav Z K; Kim, Hyung Joong; Woo, Eung Je; Kwon, Oh In

    2012-01-01

    slice and the reconstructed regional projected current density, we propose a direct non-iterative algorithm to reconstruct the absolute conductivity in the ROI. The numerical simulations in the presence of various degrees of noise, as well as a phantom MRI imaging experiment showed that the proposed method reconstructs the regional absolute conductivity in a ROI within a subject including the defective regions. In the simulation experiment, the relative L2-mode errors of the reconstructed regional and global conductivities were 0.79 and 0.43, respectively, using a noise level of 50 dB in the defective region. (paper)

  13. Coordinated joint motion control system with position error correction

    Science.gov (United States)

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  14. Correcting electrode modelling errors in EIT on realistic 3D head models.

    Science.gov (United States)

    Jehl, Markus; Avery, James; Malone, Emma; Holder, David; Betcke, Timo

    2015-12-01

    Electrical impedance tomography (EIT) is a promising medical imaging technique which could aid differentiation of haemorrhagic from ischaemic stroke in an ambulance. One challenge in EIT is the ill-posed nature of the image reconstruction, i.e., that small measurement or modelling errors can result in large image artefacts. It is therefore important that reconstruction algorithms are improved with regard to stability to modelling errors. We identify that wrongly modelled electrode positions constitute one of the biggest sources of image artefacts in head EIT. Therefore, the use of the Fréchet derivative on the electrode boundaries in a realistic three-dimensional head model is investigated, in order to reconstruct electrode movements simultaneously to conductivity changes. We show a fast implementation and analyse the performance of electrode position reconstructions in time-difference and absolute imaging for simulated and experimental voltages. Reconstructing the electrode positions and conductivities simultaneously increased the image quality significantly in the presence of electrode movement.

  15. Strongly nonlinear theory of rapid solidification near absolute stability

    Science.gov (United States)

    Kowal, Katarzyna N.; Altieri, Anthony L.; Davis, Stephen H.

    2017-10-01

    We investigate the nonlinear evolution of the morphological deformation of a solid-liquid interface of a binary melt under rapid solidification conditions near two absolute stability limits. The first of these involves the complete stabilization of the system to cellular instabilities as a result of large enough surface energy. We derive nonlinear evolution equations in several limits in this scenario and investigate the effect of interfacial disequilibrium on the nonlinear deformations that arise. In contrast to the morphological stability problem in equilibrium, in which only cellular instabilities appear and only one absolute stability boundary exists, in disequilibrium the system is prone to oscillatory instabilities and a second absolute stability boundary involving attachment kinetics arises. Large enough attachment kinetics stabilize the oscillatory instabilities. We derive a nonlinear evolution equation to describe the nonlinear development of the solid-liquid interface near this oscillatory absolute stability limit. We find that strong asymmetries develop with time. For uniform oscillations, the evolution equation for the interface reduces to the simple form f'' + (βf')² + f = 0, where β is the disequilibrium parameter. Lastly, we investigate a distinguished limit near both absolute stability limits in which the system is prone to both cellular and oscillatory instabilities and derive a nonlinear evolution equation that captures the nonlinear deformations in this limit. Common to all these scenarios is the emergence of larger asymmetries in the resulting shapes of the solid-liquid interface with greater departures from equilibrium and larger morphological numbers. The disturbances additionally sharpen near the oscillatory absolute stability boundary, where the interface becomes deep-rooted. The oscillations are time-periodic only for small-enough initial amplitudes and their frequency depends on a single combination of physical parameters, including the
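
    The uniform-oscillation equation quoted above, f'' + (βf')² + f = 0, is simple enough to integrate numerically. The sketch below does so with SciPy for an illustrative value of β and initial amplitude; both values are arbitrary choices for demonstration, not parameters taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta = 0.5  # disequilibrium parameter (illustrative value, not from the study)

def rhs(t, y):
    # State y = [f, f']; the equation gives f'' = -(beta * f')^2 - f.
    f, fp = y
    return [fp, -(beta * fp) ** 2 - f]

# Small initial amplitude, zero initial slope.
sol = solve_ivp(rhs, (0.0, 30.0), [0.2, 0.0], max_step=0.05)
print("final amplitude:", sol.y[0, -1])
```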

  16. Engineering Complex Microbial Phenotypes with Continuous Genetic Integration and Plasmid Based Multi-gene Library

    Science.gov (United States)

    2013-10-09

    [Fragmentary table and figure residue from the original report: plasmid library constructs pS1–pS4 with genomic coordinates and gene annotations (acetyltransferase, mae, citC, citD, citE, pts26BCA, tal2 [lp_3539], [lp_3540]...); error bars indicate standard error. Fig. 19: Survival tolerance assay performed for LibCtl and pS1–pS4.]

  17. Hourly cooling load prediction of a vehicle in the southern region of Turkey by Artificial Neural Network

    International Nuclear Information System (INIS)

    Solmaz, Ozgur; Ozgoren, Muammer; Aksoy, Muharrem Hilmi

    2014-01-01

    Highlights: • An ANN model was developed to predict hourly cooling load of a vehicle. • Hourly meteorological data of 5 different provinces was used. • The agreement of the cooling load values between the calculations and predictions was fairly promising. • The ANN model could be successfully used to design automotive air conditioning systems. - Abstract: In this study, an Artificial Neural Network (ANN) method for predicting the hourly cooling load of a vehicle was implemented. The cooling load of the vehicle was calculated along the cooling season (1 May–30 September) for Antalya, Konya, Mersin, Mugla and Sanliurfa provinces in Turkey. For the ANN model, seven neurons determined as input signals of latitude, longitude, altitude, day of the year, hour of the day, hourly mean ambient air temperature and hourly solar radiation were used for the input layer of the network. One neuron producing an output signal of the hourly cooling load was utilized in the output layer. All data were divided into two categories for training and testing of the ANN. 80% of the data was reserved for training and the remainder was used for testing of the model. Neuron numbers in the hidden layer from 7 to 40 were tested step by step to find the best matching ANN structure. The obtained results for different numbers of neurons were compared in terms of root mean squared error (RMSE), coefficient of determination (R²) and mean absolute error (MAE). The best results for training and testing, judged by the minimum testing RMSE for prediction of the cooling load on the 23rd day of each month of the cooling season, were obtained with 8 neurons. For the model with 8 neurons, the RMSE, R² and MAE (training/testing) were found to be 0.0128/0.0259, 0.9959/0.9818 and 78.81/174.71 W/m², respectively. It is shown that the cooling load of a vehicle can be successfully predicted by means of the ANNs from geographical characteristics and meteorological data
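
    The three comparison statistics used in this record are straightforward to reproduce. The sketch below shows one common formulation of RMSE, MAE and the coefficient of determination R² for observed versus predicted cooling loads; the array values are placeholders, not data from the study.

```python
import numpy as np

def rmse(obs, pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((np.asarray(obs, float) - np.asarray(pred, float)) ** 2)))

def mae(obs, pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(obs, float) - np.asarray(pred, float))))

def r2(obs, pred):
    """Coefficient of determination: 1 - SSE/SST."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Placeholder cooling-load values (W/m^2), not taken from the study.
observed  = [310.0, 450.0, 520.0, 610.0, 580.0]
predicted = [295.0, 470.0, 505.0, 640.0, 560.0]
print(rmse(observed, predicted), mae(observed, predicted), r2(observed, predicted))
```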

  18. Temperature based daily incoming solar radiation modeling based on gene expression programming, neuro-fuzzy and neural network computing techniques.

    Science.gov (United States)

    Landeras, G.; López, J. J.; Kisi, O.; Shiri, J.

    2012-04-01

    The correct observation/estimation of surface incoming solar radiation (RS) is very important for many agricultural, meteorological and hydrological related applications. While most weather stations are provided with sensors for air temperature detection, the presence of sensors necessary for the detection of solar radiation is less common and the data quality provided by them is sometimes poor. In these cases it is necessary to estimate this variable. Temperature based modeling procedures are reported in this study for estimating daily incoming solar radiation by using Gene Expression Programming (GEP) for the first time, and other artificial intelligence models such as Artificial Neural Networks (ANNs), and Adaptive Neuro-Fuzzy Inference System (ANFIS). Traditional temperature based solar radiation equations were also included in this study and compared with artificial intelligence based approaches. Root mean square error (RMSE), mean absolute error (MAE), RMSE-based skill score (SSRMSE), MAE-based skill score (SSMAE) and the R² criterion of Nash and Sutcliffe were used to assess the models' performances. An ANN (a four-input multilayer perceptron with ten neurons in the hidden layer) presented the best performance among the studied models (RMSE of 2.93 MJ m⁻² d⁻¹). A four-input ANFIS model proved to be an interesting alternative to ANNs (RMSE of 3.14 MJ m⁻² d⁻¹). A very limited number of studies have been done on estimation of solar radiation based on ANFIS, and the present one demonstrated the ability of ANFIS to model solar radiation based on temperatures and extraterrestrial radiation. This study also demonstrated, for the first time, the ability of GEP models to model solar radiation based on daily atmospheric variables. Although the accuracy of the GEP models was slightly lower than that of the ANFIS and ANN models, the genetic programming models (i.e., GEP) are superior to other artificial intelligence models in giving a simple explicit equation for the
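
    The RMSE- and MAE-based skill scores named above are not defined in this summary; a common convention scores a model against a reference model as SS = 1 − E_model/E_reference, which is what the sketch below assumes. The reference RMSE value is a hypothetical placeholder; only the ANN RMSE (2.93 MJ m⁻² d⁻¹) comes from the record.

```python
def skill_score(err_model, err_reference):
    """Generic skill score: 1 - error_model / error_reference.
    Positive values mean the model beats the reference; the exact
    convention used in the study above is an assumption here."""
    return 1.0 - err_model / err_reference

# ANN RMSE from the record vs. a hypothetical reference equation RMSE (MJ m^-2 d^-1).
rmse_ann, rmse_reference = 2.93, 3.60
print(round(skill_score(rmse_ann, rmse_reference), 3))
```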

  19. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
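
    As a rough illustration of the estimation step, the sketch below fits a Gaussian process regressor to a short, made-up series of per-round error-rate estimates and extrapolates one step ahead. The kernel choice and the synthetic data are assumptions for demonstration, not the protocol's actual parameters.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical error-rate estimates extracted from past error-correction rounds.
t = np.arange(10, dtype=float).reshape(-1, 1)   # round index
p = 0.01 + 0.002 * np.sin(t.ravel() / 3) + 0.0005 * np.random.default_rng(0).normal(size=10)

kernel = RBF(length_scale=3.0) + WhiteKernel(noise_level=1e-6)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, p)

# Predict the error rate for the next round, with an uncertainty estimate.
mean, std = gp.predict(np.array([[10.0]]), return_std=True)
print(f"predicted error rate at round 10: {mean[0]:.4f} +/- {std[0]:.4f}")
```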

  20. Forcing absoluteness and regularity properties

    NARCIS (Netherlands)

    Ikegami, D.

    2010-01-01

    For a large natural class of forcing notions, we prove general equivalence theorems between forcing absoluteness statements, regularity properties, and transcendence properties over L and the core model K. We use our results to answer open questions from set theory of the reals.

  1. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    Energy Technology Data Exchange (ETDEWEB)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L. [NASA Marshall Space Flight Center, Huntsville, AL 35812 (United States); Khazanov, Igor, E-mail: David.a.Falconer@nasa.gov [Center for Space Plasma and Aeronomic Research, University of Alabama in Huntsville, Huntsville, AL 35899 (United States)

    2016-12-20

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter’s absolute values, measured from the disk passage of a large number of ARs and normalized to each AR’s absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10²² Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important for both the study of the evolution of ARs and for improving the accuracy of forecasts of an AR’s major flare/coronal mass ejection productivity.
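
    The correction step can be imitated with NumPy's Chebyshev tools: fit the normalized parameter values against radial distance from disk center, then multiply a measured value by the inverse of the fitted curve to remove the average projection error. The polynomial degree, data and normalization below are placeholders, not the Letter's actual fit.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical normalized whole-AR flux measurements vs. radial distance from
# disk center (0 = disk center); placeholder values, not the Letter's data.
rng = np.random.default_rng(1)
r = np.linspace(0.0, 0.85, 40)
flux_norm = 1.0 - 0.25 * r**2 + 0.02 * rng.normal(size=r.size)

coeffs = C.chebfit(r, flux_norm, deg=4)      # fitted center-to-limb curve
fit_at_r = C.chebval(0.6, coeffs)            # average fractional value at r = 0.6
correction_factor = 1.0 / fit_at_r           # multiply the measured flux by this

measured_flux = 1.3e22                       # Mx, hypothetical measurement at r = 0.6
corrected_flux = measured_flux * correction_factor
print(correction_factor, corrected_flux)
```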

  2. Absolute calibration of sniffer probes on Wendelstein 7-X

    NARCIS (Netherlands)

    Moseev, D.; Laqua, H.P.; Marsen, S.; Stange, T.; Braune, H.; Erckmann, V.; Gellert, F.J.; Oosterbeek, J.W.

    Here we report the first measurements of the power levels of stray radiation in the vacuum vessel of Wendelstein 7-X using absolutely calibrated sniffer probes. The absolute calibration is achieved by using calibrated sources of stray radiation and the implicit measurement of the quality factor of

  3. Absolute tense forms in Tswana | Pretorius | Journal for Language ...

    African Journals Online (AJOL)

    These views were compared in an attempt to put forth an applicable framework for the classification of the tenses in Tswana and to identify the absolute tenses of Tswana. Keywords: tense; simple tenses; compound tenses; absolute tenses; relative tenses; aspect; auxiliary verbs; auxiliary verbal groups; Tswana Opsomming

  4. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  5. Comparación de Métodos de Interpolación para la Estimación de Temperatura del Reservorio CEASA

    OpenAIRE

    Fonseca, Kalina; IlbayYupa, Mercy; Bustillos, Luis; Barbosa, Sara; Iza, Alisson

    2018-01-01

    Temperature interpolation in bodies of water makes it possible to predict values at sampling points for which no data are available. In this study, 12 interpolation methods were evaluated for estimating the temperature of the reservoir of the Centro de Experimentación Académica Salache (CEASA) of the Universidad Técnica de Cotopaxi. The data collected in the field were randomly interpolated and compared with the observed values on the basis of the mean error (ME), mean absolute error (MAE), mean...

  6. Probative value of absolute and relative judgments in eyewitness identification.

    Science.gov (United States)

    Clark, Steven E; Erickson, Michael A; Breneman, Jesse

    2011-10-01

    It is well-accepted that eyewitness identification decisions based on relative judgments are less accurate than identification decisions based on absolute judgments. However, the theoretical foundation for this view has not been established. In this study relative and absolute judgments were compared through simulations of the WITNESS model (Clark, Appl Cogn Psychol 17:629-654, 2003) to address the question: Do suspect identifications based on absolute judgments have higher probative value than suspect identifications based on relative judgments? Simulations of the WITNESS model showed a consistent advantage for absolute judgments over relative judgments for suspect-matched lineups. However, simulations of same-foils lineups showed a complex interaction based on the accuracy of memory and the similarity relationships among lineup members.

  7. Quiet please! Drug round tabards: are they effective and accepted? A mixed method study

    NARCIS (Netherlands)

    Verweij, Lotte; Smeulers, Marian; Maaskant, Jolanda M.; Vermeulen, Hester

    2014-01-01

    The use of drug round tabards is a widespread intervention that is implemented to reduce the number of interruptions and medication administration errors (MAEs) by nurses; however, evidence for their effectiveness is scarce. Evaluation of the effect of drug round tabards on the frequency and type of

  8. Multiple kernel SVR based on the MRE for remote sensing water depth fusion detection

    Science.gov (United States)

    Wang, Jinjin; Ma, Yi; Zhang, Jingyu

    2018-03-01

    Remote sensing is an important means of water depth detection in coastal shallow waters and reefs. Support vector regression (SVR) is a machine learning method widely used for data regression. In this paper, SVR is applied to remote sensing multispectral bathymetry. To address the problem that single-kernel SVR methods have large errors in shallow-water depth inversion, the mean relative error (MRE) at different water depths is used as a decision fusion factor for the single-kernel SVR methods, and a multi-kernel SVR fusion method based on the MRE is put forward. Taking North Island of the Xisha Islands, China, as the experimental area, comparison experiments with the single-kernel SVR methods and the traditional multi-band bathymetric method were carried out. The results show that: 1) in the range of 0 to 25 meters, the mean absolute error (MAE) of the multi-kernel SVR fusion method is 1.5 m and the MRE is 13.2%; 2) compared to the 4 single-kernel SVR methods, the MRE of the fusion method is reduced by 1.2% (1.9%) to 3.4% (1.8%), and compared to the traditional multi-band method, the MRE is reduced by 1.9%; 3) in the 0-5 m depth section, compared to the single-kernel methods and the multi-band method, the MRE of the fusion method is reduced by 13.5% to 44.4%, and the distribution of points is more concentrated around y = x.
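
    A very rough sketch of the fusion idea follows: train SVRs with different kernels, estimate each kernel's mean relative error on validation data, and combine predictions with weights inversely proportional to that MRE. The kernels, synthetic data and weighting rule here are assumptions, not the paper's exact scheme, which weights by depth interval.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
# Hypothetical training data: multispectral band ratios -> water depth (m).
X = rng.uniform(0.2, 1.5, size=(200, 3))
depth = 5.0 + 10.0 * X[:, 0] - 4.0 * X[:, 1] + rng.normal(0, 0.5, 200)

models = {k: SVR(kernel=k, C=10.0).fit(X, depth) for k in ("rbf", "poly", "sigmoid")}

def mre(y_true, y_pred):
    """Mean relative error."""
    return np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

# Per-kernel MRE (here on the training data; a held-out set would be used in practice).
weights = {k: 1.0 / mre(depth, m.predict(X)) for k, m in models.items()}
wsum = sum(weights.values())

def fused_depth(x_new):
    """MRE-weighted combination of the single-kernel SVR predictions."""
    x_new = np.atleast_2d(x_new)
    return sum(w * models[k].predict(x_new) for k, w in weights.items()) / wsum

print(fused_depth(X[:1]))
```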

  9. Climatological Modeling of Monthly Air Temperature and Precipitation in Egypt through GIS Techniques

    Science.gov (United States)

    El Kenawy, A.

    2009-09-01

    This paper describes a method for modeling and mapping four climatic variables (maximum temperature, minimum temperature, mean temperature and total precipitation) in Egypt using a multiple regression approach implemented in a GIS environment. In this model, a set of variables including latitude, longitude, elevation within a distance of 5, 10 and 15 km, slope, aspect, distance to the Mediterranean Sea, distance to the Red Sea, distance to the Nile, ratio between land and water masses within a radius of 5, 10, 15 km, the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), the Normalized Difference Temperature Index (NDTI) and reflectance are included as independent variables. These variables were integrated as raster layers in MiraMon software at a spatial resolution of 1 km. Climatic variables were considered as dependent variables and averaged from 39 quality-controlled and homogenized series distributed across the entire country during the period 1957-2006. For each climatic variable, digital and objective maps were finally obtained using the multiple regression coefficients at monthly, seasonal and annual timescales. The accuracy of these maps was assessed through cross-validation between predicted and observed values using a set of statistics including coefficient of determination (R²), root mean square error (RMSE), mean absolute error (MAE), mean bias error (MBE) and Willmott's D statistic. These maps are valuable in terms of spatial resolution as well as the number of observatories involved in the current analysis.
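
    Two of the validation statistics above are less common than RMSE and MAE. A standard formulation of the mean bias error and Willmott's index of agreement (D) is sketched below with placeholder values, on the assumption that the study used the conventional definitions.

```python
import numpy as np

def mean_bias_error(obs, pred):
    """MBE: positive values indicate over-prediction on average."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean(pred - obs))

def willmott_d(obs, pred):
    """Willmott's index of agreement, between 0 and 1 (1 = perfect)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return float(1.0 - num / den)

# Placeholder monthly mean temperatures (deg C), not data from the study.
obs  = [12.1, 14.5, 18.2, 22.9, 27.3, 30.1]
pred = [11.8, 15.0, 18.9, 22.1, 28.0, 29.5]
print(mean_bias_error(obs, pred), willmott_d(obs, pred))
```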

  10. Positioning, alignment and absolute pointing of the ANTARES neutrino telescope

    International Nuclear Information System (INIS)

    Fehr, F; Distefano, C

    2010-01-01

    A precise detector alignment and absolute pointing is crucial for point-source searches. The ANTARES neutrino telescope utilises an array of hydrophones, tiltmeters and compasses for the relative positioning of the optical sensors. The absolute calibration is accomplished by long-baseline low-frequency triangulation of the acoustic reference devices in the deep-sea with a differential GPS system at the sea surface. The absolute pointing can be independently verified by detecting the shadow of the Moon in cosmic rays.

  11. Does Absolute Synonymy exist in Owere-Igbo? | Omego | AFRREV ...

    African Journals Online (AJOL)

    Among Igbo linguistic researchers, determining whether absolute synonymy exists in Owere–Igbo, a dialect of the Igbo language predominantly spoken by the people of Owerri, Imo State, Nigeria, has become a thorny issue. While some linguistic scholars strive to establish that absolute synonymy exists in the lexical ...

  12. Three-dimensional patient setup errors at different treatment sites measured by the Tomotherapy megavoltage CT

    Energy Technology Data Exchange (ETDEWEB)

    Hui, S.K.; Lusczek, E.; Dusenbery, K. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; DeFor, T. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Biostatistics and Informatics Core; Levitt, S. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; Karolinska Institutet, Stockholm (Sweden). Dept. of Onkol-Patol

    2012-04-15

    Reduction of interfraction setup uncertainty is vital for assuring the accuracy of conformal radiotherapy. We report a systematic study of setup error to assess patients' three-dimensional (3D) localization at various treatment sites. Tomotherapy megavoltage CT (MVCT) images were scanned daily in 259 patients from 2005-2008. We analyzed 6,465 MVCT images to measure setup error for head and neck (H and N), chest/thorax, abdomen, prostate, legs, and total marrow irradiation (TMI). Statistical comparisons of the absolute displacements across sites and time were performed in rotation (R), lateral (x), craniocaudal (y), and vertical (z) directions. The global systematic errors were measured to be less than 3 mm in each direction with increasing order of errors for different sites: H and N, prostate, chest, pelvis, spine, legs, and TMI. The differences in displacements in the x, y, and z directions, and 3D average displacement between treatment sites were significant (p < 0.01). Overall improvement in patient localization with time (after 3-4 treatment fractions) was observed. Large displacement (> 5 mm) was observed in the 75th percentile of the patient groups for chest, pelvis, legs, and spine in the x and y direction in the second week of the treatment. MVCT imaging is essential for determining 3D setup error and to reduce uncertainty in localization at all anatomical locations. Setup error evaluation should be performed daily for all treatment regions, preferably for all treatment fractions. (orig.)

  13. Non-destructive analysis of sensory traits of dry-cured loins by MRI-computer vision techniques and data mining.

    Science.gov (United States)

    Caballero, Daniel; Antequera, Teresa; Caro, Andrés; Ávila, María Del Mar; G Rodríguez, Pablo; Perez-Palacios, Trinidad

    2017-07-01

    Magnetic resonance imaging (MRI) combined with computer vision techniques has been proposed as an alternative or complementary technique to determine the quality parameters of food in a non-destructive way. The aim of this work was to analyze the sensory attributes of dry-cured loins using this technique. For that, different MRI acquisition sequences (spin echo, gradient echo and turbo 3D), algorithms for MRI analysis (GLCM, NGLDM, GLRLM and GLCM-NGLDM-GLRLM) and predictive data mining techniques (multiple linear regression and isotonic regression) were tested. The correlation coefficient (R) and mean absolute error (MAE) were used to validate the prediction results. The combination of spin echo, GLCM and isotonic regression produced the most accurate results. In addition, the MRI data from dry-cured loins seems to be more suitable than the data from fresh loins. The application of predictive data mining techniques on computational texture features from the MRI data of loins enables the determination of the sensory traits of dry-cured loins in a non-destructive way. © 2016 Society of Chemical Industry.

  14. Zeolitic Imidazolate Framework-8 Membrane for H2/CO2 Separation: Experimental and Modeling

    Science.gov (United States)

    Lai, L. S.; Yeong, Y. F.; Lau, K. K.; Azmi, M. S.; Chew, T. L.

    2018-03-01

    In this work, a ZIF-8 membrane synthesized through solvent-evaporation secondary seeded growth was tested for single gas permeation and binary gas separation of H2 and CO2. Subsequently, a modified mathematical model combining the effects of the membrane and support layers was applied to represent the gas transport properties of the ZIF-8 membrane. Results showed that the membrane exhibited an H2/CO2 ideal selectivity of 5.83 and a separation factor of 3.28 at 100 kPa and 303 K. In addition, the experimental results agreed well with the simulated results, with mean absolute error (MAE) values ranging from 1.13% to 3.88% for single gas permeation and 10.81% to 21.22% for binary gas separation. Based on the simulated data, most of the H2 and CO2 gas molecules (up to 70%) were transported through the molecular pores of the membrane layer. Thus, gas transport is mainly dominated by adsorption and diffusion across the membrane.

  15. Bounds on absolutely maximally entangled states from shadow inequalities, and the quantum MacWilliams identity

    Science.gov (United States)

    Huber, Felix; Eltschka, Christopher; Siewert, Jens; Gühne, Otfried

    2018-04-01

    A pure multipartite quantum state is called absolutely maximally entangled (AME), if all reductions obtained by tracing out at least half of its parties are maximally mixed. Maximal entanglement is then present across every bipartition. The existence of such states is in many cases unclear. With the help of the weight enumerator machinery known from quantum error correction and the shadow inequalities, we obtain new bounds on the existence of AME states in dimensions larger than two. To complete the treatment on the weight enumerator machinery, the quantum MacWilliams identity is derived in the Bloch representation. Finally, we consider AME states whose subsystems have different local dimensions, and present an example for a 2×3×3×3 system that shows maximal entanglement across every bipartition.

  16. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors

    Directory of Open Access Journals (Sweden)

    Heon-Ju Kwon

    2018-03-01

    Full Text Available Background/Aims Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Methods Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), percentage errors (%) in VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP–VR|/W∙100. Results Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. Conclusions There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.

  17. Resection plane-dependent error in computed tomography volumetry of the right hepatic lobe in living liver donors.

    Science.gov (United States)

    Kwon, Heon-Ju; Kim, Kyoung Won; Kim, Bohyun; Kim, So Yeon; Lee, Chul Seung; Lee, Jeongjin; Song, Gi Won; Lee, Sung Gyu

    2018-03-01

    Computed tomography (CT) hepatic volumetry is currently accepted as the most reliable method for preoperative estimation of graft weight in living donor liver transplantation (LDLT). However, several factors can cause inaccuracies in CT volumetry compared to real graft weight. The purpose of this study was to determine the frequency and degree of resection plane-dependent error in CT volumetry of the right hepatic lobe in LDLT. Forty-six living liver donors underwent CT before donor surgery and on postoperative day 7. Prospective CT volumetry (VP) was measured via the assumptive hepatectomy plane. Retrospective liver volume (VR) was measured using the actual plane by comparing preoperative and postoperative CT. Compared with intraoperatively measured weight (W), percentage errors (%) in VP and VR were evaluated. Plane-dependent error in VP was defined as the absolute difference between VP and VR. % plane-dependent error was defined as follows: |VP-VR|/W∙100. Mean VP, VR, and W were 761.9 mL, 755.0 mL, and 696.9 g. Mean and % errors in VP were 73.3 mL and 10.7%. Mean error and % error in VR were 64.4 mL and 9.3%. Mean plane-dependent error in VP was 32.4 mL. Mean % plane-dependent error was 4.7%. Plane-dependent error in VP exceeded 10% of W in approximately 10% of the subjects in our study. There was approximately 5% plane-dependent error in liver VP on CT volumetry. Plane-dependent error in VP exceeded 10% of W in approximately 10% of LDLT donors in our study. This error should be considered, especially when CT volumetry is performed by a less experienced operator who is not well acquainted with the donor hepatectomy plane.
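
    The error definitions in these two records reduce to a few absolute differences. The sketch below reproduces them for a single hypothetical donor; the values are illustrative, not taken from the cohort.

```python
def volumetry_errors(v_p, v_r, w):
    """Percent errors in prospective/retrospective CT volumetry and the
    plane-dependent component, following |VP - VR| / W * 100."""
    return {
        "%error_VP": abs(v_p - w) / w * 100.0,
        "%error_VR": abs(v_r - w) / w * 100.0,
        "%plane_dependent": abs(v_p - v_r) / w * 100.0,
    }

# Hypothetical single-donor values (mL for volumes, g for graft weight).
print(volumetry_errors(v_p=780.0, v_r=745.0, w=700.0))
```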

  18. Relationships between GPS-signal propagation errors and EISCAT observations

    Directory of Open Access Journals (Sweden)

    N. Jakowski

    1996-12-01

    Full Text Available When travelling through the ionosphere the signals of space-based radio navigation systems such as the Global Positioning System (GPS) are subject to modifications in amplitude, phase and polarization. In particular, phase changes due to refraction lead to propagation errors of up to 50 m for single-frequency GPS users. If both the L1 and the L2 frequencies transmitted by the GPS satellites are measured, first-order range error contributions of the ionosphere can be determined and removed by difference methods. The ionospheric contribution is proportional to the total electron content (TEC) along the ray path between satellite and receiver. Using about ten European GPS receiving stations of the International GPS Service for Geodynamics (IGS), the TEC over Europe is estimated within the geographic ranges -20° ≤ λ ≤ 40°E and 32.5° ≤ Φ ≤ 70°N in longitude and latitude, respectively. The derived TEC maps over Europe contribute to the study of horizontal coupling and transport processes during significant ionospheric events. Due to their comprehensive information about the high-latitude ionosphere, EISCAT observations may help to study the influence of ionospheric phenomena upon propagation errors in GPS navigation systems. Since there are still some accuracy limiting problems to be solved in TEC determination using GPS, data comparison of TEC with vertical electron density profiles derived from EISCAT observations is valuable to enhance the accuracy of propagation-error estimations. This is evident both for absolute TEC calibration as well as for the conversion of ray-path-related observations to vertical TEC. The combination of EISCAT data and GPS-derived TEC data enables a better understanding of large-scale ionospheric processes.
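
    The proportionality to TEC comes from the textbook first-order ionospheric group delay, ΔR ≈ 40.3·TEC/f² (meters, with TEC in electrons per m²), which also explains why two frequencies suffice to remove it. The sketch below applies that generic relation to the GPS L1/L2 frequencies; it is an illustration of the principle, not the processing actually used with the IGS data.

```python
# First-order ionospheric group delay: dR = 40.3 * TEC / f^2  (m; TEC in el/m^2).
F_L1, F_L2 = 1575.42e6, 1227.60e6   # GPS carrier frequencies, Hz

def range_error(tec, freq):
    return 40.3 * tec / freq**2

def tec_from_dual_freq(p1, p2):
    """Estimate TEC from the pseudorange difference P2 - P1 (meters),
    ignoring hardware biases and higher-order terms."""
    return (p2 - p1) * F_L1**2 * F_L2**2 / (40.3 * (F_L1**2 - F_L2**2))

tec = 30.0e16                        # 30 TEC units, a moderate daytime value
dr1, dr2 = range_error(tec, F_L1), range_error(tec, F_L2)
print(f"L1 error {dr1:.1f} m, L2 error {dr2:.1f} m, "
      f"recovered TEC {tec_from_dual_freq(dr1, dr2):.2e} el/m^2")
```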

  19. The DiskMass Survey. II. Error Budget

    Science.gov (United States)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*^disk), and disk maximality (F*,max^disk ≡ V*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  20. Moral absolutism and ectopic pregnancy.

    Science.gov (United States)

    Kaczor, C

    2001-02-01

    If one accepts a version of absolutism that excludes the intentional killing of any innocent human person from conception to natural death, ectopic pregnancy poses vexing difficulties. Given that the embryonic life almost certainly will die anyway, how can one retain one's moral principle and yet adequately respond to a situation that gravely threatens the life of the mother and her future fertility? The four options of treatment most often discussed in the literature are non-intervention, salpingectomy (removal of tube with embryo), salpingostomy (removal of embryo alone), and use of methotrexate (MXT). In this essay, I review these four options and introduce a fifth (the milking technique). In order to assess these options in terms of the absolutism mentioned, it will also be necessary to discuss various accounts of the intention/foresight distinction. I conclude that salpingectomy, salpingostomy, and the milking technique are compatible with absolutist presuppositions, but not the use of methotrexate.

  1. Sexualidades disidentes en la narrativa española reciente: políticas sexuales, cine y subversión en Mae West y yo (2011 de Eduardo Mendicutti

    Directory of Open Access Journals (Sweden)

    Facundo Saxe

    2016-01-01

    Full Text Available This article examines the narrative work of the Spanish writer Eduardo Mendicutti and its links to the representation of sexual dissidence in Spain over recent decades. In particular, it focuses on one of the author's most recent narrative productions, Mae West y yo (2011). Working from a queer perspective on this text, the article addresses topics and devices directly related to sexual dissidence. Within this framework, the novel serves as an example of the author's body of work, which can be read as a "cultural history" of dissident sexuality in Spain in recent decades.

  2. Technical Note: Potential errors in optical density measurements due to scanning side in EBT and EBT2 Gafchromic film dosimetry

    International Nuclear Information System (INIS)

    Desroches, Joannie; Bouchard, Hugo; Lacroix, Frederic

    2010-01-01

    Purpose: The purpose of this study is to determine the effect on the measured optical density of scanning on either side of a Gafchromic EBT and EBT2 film using an Epson (Epson Canada Ltd., Toronto, Ontario) 10000XL flat bed scanner. Methods: Calibration curves were constructed using EBT2 film scanned in landscape orientation in both reflection and transmission mode on an Epson 10000XL scanner. Calibration curves were also constructed using EBT film. Potential errors due to an optical density difference from scanning the film on either side ("face up" or "face down") were simulated. Results: Scanning the film face up or face down on the scanner bed while keeping the film angular orientation constant affects the measured optical density when scanning in reflection mode. In contrast, no statistically significant effect was seen when scanning in transmission mode. This effect can significantly affect relative and absolute dose measurements. As an application example, the authors demonstrate potential errors of 17.8% by inverting the film scanning side on the gamma index for 3%/3 mm criteria on a head and neck intensity modulated radiotherapy plan, and errors in absolute dose measurements ranging from 10% to 35% between 2 and 5 Gy. Conclusions: Process consistency is the key to obtaining accurate and precise results in Gafchromic film dosimetry. When scanning in reflection mode, care must be taken to place the film consistently on the same side on the scanner bed.

  3. Growth of teak regenerated by coppice and stump planting in Mae Moh Plantation, Lampang province, Thailand

    Directory of Open Access Journals (Sweden)

    Anatta Auykim

    2017-08-01

    Full Text Available The current annual increment (CAIdbh) and the mean annual increment (MAIdbh), both for the diameter at breast height (1.3 m), were investigated to compare the differences between coppice and stump-planted teak in Mae Moh Plantation. Forty-eight sample cores were collected from a 9 yr-old teak plantation using an increment borer; annual increments were analyzed using dendrochronological techniques. The results indicated that there was no significant (p > 0.05) difference in the average diameter at breast height (DBH) between the coppice and stump-planted teak, whereas the total height of stump planting was significantly greater than that of coppice teak. The CAIdbh of coppice teak was in the range 0.316–2.371 cm and continuously decreased throughout the 9 yr period. The CAIdbh of stump planting was in the range 0.162–1.982 cm and continuously increased from the beginning of growth for 5 yr followed by a decline thereafter for 4 yr. The CAIdbh of coppice showed rapid growth in the years 1–4 and was greater than for the stump-planted teak even in years 5–8 after planting; however, the growth of the stump-planted teak in the ninth year was higher than for the coppice. The MAIdbh values of coppice and stump-planted teak were not significantly (p > 0.05) different. The results showed that CAIdbh at age 5 yr can be used as a silvicultural guide to increase the yield of teak coppice.

  4. Absolute and Relative Socioeconomic Health Inequalities across Age Groups.

    Science.gov (United States)

    van Zon, Sander K R; Bültmann, Ute; Mendes de Leon, Carlos F; Reijneveld, Sijmen A

    2015-01-01

    The magnitude of socioeconomic health inequalities differs across age groups. It is less clear whether socioeconomic health inequalities differ across age groups by other factors that are known to affect the relation between socioeconomic position and health, like the indicator of socioeconomic position, the health outcome, gender, and as to whether socioeconomic health inequalities are measured in absolute or in relative terms. The aim is to investigate whether absolute and relative socioeconomic health inequalities differ across age groups by indicator of socioeconomic position, health outcome and gender. The study sample was derived from the baseline measurement of the LifeLines Cohort Study and consisted of 95,432 participants. Socioeconomic position was measured as educational level and household income. Physical and mental health were measured with the RAND-36. Age concerned eleven 5-years age groups. Absolute inequalities were examined by comparing means. Relative inequalities were examined by comparing Gini-coefficients. Analyses were performed for both health outcomes by both educational level and household income. Analyses were performed for all age groups, and stratified by gender. Absolute and relative socioeconomic health inequalities differed across age groups by indicator of socioeconomic position, health outcome, and gender. Absolute inequalities were most pronounced for mental health by household income. They were larger in younger than older age groups. Relative inequalities were most pronounced for physical health by educational level. Gini-coefficients were largest in young age groups and smallest in older age groups. Absolute and relative socioeconomic health inequalities differed cross-sectionally across age groups by indicator of socioeconomic position, health outcome and gender. Researchers should critically consider the implications of choosing a specific age group, in addition to the indicator of socioeconomic position and health outcome
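
    For reference, the relative-inequality comparison above rests on Gini-type coefficients. The sketch below computes a plain Gini coefficient from a vector of individual health scores; the concentration-index variants used in inequality research typically rank by socioeconomic position, so treat this only as the generic formula, with invented scores.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative vector (0 = perfect equality)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    total = x.sum()
    # Standard formula: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    return float(2.0 * np.sum(np.arange(1, n + 1) * x) / (n * total) - (n + 1.0) / n)

# Hypothetical RAND-36 physical health scores for a small sample.
scores = [55, 62, 70, 78, 85, 90, 93, 97]
print(round(gini(scores), 3))
```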

  5. Some things ought never be done: moral absolutes in clinical ethics.

    Science.gov (United States)

    Pellegrino, Edmund D

    2005-01-01

    Moral absolutes have little or no moral standing in our morally diverse modern society. Moral relativism is far more palatable for most ethicists and to the public at large. Yet, when pressed, every moral relativist will finally admit that there are some things which ought never be done. It is the rarest of moral relativists that will take rape, murder, theft, child sacrifice as morally neutral choices. In general ethics, the list of those things that must never be done will vary from person to person. In clinical ethics, however, the nature of the physician-patient relationship is such that certain moral absolutes are essential to the attainment of the good of the patient - the end of the relationship itself. These are all derivatives of the first moral absolute of all morality: Do good and avoid evil. In the clinical encounter, this absolute entails several subsidiary absolutes - act for the good of the patient, do not kill, keep promises, protect the dignity of the patient, do not lie, avoid complicity with evil. Each absolute is intrinsic to the healing and helping ends of the clinical encounter.

  6. Relativistic Absolutism in Moral Education.

    Science.gov (United States)

    Vogt, W. Paul

    1982-01-01

    Discusses Emile Durkheim's "Moral Education: A Study in the Theory and Application of the Sociology of Education," which holds that morally healthy societies may vary in culture and organization but must possess absolute rules of moral behavior. Compares this moral theory with current theory and practice of American educators. (MJL)

  7. Verification of pharmacogenetics-based warfarin dosing algorithms in Han-Chinese patients undertaking mechanic heart valve replacement.

    Science.gov (United States)

    Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li

    2014-01-01

    To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undertaking mechanical heart valve replacement. We searched the PubMed, Chinese National Knowledge Infrastructure and Wanfang databases to select pharmacogenetics-based warfarin dosing models. Patients with mechanical heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%), and the mean absolute error (MAE) were utilized to evaluate the predictive accuracy of all the selected algorithms. A total of 8 algorithms, including the Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) models, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day in accuracy and the percentage within 20% exceeded 45% in all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to the warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88-4.38 mg/day) than in the low-dose range (<1.88 mg/day). The pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undertaking mechanical heart valve replacement.
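
    The two accuracy criteria used here are easy to restate in code: the share of patients whose predicted dose lies within ±20% of the actual therapeutic dose, and the MAE in mg/day. The doses below are invented placeholders, not patient data.

```python
import numpy as np

def percentage_within_20(actual, predicted):
    """Share of predictions falling within +/-20% of the actual dose, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    within = np.abs(predicted - actual) <= 0.20 * actual
    return 100.0 * np.mean(within)

def mae(actual, predicted):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(actual, float) - np.asarray(predicted, float))))

# Hypothetical stable warfarin doses (mg/day) vs. algorithm predictions.
actual    = [2.5, 3.0, 1.5, 4.0, 2.0, 3.5]
predicted = [2.8, 2.7, 1.9, 3.6, 2.1, 4.4]
print(percentage_within_20(actual, predicted), mae(actual, predicted))
```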

  8. Health impact on Economy by Artificial Neural Network and Dynamic Ordinary Least Squares

    Directory of Open Access Journals (Sweden)

    Marziyeh Sadat Safe

    2017-10-01

    Full Text Available Introduction: Achievement of economic growth, as one of the most important macroeconomic variables, depends on a precise understanding of its potential routes and the factors affecting it. The aim of this study was to evaluate the health care sector's effect on Iran's Gross Domestic Product (GDP), as the status of the economy. Method: Artificial Neural Network (ANN) and Dynamic Ordinary Least Squares (DOLS) analyses were performed with Iran's GDP as the output variable and the input variables of life expectancy at birth, under-five mortality rate, public health expenditures, and the number of doctors and hospital beds during 1961-2012 in Iran. Data were collected from the Statistical Center of Iran, the Central Bank of the Islamic Republic of Iran, the World Health Organization and the World Bank databases. Data management and analysis were performed using EViews 7, Stata 11 and MATLAB. MSE, MAE and R² were calculated to assess and compare the models. Results: A one percent reduction in deaths of children under 5 years could improve Iran's GDP by as much as 1.9%. Additionally, a one percent increase in the number of doctors, hospital beds or health expenditure would increase GDP by 0.37%, 0.27% and 0.29%, respectively. The Mean Absolute Error (MAE) demonstrated the superiority of DOLS in the model estimation. Conclusion: The lack of sufficient consideration and of adequate models for the health care sector is the main reason for underestimating this sector's effect on the economy. This limitation leads to neglect of resource allocation to the health care sector, which is a great potential driver of economic growth.

  9. Harmonize input selection for sediment transport prediction

    Science.gov (United States)

    Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed

    2017-09-01

    In this paper, three modeling approaches using a Neural Network (NN), the Response Surface Method (RSM) and a response surface method based on Global Harmony Search (GHS) are applied to predict the daily time series suspended sediment load. Generally, the input variables for forecasting the suspended sediment load are manually selected based on the maximum correlations of the input variables in the modeling approaches based on NN and RSM. The RSM is improved to select the input variables by using the error terms of the training data based on the GHS, referred to as the response surface method with global harmony search (RSM-GHS) modeling method. The second-order polynomial function with cross terms is applied to calibrate the time series suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross corrections of twenty input variables of antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, including both accuracy and simplicity, are compared through several comparative predicted and error statistics. The results illustrated that the proposed RSM-GHS is as uncomplicated as the RSM but performed better, where fewer errors and better correlation were observed (R = 0.95, MAE = 18.09 (ton/day), RMSE = 25.16 (ton/day)) compared to the ANN (R = 0.91, MAE = 20.17 (ton/day), RMSE = 33.09 (ton/day)) and RSM (R = 0.91, MAE = 20.06 (ton/day), RMSE = 31.92 (ton/day)) for all types of input variables.
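
    The response-surface part, a second-order polynomial in the antecedent inputs including cross terms, can be reproduced with a polynomial feature expansion and ordinary least squares. The harmony-search step that selects inputs by minimizing training error is omitted here, and the data are synthetic placeholders.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
# Hypothetical inputs: antecedent suspended sediment loads and water discharges.
X = rng.uniform(0, 1, size=(150, 4))
y = 20 + 15 * X[:, 0] + 8 * X[:, 1] * X[:, 2] + rng.normal(0, 1.0, 150)

# Degree-2 expansion adds squares and cross terms (x_i * x_j) to the regression.
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, y)

pred = rsm.predict(X)
print("MAE (synthetic units):", np.mean(np.abs(y - pred)))
```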

  10. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ⇄ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  11. Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction

    Directory of Open Access Journals (Sweden)

    Tianzhou Chen

    2013-09-01

    Full Text Available Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant within industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented for the compensation of erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes at the top speed. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients for neighbor sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage for intelligent industrial automation.

  12. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining scheme, in which error is corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails: (i) when bit error locations in erroneous copies are the same and (ii) when multiple bit errors occur. Both of these have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) Scheme and the Modified Packet Combining (MPC) Scheme, respectively. In this letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)

  13. Effekten af absolut kumulation

    DEFF Research Database (Denmark)

    Kyvsgaard, Britta; Klement, Christian

    2012-01-01

    As part of the 2011 Finance Act, the government and the parties to the agreement decided to examine the rules on sentencing when several criminal offences are adjudicated together and, in that connection, to assess the consequences of changing the current rules with respect to the capacity needs of the Danish Prison and Probation Service (Kriminalforsorgen)... total fine amount under absolute cumulation relative to the moderated cumulation that is currently in force....

  14. Some absolutely effective product methods

    Directory of Open Access Journals (Sweden)

    H. P. Dikshit

    1992-01-01

    Full Text Available It is proved that the product method A(C,1), where (C,1) is the Cesàro arithmetic mean matrix, is totally effective under certain conditions concerning the matrix A. This general result is applied to study absolute Nörlund summability of Fourier series and other related series.

  15. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. In particular, the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if a thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  16. Absolute measurement method of environment radon content

    International Nuclear Information System (INIS)

    Ji Changsong

    1989-11-01

    A portable environmental radon content device with a 40 liter decay chamber, based on the Thomas double-filter method for absolute measurement of radon content, has been developed. The correctness of the Thomas double-filter absolute measurement method has been verified by experiments measuring the sampling gas density of radon for which the theoretical density was known. In addition, the intrinsic uncertainty of this method was also determined in the experiments. The confidence of this device is about 95%, the sensitivity is better than 0.37 Bq·m⁻³ and the intrinsic uncertainty is less than 10%. The results show that the selected measuring and structural parameters are reasonable and the experimental methods are acceptable. In this method, the influence on the measured values of the radioactive equilibrium of radon and its daughters, the ratio of combined daughters to total daughters and the fraction of charged particles has been excluded in the theory and experimental methods. The Thomas double-filter formula for absolute radon measurement is applicable to the cylindrical decay chamber, and its applicability is also verified when the diameter of the exit filter is much smaller than the diameter of the inlet filter

  17. Determination of flumequine and oxolinic acid in sediments and soils by microwave-assisted extraction and liquid chromatography-fluorescence

    International Nuclear Information System (INIS)

    Prat, M.D.; Ramil, D.; Compano, R.; Hernandez-Arteseros, J.A.; Granados, M.

    2006-01-01

    A method is reported for the determination of the quinolones oxolinic acid and flumequine in aquatic sediments and agricultural soils. The analytes are extracted by liquid-liquid partitioning between a sample homogenized in an aqueous buffer solution and dichloromethane. Microwave-assisted extraction (MAE) was tested to improve the speed and efficiency of the extraction process, and the parameters affecting the efficiency of MAE, such as irradiation time and temperature, were studied. The clean-up consists of back-extraction into 1 M sodium hydroxide. The determination is carried out by reversed-phase liquid chromatography on an octyl silica-based column with fluorimetric detection. The optimised method was applied to the analysis of two sediments and one agricultural soil spiked with the analytes. The absolute recovery rates for the whole process range from 79% to 94% (RSD 3-7%), and detection limits are at the low μg kg⁻¹ level

  18. Study on absolute humidity influence of NRL-1 measuring apparatus for radon

    International Nuclear Information System (INIS)

    Shan Jian; Xiao Detao; Zhao Guizhi; Zhou Qingzhi; Liu Yan; Qiu Shoukang; Meng Yecheng; Xiong Xinming; Liu Xiaosong; Ma Wenrong

    2014-01-01

    The effects of absolute humidity and temperature on the NRL-1 measuring apparatus for radon were studied in this paper. By controlling the radon activity concentration in the radon laboratory of the University of South China and improving the temperature and humidity adjustment strategy, correction factor values were obtained for different absolute humidities, and a correction curve between 1.90 and 14.91 g/m³ was also attained. The results show that when the absolute humidity is less than 2.4 g/m³, the collection efficiency of the NRL-1 measuring apparatus for radon tends to be constant and the absolute humidity correction factor is close to 1; above this value, however, the correction factor increases nonlinearly with absolute humidity. (authors)

  19. Performance evaluations of continuous glucose monitoring systems: precision absolute relative deviation is part of the assessment.

    Science.gov (United States)

    Obermaier, Karin; Schmelzeisen-Redeker, Günther; Schoemaker, Michael; Klötzer, Hans-Martin; Kirchsteiger, Harald; Eikmeier, Heino; del Re, Luigi

    2013-07-01

    Even though a Clinical and Laboratory Standards Institute proposal exists on the design of studies and performance criteria for continuous glucose monitoring (CGM) systems, it has not yet led to a consistent evaluation of different systems, as no consensus has been reached on the reference method to evaluate them or on acceptance levels. As a consequence, performance assessment of CGM systems tends to be inconclusive, and a comparison of the outcome of different studies is difficult. Published information and available data (as presented in this issue of Journal of Diabetes Science and Technology by Freckmann and coauthors) are used to assess the suitability of several frequently used methods [International Organization for Standardization, continuous glucose error grid analysis, mean absolute relative deviation (MARD), precision absolute relative deviation (PARD)] when assessing performance of CGM systems in terms of accuracy and precision. The combined use of MARD and PARD seems to allow for better characterization of sensor performance. The use of different quantities for calibration and evaluation, e.g., capillary blood using a blood glucose (BG) meter versus venous blood using a laboratory measurement, introduces an additional error source. Using BG values measured in more or less large intervals as the only reference leads to a significant loss of information in comparison with the continuous sensor signal and possibly to an erroneous estimation of sensor performance during swings. Both can be improved using data from two identical CGM sensors worn by the same patient in parallel. Evaluation of CGM performance studies should follow an identical study design, including sufficient swings in glycemia. At least a part of the study participants should wear two identical CGM sensors in parallel. All data available should be used for evaluation, both by MARD and PARD, a good PARD value being a precondition to trust a good MARD value. Results should be analyzed and

  20. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  1. Absolutely minimal extensions of functions on metric spaces

    International Nuclear Information System (INIS)

    Milman, V A

    1999-01-01

    Extensions of a real-valued function from the boundary ∂X₀ of an open subset X₀ of a metric space (X,d) to X₀ are discussed. For the broad class of initial data coming under discussion (linearly bounded functions) locally Lipschitz extensions to X₀ that preserve localized moduli of continuity are constructed. In the set of these extensions an absolutely minimal extension is selected, which was considered before by Aronsson for Lipschitz initial functions in the case X₀ ⊂ Rⁿ. An absolutely minimal extension can be regarded as an ∞-harmonic function, that is, a limit of p-harmonic functions as p→+∞. The proof of the existence of absolutely minimal extensions in a metric space with intrinsic metric is carried out by the Perron method. To this end, ∞-subharmonic, ∞-superharmonic, and ∞-harmonic functions on a metric space are defined and their properties are established

  2. Absolute carrier phase effects in the two-color excitation of dipolar molecules

    International Nuclear Information System (INIS)

    Brown, Alex; Meath, W.J.; Kondo, A.E.

    2002-01-01

    The pump-probe excitation of a two-level dipolar (d≠0) molecule, where the pump frequency is tuned to the energy level separation while the probe frequency is extremely small, is examined theoretically as an example of absolute phase control of excitation processes. The state populations depend on the probe field's absolute carrier phase but are independent of the pump field's absolute carrier phase. Interestingly, the absolute phase effects occur for pulse durations much longer and field intensities much weaker than those required to see such effects in single pulse excitation

  3. Determination of absolute detection efficiencies for detectors of interest in homeland security

    International Nuclear Information System (INIS)

    Ayaz-Maierhafer, Birsen; DeVol, Timothy A.

    2007-01-01

    The absolute total and absolute peak detection efficiencies of the gamma-ray detector materials NaI:Tl, CdZnTe, HPGe, HPXe, LaBr₃:Ce and LaCl₃:Ce were simulated and compared to that of polyvinyltoluene (PVT). The dimensions of the PVT detector were 188.82 cm × 60.96 cm × 5.08 cm, which is a typical size for a single-panel portal monitor. The absolute total and peak detection efficiencies for these detector materials for point, line and spherical source geometries of ⁶⁰Co (1332 keV), ¹³⁷Cs (662 keV) and ²⁴¹Am (59.5 keV) were simulated at various source-to-detector distances using the Monte Carlo N-Particle software (MCNP5-V1.30). The comparison of the absolute total detection efficiencies for point, line and spherical source geometries of ⁶⁰Co and ¹³⁷Cs at different source-to-detector distances showed that the absolute detection efficiency for PVT is higher relative to the other detectors of typical dimensions for that material. However, the absolute peak detection efficiencies of some of these detectors are higher relative to PVT; for example, the absolute peak detection efficiencies of NaI:Tl (7.62 cm diameter × 7.62 cm long), HPGe (7.62 cm diameter × 7.62 cm long), HPXe (11.43 cm diameter × 60.96 cm long), and LaCl₃:Ce (5.08 cm diameter × 5.08 cm long) are all greater than that of the 188.82 cm × 60.96 cm × 5.08 cm PVT detector for ⁶⁰Co and ¹³⁷Cs for all geometries studied. The absolute total and absolute peak detection efficiencies of a right circular cylinder of NaI:Tl with various diameters and thicknesses were determined for a point source. The effect of changing the solid angle on the NaI:Tl detectors showed that with increasing solid angle and detector thickness, the absolute efficiency increases. This work establishes a common basis for differentiating detector materials for passive portal monitoring of gamma-ray radiation

  4. Regional and site-specific absolute humidity data for use in tritium dose calculations

    International Nuclear Information System (INIS)

    Etnier, E.L.

    1980-01-01

    Due to the potential variability in average absolute humidity over the continental U.S., and the dependence of atmospheric ³H specific activity on absolute humidity, availability of regional absolute humidity data is of value in estimating the radiological significance of ³H releases. Most climatological data are in the form of relative humidity, which must be converted to absolute humidity for dose calculations. Absolute humidity was calculated for 218 points across the U.S., using the 1977 annual summary of U.S. Climatological Data, and is given in a table. Mean regional values are shown on a map. (author)

  5. Absolute decay parametric instability of high-temperature plasma

    International Nuclear Information System (INIS)

    Zozulya, A.A.; Silin, V.P.; Tikhonchuk, V.T.

    1986-01-01

    A new absolute decay parametric instability having a wide spatial localization region is shown to be possible near the critical plasma density. Its excitation is conditioned by distributed feedback of counter-running Langmuir waves occurring during parametric decay of the incident and reflected pumping wave components. In a hot plasma with a temperature of the order of a kiloelectronvolt its threshold is lower than that of the known convective decay parametric instability. The minimum absolute instability threshold is shown to be realized under conditions of spatial parametric resonance of higher orders

  6. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
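    A toy numerical illustration of the two approaches described in this record is sketched below, under the assumption that the observable depends linearly on the systematic parameters; the sensitivities, noise level and number of runs are invented for the example and are not taken from the paper.

```python
# Toy comparison of the unisim and multisim estimates of an overall systematic error,
# assuming the observable depends linearly on the systematic parameters. All numbers
# (sensitivities, MC statistical noise, number of runs) are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
sensitivities = np.array([0.8, -0.5, 0.3])      # d(observable)/d(parameter), per sigma

def mc_run(params, mc_stat_sigma=0.2):
    # one "MC run": linear response plus statistical noise from a finite MC sample
    return sensitivities @ params + rng.normal(0.0, mc_stat_sigma)

nominal = mc_run(np.zeros(3))

# unisim: vary one parameter at a time by one standard deviation, add shifts in quadrature
shifts = [mc_run(np.eye(3)[i]) - nominal for i in range(3)]
unisim_error = np.sqrt(sum(s**2 for s in shifts))

# multisim: every run varies all parameters, each drawn from its (normal) distribution
runs = [mc_run(rng.normal(0.0, 1.0, size=3)) for _ in range(500)]
multisim_error = np.std(runs)

print(f"unisim estimate:     {unisim_error:.2f}")
print(f"multisim estimate:   {multisim_error:.2f}")
print(f"true quadrature sum: {np.sqrt((sensitivities**2).sum()):.2f}")
```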

  7. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  8. Confidence-Accuracy Calibration in Absolute and Relative Face Recognition Judgments

    Science.gov (United States)

    Weber, Nathan; Brewer, Neil

    2004-01-01

    Confidence-accuracy (CA) calibration was examined for absolute and relative face recognition judgments as well as for recognition judgments from groups of stimuli presented simultaneously or sequentially (i.e., simultaneous or sequential mini-lineups). When the effect of difficulty was controlled, absolute and relative judgments produced…

  9. Auditory working memory predicts individual differences in absolute pitch learning.

    Science.gov (United States)

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

    Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization - to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition. Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. Implications for understanding the mechanisms that underlie the

  10. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  11. El problema de la conciencia en Los errores de José Revueltas

    Directory of Open Access Journals (Sweden)

    Evodio Escalante Betancourt

    2014-07-01

    Full Text Available Los errores is the great novel that José Revueltas was destined to write. It concentrates his period of intellectual, philosophical and literary maturity. "Man is an erroneous being, and therein lies his tragic condition," says Jacobo Ponce, a character in the novel and Revueltas's alter ego. Reducing error to the breadth of the thinnest diameter of a hair, set in cosmic dimensions, reveals itself as an abyss when placed in relation to the category of absolute knowledge that G. W. F. Hegel prefigures in his Phenomenology of Spirit. This brief essay gives an account of Revueltas's intellectual reflections on Hegel's postulates concerning self-consciousness and absolute knowledge.

  12. Calibrating the absolute amplitude scale for air showers measured at LOFAR

    International Nuclear Information System (INIS)

    Nelles, A.; Hörandel, J. R.; Karskens, T.; Krause, M.; Corstanje, A.; Enriquez, J. E.; Falcke, H.; Rachen, J. P.; Rossetto, L.; Schellart, P.; Buitink, S.; Erdmann, M.; Krause, R.; Haungs, A.; Hiller, R.; Huege, T.; Link, K.; Schröder, F. G.; Norden, M. J.; Scholten, O.

    2015-01-01

    Air showers induced by cosmic rays create nanosecond pulses detectable at radio frequencies. These pulses have been measured successfully in the past few years at the LOw-Frequency ARray (LOFAR) and are used to study the properties of cosmic rays. For a complete understanding of this phenomenon and the underlying physical processes, an absolute calibration of the detecting antenna system is needed. We present three approaches that were used to check and improve the antenna model of LOFAR and to provide an absolute calibration of the whole system for air shower measurements. Two methods are based on calibrated reference sources and one on a calibration approach using the diffuse radio emission of the Galaxy, optimized for short data-sets. An accuracy of 19% in amplitude is reached. The absolute calibration is also compared to predictions from air shower simulations. These results are used to set an absolute energy scale for air shower measurements and can be used as a basis for an absolute scale for the measurement of astronomical transients with LOFAR

  13. New design and facilities for the International Database for Absolute Gravity Measurements (AGrav): A support for the Establishment of a new Global Absolute Gravity Reference System

    Science.gov (United States)

    Wziontek, Hartmut; Falk, Reinhard; Bonvalot, Sylvain; Rülke, Axel

    2017-04-01

    After about 10 years of successful joint operation by BGI and BKG, the International Database for Absolute Gravity Measurements "AGrav" (see references hereafter) underwent a major revision. The outdated web interface was replaced by a responsive, high-level web application framework based on Python and built on top of Pyramid. Functionality was added, such as interactive time series plots and a report generator, and the interactive map-based station overview was updated completely, now comprising clustering and the classification of stations. Furthermore, the database backend was migrated to PostgreSQL for better support of the application framework and long-term availability. As comparisons of absolute gravimeters (AG) become essential to realize a precise and uniform gravity standard, the database was extended to document the results at international and regional level, including those performed at monitoring stations equipped with SGs. By this it will be possible to link different AGs and to trace their equivalence back to the key comparisons under the auspices of the International Committee for Weights and Measures (CIPM) as the best metrological realization of the absolute gravity standard. In this way the new AGrav database accommodates the demands of the new Global Absolute Gravity Reference System as recommended by IAG Resolution No. 2 adopted in Prague 2015. The new database will be presented with focus on the new user interface and new functionality, calling on all institutions involved in absolute gravimetry to participate and contribute their information to build up as complete a picture as possible of high-precision absolute gravimetry and improve its visibility. A Digital Object Identifier (DOI) will be provided by BGI to contributors to give better traceability and facilitate the referencing of their gravity surveys. Links and references: BGI mirror site : http://bgi.obs-mip.fr/data-products/Gravity-Databases/Absolute-Gravity-data/ BKG mirror site: http

  14. Absolute nutrient concentration measurements in cell culture media: 1H q-NMR spectra and data to compare the efficiency of pH-controlled protein precipitation versus CPMG or post-processing filtering approaches

    Directory of Open Access Journals (Sweden)

    Luca Goldoni

    2016-09-01

    Full Text Available The NMR spectra and data reported in this article refer to the research article titled “A simple and accurate protocol for absolute polar metabolite quantification in cell cultures using q-NMR” [1]. We provide the ¹H q-NMR spectra of cell culture media (DMEM) after removal of serum proteins, which show the different efficiency of various precipitating solvents, the solvent/DMEM ratios, and the pH of the solution. We compare the data of the absolute nutrient concentrations, measured by the PULCON external standard method, before and after precipitation of serum proteins with those obtained using the CPMG (Carr-Purcell-Meiboom-Gill) sequence or applying post-processing filtering algorithms to remove the protein signal contribution from the ¹H q-NMR spectra. For each of these approaches, the percent error in the absolute value of every measurement for all the nutrients is also plotted as an accuracy assessment. Keywords: ¹H NMR, pH-controlled serum removal, PULCON, Accuracy, CPMG, Deconvolution

  15. The good, the bad and the outliers: automated detection of errors and outliers from groundwater hydrographs

    Science.gov (United States)

    Peterson, Tim J.; Western, Andrew W.; Cheng, Xiang

    2018-03-01

    Suspicious groundwater-level observations are common and can arise for many reasons ranging from an unforeseen biophysical process to bore failure and data management errors. Unforeseen observations may provide valuable insights that challenge existing expectations and can be deemed outliers, while monitoring and data handling failures can be deemed errors, and, if ignored, may compromise trend analysis and groundwater model calibration. Ideally, outliers and errors should be identified but to date this has been a subjective process that is not reproducible and is inefficient. This paper presents an approach to objectively and efficiently identify multiple types of errors and outliers. The approach requires only the observed groundwater hydrograph, requires no particular consideration of the hydrogeology, the drivers (e.g. pumping) or the monitoring frequency, and is freely available in the HydroSight toolbox. Herein, the algorithms and time-series model are detailed and applied to four observation bores with varying dynamics. The detection of outliers was most reliable when the observation data were acquired quarterly or more frequently. Outlier detection where the groundwater-level variance is nonstationary or the absolute trend increases rapidly was more challenging, with the former likely to result in an under-estimation of the number of outliers and the latter an overestimation in the number of outliers.

  16. Absolute cross sections from the ''boomerang model'' for resonant electron-molecule scattering

    International Nuclear Information System (INIS)

    Dube, L.; Herzenberg, A.

    1979-01-01

    The boomerang model is used to calculate absolute cross sections near the ²Πg shape resonance in e-N₂ scattering. The calculated cross sections are shown to satisfy detailed balancing. The exchange of electrons is taken into account. A parametrized complex-potential curve for the intermediate N₂⁻ ion is determined from a small part of the experimental data, and then used to calculate other properties. The calculations are in good agreement with the absolute cross sections for vibrational excitation from the ground state, the absolute cross section v = 1 → 2, and the absolute total cross section

  17. Monitoring of chlorophyll-a and sea surface silicate concentrations in the south part of Cheju island in the East China sea using MODIS data

    Science.gov (United States)

    Zhang, Yuanzhi; Huang, Zhaojun; Fu, Dongyang; Tsou, Jin Yeu; Jiang, Tingchen; Liang, X. San; Lu, Xia

    2018-05-01

    Continually supplied with nutrients, phytoplankton maintains high productivity under ideal illumination and temperature conditions. Data in the south part of Cheju Island in the East China Sea (ECS), which has experienced a spring bloom since the 2000s, were acquired during a research cruise in the spring of 2007. Compared with in-situ measurements, MODIS chlorophyll-a measurements showed high stability in this area. Excluding some invalid stations data, the relationships between nutrients and chlorophyll-a concentrations in the study area were examined and compared with the results in 2015. A high positive correlation between silicate and chlorophyll-a concentration was identified, and a regression relationship was proposed. MODIS chlorophyll-a measurements and sea surface temperature were utilized to determine surface silicate distribution. The silicate concentration retrieved from MODIS exhibited good agreement with in-situ measurements with R2 of 0.803, root mean square error (RMSE) of 0.326 μmol/L (8.23%), and mean absolute error (MAE) of 0.925 μmol/L (23.38%). The study provides a new solution to identify nutrient distributions using satellite data such as MODIS for water bodies, but the method still needs to be refined to determine the relationship of chlorophyll-a and nutrients during other seasons to monitor water quality in this and other areas.
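    The agreement statistics quoted in this record (MAE, RMSE, R2) are commonly computed from paired retrieved and in-situ values as in the following sketch; the numbers are placeholders, not the cruise data.

```python
# Common definitions of the agreement statistics quoted above (MAE, RMSE, R2),
# computed from paired satellite-retrieved and in-situ values; the data are made up.
import numpy as np

in_situ   = np.array([2.1, 3.4, 4.0, 5.2, 6.8])   # e.g. silicate, umol/L
retrieved = np.array([2.4, 3.1, 4.3, 5.0, 6.5])

residuals = retrieved - in_situ
mae  = np.mean(np.abs(residuals))                 # mean absolute error
rmse = np.sqrt(np.mean(residuals**2))             # root mean square error
r2   = 1.0 - np.sum(residuals**2) / np.sum((in_situ - in_situ.mean())**2)

print(f"MAE = {mae:.3f}  RMSE = {rmse:.3f}  R2 = {r2:.3f}")
```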

  18. Spatiotemporal Interpolation of Rainfall by Combining BME Theory and Satellite Rainfall Estimates

    Directory of Open Access Journals (Sweden)

    Tingting Shi

    2015-09-01

    Full Text Available The accurate assessment of spatiotemporal rainfall variability is a crucial and challenging task in many hydrological applications, mainly due to the lack of a sufficient number of rain gauges. The purpose of the present study is to investigate the spatiotemporal variations of annual and monthly rainfall over Fujian province in China by combining the Bayesian maximum entropy (BME) method and satellite rainfall estimates. Specifically, based on annual and monthly rainfall data at 20 meteorological stations from 2000 to 2012, (1) the BME method with Tropical Rainfall Measuring Mission (TRMM) estimates considered as soft data, (2) ordinary kriging (OK) and (3) cokriging (CK) were employed to model the spatiotemporal variations of rainfall in Fujian province. Subsequently, the performance of these methods was evaluated using cross-validation statistics. The results demonstrated that BME with TRMM as soft data (BME-TRMM) performed better than the other two methods, generating rainfall maps that represented the local rainfall disparities in a more realistic manner. Of the three interpolation (mapping) methods, the mean absolute error (MAE) and root mean square error (RMSE) values of the BME-TRMM method were the smallest. In conclusion, the BME-TRMM method improved spatiotemporal rainfall modeling and mapping by integrating hard data and soft information. Lastly, the study identified new opportunities concerning the application of TRMM rainfall estimates.

  19. Direct Quantification of Cd2+ in the Presence of Cu2+ by a Combination of Anodic Stripping Voltammetry Using a Bi-Film-Modified Glassy Carbon Electrode and an Artificial Neural Network.

    Science.gov (United States)

    Zhao, Guo; Wang, Hui; Liu, Gang

    2017-07-03

    Abstract: In this study, a novel method based on a Bi/glassy carbon electrode (Bi/GCE) for quantitatively and directly detecting Cd²⁺ in the presence of Cu²⁺ without further electrode modifications by combining square-wave anodic stripping voltammetry (SWASV) and a back-propagation artificial neural network (BP-ANN) has been proposed. The influence of the Cu²⁺ concentration on the stripping response to Cd²⁺ was studied. In addition, the effect of the ferrocyanide concentration on the SWASV detection of Cd²⁺ in the presence of Cu²⁺ was investigated. A BP-ANN with two inputs and one output was used to establish the nonlinear relationship between the concentration of Cd²⁺ and the stripping peak currents of Cu²⁺ and Cd²⁺. The factors affecting the SWASV detection of Cd²⁺ and the key parameters of the BP-ANN were optimized. Moreover, the direct calibration model (i.e., adding 0.1 mM ferrocyanide before detection), the BP-ANN model and other prediction models were compared to verify the prediction performance of these models in terms of their mean absolute errors (MAEs), root mean square errors (RMSEs) and correlation coefficients. The BP-ANN model exhibited higher prediction accuracy than the direct calibration model and the other prediction models. Finally, the proposed method was used to detect Cd²⁺ in soil samples with satisfactory results.
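    A minimal sketch of a two-input, one-output back-propagation network of the kind described above is given below, with the stripping peak currents of Cd and Cu as inputs and the Cd concentration as output; scikit-learn's MLPRegressor stands in for the paper's BP-ANN, and the training data are synthetic placeholders rather than the reported measurements.

```python
# Minimal two-input / one-output back-propagation network in the spirit of the record:
# stripping peak currents of Cd and Cu in, Cd2+ concentration out. scikit-learn's
# MLPRegressor stands in for the paper's BP-ANN; the data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
i_cd = rng.uniform(0.5, 5.0, 200)                 # Cd stripping peak current (arb. units)
i_cu = rng.uniform(0.5, 5.0, 200)                 # Cu stripping peak current (arb. units)
# assumed nonlinear dependence of the Cd concentration on both peak currents
c_cd = 2.0 * i_cd - 0.3 * i_cd * i_cu + rng.normal(0, 0.05, 200)

X = np.column_stack([i_cd, i_cu])
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X[:150], c_cd[:150])                    # train on the first 150 samples

pred = model.predict(X[150:])                     # evaluate on the held-out 50 samples
mae  = np.mean(np.abs(pred - c_cd[150:]))
rmse = np.sqrt(np.mean((pred - c_cd[150:])**2))
print(f"MAE = {mae:.3f}  RMSE = {rmse:.3f}")
```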

  20. Adaptive Neuro-Fuzzy Computing Technique for Determining Turbulent Flow Friction Coefficient

    Directory of Open Access Journals (Sweden)

    Mohammad Givehchi

    2013-08-01

    Full Text Available Estimation of the friction coefficient in pipes is very important in many water and wastewater engineering issues, such as distribution of velocity and shear stress, erosion, sediment transport and head loss. In analyzing these problems, knowing the friction coefficient allows more accurate estimates to be obtained. In this study, in order to estimate the friction coefficient in pipes using adaptive neuro-fuzzy inference systems (ANFIS), the grid partition method was used. For training and testing of the neuro-fuzzy model, data derived from Colebrook's equation were used. In the neuro-fuzzy approach, pipe relative roughness and Reynolds number are considered as input variables and the friction coefficient as the output variable. Performance of the proposed approach was evaluated using the data obtained from Colebrook's equation and based on statistical indicators such as the coefficient of determination (R²), root mean squared error (RMSE) and mean absolute error (MAE). The results showed that the adaptive neuro-fuzzy inference system with the grid partition method, a Gaussian model as the input membership function and a linear output function could estimate the friction coefficient more accurately than other configurations. The new approach proposed in this paper can be applied to practical design issues and can be combined with mathematical and numerical models of sediment transfer or real-time updating of these models.
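    The training and testing data in this record come from the Colebrook equation, which is implicit in the friction coefficient and is typically solved by iteration, for example by the fixed-point sketch below; the initial guess and tolerance are arbitrary choices made for the illustration.

```python
# Fixed-point solution of the Colebrook equation, the source of the training and
# testing data described above: 1/sqrt(f) = -2*log10(eps_rel/3.7 + 2.51/(Re*sqrt(f))).
# The initial guess and tolerance are arbitrary choices for the illustration.
import math

def colebrook_friction(relative_roughness: float, reynolds: float,
                       tol: float = 1e-10, max_iter: int = 100) -> float:
    f = 0.02                                      # initial guess for the friction factor
    for _ in range(max_iter):
        rhs = -2.0 * math.log10(relative_roughness / 3.7
                                + 2.51 / (reynolds * math.sqrt(f)))
        f_new = 1.0 / rhs**2
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f

print(colebrook_friction(relative_roughness=1e-4, reynolds=1e5))   # ~0.0185
```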

  1. Modelling of PM10 concentration for industrialized area in Malaysia: A case study in Shah Alam

    Science.gov (United States)

    N, Norazian Mohamed; Abdullah, M. M. A.; Tan, Cheng-yau; Ramli, N. A.; Yahaya, A. S.; Fitri, N. F. M. Y.

    In Malaysia, the predominant air pollutants are suspended particulate matter (SPM) and nitrogen dioxide (NO2). This research focuses on PM10, as it may harm human health as well as the environment. Six distributions, namely Weibull, log-normal, gamma, Rayleigh, Gumbel and Frechet, were chosen to model the PM10 observations at the chosen industrial area, i.e. Shah Alam. One-year periods of hourly average data for 2006 and 2007 were used for this research. For parameter estimation, the method of maximum likelihood estimation (MLE) was selected. Four performance indicators, namely mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R2) and prediction accuracy (PA), were applied to determine the goodness-of-fit of the distributions. The distribution that best fits the PM10 observations in Shah Alam was found to be the log-normal distribution. The probabilities of exceedance concentrations were calculated and the return period for the coming year was predicted from the cumulative density function (cdf) obtained from the best-fit distribution. For the 2006 data, Shah Alam was predicted to exceed 150 μg/m³ for 5.9 days in 2007, with a return period of one occurrence per 62 days. For 2007, the studied area does not exceed the MAAQG of 150 μg/m³
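    The fit-and-exceedance calculation described above can be sketched as follows: a log-normal distribution is fitted to hourly PM10 values by maximum likelihood, the probability of exceeding the 150 μg/m³ guideline is taken from the survival function, and expected exceedance days and a return period are derived from it. The PM10 series below is synthetic, and the conversions shown are one plausible convention, not necessarily the one used in the study.

```python
# Sketch of the fit-and-exceedance calculation: fit a log-normal distribution to
# hourly PM10 by maximum likelihood, estimate the probability of exceeding the
# 150 ug/m3 guideline, and derive expected exceedance days and a return period.
# The PM10 series is synthetic and the conversions are one plausible convention.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pm10 = rng.lognormal(mean=np.log(55.0), sigma=0.45, size=8760)    # one year of hourly data

shape, loc, scale = stats.lognorm.fit(pm10, floc=0)               # MLE fit
p_exceed = stats.lognorm.sf(150.0, shape, loc=loc, scale=scale)   # P(hourly PM10 > 150)

exceedance_days_per_year = p_exceed * 365.0                       # expected exceedance time
return_period_days = 1.0 / p_exceed / 24.0                        # mean spacing between
                                                                  # exceedance hours, in days
print(f"P(PM10 > 150) = {p_exceed:.4f}")
print(f"expected exceedance: {exceedance_days_per_year:.1f} days/yr, "
      f"return period ~ one exceedance hour per {return_period_days:.1f} days")
```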

  2. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  3. Estimation of daily reference evapotranspiration (ETo) using artificial intelligence methods: Offering a new approach for lagged ETo data-based modeling

    Science.gov (United States)

    Mehdizadeh, Saeid

    2018-04-01

    Evapotranspiration (ET) is considered as a key factor in hydrological and climatological studies, agricultural water management, irrigation scheduling, etc. It can be directly measured using lysimeters. Moreover, other methods such as empirical equations and artificial intelligence methods can be used to model ET. In the recent years, artificial intelligence methods have been widely utilized to estimate reference evapotranspiration (ETo). In the present study, local and external performances of multivariate adaptive regression splines (MARS) and gene expression programming (GEP) were assessed for estimating daily ETo. For this aim, daily weather data of six stations with different climates in Iran, namely Urmia and Tabriz (semi-arid), Isfahan and Shiraz (arid), Yazd and Zahedan (hyper-arid) were employed during 2000-2014. Two types of input patterns consisting of weather data-based and lagged ETo data-based scenarios were considered to develop the models. Four statistical indicators including root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and mean absolute percentage error (MAPE) were used to check the accuracy of models. The local performance of models revealed that the MARS and GEP approaches have the capability to estimate daily ETo using the meteorological parameters and the lagged ETo data as inputs. Nevertheless, the MARS had the best performance in the weather data-based scenarios. On the other hand, considerable differences were not observed in the models' accuracy for the lagged ETo data-based scenarios. In the innovation of this study, novel hybrid models were proposed in the lagged ETo data-based scenarios through combination of MARS and GEP models with autoregressive conditional heteroscedasticity (ARCH) time series model. It was concluded that the proposed novel models named MARS-ARCH and GEP-ARCH improved the performance of ETo modeling compared to the single MARS and GEP. In addition, the external
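    The lagged-ETo input scenario described above uses previous days' ETo values as predictors for the current day; the sketch below builds such lag features and scores a fit with the four indicators (RMSE, MAE, R2, MAPE), with ordinary linear regression standing in for the MARS and GEP models and a synthetic ETo series in place of the station data.

```python
# Lagged-ETo input scenario: previous days' ETo values predict the current day, and
# the fit is scored with RMSE, MAE, R2 and MAPE. Ordinary linear regression stands in
# for the MARS/GEP models; the ETo series and the number of lags are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
days = np.arange(730)
eto = 4.0 + 2.0 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 0.3, days.size)

n_lags = 3
X = np.column_stack([eto[i:len(eto) - n_lags + i] for i in range(n_lags)])  # ETo(t-3..t-1)
y = eto[n_lags:]                                                            # ETo(t)

model = LinearRegression().fit(X[:500], y[:500])
res = model.predict(X[500:]) - y[500:]

rmse = np.sqrt(np.mean(res**2))
mae  = np.mean(np.abs(res))
r2   = 1 - np.sum(res**2) / np.sum((y[500:] - y[500:].mean())**2)
mape = np.mean(np.abs(res / y[500:])) * 100
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  R2={r2:.3f}  MAPE={mape:.1f}%")
```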

  4. A highly accurate absolute gravimetric network for Albania, Kosovo and Montenegro

    Science.gov (United States)

    Ullrich, Christian; Ruess, Diethard; Butta, Hubert; Qirko, Kristaq; Pavicevic, Bozidar; Murat, Meha

    2016-04-01

    The objective of this project is to establish a basic gravity network in Albania, Kosovo and Montenegro to enable further investigations of geodetic and geophysical issues. For this purpose, absolute gravity measurements were performed in these countries for the first time in history. The Norwegian mapping authority Kartverket is assisting the national mapping authorities in Kosovo (KCA) (Kosovo Cadastral Agency - Agjencia Kadastrale e Kosovës), Albania (ASIG) (Autoriteti Shtetëror i Informacionit Gjeohapësinor) and in Montenegro (REA) (Real Estate Administration of Montenegro - Uprava za nekretnine Crne Gore) in improving the geodetic frameworks. The gravity measurements are funded by Kartverket. The absolute gravimetric measurements were performed by BEV (Federal Office of Metrology and Surveying) with the absolute gravimeter FG5-242. As a national metrology institute (NMI), the Metrology Service of the BEV maintains the national standards for the realisation of the legal units of measurement and ensures their international equivalence and recognition. The laser and clock of the absolute gravimeter were calibrated before and after the measurements. The absolute gravimetric survey was carried out from September to October 2015, and all 8 scheduled stations were successfully measured: three stations are located in Montenegro, two in Kosovo and three in Albania. The stations are distributed over the countries to establish a gravity network for each country. The vertical gradients were measured at all 8 stations with the relative gravimeter Scintrex CG5. The high quality of some absolute gravity stations can be used for gravity monitoring activities in the future. The measurement uncertainties of the absolute gravity measurements are around 2.5 μGal at all stations (1 μGal = 10⁻⁸ m/s²). In Montenegro the large gravity difference of 200 mGal between the Zabljak and Podgorica stations can even be used for calibration of relative gravimeters

  5. Absolute Hugoniot measurements from a spherically convergent shock using x-ray radiography

    Science.gov (United States)

    Swift, Damian C.; Kritcher, Andrea L.; Hawreliak, James A.; Lazicki, Amy; MacPhee, Andrew; Bachmann, Benjamin; Döppner, Tilo; Nilsen, Joseph; Collins, Gilbert W.; Glenzer, Siegfried; Rothman, Stephen D.; Kraus, Dominik; Falcone, Roger W.

    2018-05-01

    The canonical high pressure equation of state measurement is to induce a shock wave in the sample material and measure two mechanical properties of the shocked material or shock wave. For accurate measurements, the experiment is normally designed to generate a planar shock which is as steady as possible in space and time, and a single state is measured. A converging shock strengthens as it propagates, so a range of shock pressures is induced in a single experiment. However, equation of state measurements must then account for spatial and temporal gradients. We have used x-ray radiography of spherically converging shocks to determine states along the shock Hugoniot. The radius-time history of the shock, and thus its speed, was measured by radiographing the position of the shock front as a function of time using an x-ray streak camera. The density profile of the shock was then inferred from the x-ray transmission at each instant of time. Simultaneous measurement of the density at the shock front and the shock speed determines an absolute mechanical Hugoniot state. The density profile was reconstructed using the known, unshocked density which strongly constrains the density jump at the shock front. The radiographic configuration and streak camera behavior were treated in detail to reduce systematic errors. Measurements were performed on the Omega and National Ignition Facility lasers, using a hohlraum to induce a spatially uniform drive over the outside of a solid, spherical sample and a laser-heated thermal plasma as an x-ray source for radiography. Absolute shock Hugoniot measurements were demonstrated for carbon-containing samples of different composition and initial density, up to temperatures at which K-shell ionization reduced the opacity behind the shock. Here we present the experimental method using measurements of polystyrene as an example.
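    The measurement principle rests on the Rankine-Hugoniot jump conditions: the measured shock speed and the density jump at the shock front together fix the particle velocity and pressure of the shocked state, as in the short worked example below (the numbers are illustrative, not results from the experiments).

```python
# Rankine-Hugoniot jump conditions behind the measurement: the shock speed Us and the
# density jump rho1/rho0 fix the particle velocity and pressure of the shocked state.
# The numbers are illustrative only, not results from the experiments.
rho0 = 1.05e3          # initial density of polystyrene, kg/m^3 (approximate)
rho1 = 3.2e3           # density behind the shock front, kg/m^3 (illustrative)
Us   = 30.0e3          # measured shock speed, m/s (illustrative)

up = Us * (1.0 - rho0 / rho1)          # mass conservation: rho0*Us = rho1*(Us - up)
P  = rho0 * Us * up                    # momentum conservation (initial pressure neglected)

print(f"particle velocity up = {up / 1e3:.1f} km/s")
print(f"shock pressure P = {P / 1e9:.0f} GPa")
```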

  6. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used depending on whether - x .GE.4.0. In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function by using the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x
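    The identities and the loss-of-significance caution described above can be demonstrated with the standard-library error functions; the normal probability integral is one of the related functions mentioned in the description.

```python
# The identities and the loss-of-significance caution above, demonstrated with the
# standard-library error functions; the normal probability integral is one of the
# "related functions" mentioned in the description.
import math

x = 5.0
print(math.erfc(x))            # ~1.54e-12, computed directly with full precision
print(1.0 - math.erf(x))       # same value via the identity: significant digits are lost

def normal_cdf(z: float) -> float:
    # probability integral of the standard normal distribution
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(normal_cdf(1.96))        # ~0.975
```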

  7. The Adaptive-Clustering and Error-Correction Method for Forecasting Cyanobacteria Blooms in Lakes and Reservoirs

    Directory of Open Access Journals (Sweden)

    Xiao-zhe Bai

    2017-01-01

    Full Text Available Globally, cyanobacteria blooms frequently occur, and effective prediction of cyanobacteria blooms in lakes and reservoirs could constitute an essential proactive strategy for water-resource protection. However, cyanobacteria blooms are very complicated because of the internal stochastic nature of the system evolution and the external uncertainty of the observation data. In this study, an adaptive-clustering algorithm is introduced to obtain some typical operating intervals. In addition, the number of nearest neighbors used for modeling was optimized by particle swarm optimization. Finally, a fuzzy linear regression method based on error-correction was used to revise the model dynamically near the operating point. We found that the combined method can characterize the evolutionary track of cyanobacteria blooms in lakes and reservoirs. The model constructed in this paper is compared to other cyanobacteria-bloom forecasting methods (e.g., phase space reconstruction and traditional-clustering linear regression), and the average relative error and average absolute error are then used to compare the accuracies of these models. The results suggest that the proposed model is superior. As such, the newly developed approach achieves more precise predictions, which can be used to prevent the further deterioration of the water environment.

  8. Absolute and relative dosimetry for ELIMED

    Energy Technology Data Exchange (ETDEWEB)

    Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Cuttone, G.; Candiano, G.; Musumarra, A.; Pisciotta, P.; Romano, F. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Leonora, E.; Randazzo, N. [INFN-Sezione di Catania, Via Santa Sofia 64, Catania (Italy); Presti, D. Lo [INFN-Sezione di Catania, Via Santa Sofia 64, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy); Raffaele, L. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and INFN-Sezione di Catania, Via Santa Sofia 64, Catania (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy); Cirio, R.; Sacchi, R.; Monaco, V. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino, Italy and Università di Torino, Dipartimento di Fisica, Via P.Giuria, 1 10125 Torino (Italy); Marchetto, F.; Giordanengo, S. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy)

    2013-07-26

    The definition of detectors, methods and procedures for the absolute and relative dosimetry of laser-driven proton beams is a crucial step toward the clinical use of this new kind of beam. Hence, one of the ELIMED tasks will be the definition of procedures aiming to obtain an absolute dose measurement at the end of the transport beamline with an accuracy as close as possible to the one required for clinical applications (i.e. of the order of 5% or less). Relative dosimetry procedures must be established as well: they are necessary in order to determine and verify the beam dose distributions and to monitor the beam fluence and the energy spectra during irradiations. Radiochromic films, CR39, a Faraday cup, a Secondary Emission Monitor (SEM) and a transmission ionization chamber will be considered, designed and studied in order to perform a full dosimetric characterization of the ELIMED proton beam.

  9. Simple method for absolute calibration of geophones, seismometers, and other inertial vibration sensors

    International Nuclear Information System (INIS)

    Kann, Frank van; Winterflood, John

    2005-01-01

    A simple but powerful method is presented for calibrating geophones, seismometers, and other inertial vibration sensors, including passive accelerometers. The method requires no cumbersome or expensive fixtures such as shaker platforms and can be performed using a standard instrument commonly available in the field. An absolute calibration is obtained using the reciprocity property of the device, based on the standard mathematical model for such inertial sensors. It requires only simple electrical measurement of the impedance of the sensor as a function of frequency to determine the parameters of the model and hence the sensitivity function. The method is particularly convenient if one of these parameters, namely the suspended mass is known. In this case, no additional mechanical apparatus is required and only a single set of impedance measurements yields the desired calibration function. Moreover, this measurement can be made with the device in situ. However, the novel and most powerful aspect of the method is its ability to accurately determine the effective suspended mass. For this, the impedance measurement is made with the device hanging from a simple spring or flexible cord (depending on the orientation of its sensitive axis). To complete the calibration, the device is weighed to determine its total mass. All the required calibration parameters, including the suspended mass, are then determined from a least-squares fit to the impedance as a function of frequency. A demonstration using both a 4.5 Hz geophone and a 1 Hz seismometer shows that the method can yield accurate absolute calibrations with an error of 0.1% or better, assuming no a priori knowledge of any parameters
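    A heavily simplified sketch of the impedance-fit step is shown below, assuming a standard coil-resistance-plus-motional-impedance model for the sensor; the model form, parameter grouping and synthetic "measurement" are assumptions made for the illustration and are not taken from the paper.

```python
# Heavily simplified sketch of the impedance-fit step, assuming a coil resistance plus
# the standard motional impedance of a spring-mass-damper sensor:
#     |Z(w)| = |R + (G^2/m) / (b/m + j*(w - w0^2/w))|
# The model form, parameter grouping and synthetic "measurement" are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def impedance_mag(f, R, A, gamma, f0):            # A = G^2/m, gamma = b/m
    w, w0 = 2 * np.pi * f, 2 * np.pi * f0
    return np.abs(R + A / (gamma + 1j * (w - w0**2 / w)))

freqs = np.linspace(1.0, 50.0, 200)               # Hz
true_params = (380.0, 5.0e4, 15.0, 4.5)
rng = np.random.default_rng(3)
measured = impedance_mag(freqs, *true_params) * (1 + rng.normal(0, 0.005, freqs.size))

popt, _ = curve_fit(impedance_mag, freqs, measured, p0=(300.0, 1.0e4, 10.0, 4.0))
print(dict(zip(["R_ohm", "G2_over_m", "b_over_m", "f0_Hz"], np.round(popt, 2))))
```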

  10. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  11. Philosophy as Inquiry Aimed at the Absolute Knowledge

    Directory of Open Access Journals (Sweden)

    Ekaterina Snarskaya

    2017-09-01

    Full Text Available Philosophy as absolute knowledge has been studied from two different but closely related approaches: historical and logical. The first approach exposes four main stages in the history of European metaphysics that marked out types of “philosophical absolutism”: the evolution of philosophy brought to light the metaphysics of being, of method, of morals and of logic. All of them are associated with the names of Aristotle, Bacon/Descartes, Kant and Hegel. These forms are then considered in the second approach, which defines them as the subject-matter of philosophy as such. Due to their overall, comprehensive character, the focus of philosophy on them justifies its claim to absoluteness, insofar as philosophy is aimed at comprehension of the world's unity regardless of the philosopher's background, values and other preferences. And that is its prerogative, since no other form of consciousness sets itself this kind of aim. Thus, philosophy is defined as an everlasting attempt to succeed in conceiving the world in all its manifold manifestations. This article attempts to clarify philosophy's claim to absolute knowledge.

  12. Absolute pitch: a case study.

    Science.gov (United States)

    Vernon, P E

    1977-11-01

    The auditory skill known as 'absolute pitch' is discussed, and it is shown that this differs greatly in accuracy of identification or reproduction of musical tones from ordinary discrimination of 'tonal height' which is to some extent trainable. The present writer possessed absolute pitch for almost any tone or chord over the normal musical range, from about the age of 17 to 52. He then started to hear all music one semitone too high, and now at the age of 71 it is heard a full tone above the true pitch. Tests were carried out under controlled conditions, in which 68 to 95 per cent of notes were identified as one semitone or one tone higher than they should be. Changes with ageing seem more likely to occur in the elasticity of the basilar membrane mechanisms than in the long-term memory which is used for aural analysis of complex sounds. Thus this experience supports the view that some resolution of complex sounds takes place at the peripheral sense organ, and this provides information which can be incorrect, for interpretation by the cortical centres.

  13. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    Science.gov (United States)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere
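    The effect of temporally correlated random error on decadal uncertainties, central to the framework described above, can be illustrated with a small simulation in which AR(1)-correlated annual errors inflate the 2σ uncertainty of a decadal mean relative to independent errors; the autocorrelation and error magnitude below are invented for the illustration.

```python
# Why temporally correlated random error matters for decadal uncertainties:
# AR(1)-correlated annual errors inflate the 2-sigma uncertainty of a decadal mean
# relative to independent errors. The error magnitude and autocorrelation are made up.
import numpy as np

rng = np.random.default_rng(2024)
sigma, phi, n_years, n_sim = 0.5, 0.7, 10, 20000   # Pg C/yr, lag-1 autocorrelation

def two_sigma_of_decadal_mean(correlated: bool) -> float:
    means = np.empty(n_sim)
    for i in range(n_sim):
        e = rng.normal(0.0, sigma, n_years)
        if correlated:                              # AR(1): e_t = phi*e_{t-1} + innovation
            for t in range(1, n_years):
                e[t] = phi * e[t - 1] + e[t] * np.sqrt(1 - phi**2)
        means[i] = e.mean()
    return 2.0 * means.std()

print(f"independent errors: 2-sigma = {two_sigma_of_decadal_mean(False):.2f} Pg C/yr")
print(f"AR(1) correlated:   2-sigma = {two_sigma_of_decadal_mean(True):.2f} Pg C/yr")
```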

  14. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
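    To make the unisim/multisim distinction concrete, here is a small Python sketch comparing the two ways of estimating a combined systematic uncertainty for a toy observable. Everything in it (the linear response, the five unit-normal systematic parameters, the event counts, and the number of multisim runs) is an illustrative assumption, not the note's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def observable(params, n_events=10_000):
    """Toy MC 'measurement': the mean of events whose scale depends linearly
    on the systematic parameters (response and sample size are illustrative)."""
    scale = 1.0 + 0.02 * np.sum(params)            # linear systematic response
    events = rng.normal(loc=scale, scale=1.0, size=n_events)
    return events.mean()

n_params = 5
nominal = observable(np.zeros(n_params))

# unisim: vary one parameter at a time by +1 sigma, add the shifts in quadrature
unisim_var = sum((observable(np.eye(n_params)[i]) - nominal) ** 2
                 for i in range(n_params))

# multisim: every run draws all parameters from their (unit normal) priors;
# the spread of the results estimates the combined systematic variance
multisim_runs = [observable(rng.normal(size=n_params)) for _ in range(200)]
multisim_var = np.var(multisim_runs, ddof=1)

print(f"unisim   sigma_sys ~ {np.sqrt(unisim_var):.4f}")
print(f"multisim sigma_sys ~ {np.sqrt(multisim_var):.4f}")
```

    Note how both estimates also pick up the MC statistical noise of each run, which is exactly the trade-off between the two methods that the note analyzes.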

  15. Absolute calibration technique for spontaneous fission sources

    International Nuclear Information System (INIS)

    Zucker, M.S.; Karpf, E.

    1984-01-01

    An absolute calibration technique for a spontaneously fissioning nuclide (which involves no arbitrary parameters) allows unique determination of the detector efficiency for that nuclide, and hence of the fission source strength.

  16. Absolute luminosity measurements with the LHCb detector at the LHC

    CERN Document Server

    Aaij, R; Adinolfi, M; Adrover, C; Affolder, A; Ajaltouni, Z; Albrecht, J; Alessio, F; Alexander, M; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amhis, Y; Anderson, J; Appleby, R B; Aquines Gutierrez, O; Archilli, F; Arrabito, L; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Bachmann, S; Back, J J; Bailey, D S; Balagura, V; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Bates, A; Bauer, C; Bauer, Th; Bay, A; Bediaga, I; Belous, K; Belyaev, I; Ben-Haim, E; Benayoun, M; Bencivenni, G; Benson, S; Benton, J; Bernet, R; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Bizzeti, A; Bjørnstad, P M; Blake, T; Blanc, F; Blanks, C; Blouw, J; Blusk, S; Bobrov, A; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borgia, A; Bowcock, T J V; Bozzi, C; Brambach, T; van den Brand, J; Bressieux, J; Brett, D; Brisbane, S; Britsch, M; Britton, T; Brook, N H; Brown, H; Büchler-Germann, A; Burducea, I; Bursche, A; Buytaert, J; Cadeddu, S; Caicedo Carvajal, J M; Callot, O; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carson, L; Carvalho Akiba, K; Casse, G; Cattaneo, M; Charles, M; Charpentier, Ph; Chiapolini, N; Ciba, K; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coca, C; Coco, V; Cogan, J; Collins, P; Constantin, F; Conti, G; Contu, A; Cook, A; Coombes, M; Corti, G; Cowan, G A; Currie, R; D'Almagne, B; D'Ambrosio, C; David, P; De Bonis, I; De Capua, S; De Cian, M; De Lorenzi, F; De Miranda, J M; De Paula, L; De Simone, P; Decamp, D; Deckenhoff, M; Degaudenzi, H; Deissenroth, M; Del Buono, L; Deplano, C; Deschamps, O; Dettori, F; Dickens, J; Dijkstra, H; Diniz Batista, P; Donleavy, S; Dordei, F; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dupertuis, F; Dzhelyadin, R; Eames, C; Easo, S; Egede, U; Egorychev, V; Eidelman, S; van Eijk, D; Eisele, F; Eisenhardt, S; Ekelhof, R; Eklund, L; Elsasser, Ch; d'Enterria, D G; Esperante Pereira, D; Estève, L; Falabella, A; Fanchini, E; Färber, C; Fardell, G; Farinelli, C; Farry, S; Fave, V; Fernandez Albor, V; Ferro-Luzzi, M; Filippov, S; Fitzpatrick, C; Fontana, M; Fontanelli, F; Forty, R; Frank, M; Frei, C; Frosini, M; Furcas, S; Gallas Torreira, A; Galli, D; Gandelman, M; Gandini, P; Gao, Y; Garnier, J-C; Garofoli, J; Garra Tico, J; Garrido, L; Gaspar, C; Gauvin, N; Gersabeck, M; Gershon, T; Ghez, Ph; Gibson, V; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gordon, H; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graziani, G; Grecu, A; Gregson, S; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Haefeli, G; Haen, C; Haines, S C; Hampson, T; Hansmann-Menzemer, S; Harji, R; Harnew, N; Harrison, J; Harrison, P F; He, J; Heijne, V; Hennessy, K; Henrard, P; Hernando Morata, J A; van Herwijnen, E; Hicks, E; Hofmann, W; Holubyev, K; Hopchev, P; Hulsbergen, W; Hunt, P; Huse, T; Huston, R S; Hutchcroft, D; Hynds, D; Iakovenko, V; Ilten, P; Imong, J; Jacobsson, R; Jaeger, A; Jahjah Hussein, M; Jans, E; Jansen, F; Jaton, P; Jean-Marie, B; Jing, F; John, M; Johnson, D; Jones, C R; Jost, B; Kandybei, S; Karacson, M; Karbach, T M; Keaveney, J; Kerzel, U; Ketel, T; Keune, A; Khanji, B; Kim, Y M; Knecht, M; Koblitz, S; Koppenburg, P; Kozlinskiy, A; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Kruzelecki, K; Kucharczyk, M; Kukulak, S; Kumar, R; Kvaratskheliya, T; La Thi, V N; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lambert, R W; Lanciotti, E; Lanfranchi, G; Langenbruch, C; Latham, T; Le Gac, R; 
van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Leroy, O; Lesiak, T; Li, L; Li Gioi, L; Lieng, M; Liles, M; Lindner, R; Linn, C; Liu, B; Liu, G; Lopes, J H; Lopez Asamar, E; Lopez-March, N; Luisier, J; Machefert, F; Machikhiliyan, I V; Maciuc, F; Maev, O; Magnin, J; Malde, S; Mamunur, R M D; Manca, G; Mancinelli, G; Mangiafave, N; Marconi, U; Märki, R; Marks, J; Martellotti, G; Martens, A; Martin, L; Martín Sánchez, A; Martinez Santos, D; Massafferri, A; Matev, R; Mathe, Z; Matteuzzi, C; Matveev, M; Maurice, E; Maynard, B; Mazurov, A; McGregor, G; McNulty, R; Mclean, C; Meissner, M; Merk, M; Merkel, J; Messi, R; Miglioranzi, S; Milanes, D A; Minard, M-N; Monteil, S; Moran, D; Morawski, P; Mountain, R; Mous, I; Muheim, F; Müller, K; Muresan, R; Muryn, B; Musy, M; Mylroie-Smith, J; Naik, P; Nakada, T; Nandakumar, R; Nardulli, J; Nasteva, I; Nedos, M; Needham, M; Neufeld, N; Nguyen-Mau, C; Nicol, M; Nies, S; Niess, V; Nikitin, N; Oblakowska-Mucha, A; Obraztsov, V; Oggero, S; Ogilvy, S; Okhrimenko, O; Oldeman, R; Orlandea, M; Otalora Goicochea, J M; Owen, P; Pal, B; Palacios, J; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Parkes, C; Parkinson, C J; Passaleva, G; Patel, G D; Patel, M; Paterson, S K; Patrick, G N; Patrignani, C; Pavel-Nicorescu, C; Pazos Alvarez, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perego, D L; Perez Trigo, E; Pérez-Calero Yzquierdo, A; Perret, P; Perrin-Terrin, M; Pessina, G; Petrella, A; Petrolini, A; Pie Valls, B; Pietrzyk, B; Pilar, T; Pinci, D; Plackett, R; Playfer, S; Plo Casasus, M; Polok, G; Poluektov, A; Polycarpo, E; Popov, D; Popovici, B; Potterat, C; Powell, A; du Pree, T; Prisciandaro, J; Pugatch, V; Puig Navarro, A; Qian, W; Rademacker, J H; Rakotomiaramanana, B; Rangel, M S; Raniuk, I; Raven, G; Redford, S; Reid, M M; dos Reis, A C; Ricciardi, S; Rinnert, K; Roa Romero, D A; Robbe, P; Rodrigues, E; Rodrigues, F; Rodriguez Perez, P; Rogers, G J; Roiser, S; Romanovsky, V; Rouvinet, J; Ruf, T; Ruiz, H; Sabatino, G; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salzmann, C; Sannino, M; Santacesaria, R; Santamarina Rios, C; Santinelli, R; Santovetti, E; Sapunov, M; Sarti, A; Satriano, C; Satta, A; Savrie, M; Savrina, D; Schaack, P; Schiller, M; Schleich, S; Schmelling, M; Schmidt, B; Schneider, O; Schopper, A; Schune, M -H; Schwemmer, R; Sciubba, A; Seco, M; Semennikov, A; Senderowska, K; Sepp, I; Serra, N; Serrano, J; Seyfert, P; Shao, B; Shapkin, M; Shapoval, I; Shatalov, P; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, O; Shevchenko, V; Shires, A; Silva Coutinho, R; Skottowe, H P; Skwarnicki, T; Smith, A C; Smith, N A; Sobczak, K; Soler, F J P; Solomin, A; Soomro, F; Souza De Paula, B; Spaan, B; Sparkes, A; Spradlin, P; Stagni, F; Stahl, S; Steinkamp, O; Stoica, S; Stone, S; Storaci, B; Straticiuc, M; Straumann, U; Styles, N; Subbiah, V K; Swientek, S; Szczekowski, M; Szczypka, P; Szumlak, T; T'Jampens, S; Teodorescu, E; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Topp-Joergensen, S; Tran, M T; Tsaregorodtsev, A; Tuning, N; Ubeda Garcia, M; Ukleja, A; Urquijo, P; Uwer, U; Vagnoni, V; Valenti, G; Vazquez Gomez, R; Vazquez Regueiro, P; Vecchi, S; Velthuis, J J; Veltri, M; Vervink, K; Viaud, B; Videau, I; Vilasis-Cardona, X; Visniakov, J; Vollhardt, A; Voong, D; Vorobyev, A; Voss, H; Wacker, K; Wandernoth, S; Wang, J; Ward, D R; Webber, A D; Websdale, D; Whitehead, M; Wiedner, D; Wiggers, L; Wilkinson, G; Williams, M P; Williams, M; Wilson, F F; Wishahi, J; Witek, M; 
Witzeling, W; Wotton, S A; Wyllie, K; Xie, Y; Xing, F; Yang, Z; Young, R; Yushchenko, O; Zavertyaev, M; Zhang, F; Zhang, L; Zhang, W C; Zhang, Y; Zhelezov, A; Zhong, L; Zverev, E; Zvyagin, A

    2012-01-01

    Absolute luminosity measurements are of general interest for colliding-beam experiments at storage rings. These measurements are necessary to determine the absolute cross-sections of reaction processes and are valuable to quantify the performance of the accelerator. LHCb has applied two methods to determine the absolute scale of its luminosity measurements for proton-proton collisions at the LHC with a centre-of-mass energy of 7 TeV. In addition to the classic "van der Meer scan" method, a novel technique has been developed which makes use of direct imaging of the individual beams using beam-gas and beam-beam interactions. This beam imaging method is made possible by the high resolution of the LHCb vertex detector and the close proximity of the detector to the beams, and allows beam parameters such as positions, angles and widths to be determined. The results of the two methods have comparable precision and are in good agreement. Combining the two methods, an overall precision of 3.5% in the absolute lumi...
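    For readers unfamiliar with the van der Meer method, the Python sketch below illustrates the underlying relation: the head-on luminosity follows from the revolution frequency, the bunch populations and the effective overlap widths extracted from separation-scan rate curves. The function and the synthetic Gaussian scan are simplified assumptions for illustration; they are not the LHCb analysis, which additionally handles normalisation, backgrounds and the beam-imaging method.

```python
import numpy as np

def vdm_luminosity(dx, rate_x, dy, rate_y, f_rev, n1, n2, n_bunches):
    """Schematic van der Meer estimate: L = f_rev * n_b * N1 * N2 / (2*pi*Sx*Sy),
    with the effective overlap widths taken from the scan curves as
    S = integral(R d(sep)) / (sqrt(2*pi) * R_max)."""
    sx = np.trapz(rate_x, dx) / (np.sqrt(2 * np.pi) * rate_x.max())
    sy = np.trapz(rate_y, dy) / (np.sqrt(2 * np.pi) * rate_y.max())
    return f_rev * n_bunches * n1 * n2 / (2 * np.pi * sx * sy)

# Synthetic Gaussian scan curves (all numbers are illustrative, not LHCb data)
sep = np.linspace(-0.4e-3, 0.4e-3, 41)            # beam separation [m]
rate = np.exp(-0.5 * (sep / 60e-6) ** 2)          # relative rate, 60 um overlap width
L = vdm_luminosity(sep, rate, sep, rate,
                   f_rev=11245.0, n1=1.0e11, n2=1.0e11, n_bunches=1)
print(f"L ~ {L:.3e} m^-2 s^-1")
```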

  17. Absolute calibration of TFTR helium proportional counters

    International Nuclear Information System (INIS)

    Strachan, J.D.; Diesso, M.; Jassby, D.; Johnson, L.; McCauley, S.; Munsat, T.; Roquemore, A.L.; Loughlin, M.

    1995-06-01

    The TFTR helium proportional counters are located in the central five (5) channels of the TFTR multichannel neutron collimator. These detectors were absolutely calibrated using a 14 MeV neutron generator positioned at the horizontal midplane of the TFTR vacuum vessel. The neutron generator position was scanned in centimeter steps to determine the collimator aperture width to 14 MeV neutrons and the absolute sensitivity of each channel. Neutron profiles were measured for TFTR plasmas with time resolution between 5 msec and 50 msec depending upon count rates. The He detectors were used to measure the burnup of 1 MeV tritons in deuterium plasmas, the transport of tritium in trace tritium experiments, and the residual tritium levels in plasmas following 50:50 DT experiments.

  18. Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements

    Science.gov (United States)

    Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.

    2014-01-01

    Several previous studies highlight pressure (or, equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation: SPC, and ENSCI-Droplet Measurement Technologies: DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets exceed 0.6 hPa in the free troposphere, with nearly a third exceeding 1.0 hPa at 26 km, where the 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (30 km) can exceed 10 percent (25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profiles by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when
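    The propagation of a pressure offset into the ozone mixing ratio follows directly from the definition O3MR = pO3 / p, which is what the short Python sketch below illustrates: the relative O3MR error is roughly the relative pressure error, so a fixed offset matters most at low pressure. The fixed 1.0 hPa offset, the ozone partial pressure, and the pressure levels chosen are assumptions for illustration, not the launch statistics reported above.

```python
def o3_mixing_ratio_ppmv(p_o3_mpa, p_hpa):
    """ECC ozone mixing ratio in ppmv from the measured O3 partial pressure
    (mPa) and the ambient pressure (hPa): x = 10 * pO3 / p."""
    return 10.0 * p_o3_mpa / p_hpa

# Fixed ozone partial pressure and a fixed +1.0 hPa radiosonde offset
# (illustrative values only).
p_o3 = 12.0                                   # mPa
for p in (700.0, 100.0, 26.0, 10.0):          # hPa (roughly 3, 16, 25, 31 km)
    x_true = o3_mixing_ratio_ppmv(p_o3, p)
    x_bias = o3_mixing_ratio_ppmv(p_o3, p + 1.0)
    err = 100.0 * (x_bias - x_true) / x_true
    print(f"p = {p:6.1f} hPa: O3MR error ~ {err:5.1f} %")
```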

  19. Stimulus Probability Effects in Absolute Identification

    Science.gov (United States)

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  20. Absolute gravity measurements in California

    Science.gov (United States)

    Zumberge, M. A.; Sasagawa, G.; Kappus, M.

    1986-08-01

    An absolute gravity meter that determines the local gravitational acceleration by timing a freely falling mass with a laser interferometer has been constructed. The instrument has made measurements at 11 sites in California, four in Nevada, and one in France. The uncertainty in the results is typically 10 microgal. Repeated measurements have been made at several of the sites; only one shows a substantial change in gravity.
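    As an illustration of the measurement principle (timing a freely falling mass with an interferometer), the Python sketch below recovers g from simulated time-position pairs with a quadratic least-squares fit. The drop length, sampling, noise level and the value of g used to generate the data are arbitrary assumptions, not values from the instrument described.

```python
import numpy as np

def fit_g(t, z):
    """Least-squares fit of z(t) = z0 + v0*t + 0.5*g*t**2 to timed positions
    of a freely falling mass; returns g in the same length unit per s^2."""
    A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])
    (z0, v0, g), *_ = np.linalg.lstsq(A, z, rcond=None)
    return g

# Synthetic drop (illustrative numbers, not instrument data)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.2, 200)                           # s
z_true = 0.001 * t + 0.5 * 9.7995 * t**2                 # m, with g = 9.7995 m/s^2
z_meas = z_true + rng.normal(scale=0.5e-9, size=t.size)  # ~nm interferometer noise
print(f"fitted g = {fit_g(t, z_meas):.6f} m/s^2")
```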