Measurement error in geometric morphometrics.
Fruciano, Carmelo
2016-06-01
Geometric morphometrics, a set of methods for the statistical analysis of shape once hailed as a revolutionary advance in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in a loss of statistical power. On the other hand, systematic bias can distort the results of statistical analyses (i.e. variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset.
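The variance-based logic above is often quantified as repeatability: the fraction of total variance attributable to among-individual differences rather than digitizing error. A minimal one-way ANOVA sketch, assuming two digitizing sessions per specimen and a single shape variable (not the paper's full Procrustes ANOVA workflow; the data below are synthetic):

```python
import numpy as np

def repeatability(measurements):
    """One-way ANOVA repeatability (ICC) from repeated measurements.

    measurements: array of shape (n_specimens, k_replicates) for a
    single shape variable (e.g. one Procrustes coordinate).
    """
    n, k = measurements.shape
    grand = measurements.mean()
    spec_means = measurements.mean(axis=1)
    ms_among = k * np.sum((spec_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((measurements - spec_means[:, None]) ** 2) / (n * (k - 1))
    s2_among = (ms_among - ms_within) / k   # among-individual variance component
    return s2_among / (s2_among + ms_within)

rng = np.random.default_rng(0)
true_values = rng.normal(0.0, 1.0, size=(50, 1))   # biological variation
noise = rng.normal(0.0, 0.1, size=(50, 2))         # digitizing (measurement) error
data = true_values + noise
print(round(repeatability(data), 2))  # close to 1 when error variance is small
```

When measurement error dominates, the ratio drops toward zero, which is exactly the loss of "explained" variance the abstract warns about.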
STUDY ON NEW METHOD OF IDENTIFYING GEOMETRIC ERROR PARAMETERS FOR NC MACHINE TOOLS
Institute of Scientific and Technical Information of China (English)
None listed
2001-01-01
Methods of identifying geometric error parameters for NC machine tools are introduced. After analyzing and comparing the different methods, a new method, the 9-line displacement method, is developed based on the theory of movement errors of multibody systems (MBS). Extensive experiments were also conducted to obtain the 21 geometric error parameters using error-identification software based on the new method.
A Study of Quantum Error Correction by Geometric Algebra and Liquid-State NMR Spectroscopy
Sharf, Y; Somaroo, S S; Havel, T F; Knill, E H; Laflamme, R; Sharf, Yehuda; Cory, David G.; Somaroo, Shyamal S.; Havel, Timothy F.; Knill, Emanuel; Laflamme, Raymond
2000-01-01
Quantum error correcting codes enable the information contained in a quantum state to be protected from decoherence due to external perturbations. Applied to NMR, quantum coding does not alter normal relaxation, but rather converts the state of a "data" spin into multiple quantum coherences involving additional ancilla spins. These multiple quantum coherences relax at differing rates, thus permitting the original state of the data to be approximately reconstructed by mixing them together in an appropriate fashion. This paper describes the operation of a simple, three-bit quantum code in the product operator formalism, and uses geometric algebra methods to obtain the error-corrected decay curve in the presence of arbitrary correlations in the external random fields. These predictions are confirmed in both the totally correlated and uncorrelated cases by liquid-state NMR experiments on 13C-labeled alanine, using gradient-diffusion methods to implement these idealized decoherence models. Quantum error correcti...
Study of geometric errors detection method for NC machine tools based on non-contact circular track
Yan, Kejun; Liu, Jun; Gao, Feng; Wang, Huan
2008-12-01
This paper presents a non-contact method of measuring geometric errors of NC machine tools based on the circular track testing method. By moving the machine spindle along a circular path, the position error at every tested position on the circle can be obtained using two laser interferometers. With a volumetric error model, the 12 geometric error components apart from the angular error components can be derived. The method has a wide detection range and high precision. Obtaining the geometric errors individually is of great significance for the error compensation of NC machine tools. The method has been tested on a MCV-510 NC machine tool, and the experimental results prove it feasible.
Geometric Error Analysis in Applied Calculus Problem Solving
Usman, Ahmed Ibrahim
2017-01-01
The paper investigates geometric errors students made as they tried to use their basic geometric knowledge in the solution of the Applied Calculus Optimization Problem (ACOP). Inaccuracies related to the drawing of geometric diagrams (visualization skills) and those associated with the application of basic differentiation concepts into ACOP…
Forward error correction based on algebraic-geometric theory
Alzubi, Jafar A.; Chen, Thomas M.
2014-01-01
This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rates using different modulation schemes over an additive white Gaussian noise channel model. Simulation results for the bit error rate performance of algebraic-geometric codes using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity, due to using algebraic-geometric codes and Chase-Pyndiah's algorithm simultaneously. To reduce system complexity, the book proposes algebraic-geometric irregular block turbo codes (AG-IBTC); simulation results for AG-IBTCs are presented for the first time.
Effects of imbalance and geometric error on precision grinding machines
Energy Technology Data Exchange (ETDEWEB)
Bibler, J.E.
1997-06-01
To study balancing in grinding, a simple mechanical system was examined. It was essential to study such a well-defined system, as opposed to a large, complex system such as a machining center: the use of a compact, well-defined system enabled easy quantification of the imbalance force input and its phase angle relative to any geometric decentering, and a good understanding of the machine mode shapes. Understanding such a simple system is important given that imbalance is so intimately coupled to machine dynamics. The results presented here could be extended to industrial machines, although that is not part of this work. In addition to the empirical testing, a simple mechanical system was modelled to examine how mode shapes, balance, and geometric error interact to yield spindle error motion. The results of this model are presented along with the results from a more global grinding model. The global model, presented at ASPE in November 1996, allows one to examine the effects of changing global machine parameters such as stiffness and damping. This geometrically abstract, one-dimensional model is presented to demonstrate the usefulness of an abstract approach for first-order understanding, but it is not the main focus of this thesis. 19 refs., 36 figs., 10 tables.
Steen, W.H.A.
1984-01-01
Geometric errors that occur in oblique cephalometric radiographic projections of the edentulous mandible were calculated for different focal-spot-to-object distances (1500, 3000, and 6000 mm). The horizontal errors from tolerance of the porion and nasion fixation in the cephalostat were calculated.
Directory of Open Access Journals (Sweden)
Pooyan Vahidi Pashsaki
2016-06-01
Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting-head B-axis and rotary table A-axis on the workpiece side) was set up using rigid-body kinematics and homogeneous transformation matrices, and includes 43 error components, each of which can separately reduce the geometrical and dimensional accuracy of workpieces. Machining accuracy is governed by the position of the tool center point (TCP) relative to the workpiece: when the cutting tool deviates from its ideal position relative to the workpiece, machining error results. The compensation process consists of detecting the current tool path, analyzing the geometric errors of the RTTTR five-axis CNC machine tool, translating the current positions into compensated positions using the kinematic error model, converting the newly created positions into new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
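The rigid-body-kinematics idea can be illustrated with a toy homogeneous-transformation chain. The axes, offsets, and error magnitudes below are hypothetical, and this is a two-term, small-angle sketch rather than the 43-component model described above:

```python
import numpy as np

def htm_error(dx=0.0, dy=0.0, dz=0.0, ea=0.0, eb=0.0, ec=0.0):
    """Homogeneous transform for small translational (dx, dy, dz) and
    angular (ea, eb, ec, radians) errors, small-angle approximation."""
    return np.array([[1.0, -ec,  eb,  dx],
                     [ ec, 1.0, -ea,  dy],
                     [-eb,  ea, 1.0,  dz],
                     [0.0, 0.0, 0.0, 1.0]])

def translation(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Ideal chain: base -> X carriage -> spindle; the actual chain inserts an
# error HTM (hypothetical straightness and yaw errors of the X carriage).
ideal = translation(100, 0, 0) @ translation(0, 50, 0)
actual = translation(100, 0, 0) @ htm_error(dx=0.01, ec=1e-4) @ translation(0, 50, 0)

tool_tip = np.array([0.0, 0.0, 200.0, 1.0])
deviation = (actual - ideal) @ tool_tip  # volumetric error at the TCP
print(deviation[:3])
```

Note how the yaw error `ec` couples with the 50 mm Y offset into an X deviation at the TCP; this Abbe-type amplification is why the full 43-term model matters.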
Systematics for checking geometric errors in CNC lathes
Araújo, R. P.; Rolim, T. L.
2015-10-01
Non-idealities present in machine tools directly compromise both the geometry and the dimensions of machined parts, generating distortions relative to the design. Given the competitive scenario among different companies, knowledge of the geometric behavior of these machines is necessary to establish their processing capability, avoiding waste of time and materials and satisfying customer requirements. Although geometric tests are important and necessary to ensure the machine is used correctly, thereby preventing future damage, most users do not apply such tests to their machines, for lack of knowledge or proper motivation, basically due to two factors: the long testing time and its high cost. This work proposes a systematics for checking straightness and perpendicularity errors in CNC lathes that demands little time and cost while providing high metrological reliability, to be used on the factory floors of small and medium-size businesses to ensure the quality of their products and make them competitive.
Directory of Open Access Journals (Sweden)
Jayswal, S. C.
2011-07-01
This experimental work presents a technique to achieve better surface quality by controlling surface roughness and geometrical error. In machining operations, achieving the desired surface quality features of the machined product is a challenging job, because these quality features are highly correlated and are expected to be influenced directly or indirectly by the process parameters or their interactive effects. Four input process parameters, spindle speed, depth of cut, feed rate, and stepover, were therefore selected to minimize surface roughness and geometrical error simultaneously using the robust design concept of the Taguchi L9(3^4) method coupled with the response surface concept. Mathematical models for surface roughness and geometrical error were obtained from response surface analysis to predict their values. S/N ratio and ANOVA analyses were also performed to identify the parameters significantly influencing surface roughness and geometrical error.
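For a smaller-the-better response such as roughness or geometrical error, the Taguchi signal-to-noise ratio is SN = -10·log10(mean(y²)) in dB; a minimal sketch with hypothetical replicated readings from one L9 trial:

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi signal-to-noise ratio for a smaller-the-better response
    (e.g. surface roughness or geometrical error), in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical roughness readings (um) from replicated runs of one trial:
print(round(sn_smaller_is_better([0.82, 0.88, 0.85]), 2))
```

The parameter level that maximizes the mean S/N ratio across the orthogonal-array trials is the robust choice, since larger S/N means both a smaller mean response and less scatter.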
Multi-Fidelity Uncertainty Quantification for Geometric Manufacturing Errors in Turbulent Flow
Ahlfeld, Richard; Laizet, Sylvain; Geraci, Gianluca; Iaccarino, Gianluca; Montomoli, Francesco
2016-11-01
Geometric manufacturing errors of a curved surface can have a large effect on the behavior of turbulent flows. Previously, geometric variability has only been investigated using RANS simulations. Since aleatory uncertainty propagation is strongly affected by epistemic uncertainty, it is difficult to tell to what extent observed variations are caused by geometric variation or are due to RANS modeling errors. In this work, an uncertainty quantification study for flow over periodic hills with random geometry was carried out using 8 DNS as well as 81 RANS simulations to investigate this issue. The results show that RANS models can lead to highly biased and incorrectly shaped probability distributions. However, the results of DNS and RANS can be combined into a hybrid approach with higher accuracy than RANS and lower cost than DNS. It is shown that RANS prediction accuracy can be significantly improved by correcting RANS response surfaces with DNS data in a multi-fidelity non-intrusive Polynomial Chaos approach based on weighted regression. In summary, this work presents two novelties: a high-fidelity propagation of geometric uncertainty using DNS and a DNS/RANS hybrid approach to improve the probabilistic prediction accuracy of RANS models.
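The hybrid idea, correcting a cheap low-fidelity response surface with a handful of high-fidelity samples, can be sketched with a toy additive-discrepancy model. Plain least-squares polynomials stand in for the paper's weighted-regression polynomial chaos, and analytic functions stand in for RANS (cheap, biased) and DNS (expensive, accurate):

```python
import numpy as np

def high_fidelity(x):   # stands in for DNS (expensive, accurate)
    return np.sin(2 * x) + 0.3 * x

def low_fidelity(x):    # stands in for RANS (cheap, systematically biased)
    return np.sin(2 * x) + 0.3 * x + 0.5 * (x - 0.5)

x_lf = np.linspace(0, 1, 81)   # many cheap samples
x_hf = np.linspace(0, 1, 8)    # few expensive samples

# Fit the low-fidelity response surface, then a low-order correction
# for the discrepancy observed at the high-fidelity points.
lf_surface = np.polynomial.Polynomial.fit(x_lf, low_fidelity(x_lf), deg=5)
discrepancy = np.polynomial.Polynomial.fit(
    x_hf, high_fidelity(x_hf) - lf_surface(x_hf), deg=1)

def corrected(x):
    return lf_surface(x) + discrepancy(x)

x_test = np.linspace(0, 1, 200)
err_lf = np.max(np.abs(low_fidelity(x_test) - high_fidelity(x_test)))
err_mf = np.max(np.abs(corrected(x_test) - high_fidelity(x_test)))
print(err_mf < err_lf)  # the hybrid surface tracks the high-fidelity model better
```

The design choice mirrors the abstract: the 81 cheap evaluations capture the trend, while the 8 expensive ones remove the low-fidelity model's bias.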
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
MODELING AND COMPENSATION TECHNIQUE FOR THE GEOMETRIC ERRORS OF FIVE-AXIS CNC MACHINE TOOLS
Institute of Scientific and Technical Information of China (English)
None listed
2003-01-01
One of the important trends in precision machining is the development of real-time error compensation techniques. Error compensation for multi-axis CNC machine tools is both difficult and attractive. A model of the geometric error of five-axis CNC machine tools based on multibody systems is proposed, and the key technique of compensation, identifying the geometric error parameters, is developed. A simulation of cutting a workpiece to verify the multibody-system-based model is also considered.
Li, Xiaoqiang; Han, Kai; Yu, Honghan; Zhang, Yongsheng; Li, Dongsheng
2016-08-01
Two-point incremental forming has received widespread study owing to its economy and flexibility for small-batch products such as aircraft parts. Aircraft parts, however, have strict requirements on shape error. In this paper, one real airplane part is selected and formed with different process parameters to investigate the part's shape error level. Comparing the geometric errors caused by different process parameters, such as tool diameter, step size, feed rate, and tool path, it is found that the geometric errors decrease as tool diameter increases, while the effect of step size is not linear. The influence of feed rate varies with the other parameters. A bidirectional tool path, with opposite processing directions at adjacent layers, reduces the errors.
Universal geometric error modeling of the CNC machine tools based on the screw theory
Tian, Wenjie; He, Baiyan; Huang, Tian
2011-05-01
The methods to improve the precision of CNC (Computerized Numerical Control) machine tools can be classified into two categories: error prevention and error compensation. Error prevention improves precision through high accuracy in manufacturing and assembly. Error compensation analyzes the source errors that affect the machining error, establishes the error model, and reaches the ideal position and orientation by modifying the trajectory in real time. Error modeling is the key to compensation, so the error modeling method is of great significance. Many researchers have focused on this topic and proposed many methods, but few of these can describe the 6-dimensional configuration error of machine tools. In this paper, a universal geometric error model of CNC machine tools is obtained using screw theory. The 6-dimensional error vector is expressed as a twist, and the error vector is transformed between different frames with the adjoint transformation matrix. This model can describe the overall position and orientation errors of the tool relative to the workpiece in their entirety. It provides the mathematical model for compensation, and also provides a guideline for the manufacture, assembly, and precision synthesis of machine tools.
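The adjoint transformation of a twist mentioned above has a compact 6x6 matrix form; a minimal sketch with a hypothetical frame offset and error magnitudes (illustrative only, not the paper's full machine-tool chain):

```python
import numpy as np

def skew(p):
    """3x3 skew-symmetric matrix such that skew(p) @ v == cross(p, v)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def adjoint(R, p):
    """6x6 adjoint of the homogeneous transform (R, p); re-expresses a
    twist [omega; v] from one frame in another frame."""
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, :3] = skew(p) @ R
    Ad[3:, 3:] = R
    return Ad

# Hypothetical 6D error twist of one axis: [angular; linear] parts.
twist = np.array([1e-5, 0.0, 2e-5, 0.003, -0.001, 0.0])

# Express it in the workpiece frame via the adjoint of the chain transform.
R = np.eye(3)
p = np.array([0.0, 0.0, 150.0])  # hypothetical frame offset (mm)
twist_wp = adjoint(R, p) @ twist
print(twist_wp[3:])  # the linear part picks up omega x p leverage terms
```

The leverage term `skew(p) @ R @ omega` is what lets a tiny angular error produce a sizable linear error far from the axis, which is why the twist representation captures all six configuration-error components at once.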
Correcting incompatible DN values and geometric errors in nighttime lights time series images
Energy Technology Data Exchange (ETDEWEB)
Zhao, Naizhuo [Texas Tech Univ., Lubbock, TX (United States); Zhou, Yuyu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Samson, Eric L. [Mayan Esteem Project, Farmington, CT (United States)
2014-09-19
The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool to monitor urbanization and assess socioeconomic activities at large scales. However, the existence of incompatible digital number (DN) values and geometric errors severely limits the application of nighttime light image data to multi-year quantitative research. In this study we extend and improve previous studies on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly, and find that sum light (the summed DN value of pixels in a nighttime light image) maintains an apparent increasing trend under relatively large GDP growth rates, but does not increase or decrease under relatively small GDP growth rates. As nighttime light is a sensitive indicator of economic activity, the temporally consistent trends between sum light and GDP growth rate imply that the brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, through analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced apparent nighttime lights development in 1992-1997 and 2001-2008, and that the US suffered nighttime lights decay over large areas after 2001.
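Inter-calibration of DN values is commonly implemented as a second-order polynomial regression against a reference image; a toy sketch with synthetic DN values (the coefficients and pixel values below are illustrative, not the study's actual calibration):

```python
import numpy as np

# Hypothetical DN values of invariant-region pixels in the image to be
# calibrated, and the same pixels in the chosen reference image.
dn_target = np.array([5.0, 12.0, 20.0, 31.0, 40.0, 48.0, 55.0, 61.0])
dn_reference = 0.02 * dn_target**2 + 0.8 * dn_target + 1.5  # synthetic truth

# Fit DN_ref = c0 + c1*DN + c2*DN^2 and apply it to inter-calibrate.
c2, c1, c0 = np.polyfit(dn_target, dn_reference, deg=2)
dn_calibrated = c0 + c1 * dn_target + c2 * dn_target**2

print(np.max(np.abs(dn_calibrated - dn_reference)) < 1e-9)
```

In practice the regression is fit on pixels from a region assumed stable over time, then applied to every satellite-year image so that summed DN values become comparable across the series.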
Geometrical analysis of registration errors in point-based rigid-body registration using invariants.
Shamir, Reuben R; Joskowicz, Leo
2011-02-01
Point-based rigid registration is the method of choice for aligning medical datasets in diagnostic and image-guided surgery systems. The most clinically relevant localization error measure is the Target Registration Error (TRE), which is the distance between the image-defined target and the corresponding target defined on another image or on the physical anatomy after registration. The TRE directly depends on the Fiducial Localization Error (FLE), which is the discrepancy between the selected and the actual (unknown) fiducial locations. Since the actual locations of targets usually cannot be measured after registration, the TRE is often estimated by the Fiducial Registration Error (FRE), which is the RMS distance between the fiducials in both datasets after registration, or with Fitzpatrick's TRE (FTRE) formula. However, low FRE-TRE and FTRE-TRE correlations have been reported in clinical practice and in theoretical studies. In this article, we show that for realistic FLE classes, the TRE and the FRE are uncorrelated, regardless of the target location and the number of fiducials and their configuration, and regardless of the FLE magnitude distribution. We use a geometrical approach and classical invariant theory to model the FLE and derive its relation to the TRE and FRE values. We show that, for these FLE classes, the FTRE and TRE are also uncorrelated. Finally, we show with simulations on clinical data that the FRE-TRE correlation is low also in the neighborhood of the FLE-FRE invariant classes. Consequently, and contrary to common practice, the FRE and FTRE may not always be used as surrogates for the TRE.
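The FRE defined above can be computed after a least-squares rigid (Kabsch/Procrustes) alignment; a minimal simulation with hypothetical fiducials and an isotropic FLE, far simpler than the anisotropic FLE classes the paper analyzes:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid alignment (Kabsch): returns R, t such that
    dst is approximated by src @ R.T + t."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

rng = np.random.default_rng(2)
fiducials = rng.uniform(-50, 50, size=(6, 3))          # image-space fiducials
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
measured = fiducials @ Rz.T + np.array([5.0, -2.0, 8.0])
measured += rng.normal(0, 0.2, measured.shape)         # fiducial localization error

R, t = rigid_register(fiducials, measured)
residual = fiducials @ R.T + t - measured
fre = np.sqrt(np.mean(np.sum(residual**2, axis=1)))    # RMS fiducial registration error
print(fre)  # on the order of the simulated FLE
```

Repeating this with a fixed target point would give a TRE sample as well; the paper's point is precisely that across realistic FLE realizations the two quantities are uncorrelated, so a small FRE in one registration does not certify a small TRE.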
Institute of Scientific and Technical Information of China (English)
Abdul Wahid Khan; Chen Wuyi
2010-01-01
A systematic geometric model has been presented for calibration of a newly designed 5-axis turbine blade grinding machine. This machine is designed to serve the specific purpose of attaining high-accuracy, high-efficiency grinding of turbine blades by eliminating the hand grinding process. Although its topology is RPPPR (P: prismatic; R: rotary), its design is quite distinct from competitive machine tools. As error quantification is the only way to investigate, maintain and improve its accuracy, calibration is recommended for its performance assessment and acceptance testing. A systematic geometric error modeling technique is implemented and 52 position-dependent and position-independent errors are identified, treating the machine as five rigid bodies and eliminating the set-up errors of workpiece and cutting tool. 39 of these are found to be influential errors and are accommodated in finding the resultant effect between the cutting tool and the workpiece in the workspace volume. Rigid-body kinematics techniques and homogeneous transformation matrices are used for error synthesis.
Directory of Open Access Journals (Sweden)
Shengxiang Jia
2003-01-01
This article presents a dynamic model of three shafts and two pairs of gears in mesh, with 26 degrees of freedom, including the effects of variable tooth stiffness, pitch and profile errors, friction, and a localized tooth crack on one of the gears. The article also details how geometrical errors in teeth can be included in such a model. The model incorporates the effects of variations in torsional mesh stiffness by using a common formula to describe the stiffness that occurs as the gears mesh together. The comparison between the presence and absence of geometrical errors in teeth was made using Matlab and Simulink models developed from the equations of motion. The effects of pitch and profile errors on the coherent signal average of the input pinion's angular velocity are discussed by investigating some of the common diagnostic functions and changes to the frequency spectra.
Using geometric algebra to study optical aberrations
Energy Technology Data Exchange (ETDEWEB)
Hanlon, J.; Ziock, H.
1997-05-01
This paper uses Geometric Algebra (GA) to study vector aberrations in optical systems with square and round pupils. GA is a new way to produce the classical optical aberration spot diagrams on the Gaussian image plane and surfaces near the Gaussian image plane. Spot diagrams of the third, fifth and seventh order aberrations for square and round pupils are developed to illustrate the theory.
An analysis of the effects of initial velocity errors on geometric pairing
Schricker, Bradley C.; Ford, Louis
2007-04-01
For a number of decades, among the most prevalent training media in the military has been Tactical Engagement Simulation (TES) training. TES has allowed troops to train for practical missions in highly realistic combat environments without the risks associated with live weaponry and munitions. This has been possible because current TES has relied largely upon the Multiple Integrated Laser Engagement System (MILES) and similar systems for direct-fire weapons, using a laser to pair the shooter to the potential target(s). Emerging systems, on the other hand, will use a pairing method called geometric pairing (geo-pairing), which resolves an engagement using a set of data about both the shooter and the target, such as locations, weapon orientations, velocities, projectile velocities, and nearby terrain. A previous paper [1] introduced various potential sources of error in a geo-pairing solution. This paper goes into greater depth regarding the impact of initial velocity errors, beginning with a short introduction to the TES system (TESS). The next section explains the modeling characteristics of projectile motion, followed by a mathematical analysis illustrating the impacts of errors related to those characteristics. A summary and conclusion containing recommendations close the paper.
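The sensitivity of a geo-pairing solution to initial (muzzle) velocity error can be illustrated with a deliberately simple flat-fire vacuum trajectory, far cruder than the TESS projectile model; the velocities and range below are hypothetical:

```python
G = 9.81  # m/s^2, gravitational acceleration

def drop_and_tof(v0, range_m):
    """Flat-fire vacuum model: time of flight and gravity drop at range."""
    t = range_m / v0
    return 0.5 * G * t * t, t

v_true, v_err = 900.0, 890.0   # m/s: true vs mis-modeled muzzle velocity
target_range = 600.0           # m

drop_true, _ = drop_and_tof(v_true, target_range)
drop_err, _ = drop_and_tof(v_err, target_range)
miss_vertical = abs(drop_err - drop_true)
print(round(miss_vertical, 3))  # metres of vertical pairing error
```

Even a ~1% velocity error shifts the computed impact point by several centimetres at this range, enough to flip a hit/miss adjudication against a small exposed target, which is why the paper treats initial velocity as a first-order error source.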
The Most Common Geometric and Semantic Errors in CityGML Datasets
Biljecki, F.; Ledoux, H.; Du, X.; Stoter, J.; Soon, K. H.; Khoo, V. H. S.
2016-10-01
To be used as input in most simulation and modelling software, 3D city models should be geometrically and topologically valid, and semantically rich. We investigate in this paper the quality of currently available CityGML datasets, i.e. we validate the geometry/topology of the 3D primitives (Solid and MultiSurface), and we validate whether the semantics of the boundary surfaces of buildings are correct or not. We have analysed all the CityGML datasets we could find, both from portals of cities and on different websites, plus a few that were made available to us. We have thus validated 40M surfaces in 16M 3D primitives and 3.6M buildings found in 37 CityGML datasets originating from 9 countries, and produced by several companies with diverse software and acquisition techniques. The results indicate that CityGML datasets without errors are rare, and those that are nearly valid are mostly simple LOD1 models. We report on the most common errors we have found, and analyse them. One main observation is that many of these errors could be automatically fixed or prevented with simple modifications to the modelling software. Our principal aim is to highlight the most common errors so that these are not repeated in the future. We hope that our paper and the open-source software we have developed will help raise awareness of data quality among data providers and 3D GIS software producers.
Rodis, Panteleimon
2011-01-01
This paper presents a study of connection errors in networks of linear features mapped in Geographical Information Systems (GIS), algorithms that detect them, and the notion of geometrical reduction and its use in spatial data algorithms. Network datasets commonly contain errors in which a number of elements of the network are not connected according to the network's specifications. This can occur due to errors in the digitization process of the network or because these elements are connected in reality in an illegal way. These errors can be topological, when the network elements are not correctly connected, or violations of the network's specifications. An example of the latter is a telecommunication network with the restriction that high-capacity channels should not be connected directly to home networks but only through local sub-networks, or when in a road network there i...
Kang, Ki Mun; Jeong, Bae Kwon; Choi, Hoon-Sik; Yoo, Seung Hoon; Hwang, Ui-Jung; Lim, Young Kyung; Jeong, Hojin
2015-09-08
We have investigated the combined effect of tissue heterogeneity and its variation associated with geometric error in stereotactic body radiotherapy (SBRT) for lung cancer. The treatment plans for eight lung cancer patients were calculated using effective path length (EPL) correction and Monte Carlo (MC) algorithms, with the same beam configuration for each patient. These two kinds of plans for individual patients were then recalculated with systematic and random geometric errors added. In the ordinary treatment plans calculated with no geometric offset, the EPL calculations, compared with the MC calculations, largely overestimated the doses to the PTV, by ~21%, whereas the overestimation was markedly lower in the GTV, ~12%, due to the relatively higher density of the GTV compared with the PTV. When the plans for individual patients were recalculated with the systematic and random geometric errors assigned, no significant changes in the relative dose distribution, other than an overall shift, were observed in the EPL calculations, whereas the MC calculations were largely altered, with a consistent increase in dose to the GTV. Considering the better accuracy of MC over EPL algorithms, the present results demonstrate the strong coupling of tissue heterogeneity and geometric error, thereby emphasizing the essential need for simultaneous correction of tissue heterogeneity and geometric targeting error in SBRT of lung cancer.
Energy Technology Data Exchange (ETDEWEB)
Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua, E-mail: huli@radonc.wustl.edu [Department of Radiation Oncology, Washington University, St. Louis, Missouri 63110 (United States); Anastasio, Mark A. [Department of Biomedical Engineering, Washington University, St. Louis, Missouri 63110 (United States); Low, Daniel A. [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States)
2015-02-15
Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets
Directory of Open Access Journals (Sweden)
Cai Ligang
2017-01-01
Instead of blindly improving the accuracy of the machine tool by increasing the precision of key components in the production process, a method combining the SNR quality loss function with machine tool geometric error correlation analysis is adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method is reasonable and appropriately relaxes the range of tolerance values, so as to reduce the manufacturing cost of machine tools.
Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach
Afify, Laila H.
2015-09-14
In this work, we develop an analytical paradigm to analyze the average symbol error probability (ASEP) performance of uplink traffic in a multi-tier cellular network. The analysis is based on the recently developed Equivalent-in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important communication system parameters and goes beyond signal-to-interference-plus-noise ratio characterization. That is, the presented model accounts for the modulation scheme, constellation type, and signal recovery techniques to model the ASEP. To this end, we derive single integral expressions for the ASEP for different modulation schemes due to aggregate network interference. Finally, all theoretical findings of the paper are verified via Monte Carlo simulations.
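The ASEP analysis above accounts for modulation and detection, not just SINR. As a much-simplified illustration (plain BPSK over AWGN, with no interference field or network geometry, so not the Equivalent-in-Distribution framework itself), a Monte Carlo estimate can be checked against the closed-form Q-function expression:

```python
import math
import random

def q_function(x):
    # Gaussian tail probability via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def simulate_bpsk_sep(snr_db, n_symbols=200_000, seed=1):
    """Monte Carlo symbol error probability of BPSK over AWGN."""
    random.seed(seed)
    snr = 10 ** (snr_db / 10)
    sigma = math.sqrt(1 / (2 * snr))   # noise std dev for unit symbol energy
    errors = 0
    for _ in range(n_symbols):
        s = random.choice((-1.0, 1.0))
        r = s + random.gauss(0.0, sigma)
        if (r >= 0) != (s > 0):
            errors += 1
    return errors / n_symbols

sep = simulate_bpsk_sep(4.0)
theory = q_function(math.sqrt(2 * 10 ** 0.4))  # Q(sqrt(2*SNR)) for BPSK
```

The paper's contribution is precisely to obtain such closed-form/integral expressions when the "noise" includes spatially distributed network interference.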
Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You
2013-11-01
A simple method for simultaneously measuring the six-degree-of-freedom (6DOF) geometric motion errors of a linear guide is proposed. The mechanisms for measuring straightness and angular errors, and for enhancing their resolution, are described in detail. A common-path method for measuring laser beam drift is also proposed and used to compensate for the errors that beam drift introduces into the 6DOF measurements. A compact 6DOF system was built. Calibration experiments against standard measurement instruments showed that the system has a standard deviation of 0.5 µm over a range of ±100 µm for straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" over a range of ±100" for pitch, yaw, and roll measurements, respectively.
Study on the Grey Polynomial Geometric Programming
Institute of Scientific and Technical Information of China (English)
LUODang
2005-01-01
In the model of geometric programming, exact values of parameters cannot be obtained owing to data fluctuation and incompleteness, but reasonable bounds on these parameters can be attained. That is to say, the parameters of the model can be regarded as interval grey numbers. When the model contains grey numbers, it is hard for common programming methods to solve it. By combining the common programming model with grey system theory and using some analysis strategies, a model of grey polynomial geometric programming, a model of 8-positioned geometric programming, and their quasi-optimum or optimum solutions are put forward. An algorithm for the problem is also developed. This approach opens a new way for applied research on geometric programming. An example at the end of the paper shows the rationality and feasibility of the algorithm.
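When the decision variables are fixed and positive, each term of a posynomial is monotone in its coefficient, so interval ("grey") coefficients yield objective bounds directly. The sketch below illustrates only this bounding idea, not the paper's full grey geometric programming algorithm:

```python
def posynomial(coeffs, x, exps):
    """Evaluate sum_i c_i * prod_j x_j^a_ij for positive x."""
    total = 0.0
    for c, a in zip(coeffs, exps):
        term = c
        for xj, aij in zip(x, a):
            term *= xj ** aij
        total += term
    return total

def grey_posynomial_bounds(c_lo, c_hi, x, exps):
    """Bounds of a posynomial whose coefficients are interval grey numbers.
    With x fixed and positive, each term is increasing in its coefficient,
    so the interval endpoints give the exact bounds."""
    return posynomial(c_lo, x, exps), posynomial(c_hi, x, exps)
```

For example, coefficients in [1, 2] and [2, 3] with x = 2 and exponents 1 and 2 bound the objective between 1·2 + 2·4 and 2·2 + 3·4.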
FLUORESCENCE OVERLAY ANTIGEN MAPPING OF THE EPIDERMAL BASEMENT-MEMBRANE ZONE. 1. GEOMETRIC ERRORS
Bruins, S; De Jong, MCJM; Heeres, K; Wilkinson, MHF; Jonkman, MF; Van der Meer, JB
1994-01-01
To identify in tissue sections the relative positions of antigen distributions close to the resolving power of the microscope, we have developed the fluorescence overlay antigen mapping (FOAM) procedure. As this technique makes high demands on the geometric fidelity of the overlay image, it is essential …
Aydin, Umit; Dogrusoz, Yesim Serinagaoglu
2011-09-01
In this article, we aimed to reduce the effects of geometric errors and measurement noise on inverse electrocardiography (ECG) solutions. We used the Kalman filter to solve the inverse problem in terms of epicardial potential distributions. The geometric errors were introduced into the problem via incorrect determination of the size and location of the heart in simulations. An error model, called the enhanced error model (EEM), was modified for use in the inverse ECG problem to compensate for the geometric errors. In this model, the geometric errors are modeled as additive Gaussian noise and their noise variance is added to the measurement noise variance. The Kalman filter method includes a process noise component, whose variance should also be estimated along with the measurement noise. To estimate these two noise variances, two different algorithms were used: (1) an algorithm based on residuals, and (2) the expectation maximization algorithm. The results showed that it is important to use the correct noise variances to obtain accurate results. The geometric errors, if ignored in the inverse solution procedure, yielded incorrect epicardial potential distributions. However, even with a noise model as simple as the EEM, the solutions could be significantly improved.
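The EEM idea is simple to state: geometric error is treated as additive Gaussian noise, so its variance is simply added to the measurement-noise variance used by the filter. The scalar Kalman filter below (a toy stand-in for the epicardial-potential estimator, with made-up variances) shows where that augmented variance enters:

```python
import random

def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state, direct measurement.
    q is the process-noise variance, r the (augmented) measurement-noise
    variance."""
    x, p = x0, p0
    out = []
    for z in zs:
        p = p + q                    # predict
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with innovation
        p = (1 - k) * p
        out.append(x)
    return out

# Enhanced error model: geometric-error variance adds to measurement variance.
r_measurement, r_geometric = 0.01, 0.04
r_eem = r_measurement + r_geometric

random.seed(0)
zs = [1.0 + random.gauss(0.0, 0.2) for _ in range(500)]  # noisy constant signal
estimates = kalman_1d(zs, q=1e-5, r=r_eem)
```

Using `r_eem` instead of `r_measurement` alone makes the filter trust the (geometrically distorted) measurements less, which is exactly the compensation mechanism the abstract describes.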
Guo, Shijie; Jiang, Gedong; Zhang, Dongsheng; Mei, Xuesong
2017-04-01
Position-independent geometric errors (PIGEs) are fundamental errors of a five-axis machine tool. In this paper, to identify the ten PIGEs peculiar to the rotary axes of five-axis machine tools with a tilting head, a mathematical model of the ten PIGEs is deduced and four measuring patterns are proposed. The measuring patterns and the identification method are validated on a five-axis machine tool with a tilting head, and the ten PIGEs of the machine tool are obtained. The sensitivities of the four adjustable PIGEs in the different measuring patterns are analyzed with the Morris global sensitivity analysis method, and a procedure for modifying the four adjustable PIGEs is given accordingly. Experimental results show that, comparing measurements before and after modifying the four adjustable PIGEs, the average compensation rate reached 52.7%. This proves that the proposed measuring, identification, analysis and modification methods are effective for error measurement and precision improvement of five-axis machine tools.
Institute of Scientific and Technical Information of China (English)
范晋伟; 罗建平; 蒙顺政; 李伟; 雒驼
2012-01-01
This paper studies geometric error modeling and compensation technology for a five-axis CNC machine tool with a swing head. Based on the theory of multi-body system kinematics, the topological structure of the five-axis CNC machine tool with a swing head and its low-order body arrays are described, and the geometric error model of the five-axis machine tool is established. Using this error model, a software-based error compensation method is developed: the mapping relationships among the cutter path, the NC commands, and the cutter trajectory are established under both ideal and actual conditions, and the algorithm for computing corrected NC command values, together with the termination criterion for the iterative solution, is given. The corrected NC commands obtained in this way solve the key problem of software error compensation. This compensation method is direct, concise, and highly general.
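In multi-body-system error models of this kind, each axis motion is a homogeneous transformation matrix (HTM), and small geometric errors appear as extra translation and first-order rotation factors in the kinematic chain. The sketch below (illustrative error magnitudes, not from the paper) composes an ideal move with an error-laden one and evaluates the resulting tool-point deviation:

```python
def mat_mul(a, b):
    """4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(dx, dy, dz):
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

def small_rotation(ex, ey, ez):
    """First-order HTM for small angular errors (radians), as used in
    multi-body-system geometric error models."""
    return [[1, -ez, ey, 0], [ez, 1, -ex, 0], [-ey, ex, 1, 0], [0, 0, 0, 1]]

def apply(m, p):
    x, y, z = p
    v = [x, y, z, 1.0]
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(3)]

# Ideal X-axis move vs the same move with positioning and yaw errors:
ideal = translation(100.0, 0.0, 0.0)
actual = mat_mul(translation(100.0 + 0.01, 0.002, 0.0),   # +10 um, +2 um
                 small_rotation(0.0, 0.0, 1e-4))          # 0.1 mrad yaw
tool_ideal = apply(ideal, (0.0, 50.0, 0.0))   # tool offset 50 mm in Y
tool_actual = apply(actual, (0.0, 50.0, 0.0))
error = [a - b for a, b in zip(tool_actual, tool_ideal)]
```

Note how the angular error is amplified by the 50 mm tool offset (an Abbe-type effect); compensation inverts this chain to correct the NC commands.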
Energy Technology Data Exchange (ETDEWEB)
Sawant, A [UT Southwestern Medical Center, Dallas, TX (United States)
2015-06-15
Purpose: Respiratory-correlated 4DCT images are generated under the assumption of a regular breathing cycle. This study evaluates the error in 4DCT-based target position estimation in the presence of irregular respiratory motion. Methods: A custom-made programmable, externally- and internally-deformable lung motion phantom was placed inside the CT bore. An abdominal pressure belt was placed around the phantom to mimic clinical 4DCT acquisition, and the motion platform was programmed with a sinusoidal (±10 mm, 10 cycles per minute) motion trace and 7 motion traces recorded from lung cancer patients. The same setup and motion trajectories were repeated in the linac room, and kV fluoroscopic images were acquired using the on-board imager. Positions of 4 internal markers segmented from the 4DCT volumes were overlaid upon the motion trajectories derived from the fluoroscopic time series to calculate the difference between estimated (4DCT) and “actual” (kV fluoro) positions. Results: With the sinusoidal trace, absolute errors of the 4DCT-estimated marker positions varied between 0.78 mm and 5.4 mm, and RMS errors were between 0.38 mm and 1.7 mm. With the irregular patient traces, absolute errors of the 4DCT-estimated marker positions increased significantly, by 100 to 200 percent, while the corresponding RMS error values changed much less. Significant mismatches were frequently found at the peak-inhale or peak-exhale phase. Conclusion: As expected, under conditions of well-behaved, periodic sinusoidal motion, 4DCT yielded much better estimation of marker positions. When actual patient traces were used, 4DCT-derived positions showed significant mismatches with the fluoroscopic trajectories, indicating the potential for geometric, and therefore dosimetric, errors in the presence of cycle-to-cycle respiratory variations.
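The study reports both maximum absolute error and RMS error, which respond differently to occasional large mismatches (e.g. at peak phases). The snippet below, with made-up marker positions, shows how the two metrics are computed and why a single outlier moves the maximum more than the RMS:

```python
import math

def abs_errors(estimated, actual):
    """Per-marker absolute position differences."""
    return [abs(e - a) for e, a in zip(estimated, actual)]

def rms(values):
    """Root-mean-square of a list of values."""
    return math.sqrt(sum(v * v for v in values) / len(values))

est = [10.2, 9.8, 10.5, 9.9]    # hypothetical 4DCT-estimated positions (mm)
act = [10.0, 10.0, 10.0, 10.0]  # hypothetical fluoroscopy-derived positions
errs = abs_errors(est, act)
```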
Study of Errors among Nursing Students
Directory of Open Access Journals (Sweden)
Ella Koren
2007-09-01
Full Text Available The study of errors in the health system today is a topic of considerable interest aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually “near errors,” but these could be a future indicator of therapeutic reality and of the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) The EPP (error-prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist. According to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error-prone environment) approach emphasizes the environment as a primary contributory factor to errors. The environment as an abstract concept includes components and processes of interpersonal communications, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the
Hong, Seunghwan; Choi, Yoonjo; Park, Ilsuk; Sohn, Hong-Gyoo
2017-01-17
Geometric correction of SAR satellite imagery is the process of adjusting the model parameters that define the relationship between ground and image coordinates. To achieve sub-pixel geolocation accuracy, adopting an appropriate geometric correction model and parameters is important. Various geometric correction models have been developed and applied, but it is still difficult for general users to choose a suitable model with sufficient precision. In this regard, this paper evaluated orbit-based and time-offset-based models with an error simulation. To evaluate the geometric correction models, Radarsat-1 images, which have large errors in satellite orbit information, and TerraSAR-X images, which reportedly have highly accurate satellite orbit and sensor information, were utilized. For Radarsat-1 imagery, the geometric correction model based on satellite position parameters performed better than the model based on time-offset parameters. For TerraSAR-X imagery, the two geometric correction models had similar performance and could both ensure sub-pixel geolocation accuracy.
Analysis of Pronominal Errors: A Case Study.
Oshima-Takane, Yuriko
1992-01-01
Reports on a study of a normally developing boy who made pronominal errors for about 10 months. Comprehension and production data clearly indicate that the child persistently made pronominal errors because of semantic confusion in the use of first- and second-person pronouns. (28 references) (GLR)
Wen, Xiulan; Xu, Youxiong; Li, Hongsheng; Wang, Fenglin; Sheng, Danghong
2012-09-01
Straightness error is an important parameter in measuring high-precision shafts. The new-generation geometrical product specification (GPS) requires that the measurement uncertainty characterizing the reliability of the results be given together with the measurement result. Most research on straightness focuses on error calculation, and only a few projects evaluate the measurement uncertainty based on the Guide to the Expression of Uncertainty in Measurement (GUM). In order to compute spatial straightness error (SSE) accurately and rapidly, and to overcome the limitations of the GUM, a quasi particle swarm optimization (QPSO) is proposed to solve the minimum-zone SSE, and the Monte Carlo method (MCM) is developed to estimate the measurement uncertainty. The mathematical model of the minimum-zone SSE is formulated. In QPSO, quasi-random sequences are applied to the generation of the initial position and velocity of particles, and their velocities are modified by the constriction factor approach. The flow of measurement uncertainty evaluation based on the MCM is proposed, the heart of which is repeated sampling from the probability density function (PDF) of every input quantity and evaluation of the model in each case. The minimum-zone SSE of a shaft measured on a coordinate measuring machine (CMM) is calculated by QPSO, and the measurement uncertainty is evaluated by the MCM on the basis of an analysis of the uncertainty contributors. The results show that the uncertainty directly influences the product judgment result. It is therefore scientific and reasonable to consider the influence of the uncertainty when judging whether parts are accepted or rejected, especially for those located in the uncertainty zone. The proposed method is especially suitable when the PDF of the measurand cannot adequately be approximated by a Gaussian distribution or a scaled and shifted t-distribution, or when the measurement model is non-linear.
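The MCM evaluation loop the abstract describes, repeatedly sampling every input quantity from its PDF and evaluating the model, can be sketched generically. The measurand below is a toy non-linear model (distance from two noisy coordinates), not the paper's straightness model:

```python
import math
import random
import statistics

def mcm_uncertainty(model, samplers, n=50_000, seed=7):
    """GUM-Supplement-1-style Monte Carlo: sample each input quantity from
    its PDF, evaluate the model, and summarize the output distribution."""
    random.seed(seed)
    outs = [model(*(s() for s in samplers)) for _ in range(n)]
    return statistics.mean(outs), statistics.pstdev(outs)

# Toy non-linear measurand: distance from two noisy Gaussian coordinates.
model = lambda x, y: math.hypot(x, y)
samplers = [lambda: random.gauss(3.0, 0.01),
            lambda: random.gauss(4.0, 0.01)]
mean, u = mcm_uncertainty(model, samplers)
```

This approach needs no linearization, which is why it remains valid when the measurand's PDF is non-Gaussian or the model is non-linear.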
A Comparative Study on Error Analysis
DEFF Research Database (Denmark)
Wu, Xiaoli; Zhang, Chun
2015-01-01
Title: A Comparative Study on Error Analysis Subtitle: - Belgian (L1) and Danish (L1) learners’ use of Chinese (L2) comparative sentences in written production Xiaoli Wu, Chun Zhang Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis … the occurrence of errors either in linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of the error analysis approach in CFL by investigating two cases - (1) Belgian (L1) learners’ use of Chinese (L2) comparative sentences in written production. … Finally, pedagogical implications for CFL are discussed and future research is suggested. Keywords: error analysis, comparative sentences, comparative structure ‘bǐ - 比’, Chinese as a foreign language (CFL), written production
Hyvönen, N.; Majander, H.; Staboulis, S.
2017-03-01
Electrical impedance tomography aims at reconstructing the conductivity inside a physical body from boundary measurements of current and voltage at a finite number of contact electrodes. In many practical applications, the shape of the imaged object is subject to considerable uncertainties that render reconstructing the internal conductivity impossible if they are not taken into account. This work numerically demonstrates that one can compensate for inaccurate modeling of the object boundary in two spatial dimensions by finding compatible locations and sizes for the electrodes as a part of a reconstruction algorithm. The numerical studies, which are based on both simulated and experimental data, are complemented by proving that the employed complete electrode model is approximately conformally invariant, which suggests that the obtained reconstructions in mismodeled domains reflect conformal images of the true targets. The numerical experiments also confirm that a similar approach does not, in general, lead to a functional algorithm in three dimensions.
Brandenburg, Jan Gerit; Alessio, Maristella; Civalleri, Bartolomeo; Peintinger, Michael F; Bredow, Thomas; Grimme, Stefan
2013-09-26
We extend the previously developed geometrical correction for the inter- and intramolecular basis set superposition error (gCP) to periodic density functional theory (DFT) calculations. We report gCP results compared to those from the standard Boys-Bernardi counterpoise correction scheme and large basis set calculations. The applicability of the method to molecular crystals as the main target is tested for the benchmark set X23. It consists of 23 noncovalently bound crystals as introduced by Johnson et al. (J. Chem. Phys. 2012, 137, 054103) and refined by Tkatchenko et al. (J. Chem. Phys. 2013, 139, 024705). In order to accurately describe long-range electron correlation effects, we use the standard atom-pairwise dispersion correction scheme DFT-D3. We show that a combination of DFT energies with small atom-centered basis sets, the D3 dispersion correction, and the gCP correction can accurately describe van der Waals and hydrogen-bonded crystals. Mean absolute deviations of the X23 sublimation energies can be reduced by more than 70% and 80% for the standard functionals PBE and B3LYP, respectively, to small residual mean absolute deviations of about 2 kcal/mol (corresponding to 13% of the average sublimation energy). As a further test, we compute the interlayer interaction of graphite for varying distances and obtain a good equilibrium distance and interaction energy of 6.75 Å and -43.0 meV/atom at the PBE-D3-gCP/SVP level. We fit the gCP scheme for a recently developed pob-TZVP solid-state basis set and obtain reasonable results for the X23 benchmark set and the potential energy curve for water adsorption on a nickel (110) surface.
Study of WATCH GRB error boxes
DEFF Research Database (Denmark)
Gorosabel, J.; Castro-Tirado, A. J.; Lund, Niels
1995-01-01
We have studied the first WATCH GRB Catalogue of γ-ray Bursts in order to find correlations between WATCH GRB error boxes and a great variety of celestial objects present in 33 different catalogues. No particular class of objects has been found to be significantly correlated with the WATCH GRBs.
Wang, Deming; Yang, Zhengyi
2008-03-01
The use of polynomial functions for modeling geometric distortion in magnetic resonance imaging (MRI) that arises from scanner hardware imperfections is studied in detail. In this work, geometric distortion data from four representative MRI systems were used. Modeling of these data using polynomial functions of the fourth, fifth, sixth, and seventh orders was carried out. In order to investigate how the modeling performed for different sizes and shapes of the volume of interest (VOI), the modeling was carried out for three different VOIs: a cube, a cylinder, and a sphere. The goodness of the modeling was assessed using both the maximum and mean absolute errors. The modeling results showed that (i) for the cube VOI there appears to be an optimal polynomial function that gives the least modeling error, and the sixth-order polynomial was found to be optimal for the size of cubic VOI considered in the present work; (ii) for the cylinder VOI, all four polynomials performed approximately equally well, but a trend of a slight decrease in the mean absolute error with increasing polynomial order was noted; and (iii) for the sphere VOI, the maximum absolute error showed some variation with polynomial order, with the fourth-order polynomial producing the smallest maximum absolute errors. It is further noted that extrapolation can lead to very large errors, so any extrapolation needs to be avoided. A detailed analysis of the modeling errors is presented.
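Polynomial distortion modeling, and the extrapolation hazard the abstract warns about, can be illustrated in one dimension with synthetic data (the distortion field and noise level below are invented for illustration, not taken from the four scanners studied):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 41)                 # positions inside the VOI
true_distortion = 0.5 * x**3 - 0.2 * x         # synthetic distortion field (mm)
measured = true_distortion + rng.normal(0.0, 0.005, x.size)

coeffs = np.polyfit(x, measured, 4)            # fourth-order polynomial model
model = np.poly1d(coeffs)

# Inside the fitted volume the model tracks the true distortion closely:
inside_err = np.max(np.abs(model(x) - true_distortion))

# Evaluating outside the fitted range (x = 2) is extrapolation, where the
# high-order terms are unconstrained and errors can grow rapidly:
outside_err = abs(model(2.0) - (0.5 * 2.0**3 - 0.2 * 2.0))
```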
Studying geometric structures in meso-scale flows
Directory of Open Access Journals (Sweden)
Christos H. Halios
2014-11-01
Full Text Available Geometric shapes of coherent structures, such as ramp- or cliff-like signals, step changes and waves, are commonly observed in meteorological time series; they dominate the turbulent energy and mass exchange between the atmospheric surface layer and the layers above, and also relate to low-dimensional chaotic systems. In this work a simple linear technique to extract geometrical shapes was applied to a dataset obtained at a location experiencing a number of different mesoscale modes. It was found that the temperature field appears much better organized than the wind field, and that cliff-ramp structures are dominant in the temperature time series. The occurrence of structural shapes was related to the dominant flow patterns and the status of the flow field. Positive temperature cliff-ramps and ramp-cliffs appear mainly during night time under a weak flow field, while temperature step and sine structures do not show a clear preference for period of day, flow or temperature pattern. Uniformly stable, weak-flow conditions dominate across all the wind speed structures. A detailed analysis of the flow field during two case studies revealed that structural shapes might be part of larger flow structures, such as a sea-breeze front or down-slope winds. During stagnant conditions, structural shapes associated with deceleration of the flow were observed, whilst during ventilation conditions shapes related to acceleration of the flow were observed.
Numerical Study of Urban Canyon Microclimate Related to Geometrical Parameters
Directory of Open Access Journals (Sweden)
Andrea de Lieto Vollaro
2014-11-01
Full Text Available In this study a microclimate analysis of a particular urban configuration, the street canyon, has been carried out. The analysis, conducted by performing numerical simulations with the finite-volume commercial code ANSYS-Fluent, shows the flow field in an urban environment, taking into account three different aspect ratios (H/W). This analysis can be helpful in the study of urban microclimate and of the heat exchanges with buildings. Fluid-dynamic fields on vertical planes within the canyon have been evaluated. The results show the importance of the geometrical configuration, in relation to the ratio between the height (H) of the buildings and the width (W) of the road. This is a very important subject from the point of view of "Smart Cities", considering the urban canyon as a subsystem of a larger one (the city), which is affected by climate change.
Vannitsem, Stéphane; Lucarini, Valerio
2016-06-01
We study a simplified coupled atmosphere-ocean model using the formalism of covariant Lyapunov vectors (CLVs), which link physically-based directions of perturbations to growth/decay rates. The model is obtained via a severe truncation of quasi-geostrophic equations for the two fluids, and includes a simple yet physically meaningful representation of their dynamical/thermodynamical coupling. The model has 36 degrees of freedom, and the parameters are chosen so that a chaotic behaviour is observed. There are two positive Lyapunov exponents (LEs), sixteen negative LEs, and eighteen near-zero LEs. The presence of many near-zero LEs results from the vast time-scale separation between the characteristic time scales of the two fluids, and leads to nontrivial error growth properties in the tangent space spanned by the corresponding CLVs, which are geometrically very degenerate. Such CLVs correspond to two different classes of ocean/atmosphere coupled modes. The tangent space spanned by the CLVs corresponding to the positive and negative LEs has, instead, a non-pathological behaviour, and one can construct robust large deviations laws for the finite time LEs, thus providing a universal model for assessing predictability on long to ultra-long scales along such directions. Interestingly, the tangent space of the unstable manifold has substantial projection on both atmospheric and oceanic components. The results show the difficulties in using hyperbolicity as a conceptual framework for multiscale chaotic dynamical systems, whereas the framework of partial hyperbolicity seems better suited, possibly indicating an alternative definition for the chaotic hypothesis. They also suggest the need for an accurate analysis of error dynamics on different time scales and domains and for a careful set-up of assimilation schemes when looking at coupled atmosphere-ocean models.
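Lyapunov exponents quantify the growth and decay rates that the abstract discusses. As a minimal illustration of how an LE is computed, averaging the logarithm of the tangent-map stretch along an orbit, the sketch below uses the one-dimensional logistic map rather than the 36-variable coupled model (for the logistic map at r = 4 the exponent is known analytically to be ln 2):

```python
import math

def lyapunov_logistic(r, n=100_000, x0=0.2):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x):
    the time average of log |f'(x)| along the orbit, a one-dimensional
    toy version of Benettin-type computations for high-dimensional models."""
    x, s = x0, 0.0
    for _ in range(n):
        s += math.log(abs(r * (1.0 - 2.0 * x)))  # local stretch rate
        x = r * x * (1.0 - x)
    return s / n

le = lyapunov_logistic(4.0)
```

Covariant Lyapunov vectors extend this idea by attaching physically meaningful perturbation directions to each exponent, which is what exposes the degenerate near-zero subspace in the coupled ocean-atmosphere model.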
Studying developmental variation with Geometric Morphometric Image Analysis (GMIA).
Mayer, Christine; Metscher, Brian D; Müller, Gerd B; Mitteroecker, Philipp
2014-01-01
The ways in which embryo development can vary across individuals of a population determine how genetic variation translates into adult phenotypic variation. The study of developmental variation has been hampered by the lack of quantitative methods for the joint analysis of embryo shape and the spatial distribution of cellular activity within the developing embryo geometry. By drawing from the strength of geometric morphometrics and pixel/voxel-based image analysis, we present a new approach for the biometric analysis of two-dimensional and three-dimensional embryonic images. Well-differentiated structures are described in terms of their shape, whereas structures with diffuse boundaries, such as emerging cell condensations or molecular gradients, are described as spatial patterns of intensities. We applied this approach to microscopic images of the tail fins of larval and juvenile rainbow trout. Inter-individual variation of shape and cell density was found highly spatially structured across the tail fin and temporally dynamic throughout the investigated period.
Quantitative study of geometrical scaling in charm production at HERA
Stebel, Tomasz
2013-07-01
The method of ratios was applied to search for geometrical scaling in charm production in deep inelastic scattering. Recent combined data from the H1 and ZEUS experiments were used. Two forms of geometrical scaling were tested: an originally proposed scaling that results from the Golec-Biernat-Wusthoff model and scaling motivated by a dipole representation, which takes into account charm mass. It turns out that in both cases some residual scaling is present and charm mass inclusion improves scaling quality.
A circumzenithal arc to study optics concepts with geometrical optics
Isik, Hakan
2017-05-01
This paper describes the formation of a circumzenithal arc for the purpose of teaching light and optics. A circumzenithal arc, an optic formation rarely witnessed by people, is formed in this study using a water-filled cylindrical glass illuminated by sunlight. Sunlight refracted at the top and side surfaces of the glass of water is dispersed into its constituent colours. First, multi-colour arcs are observed on paper at the bottom of the glass. Then, a single arc for each colour is observed on the floor when the rays are allowed to propagate to the furthest points from the glass. The change in observations is explained by formulating an equation for the geometry of the situation. The formula relates each point on the first refracting surface for an incoming light ray to a point further from the second refracting surface. Then, a parallel graph is drawn to visualize the superposition of colours to the formation of a single arc. The geometrical optics studies in this paper exemplify the concept of Snell’s law, total internal reflection and dispersion. The duration of the observation on a circumzenithal arc is limited by the altitude of the Sun in the sky. This study depends on the use of astronomy software to track solar altitude. Pedagogical aspects of the study are discussed for inquiry-based teaching and learning of light and optics concepts.
Geometric Calibration and Accuracy Verification of the GF-3 Satellite.
Zhao, Ruishan; Zhang, Guo; Deng, Mingjun; Xu, Kai; Guo, Fengcheng
2017-08-29
The GF-3 satellite is the first multi-polarization synthetic aperture radar (SAR) imaging satellite in China, which operates in the C band with a resolution of 1 m. Although the SAR satellite system was geometrically calibrated during the in-orbit commissioning phase, there are still some system errors that affect its geometric positioning accuracy. In this study, these errors are classified into three categories: fixed system error, time-varying system error, and random error. Using a multimode hybrid geometric calibration of spaceborne SAR, and considering the atmospheric propagation delay, all system errors can be effectively corrected through high-precision ground control points and global atmospheric reference data. The geometric calibration experiments and accuracy evaluation for the GF-3 satellite are performed using ground control data from several regions. The experimental results show that the residual system errors of the GF-3 SAR satellite have been effectively eliminated, and the geometric positioning accuracy can be better than 3 m.
Numerical Study of Geometric Multigrid Methods on CPU--GPU Heterogeneous Computers
Feng, Chunsheng; Xu, Jinchao; Zhang, Chen-Song
2012-01-01
The geometric multigrid method (GMG) is one of the most efficient solving techniques for discrete algebraic systems arising from many types of partial differential equations. GMG utilizes a hierarchy of grids or discretizations and reduces the error at a number of frequencies simultaneously. Graphics processing units (GPUs) have recently burst onto the scientific computing scene as a technology that has yielded substantial performance and energy-efficiency improvements. A central challenge in implementing GMG on GPUs, though, is that computational work on coarse levels cannot fully utilize the capacity of a GPU. In this work, we perform numerical studies of GMG on CPU--GPU heterogeneous computers. Furthermore, we compare our implementation with an efficient CPU implementation of GMG and with the most popular fast Poisson solver, Fast Fourier Transform, in the cuFFT library developed by NVIDIA.
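The grid-hierarchy idea can be sketched with a two-grid cycle for the 1D Poisson problem in plain NumPy (rather than a GPU implementation); the grid size, smoother, and cycle count below are illustrative choices, not from the paper.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2.0/3.0):
    # weighted-Jacobi smoothing for -u'' = f with zero Dirichlet boundaries;
    # damps the high-frequency error components on the current grid
    for _ in range(sweeps):
        u[1:-1] = (1 - omega)*u[1:-1] + omega*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:])/(h*h)
    return r

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h)                          # pre-smooth
    r = residual(u, f, h)
    nc = (len(u) + 1)//2                         # coarse grid: every other point
    idx = 2*np.arange(1, nc - 1)
    rc = np.zeros(nc)
    rc[1:-1] = 0.25*r[idx-1] + 0.5*r[idx] + 0.25*r[idx+1]   # full-weighting restriction
    hc = 2*h                                     # solve the coarse error equation directly
    A = (2*np.eye(nc-2) - np.eye(nc-2, k=1) - np.eye(nc-2, k=-1))/(hc*hc)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    u = u + np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolongate
    return smooth(u, f, h)                       # post-smooth

n = 65
x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi*x)                   # exact solution: sin(pi*x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi*x)))
```

A full GMG solver recurses this cycle through several coarser levels instead of solving the coarse system directly; on a GPU, the fine-grid smoothing and residual steps parallelize well while the small coarse levels do not, which is exactly the load-balance problem the paper studies.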
Geometric Studies of Shunt and Lead Orientation in EEC Devices
Werner, F. M.; Solin, S. A.
2014-03-01
Electric field sensors are ubiquitous in modern technology, from field effect transistors (FETs) in circuit boards to point-of-care testing (POCT) devices used to detect the presence of specific protein markers in blood. The transport properties of these devices are limited by two general categories of effects: intrinsic material properties and extrinsic geometric effects. Devices with a maximum electric field resolution of 3.05 V/cm were previously reported. The metal-semiconductor hybrid (MSH) devices are constructed by forming a Schottky interface between a mesa of nGaAs and Ti, while four ohmic leads surround the perimeter of the mesa and are used for four-point resistance measurements. These devices exhibit extraordinary electroconductance (EEC) and make it possible to correlate the measured four-point resistance to changes in the local electric field. While maximizing the EEC response by optimizing the intrinsic material properties has been theoretically investigated, we present a phenomenological study of the impact of lead orientation and shunt geometry on the sensing capabilities of these devices. S.A.S. is a co-founder of and has a financial interest in PixelEXX, a start-up company whose mission is to market imaging arrays.
Study on geometric correction of airborne multiangular imagery
Institute of Scientific and Technical Information of China (English)
LIU; Qiang; (刘强); LIU; Qinhuo; (柳钦火); XIAO; Qing; (肖青); TIAN; Guoliang; (田国良)
2002-01-01
An automatic image matching algorithm, and its application to geometric correction of airborne multiangular remotely sensed imagery, are presented in this paper. The image-matching algorithm is designed to find correct matches for images containing localized geometric distortion and spectral variation. Mathematical tools such as wavelet decomposition, B-splines, and a multi-variate correlation estimator are integrated in the frame of pyramidal matching. The simulated experiment and our practice in correcting airborne multiangular images show that the matching algorithm is robust to a few random abnormal points and can achieve subpixel match accuracy in most areas of the image. After geometric correction and registration, multiangular observations for each ground pixel are extracted and the sun/view geometry is simultaneously derived.
Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi
2015-10-01
One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of the net binding energy ΔEbind in the framework of Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms: the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, the geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion-corrected B3LYP with the Boys and Bernardi counterpoise correction (bCP)), taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as a 'gold standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to the complex model of HIV-1 protease and its potent inhibitor, KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, even though it is a simple correction procedure.
Study of the Geometric Stiffening Effect: Comparison of Different Formulations
Energy Technology Data Exchange (ETDEWEB)
Mayo, Juana M., E-mail: juana@us.es; Garcia-Vallejo, Daniel; Dominguez, Jaime [Universidad de Sevilla, Departamento de Ingenieria Mecanica y de los Materiales (Spain)
2004-05-15
This paper reviews different formulations to account for the stress stiffening or geometric stiffening effect arising from deflections large enough to cause significant changes in the configuration of the system. The importance of this effect in many engineering applications, such as the dynamic behavior of helicopter blades, flexible rotor arms, turbine blades, etc., is well known. The analysis is carried out only for one-dimensional elements in 2D. Formulations based on the floating frame of reference approach are computationally very efficient, as the use of the component synthesis method allows for a reduced number of coordinates. However, they must be modified to account for the geometric stiffening effect. The easiest method is the application of the substructuring technique, because the formulation is not modified; this, however, is not the most efficient approach. In problems where the deformation is moderate, the simple inclusion of the geometric stiffness matrix is enough. On the other hand, if the deformation is large, higher-order terms must be included in the strain energy. In order to achieve an efficient and stable formulation, an explicit geometrically nonlinear beam element was developed. Formulations that use absolute coordinates are generally computationally more costly than the previous ones, as they must use a large number of degrees of freedom. However, the geometric stiffening effect is automatically accounted for in these formulations. The aim of this work is to investigate the applicability of the different existing formulations in order to help the user select the right one for a particular application.
The psoas major muscle: a three-dimensional geometric study.
Santaguida, P L; McGill, S M
1995-03-01
The purpose of this study was to use anatomical data obtained from cadavers, and geometrical scaling data obtained from MRI scans of living subjects, to assess the line of action and mechanical function of the psoas major muscle in three dimensions about each lumbar spine level. In addition, the line of action of the psoas major was documented as a function of lordosis. A total of seven cadavers were dissected, from which fibre/tendon architecture was measured, while MRI scans were performed on 15 males to obtain centroid paths and area scales of the muscle over its length. In this way, the curving path of the muscle line of action was accommodated, together with force and moment predictions that recognized that the presence of a tendon at lower lumbar levels (up to L3 in some subjects) significantly increases the stress. Results confirm that the mechanics of the psoas cannot be adequately represented with a series of straight-line vectors from vertebral origins to insertion. Moreover, the mechanical action of the psoas major does not change as a function of lumbar spine lordosis, as the muscle path of action changes in accordance with changes in spine posture. Functionally, contrary to claims, the psoas cannot act as a 'derotator' of the spine, does not impose large shear forces on the spine in any posture except at L5-S1, and cannot have major effects to 'control lordosis'. It has the potential to stabilize the lumbar spine with compressive loading and, with bilateral activation, to laterally flex it, and can create large anterior shear forces, but only at L5-S1.
Case study: error rates and paperwork design.
Drury, C G
1998-01-01
A job instruction document, or workcard, for civil aircraft maintenance produced a number of paperwork errors when used operationally. The design of the workcard was compared to the guidelines of Patel et al [1994, Applied Ergonomics, 25 (5), 286-293]. All of the errors occurred in work instructions which did not meet these guidelines, demonstrating that the design of documentation does affect operational performance.
Doing Socrates experiment right: controlled rearing studies of geometrical knowledge in animals.
Vallortigara, Giorgio; Sovrano, Valeria Anna; Chiandetti, Cinzia
2009-02-01
The issue of whether encoding of geometric information for navigational purposes crucially depends on environmental experience or whether it is innately predisposed in the brain has been recently addressed in controlled rearing studies. Non-human animals can make use of the geometric shape of an environment for spatial reorientation and in some circumstances reliance on purely geometric information (metric properties and sense) can overcome use of local featural information. Animals reared in home cages of different geometric shapes proved to be equally capable of learning and performing navigational tasks based on geometric information. The findings suggest that effective use of geometric information for spatial reorientation does not require experience in environments with right angles and metrically distinct surfaces.
A mechanical device to study geometric phases and curvatures
Gil, Salvador
2010-04-01
A simple mechanical device is introduced that can be used to illustrate the parallel transport of a vector along a curved surface and the geometric phase shift that occurs when a vector is carried along a loop on a curved surface. Its connection with the Foucault pendulum and Berry phases is discussed. The experimental results are in close agreement with the theoretical expectations. The experiment is inexpensive and conceptually easy to understand and perform.
Generation 1.5 Written Error Patterns: A Comparative Study
Doolan, Stephen M.; Miller, Donald
2012-01-01
In an attempt to contribute to existing research on Generation 1.5 students, the current study uses quantitative and qualitative methods to compare error patterns in a corpus of Generation 1.5, L1, and L2 community college student writing. This error analysis provides one important way to determine if error patterns in Generation 1.5 student…
Longuski, James M.; Mcronald, Angus D.
1988-01-01
In previous work the problem of injecting the Galileo and Ulysses spacecraft from low earth orbit into their respective interplanetary trajectories has been discussed for the single stage (Centaur) vehicle. The central issue, in the event of spherically distributed injection errors, is what happens to the vehicle? The difficulties addressed in this paper involve the multi-stage problem since both Galileo and Ulysses will be utilizing the two-stage IUS system. Ulysses will also include a third stage: the PAM-S. The solution is expressed in terms of probabilities for total percentage of escape, orbit decay and reentry trajectories. Analytic solutions are found for Hill's Equations of Relative Motion (more recently called Clohessy-Wiltshire Equations) for multi-stage injections. These solutions are interpreted geometrically on the injection sphere. The analytic-geometric models compare well with numerical solutions, provide insight into the behavior of trajectories mapped on the injection sphere and simplify the numerical two-dimensional search for trajectory families.
A Study Regarding the Spontaneous Use of Geometric Shapes in Young Children's Drawings
Villarroel, José Domingo; Sanz Ortega, Olga
2017-01-01
The studies regarding how the comprehension of geometric shapes evolves in childhood are largely based on the assessment of children's responses during the course of tasks linked to the recognition, classification or explanation of prototypes and models. Little attention has been granted to the issue as to what extent the geometric shape turns out…
Sarfati, L; Ranchon, F; Vantard, N; Schwiertz, V; Gauthier, N; He, S; Kiouris, E; Gourc-Berthod, C; Guédat, M G; Alloux, C; Gustin, M-P; You, B; Trillet-Lenoir, V; Freyer, G; Rioufol, C
2015-02-01
Medication errors (ME) in oncology are known to cause serious iatrogenic complications. However, MEs still occur at each step in the anticancer chemotherapy process, particularly when injections are prepared in the hospital pharmacy. This study assessed whether a ME simulation program would help prevent ME-associated iatrogenic complications. The 5-month prospective study, consisting of three phases, was undertaken in the centralized pharmaceutical unit of a university hospital of Lyon, France. During the first simulation phase, 25 instruction sheets each containing one simulated error were inserted among various instruction sheets issued to blinded technicians. The second phase consisted of activity aimed at raising pharmacy technicians' awareness of risk of medication errors associated with antineoplastic drugs. The third phase consisted of re-enacting the error simulation process 3 months after the awareness campaign. The rate and severity of undetected medication errors were measured during the two simulation (first and third) phases. The potential seriousness of the ME was assessed using the NCC MERP(®) index. The rate of undetected medication errors decreased from 12 in the first simulation phase (48%) to five in the second simulation phase (20%, P = 0.04). The number of potential deaths due to administration of a faulty preparation decreased from three to zero. Awareness of iatrogenic risk through error simulation allowed pharmacy technicians to improve their ability to identify errors. This study is the first demonstration of the successful application of a simulation-based learning tool for reducing errors in the preparation of injectable anticancer drugs. Such a program should form part of the continuous quality improvement of risk management strategies for cancer patients.
A Classroom Research Study on Oral Error Correction
Coskun, Abdullah
2010-01-01
This study has the main objective to present the findings of a small-scale classroom research carried out to collect data about my spoken error correction behaviors by means of self-observation. With this study, I aimed to analyze how and which spoken errors I corrected during a specific activity in a beginner's class. I used Lyster and Ranta's…
Implications of Error Analysis Studies for Academic Interventions
Mather, Nancy; Wendling, Barbara J.
2017-01-01
We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…
Study of thin-film resistor resistance error
Directory of Open Access Journals (Sweden)
Spirin V. G.
2009-10-01
A relationship between a thin-film resistor resistance error and mask misalignment with a substrate conductive layer at the second photolithography stage, for a thin-film resistor design in which the resistive element does not overlap the conductor pads, is studied. The error value is at a maximum when the resistor aspect ratio is equal to 1.0.
Emergency Nurses as Second Victims of Error: A Qualitative Study.
Ajri-Khameslou, Mehdi; Abbaszadeh, Abbas; Borhani, Fariba
Many nurses are victims of errors in the hospital environment, and it is essential to understand the consequences of mistakes for the nursing profession. The aim of this study was to interpret the causes that place nurses at risk of errors in emergency departments, as well as the consequences of confronting errors in the work environment. The research followed a qualitative approach using content analysis. Through purposive sampling, 18 emergency nurses were selected to participate in the study. In-depth semi-structured interviews were used for data collection, which continued until saturation was reached. The results of the data analysis fell into three categories: psychological reactions to error, learning from errors, and avoidance reactions. The study revealed that errors can have both positive and negative impacts on emergency nurses' attitudes. Confronting errors by learning from mistakes can improve patient safety, whereas the negative outcomes can have destructive effects on nurses' careers. Nurses are victims of errors; therefore, they need support and protection to sustain their careers.
Algebraic geometric codes with applications
Institute of Scientific and Technical Information of China (English)
CHEN Hao
2007-01-01
The theory of linear error-correcting codes from algebraic geometric curves (algebraic geometric (AG) codes or geometric Goppa codes) has been well-developed since the work of Goppa and Tsfasman, Vladut, and Zink in 1981-1982. In this paper we introduce to readers some recent progress in algebraic geometric codes and their applications in quantum error-correcting codes, secure multi-party computation and the construction of good binary codes.
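The evaluation-code construction behind AG codes can be illustrated in the genus-zero case, where it reduces to a Reed-Solomon code on the projective line: messages are low-degree polynomials, codewords are their values at fixed field elements. The field size, code parameters, and message below are arbitrary illustrative choices, not from the paper.

```python
import itertools

p, n, k = 7, 6, 2               # GF(7), length 6, dimension 2 -> min distance n-k+1 = 5
points = list(range(1, 7))      # evaluation points: the nonzero elements of GF(7)

def encode(msg):
    # evaluate the polynomial msg[0] + msg[1]*x at each point, mod p
    return tuple((msg[0] + msg[1]*x) % p for x in points)

codebook = {encode(m): m for m in itertools.product(range(p), repeat=k)}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    # brute-force nearest-codeword decoding (fine for 7**2 = 49 codewords);
    # minimum distance 5 guarantees unique decoding of up to 2 symbol errors
    return min(codebook.items(), key=lambda cw: hamming(cw[0], received))[1]

c = list(encode((3, 5)))
c[0] = (c[0] + 1) % p           # introduce two symbol errors
c[4] = (c[4] + 3) % p
decoded = decode(tuple(c))
```

General AG codes replace the line by a higher-genus curve and the polynomials by functions in a Riemann-Roch space, trading a small distance penalty (the genus) for many more evaluation points over the same field.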
Dose variations caused by setup errors in intracranial stereotactic radiotherapy: A PRESAGE study
Energy Technology Data Exchange (ETDEWEB)
Teng, Kieyin [School of Medical Sciences, RMIT University, Melbourne (Australia); Gagliardi, Frank [School of Medical Sciences, RMIT University, Melbourne (Australia); William Buckland Radiotherapy Centre, Melbourne (Australia); Alqathami, Mamdooh [School of Medical Sciences, RMIT University, Melbourne (Australia); Ackerly, Trevor [William Buckland Radiotherapy Centre, Melbourne (Australia); Geso, Moshi, E-mail: moshi.geso@rmit.edu.au [School of Medical Sciences, RMIT University, Melbourne (Australia)
2014-01-01
Stereotactic radiotherapy (SRT) requires tight margins around the tumor, thus producing a steep dose gradient between the tumor and the surrounding healthy tissue. Any setup errors might become clinically significant. To date, no study has been performed to evaluate the dosimetric variations caused by setup errors with a 3-dimensional dosimeter, the PRESAGE. This research aimed to evaluate the potential effect that setup errors have on the dose distribution of intracranial SRT. Computed tomography (CT) simulation of a CIRS radiosurgery head phantom was performed with 1.25-mm slice thickness. An ideal treatment plan was generated using Brainlab iPlan. A PRESAGE was made for every treatment with and without errors. A prescan using the optical CT scanner was carried out. Before treatment, the phantom was imaged using Brainlab ExacTrac. Actual radiotherapy treatments with and without errors were carried out with the Novalis treatment machine. A postscan was performed with an optical CT scanner to analyze the dose irradiation. The dose variation between treatments with and without errors was determined using a 3-dimensional gamma analysis. Errors are clinically insignificant when the passing ratio of the gamma analysis is 95% and above. Errors were clinically significant when the setup errors exceeded a 0.7-mm translation and a 0.5° rotation. The results showed that a 3-mm translation shift in the superior-inferior (SI), right-left (RL), and anterior-posterior (AP) directions and a 2° couch rotation produced a passing ratio of 53.1%. Translational and rotational errors of 1.5 mm and 1°, respectively, generated a passing ratio of 62.2%. A translation shift of 0.7 mm in the SI, RL, and AP directions and a 0.5° couch rotation produced a passing ratio of 96.2%. Preventing the occurrence of setup errors in intracranial SRT treatment is extremely important, as errors greater than 0.7 mm and 0.5° alter the dose distribution. The geometrical displacements affect dose delivery.
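The gamma analysis used to compare dose distributions can be sketched in one dimension: a point passes if some nearby point of the evaluated distribution agrees with it within combined dose and distance tolerances. The profiles, tolerances (3% global dose, 1 mm distance-to-agreement), and shifts below are illustrative stand-ins for the 3D PRESAGE measurements, not the paper's data.

```python
import numpy as np

def gamma_pass_rate(ref, ev, x, dose_tol=0.03, dist_tol=1.0):
    # global gamma index: dose_tol is a fraction of the reference maximum,
    # dist_tol is in the same units as x (here mm)
    dmax = ref.max()
    passed = []
    for xi, di in zip(x, ref):
        g2 = ((x - xi)/dist_tol)**2 + ((ev - di)/(dose_tol*dmax))**2
        passed.append(np.sqrt(g2.min()) <= 1.0)
    return np.mean(passed)

x = np.linspace(-20.0, 20.0, 401)               # position, mm (0.1 mm spacing)
ref = np.exp(-x**2/50.0)                        # idealised reference dose profile
small = np.exp(-(x - 0.5)**2/50.0)              # 0.5 mm setup error
large = np.exp(-(x - 3.0)**2/50.0)              # 3 mm setup error
ok = gamma_pass_rate(ref, small, x)
bad = gamma_pass_rate(ref, large, x)
```

The small shift sits well inside the distance-to-agreement, so nearly every point passes, while the 3 mm shift fails across the steep-gradient regions; this mirrors the paper's finding that passing ratios collapse once setup errors exceed the sub-millimetre range.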
Muniz Oliva, Waldyr
2002-01-01
Geometric Mechanics here means mechanics on a pseudo-Riemannian manifold, and the main goal is the study of some mechanical models and concepts, with emphasis on the intrinsic and geometric aspects arising in classical problems. The first seven chapters are written in the spirit of Newtonian Mechanics, while the last two, as well as two of the four appendices, describe the foundations and some aspects of Special and General Relativity. All the material has a coordinate-free presentation but, for the sake of motivation, many examples and exercises are included in order to exhibit the desirable flavor of physical applications.
Study on Simulation of Machining Errors Caused by Cutting Force
Institute of Scientific and Technical Information of China (English)
SHAO Xiaodong; ZHANG Liu; LIN Zhaoxu
2006-01-01
Machining errors caused by cutting force are studied in this paper, and an algorithm to simulate these errors is put forward. In this method, the continuous machining process is separated into many machining moments. The deformation of the work-piece and cutter at every moment is calculated by the finite element method. The machined work-piece is obtained by a Boolean operation between the deformed work-piece and cutter. By analyzing the data of the final work-piece, machining errors are predicted. The method is validated by experiment.
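The paper couples FEM deflection with Boolean subtraction; as a much simpler stand-in for the FEM step, a cantilever-beam formula can estimate tool deflection, and hence a dimensional error, at each simulated cutting moment. The force values, overhang, diameter, and carbide modulus below are assumed for illustration, not taken from the study.

```python
import math

def tool_deflection(force_n, overhang_mm, diameter_mm, e_gpa=600.0):
    # cantilever-beam approximation: delta = F * L^3 / (3 * E * I)
    # e_gpa ~ 600 GPa is a typical value assumed for a solid carbide end mill
    I = math.pi * (diameter_mm/2.0)**4 / 4.0          # second moment of area, mm^4
    E = e_gpa * 1000.0                                # convert GPa to N/mm^2
    return force_n * overhang_mm**3 / (3.0 * E * I)   # deflection, mm

# machining error at each simulated "moment" as the cutting force varies
forces = [120.0, 180.0, 150.0]                        # N, per engagement position
errors = [tool_deflection(f, overhang_mm=40.0, diameter_mm=10.0) for f in forces]
```

Because deflection is linear in force, the predicted error profile simply tracks the force history; the FEM approach in the paper captures, in addition, the work-piece compliance and the changing geometry as material is removed.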
A study of mathematical thoughts in the geometrical design of bertam (wild Bornean sago) weaving
Wan Bakar, Wan Norliza; Ahmad Shukri, Fuziatul Norsyiha; Ramli, Masnira
2013-04-01
Kelarai, a design for weaving made up of various motifs, can be produced from a variety of natural products. The main focus of the study is on Kelarai Bertam, which was selected for the firm and robust quality of its wood, often used as a building material for traditional houses. The objective of the study is to investigate the geometrical design in the weaving art of kelarai bertam, the stimulation of mathematical thoughts in geometrical designs, and the evolution of geometrical designs in the weaving art of kelarai bertam. The research method was triangulation, consisting of observation, interview and analysis. The findings revealed that quadrilateral forms and symmetrical forms such as reflection, translation, rotation and magnification were present in the weaving of kelarai bertam. The design of kelarai has undergone a process of evolution, beginning with the Siku Keluang motif and now expanded to 20 exquisite motifs. The evolution of the designs and motifs resulted from observations of native house and wall decorations from Indonesia. In fact, the geometrical concept has long been applied in the weaving art, though not consciously by the individual weaver. It is suggested that extensive research be conducted on the combination of different weaving materials, such as wild bertam and bamboo, which will produce different geometrical designs such as polygons.
Geometric frustration in gadolinium gallium garnet: a Monte Carlo study
Petrenko, Oleg A.; Paul, Don McK.
1999-06-01
We have studied the magnetic properties of the frustrated triangular antiferromagnet Gd3Ga5O12 (GGG) by means of classical Monte Carlo simulations. Low-temperature specific heat, magnetization, susceptibility, autocorrelation function and neutron scattering function have been calculated for several models including different types of magnetic interactions and with the presence of an external magnetic field. In order to reproduce the experimentally observed properties of GGG, the simulation model must include nearest neighbor exchange interactions and also dipolar forces. In zero field there is a tendency to form incommensurate short-range magnetic order around positions in reciprocal space where antiferromagnetic Bragg peaks appear in an applied magnetic field.
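A classical Metropolis Monte Carlo sweep of the kind used in such simulations can be sketched for Heisenberg spins with nearest-neighbour antiferromagnetic exchange only; the chain lattice, temperature, and sweep count below are illustrative simplifications (the GGG study itself uses the garnet lattice and adds dipolar interactions and an applied field).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spins(n):
    # n unit vectors uniformly distributed on the sphere
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def energy(spins, bonds, J=1.0):
    # antiferromagnetic (J > 0) nearest-neighbour exchange: E = J * sum S_i . S_j
    return J * sum(np.dot(spins[i], spins[j]) for i, j in bonds)

def metropolis_sweep(spins, neigh, T, J=1.0):
    n = len(spins)
    for i in rng.integers(0, n, size=n):
        new = rng.normal(size=3)
        new /= np.linalg.norm(new)               # propose a fresh random direction
        dE = J * sum(np.dot(new - spins[i], spins[j]) for j in neigh[i])
        if dE < 0 or rng.random() < np.exp(-dE / T):
            spins[i] = new                       # Metropolis accept/reject
    return spins

# a small periodic chain as a stand-in lattice
n = 32
bonds = [(i, (i + 1) % n) for i in range(n)]
neigh = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
spins = random_spins(n)
e0 = energy(spins, bonds)
for _ in range(500):
    spins = metropolis_sweep(spins, neigh, T=0.1)
e1 = energy(spins, bonds)
```

At low temperature the sweeps drive the chain toward the antiferromagnetic ground state (energy near -J per bond); in the GGG model the triangular connectivity frustrates this relaxation, which is what produces the incommensurate short-range order reported above.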
A Study of Van Hiele of Geometric Thinking among 1st through 6th Graders
Ma, Hsiu-Lan; Lee, De-Chih; Lin, Szu-Hsing; Wu, Der-Bang
2015-01-01
This study presents partial results from the project "A Study of perceptual apprehensive, operative apprehensive, sequential apprehensive, and discursive apprehensive for elementary school students (POSD)", which was undertaken to explore gender differences and passing rate of van Hiele's geometric thinking level. The participants were…
Geometrical Studies of Complex Geological Media Using Scaling Laws
Energy Technology Data Exchange (ETDEWEB)
Huseby, O.K. [Institutt for Energiteknikk, Kjeller (Norway)
1996-12-31
This doctoral thesis applies scaling concepts to characterize the morphology of geological porous media and fracture networks and relates scaling exponents to the sample's physical properties. The first part of the thesis applies multifractal scaling (MS) to the study of the morphology of random porous media. MS is used to characterize scanning electron microscope (SEM) images of chalk samples from the North Sea, and dipmeter micro resistivity signals from several wells in different North Sea reservoirs. The second part of the thesis develops and characterizes a model of a stochastic fracture network. Here, the application of the scaling concept differs from that in the first part, since the scaling concept is applied to percolation structures and not used as a characterization tool based on MS. Several methods and concepts must be introduced to characterize geological data and to understand the fracture model, and some of them are described in four enclosed research papers. Paper 1, on the SEM images, suggests that the pore space of sedimentary chalk is multifractal. Papers 2 and 3, on the dipmeter signals from reservoir wells, present a new method for extracting information about geological formations from a micro resistivity log. The main conclusion of Paper 4, on fracture networks, is that the influence of fracture shapes on percolation thresholds, block densities and topology can be explained using the concept of excluded volume. 114 refs., 48 figs., 6 tabs.
Model Study of Wave Overtopping of Marine Structure for a Wide Range of Geometric Parameters
DEFF Research Database (Denmark)
Kofoed, Jens Peter
2000-01-01
The objective of the study described in this paper is to enable estimation of wave overtopping rates for slopes/ramps given by a wide range of geometric parameters when subjected to varying wave conditions. To achieve this a great number of model tests are carried out in a wave tank using irregular...
A Study of Error Analysis from Students’ Sentences in Writing
Directory of Open Access Journals (Sweden)
Rizki Ananda
2014-10-01
This study investigated the types of sentence errors, and their frequency, made by first grade students from a high school in Banda Aceh in their writing of English. The participants were 44 first graders chosen by random sampling. The research method was quantitative, as the data were analyzed with a statistical procedure. The data were obtained from written tests for a descriptive text entitled "My school" of 120-140 words in length. This study found that three out of four sentence errors in the students' writing were fragmented sentences, whilst nearly a quarter of the errors were run-on or comma splice sentences. There were only a few choppy sentence errors and no stringy sentence errors. The data revealed five types of fragmented sentences: the absence of a subject, the absence of a verb, the absence of both a subject and a verb, the absence of a verb in a dependent clause, and the absence of an independent clause.
Institute of Scientific and Technical Information of China (English)
程强; 刘广博; 刘志峰; 玄东升; 常文芬
2012-01-01
The volumetric error coupled from the geometric errors of individual parts is the main factor affecting machining accuracy. How to determine the degree to which the geometric errors of parts influence machining precision, and thus distribute the geometric tolerances of parts economically and reasonably, is currently a difficult problem in machine tool design. Based on multi-body system theory and sensitivity analysis, a new method of identifying the key geometric error source parameters is proposed. Taking a four-axis precision horizontal machining center as an example, the precision model of the machining center is built with multi-body system theory, a mathematical model for error sensitivity analysis of four-axis computer numerical control machine tools is established with the matrix differential method, and the key geometric error sources affecting machining precision are identified after the error sensitivity coefficients are calculated and analyzed. Calculation and experiment show that the geometric error factors of major parts with a relatively significant influence on the comprehensive spatial error of the machine tool can be identified effectively, providing an important theoretical basis for improving machine tool precision reasonably and economically.
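The sensitivity-analysis step described in this abstract can be illustrated with a minimal numerical sketch. The error model below is a hypothetical linear stand-in for the multi-body volumetric error model (the weights and nominal error values are invented for illustration), and the sensitivity coefficients dE/dg_i are estimated by central finite differences rather than the paper's analytic matrix differentials.

```python
import numpy as np

def volumetric_error(g):
    """Hypothetical volumetric error model: a weighted combination of five
    component geometric errors. The weights stand in for the multi-body
    kinematic chain of a real machine tool and are invented for illustration."""
    weights = np.array([1.0, 0.4, 2.5, 0.1, 1.8])
    return float(weights @ g)

def sensitivity_coefficients(f, g0, h=1e-9):
    """Estimate dE/dg_i by central finite differences around the nominal
    error vector g0 (a numerical stand-in for the matrix differential)."""
    g0 = np.asarray(g0, dtype=float)
    coeffs = np.zeros_like(g0)
    for i in range(g0.size):
        gp, gm = g0.copy(), g0.copy()
        gp[i] += h
        gm[i] -= h
        coeffs[i] = (f(gp) - f(gm)) / (2 * h)
    return coeffs

g_nominal = np.array([5e-6, 3e-6, 2e-6, 8e-6, 1e-6])  # metres, illustrative
s = sensitivity_coefficients(volumetric_error, g_nominal)
key_source = int(np.argmax(np.abs(s)))  # most influential error source
```

Ranking the absolute sensitivity coefficients then identifies the error sources whose tolerances matter most, which is the economic-allocation idea of the abstract.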
On Geometric Infinite Divisibility
Sandhya, E.; Pillai, R. N.
2014-01-01
The notion of geometric version of an infinitely divisible law is introduced. Concepts parallel to attraction and partial attraction are developed and studied in the setup of geometric summing of random variables.
A Study of Syntactic Errors in College English Writing
Institute of Scientific and Technical Information of China (English)
LV Yue-yue; LI Jing
2014-01-01
Writing is an integrated task that tests language learners' ability with vocabulary, grammar, syntax, etc., and is therefore favored by many examinations both at home and abroad. However, Chinese students tend to perform poorly in English writing. The paper randomly chose 30 compositions by sophomores from a Beijing university and picked out the syntactic errors, which can hardly be corrected merely with reference books and are therefore of great value to study. These errors are classified into eight types and their causes are put forward so that students can avoid them in future studies.
Error and reinforcement processing in ADHD : An electrophysiological study
Groen, Yvonne
2011-01-01
Introduction and Objective(s) Current explanatory models of ADHD suggest abnormal reinforcement sensitivity, but the exact nature of this deficit is unclear. In this study we investigate electrophysiological reactions to positive/negative reinforcement as well as correct/error responses to gain more
The study of technical error analysis on BMD using DEXA
Energy Technology Data Exchange (ETDEWEB)
Kang, Yeong Han [Daegu Catholic University Hospital, Daegu (Korea, Republic of); Jo, Gwang Ho [Daegu Catholic University, Daegu (Korea, Republic of)
2006-12-15
This study was conducted to identify the types of technical error in DEXA (dual-energy X-ray absorptiometry) and their effect on the measurement of BMD. The changes in BMD (g/cm², T-score) caused by patient information (age, weight, height, menopause age) input errors and by ROI confirmation errors were investigated. Using a spine phantom, we scanned 10 times for each increase and decrease in age (5, 10 years), weight (10, 20 kg), height (5, 10 cm) and menopause age (5, 10 years). The scanning regions (L-spine, femur, forearm) of 10 patients were recalculated with changed ROIs, and differences of means (precision 1%) were analyzed. Errors in patient information (age, weight, height, menopause age) did not change the results. In ROI confirmation, the BMD and T-score of the L-spine decreased by 0.063 g/cm² and 0.3 when T-12 was included, and increased by 0.077 g/cm² and 0.5 when L-5 was included. Narrowing the vertical line of the ROI by 1 cm decreased the BMD and T-score by 0.006 g/cm² and 0.1, and by 2 cm, 0.021 g/cm² and 0.15, respectively. In the hip ROI, upper and left shifts (0.5 cm) of the line did not influence BMD and T-score. With a 0.5 cm lower shift (below the lesser trochanter), the BMD and T-score increased by 0.031 g/cm² and 0.3, and with a 1 cm shift, by 0.094 g/cm² and 0.65, respectively. In the forearm ROI, the BMD and T-score decreased by 0.042 g/cm² and 0.9 when 1 cm of the lower wrist was included; expanding the vertical line by 1 cm decreased them by 0.008 g/cm² and 0.1, and by 2 cm, 0.021 g/cm² and 0.3, respectively. The L-spine, hip and forearm ROI errors thus changed the results. There are many kinds of technical error in BMD processing. Errors in age, weight, height and menopause age did not influence BMD (g/cm²) and T-score, but there were mean differences in BMD and T-score with ROI confirmation. For a precise examination, in L-spine processing L1-4 must be confirmed without shifting the vertical line of the ROI, and in hip processing the ROI must include the greater trochanter, femoral head and lesser trochanter.
Post-error slowing in sequential action: An aging study
Directory of Open Access Journals (Sweden)
Marit eRuitenberg
2014-02-01
Previous studies demonstrated significant differences in the learning and performance of discrete movement sequences across the lifespan: young adults (18-28 years) showed more indications of the development of (implicit) motor chunks and explicit sequence knowledge than middle-aged (55-62 years; Verwey et al., 2011) and elderly participants (75-88 years; Verwey, 2010). Still, even in the absence of indications of motor chunks, the middle-aged and elderly participants showed some performance improvement too. This was attributed to a sequence learning mechanism in which individual reactions are primed by implicit sequential knowledge. The present work further examined sequential movement skill across these age groups. We explored the consequences of making an error on the execution of a subsequent sequence, and investigated whether this is modulated by aging. To that end, we re-analyzed the data from our previous studies. Results demonstrate that sequencing performance is slowed after an error has been made in the previous sequence. Importantly, for young adults and middle-aged participants the observed slowing was also accompanied by increased accuracy after an error. We suggest that slowing in these age groups involves both functional and non-functional components, while slowing in elderly participants is non-functional. Moreover, using action sequences (instead of single key-presses) may allow better tracking of the effects of errors on performance.
Positioning errors in digital panoramic radiographs: A study
Directory of Open Access Journals (Sweden)
A Cicilia Subbulakshmi
2016-01-01
Panoramic radiography is a unique and very useful extraoral film technique that allows the dentist to view the entire dentition and related structures, from condyle to condyle, on one film. Capturing a wide range of structures on a single film increases the odds of errors in digital panoramic radiographs. Improper positioning of the patient complicates this further, reducing the diagnostic usefulness of these radiographs. Wide knowledge of the common positioning errors and the ways to rectify them benefits dentists in interpretation and diagnosis. Aim: This study is aimed at analyzing the 10 common positional errors (anteriorly positioned, posteriorly positioned, head tilted upwards, head tilted downwards, head twisted to one side, head tipped, overlapping of the spine in the lower anterior region, tongue not placed close to the palate, patient movement, and ghost images) in 200 digital panoramic radiographs selected randomly. Materials and Methods: Two hundred digital panoramic radiographic images of patients above 6 years of age were selected randomly from the stored data in the system, projected on a white screen, and studied. The radiographs were analyzed by two oral medicine and radiology specialists, recording separately, and then the results were analyzed. Results: The most common error was failure to place the tongue close to the palate, which leads to a radiolucent airspace obscuring the roots of the maxillary teeth.
A study of medication errors in a tertiary care hospital
Directory of Open Access Journals (Sweden)
Nrupal Patel
2016-01-01
Objective: To determine the nature and types of medication errors (MEs), to evaluate the occurrence of drug-drug interactions (DDIs), and to assess the rationality of prescription orders in a tertiary care teaching hospital. Materials and Methods: A prospective, observational study was conducted in the General Medicine and Pediatric wards of Civil Hospital, Ahmedabad from October 2012 to January 2014. MEs were categorized as prescription errors (PEs), dispensing errors, and administration errors (AEs). The case records and treatment charts were reviewed. The investigator also accompanied the staff nurse during ward rounds and interviewed patients or caretakers to gather information, if necessary. DDIs were assessed with the Medscape Drug Interaction Checker software (version 4.4). The rationality of prescriptions was assessed using Phadke's criteria. Results: A total of 1109 patients (511 in the Medicine and 598 in the Pediatric ward) were included during the study period. The total number of MEs was 403 (36%), of which 195 (38%) were in the Medicine and 208 (35%) in the Pediatric wards. The most common MEs were PEs, 262 (65%), followed by AEs, 126 (31%). Potentially significant DDIs were observed in 191 (17%) and serious DDIs in 48 (4%) prescriptions. The majority of prescriptions were semirational, 555 (53%), followed by irrational, 317 (30%), while 170 (17%) prescriptions were rational. Conclusion: There is a need to establish an ME reporting system to reduce the incidence of MEs and improve patient care and safety.
Directory of Open Access Journals (Sweden)
Diana Marcela Perez Berrio
2014-07-01
Meniscus alloimplants have been used as a source of tissue for replacement in cases of breakage or irreparable damage. To determine possible changes due to conservation, this study set out to geometrically evaluate fresh menisci and menisci preserved in 98% glycerin. Fifteen medial menisci from eight albino rabbits of the New Zealand breed were used, divided into three groups: five fresh menisci (GI); five menisci preserved in 98% glycerin for eight months (GII); and five menisci preserved in 98% glycerin for eight months and then rehydrated in 0.9% saline solution for 24 hours (GIII). All menisci were measured with a vernier caliper at seven points of their geometric structure. The study established that there were no statistical differences in the measurements of GII and GIII when compared to GI, nor in the measurements of GIII when compared to GII; thus rehydration in antibiotic saline solution for 24 hours can be considered unnecessary.
Studies on a Double Poisson-Geometric Insurance Risk Model with Interference
Directory of Open Access Journals (Sweden)
Yujuan Huang
2013-01-01
This paper studies a generalized double Poisson-Geometric insurance risk model. By a martingale and stopping time approach, we obtain the adjustment coefficient equation, the Lundberg inequality, and the formula for the ruin probability. The Laplace transform of the time when the surplus first reaches a given level is also discussed, and its expectation and variance are obtained. Finally, we give numerical examples.
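As a worked illustration of the adjustment coefficient and the Lundberg inequality mentioned above, the sketch below solves the adjustment coefficient equation numerically for the classical compound Poisson risk model with exponential claims, a simplification of the paper's double Poisson-Geometric model (all parameter values are illustrative).

```python
import math

def adjustment_coefficient(lam, mu, theta):
    """Solve lam*(M_X(r) - 1) - c*r = 0 for r > 0 by bisection, with
    exponential claims of mean mu (so M_X(r) = 1/(1 - mu*r)) and premium
    rate c = (1 + theta)*lam*mu (classical compound Poisson model)."""
    c = (1 + theta) * lam * mu
    f = lambda r: lam * (1.0 / (1.0 - mu * r) - 1.0) - c * r
    lo, hi = 1e-9, (1.0 - 1e-9) / mu  # f(lo) < 0, f(hi) > 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters: claim rate 2, mean claim 1, 25% safety loading.
R = adjustment_coefficient(lam=2.0, mu=1.0, theta=0.25)
# Closed form for exponential claims: R = theta / ((1 + theta)*mu) = 0.2.
# Lundberg inequality: ruin probability psi(u) <= exp(-R*u).
bound_at_10 = math.exp(-R * 10.0)
```

The bisection reproduces the known closed form for exponential claims, and the Lundberg bound then follows directly from the computed R.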
Textile surface design: a study of the geometrical knowledge in African American Freedom Quilts
Directory of Open Access Journals (Sweden)
Franciele Menegucci
2016-06-01
This paper discusses the presence of materials and of technical and technological knowledge originating in African culture in textile surface design, through the analysis of the geometric patterns and compositions present in works made with the quilt technique. This relationship is discussed from the perspective of geometry and textile surface design, showing how reflection on ethnic and cultural diversity can contribute to the affirmation and appreciation of certain cultures and their knowledge, which in many instances are uprooted and wrongly attributed to European and Euro-descendant cultures. The paper seeks to highlight the African origin of some geometric patterns lavishly designed on textile surfaces, often reproduced, taught and disseminated without their proper provenance being acknowledged. This reflection demonstrates the contributions of African and African-descendant culture to textile surface design, legitimizing the systematization of African knowledge, which can be understood through the study of its artifacts.
On geometric complexity of earthquake focal zone and fault system: A statistical study
Kagan, Yan Y
2008-01-01
We discuss various methods used to investigate the geometric complexity of earthquakes and earthquake faults, based both on a point-source representation and the study of interrelations between earthquake focal mechanisms. We briefly review the seismic moment tensor formalism and discuss in some detail the representation of double-couple (DC) earthquake sources by normalized quaternions. Non-DC earthquake sources like the CLVD focal mechanism are also considered. We obtain the characterization of the earthquake complex source caused by summation of disoriented DC sources. We show that commonly defined geometrical fault barriers correspond to the sources without any CLVD component. We analyze the CMT global earthquake catalog to examine whether the focal mechanism distribution suggests that the CLVD component is likely to be zero in tectonic earthquakes. Although some indications support this conjecture, we need more extensive and significantly more accurate data to answer this question fully.
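The quaternion representation of double-couple orientations mentioned above can be sketched as follows: two orientations given as unit quaternions are compared by the minimal 3-D rotation angle between them. This ignores the symmetry group of the double-couple source, over which a full analysis would minimize; it is a simplified illustration, not the paper's complete method.

```python
import math

def rotation_angle(q1, q2):
    """Minimal 3-D rotation angle between two orientations given as unit
    quaternions (w, x, y, z): theta = 2*arccos(|<q1, q2>|). Quaternions q
    and -q describe the same rotation, hence the absolute value."""
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    return 2.0 * math.acos(min(1.0, dot))

# Identity orientation vs a 90-degree rotation about the vertical axis.
q_id = (1.0, 0.0, 0.0, 0.0)
q_z90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
theta_deg = math.degrees(rotation_angle(q_id, q_z90))  # 90 degrees
```

Disorientation statistics of a focal mechanism catalog would then be built from many such pairwise angles, after accounting for source symmetry.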
Hilburger, Mark W.; Starnes, James H., Jr.
2004-01-01
The results of a parametric study of the effects of initial imperfections on the buckling and postbuckling response of three unstiffened thin-walled compression-loaded graphite-epoxy cylindrical shells with different orthotropic and quasi-isotropic shell-wall laminates are presented. The imperfections considered include initial geometric shell-wall midsurface imperfections, shell-wall thickness variations, local shell-wall ply-gaps associated with the fabrication process, shell-end geometric imperfections, nonuniform applied end loads, and variations in the boundary conditions including the effects of elastic boundary conditions. A high-fidelity nonlinear shell analysis procedure that accurately accounts for the effects of these imperfections on the nonlinear responses and buckling loads of the shells is described. The analysis procedure includes a nonlinear static analysis that predicts stable response characteristics of the shells and a nonlinear transient analysis that predicts unstable response characteristics.
Error Propagation in Geodetic Networks Studied by FEMLAB
DEFF Research Database (Denmark)
Borre, Kai
2009-01-01
… and estimate the solution by using the principle of least squares. Contemporary networks often contain several thousand points. This leads to such large matrix problems that one starts thinking of using continuous network models. These result in one or more differential equations with corresponding boundary conditions. The Green's function works like the covariance matrix in the discrete case. If we can find the Green's function, we can also study error propagation through large networks. Exactly this idea is exploited for error propagation studies in large geodetic networks. To solve the boundary value problems we have used the FEMLAB software, a powerful tool for this type of problem. The M-file was created …
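The discrete least-squares picture that the abstract contrasts with the continuous model can be sketched on a toy levelling network: the covariance of the estimated heights, sigma0^2 (A^T A)^{-1}, plays the role that the Green's function plays in the continuous formulation (the network, observations and sigma0 below are invented for illustration).

```python
import numpy as np

# Toy levelling network: two unknown heights h1, h2 above a fixed datum.
# Observed height differences (invented values): datum->1, 1->2, 2->datum.
A = np.array([[ 1.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])
l = np.array([10.02, 5.01, -15.00])

# Least-squares estimate and its covariance: the matrix (A^T A)^{-1} is the
# discrete analogue of the Green's function in the continuous network model.
N = A.T @ A
x_hat = np.linalg.solve(N, A.T @ l)
sigma0 = 0.01  # assumed std. dev. of one levelled difference, metres
Qx = sigma0**2 * np.linalg.inv(N)
```

For networks with thousands of points, N becomes enormous, which is exactly the motivation the abstract gives for switching to continuous (differential equation) models.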
Study of systematic errors in the luminosity measurement
Energy Technology Data Exchange (ETDEWEB)
Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics
1993-04-01
The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and the clustering effect is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We expect that the systematic error in this region will also be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, from a comparison of the Tobimatsu-Shimizu program and BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in leading-log approximation. (J.P.N.)
Shen, Jesse; Kadoury, Samuel; Labelle, Hubert; Parent, Stefan
2016-12-15
Consecutive case series analysis. To evaluate the surgical outcomes of patients with thoracic adolescent idiopathic scoliosis (AIS) in relation to different degrees of geometric torsion. AIS is a three-dimensional (3D) deformity of the spine. A 3D classification of AIS, however, remains elusive because there is no widely accepted 3D parameter in clinical practice. Recently, a new method of estimating geometric torsion was proposed, which detected two potential new 3D subgroups based on geometric torsion values. This is an analysis of 93 patients with Lenke type-1 deformity from our institution. 3D reconstructions were obtained using biplanar radiographs both pre- and postoperatively. Geometric torsion was computed using a novel technique that approximates local arc lengths at the neutral vertebra in the thoracolumbar segment. An inter- and intragroup statistical analysis was performed to compare clinical indices of patients with different torsion values. A qualitative assessment was also performed on each patient by two senior staff surgeons. Statistically significant differences were observed in clinical indices between the high torsion (2.85 mm) and low torsion (0.83 mm) Lenke type 1 subgroups. Preoperatively, the high torsion group showed higher Cobb angle values in the thoracic segment (71.18° vs. 63.74°), as well as higher angulation in the thoracolumbar plane of maximum deformity (67.79° vs. 53.30°). Postoperatively, a statistically significant difference was found in the orientation of the plane of maximum deformity in the thoracolumbar segment between the high and low torsion groups (47.95° vs. 30.03°). The qualitative evaluation of surgical results differed between the two staff surgeons. These results suggest a link between preoperative torsion values and surgical outcomes within Lenke type 1 deformities. They will need to be validated by an independent group, as this is a single-center study. Level of evidence: 4.
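The idea of estimating geometric torsion from local samples of a 3-D curve can be illustrated with a generic discrete-curve sketch (not the authors' exact arc-length algorithm): torsion is approximated from finite-difference derivatives of sampled points and checked against the known torsion b/(a^2 + b^2) of a circular helix.

```python
import numpy as np

def torsion_estimate(points, h):
    """Estimate the geometric torsion at the middle sample of a 3-D curve
    from central finite-difference derivatives:
    tau = det(r', r'', r''') / |r' x r''|^2.
    A generic sketch; the clinical method approximates local arc lengths."""
    p = np.asarray(points, dtype=float)
    i = len(p) // 2
    d1 = (p[i + 1] - p[i - 1]) / (2 * h)
    d2 = (p[i + 1] - 2 * p[i] + p[i - 1]) / h**2
    d3 = (p[i + 2] - 2 * p[i + 1] + 2 * p[i - 1] - p[i - 2]) / (2 * h**3)
    num = np.linalg.det(np.column_stack([d1, d2, d3]))
    return num / np.linalg.norm(np.cross(d1, d2))**2

# Circular helix (a*cos t, a*sin t, b*t) has exact torsion b/(a^2 + b^2).
a, b, h = 2.0, 1.0, 0.01
t = np.arange(-5, 6) * h
helix = np.column_stack([a * np.cos(t), a * np.sin(t), b * t])
tau = torsion_estimate(helix, h)  # close to 1/(4 + 1) = 0.2
```

In the clinical setting, the sampled points would be the 3-D vertebral landmarks from the biplanar reconstruction rather than an analytic curve.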
Nursing student medication errors involving tubing and catheters: a descriptive study.
Wolf, Zane Robinson; Hicks, Rodney W; Altmiller, Geralyn; Bicknell, Patricia
2009-08-01
This retrospective case study examined reports (N=27) of medication errors made by nursing students involving tubing and catheter misconnections. Characteristics of misconnection errors included attributes of events recorded on MEDMARX error reports of the United States Pharmacopeia. Two near miss errors or Category B errors (medication error occurred, did not reach patient) were identified, with 21 Category C medication errors (occurred, with no resulting patient harm), and four Category D errors (need for increased patient monitoring, no patient harm) reported. Reported intravenous tubing errors were more frequent than other types of tubing errors, and problems with clamps were present in 12 error reports. Registered nurses discovered most of the errors; some were implicated in the mistakes along with the students.
A comparative study of voluntarily reported medication errors among ...
African Journals Online (AJOL)
Pharmacotherapy Group, Faculty of Pharmacy, University of Benin, Benin City, ... errors among adult patients in intensive care (IC) and non- .... category include system error, documentation .... the importance of patient safety and further.
A STUDY ON PREVALENCE OF REFRACTIVE ERRORS IN SCHOOL CHILDREN
Directory of Open Access Journals (Sweden)
Kolli Sree Karuna
2014-08-01
"Sarvendriya nam nayanam pradhanam": of all the organs in the body, the eyes are the most important. Blindness or defective vision decreases the productivity of the nation in addition to increasing dependency. Refractive errors throw school children into a defective future. Nutritional deficiency, mental strain and wrong reading habits are some of the causes of this defect in these children. Vision is essential for the academic and overall development of children, who are the future Indian citizens. An attempt was made to study the prevalence of refractive errors in school children. Lions Clubs International has come forward to present spectacles to all needy children to correct their refractive errors. MATERIALS & METHODS: A quantitative method was used: history was taken from all the students by questionnaire using a preformed structured format, visual acuity was clinically examined for all students using a Snellen's chart and pinhole occluder, and colour vision was tested using Ishihara charts. Five hundred students participated in this cross-sectional study, and the results were analyzed using Microsoft Excel. 21.4% eat carrots daily, 15.9% weekly once, 20.2% weekly twice, 27.1% monthly once, 23.8% monthly twice, and 26.4% do not eat carrots at all; defective vision was most prevalent in children eating carrots once a month. 6.7% eat green leafy vegetables daily, 21% weekly once, 21.9% weekly twice, 13.6% monthly once, 27.3% monthly twice, and 33.3% not at all; defective vision was most common in children who do not eat green leafy vegetables at all. 19.9% eat fruits daily, 24.9% weekly once, 21.3% weekly twice, 20% monthly once, 6.7% monthly twice and the remaining do not eat fruits at all; defective vision was most common in children who do not eat fruits at all. All the students with refractive errors were provided with spectacles.
Energy Technology Data Exchange (ETDEWEB)
Veen, Berlinda J. van der; Younis, Imad Al [Leiden University Medical Centre, Department of Nuclear Medicine, Leiden (Netherlands); Ajmone-Marsan, Nina; Bax, Jeroen J. [Leiden University Medical Centre, Department of Cardiology, Leiden (Netherlands); Westenberg, Jos J.M.; Roos, Albert de [Leiden University Medical Centre, Department of Radiology, Leiden (Netherlands); Stokkel, Marcel P.M. [Antoni van Leeuwenhoek Hospital, Department of Nuclear Medicine, Netherlands Cancer Institute, Amsterdam (Netherlands)
2012-03-15
Left ventricular dyssynchrony may predict response to cardiac resynchronization therapy and may well predict adverse cardiac events. Recently, a geometrical approach for dyssynchrony analysis of myocardial perfusion scintigraphy (MPS) was introduced. In this study the feasibility of this geometrical method to detect dyssynchrony was assessed in a population with a normal MPS and in patients with documented ventricular dyssynchrony. For the normal population 80 patients (40 men and 40 women) with normal perfusion (summed stress score ≤2 and summed rest score ≤2) and function (left ventricular ejection fraction 55-80%) on MPS were selected; 24 heart failure patients with proven dyssynchrony on MRI were selected for comparison. All patients underwent a 2-day stress/rest MPS protocol. Perfusion, function and dyssynchrony parameters were obtained by the Corridor4DM software package (version 6.1). For the normal population time to peak motion was 42.8 ± 5.1% RR cycle, SD of time to peak motion was 3.5 ± 1.4% RR cycle and bandwidth was 18.2 ± 6.0% RR cycle. No significant gender-related differences or differences between rest and post-stress acquisition were found for the dyssynchrony parameters. Discrepancies between the normal and abnormal populations were most profound for the mean wall motion (p value <0.001), SD of time to peak motion (p value <0.001) and bandwidth (p value <0.001). It is feasible to quantify ventricular dyssynchrony in MPS using the geometrical approach as implemented by Corridor4DM. (orig.)
Geometric morphometrics as a tool for improving the comparative study of behavioural postures
Fureix, Carole; Hausberger, Martine; Seneque, Emilie; Morisset, Stéphane; Baylac, Michel; Cornette, Raphaël; Biquand, Véronique; Deleporte, Pierre
2011-07-01
Describing postures has always been a central concern when studying behaviour. However, attempts to compare postures objectively at the phylogenetic, population, inter-individual or intra-individual level generally either rely upon a few key elements or remain highly subjective. Here, we propose a novel approach, based on well-established geometric morphometrics, to describe and to analyse postures globally (i.e. considering the animal's body posture in its entirety rather than focusing only on a few salient elements, such as head or tail position). Geometric morphometrics is concerned with describing and comparing variation and changes in the form (size and shape) of organisms using the coordinates of a series of homologous landmarks (i.e. positioned in relation to skeletal or muscular cues that are the same for different species for every variety of form and function and that have derived from a common ancestor, i.e. they have a common evolutionary ancestry, e.g. neck, wings, flipper/hand). We applied this approach to horses, using global postures (1) to characterise behaviours that correspond to different arousal levels, (2) to test the potential impact of environmental changes on postures. Our application of geometric morphometrics to horse postures showed that this method can be used to characterise behavioural categories, to evaluate the impact of environmental factors (here human actions) and to compare individuals and groups. Beyond its application to horses, this promising approach could be applied to all questions involving the analysis of postures (evolution of displays, expression of emotions, stress and welfare, behavioural repertoires…) and could lead to a whole new line of research.
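The geometric-morphometrics machinery the authors build on can be sketched with an ordinary Procrustes superimposition of two landmark configurations: translation, scale and rotation are removed before shapes (here, a hypothetical 2-D "posture" of four landmarks) are compared.

```python
import numpy as np

def procrustes_distance(X, Y):
    """Ordinary Procrustes superimposition of two k x 2 landmark
    configurations: remove translation and scale, rotate Y optimally onto X
    (via SVD), and return the residual Procrustes distance."""
    def normalize(A):
        A = np.asarray(A, dtype=float)
        A = A - A.mean(axis=0)        # remove translation
        return A / np.linalg.norm(A)  # scale to unit centroid size
    Xn, Yn = normalize(X), normalize(Y)
    U, _, Vt = np.linalg.svd(Xn.T @ Yn)
    R = (U @ Vt).T                    # optimal rotation of Yn onto Xn
    return float(np.linalg.norm(Xn - Yn @ R))

# A hypothetical 4-landmark "posture", then the same shape rotated,
# rescaled and translated: the Procrustes distance is essentially zero.
X = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
c, s = np.cos(0.7), np.sin(0.7)
Y = 3.0 * X @ np.array([[c, -s], [s, c]]) + 5.0
d = procrustes_distance(X, Y)
```

A whole-body posture analysis would use many anatomical landmarks per animal and a generalized Procrustes fit over all individuals, but the shape-distance idea is the same.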
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2016-01-25
Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod-with-insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions respectively. The simplified model could be produced in 3 h, much less than the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility, although further validity testing across a range of therapeutic footwear types is required.
Geometrical Bioelectrodynamics
Ivancevic, Vladimir G
2008-01-01
This paper proposes rigorous geometrical treatment of bioelectrodynamics, underpinning two fast-growing biomedical research fields: bioelectromagnetism, which deals with the ability of life to produce its own electromagnetism, and bioelectromagnetics, which deals with the effect on life from external electromagnetism. Keywords: Bioelectrodynamics, exterior geometrical machinery, Dirac-Feynman quantum electrodynamics, functional electrical stimulation
Localized Geometric Query Problems
Augustine, John; Maheshwari, Anil; Nandy, Subhas C; Roy, Sasanka; Sarvattomananda, Swami
2011-01-01
A new class of geometric query problems is studied in this paper. We are required to preprocess a set of geometric objects $P$ in the plane, so that for any arbitrary query point $q$, the largest circle that contains $q$ but does not contain any member of $P$ can be reported efficiently. The geometric sets that we consider are point sets and boundaries of simple polygons.
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This paper discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 4x2 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from cubesats, unmanned air vehicles (UAV), and commercial aircraft.
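EVM itself is straightforward to compute from reference and measured constellation symbols; the sketch below uses synthetic unit-power QPSK symbols with additive Gaussian noise (all values are illustrative, not the measured data from this study).

```python
import numpy as np

def evm_percent(measured, reference):
    """RMS error vector magnitude in percent, normalized by the RMS
    magnitude of the reference constellation."""
    measured, reference = np.asarray(measured), np.asarray(reference)
    err_power = np.mean(np.abs(measured - reference) ** 2)
    ref_power = np.mean(np.abs(reference) ** 2)
    return 100.0 * np.sqrt(err_power / ref_power)

# Synthetic unit-power QPSK symbols plus additive Gaussian noise.
rng = np.random.default_rng(0)
symbols = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
ref = rng.choice(symbols, size=1000)
noise = rng.normal(0, 0.05, 1000) + 1j * rng.normal(0, 0.05, 1000)
evm = evm_percent(ref + noise, ref)  # about 7% for this noise level
```

Low EVM over a link, as reported in the abstract, indicates that the antennas preserve the constellation well enough for the chosen modulation scheme.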
Aerogel Antennas Communications Study Using Error Vector Magnitude Measurements
Miranda, Felix A.; Mueller, Carl H.; Meador, Mary Ann B.
2014-01-01
This presentation discusses an aerogel antennas communication study using error vector magnitude (EVM) measurements. The study was performed using 2x4 element polyimide (PI) aerogel-based phased arrays designed for operation at 5 GHz as transmit (Tx) and receive (Rx) antennas separated by a line of sight (LOS) distance of 8.5 meters. The results of the EVM measurements demonstrate that polyimide aerogel antennas work appropriately to support digital communication links with typically used modulation schemes such as QPSK and pi/4 DQPSK. As such, PI aerogel antennas with higher gain, larger bandwidth and lower mass than typically used microwave laminates could be suitable to enable aerospace-to-ground communication links with enough channel capacity to support voice, data and video links from CubeSats, unmanned air vehicles (UAV), and commercial aircraft.
Modelling Soft Error Probability in Firmware: A Case Study
Directory of Open Access Journals (Sweden)
DG Kourie
2012-06-01
This case study involves an analysis of firmware that controls explosions in mining operations. The purpose is to estimate the probability that external disruptive events (such as electromagnetic interference) could drive the firmware into a state which results in an unintended explosion. Two probabilistic models are built, based on two possible types of disruptive event: a single spike of interference, and a burst of multiple spikes of interference. The models suggest that the system conforms to the IEC 61508 Safety Integrity Levels, even under very conservative assumptions of operation. The case study serves as a platform for future researchers to build on when probabilistically modelling soft errors in other contexts.
Relative Effects of Trajectory Prediction Errors on the AAC Autoresolver
Lauderdale, Todd
2011-01-01
Trajectory prediction is fundamental to automated separation assurance. Every missed alert, false alert and loss of separation can be traced to one or more errors in trajectory prediction. These errors are a product of many different sources including wind prediction errors, inferred pilot intent errors, surveillance errors, navigation errors and aircraft weight estimation errors. This study analyzes the impact of six different types of errors on the performance of an automated separation assurance system composed of a geometric conflict detection algorithm and the Advanced Airspace Concept Autoresolver resolution algorithm. Results show that, of the error sources considered in this study, top-of-descent errors were the leading contributor to missed alerts and failed resolution maneuvers. Descent-speed errors were another significant contributor, as were cruise-speed errors in certain situations. The results further suggest that increasing horizontal detection and resolution standards is not an effective strategy for mitigating these types of error sources.
Directory of Open Access Journals (Sweden)
D. C. Oliveira
2009-09-01
The filtering hydrocyclone is a device that was developed and patented by the Particulate System Research Group at the Federal University of Uberlandia. This equipment consists of a hydrocyclone whose conical section is replaced by a conical filtering wall. Thus, during the operation of these devices, besides the underflow and overflow streams, there is another stream of liquid, resulting from the filtrate produced in the porous cone. In the present work, the influence of some geometric variables of a filtering hydrocyclone was analyzed in an experimental and CFD study. The geometric variables analyzed were the underflow orifice diameter (D_U) and the vortex finder length (ℓ). Data from a conventional hydrocyclone of the same configuration were also obtained. The results indicated that the performance of hydrocyclones is significantly influenced by the conical filtering wall. The incorporation of the filtering medium decreased the Euler numbers and increased the total efficiency of the hydrocyclones. Depending on the specific function of the separator (as a classifier or concentrator), the best values of D_U and ℓ were also found for the filtering hydrocyclone.
Institute of Scientific and Technical Information of China (English)
彭留永; 王宣雅; 周建涛; 裴红星
2013-01-01
In order to eliminate the geometric distortion error of the marking point caused by 2-D galvanometer scanning before the objective in a laser marking system, the causes of the distortion were analyzed in detail. On the basis of the ideal formula for the galvanometer deflection angles (α, β), and using the least-squares curve-fitting method, a fitting polynomial for compensating the errors of (α, β) in terms of the marking-point coordinates (x, y) was obtained, and the distortion error of the galvanometer laser marking point was corrected. After correction, the maximum geometric distortion error of the laser marking point is reduced from 3.2 mm to less than 20 μm, with a small amount of calculation and fast speed. The results show that this error-correction algorithm can meet the requirements of high-speed, high-precision laser marking. Moreover, a new compensation formula can be calculated by changing the objective lens focal length, so the method can be applied to laser marking systems with different parameters.
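The compensation scheme described above fits a polynomial in the marking-point coordinates by least squares. A minimal sketch of that kind of fit (the basis degree, distortion field, and magnitudes are assumptions for illustration; the paper's actual polynomial is not reproduced):

```python
import numpy as np

DEGREE = 3
EXPONENTS = [(i, j) for i in range(DEGREE + 1) for j in range(DEGREE + 1 - i)]

def fit_correction(xy, delta):
    """Least-squares fit of a 2-D polynomial delta(x, y), the kind of
    fitting polynomial used to compensate deflection-angle errors from
    the marking-point coordinates."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x ** i * y ** j for i, j in EXPONENTS])
    coeffs, *_ = np.linalg.lstsq(A, delta, rcond=None)
    return lambda px, py: sum(c * px ** i * py ** j
                              for c, (i, j) in zip(coeffs, EXPONENTS))

# Synthetic check with an assumed distortion field (units arbitrary)
rng = np.random.default_rng(1)
pts = rng.uniform(-50.0, 50.0, (200, 2))
true_err = 1e-4 * pts[:, 0] ** 2 + 2e-4 * pts[:, 0] * pts[:, 1]
corr = fit_correction(pts, true_err)
```

Because the synthetic distortion lies inside the polynomial basis, the fit recovers it essentially exactly; with real calibration data the residual sets the achievable correction accuracy.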
Institute of Scientific and Technical Information of China (English)
樊宇; 王宇楠; 王俊杰; 曹奇
2011-01-01
Reducing noise error is an important step in point cloud data processing in reverse engineering, as it has a great impact on the precision of the final ideal model. For point cloud data from laser scanning, this paper puts forward a new triangle filter method, based on geometric relations, for reducing the noise error of point cloud data. Research shows that the triangle filter method performs well in reducing noise error.
Collinson, Glyn A.; Dorelli, John Charles; Avanov, Leon A.; Lewis, Gethyn R.; Moore, Thomas E.; Pollock, Craig; Kataria, Dhiren O.; Bedington, Robert; Arridge, Chris S.; Chornay, Dennis J.; Gliese,Ulrik; Mariano, Al.; Barrie, Alexander C; Tucker, Corey; Owen, Christopher J.; Walsh, Andrew P.; Shappirio, Mark D.; Adrian, Mark L.
2012-01-01
We report our findings comparing the geometric factor (GF) as determined from simulations and laboratory measurements of the new Dual Electron Spectrometer (DES) being developed at NASA Goddard Space Flight Center as part of the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission. Particle simulations are increasingly playing an essential role in the design and calibration of electrostatic analyzers, facilitating the identification and mitigation of the many sources of systematic error present in laboratory calibration. While equations for laboratory measurement of the GF have been described in the literature, these are not directly applicable to simulation since the two are carried out under substantially different assumptions and conditions, making direct comparison very challenging. Starting from first principles, we derive generalized expressions for the determination of the GF in simulation and laboratory settings, and discuss how we have estimated errors in both cases. Finally, we apply these equations to the new DES instrument and show that the results agree within errors. We thus show that the techniques presented here produce consistent results between laboratory and simulation, and present the first description of the performance of the new DES instrument in the literature.
Marzougui, M.; Hammami, M.; Maad, R. Ben
2016-10-01
The main purpose of this study is an experimental investigation of the cooling performance of various minichannel designs. The hydraulic diameter of one of the heat sinks is 3 mm, while that of the other is 2 mm. Deionised water was used as the coolant for the studies conducted in both heat sinks. Tests were done for a wide range of flow rates (0.7-9 l/h) and heat inputs (5-40 kW/m²). Irrespective of the hydraulic diameter and the geometric configuration, the benefits and limitations of each channel shape are analyzed and discussed in the light of the experimental data. The total thermal resistance and the average heat transfer coefficient are compared for the various channels inspected.
Directory of Open Access Journals (Sweden)
Trunev A. P.
2014-05-01
In this article we have investigated the solutions of Maxwell's equations, the Navier-Stokes equations and the Schrödinger equation associated with the solutions of Einstein's equations for empty space. It is shown that in some cases geometric instability leads to turbulence through the mechanism of alternating viscosity proposed by N.N. Yanenko. The mechanism of generation of matter from dark energy due to geometric turbulence in the Big Bang is also discussed.
The Nuclear Shape Phase Transitions Studied within the Geometric Collective Model
Directory of Open Access Journals (Sweden)
Khalaf A. M.
2013-04-01
In the framework of the Geometric Collective Model (GCM), quantum phase transitions between spherical and deformed shapes of doubly even nuclei are investigated. The validity of the model is examined for the lanthanide chains Nd/Sm and the actinide chains Th/U. The parameters of the model were obtained by performing a computer-simulated search program in order to obtain minimum root-mean-square deviations between the calculated and the experimental excitation energies. Calculated potential energy surfaces (PESs) describing all deformation effects of each nucleus are extracted. Our systematic studies on the lanthanide and actinide chains have revealed a shape transition from spherical vibrator to axially deformed rotor when moving from the lighter to the heavier isotopes.
Bermejo, Fernando; Di Paolo, Ezequiel A.; Hüg, Mercedes X.; Arias, Claudia
2015-01-01
The sensorimotor approach proposes that perception is constituted by the mastery of lawful sensorimotor regularities or sensorimotor contingencies (SMCs), which depend on specific bodily characteristics and on the action possibilities that the environment enables and constrains. Sensory substitution devices (SSDs) provide the user with information about the world typically corresponding to one sensory modality through the stimulation of another modality. We investigate how perception emerges in novice adult participants equipped with vision-to-auditory SSDs while solving a simple geometrical shape recognition task. In particular, we examine the distinction between apparatus-related SMCs (those originating mostly in properties of the perceptual system) and object-related SMCs (those mostly connected with the perceptual task). We study the sensorimotor strategies employed by participants in three experiments with three different SSDs: a minimalist head-mounted SSD, a traditional, also head-mounted SSD (the vOICe) and an enhanced, hand-held echolocation device. Motor activity and first-person data are registered and analyzed. Results show that participants are able to quickly learn the necessary skills to distinguish geometric shapes. Comparing the sensorimotor strategies utilized with each SSD, we identify differential features of the sensorimotor patterns attributable mostly to the device, which account for the emergence of apparatus-based SMCs. These relate to differences in sweeping strategies between SSDs. We also identify components related to the emergence of object-related SMCs. These relate mostly to exploratory movements around the border of a shape. The study provides empirical support for SMC theory and discusses considerations about the nature of perception in sensory substitution. PMID:26106340
Error-disturbance uncertainty relations studied in neutron optics
Sponar, Stephan; Sulyok, Georg; Demirel, Bulent; Hasegawa, Yuji
2016-09-01
Heisenberg's uncertainty principle is probably the most famous statement of quantum physics, and its essential aspects are well described by formulations in terms of standard deviations. However, a naive Heisenberg-type error-disturbance relation is not valid. An alternative, universally valid relation was derived by Ozawa in 2003. Though universally valid, Ozawa's relation is not optimal. Recently, Branciard derived a tight error-disturbance uncertainty relation (EDUR), describing the optimal trade-off between error and disturbance. Here, we report a neutron-optical experiment that records the error of a spin-component measurement, as well as the disturbance caused on another spin component, to test EDURs. We demonstrate that Heisenberg's original EDUR is violated, while Ozawa's and Branciard's EDURs are valid over a wide range of experimental parameters, applying a new measurement procedure referred to as the two-state method.
Comparative study and error analysis of digital elevation model interpolations
Institute of Scientific and Technical Information of China (English)
CHEN Ji-long; WU Wei; LIU Hong-bin
2008-01-01
Researchers in P.R. China commonly create triangulated irregular networks (TINs) from contours and then convert the TINs into digital elevation models (DEMs). However, DEMs produced by this method cannot precisely describe and simulate key hydrological features such as rivers and drainage borders. Taking a hilly region in southwestern China as the research area and using ArcGIS™ software, we analyzed the errors of different interpolations to obtain the distributions of the errors and the precisions of the different algorithms, and to provide references for DEM production. The results show that the different interpolation errors satisfy normal distributions, and that large errors exist near the structure lines of the terrain. Furthermore, the results also show that the precision of a DEM interpolated with the Australian National University digital elevation model (ANUDEM) is higher than that interpolated with a TIN, although the TIN-interpolated DEM remains acceptable for generating DEMs in the hilly region of southwestern China.
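The error analysis above rests on computing summary statistics of interpolation errors at check points. A sketch of that bookkeeping, with synthetic normally distributed errors standing in for the real DEM comparisons (the 1.5 m error level and elevations are assumptions):

```python
import numpy as np

def error_stats(interpolated, checkpoints):
    """Mean error (bias), standard deviation, and RMSE of DEM
    interpolation errors at independent check points."""
    err = interpolated - checkpoints
    return {"ME": float(err.mean()),
            "SD": float(err.std(ddof=1)),
            "RMSE": float(np.sqrt((err ** 2).mean()))}

# Synthetic illustration: errors drawn from a normal distribution,
# mirroring the normally distributed errors reported in the study.
rng = np.random.default_rng(42)
truth = rng.uniform(200.0, 800.0, 10_000)          # check-point elevations (m)
interp = truth + rng.normal(0.0, 1.5, truth.size)  # an assumed 1.5 m error SD
stats = error_stats(interp, truth)
```

Comparing RMSE (overall precision) against ME (systematic bias) across algorithms is what allows a ranking such as ANUDEM over TIN interpolation.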
Study on Unequal Error Protection for Distributed Speech Recognition System
Institute of Scientific and Technical Information of China (English)
XIE Xiang; WANG Si-yao; LIU Jia-kang
2006-01-01
Unequal error protection (UEP) is applied to a distributed speech recognition (DSR) system and three schemes are proposed. All three schemes are evaluated on a GSM simulation platform for recognizing Mandarin digit strings and compared with an equal error protection (EEP) scheme. Experiments show that UEP can protect the data transmitted in the DSR system more effectively, which results in a higher word accuracy rate for the DSR system.
Institute of Scientific and Technical Information of China (English)
于航; 李东升; 王梅宝; 马豪; 张晓丹; 王颖
2016-01-01
Based on the need to calibrate contact-type cryogenic liquid level sensors, a dynamic calibration device was designed and developed for liquid level sensors; it applies to capacitive sensors (measuring range 1800 mm, limiting error ±2 mm) under both normal- and low-temperature conditions. The principle of homogeneous coordinate transformation was used to establish a geometric error model for the calibration device, covering the rail straightness error, positioning error, spatial angle error of each connecting link, liquid level fluctuation error, and other aspects. Moreover, the measurement uncertainty of the calibration device was evaluated. The results show that the expanded uncertainty is U = 0.53 mm (k = 2), which meets the calibration accuracy requirements of the cryogenic liquid level sensor.
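A geometric error model of this kind stacks 4×4 homogeneous transforms for the nominal motion and for each error term. A minimal sketch (the error magnitudes and probe offset are invented for illustration; the device's actual error chain is more elaborate):

```python
import numpy as np

def translation(dx=0.0, dy=0.0, dz=0.0):
    """4x4 homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

def small_rotation(rx=0.0, ry=0.0, rz=0.0):
    """4x4 homogeneous rotation for small angular errors (rad),
    using the first-order (small-angle) approximation."""
    R = np.eye(4)
    R[:3, :3] = [[1.0, -rz, ry],
                 [rz, 1.0, -rx],
                 [-ry, rx, 1.0]]
    return R

# Nominal carriage travel of 1000 mm along Z (the level direction), perturbed
# by an assumed straightness error (0.02 mm) and angular error (1e-5 rad).
nominal = translation(dz=1000.0)
error = translation(dx=0.02) @ small_rotation(rx=1e-5)
probe = np.array([0.0, 50.0, 0.0, 1.0])  # probe offset from carriage origin (mm)

deviation = (nominal @ error @ probe) - (nominal @ probe)
```

The point-by-point deviation obtained this way, over the full travel, is what feeds the uncertainty evaluation; note how the angular error couples with the 50 mm probe offset to produce an Abbe-type level error.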
A Corpus-based Study of EFL Learners’ Errors in IELTS Essay Writing
Directory of Open Access Journals (Sweden)
Hoda Divsar
2017-03-01
The present study analyzed different types of errors in EFL learners' IELTS essays. In order to determine the major types of errors, a corpus of 70 IELTS examinees' writings was collected, and their errors were extracted and categorized qualitatively. Errors were categorized, based on a researcher-developed error-coding scheme, into 13 aspects. Based on descriptive statistical analyses, the frequency of each error type was calculated and the commonest errors committed by the EFL learners in IELTS essays were identified. The results indicated that the two most frequent errors that IELTS candidates committed were related to word choice and verb forms. Based on the research results, the pedagogical implications highlight analyzing EFL learners' writing errors as a useful basis for instructional purposes, including creating pedagogical teaching materials that are in line with learners' linguistic strengths and weaknesses.
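Once essays are coded, identifying the commonest error types is a frequency count over categories. A toy sketch (the labels and counts are hypothetical; the study's 13-category scheme and data are not reproduced):

```python
from collections import Counter

# Hypothetical coded errors from a handful of essays; labels are assumed.
coded_errors = ["word choice", "verb form", "word choice", "article",
                "verb form", "word choice", "preposition", "agreement"]

freq = Counter(coded_errors)
total = sum(freq.values())
# Rank categories by frequency, with percentage of all errors
ranked = [(cat, n, round(100 * n / total, 1)) for cat, n in freq.most_common()]
```

The ranked percentages are exactly the kind of descriptive statistics from which "word choice and verb forms are the two most frequent error types" can be read off.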
An Analysis of Errors in Written English Sentences: A Case Study of Thai EFL Students
Sermsook, Kanyakorn; Liamnimit, Jiraporn; Pochakorn, Rattaneekorn
2017-01-01
The purposes of the present study were to examine the language errors in a writing of English major students in a Thai university and to explore the sources of the errors. This study focused mainly on sentences because the researcher found that errors in Thai EFL students' sentence construction may lead to miscommunication. 104 pieces of writing…
Tsuruhara, Aki; Inui, Koji; Kakigi, Ryusuke
2014-03-01
The face is one of the most important visual stimuli in human life, and inverted faces are known to elicit different brain responses than upright faces. This study analyzed steady-state visual-evoked magnetic fields (SSVEFs) in eleven healthy participants when they viewed upright and inverted geometrical faces presented at 6 Hz. Steady-state visual-evoked responses are useful measurements and have the advantages of robustness and a high signal-to-noise ratio. Spectrum analysis revealed clear responses to both upright and inverted faces at the fundamental stimulation frequency (6 Hz) and its harmonics, i.e. SSVEFs. No significant difference was observed in the SSVEF amplitude at 6 Hz between upright and inverted faces, which was different from the transient visual-evoked response, N170. On the other hand, SSVEFs were delayed with the inverted face in the right temporal area, which was similar to N170 and to the results of previous steady-state visual-evoked potential studies. These results suggest that different mechanisms underlie the larger amplitude and delayed latency observed with face inversion, though further studies are needed to fully elucidate these mechanisms. Our study revealed that SSVEFs, which have practical advantages for measurements, can provide novel findings in human face processing.
Mahmood, Feroze; Karthik, Swaminathan; Subramaniam, Balachundhar; Panzica, Peter J; Mitchell, John; Lerner, Adam B; Jervis, Karinne; Maslow, Andrew D
2008-04-01
To study the feasibility of using 3-dimensional (3D) echocardiography in the operating room for mitral valve repair or replacement surgery, and to perform geometric analysis of the mitral valve before and after repair. Prospective observational study. Academic, tertiary care hospital. Consecutive patients scheduled for mitral valve surgery. Intraoperative reconstruction of 3D images of the mitral valve. One hundred and two patients had 3D analysis of their mitral valve. Successful image reconstruction was performed in 93 patients; 8 patients had arrhythmias or a dilated mitral valve annulus resulting in significant artifacts. Time from acquisition to reconstruction and analysis was less than 5 minutes. Surgeon identification of mitral valve anatomy was 100% accurate. The study confirms the feasibility of performing intraoperative 3D reconstruction of the mitral valve. These data can be used for confirmation and communication of 2-dimensional data to the surgeons by obtaining a surgical view of the mitral valve. The incorporation of color-flow Doppler into these 3D images helps in identification of the commissural or perivalvular location of the regurgitant orifice. With improvements in the processing power of the current generation of echocardiography equipment, it is possible to quickly acquire, reconstruct, and manipulate images to help with timely diagnosis and surgical planning.
A theoretical and experimental study on geometric nonlinearity of initially curved cantilever beams
Directory of Open Access Journals (Sweden)
Sushanta Ghuku
2016-03-01
This paper presents a theoretical and experimental study of the large-deflection behavior of initially curved cantilever beams subjected to various types of loadings. The physical system of a straight cantilever beam subjected to a tip concentrated load is considered first. Nonlinear differential equations are obtained for the large-deflection analysis of such a straight cantilever beam, a problem known to involve geometric nonlinearity. The equations are solved numerically with the help of the MATLAB® computational platform to obtain deflection profiles for the problem concerned. These results are subsequently imposed on the centre line of an initially curved beam to obtain the theoretical load-deflection behavior of curved beam problems. To verify the theoretical model, an experiment is carried out with the master leaf of a leaf-spring bundle by modeling it as an initially curved cantilever beam. The effects of initial clamping and of geometry variations in the eye region, commonly neglected in mathematical formulations, are observed in the experimental investigation. The agreement of the theoretical results with the experimental results is quite good, but avenues for further improvement are also reported. The proposed approach is further extended to study the large-deflection behavior of an initially curved cantilever beam subjected to distributed and combined loads. These results are successfully validated against existing results for straight beams, and some new results are furnished for initially curved cantilever beams.
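The geometrically nonlinear tip-loaded cantilever can be solved numerically by shooting on the unknown curvature at the clamp. A sketch under assumed beam values (not the paper's leaf-spring data), checked against small-deflection theory:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Assumed, illustrative values -- not the paper's experimental data.
EI, L, P = 100.0, 1.0, 50.0  # rigidity (N m^2), length (m), tip load (N)

def rhs(s, y):
    # Differentiating EI * dtheta/ds = M(s) for a vertical tip load gives
    # EI * d2theta/ds2 = -P * cos(theta); the third state tracks deflection.
    theta, dtheta, _ = y
    return [dtheta, -(P / EI) * np.cos(theta), np.sin(theta)]

def shoot(k0):
    return solve_ivp(rhs, (0.0, L), [0.0, k0, 0.0], rtol=1e-10, atol=1e-12)

def tip_moment_residual(k0):
    # Free-end condition: curvature (bending moment) vanishes at s = L
    return shoot(k0).y[1, -1]

k0 = brentq(tip_moment_residual, 0.0, P * L / EI)  # curvature at the clamp
delta = shoot(k0).y[2, -1]                         # vertical tip deflection
delta_linear = P * L ** 3 / (3 * EI)               # small-deflection theory
```

As expected for a geometrically nonlinear cantilever, the computed tip deflection falls slightly below the linear estimate PL³/3EI; the gap widens as the load parameter PL²/EI grows.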
An Analysis of Lexical Errors Made by Chinese Students in English Study
Institute of Scientific and Technical Information of China (English)
张丽昕
2011-01-01
Errors made by language learners can be regarded as a reflection of the learning process, and they help teachers assess students' learning. Lexical errors are one part of the errors made by students. This essay makes an analysis of the main types and causes of the lexical errors made by Chinese learners who study English as a foreign language, and puts forward some suggestions to solve the lexical problems.
Energy Technology Data Exchange (ETDEWEB)
Badrianto, Muldani Dwi; Riupassa, Robi D.; Basar, Khairul, E-mail: khbasar@fi.itb.ac.id [Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung (Indonesia)
2015-09-30
Nuclear batteries have strategic applications and very high economic potential. One important problem in the application of betavoltaic nuclear batteries is their low efficiency: the current efficiency of a betavoltaic nuclear battery reaches only around 2%. One aspect that can influence the efficiency of a betavoltaic nuclear battery is the geometrical configuration of the radioactive source. In this study we discuss the effect of the geometrical configuration of the radioactive source material on the radiation intensity received by the detector in a betavoltaic nuclear battery system. By obtaining the optimum configurations, the optimum usage of radioactive materials can be determined. Various geometrical configurations of the radioactive source material are simulated. It is found that usage of the radioactive source is optimal for a circular configuration.
Study of Periodic Fabrication Error of Optical Splitter Device Performance
Directory of Open Access Journals (Sweden)
Mohammad Syuhaimi Ab-Rahman
2012-01-01
In this paper, the effect of fabrication errors (FEs) on the performance of a 1×4 optical power splitter is investigated in detail. The FE, which is assumed to take a regular shape, is considered in each section of the device. Simulation results show that an FE has a significant effect on the output power, especially when it occurs in the coupling regions.
GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS
Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...
Numerical study of the systematic error in Monte Carlo schemes for semiconductors
Energy Technology Data Exchange (ETDEWEB)
Muscato, Orazio [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Di Stefano, Vincenza [Univ. degli Studi di Messina (Italy). Dipt. di Matematica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) im Forschungsverbund Berlin e.V. (Germany)
2008-07-01
The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm. It is negligible compared to the other sources of time step error, when a second order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step. (orig.)
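The point above about solver order and time-step error can be seen on a toy ODE: halving the step roughly halves a first-order (Euler) error but quarters a second-order Runge-Kutta error, which is why the trajectory-approximation error becomes negligible with an RK2 solver. A minimal sketch (an illustrative scalar ODE, not a semiconductor transport model):

```python
import math

def integrate(dt, method):
    """Integrate dy/dt = -y from y(0) = 1 to t = 1 with fixed step dt."""
    y = 1.0
    for _ in range(round(1.0 / dt)):
        if method == "euler":
            y += dt * (-y)
        else:  # midpoint rule (a second-order Runge-Kutta scheme)
            y += dt * -(y + 0.5 * dt * (-y))
    return y

exact = math.exp(-1.0)
err = lambda m, dt: abs(integrate(dt, m) - exact)

# Halving the step should shrink the error ~2x for Euler, ~4x for RK2
euler_ratio = err("euler", 0.1) / err("euler", 0.05)
rk2_ratio = err("rk2", 0.1) / err("rk2", 0.05)
```

The measured ratios land near 2 and 4 respectively, the signatures of first- and second-order convergence in the step size.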
Directory of Open Access Journals (Sweden)
Endang Fauziati
2011-07-01
Interlanguage errors are an inevitable sign of human fallibility; therefore, they always exist in foreign language learning. They are very significant since they become the source for studying the system of the learners' second language (interlanguage). As a language system, interlanguage has at least three prominent characteristics, namely systematicity, permeability, and fossilization. Applied linguistics researchers no longer disagree with the view that interlanguage is systematic and permeable. However, the premise that interlanguage is fossilized is still debatable. This study deals with interlanguage errors; in particular, it tries to investigate whether the learners' grammatical errors are fossilized (in the sense of being static rather than dynamic). For this purpose, an empirical study was conducted using an error-treatment method, with Indonesian senior high school students learning English as the research subjects. The data were collected from the students' free compositions written before and after an error treatment, which was carried out for one semester, and were analyzed using quantitative and qualitative methods. The results indicate that the learners' grammatical errors were dynamic. At certain periods of the learning course, some grammatical errors emerged; as a result of the error treatment, some errors fluctuated, some became stabilized, while others were destabilized. The fluctuating errors tended to be destabilized, and the stabilized errors were also likely to be destabilized. Error treatment was proved to make a significant contribution to the destabilization process; that is to say, it helped develop the learners' interlanguage system. A conclusion drawn from this study is that the learners' grammatical errors are dynamic: they are destabilizable. Thus, there is a possibility for the learners to acquire the complete target-language grammar. Key Words: error treatment
A Study of the Anechoic Performance of Rice Husk-Based, Geometrically Tapered, Hollow Absorbers
Directory of Open Access Journals (Sweden)
Muhammad Nadeem Iqbal
2014-01-01
Although solid, geometrically tapered microwave absorbers are preferred due to their better performance, they are bulky and must have a thickness on the order of λ or more. The goal of this study was to design lightweight absorbers that can reduce electromagnetic reflections to less than −10 dB. We used a very simple approach: two waste materials, rice husks and tire dust in powder form, were used to fabricate two independent samples. We measured and used their dielectric properties to determine and compare the propagation constants and quarter-wave thicknesses. The quarter-wave thickness for the tire dust was 3 mm less than that of the rice-husk material, but we preferred the rice-husk material. This preference was based on the fact that our goal was to achieve minimum backward reflections, and the rice-husk material, with its low dielectric constant, high loss factor, large attenuation per unit length, and ease of fabrication, provided a better opportunity to achieve that goal. The performance of the absorbers was found to be good (reflections lower than −20 dB), and a comparison of the results proved that the hollow design, with 58% less weight, is a good alternative to solid absorbers.
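The quarter-wave thickness compared above follows from t = λ₀/(4√εr). A sketch with assumed permittivity values (illustrative numbers, not the paper's measured dielectric data):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def quarter_wave_thickness_mm(freq_hz, eps_r):
    """Quarter-wavelength thickness t = lambda0 / (4 * sqrt(eps_r))
    of a dielectric matching layer at the given frequency."""
    return 1000.0 * (C / freq_hz) / (4.0 * math.sqrt(eps_r))

# Assumed relative permittivities at 10 GHz -- illustrative values only.
t_rice_husk = quarter_wave_thickness_mm(10e9, 1.6)
t_tire_dust = quarter_wave_thickness_mm(10e9, 3.2)
```

With these assumed values the higher-permittivity tire-dust material needs the thinner layer, which matches the direction of the few-millimetre difference the authors report; the real thicknesses depend on the measured permittivities.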
Geometric diffusion of quantum trajectories.
Yang, Fan; Liu, Ren-Bao
2015-07-16
A quantum object can acquire a geometric phase (such as Berry phases and Aharonov-Bohm phases) when evolving along a path in a parameter space with non-trivial gauge structures. Inherent to quantum evolutions of wavepackets, quantum diffusion occurs along quantum trajectories. Here we show that quantum diffusion can also be geometric as characterized by the imaginary part of a geometric phase. The geometric quantum diffusion results from interference between different instantaneous eigenstate pathways which have different geometric phases during the adiabatic evolution. As a specific example, we study the quantum trajectories of optically excited electron-hole pairs in time-reversal symmetric insulators, driven by an elliptically polarized terahertz field. The imaginary geometric phase manifests itself as elliptical polarization in the terahertz sideband generation. The geometric quantum diffusion adds a new dimension to geometric phases and may have applications in many fields of physics, e.g., transport in topological insulators and novel electro-optical effects.
Concomitant prescribing and dispensing errors at a Brazilian hospital: a descriptive study
Directory of Open Access Journals (Sweden)
Maria das Dores Graciano Silva
2011-01-01
OBJECTIVE: To analyze the prevalence and types of prescribing and dispensing errors occurring with high-alert medications and to propose preventive measures to avoid errors with these medications. INTRODUCTION: The prevalence of adverse events in health care has increased, and medication errors are probably the most common cause of these events. Pediatric patients are known to be a high-risk group and are an important target in medication error prevention. METHODS: Observers collected data on prescribing and dispensing errors occurring with high-alert medications for pediatric inpatients in a university hospital. In addition to classifying the types of error that occurred, we identified cases of concomitant prescribing and dispensing errors. RESULTS: One or more prescribing errors, totaling 1,632 errors, were found in 632 (89.6%) of the 705 high-alert medications that were prescribed and dispensed. We also identified at least one dispensing error in each high-alert medication dispensed, totaling 1,707 errors. Among these dispensing errors, 723 (42.4%) content errors occurred concomitantly with prescribing errors. A subset of dispensing errors may have occurred because of poor prescription quality. The observed concomitance should be examined carefully because improvements in the prescribing process could potentially prevent these problems. CONCLUSION: The system of drug prescribing and dispensing at the hospital investigated in this study should be improved by incorporating the best practices of medication safety and preventing medication errors. High-alert medications may be used as triggers for improving the safety of the drug-utilization system.
Medication prescribing errors in a public teaching hospital in India: A prospective study.
Directory of Open Access Journals (Sweden)
Pote S
2007-03-01
Background: To prevent medication errors in prescribing, one needs to know their types and relative occurrence. Such errors are a great cause of concern as they have the potential to cause patient harm. The aim of this study was to determine the nature and types of medication prescribing errors in an Indian setting. Methods: The medication errors were analyzed in a prospective observational study conducted in 3 medical wards of a public teaching hospital in India. The medication errors were analyzed by means of the Micromedex Drug-Reax database. Results: Of the 312 patients, only 304 were included in the study. Of the 304 cases, 103 (34%) had at least one error. The total number of errors found was 157. Drug-drug interactions were the most frequently occurring type of error (68.2%), followed by incorrect dosing interval (12%) and dosing errors (9.5%). The medication classes most involved were antimicrobial agents (29.4%), cardiovascular agents (15.4%), GI agents (8.6%) and CNS agents (8.2%). Moderate errors contributed the most (61.8%) to the total errors, compared with major (25.5%) and minor (12.7%) errors. The results showed that the number of errors increases with age and with the number of medicines prescribed. Conclusion: The results point to the need to establish medication error reporting at each hospital and to share the data with other hospitals. The role of the clinical pharmacist in this situation appears to be a strong intervention; initially, the clinical pharmacist could confine themselves to identification of the medication errors.
Study of Machining Error Forecast in NC Lathe
Institute of Scientific and Technical Information of China (English)
LIU Jiwei; ZHANG Ying; YANG Zheqing
2006-01-01
This paper brings forward a machining-error forecasting principle for an NC lathe simulation system. It combines mathematical, dynamic, material, and mechanical methods, summarizes the factors that can affect machining error, and, drawing on knowledge of machining technique and the interactions among these factors, maps the changes of the physical factors in the cutting process into a virtual manufacturing system through a mathematical model. An application program was developed in C++ on the Windows 2000 platform with Visual C++. The MATLAB function library is invoked to run MATLAB commands on the MATLAB language platform, and curves are then drawn from the calculation results of the mathematical model, so that the simulation results are presented as both data and curves.
Study on error analysis and accuracy improvement for aspheric profile measurement
Gao, Huimin; Zhang, Xiaodong; Fang, Fengzhou
2017-06-01
Aspheric surfaces are important to optical systems and need high-precision surface metrology. Stylus profilometry is currently the most common approach to measuring axially symmetric elements. However, if the asphere has rotational alignment errors, the wrong cresting point is located, yielding significantly incorrect surface errors. This paper studies the simulated results for an asphere with rotational angles around the X-axis and Y-axis, and with stylus tip shifts in the X, Y and Z directions. Experimental results show that the same absolute value of rotational error around the X-axis causes the same profile errors, while different values of rotational error around the Y-axis cause profile errors with different tilt angles. Moreover, the greater the rotational error, the larger the peak-to-valley value of the profile error. To identify the rotational angles about the X-axis and Y-axis, algorithms are applied to analyze each rotational angle separately. The actual profile errors are then calculated from multiple profile measurements around the X-axis according to the proposed analysis flow chart; the aim of the multiple-measurement strategy is to reach the zero position of the X-axis rotational error. Experimental results prove that the proposed algorithms obtain accurate profile errors for aspheric surfaces while avoiding both X-axis and Y-axis rotational errors, and a measurement strategy for aspheric surfaces is presented systematically.
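The dependence of profile error on rotational misalignment can be illustrated numerically. The sketch below is illustrative only, not the paper's code: a conic surface, a rigid in-plane rotation, and tilt angles of 0.1-0.2 degrees are all assumed values. It rotates a sampled profile, re-samples it at the original lateral positions, and reproduces the reported trends (equal errors for ±θ, larger peak-to-valley error for larger θ).

```python
import numpy as np

def asphere_sag(x, R=50.0, k=-1.0):
    """Rotationally symmetric conic sag z(x) (R: vertex radius, k: conic constant)."""
    return x**2 / (R * (1.0 + np.sqrt(1.0 - (1.0 + k) * x**2 / R**2)))

def tilted_profile_error(theta_deg, x=np.linspace(-10, 10, 201)):
    """Profile error introduced by rotating the part by theta within the scan plane."""
    t = np.radians(theta_deg)
    z = asphere_sag(x)
    # Rigid-body rotation of the sampled profile in the X-Z plane
    xr = x * np.cos(t) - z * np.sin(t)
    zr = x * np.sin(t) + z * np.cos(t)
    # Re-sample the rotated surface at the original lateral positions
    z_meas = np.interp(x, xr, zr)
    return z_meas - z

e_pos = tilted_profile_error(+0.1)
e_neg = tilted_profile_error(-0.1)
e_big = tilted_profile_error(+0.2)
# Larger rotations give larger peak-to-valley profile error
assert np.ptp(e_big) > np.ptp(e_pos)
```

For this symmetric surface, the ±0.1° profiles are mirror images, so their peak-to-valley values agree, matching the abstract's observation about equal absolute rotational errors.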
Study on Segmented Reflector Lamp Design Based on Error Analysis
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
This paper discusses the basic principle and design method for the light distribution of a car lamp, introduces an important development, the highly efficient and flexible car lamp with reflecting light distribution, i.e. the segmented-reflector (multi-patch) car lamp, and puts forward a design method for the segmented reflector based on error analysis. Unlike the classical car lamp with refractive light distribution, the method of reflecting light distribution gives car lamp design more flexibility. In the case of guaranteeing the li...
A STUDY ON PREVALENCE OF REFRACTIVE ERRORS IN SCHOOL CHILDREN
Kolli Sree Karuna
2014-01-01
"Sarvendriya nam nayanam pradhanam" - of all the organs in the body, the eyes are the most important. Blindness or defective vision decreases the productivity of the nation in addition to increasing dependency. Refractive errors throw school children into a defective future. Nutritional deficiency, mental strain, wrong reading habits, etc. are some of the causes of this defect in these children. Vision is essential for all children, for the academic and overal...
Energy Technology Data Exchange (ETDEWEB)
Collinson, Glyn A. [Heliophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, Maryland 20071 (United States); Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Surrey (United Kingdom); Dorelli, John C.; Moore, Thomas E.; Pollock, Craig; Mariano, Al; Shappirio, Mark D.; Adrian, Mark L. [Heliophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, Maryland 20071 (United States); Avanov, Levon A. [Innovim, 7501 Greenway Center Drive, Maryland Trade Center III, Greenbelt, Maryland 20770 (United States); Heliophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, Maryland 20071 (United States); Lewis, Gethyn R.; Kataria, Dhiren O.; Bedington, Robert; Owen, Christopher J.; Walsh, Andrew P. [Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Surrey (United Kingdom); Arridge, Chris S. [Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Surrey (United Kingdom); The Centre for Planetary Sciences, UCL/Birkbeck (United Kingdom); Chornay, Dennis J. [University of Maryland, 7403 Hopkins Avenue, College Park, Maryland 20740 (United States); Heliophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, Maryland 20071 (United States); Gliese, Ulrik [SGT, Inc., 7515 Mission Drive, Suite 30, Lanham, Maryland 20706 (United States); Heliophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, Maryland 20071 (United States); Barrie, Alexander C. [Millennium Engineering and Integration, 2231 Crystal Dr., Arlington, Virginia 22202 (United States); Heliophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, Maryland 20071 (United States); Tucker, Corey [Global Science and Technology Inc., 7855 Walker Drive, Greenbelt, Maryland 20770 (United States); Heliophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, Maryland 20071 (United States)
2012-03-15
We report our findings comparing the geometric factor (GF) as determined from simulations and laboratory measurements of the new Dual Electron Spectrometer (DES) being developed at NASA Goddard Space Flight Center as part of the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission. Particle simulations are increasingly playing an essential role in the design and calibration of electrostatic analyzers, facilitating the identification and mitigation of the many sources of systematic error present in laboratory calibration. While equations for laboratory measurement of the GF have been described in the literature, these are not directly applicable to simulation since the two are carried out under substantially different assumptions and conditions, making direct comparison very challenging. Starting from first principles, we derive generalized expressions for the determination of the GF in simulation and laboratory, and discuss how we have estimated errors in both cases. Finally, we apply these equations to the new DES instrument and show that the results agree within errors. Thus we show that the techniques presented here will produce consistent results between laboratory and simulation, and present the first description of the performance of the new DES instrument in the literature.
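For context, the simplest Monte Carlo estimate of a geometric factor (a generic textbook setup, not the DES-specific derivation in the paper) launches flux-weighted particles through an idealized two-aperture collimator and scales the transmitted fraction by the sampled phase-space volume; the aperture radii and separation below are arbitrary:

```python
import numpy as np

def mc_geometric_factor(r1, r2, L, n=400_000, seed=1):
    """Monte Carlo geometric factor [area*sr] of two coaxial circular apertures."""
    rng = np.random.default_rng(seed)
    # Uniform start points on aperture 1
    rho = r1 * np.sqrt(rng.random(n))
    ang = 2 * np.pi * rng.random(n)
    x, y = rho * np.cos(ang), rho * np.sin(ang)
    # Cosine-weighted (flux-weighted) directions over the forward hemisphere
    phi = 2 * np.pi * rng.random(n)
    cos_t = np.sqrt(1.0 - rng.random(n))
    tan_t = np.sqrt(1.0 - cos_t**2) / cos_t
    # Propagate to the plane of aperture 2 and count transmissions
    x2 = x + L * tan_t * np.cos(phi)
    y2 = y + L * tan_t * np.sin(phi)
    hits = (x2**2 + y2**2) <= r2**2
    # Sampled phase-space volume: A1 * pi (cosine-weighted hemisphere)
    return hits.mean() * (np.pi * r1**2) * np.pi

def analytic_gf(r1, r2, L):
    """Closed-form geometric factor for two coaxial discs (standard collimator result)."""
    s = r1**2 + r2**2 + L**2
    return 0.5 * np.pi**2 * (s - np.sqrt(s**2 - 4 * r1**2 * r2**2))

g_mc = mc_geometric_factor(0.2, 0.2, 1.0)
g_an = analytic_gf(0.2, 0.2, 1.0)
```

Agreement between the simulated and closed-form values plays the same role here that the simulation-versus-laboratory comparison plays in the paper, on a geometry simple enough to check exactly.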
Chisolm, Eric
2012-01-01
This is an introduction to geometric algebra, an alternative to traditional vector algebra that expands on it in two ways: 1. In addition to scalars and vectors, it defines new objects representing subspaces of any dimension. 2. It defines a product that's strongly motivated by geometry and can be taken between any two objects. For example, the product of two vectors taken in a certain way represents their common plane. This system was invented by William Clifford and is more commonly known as Clifford algebra. It's actually older than the vector algebra that we use today (due to Gibbs) and includes it as a subset. Over the years, various parts of Clifford algebra have been reinvented independently by many people who found they needed it, often not realizing that all those parts belonged in one system. This suggests that Clifford had the right idea, and that geometric algebra, not the reduced version we use today, deserves to be the standard "vector algebra." My goal in these notes is to describe geometric al...
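The "product of two vectors" described above splits into a symmetric scalar (dot) part and an antisymmetric bivector part representing their common plane. A minimal sketch for the geometric algebra of the Euclidean plane (a generic illustration, not code from these notes):

```python
from dataclasses import dataclass

@dataclass
class Multivector2D:
    """Element of the plane's geometric algebra: s + x*e1 + y*e2 + b*e12."""
    s: float = 0.0   # scalar part
    x: float = 0.0   # e1 component
    y: float = 0.0   # e2 component
    b: float = 0.0   # e12 bivector (oriented-area) component

def vec(x, y):
    return Multivector2D(x=x, y=y)

def geometric_product(u, v):
    """Full geometric product uv; for pure vectors, uv = u.v + u^v."""
    return Multivector2D(
        s=u.s * v.s + u.x * v.x + u.y * v.y - u.b * v.b,
        x=u.s * v.x + u.x * v.s - u.y * v.b + u.b * v.y,
        y=u.s * v.y + u.y * v.s + u.x * v.b - u.b * v.x,
        b=u.s * v.b + u.b * v.s + u.x * v.y - u.y * v.x,
    )

e1, e2 = vec(1, 0), vec(0, 1)
# Basis vectors square to +1; orthogonal vectors anticommute
assert geometric_product(e1, e1) == Multivector2D(s=1.0)
p = geometric_product(vec(2, 0), vec(0, 3))
assert p.s == 0 and p.b == 6   # pure bivector: oriented area of the spanned plane
```

The last line shows the claim in the abstract concretely: the product of two perpendicular vectors has no scalar part and is a pure bivector whose magnitude is the area they span.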
Energy Technology Data Exchange (ETDEWEB)
Na, Jonggeol; Jung, Ikhwan; Kshetrimayum, Krishnadash S.; Park, Seongho; Park, Chansaem; Han, Chonghun [Seoul National University, Seoul (Korea, Republic of)
2014-12-15
Driven by both environmental and economic considerations, the development of small-to-medium-scale GTL (gas-to-liquid) processes for offshore applications and for utilizing other stranded or associated gas has recently been studied increasingly. Microchannel GTL reactors have been preferred over conventional GTL reactors for such applications due to their compactness and the additional advantage of the small heat- and mass-transfer distances desired for high heat-transfer performance and reactor conversion. In this work, a multi-microchannel reactor was simulated using the commercial CFD code ANSYS FLUENT to study the geometric effect of the microchannels on the heat transfer phenomena. A heat generation curve was first calculated by modeling a Fischer-Tropsch reaction in a single-microchannel reactor model using a Matlab-ASPEN integration platform; the calculated heat generation curve was then implemented in the CFD model. Four design variables based on the microchannel geometry, namely coolant channel width, coolant channel height, coolant channel to process channel distance, and coolant channel to coolant channel distance, were selected for calculating three dependent variables: heat flux, maximum temperature of the coolant channel, and maximum temperature of the process channel. The simulation results were visualized to understand the effects of the design variables on the dependent variables. Heat flux and the maximum temperatures of the coolant and process channels were found to increase when coolant channel width and height were decreased. Coolant channel to process channel distance was found to have no effect on the heat transfer phenomena. Finally, total heat flux was found to increase, and the maximum coolant channel temperature to decrease, when the coolant channel to coolant channel distance was decreased. Using the qualitative trends revealed in the present study, an appropriate process channel and coolant channel geometry along with the distance between the adjacent
Institute of Scientific and Technical Information of China (English)
张达
2015-01-01
This paper describes the structure and application method of the ballbar. Through testing of a three-axis vertical CNC milling machine, the main factors affecting the geometric precision of the CNC milling machine are identified as Y-axis backlash, the perpendicularity errors between the X-axis, Y-axis and Z-axis, and the reversal spikes of the Z-axis and Y-axis. The causes of these errors and their impact on machining accuracy are analyzed, and corresponding corrective measures are proposed.
Mohammadipour, Amir H; Alavi, Seyed Hafez
2009-03-01
This study attempts to optimize the geometric cross-section dimensions of raised pedestrian crosswalks (RPC), employing safety and comfort measures which reflect environmental conditions and drivers' behavioral patterns in Qazvin, Iran. Geometric characteristics including street width, ramp lengths, top flat crown length and height, and 4672 spot speed observations of 23 implemented RPCs were considered. The authors established geometric and analytical equations to satisfactorily express the discomfort that vehicle occupants experience while traversing an RPC and the crossing risk to pedestrians. Artificial neural networks (ANN) are reputed for their capability to learn and generalize complex engineering phenomena and were therefore adopted to cope with the highly nonlinear relationship between the before-RPC spot speeds, the geometric characteristics, and spot speeds on the RPC. This on-RPC spot speed has been utilized for computing the above-mentioned criteria. Combining these criteria, a new judgment index was created to identify the optimum RPC which fulfills the highest comfort and safety levels. It was observed that the variable with the highest impact is the second ramp length, followed by the first ramp length, top flat crown length, before-RPC spot speed, height, and street width, in order of magnitude.
Directory of Open Access Journals (Sweden)
Seth M. Weinberg
2013-11-01
Full Text Available Introduction: Previous research suggests that aspects of facial surface morphology are heritable. Traditionally, heritability studies have used a limited set of linear distances to quantify facial morphology and often employ statistical methods poorly designed to deal with biological shape. In this preliminary report, we use a combination of 3D photogrammetry and landmark-based morphometrics to explore which aspects of face shape show the strongest evidence of heritability in a sample of twins. Methods: 3D surface images were obtained from 21 twin pairs (10 monozygotic, 11 same-sex dizygotic). Thirteen 3D landmarks were collected from each facial surface and their coordinates subjected to geometric morphometric analysis. This involved superimposing the individual landmark configurations and then subjecting the resulting shape coordinates to a principal components analysis. The resulting PC scores were then used to calculate rough narrow-sense heritability estimates. Results: Three principal components displayed evidence of moderate to high heritability and were associated with variation in the breadth of orbital and nasal structures, upper lip height and projection, and the vertical and forward projection of the root of the nose due to variation in the position of nasion. Conclusions: Aspects of facial shape, primarily related to variation in length and breadth of central midfacial structures, were shown to demonstrate evidence of strong heritability. An improved understanding of which facial features are under strong genetic control is an important step in the identification of specific genes that underlie normal facial variation.
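The superimposition-then-PCA pipeline described above can be sketched generically. This is not the authors' software, and the toy triangle landmarks are invented for illustration; it shows the two steps: generalized Procrustes alignment (removing position, scale, and rotation) followed by PCA of the shape coordinates.

```python
import numpy as np

def procrustes_align(configs, iters=5):
    """Generalized Procrustes analysis for 2D landmark configurations.

    configs: array (n_specimens, n_landmarks, 2)."""
    X = np.asarray(configs, dtype=float)
    X = X - X.mean(axis=1, keepdims=True)                  # center each specimen
    X = X / np.linalg.norm(X, axis=(1, 2), keepdims=True)  # unit centroid size
    mean = X[0]
    for _ in range(iters):
        for i in range(len(X)):
            # Optimal rotation of specimen i onto the current mean (SVD solution)
            u, _, vt = np.linalg.svd(X[i].T @ mean)
            X[i] = X[i] @ (u @ vt)
        mean = X.mean(axis=0)
        mean /= np.linalg.norm(mean)
    return X

def shape_pca(aligned):
    """PCA of the Procrustes shape coordinates; returns per-specimen PC scores."""
    flat = aligned.reshape(len(aligned), -1)
    flat = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return flat @ vt.T

# Toy data: a triangle measured on 4 "specimens" with small shape differences
rng = np.random.default_rng(0)
base = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
specs = np.stack([base + 0.05 * rng.standard_normal(base.shape) for _ in range(4)])
scores = shape_pca(procrustes_align(specs))
assert scores.shape == (4, 4)
```

In a heritability analysis like the one reported, per-specimen PC scores such as these would then be compared between monozygotic and dizygotic twin pairs.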
ERROR ANALYSIS IN THE TRAVEL WRITING MADE BY THE STUDENTS OF ENGLISH STUDY PROGRAM
Vika Agustina; Esti Junining
2015-01-01
This study was conducted to identify the kinds of errors in surface strategy taxonomy and to know the dominant type of errors made by the fifth semester students of English Department of one State University in Malang-Indonesia in producing their travel writing. The type of research of this study is document analysis since it analyses written materials, in this case travel writing texts. The analysis finds that the grammatical errors made by the students based on surfa...
Study on analysis from sources of error for Airborne LIDAR
Ren, H. C.; Yan, Q.; Liu, Z. J.; Zuo, Z. Q.; Xu, Q. Q.; Li, F. F.; Song, C.
2016-11-01
With the advancement of aerial photogrammetry, obtaining geo-spatial information of high spatial and temporal resolution provides a new technical means for airborne LIDAR measurement techniques, with unique advantages and broad application prospects. Airborne LIDAR is increasingly becoming a new kind of earth-observation technology: mounted on an aviation platform, it receives laser pulses to obtain high-precision, high-density three-dimensional point-cloud coordinates and intensity information. In this paper, we briefly describe airborne laser radar systems, analyze in detail some error sources in airborne LIDAR data, and put forward corresponding methods to avoid or eliminate them. Taking into account practical engineering applications, some design recommendations are developed, which have crucial theoretical and practical significance in the field of airborne LIDAR data processing.
ERROR ANALYSIS IN THE TRAVEL WRITING MADE BY THE STUDENTS OF ENGLISH STUDY PROGRAM
Directory of Open Access Journals (Sweden)
Vika Agustina
2015-05-01
Full Text Available This study was conducted to identify the kinds of errors in surface strategy taxonomy and to know the dominant type of errors made by the fifth semester students of the English Department of one State University in Malang-Indonesia in producing their travel writing. The type of research of this study is document analysis since it analyses written materials, in this case travel writing texts. The analysis finds that the grammatical errors made by the students based on surface strategy taxonomy theory consist of four types: (1) omission, (2) addition, (3) misformation and (4) misordering. The most frequent errors occurring in misformation are in the use of tense form. Second, there are errors of omission of noun/verb inflection. Third, many clauses contain an unnecessary added phrase.
Institute of Scientific and Technical Information of China (English)
Wei LIU; Zhenyuan JIA; Fuji WANG; Yongshun ZHANG; Dongming GUO
2008-01-01
The geometrical nonlinearity of a giant magnetostrictive thin film (GMF) can be clearly detected under the magnetostriction effect. Thus, using geometrical linear elastic theory to describe the strain, stress, and constitutive relationship of a GMF is inaccurate. According to nonlinear elastic theory, a nonlinear deformation model of the bimorph GMF is established based on the assumption that the magnetostriction effect is equivalent to the effect of a body force loaded on the GMF. With the Taylor series method, the numerical solution is deduced. Experiments on TbDyFe/Polyimide (PI)/SmFe and TbDyFe/Cu/SmFe are then conducted to verify the proposed model. Results indicate that the nonlinear deflection curve model is in good conformity with the experimental data.
Directory of Open Access Journals (Sweden)
Yogeesha C.B
2014-09-01
Full Text Available The classical methods have limited scope in practical applications, as some of them involve objective functions which are not continuous and/or differentiable. Evolutionary computation is a subfield of artificial intelligence that addresses combinatorial optimization problems. The Travelling Salesperson Problem (TSP) is considered a classic example of a combinatorial optimization problem. It is an NP-complete problem that cannot be solved conventionally, particularly as the number of cities increases, so evolutionary techniques are a feasible approach to such problems. This paper explores an evolutionary technique, the geometric Hopfield neural network model, to solve the Travelling Salesperson Problem. The paper also reports the results of the geometric TSP and compares them with one of the existing, widely used nature-inspired heuristic approaches for solving the Travelling Salesperson Problem, the Ant Colony Optimization algorithm (ACA/ACO).
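For orientation, the classical Hopfield-Tank formulation encodes a tour as a city-by-position permutation matrix and minimizes an energy that combines constraint penalties with tour length. The sketch below evaluates that energy for candidate tours; it is a generic illustration with arbitrary penalty weights, not the paper's geometric model.

```python
import numpy as np

def hopfield_tsp_energy(V, D, A=10.0, B=10.0, C=1.0):
    """Hopfield-Tank style energy: row/column constraint penalties + tour length.

    V: (n, n) matrix, V[i, k] ~ city i visited at position k.
    D: (n, n) symmetric distance matrix."""
    n = len(V)
    row_penalty = ((V.sum(axis=1) - 1.0) ** 2).sum()   # each city visited once
    col_penalty = ((V.sum(axis=0) - 1.0) ** 2).sum()   # each position used once
    # Tour length: distance between cities at consecutive positions (cyclic)
    length = sum(
        D[i, j] * V[i, k] * V[j, (k + 1) % n]
        for i in range(n) for j in range(n) for k in range(n)
    )
    return A * row_penalty + B * col_penalty + C * length

# 4 cities on a unit square; the perimeter tour has length 4
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
perimeter = np.eye(4)   # visit cities in index order 0-1-2-3
assert abs(hopfield_tsp_energy(perimeter, D) - 4.0) < 1e-9
```

A Hopfield network's dynamics drive a continuous version of V toward low-energy states; tours that cross themselves score a strictly higher energy than the perimeter tour here.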
Bani-Yaseen, Abdulilah Dawoud; Al-Balawi, Mona
2014-01-01
The solvatochromic, spectral, and geometrical properties of nifenazone (NIF), a pyrazole-nicotinamide drug, were experimentally and computationally investigated in several neat solvents and in hydro-organic binary systems such as water-acetonitrile and water-dioxane systems. The bathochromic spectral shift observed in NIF absorption spectra when reducing the polarity of the solvent was correlated with the orientation polarizability (Δf). Unlike aprotic solvents, a satisfactory correlation bet...
Koppensteiner, Matthias; Zangerl, Christian
2017-04-01
considered, the consistency is obvious. Scanline measurements and analyses provide significant results for discontinuity properties under the described circumstances. Considering sampling biases, the obtained dataset even benefits from the randomized sampling process, due to the natural terrain. The scanline survey provides a statistical database which can be used for rock mass characterization. Geometrical rock mass characterization is essential to model the in-situ block size distribution, to estimate the degree of fracturing and rock mass anisotropy for quarry or tunnelling projects, or to define the mechanical rock mass properties based on classification systems. The study should contribute a reference for the development and application of other methods for investigating discontinuity properties in unstable rock masses.
Variation in Measurement Error in Asymmetry Studies: A New Model, Simulations and Application
Stefan Van Dongen
2015-01-01
The importance of measurement error in studies of asymmetry has been acknowledged for a long time. It is now common practice to acquire independent repeated measurements of trait values and to estimate the degree of measurement error relative to the amount of asymmetry. Methods also allow obtaining unbiased estimates of asymmetry, both at the population and individual level. One aspect that has been ignored is potential between-individual variation in measurement error. In this paper, I deve...
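The standard two-repeat design mentioned above separates measurement-error variance from between-individual variance with a one-way ANOVA. A hedged sketch of that baseline decomposition (not the new model the paper develops), run on simulated data with known variances:

```python
import numpy as np

def repeatability_decomposition(rep1, rep2):
    """Split the variance of a trait measured twice per individual into
    between-individual variance and measurement-error variance
    (one-way ANOVA with 2 repeats per individual)."""
    r1, r2 = np.asarray(rep1, float), np.asarray(rep2, float)
    n = len(r1)
    means = (r1 + r2) / 2.0
    ms_within = ((r1 - means) ** 2 + (r2 - means) ** 2).sum() / n   # df = n
    ms_among = 2.0 * means.var(ddof=1)                               # df = n - 1
    var_me = ms_within                                # measurement-error variance
    var_ind = max(ms_among - ms_within, 0.0) / 2.0    # between-individual variance
    return var_ind, var_me

# Simulated trait: true between-individual sd = 1.0, measurement-error sd = 0.5
rng = np.random.default_rng(42)
true_trait = rng.normal(0.0, 1.0, 5_000)
noise = lambda: rng.normal(0.0, 0.5, 5_000)
v_ind, v_me = repeatability_decomposition(true_trait + noise(), true_trait + noise())
```

With these inputs the recovered variances should sit near the simulated values (1.0 and 0.25); the paper's point is that this classical setup assumes a single error variance shared by all individuals, which the new model relaxes.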
Are Divorce Studies Trustworthy? The Effects of Survey Nonresponse and Response Errors
Mitchell, Colter
2010-01-01
Researchers rely on relationship data to measure the multifaceted nature of families. This article speaks to relationship data quality by examining the ramifications of different types of error on divorce estimates, models predicting divorce behavior, and models employing divorce as a predictor. Comparing matched survey and divorce certificate information from the 1995 Life Events and Satisfaction Study (N = 1,811) showed that nonresponse error is responsible for the majority of the error in ...
Medication Errors in the Home: A Multisite Study of Children With Cancer
Roblin, Douglas W.; Weingart, Saul N.; Houlahan, Kathleen E.; Degar, Barbara; Billett, Amy; Keuker, Christopher; Biggins, Colleen; Li, Justin; Wasilewski, Karen; Mazor, Kathleen M.
2013-01-01
OBJECTIVE: As home medication use increases, medications previously managed by nurses are now managed by patients and their families. Our objective was to describe the types of errors occurring in the home medication management of children with cancer. METHODS: In a prospective observational study at 3 pediatric oncology clinics in the northeastern and southeastern United States, patients undergoing chemotherapy and their parents were recruited from November 2007 through April 2011. We reviewed medical records and checked prescription doses. A trained nurse visited the home, reviewed medication bottles, and observed administration. Two physicians independently made judgments regarding whether an error occurred and its severity. Overall rates of errors were weighted to account for clustering within sites. RESULTS: We reviewed 963 medications and observed 242 medication administrations in the homes of 92 patients. We found 72 medication errors. Four errors led to significant patient injury. An additional 40 errors had potential for injury: 2 were life-threatening, 13 were serious, and 25 were significant. Error rates varied between study sites (40–121 errors per 100 patients); the weighted overall rate was 70.2 errors per 100 patients (95% confidence interval [CI]: 58.9–81.6). The weighted rate of errors with injury was 3.6 (95% CI: 1.7–5.5) per 100 patients and with potential to injure the patient was 36.3 (95% CI: 29.3–43.3) per 100 patients. Nonchemotherapy medications were more often involved in an error than chemotherapy. CONCLUSIONS: Medication errors were common in this multisite study of outpatient pediatric cancer care. Rates of preventable medication-related injuries in this outpatient population were comparable or higher than those found in studies of hospitalized patients. PMID:23629608
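The reported rates follow from simple arithmetic. As a hedged recomputation, the sketch below derives an unweighted rate per 100 patients with a normal-approximation Poisson confidence interval; the paper's published overall rate (70.2) additionally weights for clustering within sites, so the unweighted figure differs, as noted in the comment.

```python
import math

def rate_per_100(events, patients, z=1.96):
    """Unweighted event rate per 100 patients with a normal-approximation
    (Poisson) 95% confidence interval."""
    rate = 100.0 * events / patients
    half = z * 100.0 * math.sqrt(events) / patients
    return rate, rate - half, rate + half

# 72 errors observed among 92 patients (figures from the abstract)
rate, lo, hi = rate_per_100(72, 92)
# Unweighted rate ~78 per 100 patients; the paper's weighted rate (70.2)
# down-weights the high-rate site, so the two estimates differ as expected.
assert lo < rate < hi
```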
Laser measurement and analysis of reposition error in polishing systems
Liu, Weisen; Wang, Junhua; Xu, Min; He, Xiaoying
2015-10-01
In this paper, a robotic reposition error measurement method based on laser interference remote positioning is presented, the geometric error is analyzed in a polishing system based on a robot, and a mathematical model of the tilt error is presented. Studies show that errors of less than 1 mm are mainly caused by the tilt error at small incident angles. Marking the spot position with interference fringes greatly enhances the error measurement precision; the measurement precision of the tilt error can reach 5 μm. Measurement results show that the reposition error of the polishing system stems mainly from the tilt error caused by motor A, and the repositioning precision is greatly increased after improvement of the polishing system. The measurement method has important applications in practical error measurement, with low cost and simple operation.
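Why a small tilt dominates sub-millimetre reposition error is plain from the lever-arm geometry: an angular error θ acting at lever arm L displaces the tool tip by about L·sin θ. A sketch with an assumed 500 mm arm (the paper does not report this dimension):

```python
import math

def tilt_displacement(lever_mm, tilt_deg):
    """Lateral tool-tip displacement caused by a small angular tilt of a joint."""
    return lever_mm * math.sin(math.radians(tilt_deg))

# Assumed 500 mm lever arm: a 0.1 degree tilt already moves the tip ~0.87 mm,
# consistent with tilt dominating a sub-millimetre reposition error budget.
d = tilt_displacement(500.0, 0.1)
assert 0.8 < d < 0.9
```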
Keuning, Jos; Hemker, Bas
2014-01-01
The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…
Tsuji, Toshikazu; Irisa, Toshihiro; Ohata, Shunichi; Kokubu, Chiyo; Kanaya, Akiko; Sueyasu, Masanori; Egashira, Nobuaki; Masuda, Satohiro
2015-01-01
There are many reports regarding various medical institutions' attempts at incident prevention, but the relationship between incident types and their impact on patients in drug name errors has not been studied. Therefore, we analyzed this relationship, while also assessing the relationship between preparation and inspection errors. Furthermore, the present study aimed to clarify the incident types that lead to severe patient damage. The investigation in this study was restricted to drug name errors; preparation and inspection errors among them were classified into three categories (similarity of drug efficacy, similarity of drug name, similarity of drug appearance) or two groups (drug efficacy similarity (+) group, drug efficacy similarity (-) group). The relationship between preparation and inspection errors was then investigated across the three categories, and the relationship between incident types and impact on patients was examined in the two groups. Preparation errors occurred most often with similarity of drug efficacy, followed by similarity of drug name and then similarity of drug appearance. In contrast, inspection errors occurred most often with similarity of drug appearance, followed by similarity of drug name and then similarity of drug efficacy. In addition, the number of preparation errors in the drug efficacy similarity (-) group was lower than that in the drug efficacy similarity (+) group. However, the rate of inspection errors in the drug efficacy similarity (-) group was significantly higher than that in the drug efficacy similarity (+) group. Furthermore, the occupancy rate of preparation errors and of incidents above Levels 0, 1, and 2 in the drug efficacy similarity (-) group increased gradually with the rise in patient damage. Our results suggest that preparation errors caused by the similarity of drug appearance and/or drug name are likely to lead to incidents (inspection errors
Patanwala, Asad E; Sanders, Arthur B; Thomas, Michael C; Acquisto, Nicole M; Weant, Kyle A; Baker, Stephanie N; Merritt, Erica M; Erstad, Brian L
2012-05-01
The primary objective of this study is to determine the activities of pharmacists that lead to medication error interception in the emergency department (ED). This was a prospective, multicenter cohort study conducted in 4 geographically diverse academic and community EDs in the United States. Each site had clinical pharmacy services. Pharmacists at each site recorded their medication error interceptions for 250 hours of cumulative time when present in the ED (1,000 hours total for all 4 sites). Items recorded included the activities of the pharmacist that led to medication error interception, type of orders, phase of medication use process, and type of error. Independent evaluators reviewed all medication errors. Descriptive analyses were performed for all variables. A total of 16,446 patients presented to the EDs during the study, resulting in 364 confirmed medication error interceptions by pharmacists. The pharmacists' activities that led to medication error interception were as follows: involvement in consultative activities (n=187; 51.4%), review of medication orders (n=127; 34.9%), and other (n=50; 13.7%). The types of orders resulting in medication error interceptions were written or computerized orders (n=198; 54.4%), verbal orders (n=119; 32.7%), and other (n=47; 12.9%). Most medication error interceptions occurred during the prescribing phase of the medication use process (n=300; 82.4%) and the most common type of error was wrong dose (n=161; 44.2%). Pharmacists' review of written or computerized medication orders accounts for only a third of medication error interceptions. Most medication error interceptions occur during consultative activities. Copyright © 2011. Published by Mosby, Inc.
Did I say dog or cat? A study of semantic error detection and correction in children.
Hanley, J Richard; Cortis, Cathleen; Budd, Mary-Jane; Nozari, Nazbanou
2016-02-01
Although naturalistic studies of spontaneous speech suggest that young children can monitor their speech, the mechanisms for detection and correction of speech errors in children are not well understood. In particular, there is little research on monitoring semantic errors in this population. This study provides a systematic investigation of detection and correction of semantic errors in children between the ages of 5 and 8 years as they produced sentences to describe simple visual events involving nine highly familiar animals (the moving animals task). Results showed that older children made fewer errors and corrected a larger proportion of the errors that they made than younger children. We then tested the prediction of a production-based account of error monitoring that the strength of the language production system, and specifically its semantic-lexical component, should be correlated with the ability to detect and repair semantic errors. Strength of semantic-lexical mapping, as well as lexical-phonological mapping, was estimated individually for children by fitting their error patterns, obtained from an independent picture-naming task, to a computational model of language production. Children's picture-naming performance was predictive of their ability to monitor their semantic errors above and beyond age. This relationship was specific to the strength of the semantic-lexical part of the system, as predicted by the production-based monitor.
An on-chip study on the influence of geometrical confinement and chemical gradient on cell polarity.
Zheng, Wenfu; Xie, Yunyan; Sun, Kang; Wang, Dong; Zhang, Yi; Wang, Chen; Chen, Yong; Jiang, Xingyu
2014-09-01
Cell polarity plays key roles in tissue development, regeneration, and pathological processes. However, how cells establish and maintain polarity is still obscure. In this study, by employing microfluidic techniques, we explored the influence of geometrical confinement and chemical stimulation on cell polarity and their interplay. We found that teardrop shape-induced anterior/posterior polarization of cells displayed a homogeneous distribution of epidermal growth factor receptor, and the polarity could be maintained in a uniform epidermal growth factor (EGF) solution but broken by a reverse gradient of EGF, implying different mechanisms of geometrical and chemical cue-induced cell polarity. Further studies indicated that a teardrop pattern could cause polarized distribution of the microtubule-organizing center and nucleus-Golgi complex, and this polarity was weakened when the cells were released from the confinement. Our study provides evidence regarding the difference between geometrical and chemical cue-induced cell polarity and would be useful for understanding the relationship between polarity and directional migration of cells.
Fredh, Anna; Scherman, Jonas Bengtsson; Fog, Lotte S; Munck af Rosenschöld, Per
2013-03-01
The purpose of the present study was to investigate the ability of commercial patient quality assurance (QA) systems to detect linear accelerator-related errors. Four measuring systems (Delta4®, OCTAVIUS®, COMPASS, and Epiqa™) designed for patient-specific quality assurance for rotational radiation therapy were compared by measuring four clinical rotational intensity modulated radiation therapy plans as well as plans with intentionally introduced errors. The intentional errors included increasing the number of monitor units, widening of the MLC banks, and rotation of the collimator. The measurements were analyzed using the inherent gamma evaluation with 2% and 2 mm criteria and with 3% and 3 mm criteria. When applicable, the plans with intentional errors were compared with the original plans both by 3D gamma evaluation and by inspecting the dose volume histograms produced by the systems. There was considerable variation in the type of errors that the various systems detected; the failure rate for the plans with errors varied between 0% and 72%. Using 2% and 2 mm criteria and a 95% pass rate, Delta4® detected 15 of 20 errors, OCTAVIUS® detected 8 of 20 errors, COMPASS detected 8 of 20 errors, and Epiqa™ detected 20 of 20 errors. It was also found that the calibration and measuring procedures of some of the patient QA systems could benefit from improvements. The various systems detect different errors, and the sensitivity to the introduced errors depends on the plan. There was poor correlation between the gamma evaluation pass rates of the QA procedures and the deviations observed in the dose volume histograms.
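The gamma comparison these QA systems rely on is compact enough to sketch. Below is a minimal 1D global gamma implementation (in the spirit of the standard Low-type formulation); the function name, the global normalization to the reference maximum, and the default 2%/2 mm criteria are illustrative assumptions, not any vendor's implementation:

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_crit=0.02, dist_crit=2.0):
    """Minimal 1D global gamma analysis: for each reference point, take the
    minimum over all evaluated points of
    sqrt((dose diff / dose criterion)^2 + (distance / distance criterion)^2).
    A point passes when gamma <= 1."""
    d_max = ref_dose.max()  # global normalization to the reference maximum
    gammas = np.empty_like(ref_dose)
    for i, (x, d) in enumerate(zip(positions, ref_dose)):
        dose_term = (eval_dose - d) / (dose_crit * d_max)
        dist_term = (positions - x) / dist_crit          # positions in mm
        gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gammas, 100.0 * np.mean(gammas <= 1.0)
```

With identical distributions every point has gamma = 0 and the pass rate is 100%; an introduced error such as scaling the evaluated dose pushes points near the peak above gamma = 1, lowering the pass rate.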
Geometric systematic prostate biopsy.
Chang, Doyoung; Chong, Xue; Kim, Chunwoo; Jun, Changhan; Petrisor, Doru; Han, Misop; Stoianovici, Dan
2017-04-01
The common sextant prostate biopsy schema lacks a three-dimensional (3D) geometric definition. The study objective was to determine the influence of the geometric distribution of the cores on the detection probability of prostate cancer (PCa). The detection probability of significant (>0.5 cm³) and insignificant (<0.5 cm³) PCa was computed, and the geometric distribution of the cores was optimized to maximize the probability of detecting significant cancer for various prostate sizes (20-100 cm³), numbers of biopsy cores (6-40 cores) and biopsy core lengths (14-40 mm) for transrectal and transperineal biopsies. The detection of significant cancer can be improved by geometric optimization. With the current sextant biopsy, up to 20% of tumors may be missed at biopsy in a 20 cm³ prostate due to the schema. Larger numbers of cores and longer cores are required to sample larger prostates with an equal detection probability. A higher number of cores increases both significant and insignificant tumor detection probability, but predominantly increases the detection of insignificant tumors. The study demonstrates mathematically that the geometric biopsy schema plays an important clinical role, and that increasing the number of biopsy cores is not necessarily helpful.
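The geometric argument can be illustrated with a toy Monte Carlo model. This is a deliberately simplified sketch, not the authors' optimization: the gland is a sphere, the tumor a sphere placed uniformly at random inside it, and the cores are parallel vertical segments on a square grid; all names and defaults are assumptions:

```python
import numpy as np

def detection_probability(n_cores, prostate_vol=20.0, tumor_vol=0.5,
                          core_len=17.0, n_trials=5000, seed=0):
    """Monte Carlo estimate of the probability that at least one core
    passes within the tumor radius of the tumor centre. Volumes in cm^3,
    lengths in mm."""
    rng = np.random.default_rng(seed)
    R = (3.0 * prostate_vol * 1000.0 / (4.0 * np.pi)) ** (1.0 / 3.0)  # gland radius, mm
    r = (3.0 * tumor_vol * 1000.0 / (4.0 * np.pi)) ** (1.0 / 3.0)     # tumor radius, mm
    side = int(np.ceil(np.sqrt(n_cores)))
    grid = np.linspace(-R / 2.0, R / 2.0, side)
    cores = [(x, y) for x in grid for y in grid][:n_cores]
    hits = 0
    for _ in range(n_trials):
        while True:  # uniform tumor centre, kept wholly inside the gland
            c = rng.uniform(-R, R, 3)
            if np.linalg.norm(c) <= R - r:
                break
        for x, y in cores:
            dz = max(abs(c[2]) - core_len / 2.0, 0.0)  # axial distance beyond core tip
            if np.sqrt((c[0] - x) ** 2 + (c[1] - y) ** 2 + dz ** 2) <= r:
                hits += 1
                break
    return hits / n_trials
```

Increasing the number of cores (or their length) raises the estimated detection probability, mirroring the qualitative conclusion of the abstract.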
Institute of Scientific and Technical Information of China (English)
FANG Mingqiang
2006-01-01
Numerous published results have shown the importance of the Western Pacific Warm Pool (WPWP) surface centroid movement in ENSO (El Niño/Southern Oscillation)-related studies. However, some recent research conclusions make it necessary to clarify the differences between the two currently existing types of WPWP surface centroid: the geometric centroid and the thermal (heat) centroid. This study analyzes the physical backgrounds of the two types of centroid and points out their differences, which suggest that different types of centroid may serve different study purposes. This study also shows that the centroid movement is related to the regional sea surface temperature (SST) anomaly and can also be regarded as an important indicator of ENSO events.
Islam, Kamrul; Duke, Kajsa; Mustafy, Tanvir; Adeeb, Samer M; Ronsky, Janet L; El-Rich, Marwan
2015-01-01
The biomechanics of the patellofemoral (PF) joint is complex in nature, and the aetiology of such manifestations of PF instability as patellofemoral pain syndrome (PFPS) is still unclear. At this point, the particular factors affecting PFPS have not yet been determined. This study has two objectives: (1) The first is to develop an alternative geometric method using a three-dimensional (3D) registration technique and linear mapping to investigate the PF joint contact stress using an indirect measure: the depth of virtual penetration (PD) of the patellar cartilage surface into the femoral cartilage surface. (2) The second is to develop 3D PF joint models using the finite element analysis (FEA) to quantify in vivo cartilage contact stress and to compare the peak contact stress location obtained from the FE models with the location of the maximum PD. Magnetic resonance images of healthy and PFPS subjects at knee flexion angles of 15°, 30° and 45° during isometric loading have been used to develop the geometric models. The results obtained from both approaches demonstrated that the subjects with PFPS show higher PD and contact stresses than the normal subjects. Maximum stress and PD increase with flexion angle, and occur on the lateral side in healthy and on the medial side in PFPS subjects. It has been concluded that the alternative geometric method is reliable in addition to being computationally efficient compared with FEA, and has the potential to assess the mechanics of PFPS with an accuracy similar to the FEA.
Long-term academic stress increases the late component of error processing: an ERP study.
Wu, Jianhui; Yuan, Yiran; Duan, Hongxia; Qin, Shaozheng; Buchanan, Tony W; Zhang, Kan; Zhang, Liang
2014-05-01
Exposure to long-term stress has a variety of consequences for the brain and cognition. Few studies have examined the influence of long-term stress on event-related potential (ERP) indices of error processing. The current study investigated how long-term academic stress modulates the error-related negativity (Ne or ERN) and the error positivity (Pe) components of error processing. Forty-one male participants undergoing preparation for a major academic examination and 20 non-exam participants completed a Go/NoGo task while ERP measures were collected. The exam group reported higher perceived stress levels and showed increased Pe amplitude compared with the non-exam group. Participants' ratings of the importance of the exam were positively associated with the amplitude of the Pe, but these effects were not found for the Ne/ERN. These results suggest that long-term academic stress leads to greater motivational assessment of, and higher emotional response to, errors.
[Responsibility due to medication errors in France: a study based on SHAM insurance data].
Theissen, A; Orban, J-C; Fuz, F; Guerin, J-P; Flavin, P; Albertini, S; Maricic, S; Saquet, D; Niccolai, P
2015-03-01
Safe medication practices in hospitals constitute a major public health issue. The drug supply chain is a complex process and a potential source of errors and harm to the patient. SHAM is the largest French provider of medical liability insurance and a relevant source of data on healthcare complications. The main objective of the study was to analyze the type and cause of medication errors declared to SHAM that led to a court conviction. We performed a retrospective study of insurance claims provided by SHAM involving a medication error and leading to a conviction over a 6-year period (2005-2010). Thirty-one cases were analysed, 21 for scheduled activity and 10 for emergency activity. The consequences of the claims were mostly serious (12 deaths, 14 serious complications, 5 simple complications). The types of medication error were drug monitoring errors (11 cases), administration errors (5 cases), overdoses (6 cases), allergies (4 cases), contraindications (3 cases) and omissions (2 cases). The intravenous route of administration was involved in 19 of 31 cases (61%). The causes identified by the court experts were errors related to service organization (11), medical practice (11) or nursing practice (13). Only one claim was due to the hospital pharmacy. Claims related to the drug supply chain are infrequent but potentially serious. These data should help strengthen quality approaches in risk management. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Kim, S.
2015-12-01
This study investigates the background error covariance for reduced-rank retrospective optimal interpolation (reduced-rank ROI). The retrospective optimal interpolation (ROI) algorithm, which assimilates observations over the analysis window to obtain a variance-minimum estimate of the atmospheric state at the initial time of the window, was suggested by Song et al. (2009); the assimilation window of ROI is gradually increased. Song and Lim (2011) improved the method as reduced-rank ROI by incorporating eigen-decomposition and covariance inflation. In this study, the background error covariance for the reduced-rank ROI algorithm is investigated with the Weather Research and Forecasting (WRF) model. Reduced-rank ROI is applied by incorporating the eigen-decomposition of the background error covariance estimated from an ensemble, and the structure of the background error covariance is examined through its eigenvectors. The data assimilation experiments with reduced-rank ROI are based on Observing System Simulation Experiments (OSSEs). A regularly dense network, a regularly sparse network, and an irregular, realistic network are used as observation networks; all observations are assumed to be located at model grid points. The analysis error with reduced-rank ROI decreases significantly, and vertical profiles of background and analysis error show an overall reduction of the analysis error.
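The ensemble-based eigen-decomposition step can be sketched generically. This is a plain rank reduction of a sample covariance matrix, assuming nothing about the actual WRF/ROI implementation; names and shapes are illustrative:

```python
import numpy as np

def reduced_rank_covariance(ensemble, rank):
    """ensemble: array of shape (n_members, n_state). Returns the
    rank-'rank' approximation of the sample background error covariance B
    via eigen-decomposition, together with the leading eigenpairs."""
    X = ensemble - ensemble.mean(axis=0)          # member perturbations
    B = X.T @ X / (ensemble.shape[0] - 1)         # sample covariance
    w, V = np.linalg.eigh(B)                      # eigenvalues, ascending
    idx = np.argsort(w)[::-1][:rank]              # keep the leading modes
    w_r, V_r = w[idx], V[:, idx]
    B_r = (V_r * w_r) @ V_r.T                     # rank-reduced B
    return B_r, w_r, V_r
```

Keeping all modes reproduces the full sample covariance exactly; truncating to a small rank keeps only the leading error structures (the eigenvectors), which is the sense in which the covariance structure is "investigated from each eigenvector."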
Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A
2010-05-01
Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for
Directory of Open Access Journals (Sweden)
Kharchenko P. M.
2015-10-01
The calculations used the following assumptions: (1) non-excluded systematic errors are distributed with equal probability; (2) random errors are normally distributed; (3) the total error is the composition of the non-excluded systematic and random errors. The measurement error of pressure was estimated from the working formula. Since the confidence interval of each variable is less than the instrumental error, the total error of the measured value P is characterized by the instrumental errors of all variables. The temperature measurement error accounts for both systematic and random components. The random error was estimated from measurements of the specific volume of water on six isotherms, and the obtained values were compared with published data; as an approximate estimate of the random error of our experimental data, we take the total deviation of the specific volume over all isotherms from the published data. For the studied fractions, the confidence limit of the total error of the measurement results lies in the range 0.03 to 0.1%. At temperatures close to the critical point, the influence of the reference errors and of the error associated with corrections for the thermal expansion of the piezometer increases. In the two-phase region, the confidence limit of the total error increases to between 0.08 and 0.15%, owing to the sharp increase in this region of the reference error of pressure and of the error in determining the mass of the substance in the piezometer.
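Under assumptions (1)-(3), the composition of the two error components can be written down directly. The sketch below follows common metrological practice for such budgets (the coverage factor k = 1.1 for uniformly distributed systematic limits at P = 0.95 is a standard convention, e.g. in GOST-style error analysis); the function and its defaults are illustrative, not the paper's exact procedure:

```python
import math

def total_error(sys_limits, random_std, n, t_coeff=2.0, k=1.1):
    """Compose non-excluded systematic error limits (assumption 1:
    uniformly distributed) with a random component (assumption 2: normal).
    Systematic limits combine as a sum of uniform distributions with
    coverage factor k; the random part is t * s / sqrt(n); the two parts
    are composed in quadrature (assumption 3)."""
    theta = k * math.sqrt(sum(th * th for th in sys_limits))  # systematic part
    eps = t_coeff * random_std / math.sqrt(n)                 # random part
    return math.sqrt(theta * theta + eps * eps)
```

For a single instrumental limit of 0.03% and a negligible random part, the total is simply 1.1 × 0.03% = 0.033%, i.e. within the 0.03-0.1% band quoted above.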
Influence of genotyping error in linkage mapping for complex traits – an analytic study
Directory of Open Access Journals (Sweden)
van Houwelingen Hans C
2008-08-01
Background: Despite the current trend towards large epidemiological studies of unrelated individuals, linkage studies in families are still thoroughly utilized as tools for disease gene mapping. The use of single-nucleotide polymorphism (SNP) array technology in genotyping of family data has the potential to provide more informative linkage data. Nevertheless, SNP array data are not immune to genotyping error which, as has been suggested in the past, could dramatically affect the evidence for linkage, especially in selective designs such as affected sib pair (ASP) designs. The influence of genotyping error on selective designs for continuous traits has not been assessed yet. Results: We use the identity-by-descent (IBD) regression-based paradigm for linkage testing to analytically quantify the effect of simple genotyping error models under specific selection schemes for sibling pairs. We show, for example, that in extremely concordant (EC) designs, genotyping error leads to decreased power, whereas it leads to increased type I error in extremely discordant (ED) designs. Perhaps surprisingly, the effect of genotyping error on inference is most severe in designs where selection is least extreme. We suggest a genomic control for genotyping errors via a simple modification of the intercept in the regression for linkage. Conclusion: This study extends earlier findings: genotyping error can substantially affect type I error and power in selective designs for continuous traits. Designs involving both EC and ED sib pairs are fairly immune to genotyping error. When those designs are not feasible, the simple genomic control strategy that we suggest offers the potential to deliver more robust inference, especially if genotyping is carried out by SNP array technology.
Energy Technology Data Exchange (ETDEWEB)
Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam [Dept. of Nuclear Medicine, Severance Hospital, Yonsei University, Seoul (Korea, Republic of); Park, Hoon Hee [Dept. of Radiological Technology, Shingu college, Sungnam (Korea, Republic of)
2013-12-15
This study aimed to evaluate the effect of T1/2 on count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and to suggest a new quality control method based on this effect. We produced point sources of 18.5 to 185 MBq of 99mTcO4- in 2 mL syringes, and acquired 30 frames of dynamic images at 10 to 60 seconds per frame using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source on 5 gamma cameras (Infinia 2, Forte 2, Argus 1). In the first experiment there were no significant differences in the average count rates of the 18.5 to 92.5 MBq sources analyzed at 10 to 60 seconds/frame in 10-second steps (p>0.05), but the average count rates of sources over 111 MBq were significantly low at 60 seconds/frame (p<0.01). According to the linear regression analysis of the count rates of the 5 gamma cameras acquired over 90 minutes, the counting efficiency of the fourth gamma camera was the lowest at 0.0064%, and its gradient and coefficient of variation were the highest at 0.0042 and 0.229, respectively. We found no abnormal fluctuation in the χ² test of the count rates (p>0.02), and Levene's F-test showed homogeneity of variance among the gamma cameras (p>0.05). In the correlation analysis, the only significant correlation was a negative one between counting efficiency and gradient (r=-0.90, p<0.05). Finally, calculating the T1/2 error resulting from a change of gradient of -0.25% to +0.25% showed that the error grows as T1/2 lengthens or the gradient increases. Estimating the value for the fourth camera, which had the highest gradient, no T1/2 error was seen within 60 minutes. In conclusion, rigorous quality management of scintillation gamma cameras in the medical field is necessary.
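The relationship between the fitted gradient and the recovered half-life can be sketched with a log-linear fit of count rate against time. The sketch assumes the nominal 99mTc half-life of 6.01 h and an ideal decay with no camera losses; the variable names and the ±0.25% gradient perturbation are illustrative:

```python
import numpy as np

T_HALF_MIN = 6.01 * 60.0  # assumed nominal Tc-99m half-life, in minutes

def fit_half_life(times_min, rates):
    """Log-linear least-squares fit of count rate vs. time; returns the
    fitted gradient (per minute) and the recovered half-life (minutes)."""
    slope, _ = np.polyfit(times_min, np.log(rates), 1)
    return slope, np.log(2) / -slope

t = np.arange(0.0, 90.0, 1.0)                 # a 90-minute acquisition
rates = 1000.0 * 0.5 ** (t / T_HALF_MIN)      # ideal decay, no dead-time loss
slope, t_half = fit_half_life(t, rates)

# A +0.25% error in the fitted gradient maps directly into a -0.25% error
# in the recovered half-life, since T1/2 = ln(2) / |gradient|.
t_half_perturbed = np.log(2) / -(slope * 1.0025)
```

This makes explicit why a camera with a higher gradient magnifies the T1/2 error: the recovered half-life scales inversely with the fitted gradient.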
Steinhauser, Marco; Eichele, Heike; Juvodden, Hilde T; Huster, Rene J; Ullsperger, Markus; Eichele, Tom
2012-01-01
Errors in choice tasks are preceded by gradual changes in brain activity presumably related to fluctuations in cognitive control that promote the occurrence of errors. In the present paper, we use connectionist modeling to explore the hypothesis that these fluctuations reflect (mal-)adaptive adjustments of cognitive control. We considered ERP data from a study in which the probability of conflict in an Eriksen-flanker task was manipulated in sub-blocks of trials. Errors in these data were preceded by a gradual decline of N2 amplitude. After fitting a connectionist model of conflict adaptation to the data, we analyzed simulated N2 amplitude, simulated response times (RTs), and stimulus history preceding errors in the model, and found that the model produced the same pattern as obtained in the empirical data. Moreover, this pattern is not found in alternative models in which cognitive control varies randomly or in an oscillating manner. Our simulations suggest that the decline of N2 amplitude preceding errors reflects an increasing adaptation of cognitive control to specific task demands, which leads to an error when these task demands change. Taken together, these results provide evidence that error-preceding brain activity can reflect adaptive adjustments rather than unsystematic fluctuations of cognitive control, and therefore, that these errors are actually a consequence of the adaptiveness of human cognition.
Directory of Open Access Journals (Sweden)
Hua-Zhan Yin
In everyday life, error monitoring and processing are important for improving ongoing performance in response to a changing environment. However, detecting an error is not always a conscious process. The temporal activation patterns of brain areas related to cognitive control in the absence of conscious awareness of an error remain unknown. In the present study, event-related potentials (ERPs) were used to explore the neural bases of unconscious error detection while subjects solved a Chinese anagram task. Our ERP data showed that the unconscious error detection (UED) response elicited a more negative ERP component (N2) than did the no-error (NE) and detected-error (DE) responses in the 300-400-ms time window, and the DE response elicited a greater late positive component (LPC) than did the UED and NE responses in the 900-1200-ms time window after the onset of the anagram stimuli. Taken together with the results of dipole source analysis, the N2 (anterior cingulate cortex) might reflect unconscious/automatic conflict monitoring, and the LPC (superior/medial frontal gyrus) might reflect conscious error recognition.
Lüdde, H. J.; Achenbach, A.; Kalkbrenner, T.; Jankowiak, H. C.; Kirchner, T.
2016-05-01
A recently introduced model to account for geometric screening corrections in an independent-atom-model description of ion-molecule collisions is applied to proton collisions from amino acids and DNA and RNA nucleobases. The correction coefficients are obtained from using a pixel counting method (PCM) for the exact calculation of the effective cross sectional area that emerges when the molecular cross section is pictured as a structure of (overlapping) atomic cross sections. This structure varies with the relative orientation of the molecule with respect to the projectile beam direction and, accordingly, orientation-independent total cross sections are obtained from averaging the pixel count over many orientations. We present net capture and net ionization cross sections over wide ranges of impact energy and analyze the strength of the screening effect by comparing the PCM results with Bragg additivity rule cross sections and with experimental data where available. Work supported by NSERC, Canada.
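The pixel counting idea itself is compact: rasterize the union of atomic disks as seen along the beam axis and count covered pixels. The 2D sketch below is not the authors' code; the resolution, names, and the grid-based rasterization are assumptions:

```python
import numpy as np

def pcm_area(centers, radii, resolution=400):
    """Pixel counting: rasterize the union of (possibly overlapping)
    circles on a regular grid and return the covered area, i.e. the
    effective cross-sectional area seen along the beam axis."""
    cs = np.asarray(centers, dtype=float)
    rs = np.asarray(radii, dtype=float)
    lo = (cs - rs[:, None]).min(axis=0)          # bounding box of the union
    hi = (cs + rs[:, None]).max(axis=0)
    xs = np.linspace(lo[0], hi[0], resolution)
    ys = np.linspace(lo[1], hi[1], resolution)
    X, Y = np.meshgrid(xs, ys)
    inside = np.zeros_like(X, dtype=bool)
    for (cx, cy), r in zip(cs, rs):
        inside |= (X - cx) ** 2 + (Y - cy) ** 2 <= r ** 2
    pixel = (xs[1] - xs[0]) * (ys[1] - ys[0])
    return inside.sum() * pixel
```

For overlapping disks the union area is smaller than the sum of the individual areas, which is exactly the screening correction relative to the Bragg additivity rule; averaging over molecular orientations then gives the orientation-independent cross section.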
Bani-Yaseen, Abdulilah Dawoud; Al-Balawi, Mona
2014-08-07
The solvatochromic, spectral, and geometrical properties of nifenazone (NIF), a pyrazole-nicotinamide drug, were experimentally and computationally investigated in several neat solvents and in hydro-organic binary systems such as water-acetonitrile and water-dioxane. The bathochromic spectral shift observed in the NIF absorption spectra upon reducing the polarity of the solvent was correlated with the orientation polarizability (Δf). Unlike for aprotic solvents, a satisfactory correlation between λmax and Δf was found for polar protic solvents (linear regression coefficient R = 0.93). In addition, the medium-dependent spectral properties were correlated with the Kamlet-Taft solvatochromic parameters (α, β, and π*) by applying multiple linear regression analysis (MLRA). The results obtained from this analysis were then employed to establish MLRA relationships for NIF in order to estimate the spectral shift in different solvents, which in turn exhibited excellent correlation (R > 0.99) with the experimental values of νmax. Density functional theory (DFT) and time-dependent DFT calculations coupled with the integral equation formalism polarizable continuum model (IEF-PCM) were performed to investigate the solvent-dependent spectral and geometrical properties of NIF. The calculations showed good and poor agreement with the experimental results using the CAM-B3LYP and B3LYP functionals, respectively. Experimental and theoretical results confirmed that the chemical properties of NIF are strongly dependent on the polarity of the chosen medium and its hydrogen bonding capability, supporting the hypothesis of delocalization of the electron density within the pyrazole ring of NIF.
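The MLRA step is ordinary least squares on the Kamlet-Taft form ν_max = ν0 + a·α + b·β + s·π*. The sketch below uses synthetic solvent parameters and coefficients purely for illustration, not the fitted values for NIF:

```python
import numpy as np

def kamlet_taft_mlra(alpha, beta, pi_star, nu_max):
    """Fit nu_max = nu0 + a*alpha + b*beta + s*pi_star by least squares;
    returns the coefficient vector (nu0, a, b, s) and the multiple
    correlation coefficient R."""
    A = np.column_stack([np.ones_like(alpha), alpha, beta, pi_star])
    coef, *_ = np.linalg.lstsq(A, nu_max, rcond=None)
    pred = A @ coef
    ss_res = np.sum((nu_max - pred) ** 2)
    ss_tot = np.sum((nu_max - nu_max.mean()) ** 2)
    return coef, np.sqrt(1.0 - ss_res / ss_tot)
```

Once the coefficients are established from measured solvents, the same linear form predicts νmax in a new solvent from its tabulated (α, β, π*) values, which is how the R > 0.99 correlation in the abstract is obtained.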
Prospective study of the incidence, nature and causes of dispensing errors in community pharmacies.
Ashcroft, Darren M; Quinlan, Paul; Blenkinsopp, Alison
2005-05-01
Each year over 600 million prescription items are dispensed in community pharmacies in England and Wales. Despite this, there is little published evidence relating to dispensing errors and near misses occurring in this setting. This study sought to determine their incidence, nature and causes. Prospective study over a 4-week period in 35 community pharmacies (9 independent pharmacies and 26 chain pharmacies) in the UK. Pharmacists recorded details of all incidents that occurred during the dispensing process, including information about: the stage at which the error was detected; who found the error; who made the error; type of error; reported cause of error; and circumstances associated with the error. 125,395 prescribed items were dispensed during the study period and 330 incidents were recorded relating to 310 prescriptions. 280 (84.8%) incidents were classified as near misses (rate per 10,000 items dispensed = 22.33, 95% CI 19.79-25.10), while the remaining 50 (15.2%) were classified as dispensing errors (rate per 10,000 items dispensed = 3.99, 95% CI 2.96-5.26). Selection errors were the most common type of incident (199, 60.3%), followed by labeling (109, 33.0%) and bagging errors (22, 6.6%). Most of the incidents were caused by misreading the prescription (90, 24.5%), similar drug names (62, 16.8%), selecting the previous drug or dose from the patient's medication record on the pharmacy computer (42, 11.4%) or similar packaging (28, 7.6%). This study has demonstrated that a wide range of medication errors occur in community pharmacies. On average, for every 10,000 items dispensed, there are around 22 near misses and four dispensing errors. Given the current plans for reporting adverse events in the NHS, greater insight into the likely incidence and nature of dispensing errors will be helpful in designing effective risk management strategies in primary care. Copyright (c) 2004 John Wiley & Sons, Ltd.
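The quoted rates and confidence intervals can be reproduced almost exactly by treating the counts as Poisson and building the 95% interval on the log scale. This is a sketch of one standard approach, not necessarily the authors' exact method:

```python
import math

def rate_per_10000(events, denominator):
    """Point rate per 10,000 items with an approximate 95% CI, using the
    normal approximation to the Poisson count on the log scale."""
    rate = 10000.0 * events / denominator
    se_log = 1.0 / math.sqrt(events)        # SE of log(count) for Poisson
    lo = rate * math.exp(-1.96 * se_log)
    hi = rate * math.exp(+1.96 * se_log)
    return rate, lo, hi
```

For the 280 near misses among 125,395 items, this gives a rate of about 22.33 per 10,000 with an interval close to the reported 19.79-25.10.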
Offset Error Compensation in Roundness Measurement
Institute of Scientific and Technical Information of China (English)
朱喜林; 史俊; 李晓梅
2004-01-01
This paper analyses three causes of offset error in roundness measurement and presents corresponding compensation methods. The causes of offset error include excursion error, resulting from the deflection of the sensor's line of measurement from the rotational center in measurement (the datum center); eccentricity error, resulting from the variance between the workpiece's geometrical center and the rotational center; and tilt error, resulting from the tilt between the workpiece's geometrical axes and the rotational centerline.
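Eccentricity compensation is commonly done by fitting and removing the first harmonic of the radial profile (the limaçon approximation): a pure eccentricity e at phase φ appears as r(θ) ≈ R + e·cos(θ − φ). The sketch below is a generic least-squares version with illustrative names, not the paper's specific method:

```python
import numpy as np

def remove_eccentricity(theta, r):
    """Least-squares fit of the limacon r(theta) = R + a*cos(theta) + b*sin(theta);
    subtracting the first-harmonic (eccentricity) term leaves the roundness
    profile about the fitted radius R."""
    A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    (R, a, b), *_ = np.linalg.lstsq(A, r, rcond=None)
    residual = r - (a * np.cos(theta) + b * np.sin(theta))  # eccentricity removed
    return R, (a, b), residual
```

After the first harmonic is removed, the remaining peak-to-valley of the residual profile is the roundness deviation proper: higher harmonics (lobing) survive, while the offset between the workpiece center and the rotational center is gone.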
An Empirical Study of Pronunciation Errors in French.
Walz, Joel
1980-01-01
Presents results of a study that sought to assess the pronunciation problems of a large number of American students in a beginning college-level French course. Learner difficulties over a 15-week period were used to create a hierarchy of minimal contrasts representing major, secondary, and minor problems for the students in learning French sounds.
Post-error slowing in sequential action: an aging study
Ruitenberg, Marit F.L.; Abrahamse, Elger L.; de Kleine, Elian; Verwey, Willem B.
2014-01-01
Previous studies demonstrated significant differences in the learning and performance of discrete movement sequences across the lifespan: Young adults (18–28 years) showed more indications for the development of (implicit) motor chunks and explicit sequence knowledge than middle-aged (55–62 years; V
Study of on-machine error identification and compensation methods for micro machine tools
Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng
2016-08-01
Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerances. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by re-installation of the workpiece, the measurement and compensation should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very tiny, the measurement method should be non-contact. By integrating an image reconstruction method, camera pixel correction, coordinate transformation, an error identification algorithm, and a trajectory auto-correction method, a vision-based error measurement and compensation method that can inspect micro machining errors on-machine and automatically generate an error-corrected numerical control (NC) program for error compensation was developed in this study. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to reconstruct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contours, the errors between the actual and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results
Institute of Scientific and Technical Information of China (English)
ZHOU Junzhe; WANG Chongyu
2005-01-01
The effects of Si doping on the geometric and electronic structure of a closed carbon nanotube (CNT) are studied by a first-principles method, DMol. It is found that the local density of states at the Fermi level (EF) increases due to the Si doping, and the non-occupied states above the EF shift toward the lower energy range under an external electric field. In addition, due to the doping of Si, a sub-tip on the CNT cap is formed, consisting of the Si atom and its neighboring C atoms. From these results it is concluded that Si doping is beneficial to the CNT field emission properties.
Jiang, Luyun; Sun, Wei; Gao, Yajun; Zhao, Jianwei
2014-04-14
Thermal stability is one of the main concerns for the synthesis of hollow nanoparticles. In this work, molecular dynamics simulation gave an insight into the atomic reconstruction and energy evolution during the collapse of hollow gold nanoballs, based on which a mechanism was proposed. The stability was found to depend on temperature, its wall thickness and aspect ratio to a great extent. The relationship among these three factors was revealed in geometric thermal phase diagrams (GTPDs). The GTPDs were studied theoretically, and the boundary between different stability regions can be fitted and calculated. Therefore, the GTPDs at different temperatures can be deduced and used as a guide for hollow structure synthesis.
A research agenda: does geocoding positional error matter in health GIS studies?
Jacquez, Geoffrey M
2012-04-01
Until recently, little attention has been paid to geocoding positional accuracy and its impacts on accessibility measures; estimates of disease rates; findings of disease clustering; spatial prediction and modeling of health outcomes; and estimates of individual exposures based on geographic proximity to pollutant and pathogen sources. It is now clear that positional errors can result in flawed findings and poor public health decisions. Yet the current state-of-practice is to ignore geocoding positional uncertainty, primarily because of a lack of theory, methods and tools for quantifying, modeling, and adjusting for geocoding positional errors in health analysis. This paper proposes a research agenda to address this need. It summarizes the basics of the geocoding process, its assumptions, and empirical evidence describing the magnitude of geocoding positional error. An overview of the impacts of positional error in health analysis, including accessibility, disease clustering, exposure reconstruction, and spatial weights estimation is presented. The proposed research agenda addresses five key needs: (1) a lack of standardized, open-access geocoding resources for use in health research; (2) a lack of geocoding validation datasets that will allow the evaluation of alternative geocoding engines and procedures; (3) a lack of spatially explicit geocoding positional error models; (4) a lack of resources for assessing the sensitivity of spatial analysis results to geocoding positional error; (5) a lack of demonstration studies that illustrate the sensitivity of health policy decisions to geocoding positional error.
Directory of Open Access Journals (Sweden)
Jin-Young Lee
2015-01-01
This paper presents a study that analyzes and modifies the Islamic star pattern using a digital algorithm, introducing a method to efficiently modify and control classical geometric patterns through experiments and applications of computer algorithms. This will help to bridge the gap between the closed nature of classical geometric patterns and the influx of design driven by digital technology, and to lay a foundation for efficiency and flexibility in developing future designs and material fabrication by promoting better understanding of the various methods for controlling geometric patterns.
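As a hedged illustration of controlling a classical pattern algorithmically (not the paper's method), the star polygon {n/k}, a common building block of Islamic star patterns, can be generated by visiting every k-th vertex of a regular n-gon:

```python
import math

def star_polygon(n, k, radius=1.0):
    """Vertex sequence of the {n/k} star polygon: visit every k-th vertex
    of a regular n-gon (n and k coprime gives a single closed path)."""
    base = [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]
    return [base[(i * k) % n] for i in range(n)]

# The {8/3} octagram, a motif frequently seen in Islamic geometric design
octagram = star_polygon(8, 3)
```

Varying n, k and radius parametrically is the kind of control a digital algorithm adds over compass-and-straightedge construction.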
Numerical study of an error model for a strap-down INS
Grigorie, T. L.; Sandu, D. G.; Corcau, C. L.
2016-10-01
The paper presents a numerical study of a mathematical error model developed for a strap-down inertial navigation system. The study aims to validate the error model by using Matlab/Simulink software models implementing the inertial navigator and the error model mathematics. To generate the inputs to the evaluation software, software models of the inertial sensors are used. The sensor models were developed based on the IEEE equivalent models for inertial sensors and on analysis of the data sheets of real inertial sensors. The paper successively presents the inertial navigation equations (attitude, position and speed), the mathematics of the inertial navigator error model, the software implementations and the numerical evaluation results.
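As a rough illustration of the kind of mathematics such an error model contains (a drastically simplified single-channel toy model, not the paper's full equations): a gyro bias integrates into a tilt error, tilt couples gravity into a velocity error, and velocity error integrates into a position error.

```python
def propagate_ins_errors(gyro_bias, accel_bias, g=9.81, dt=0.01, t_end=60.0):
    """Euler-integrate a toy single-channel INS error model:
    tilt error <- gyro bias; velocity error <- g * tilt + accel bias;
    position error <- velocity error."""
    psi = dv = dp = 0.0
    for _ in range(int(t_end / dt)):
        psi += gyro_bias * dt           # tilt error integrates gyro bias
        dv += (g * psi + accel_bias) * dt  # tilt couples gravity into velocity error
        dp += dv * dt                   # position error integrates velocity error
    return psi, dv, dp

# A ~2 deg/h gyro bias (1e-5 rad/s) alone grows metres of position error in a minute
tilt, vel_err, pos_err = propagate_ins_errors(gyro_bias=1e-5, accel_bias=0.0)
```

The cubic-in-time growth of position error from a constant gyro bias is the classic result this toy model reproduces.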
Background: Exposure measurement error is a concern in long-term PM2.5 health studies using ambient concentrations as exposures. We assessed error magnitude by estimating calibration coefficients as the association between personal PM2.5 exposures from validation studies and typ...
Saha, Sourav; Mojumder, Satyajit; Mahboob, Monon; Islam, M. Zahabul
2016-07-01
Tungsten is a promising material with potential use as a battery anode, and tungsten nanowires are gaining attention from researchers all over the world for this wide field of application. In this paper, we investigated the effects of temperature and geometric parameters (diameter and aspect ratio) on the elastic properties of tungsten nanowires. Aspect ratios (length to diameter) considered are 8:1, 10:1, and 12:1, while the diameter of the nanowire is varied from 1-4 nm. For the 2 nm diameter sample (aspect ratio 10:1), temperature is varied (10 K ~ 1500 K) to observe the elastic behavior of the nanowire under uniaxial tensile loading. An EAM potential is used for the molecular dynamics simulations, and a constant strain rate of 10^9 s^-1 is applied to deform the nanowire. Elastic behavior is expressed through stress vs. strain plots. We also investigated the fracture mechanism of the nanowire and the radial distribution function. The investigation reveals peculiar behavior of tungsten nanowires at the nanoscale, with double peaks in the stress vs. strain diagram; necking before final fracture suggests that the actual elastic behavior of the material is successfully captured through atomistic modeling.
SIMulation of Medication Error induced by Clinical Trial drug labeling: the SIMME-CT study.
Dollinger, Cecile; Schwiertz, Vérane; Sarfati, Laura; Gourc-Berthod, Chloé; Guédat, Marie-Gabrielle; Alloux, Céline; Vantard, Nicolas; Gauthier, Noémie; He, Sophie; Kiouris, Elena; Caffin, Anne-Gaelle; Bernard, Delphine; Ranchon, Florence; Rioufol, Catherine
2016-06-01
To assess the impact of investigational drug labels on the risk of medication error in drug dispensing, a simulation-based learning program focusing on investigational drug dispensing was conducted. The study was undertaken in the Investigational Drugs Dispensing Unit of a University Hospital of Lyon, France. Sixty-three pharmacy workers (pharmacists, residents, technicians or students) were enrolled. Ten risk factors were selected concerning label information or the risk of confusion with another clinical trial. Each risk factor was scored independently out of 5: the higher the score, the greater the risk of error. From 400 labels analyzed, two groups were selected for the dispensing simulation: 27 labels with high risk (score ≥3) and 27 with low risk (score ≤2). Each question in the learning program was displayed as a simulated clinical trial prescription. Medication error was defined as at least one erroneous answer (i.e. error in drug dispensing). For each question, response times were collected. High-risk investigational drug labels correlated with medication error and slower response times: error rates were significantly higher (5.5-fold) for the high-risk series. Error frequency was not significantly affected by occupational category or experience in clinical trials. SIMME-CT is the first simulation-based learning tool to focus on investigational drug labels as a risk factor for medication error. SIMME-CT was also used as a training tool for staff involved in clinical research, to develop medication error risk awareness and to validate competence in continuing medical education. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
Pourtois, Gilles
2011-01-01
Advanced ERP topographic mapping techniques were used to study error monitoring functions in human adult participants, and test whether proactive attentional effects during the pre-response time period could later influence early error detection mechanisms (as measured by the ERN component) or not. Participants performed a speeded go/nogo task, and made a substantial number of false alarms that did not differ from correct hits as a function of behavioral speed or actual motor response. While errors clearly elicited an ERN component generated within the dACC following the onset of these incorrect responses, I also found that correct hits were associated with a different sequence of topographic events during the pre-response baseline time-period, relative to errors. A main topographic transition from occipital to posterior parietal regions (including primarily the precuneus) was evidenced for correct hits ~170-150 ms before the response, whereas this topographic change was markedly reduced for errors. The same topographic transition was found for correct hits that were eventually performed slower than either errors or fast (correct) hits, confirming the involvement of this distinctive posterior parietal activity in top-down attentional control rather than motor preparation. Control analyses further ensured that this pre-response topographic effect was not related to differences in stimulus processing. Furthermore, I found a reliable association between the magnitude of the ERN following errors and the duration of this differential precuneus activity during the pre-response baseline, suggesting a functional link between an anticipatory attentional control component subserved by the precuneus and early error detection mechanisms within the dACC. These results suggest reciprocal links between proactive attention control and decision making processes during error monitoring.
Khechai, Abdelhak; Tati, Abdelouahab; Guettala, Abdelhamid
2017-05-01
In this paper, an effort is made to understand the effects of geometric singularities on the load-bearing capacity and stress distribution in thin laminated plates. Composite plates with variously shaped cutouts are frequently used in both modern and classical aerospace, mechanical and civil engineering structures. A finite element investigation is undertaken to show the effect of geometric singularities on stress distribution. In this study, the stress concentration factors (SCFs) in cross- and angle-ply laminated as well as isotropic plates subjected to uniaxial loading are studied using a quadrilateral finite element with four nodes and thirty-two degrees of freedom per element. Varying parameters such as the cutout shape and hole size (a/b) are considered. The numerical results obtained with the present element compare favorably with those obtained using the finite element software FreeFem++ and with analytic findings published in the literature, which demonstrates the accuracy of the present element. FreeFem++ is open-source software based on the finite element method, which can help in studying and improving analyses of the stress distribution in composite plates with cutouts. The FreeFem++ and quadrilateral finite element formulations are given at the beginning of this paper. Finally, to show the effect of the fiber orientation angle and anisotropic modulus ratio on the SCF, a number of figures are given for various ratios (a/b).
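For context, the classical closed-form benchmark against which finite element SCF results are often checked is the Inglis solution for an elliptical hole in an infinite isotropic plate under remote uniaxial tension (a benchmark sketch, not the paper's laminated cases):

```python
def scf_elliptical_hole(a, b):
    """Inglis stress concentration factor Kt = 1 + 2a/b for an elliptical
    hole in an infinite isotropic plate, remote uniaxial tension applied
    perpendicular to the semi-axis a."""
    return 1.0 + 2.0 * a / b

kt_circle = scf_elliptical_hole(1.0, 1.0)   # circular hole: classic Kt = 3
kt_ellipse = scf_elliptical_hole(2.0, 1.0)  # sharper ellipse concentrates stress more
```

Finite plates, anisotropy and fiber orientation shift these values, which is exactly what the paper's finite element study quantifies.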
Study of numerical errors in direct numerical simulation and large eddy simulation
Institute of Scientific and Technical Information of China (English)
YANG Xiao-long; FU Song
2008-01-01
By comparing the energy spectrum and total kinetic energy, the effects of numerical errors (which arise from aliasing and discretization errors), subgrid-scale (SGS) models, and their interactions on direct numerical simulation (DNS) and large eddy simulation (LES) are investigated. Decaying isotropic turbulence is chosen as the test case. To simulate complex geometries, both the spectral method and Padé compact difference schemes are studied. The truncated Navier-Stokes (TNS) equation model with a Padé discrete filter is adopted as the SGS model. It is found that the discretization error plays a key role in DNS, and low-order difference schemes may be unsuitable. For LES, however, it is found that the SGS model can represent the effect of small scales on large scales and damp the numerical errors. Therefore, reasonable results can also be obtained with a low-order discretization scheme.
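The role of discretization error can be illustrated by comparing a spectral derivative with a second-order central difference on a smooth periodic field (an illustrative sketch, not the paper's DNS/LES setup):

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
u = np.sin(3 * x)                 # smooth periodic test field
exact = 3 * np.cos(3 * x)

# Spectral derivative: exact for resolved wavenumbers (machine-precision error)
k = 1j * np.fft.fftfreq(N, d=1.0 / N)
du_spec = np.real(np.fft.ifft(k * np.fft.fft(u)))

# Second-order central difference: carries O(dx^2) discretization error
dx = x[1] - x[0]
du_fd = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

err_spec = np.max(np.abs(du_spec - exact))
err_fd = np.max(np.abs(du_fd - exact))
```

The orders-of-magnitude gap between the two errors is the discretization error the abstract says dominates in DNS; in LES the SGS model partially absorbs it.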
Systematic Error Study for ALICE charged-jet v2 Measurement
Energy Technology Data Exchange (ETDEWEB)
Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-07-18
We study the treatment of systematic errors in the determination of v_2 for charged jets in √s_NN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ² according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ² and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
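The re-casting step can be sketched as follows: build a covariance matrix as the sum of a diagonal statistical part, a fully correlated systematic part, and a shape part, then evaluate χ² against the null hypothesis. This is a simplified sketch, not the ALICE prescription; in particular, treating the shape component as point-to-point independent here is an assumption for illustration.

```python
import numpy as np

def chi2_null(values, stat_err, corr_err, shape_err):
    """Chi-square of data vs a null (zero) hypothesis, with systematics
    recast as an equivalent covariance matrix C = C_stat + C_corr + C_shape.
    The correlated component is taken as 100% correlated across points;
    the shape component is (simplifyingly) taken as diagonal."""
    v = np.asarray(values, float)
    c_stat = np.diag(np.asarray(stat_err, float) ** 2)
    c_corr = np.outer(corr_err, corr_err)   # fully correlated systematic
    c_shape = np.diag(np.asarray(shape_err, float) ** 2)
    cov = c_stat + c_corr + c_shape
    return float(v @ np.linalg.solve(cov, v))

# Two points with equal values: the correlated term inflates the covariance
chi2 = chi2_null([1.0, 1.0], [1.0, 1.0], [1.0, 1.0], [0.0, 0.0])
```

With only diagonal errors, this reduces to the familiar sum of (value/error)² terms.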
Adams, D C; Rohlf, F J
2000-04-11
Ecological character displacement describes a pattern where morphological differences between sympatric species are enhanced through interspecific competition. Although widely considered a pervasive force in evolutionary ecology, few clear-cut examples have been documented. Here we report a case of ecological character displacement between two salamander species, Plethodon cinereus and Plethodon hoffmani. Morphology was quantified by using linear measurements and landmark-based geometric morphometric methods for specimens from allopatric and sympatric populations from two geographic transects in south-central Pennsylvania, and stomach contents were assayed to quantify food resource use. Morphological variation was also assessed in 13 additional allopatric populations. In both transects, we found significant morphological differentiation between sympatric populations that was associated with a reduction in prey consumption in sympatry and a segregation of prey according to prey size. No trophic morphological or resource use differences were found between allopatric populations, and comparisons of sympatric populations with randomly paired allopatric populations revealed that the observed sympatric morphological differentiation was greater than expected by chance. The major trophic anatomical differences between sympatric populations relates to functional and biomechanical differences in jaw closure: sympatric P. hoffmani have a faster closing jaw, whereas sympatric P. cinereus have a slower, stronger jaw. Because salamanders immobilize prey of different sizes in different ways, and because the observed sympatric biomechanical differences in jaw closure are associated with the differences in prey consumption, the observed character displacement has a functional ecological correlate, and we can link changes in form with changes in function in this apparent example of character displacement.
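Landmark-based geometric morphometrics of the kind used here typically begins with Procrustes superimposition, which removes position, scale and orientation before comparing shapes. A minimal sketch for 2D landmark configurations (illustrative, not the authors' exact pipeline):

```python
import numpy as np

def procrustes_align(ref, target):
    """Align `target` landmarks (n x 2) to `ref` by removing translation,
    scale (unit centroid size) and rotation (ordinary Procrustes)."""
    a = ref - ref.mean(axis=0)
    b = target - target.mean(axis=0)
    a = a / np.linalg.norm(a)       # unit centroid size
    b = b / np.linalg.norm(b)
    u, _, vt = np.linalg.svd(b.T @ a)
    r = u @ vt                      # optimal rotation (may include reflection)
    return b @ r

# A rotated, scaled, translated copy of a triangle aligns back onto it
ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
theta = 0.5
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
target = 2.0 * ref @ rot.T + np.array([3.0, -1.0])
aligned = procrustes_align(ref, target)
```

Residual coordinates after superimposition are the shape variables on which the sympatric/allopatric comparisons are then run.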
Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants
Directory of Open Access Journals (Sweden)
Paweł Drapikowski
2016-06-01
This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, and volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas classic methods yield only an approximation. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated.
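Computing an S/V ratio from a scanned model reduces to summing triangle areas and signed tetrahedron volumes over the mesh. A minimal sketch, assuming a closed, consistently oriented triangle mesh (illustrative, not the paper's software):

```python
def _cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def _dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def _sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def mesh_area_volume(vertices, faces):
    """Surface area and enclosed volume of a closed triangle mesh; the
    volume comes from the divergence theorem applied to signed tetrahedra
    spanned by each face and the origin (abs() guards a flipped mesh)."""
    area = vol = 0.0
    for i, j, k in faces:
        a, b, c = vertices[i], vertices[j], vertices[k]
        n = _cross(_sub(b, a), _sub(c, a))
        area += 0.5 * _dot(n, n) ** 0.5
        vol += _dot(a, _cross(b, c)) / 6.0
    return area, abs(vol)

# Unit tetrahedron: volume 1/6, so S/V follows directly from the mesh
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
area, volume = mesh_area_volume(verts, faces)
```

On a real scan the same two sums run over thousands of triangles, so mesh resolution directly bounds the accuracy of the S/V estimate.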
Effects of Lexico-syntactic Errors on Teaching Materials: A Study of Textbooks Written by Nigerians
Directory of Open Access Journals (Sweden)
Peace Chinwendu Israel
2014-01-01
This study examined lexico-syntactic errors in selected textbooks written by Nigerians. Our focus was on educated bilinguals (acrolect) who acquired their primary, secondary and tertiary education in Nigeria, and the selected textbooks were published by vanity publishers/presses. The participants (authors) cut across the three major ethnic groups in Nigeria (Hausa, Igbo and Yoruba), and the selection of textbooks covered the major disciplines of study. We adopted a descriptive research design and specifically employed the survey method to accomplish the purpose of our exploratory research. The lexico-syntactic errors in the selected textbooks were identified and classified into various categories. These errors were not different from those identified over the years in students' essays and exam scripts. This buttresses our argument that students are merely the conveyor belt of errors contained in their teaching materials, and that students' lexico-syntactic errors can be analysed in tandem with the errors contained in the materials used in teaching.
Prediction error in reinforcement learning: a meta-analysis of neuroimaging studies.
Garrison, Jane; Erdeniz, Burak; Done, John
2013-08-01
Activation likelihood estimation (ALE) meta-analyses were used to examine the neural correlates of prediction error in reinforcement learning. The findings are interpreted in the light of current computational models of learning and action selection. In this context, particular consideration is given to the comparison of activation patterns from studies using instrumental and Pavlovian conditioning, and where reinforcement involved rewarding or punishing feedback. The striatum was the key brain area encoding for prediction error, with activity encompassing dorsal and ventral regions for instrumental and Pavlovian reinforcement alike, a finding which challenges the functional separation of the striatum into a dorsal 'actor' and a ventral 'critic'. Prediction error activity was further observed in diverse areas of predominantly anterior cerebral cortex including medial prefrontal cortex and anterior cingulate cortex. Distinct patterns of prediction error activity were found for studies using rewarding and aversive reinforcers; reward prediction errors were observed primarily in the striatum while aversive prediction errors were found more widely including insula and habenula.
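The computational quantity at stake in these studies is the temporal-difference reward prediction error, δ = r + γV(s') − V(s), which standard models assume striatal activity encodes. A minimal TD(0) sketch (the state names are hypothetical):

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) learning step. The prediction error `delta` is the
    signal the meta-analysis localizes to the striatum; it drives the
    value update for the current state."""
    delta = r + gamma * V[s_next] - V[s]  # reward prediction error
    V[s] += alpha * delta
    return delta

# An unexpected reward after the cue produces a positive prediction error
V = {"cue": 0.0, "terminal": 0.0}
delta = td0_update(V, "cue", 1.0, "terminal")
```

Instrumental and Pavlovian paradigms differ in how actions enter the update, but both rest on this same error term.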
Predictive error detection in pianists: A combined ERP and motion capture study
Directory of Open Access Journals (Sweden)
Clemens eMaidhof
2013-09-01
Performing a piece of music involves the interplay of several cognitive and motor processes and requires extensive training to achieve a high skill level. However, even professional musicians commit errors occasionally. Previous event-related potential (ERP) studies have investigated the neurophysiological correlates of pitch errors during piano performance, and reported a pre-error negativity occurring approximately 70-100 ms before the error had been committed and became audible. It was assumed that this pre-error negativity reflects predictive control processes that compare predicted consequences with actual consequences of one's own actions. However, in previous investigations, correct and incorrect pitch events were confounded by their different tempi, and no data about the underlying movements were available. In the present study, we exploratively recorded the ERPs and 3D movement data of pianists' fingers simultaneously while they performed fingering exercises from memory. Results showed a pre-error negativity for incorrect keystrokes when both correct and incorrect keystrokes were performed with comparable tempi. Interestingly, even correct notes immediately preceding erroneous keystrokes elicited a very similar negativity. In addition, we explored the possibility of computing ERPs time-locked to a kinematic landmark in the finger motion trajectories, defined by when a finger makes initial contact with the key surface, that is, at the onset of tactile feedback. Results suggest that incorrect notes elicited a small difference after the onset of tactile feedback, whereas correct notes preceding incorrect ones elicited negativity before the onset of tactile feedback. The results tentatively suggest that tactile feedback plays an important role in error monitoring during piano performance, because the comparison between predicted and actual sensory (tactile) feedback may provide the information necessary for the detection of an
Pragmatic geometric model evaluation
Pamer, Robert
2015-04-01
Quantification of subsurface model reliability is mathematically and technically demanding, as there are many different sources of uncertainty and some factors can be assessed only subjectively. For many practical applications in industry or risk assessment (e.g. geothermal drilling), a quantitative estimate of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models based upon typical geological survey organization (GSO) data such as geological maps, borehole data and conceptually driven construction of subsurface elements (e.g. fault networks). Within the context of the trans-European project "GeoMol", uncertainty analysis also has to be very pragmatic because of differing data rights, data policies and modelling software between the project partners. In a case study, a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step, several models of the same volume of interest have been calculated by successively omitting more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail, and therefore structural complexity. This gives a measure of the structural significance of each data set in space, and as a consequence areas of geometric complexity are identified. These areas are usually very data-sensitive, hence geometric variability between individual data points in these areas is higher than in areas of low structural complexity. Instead of calculating a multitude of different models by varying some input data or parameters, as is done in Monte-Carlo simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to
Reducing medication errors and increasing patient safety: case studies in clinical pharmacology.
Benjamin, David M
2003-07-01
Today, reducing medication errors and improving patient safety have become common topics of discussion for the president of the United States, federal and state legislators, the insurance industry, pharmaceutical companies, health care professionals, and patients. But this is not news to clinical pharmacologists. Improving the judicious use of medications and minimizing adverse drug reactions have always been key areas of research and study for those working in clinical pharmacology. However, added to the older terms of adverse drug reactions and rational therapeutics, the now politically correct expression of medication error has emerged. Focusing on the word error has drawn attention to "prevention" and what can be done to minimize mistakes and improve patient safety. Webster's New Collegiate Dictionary has several definitions of error, but the one that seems most appropriate in the context of medication errors is "an act that through ignorance, deficiency, or accident departs from or fails to achieve what should be done." What should be done is generally known as "the five rights": the right drug, right dose, right route, right time, and right patient. One can make an error of omission (failure to act correctly) or an error of commission (acted incorrectly). This article summarizes what is currently known about medication errors and translates the information into case studies illustrating common scenarios leading to medication errors. Each case is analyzed to provide insight into how the medication error could have been prevented. "System errors" are described, and the application of failure mode effect analysis (FMEA) is presented to determine the part of the "safety net" that failed. Examples of reengineering the system to make it more "error proof" are presented. An error can be prevented. However, the practice of medicine, pharmacy, and nursing in the hospital setting is very complicated, and so many steps occur from "pen to patient" that there
Testing algebraic geometric codes
Institute of Scientific and Technical Information of China (English)
CHEN Hao
2009-01-01
Property testing was initially studied from various motivations in the 1990s. A code C ⊆ GF(r)^n is locally testable if there is a randomized algorithm which can distinguish, with high probability, the codewords from a vector essentially far from the code by accessing only a very small (typically constant) number of the vector's coordinates. The problem of testing codes was first studied by Blum, Luby and Rubinfeld and is closely related to probabilistically checkable proofs (PCPs). How to characterize locally testable codes is a complex and challenging problem. Local tests have been studied for Reed-Solomon (RS), Reed-Muller (RM), cyclic, dual of BCH, and the trace subcode of algebraic geometric codes. In this paper we give testers for algebraic geometric codes with linear parameters (as functions of dimensions). We also give a moderate condition under which the family of algebraic geometric codes cannot be locally testable.
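The Blum-Luby-Rubinfeld test cited above is the prototypical local test: each trial queries the function at only three correlated points and checks additivity. A sketch over GF(q) with illustrative parameters (this demonstrates the testing idea, not the paper's testers for algebraic geometric codes):

```python
import random

random.seed(0)  # deterministic for the example

def blr_linearity_test(f, n, q, trials=200):
    """Blum-Luby-Rubinfeld local test: f : GF(q)^n -> GF(q) is accepted
    iff f(x) + f(y) == f(x + y) holds on `trials` random pairs.
    Each trial reads f at only three points."""
    for _ in range(trials):
        x = [random.randrange(q) for _ in range(n)]
        y = [random.randrange(q) for _ in range(n)]
        xy = [(a + b) % q for a, b in zip(x, y)]
        if (f(x) + f(y)) % q != f(xy) % q:
            return False  # reject: a violated triple was found
    return True  # accept: f is (with high probability) close to linear

q = 7
linear = lambda v: sum(3 * a for a in v) % q  # a GF(7)-linear map: passes
```

Functions far from linear violate the check on a constant fraction of triples, so a few hundred trials reject them with overwhelming probability.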
Transmuted Complementary Weibull Geometric Distribution
Directory of Open Access Journals (Sweden)
Ahmed Z. Afify
2014-12-01
This paper provides a new generalization of the complementary Weibull geometric distribution introduced by Tojeiro et al. (2014), using the quadratic rank transmutation map studied by Shaw and Buckley (2007). The new distribution is referred to as the transmuted complementary Weibull geometric distribution (TCWGD). The TCWG distribution includes as special cases the complementary Weibull geometric distribution (CWGD), the complementary exponential geometric distribution (CEGD), the Weibull distribution (WD) and the exponential distribution (ED). Various structural properties of the new distribution, including moments, quantiles, the moment generating function and the Rényi entropy, are derived. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to compare the flexibility of the transmuted version versus the complementary Weibull geometric distribution.
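The quadratic rank transmutation map itself is simple: given a base CDF F, the transmuted CDF is G(x) = (1 + λ)F(x) − λF(x)² with |λ| ≤ 1. The sketch below uses a plain Weibull base as a stand-in; the actual TCWGD base is the complementary Weibull geometric CDF, which carries an extra geometric parameter.

```python
import math

def weibull_cdf(x, shape=2.0, scale=1.0):
    """Base Weibull CDF (a stand-in for the CWG base distribution)."""
    return 1.0 - math.exp(-((x / scale) ** shape)) if x > 0 else 0.0

def transmuted_cdf(F, x, lam):
    """Quadratic rank transmutation map (Shaw and Buckley):
    G(x) = (1 + lam) * F(x) - lam * F(x)^2, with |lam| <= 1."""
    fx = F(x)
    return (1.0 + lam) * fx - lam * fx * fx
```

Setting λ = 0 recovers the base distribution, and G still runs from 0 to 1, which is why the map yields a valid one-parameter generalization.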
Directory of Open Access Journals (Sweden)
Bao-Gang Hu
2016-02-01
In this work, we propose a new approach for deriving the bounds between entropy and error from a joint distribution through an optimization means. The specific case study is given on binary classifications. Two basic types of classification errors are investigated, namely the Bayesian and non-Bayesian errors; non-Bayesian errors are considered because most classifiers result in non-Bayesian solutions. For both types of errors, we derive closed-form relations between each bound and the error components. When Fano's lower bound in a diagram of "Error Probability vs. Conditional Entropy" is realized based on the approach, its interpretations are enlarged by including non-Bayesian errors and the two situations along with independence properties of the variables. A new upper bound for the Bayesian error is derived with respect to the minimum prior probability, which is generally tighter than Kovalevskij's upper bound.
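For binary classification, Fano's inequality bounds the conditional entropy by the binary entropy of the error probability, H(X|Y) ≤ H(Pe), so a given conditional entropy implies a lower bound on Pe. A sketch inverting that bound numerically (illustrative, Bayesian-error case only):

```python
import math

def binary_entropy(p):
    """H(p) in bits; H(0) = H(1) = 0 by continuity."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def fano_error_lower_bound(h_cond, tol=1e-9):
    """Smallest error probability e in [0, 0.5] with H(e) >= h_cond.
    Since H is increasing on [0, 0.5], bisection suffices: Fano's
    inequality H(X|Y) <= H(Pe) then forces Pe >= this value."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_entropy(mid) < h_cond:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

At H(X|Y) = 1 bit the bound saturates at Pe = 0.5, while H(X|Y) = 0 permits error-free classification, the two corners of the error-entropy diagram discussed in the paper.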
National Research Council Canada - National Science Library
Xiao, Yi; Ma, Feng; Lv, Yixuan; Cai, Gui; Teng, Peng; Xu, FengGang; Chen, Shanguang
2015-01-01
.... In this study, we examined how error-related negativity (ERN) of a four-choice reaction time task was reduced in the mental fatigue condition and investigated the role of sustained attention in error processing...
Lu, Xiaolin
2005-07-01
This thesis focuses on the study of the different geometrically confined states of polyacrylamide (PAL) in bulk film, single chain globules, and thin films. Thermal analysis, spectroscopic study, and morphological investigation were carried out. The main contribution of this thesis is a better understanding of the glass transition (T_g) behavior of polymers. Although the glass transition is a well known phenomenon for liquids with strong covalently bonded structures, and is especially noteworthy for amorphous polymers, understanding the glass transition still remains one of the most intriguing puzzles in condensed matter physics. The solution of the glass transition puzzle will ultimately influence different fields in polymer science, particularly biophysics and biochemistry. Our approach to this complicated problem is to examine the glass transition behavior of polymer chains in 3-dimensional confinement for single-molecule single chain globules, 1-dimensional confinement for polymer thin films, and 0-dimensional confinement for the bulk-state polymer. We found that the glass transition temperature of a polymer depends on several factors, such as the inter-chain interlock entanglement, the inter-chain molecular interactions, the intra-chain cohesional entanglement, and the local chain orientation and conformational entropy. These factors have been systematically investigated by carefully preparing polymer samples in different confined states. The main conclusion is that, although the glass transition is a non-equilibrium dynamic property, the true glass transition can be reached when polymer chains are free of inter-chain entanglement. A good example, illustrated in this thesis, is the glass transition behavior of well-annealed single chain globules. PAL single chain globules are prepared by spray drying from dilute solution. The size and morphology of the
Basso, A; Corno, M; Marangolo, P
1996-02-01
Impaired naming is a common finding in aphasia, but while it is known that naming errors diminish over time, longitudinal studies are rare. In this retrospective study, the naming errors of 84 vascular aphasic patients are studied. Errors in oral and written confrontation naming tasks in two successive evaluations are tabulated and coded into one of 10 error types: No Response, Word-Finding Difficulty, Semantic Paraphasia, Unrelated Paraphasia, Phonemic/Orthographic Paraphasia, Neologism, Paraphasic Jargon, Phonemic/Neologistic Jargon, Stereotypy, and Other. All analyses were carried out on the difference scores, that is, the score in the second examination minus the score in the first examination. Results indicate that there is a significant decrease of No Responses (in oral and written naming) and Neologisms (in oral naming), and a significant increase of Orthographic Paraphasias in written naming. Moreover, the difference score for Phonemic/Orthographic Paraphasias was higher in written than in oral naming. The difference scores for the other types of error were not statistically significant.
Geometric constraint solving with geometric transformation
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
This paper proposes two algorithms for solving geometric constraint systems. The first algorithm is for constraint systems without loops and has linear complexity. The second algorithm can solve constraint systems with loops; it is of quadratic complexity and is complete for constraint problems about simple polygons. The key is to combine the idea of graph-based methods for geometric constraint solving with geometric transformations coming from rule-based methods.
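For loop-free systems, the linear-time idea can be sketched as sequential placement: each new point is fixed by two distance constraints to already-placed points, i.e. a circle-circle intersection (an illustrative sketch of constraint propagation, not the paper's algorithm):

```python
import math

def place_point(p1, r1, p2, r2, sign=1):
    """Locate a point at distance r1 from p1 and r2 from p2 (circle-circle
    intersection); `sign` selects one of the two mirror solutions.
    Applying this placement point-by-point solves loop-free distance
    constraint systems in linear time."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)  # offset from p1 along the base line
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))   # offset perpendicular to the base line
    mx, my = p1[0] + a * dx / d, p1[1] + a * dy / d
    return (mx - sign * h * dy / d, my + sign * h * dx / d)

# Equilateral triangle: A fixed, B placed on the x-axis, C constrained to both
A, B = (0.0, 0.0), (1.0, 0.0)
C = place_point(A, 1.0, B, 1.0)
```

Systems with loops cannot be ordered this way, which is where the paper's second, transformation-based algorithm comes in.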
Nielsen, S P; Xie, X; Bärenholdt, O
2001-01-01
It is well known among clinicians that Colles fracture patients may have normal projected axial bone mineral density and that bone mass is not synonymous with bone strength. The aim of this work was to investigate whether cross-sectional properties of the distal radius in female patients with recent Colles fracture differ from those of a younger group of normal women without fracture. It was hypothesized that patients with Colles fracture had petite distal radii and that cortical thinning and reduced cortical and trabecular volumetric density are dominant features of this fracture type. We used a multilayer high-precision peripheral quantitative computed tomography (pQCT) device with a long-term precision error of 0.1% for a dedicated phantom during the measurement period (152 d). Clinical measurements were made at an ultradistal site rich in trabecular bone and a less ultradistal site rich in cortical bone. The results show that the following pQCT variables were significantly reduced in the nonfractured radius of the Colles fracture cases: mean ultradistal trabecular volumetric density, mean ultradistal and distal cortical volumetric density, and mean ultradistal and distal cortical thickness (significant differences). The outer cortical diameter, cross-sectional bone area, and cortical bending moment of inertia were not statistically different in the two groups. Thus, it would appear that Colles fracture cases did not have petite distal radii. The results suggest that the deforming force of Colles fracture has a transaxial direction (fall on outstretched arm), resulting in a crush fracture, and that it is not a bending force. We suggest that Colles fracture occurs as a result of the combined effect of a fall on the outstretched arm, low trabecular and cortical volumetric bone density, and reduced cortical thickness.
On the importance of Task 1 and error performance measures in PRP dual-task studies
Directory of Open Access Journals (Sweden)
Tilo eStrobach
2015-04-01
The Psychological Refractory Period (PRP) paradigm is a dominant research tool in the literature on dual-task performance. In this paradigm a first and second component task (i.e., Task 1 and Task 2) are presented with variable stimulus onset asynchronies (SOAs) and priority given to performing Task 1. The main indicator of dual-task impairment in PRP situations is an increasing Task 2 RT with decreasing SOA. This impairment is typically explained, in the context of the prominent central bottleneck theory, by some task components being processed strictly sequentially. This assumption could implicitly suggest that processes of Task 1 are unaffected by Task 2 and bottleneck processing, i.e. decreasing SOAs do not increase RTs and error rates of the first task. The aim of the present review is to assess whether PRP dual-task studies included both RT and error data presentations and statistical analyses, and whether studies including both data types (i.e., RTs and error rates) show data consistent with this assumption (i.e., decreasing SOAs and unaffected RTs and/or error rates in Task 1). This review demonstrates that, in contrast to RT presentations and analyses, error data are underrepresented in a substantial number of studies. Furthermore, a substantial number of studies with RT and error data showed a statistically significant impairment of Task 1 performance with decreasing SOA. Thus, these studies produced data that are not consistent with the strong assumption that processes of Task 1 are unaffected by Task 2 and bottleneck processing in PRP dual-task situations. This calls for a more careful report and analysis of Task 1 performance in PRP studies, and for a more careful consideration of theories proposing additions to the bottleneck assumption that are sufficiently general to explain both Task 1 and Task 2 effects.
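The central bottleneck account summarized above can be sketched numerically: Task 2's central stage cannot start before Task 1's central stage finishes, so Task 2 RT rises with slope -1 as SOA shrinks and flattens at long SOAs. The stage durations below are arbitrary illustrative values, not estimates from any reviewed study.

```python
def rt2_bottleneck(soa, a1=100, b1=150, a2=100, b2=150, c2=80):
    """Task 2 RT (ms) under a strict central bottleneck.
    a = perceptual stage, b = central stage, c2 = Task 2 motor stage.
    Task 2's central stage waits for Task 1's central stage to finish."""
    central_start = max(soa + a2, a1 + b1)   # absolute time central-2 can begin
    return central_start - soa + b2 + c2     # RT measured from Task 2 onset

# Classic PRP effect: RT2 shrinks as SOA grows, then flattens at a2 + b2 + c2.
rts = {soa: rt2_bottleneck(soa) for soa in (50, 150, 300, 600)}
```

Note the review's point: this simple model leaves Task 1 RTs untouched by SOA, which is exactly the assumption the surveyed Task 1 data often contradict.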
Geometric unsharpness calculations
Energy Technology Data Exchange (ETDEWEB)
Anderson, D.J. [International Training and Education Group (INTEG), Oakville, Ontario (Canada)
2008-07-15
Most radiographers perform geometric unsharpness calculations with a mathematical formula. However, the majority of codes and standards refer to the use of a nomograph for this calculation. On first review, the use of a nomograph appears more complicated, but with a few minutes of study and practice it can be just as effective. A review of this article should provide enlightenment. (author)
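The formula in question is the standard similar-triangles relation Ug = F·b/a, where F is the effective focal-spot size, a the source-to-object distance and b the object-to-detector distance; a nomograph encodes the same relation graphically. A minimal sketch of the formula (the example values are invented for illustration):

```python
def geometric_unsharpness(focal_spot, source_to_object, object_to_detector):
    """Ug = F * b / a -- penumbra width from similar triangles."""
    return focal_spot * object_to_detector / source_to_object

# e.g. 2 mm focal spot, 600 mm source-to-object, 30 mm object-to-detector
ug = geometric_unsharpness(2.0, 600.0, 30.0)  # -> 0.1 mm
```

The relation makes the practical trade-off visible: unsharpness falls linearly as the object moves closer to the detector or the source moves further away.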
Lim, Kok Seng
2010-01-01
Introduction: This study aimed to investigate the errors made by Form 2 male students in simplifying algebraic expressions. Method: A total of 265 Form 2 (Grade 7) male students were selected for this study. Ten high-, medium- and low-ability students in each group were selected for the interviews. Forty items were administered to the respondents to…
Word Order Errors. Swedish-English Contrastive Studies, Report No. 2.
Carlbom, Ulla
The materials employed in this investigation were 769 translations from Swedish into English made by Swedish university students studying English. The principal objective was to study aspects of learner behavior (in treating English word order) to obtain information about the types of errors Swedish students commit in English production and…
James, R.; Brownlow, J. D.
1985-01-01
A study is performed under NASA contract to evaluate data from an AN/FPS-16 radar installed for support of flight programs at Dryden Flight Research Facility of NASA Ames Research Center. The purpose of this study is to provide information necessary for improving post-flight data reduction and knowledge of accuracy of derived radar quantities. Tracking data from six flights are analyzed. Noise and bias errors in raw tracking data are determined for each of the flights. A discussion of an altitude bias error during all of the tracking missions is included. This bias error is defined by utilizing pressure altitude measurements made during survey flights. Four separate filtering methods, representative of the most widely used optimal estimation techniques for enhancement of radar tracking data, are analyzed for suitability in processing both real-time and post-mission data. Additional information regarding the radar and its measurements, including typical noise and bias errors in the range and angle measurements, is also presented. This report is in two parts. This is part 2, a discussion of the modeling of propagation path errors.
Samsiah, A; Othman, Noordin; Jamshed, Shazia; Hassali, Mohamed Azmi
2016-01-01
To explore and understand participants' perceptions and attitudes towards the reporting of medication errors (MEs). A qualitative study using in-depth interviews of 31 healthcare practitioners from nine publicly funded, primary care clinics in three states in peninsular Malaysia was conducted for this study. The participants included family medicine specialists, doctors, pharmacists, pharmacist assistants, nurses and assistant medical officers. The interviews were audiotaped and transcribed verbatim. Analysis of the data was guided by the framework approach. Six themes and 28 codes were identified. Despite the availability of a reporting system, most of the participants agreed that MEs were underreported. The nature of the error plays an important role in determining the reporting. The reporting system, organisational factors, provider factors, reporter's burden and benefit of reporting also were identified. Healthcare practitioners in primary care clinics understood the importance of reporting MEs to improve patient safety. Their perceptions and attitudes towards reporting of MEs were influenced by many factors which affect the decision-making process of whether or not to report. Although the process is complex, it primarily is determined by the severity of the outcome of the errors. The participants voluntarily report the errors if they are familiar with the reporting system, what error to report, when to report and what form to use.
Bestvina, Mladen; Vogtmann, Karen
2014-01-01
Geometric group theory refers to the study of discrete groups using tools from topology, geometry, dynamics and analysis. The field is evolving very rapidly and the present volume provides an introduction to and overview of various topics which have played critical roles in this evolution. The book contains lecture notes from courses given at the Park City Math Institute on Geometric Group Theory. The institute consists of a set of intensive short courses offered by leaders in the field, designed to introduce students to exciting, current research in mathematics. These lectures do not duplicate standard courses available elsewhere. The courses begin at an introductory level suitable for graduate students and lead up to currently active topics of research. The articles in this volume include introductions to CAT(0) cube complexes and groups, to modern small cancellation theory, to isometry groups of general CAT(0) spaces, and a discussion of nilpotent genus in the context of mapping class groups and CAT(0) gro...
Directory of Open Access Journals (Sweden)
Saadat Delfani
2012-06-01
Medication errors account for about 78% of serious medical errors in the intensive care unit (ICU). So far no study has been performed in Iran to evaluate all types of possible medication errors in the ICU. The objective of this study was therefore to reveal the frequency, type and consequences of all types of errors in an ICU of a large teaching hospital. A prospective observational study was conducted in an 11-bed internal ICU of a university hospital in Shiraz. In each shift, all processes performed on one selected patient were observed and recorded by a trained pharmacist. The observer would intervene only if a medication error would cause substantial harm. The data were evaluated and then entered in a form designed for this purpose. The study continued for 38 shifts. During this period, a total of 442 errors per 5785 opportunities for error (7.6%) occurred. Of those, 9.8% were administration errors, 6.8% prescribing errors, 3.3% transcription errors and 2.3% dispensing errors. In total 45 interventions were made; 40% of the interventions resulted in the correction of errors. The most common causes of errors were observed to be rule violations, slips and memory lapses, and lack of drug knowledge. According to our results, the rate of errors is alarming and requires implementation of a serious solution. Since our system lacks a well-organized detection and reporting mechanism, there is no means of preventing errors in the first place. Hence, as the first step we must implement a system in which errors are routinely detected and reported.
Nursing student medication errors: a case study using root cause analysis.
Dolansky, Mary A; Druschel, Kalina; Helba, Maura; Courtney, Kathleen
2013-01-01
Root cause analysis (RCA) has been used widely as a means to understand factors contributing to medication errors and to move beyond blame of an individual to identify system factors that contribute to these errors. Nursing schools respond to student medication errors seriously, and many choose to discipline the student without taking into consideration both personal and system factors. The purpose of this article is to present a case study that highlights an undergraduate nursing student medication error and the application of an RCA. The use of this method was a direct result of our nursing program implementation of the Quality and Safety Education for Nurses competencies. The RCA included a critical evaluation of the incident and a review of the literature. Factors identified were environmental, personal, unit communication and culture, and education. The process of using the RCA provided an opportunity to identify improvement strategies to prevent future errors. The use of the RCA promotes a fair and just culture in nursing education and helps nursing students and faculty identify problems and solutions both in their performance and the systems in which they work.
Pasler, Marlies; Michel, Kilian; Marrazzo, Livia; Obenland, Michael; Pallotta, Stefania; Björnsgard, Mari; Lutterbach, Johannes
2017-09-01
The purpose of this study was to characterize a new single large-area ionization chamber, the integral quality monitor system (iRT, Germany), for online and real-time beam monitoring. Signal stability, monitor unit (MU) linearity and dose rate dependence were investigated for static and arc deliveries and compared to independent ionization chamber measurements. The dose verification capability of the transmission detector system was evaluated by comparing calculated and measured detector signals for 15 volumetric modulated arc therapy plans. The error detection sensitivity was tested by introducing MLC position and linac output errors. Deviations in dose distributions between the original and error-induced plans were compared in terms of detector signal deviation, dose-volume histogram (DVH) metrics and 2D γ-evaluation (2%/2 mm and 3%/3 mm). The detector signal is linearly dependent on linac output and shows negligible dose rate dependence. A good correlation between DVH metrics and detector signal deviation was found (e.g. PTV D_mean: R^2 = 0.97). Positional MLC errors of 1 mm and errors in linac output of 2% were identified with the transmission detector system. The extensive tests performed in this investigation show that the new transmission detector provides a stable and sensitive cumulative signal output and is suitable for beam monitoring during patient treatment.
Experimental study of error sources in skin-friction balance measurements
Allen, J. M.
1977-01-01
An experimental study has been performed to determine potential error sources in skin-friction balance measurements. A floating-element balance, large enough to contain the instrumentation needed to systematically investigate these error sources, has been constructed and tested in the thick turbulent boundary layer on the sidewall of a large supersonic wind tunnel. Test variables include element-to-case misalignment, gap size, and Reynolds number. The effects of these variables on the friction, lip, and normal forces have been analyzed. It was found that larger gap sizes were preferable to smaller ones; that small element recession below the surrounding test surface produced errors comparable to the same amount of protrusion above the test surface; and that normal forces on the element were, in some cases, large compared to the friction force.
Error and jitter effect studies on the SLED for BEPCII-linac
Shi-Lun, Pei; Ou-Zheng, Xiao
2011-01-01
An RF pulse compressor is a device that converts a long RF pulse into a short one with a much higher peak RF magnitude. SLED can be regarded as the earliest RF pulse compressor used in large-scale linear accelerators. It has been widely studied around the world and applied in the BEPC and BEPCII linacs for many years. During routine operation, error and jitter effects deteriorate the SLED performance in either the amplitude or the phase of the output electromagnetic wave. The error effects mainly include the frequency drift induced by cooling-water temperature variation and the frequency/Q0/β imbalances between the two energy-storage cavities caused by mechanical fabrication or microwave tuning. The jitter effects refer to the PSK switching phase and time jitters. In this paper, we re-derive the generalized formulae for the conventional SLED used in the BEPCII linac. Finally, the error and jitter effects on the SLED performance are investigated.
DEFF Research Database (Denmark)
Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander
2013-01-01
of variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different...... models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic...
Paixao, Andre; Fontul, Simona; Salcedas, Tânia; Marques, Margarida
2017-04-01
It is known that locations in the track denoting sudden structural changes induce dynamic amplifications in the train-track interaction, thus leading to higher impact loads from trains, which in turn promote faster development of track defects and increase the degradation of components. Consequently, a reduction in the quality of service can be expected at such discontinuities in the track, inducing higher maintenance costs and decreasing the life-cycle of components. To find actual evidence of how track discontinuities influence the degradation of the geometric quality, a 50-km long railway section is used as a case study. The track geometry data obtained with a recording car is first characterized according to the European standard series EN 13848. Then, the results of successive surveys are analysed, making use of various tools such as the standard deviation with moving windows of different sizes and calculated degradation rates. The GPR data were also analysed at the locations corresponding to track discontinuities, aiming to better identify situations where sudden changes occur in either the structural characteristics or the track behaviour over the years. The results indicate that the geometric quality degrades faster at locations denoting discontinuities in the track, such as changes in track components, approaches to bridges, tunnels, etc. This behaviour suggests that these sites should be monitored more carefully in the scope of asset management activities in order to maximize the life-cycle of the track and its components. This work is a contribution to COST (European COoperation on Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar".
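The moving-window standard deviation and degradation-rate tools mentioned above can be sketched as follows. The window length and helper names are placeholders; EN 13848 prescribes the actual signals, wavelength bands and window definitions used in practice.

```python
import statistics

def moving_std(values, window):
    """Standard deviation over a sliding window along the track
    (EN 13848-style quality indicators use fixed-length windows,
    e.g. 200 m, over signals such as longitudinal level)."""
    return [statistics.pstdev(values[i:i + window])
            for i in range(len(values) - window + 1)]

def degradation_rate(std_survey1, std_survey2, dt_years):
    """Per-window change in the quality indicator between two
    successive recording-car surveys, per year."""
    return [(s2 - s1) / dt_years for s1, s2 in zip(std_survey1, std_survey2)]
```

Comparing the per-window rates against the known locations of discontinuities (bridge approaches, component changes, tunnels) is the kind of analysis the abstract describes.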
EXPLICIT ERROR ESTIMATES FOR MIXED AND NONCONFORMING FINITE ELEMENTS
Institute of Scientific and Technical Information of China (English)
Shipeng Mao; Zhong-Ci Shi
2009-01-01
In this paper, we study the explicit expressions of the constants in the error estimates of the lowest order mixed and nonconforming finite element methods. We start with an explicit relation between the error constant of the lowest order Raviart-Thomas interpolation error and the geometric characters of the triangle. This gives an explicit error constant of the lowest order mixed finite element method. Furthermore, similar results can be extended to the nonconforming P1 scheme based on its close connection with the lowest order Raviart-Thomas method. Meanwhile, such explicit a priori error estimates can be used as computable error bounds, which are also consistent with the maximal angle condition for the optimal error estimates of mixed and nonconforming finite element methods. Mathematics subject classification: 65N12, 65N15, 65N30, 65N50.
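A representative form of such a bound (illustrative notation, not the paper's exact constant): for the lowest-order Raviart-Thomas interpolation Π_K on a triangle K with diameter h_K and maximal angle θ_K,

```latex
\| \mathbf{u} - \Pi_K \mathbf{u} \|_{0,K} \;\le\; C(\theta_K)\, h_K\, |\mathbf{u}|_{1,K},
```

where the point of an *explicit* estimate is that C(θ_K) is a computable expression in the triangle's geometry rather than a generic constant, and it remains bounded as long as θ_K stays away from π, consistent with the maximal angle condition cited in the abstract.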
Generalized geometrically convex functions and inequalities.
Noor, Muhammad Aslam; Noor, Khalida Inayat; Safdar, Farhat
2017-01-01
In this paper, we introduce and study a new class of generalized functions, called generalized geometrically convex functions. We establish several basic inequalities related to generalized geometrically convex functions. We also derive several new inequalities of the Hermite-Hadamard type for generalized geometrically convex functions. Several special cases are discussed, which can be deduced from our main results.
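For orientation, the classical notion these results generalize: a function f : I ⊂ (0, ∞) → (0, ∞) is geometrically (GG-)convex if

```latex
f\big(x^{\lambda} y^{1-\lambda}\big) \;\le\; \big[f(x)\big]^{\lambda}\,\big[f(y)\big]^{1-\lambda},
\qquad x, y \in I,\ \lambda \in [0,1],
```

so that at λ = 1/2 one gets f(√(xy)) ≤ √(f(x) f(y)). Hermite-Hadamard-type results then sandwich an integral mean of f between such endpoint expressions; the paper's generalized class relaxes this definition further.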
Directory of Open Access Journals (Sweden)
Y. Zhao
2012-07-01
We analyze how biases of meteorological drivers impact the calculation of ecosystem CO_{2}, water and energy fluxes by models. To do so, we drive the same ecosystem model by meteorology from gridded products and by meteorology from local observation at eddy-covariance flux sites. The study is focused on six flux tower sites in France spanning a climate gradient of 7–14 °C annual mean surface air temperature and 600–1040 mm mean annual rainfall, with forest, grassland and cropland ecosystems. We evaluate the results of the ORCHIDEE process-based model driven by meteorology from four different analysis data sets against the same model driven by site-observed meteorology. The evaluation is decomposed into characteristic time scales. The main result is that there are significant differences in meteorology between analysis data sets and local observation. The phase of the seasonal cycle of air temperature, humidity and shortwave downward radiation is reproduced correctly by all meteorological models (average R^{2} = 0.90). At sites located at altitude, the misfit between meteorological drivers from analysis data sets and tower meteorology is the largest. We show that day-to-day variations in weather are not completely well reproduced by meteorological models, with R^{2} between analysis data sets and measured local meteorology going from 0.35 to 0.70. The bias of the meteorological drivers impacts the flux simulation by ORCHIDEE, and thus would have an effect on regional and global budgets. The forcing error, defined as the simulated flux difference resulting from prescribing modeled instead of observed local meteorology drivers to ORCHIDEE, is quantified for the six studied sites at different time scales. The magnitude of this forcing error is compared to that of the model error, defined as the modeled-minus-observed flux, thus containing uncertain parameterizations, parameter values, and initialization. The forcing
The northern European geoid: a case study on long-wavelength geoid errors
DEFF Research Database (Denmark)
Omang, O.C.D.; Forsberg, René
2002-01-01
The long-wavelength geoid errors on large-scale geoid solutions, and the use of modified kernels to mitigate these effects, are studied. The geoid around the Nordic area, from Greenland to the Ural mountains, is considered. The effect of including additional gravity data around the Nordic/Baltic ...
Effects of Lexico-Syntactic Errors on Teaching Materials: A Study of Textbooks Written by Nigerians
Israel, Peace Chinwendu
2014-01-01
This study examined lexico-syntactic errors in selected textbooks written by Nigerians. Our focus was on the educated bilinguals (acrolect) who acquired their primary, secondary and tertiary education in Nigeria and the selected textbooks were textbooks published by Vanity Publishers/Press. The participants (authors) cut across the three major…
Fault and Error Latency Under Real Workload: an Experimental Study. Ph.D. Thesis
Chillarege, Ram
1986-01-01
A practical methodology for the study of fault and error latency is demonstrated under a real workload. This is the first study that measures and quantifies the latency under real workload and fills a major gap in the current understanding of workload-failure relationships. The methodology is based on low level data gathered on a VAX 11/780 during the normal workload conditions of the installation. Fault occurrence is simulated on the data, and the error generation and discovery process is reconstructed to determine latency. The analysis proceeds to combine the low level activity data with high level machine performance data to yield a better understanding of the phenomena. A strong relationship exists between latency and workload and that relationship is quantified. The sampling and reconstruction techniques used are also validated. Error latency in the memory where the operating system resides was studied using data on the physical memory access. Fault latency in the paged section of memory was determined using data from physical memory scans. Error latency in the microcontrol store was studied using data on the microcode access and usage.
Developmental changes in error monitoring : An event-related potential study
Wiersema, Jan R.; van der Meere, Jacob J.; Roeyers, Herbert; Wiersema, R.J
2007-01-01
The aim of the study was to investigate the developmental trajectory of error monitoring. For this purpose, children (age 7-8), young adolescents (age 13-14) and adults (age 23-24) performed a Go/No-Go task and were compared on overt reaction time (RT) performance and on event-related potentials (ERPs).
Coincidence of Homophone Spelling Errors and Attention Problems in Schoolchildren: A Survey Study
Tsai, Li-Hui; Meng, Ling-Fu; Hung, Li-Yu; Chen, Hsin-Yu; Lu, Chiu-Ping
2011-01-01
This article examines the relationship between writing and attention problems and hypothesizes that homophone spelling errors coincide with attention deficits. We analyze specific types of attention deficits that may contribute to Attention Deficit Hyperactivity Disorder (ADHD); rather than studying ADHD, however, we focus on the inattention…
DEFF Research Database (Denmark)
Tybjærg-Hansen, Anne
2009-01-01
Within-person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of t...
Niemann, Dorothee; Bertsche, Astrid; Meyrath, David; Koepf, Ellen D; Traiser, Carolin; Seebald, Katja; Schmitt, Claus P; Hoffmann, Georg F; Haefeli, Walter E; Bertsche, Thilo
2015-01-01
To prevent medication errors in drug handling in a paediatric ward. One in five preventable adverse drug events in hospitalised children is caused by medication errors. Errors in drug prescription have been studied frequently, but data regarding drug handling, including drug preparation and administration, are scarce. A three-step intervention study including a monitoring procedure was used to detect and prevent medication errors in drug handling. After approval by the ethics committee, pharmacists monitored drug handling by nurses on an 18-bed paediatric ward in a university hospital prior to and following each intervention step. They also conducted a questionnaire survey aimed at identifying knowledge deficits. Each intervention step targeted different causes of errors: the handout mainly addressed knowledge deficits, the training course addressed errors caused by rule violations and slips, and the reference book addressed knowledge-, memory- and rule-based errors. The number of patients who were subjected to at least one medication error in drug handling decreased from 38/43 (88%) to 25/51 (49%) following the third intervention, and the overall frequency of errors decreased from 527 errors in 581 processes (91%) to 116/441 (26%). Issuing the handout reduced medication errors caused by knowledge deficits regarding, for instance, the correct 'volume of solvent for IV drugs' from 49% to 25%. Paediatric drug handling is prone to errors. A three-step intervention effectively decreased the high frequency of medication errors by addressing the diversity of their causes. Worldwide, nurses are in charge of drug handling, which constitutes an error-prone but often-neglected step in drug therapy. Detection and prevention of errors in daily routine is necessary for a safe and effective drug therapy. Our three-step intervention reduced errors and is suitable to be tested in other wards and settings. © 2014 John Wiley & Sons Ltd.
Yoshizaki, J.; Pollock, K.H.; Brownie, C.; Webster, R.A.
2009-01-01
Misidentification of animals is potentially important when naturally existing features (natural tags) are used to identify individual animals in a capture-recapture study. Photographic identification (photoID) typically uses photographic images of animals' naturally existing features as tags (photographic tags) and is subject to two main causes of identification error: those related to the quality of photographs (non-evolving natural tags) and those related to changes in natural marks (evolving natural tags). The conventional methods for analysis of capture-recapture data do not account for identification errors, and to do so requires a detailed understanding of the misidentification mechanism. Focusing on the situation where errors are due to evolving natural tags, we propose a misidentification mechanism and outline a framework for modeling the effect of misidentification in closed population studies. We introduce methods for estimating population size based on this model. Using a simulation study, we show that conventional estimators can seriously overestimate population size when errors due to misidentification are ignored, and that, in comparison, our new estimators have better properties except in cases with low capture probabilities.
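The direction of the bias is easy to see in the simplest two-sample setting, the Lincoln-Petersen estimator (a simpler case than the paper's closed-population models): misidentifying a recaptured animal as a new individual (a "ghost") lowers the recapture count m2 and inflates N_hat = n1·n2/m2. The numbers below are invented for illustration.

```python
def lincoln_petersen(n1, n2, m2):
    """Two-sample abundance estimate N_hat = n1 * n2 / m2,
    where n1 = marked on occasion 1, n2 = caught on occasion 2,
    m2 = recognized recaptures."""
    return n1 * n2 / m2

# 100 animals marked, 100 caught on occasion 2, 20 true recaptures.
true_est = lincoln_petersen(100, 100, 20)    # 500.0
# If 5 of the 20 recaptures are misread as new individuals (ghosts),
# the observed recapture count drops to 15 and the estimate inflates.
biased_est = lincoln_petersen(100, 100, 15)  # ~666.7
```

This is the overestimation mechanism the simulation study quantifies; the paper's estimators instead model the misidentification process explicitly.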
Sha, Tao; Huang, Yu-Qing; Cai, An-Ping; Huang, Cheng; Zhang, Ying; Chen, Ji-Yan; Zhou, Ying-Ling; Yu, Xue-Ju; Zhou, Dan; Tang, Song-Tao; Feng, Ying-Qing; Tan, Ning
This study aimed to determine whether different risk factors were associated with different types of left ventricular (LV) geometric abnormalities. This retrospective analysis included 2290 hypertensive participants without other cardiovascular disease or valve disease and with ejection fraction ≥50%. The type of LV geometric abnormality was defined on the basis of the new classification system. LV geometric abnormalities were detected in 1479 subjects (64.6%), with concentric LV remodeling being the most common abnormality (40.3%). Large waist circumference (WC) and neck circumference (NC) were positively associated with concentric LV remodeling, whereas body mass index (BMI) [odds ratio (OR) 0.89, 95% CI 0.85–0.92, P < 0.001] and systolic blood pressure (SBP) (OR 0.99, 95% CI 0.98–0.99, P = 0.018) were inversely associated with concentric abnormalities. SBP and age were positively associated with eccentric dilated left ventricular hypertrophy (LVH), while male sex was inversely associated with eccentric dilated LVH. Age was the strongest risk factor for eccentric dilated LVH (OR 1.05, 95% CI 1.03–1.07, P < 0.001). Age, NC, SBP, hyperuricemia, and alcohol use were positively associated with concentric LVH, whereas BMI (OR 0.95, 95% CI 0.90–0.99, P = 0.033) and male sex (OR 0.12, 95% CI 0.07–0.18, P < 0.001) were negatively associated with concentric LVH. The prevalence of hypertensive LV geometric abnormality in this rural area of Southern China was notably high. Compared with eccentric LV geometric abnormalities, more risk factors, including large WC and NC, age, SBP, hyperuricemia, alcohol use, BMI and gender, were associated with concentric LV geometric abnormalities. Copyright © 2017 Hellenic Society of Cardiology. Published by Elsevier B.V. All rights reserved.
Vineet S Agrawal; Sonali Kapoor; Dhvani Bhesania; Chintul Shah
2016-01-01
Aim: This study aimed to investigate the existence of the golden proportion, recurring esthetic dental (RED) proportion and golden percentage between the frontal view widths of the maxillary anterior natural dentition among students of Indian origin by the aid of digital photography. Materials and Methods: This study was conducted with 80 dental students (41 female and 39 male), with ages ranging from 20 to 23 years. Students whose natural smile did not develop any visual tension with reg...
Littel, Marianne; van den Berg, Ivo; Luijten, Maartje; van Rooij, Antonius J; Keemink, Lianne; Franken, Ingmar H A
2012-09-01
Excessive computer gaming has recently been proposed as a possible pathological illness. However, research on this topic is still in its infancy and underlying neurobiological mechanisms have not yet been identified. The determination of underlying mechanisms of excessive gaming might be useful for the identification of those at risk, a better understanding of the behavior and the development of interventions. Excessive gaming has been often compared with pathological gambling and substance use disorder. Both disorders are characterized by high levels of impulsivity, which incorporates deficits in error processing and response inhibition. The present study aimed to investigate error processing and response inhibition in excessive gamers and controls using a Go/NoGo paradigm combined with event-related potential recordings. Results indicated that excessive gamers show reduced error-related negativity amplitudes in response to incorrect trials relative to correct trials, implying poor error processing in this population. Furthermore, excessive gamers display higher levels of self-reported impulsivity as well as more impulsive responding as reflected by less behavioral inhibition on the Go/NoGo task. The present study indicates that excessive gaming partly parallels impulse control and substance use disorders regarding impulsivity measured on the self-reported, behavioral and electrophysiological level. Although the present study does not allow drawing firm conclusions on causality, it might be that trait impulsivity, poor error processing and diminished behavioral response inhibition underlie the excessive gaming patterns observed in certain individuals. They might be less sensitive to negative consequences of gaming and therefore continue their behavior despite adverse consequences. © 2012 The Authors, Addiction Biology © 2012 Society for the Study of Addiction.
Study on the Grey Polynomial Geometric Programming
Institute of Scientific and Technical Information of China (English)
罗党
2005-01-01
In geometric programming models, exact parameter values often cannot be obtained owing to data fluctuation and incompleteness, but reasonable bounds on these parameters can be. That is, the parameters of the model can be regarded as interval grey numbers. When the model contains grey numbers, ordinary programming methods cannot solve it directly. By combining the ordinary programming model with grey system theory and applying several analysis strategies, a model of grey polynomial geometric programming and a model of θ-positioned geometric programming, together with their quasi-optimum or optimum solutions, are put forward, and an algorithm for the problem is developed. This approach opens a new way for applied research on geometric programming. An example at the end of the paper shows the rationality and feasibility of the algorithm.
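The θ-positioned whitenization at the heart of such grey models can be illustrated concretely. The sketch below (the interval coefficients are invented, and the two-term posynomial with its closed-form optimum is textbook geometric programming, not the paper's algorithm) turns interval grey coefficients into crisp values for a chosen position θ and solves the resulting one-variable geometric program:

```python
import math

def whiten(interval, theta):
    """theta-positioned whitenization of an interval grey number [a, b]."""
    a, b = interval
    return (1 - theta) * a + theta * b

def solve_two_term_gp(c1, c2):
    """Minimize the posynomial f(x) = c1*x + c2/x over x > 0.
    Closed form: x* = sqrt(c2/c1), f(x*) = 2*sqrt(c1*c2)."""
    x_star = math.sqrt(c2 / c1)
    return x_star, 2.0 * math.sqrt(c1 * c2)

# Grey coefficients known only as intervals; theta positions them.
grey_c1, grey_c2 = (2.0, 4.0), (8.0, 18.0)
for theta in (0.0, 0.5, 1.0):
    c1, c2 = whiten(grey_c1, theta), whiten(grey_c2, theta)
    x_star, f_star = solve_two_term_gp(c1, c2)
    print(theta, round(x_star, 4), round(f_star, 4))  # theta=0 gives x*=2.0, f*=8.0
```

Sweeping θ from 0 to 1 traces how the optimum shifts across the grey uncertainty band, which is the intuition behind the θ-positioned formulation.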
Federal Laboratory Consortium — Purpose: The mission of the Geometric Design Laboratory (GDL) is to support the Office of Safety Research and Development in research related to the geometric design...
Directory of Open Access Journals (Sweden)
Leila Hajian
2014-09-01
Written error correction may be the most widely used method of responding to student writing. Although various studies have investigated error correction, little research has considered teachers' and students' preferences regarding it. The present study investigates students' and teachers' preferences and attitudes towards the correction of written classroom errors in the Iranian EFL context using a questionnaire. Eighty students and 12 teachers completed the questionnaire, and the data were analyzed descriptively. Both teachers and students showed positive attitudes towards written error correction. Although the results demonstrate that teachers and students share some preferences regarding written error correction, there are important discrepancies: for example, students prefer all errors to be corrected, whereas teachers prefer to correct selectively; students also prefer teacher correction to peer or self-correction. The study considers a number of difficulties that written error correction poses for students and teachers and offers some suggestions. Many teachers believe that written error correction takes considerable time and effort, while many students report no difficulty in rewriting their papers after receiving feedback, which may improve their writing and give them self-confidence. Keywords: Error correction, teacher feedback, preferences.
Agrawal, Vineet S; Kapoor, Sonali; Bhesania, Dhvani; Shah, Chintul
2016-01-01
This study aimed to investigate the existence of the golden proportion, the recurring esthetic dental (RED) proportion and the golden percentage between the frontal-view widths of the maxillary anterior natural dentition among students of Indian origin, with the aid of digital photography. The study was conducted with 80 dental students (41 female and 39 male) aged 20 to 23 years. Students whose natural smile did not create any visual tension with regard to the study's criteria and their own were selected as having an esthetic smile. Photographs were taken, and the mesiodistal widths of the six maxillary anterior teeth were measured digitally using software. Once the measurements were recorded, the three theories of proportion were applied and statistical analysis was done. The golden proportion (i.e., 62%), RED proportion and golden percentage were not observed in the subjects. On average, the width of the maxillary lateral incisor was 72% of the frontal-view width of the central incisor, and the width of the canine was 84% of the frontal-view width of the lateral incisor. The golden proportion and RED proportion were not observed in the natural smiles of subjects deemed to have an esthetic smile, nor were the values proposed by the golden percentage theory. Average frontal-view percentage widths of the maxillary anterior dentition do exist and can be useful in predicting naturally occurring widths in smiles deemed esthetic in a specific population.
Zhao, P. Z.; Xu, G. F.; Tong, D. M.
2016-12-01
Nonadiabatic geometric quantum computation in decoherence-free subspaces has received increasing attention due to the merits of its high-speed implementation and robustness against both control errors and decoherence. However, all the previous schemes in this direction have been based on the conventional geometric phases, of which the dynamical phases need to be removed. In this paper, we put forward a scheme of nonadiabatic geometric quantum computation in decoherence-free subspaces based on unconventional geometric phases, of which the dynamical phases do not need to be removed. Specifically, by using three physical qubits undergoing collective dephasing to encode one logical qubit, we realize a universal set of geometric gates nonadiabatically and unconventionally. Our scheme not only maintains all the merits of nonadiabatic geometric quantum computation in decoherence-free subspaces, but also avoids the additional operations required in the conventional schemes to cancel the dynamical phases.
Geometric phases in graphitic cones
Energy Technology Data Exchange (ETDEWEB)
Furtado, Claudio [Departamento de Fisica, CCEN, Universidade Federal da Paraiba, Cidade Universitaria, 58051-970 Joao Pessoa, PB (Brazil)], E-mail: furtado@fisica.ufpb.br; Moraes, Fernando [Departamento de Fisica, CCEN, Universidade Federal da Paraiba, Cidade Universitaria, 58051-970 Joao Pessoa, PB (Brazil); Carvalho, A.M. de M [Departamento de Fisica, Universidade Estadual de Feira de Santana, BR116-Norte, Km 3, 44031-460 Feira de Santana, BA (Brazil)
2008-08-04
In this Letter we use a geometric approach to study geometric phases in graphitic cones. The spinor that describes the low-energy states near the Fermi energy acquires a phase when transported around the apex of the cone, as found by a holonomy transformation. This topological result can be viewed as an analogue of the Aharonov-Bohm effect. The topological analysis is extended to a system with n cones, whose resulting configuration is described by an effective defect.
Geometric symmetries in light nuclei
Bijker, Roelof
2016-01-01
The algebraic cluster model is applied to study cluster states in the nuclei 12C and 16O. The observed level sequences can be understood in terms of the underlying discrete symmetry that characterizes the geometric configuration of the alpha particles: an equilateral triangle for 12C and a regular tetrahedron for 16O. The structure of the rotational bands provides a fingerprint of the underlying geometric configuration of the alpha particles.
Bose, Prosenjit; Morin, Pat; Smid, Michiel
2012-01-01
Highly connected and yet sparse graphs (such as expanders or graphs of high treewidth) are fundamental, widely applicable and extensively studied combinatorial objects. We initiate the study of such highly connected graphs that are, in addition, geometric spanners. We define a property of spanners called robustness. Informally, when one removes a few vertices from a robust spanner, this harms only a small number of other vertices. We show that robust spanners must have a superlinear number of edges, even in one dimension. On the positive side, we give constructions, for any dimension, of robust spanners with a near-linear number of edges.
Geometric wakefield regimes study of a rectangular tapered collimator for ATF2
Fuster-Martinez, Nuria; Latina, Andrea; Snuverink, Jochem
2016-01-01
In this paper we study the discrepancies among three estimates of the wakefield effect induced by a rectangular tapered collimator prototype for ATF2: analytical models, CST PS numerical simulations, and the tracking code PLACET v1.0.0. To obtain consistent results among the analytical calculations, the CST PS simulations and the tracking code, the collimator wakefield module in PLACET v1.0.0 had to be modified. The changes have been implemented in the tracking code PLACET v1.0.1.
Experimental Study on the Effect of Exit Geometric Configurations on Hydrodynamics in CFB
Directory of Open Access Journals (Sweden)
Xiaolei Qiao
2013-05-01
The exit configuration of a CFB strongly influences the bulk density profile and the internal circulation of the bed material, a phenomenon known as the end effect. This study analyzes the influence of three exit geometries and two narrowed exit geometries on the hydrodynamics. Experiments indicate that an exit with a projected roof in a CFB may act as a separator, and that the projected height has a maximum. Narrowing the bed cross-section near the bed exit zone is a simple and effective way to enhance the internal circulation and simultaneously reduce the circulation of bed material.
Directory of Open Access Journals (Sweden)
Seyedeh Narjes Tabatabei
2014-01-01
Using fish scales to identify species and populations is a rapid, safe and low-cost method. This study therefore investigated the possibility of using geometric morphometric methods on fish scales for rapid identification of species and populations, and compared the efficiency of using few versus many landmark points. Scales of one population of Luciobarbus capito, four populations of Alburnoides eichwaldii and two populations of Rutilus frisii kutum, all belonging to the cyprinid family, were examined. On two-dimensional images of the scales, 7 and 23 landmark points were digitized at two separate times using TpsDig2. After generalized Procrustes analysis, the landmark data were analyzed using Principal Component Analysis (PCA), Canonical Variate Analysis (CVA) and cluster analysis. Both methods (7 and 23 landmark points) showed significant differences in scale shape among the three species studied (P < 0.05). The results also showed that a small number of landmarks can display the differences between scale shapes. Accordingly, it can be stated that the scales of each species have unique shape patterns that can be utilized as a species identification key.
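The Procrustes superimposition step that precedes such PCA/CVA analyses can be sketched in a few lines. The following is a minimal two-shape illustration (pure standard library; the landmark coordinates are invented) using the complex-number form of the optimal 2-D rotation rather than the full iterative generalized Procrustes analysis over many specimens:

```python
import math

def procrustes_align(ref, shape):
    """Align one 2-D landmark configuration to a reference: translate to a
    common centroid, scale to unit centroid size, and rotate by the
    least-squares optimal angle (complex-number formulation)."""
    def normalize(pts):
        z = [complex(x, y) for x, y in pts]
        c = sum(z) / len(z)
        z = [w - c for w in z]                       # remove translation
        size = math.sqrt(sum(abs(w) ** 2 for w in z))
        return [w / size for w in z]                 # remove scale
    u, v = normalize(ref), normalize(shape)
    # The optimal rotation e^{it} maximizes Re(e^{it} * sum(conj(u_k) v_k)).
    s = sum(uk.conjugate() * vk for uk, vk in zip(u, v))
    rot = s.conjugate() / abs(s)
    return [vk * rot for vk in v]

# A square and the same square rotated, scaled and translated should coincide.
ref = [(0, 0), (1, 0), (1, 1), (0, 1)]
moved = [(2 - 2 * y, 5 + 2 * x) for x, y in ref]  # x2 scale, 90° turn, shift
aligned = procrustes_align(ref, moved)
target = procrustes_align(ref, ref)
err = max(abs(a - t) for a, t in zip(aligned, target))
print(round(err, 10))  # ~0: the similarity transform is fully removed
```

After this alignment, only shape variation remains, which is what PCA and CVA then decompose.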
Greathouse, James S.; Schwing, Alan M.
2015-01-01
This paper explores the use of computational fluid dynamics to study the effect of geometric porosity on static stability and drag for NASA's Multi-Purpose Crew Vehicle main parachute. Both aerodynamic characteristics are of interest in parachute design, and computational methods promise designers the ability to perform detailed parametric studies and other design iterations with a level of control previously unobtainable in ground or flight testing. The approach presented here uses a canopy structural analysis code to define the inflated parachute shapes, on which structured computational grids are generated. These grids are used by the computational fluid dynamics code OVERFLOW and are modeled as rigid, impermeable bodies for this analysis. Comparisons to Apollo drop-test data are shown as preliminary validation of the technique. Results include several parametric sweeps through design variables in order to better understand the trade between static stability and drag. Finally, designs that maximize static stability with minimal loss in drag are suggested for further study in subscale ground and flight testing.
Medication knowledge, certainty, and risk of errors in health care: a cross-sectional study
Directory of Open Access Journals (Sweden)
Johansson Inger
2011-07-01
Background: Medication errors are often involved in reported adverse events. Drug therapy, prescribed by physicians, is mostly carried out by nurses, who are expected to master all aspects of medication. Research has revealed the need for improved knowledge of drug dose calculation, and medication knowledge as a whole is poorly investigated. The purpose of this survey was to study registered nurses' medication knowledge, certainty and estimated risk of errors, and to explore factors associated with good results. Methods: Nurses from hospitals and primary health care establishments were invited to complete a multiple-choice test in pharmacology, drug management and drug dose calculations (score range 0-14). Self-estimated certainty in each answer was recorded, graded from 0 = very uncertain to 3 = very certain. Background characteristics and sense of coping were recorded. Risk of error was estimated by combining knowledge and certainty scores. The results are presented as mean (±SD). Results: Two hundred and three registered nurses participated (including 16 males), aged 42.0 (9.3) years with a working experience of 12.4 (9.2) years. Knowledge scores in pharmacology, drug management and drug dose calculations were 10.3 (1.6), 7.5 (1.6), and 11.2 (2.0), respectively, and certainty scores were 1.8 (0.4), 1.9 (0.5), and 2.0 (0.6), respectively. Fifteen percent of all answers showed a high risk of error, rising to 25% in drug management. An independent factor associated with high medication knowledge was working in hospitals. Conclusions: Medication knowledge was found to be unsatisfactory among practicing nurses, with a significant risk of medication errors. The study revealed a need to improve nurses' basic knowledge, especially of drug management.
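The survey's risk-of-error construction, flagging answers that are wrong yet held with high certainty, can be sketched as follows (a hypothetical illustration: the certainty threshold and the answer data are assumptions, not the study's actual scoring rules):

```python
def high_risk_rate(answers):
    """Fraction of answers that are wrong but given with high certainty
    (certainty graded 0 = very uncertain .. 3 = very certain).
    Here a wrong answer with certainty >= 2 counts as high risk."""
    risky = sum(1 for correct, certainty in answers
                if not correct and certainty >= 2)
    return risky / len(answers)

# (correct?, certainty) pairs for eight hypothetical test items
answers = [(True, 3), (True, 2), (False, 3), (True, 1),
           (False, 0), (True, 3), (False, 2), (True, 2)]
print(high_risk_rate(answers))  # 0.25 (2 of 8 wrong-but-certain)
```

The rationale is that a confidently wrong answer is the dangerous case in practice: the nurse would act on it without double-checking, whereas an uncertain nurse is more likely to verify.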
Studies In Non-anticommutative Gauge Theories, Geometric Dualities, And Twistor Strings
Robles Llana, D
2005-01-01
In this Dissertation we consider three different topics. The first one is the study of instantons in U(2) super Yang-Mills theories defined on non-anticommutative superspace. We extend the ordinary instanton calculus to this class of theories by solving the appropriate equations of motion iteratively in the deformation parameter C. In the case without matter, we solve the equations exactly. We find that the SU(2) part of the instanton is the same as in ordinary SU (2) N = 1 super Yang-Mills, but acquires in addition a non-trivial U(1) part which depends on the fermionic collective coordinates and the deformation parameter C. In the case with matter we solve the equations of motion to leading order in the coupling constant. We find that also the profile of the matter fields is deformed through linear and quadratic corrections in C. The instanton effective action for pure gluodynamics is unaffected by C, but gets a contribution of order C2 in addition to the usual 't Hooft term when the matter is included....
Theoretical study on Fe-Al clusters:geometric structure,bonding law and electronic structures
Institute of Scientific and Technical Information of China (English)
CHEN Shougang; YIN Yansheng; WANG Daoping; LU Yao
2004-01-01
Structures of small Fe-Al clusters with different atomic proportions are calculated using the B3LYP method of density functional theory (DFT). The results show that Al atoms lose electrons easily while Fe atoms capture electrons easily. The most stable geometries maximize Fe-Fe and Fe-Al bonding, and the stability trend among clusters with the same atomic proportion follows the change in the highest occupied molecular orbital (HOMO) energy and the entropy of the cluster system. Moreover, the electronic-structure study of the ground-state Fe3Al and Fe2CrAl clusters shows that substituting a Cr atom for the Fe atom located at the next-neighboring site of the Al atom reduces localized electrons not only between the Al atom and the next-neighboring Cr atom, but also between the Al atom and the nearest-neighboring Fe atom. Although the substitution increases the plasticity and magnetism of the intermetallic compound, the stability of the system slightly decreases. These theoretical results agree well with the experimental results.
Tsui, Y. K.; Kalechofsky, N.; Burns, C. A.; Schiffer, P.
1999-04-01
Gadolinium gallium garnet, Gd3Ga5O12 (GGG), has an extraordinary low-temperature phase diagram. Although the Curie-Weiss temperature of GGG is about -2 K, GGG shows no long-range order down to T ~ 0.4 K. At low temperatures GGG has a spin-glass phase at low fields (⩽0.1 T) and a field-induced long-range-ordered antiferromagnetic state at fields between 0.7 and 1.3 T [P. Schiffer et al., Phys. Rev. Lett. 73, 2500 (1994), S. Hov, H. Bratsberg, and A. T. Skjeltorp, J. Magn. Magn. Mater. 15-18, 455 (1980); S. Hov, Ph.D. thesis, University of Oslo, 1979 (unpublished), A. P. Ramirez and R. N. Kleiman, J. Appl. Phys. 69, 5252 (1991)]. However, the nature of the ground state at intermediate fields is still unknown and has been hypothesized to be a three-dimensional spin liquid. We have measured the thermal conductivity (κ) and heat capacity (C) of a high-quality single crystal of GGG in the low-temperature regime in order to study the nature of this state. The field dependence of κ shows that phonons are the predominant heat carriers and are scattered by spin fluctuations. We observe indications in κ(H) and C(H) of both the field-induced ordering and the spin-glass phase at low temperatures (T ⩽ 200 mK).
Radillo, Marisol
2011-01-01
We report partial results of an exploratory study whose objective was to identify the difficulties related to the use of mathematical language in solving Euclidean geometry problems faced by engineering students at the Universidad de Guadalajara, Mexico. The methodology centers on a linguistic description of the demonstrable differences between the type of text required in the solution of various types of problems and the answers...
Geometric Computing Based on Computerized Descriptive Geometric
Institute of Scientific and Technical Information of China (English)
YU Hai-yan; HE Yuan-Jun
2011-01-01
Computer-aided Design (CAD), video games and other computer-graphics technologies involve substantial processing of geometric elements. A novel geometric computing method is proposed that integrates descriptive geometry, mathematics and computer algorithms. First, geometric elements in general position are transformed to a special position in a new coordinate system. Then a 3D problem is projected onto the new coordinate planes. Finally, according to the 2D/3D correspondence principle of descriptive geometry, the solution is constructed by a computerized ruler-and-compass drawing process. To make this method a regular operation, a two-level pattern is established. The Basic Layer is a set of packaged algebraic functions comprising about ten Primary Geometric Functions (PGFs) and one projection transformation. In the Application Layer, a proper coordinate system is established and a sequence of PGFs is sought to obtain the final results. Examples illustrate the advantages of the method for dimension reduction, regular and visual computing, and robustness.
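The dimension-reduction step, transforming to a coordinate system in which a 3D problem becomes planar, can be sketched as follows (a minimal illustration: the function names and the example plane are assumptions, not the paper's PGF library):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def unit(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def plane_coords(origin, p1, p2, point):
    """Project `point` onto the plane through origin, p1, p2 and return its
    2-D coordinates in an orthonormal frame lying in that plane."""
    e1 = unit(sub(p1, origin))            # first in-plane axis
    n = unit(cross(e1, sub(p2, origin)))  # plane normal
    e2 = cross(n, e1)                     # completes the in-plane frame
    d = sub(point, origin)
    return dot(d, e1), dot(d, e2)

# The 3-D problem "distance between two points on a tilted plane"
# reduces to plain 2-D geometry in the new frame.
a = plane_coords((0, 0, 0), (1, 0, 1), (0, 1, 0), (2, 0, 2))
b = plane_coords((0, 0, 0), (1, 0, 1), (0, 1, 0), (0, 3, 0))
print(a, b)
```

Once the points are expressed in the plane's own frame, any 2D construction (ruler-and-compass style) can be carried out and the result mapped back, which is the 2D/3D correspondence the method exploits.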
Energy Technology Data Exchange (ETDEWEB)
Kim, Jun Seong; Lee, Woo Seung; Kim, Jin Sub; Song, Seung Hyun; Nam, Seok Ho; Jeon, Hae Ryong; Beak, Geon Woo; Ko, Tae Kuk [Yonsei University, Seoul (Korea, Republic of)
2016-09-15
Recently, production techniques for High-Temperature Superconductor (HTS) tape and its properties have improved, and studies on applying HTS magnets to high-field applications have increased rapidly. A Nuclear Magnetic Resonance (NMR) spectrometer requires a central magnetic field of high magnitude and homogeneity. However, an HTS magnet suffers fabrication errors because the conductor is a tape wound onto a bobbin: winding errors, bobbin-diameter errors, spacer-thickness errors and so on. A winding error occurs when the HTS tape departs from its arranged position on the bobbin; bobbin-diameter and spacer-thickness errors occur when the diameters of the bobbin and the thicknesses of the spacers are inaccurate. These errors cause the magnitude and homogeneity of the central field to deviate from the ideal design. The purpose of this paper is to investigate the effect of winding, bobbin-diameter and spacer-thickness errors on the central field and field homogeneity of an HTS magnet using virtual NMR signals in a MATLAB simulation.
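The sensitivity of the central field to such fabrication errors can be sketched with an on-axis Biot-Savart model (Python rather than the paper's MATLAB; the magnet dimensions, tolerance and turn count are invented for illustration):

```python
import math, random

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def loop_field(radius, z, current):
    """On-axis magnetic field (T) of a circular loop at axial distance z (m)."""
    return MU0 * current * radius ** 2 / (2 * (radius ** 2 + z ** 2) ** 1.5)

def central_field(radii, positions, current):
    """Field at the magnet centre from a stack of turns."""
    return sum(loop_field(r, z, current) for r, z in zip(radii, positions))

# Idealized magnet: 100 turns, 40 mm radius, 1 mm axial pitch, 100 A (invented).
n, pitch, current = 100, 1e-3, 100.0
ideal_r = [0.04] * n
ideal_z = [(i - (n - 1) / 2) * pitch for i in range(n)]
b_ideal = central_field(ideal_r, ideal_z, current)

# Winding error: each turn's radius off by up to +/-0.1 mm.
rng = random.Random(1)
worst = 0.0
for _ in range(200):
    r = [ri + rng.uniform(-1e-4, 1e-4) for ri in ideal_r]
    worst = max(worst, abs(central_field(r, ideal_z, current) - b_ideal))
print(b_ideal, worst)  # ideal central field and worst-case deviation
```

Perturbing turn positions instead of radii would model the bobbin-diameter and spacer-thickness errors in the same Monte-Carlo fashion.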
Visual acuity measures do not reliably detect childhood refractive error--an epidemiological study.
Directory of Open Access Journals (Sweden)
Lisa O'Donoghue
PURPOSE: To investigate the utility of uncorrected visual acuity measures in screening for refractive error in white school children aged 6-7 years and 12-13 years. METHODS: The Northern Ireland Childhood Errors of Refraction (NICER) study used a stratified random cluster design to recruit children from schools in Northern Ireland. Detailed eye examinations included assessment of logMAR visual acuity and cycloplegic autorefraction. Spherical equivalent refractive data from the right eye were used to classify significant refractive error as myopia of at least 1DS, hyperopia as greater than +3.50DS, and astigmatism as greater than 1.50DC, whether occurring in isolation or in association with myopia or hyperopia. RESULTS: Results are presented from 661 white 12-13-year-old and 392 white 6-7-year-old schoolchildren. Using a cut-off of uncorrected visual acuity poorer than 0.20 logMAR to detect significant refractive error gave a sensitivity of 50% and specificity of 92% in 6-7-year-olds, and 73% and 93% respectively in 12-13-year-olds. In 12-13-year-old children, a cut-off of poorer than 0.20 logMAR had a sensitivity of 92% and a specificity of 91% in detecting myopia, and a sensitivity of 41% and a specificity of 84% in detecting hyperopia. CONCLUSIONS: Vision screening using logMAR acuity can reliably detect myopia, but not hyperopia or astigmatism, in school-age children. Providers of vision screening programs should be cognisant that where detection of uncorrected hyperopic and/or astigmatic refractive error is an aspiration, current UK protocols will not effectively deliver it.
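The sensitivity and specificity figures above follow directly from a screening confusion matrix. A minimal sketch (the counts are hypothetical, chosen only to mirror the 12-13-year-old percentages, not the study's raw data):

```python
def screening_stats(tp, fp, fn, tn):
    """Sensitivity and specificity of a pass/fail screening cut-off.
    tp/fn: children with refractive error who fail/pass the chart;
    fp/tn: children without refractive error who fail/pass it."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical cohort screened at a 0.20 logMAR cut-off: 73 of 100 children
# with refractive error fail (true positives), 93 of 100 without it pass.
sens, spec = screening_stats(tp=73, fp=7, fn=27, tn=93)
print(sens, spec)  # 0.73 0.93
```

A low sensitivity, as found here for hyperopia, means the screen silently misses most affected children even though its specificity looks respectable.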
Mogull, Scott A
2017-01-01
Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).
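A normal-approximation confidence interval of the kind quoted above can be computed as follows (a generic sketch: the counts are invented to give a 14.5% proportion, and the review's own intervals come from pooling across studies, not from this single-sample formula):

```python
import math

def proportion_ci(errors, total, z=1.96):
    """Point estimate and normal-approximation 95% CI for an error proportion."""
    p = errors / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), p + half

# e.g. 29 erroneous quotations found among 200 examined (hypothetical counts)
p, lo, hi = proportion_ci(29, 200)
print(round(p, 3), round(lo, 3), round(hi, 3))  # 0.145 0.096 0.194
```

Note the review's choice of denominator matters: dividing errors by quotations examined, as here, gives a per-quotation rate, whereas dividing by references selected produces the higher weighted figures of earlier reviews.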
PREFACE: Geometrically frustrated magnetism Geometrically frustrated magnetism
Gardner, Jason S.
2011-04-01
Frustrated magnetism is an exciting and diverse field in condensed matter physics that has grown tremendously over the past 20 years. This special issue aims to capture some of that excitement in the field of geometrically frustrated magnets and is inspired by the 2010 Highly Frustrated Magnetism (HFM 2010) meeting in Baltimore, MD, USA. Geometric frustration is a broad phenomenon that results from an intrinsic incompatibility between some fundamental interactions and the underlying lattice geometry based on triangles and tetrahedra. Most studies have centred on the kagomé and pyrochlore based magnets, but recent work has looked at other structures including the delafossites, langasites, hyper-kagomé, garnets and Laves phase materials, to name a few. Personally, I hope this issue serves as a great reference to scientists both new and old to this field, and that we all continue to have fun in this very frustrated playground. Finally, I want to thank the HFM 2010 organizers and all the sponsors whose contributions were an essential part of the success of the meeting in Baltimore.
Geometrically frustrated magnetism contents:
- Spangolite: an s = 1/2 maple leaf lattice antiferromagnet? (T Fennell, J O Piatek, R A Stephenson, G J Nilsen and H M Rønnow)
- Two-dimensional magnetism and spin-size effect in the S = 1 triangular antiferromagnet NiGa2S4 (Yusuke Nambu and Satoru Nakatsuji)
- Short range ordering in the modified honeycomb lattice compound SrHo2O4 (S Ghosh, H D Zhou, L Balicas, S Hill, J S Gardner, Y Qi and C R Wiebe)
- Heavy fermion compounds on the geometrically frustrated Shastry-Sutherland lattice (M S Kim and M C Aronson)
- A neutron polarization analysis study of moment correlations in (Dy0.4Y0.6)T2 (T = Mn, Al) (J R Stewart, J M Hillier, P Manuel and R Cywinski)
- Elemental analysis and magnetism of hydronium jarosites, model kagome antiferromagnets and topological spin glasses (A S Wills and W G Bisson)
- The Herbertsmithite Hamiltonian: μSR measurements on single crystals
Energy Technology Data Exchange (ETDEWEB)
Borkowski, R.J.; Fragola, J.R.; Schurman, D.L.; Johnson, J.W.
1984-03-01
This report documents the procedure and final results of a feasibility study which examined the usefulness of nuclear plant maintenance work requests in the IPRDS as tools for understanding human error and its influence on component failure and repair. Developed in this study were (1) a set of criteria for judging the quality of a plant maintenance record set for studying human error; (2) a scheme for identifying human errors in the maintenance records; and (3) two taxonomies (engineering-based and psychology-based) for categorizing and coding human error-related events.
Directory of Open Access Journals (Sweden)
Y. Zhao
2011-03-01
We analyze how biases in meteorological drivers affect the calculation of ecosystem CO2, water and energy fluxes by models. To do so, we drive the same ecosystem model by meteorology from gridded products and by "true" meteorology observed locally at eddy-covariance flux sites. The study focuses on six flux-tower sites in France spanning a 7-14 °C and 600-1040 mm/yr climate gradient, with forest, grassland and cropland ecosystems. We evaluate the results of the ORCHIDEE process-based model driven by four different meteorological models against the same model driven by site-observed meteorology. The evaluation is decomposed into characteristic time scales. The main result is that there are significant differences between the meteorological models and local tower meteorology. The seasonal cycle of air temperature, humidity and shortwave downward radiation is reproduced correctly by all meteorological models (average R^2 = 0.90). The misfit between gridded drivers and tower meteorology is largest at sites located near the coast and influenced by sea breeze, or located at altitude. Day-to-day variations in weather are not fully reproduced by the meteorological models, with R^2 between the modeled grid point and the measured local meteorology ranging from 0.35 (REMO model) to 0.70 (SAFRAN model). The bias of the meteorological models affects the flux simulation by ORCHIDEE, and thus would have an effect on regional and global budgets. The forcing error, defined as the simulated flux difference resulting from prescribing modeled rather than observed local meteorological drivers to ORCHIDEE, is quantified for the six studied sites and different time scales. The magnitude of this forcing error is compared to that of the model error, defined as the modeled-minus-observed flux, thus containing uncertain parameterizations, parameter values, and
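The decomposition into forcing error and model error can be sketched as follows (the flux series are invented for illustration; ORCHIDEE itself is not involved):

```python
import math

def rmse(a, b):
    """Root-mean-square difference between two equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Hypothetical daily flux series at one site (arbitrary units):
observed    = [1.0, 1.4, 0.9, 1.6, 1.2, 0.8]
sim_local   = [1.1, 1.3, 1.0, 1.5, 1.1, 0.9]  # model + tower meteorology
sim_gridded = [1.3, 1.1, 1.2, 1.3, 0.9, 1.1]  # model + gridded meteorology

model_error   = rmse(sim_local, observed)      # parameterizations etc.
forcing_error = rmse(sim_gridded, sim_local)   # meteorology bias alone
print(round(model_error, 3), round(forcing_error, 3))  # 0.1 0.2
```

Comparing the two magnitudes, as the paper does per site and per time scale, shows whether improving the meteorological forcing or the model itself would pay off more.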
Directory of Open Access Journals (Sweden)
Jeongyeup Paek
2016-01-01
Bluetooth Low Energy (BLE) and iBeacons have recently gained wide interest for enabling various proximity-based application services. Given the ubiquity of Bluetooth devices, including mobile smartphones, BLE and iBeacon technologies seemed a promising way forward. This work started from the belief that this was true: iBeacons could provide the accuracy in proximity and distance estimation needed to enable, and simplify the development of, many previously difficult applications. However, our empirical studies with three different iBeacon devices from various vendors and two types of smartphone platforms prove that this is not the case. Signal-strength readings vary significantly across iBeacon vendors, mobile platforms, environmental or deployment factors, and usage scenarios. This variability naturally complicates accurate location/proximity estimation in real environments. These lessons on the limitations of the iBeacon technique led us to design a simple class-attendance-checking application that performs a simple form of geometric adjustment to compensate for the natural variations in beacon signal-strength readings. We believe the negative observations made in this work can give future researchers a reference for how much performance to expect from iBeacon devices as they enter their system-design phases.
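The proximity estimates whose fragility is documented above typically come from a log-distance path-loss model. A minimal sketch (the calibration constant and path-loss exponent are illustrative assumptions; their variability across vendors and rooms is precisely the problem the study reports):

```python
def rssi_to_distance(rssi, tx_power=-59, n=2.0):
    """Log-distance path-loss estimate of range in metres.
    tx_power is the calibrated RSSI at 1 m (dBm) and n the environment's
    path-loss exponent; both vary widely across iBeacon hardware and rooms."""
    return 10 ** ((tx_power - rssi) / (10 * n))

print(rssi_to_distance(-59))            # 1.0 m at the calibration point
print(round(rssi_to_distance(-71), 2))  # ~4 m under these assumed constants
```

Because the estimate is exponential in the RSSI, the few-dB spread between vendors and platforms that the study measures translates into large distance errors, which motivates the paper's compensating geometric adjustments.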
Specific heat study of geometrically frustrated magnet clinoatacamite Cu{sub 2}Cl(OH){sub 3}
Energy Technology Data Exchange (ETDEWEB)
Morodomi, Hiroki; Ienaga, Koichiro; Inagaki, Yuji; Kawae, Tatsuya [Department of Applied Quantum Physics, Kyushu University, Moto-oka, Fukuoka 812-8581 (Japan); Hagiwara, Masayuki; Zheng, X G, E-mail: 3te09049m@s.kyushu-u.ac.j [Department of Physics, Saga University, Saga, 840-8502 (Japan)
2010-01-01
We have performed a specific heat study of a new geometrically frustrated system, clinoatacamite Cu{sub 2}Cl(OH){sub 3}, with a corner-sharing tetrahedron structure of Cu{sup 2+} ions carrying S=1/2 Heisenberg spins. At H=0 T, two anomalies are observed, at T{sub 1}=18.1 K and at T{sub 2}=6.2 K. The specific heat decreases rapidly below T{sub 2} and shows no anomaly down to T=150 mK despite the spin fluctuations observed in {mu}SR experiments. As the magnetic field is increased, the sharp peak at T{sub 2} is broadened and shows a small reentrant behavior in the T - H phase diagram. On the other hand, the peak at T{sub 1} shows no obvious change up to H=5 T. The entropy at T{sub 1} is estimated as {approx}0.35Rln2 at H=0 T. These features may be caused by the two-dimensional nature of the kagome antiferromagnets, which are weakly coupled via Cu{sup 2+} ions at the triangular sites located in between the kagome layers.
Organizational Climate, Stress, and Error in Primary Care: The MEMO Study
2005-05-01
This model was derived from our earlier work, the Physician Worklife Study, as well as the pioneering work of Lazarus and Folkman on stress, appraisal and coping, and of Ivancevich and Matteson. Organizational climate in health care (i.e., the perception of culture by those within it) has been described
Reliability and error analysis on xenon/CT CBF
Energy Technology Data Exchange (ETDEWEB)
Zhang, Z. [Diversified Diagnostic Products, Inc., Houston, TX (United States)
2000-02-01
This article provides a quantitative error analysis of a simulation model of xenon/CT CBF in order to investigate the behavior and effect of different types of errors such as CT noise, motion artifacts, lower percentage of xenon supply, lower tissue enhancements, etc. A mathematical model is built to simulate these errors. By adjusting the initial parameters of the simulation model, we can scale the Gaussian noise, control the percentage of xenon supply, and change the tissue enhancement with different kVp settings. The motion artifact is treated separately by geometrically shifting the sequential CT images. The input function is chosen from an end-tidal xenon curve of a practical study. Four values of cerebral blood flow, 10, 20, 50, and 80 cc/100 g/min, are examined under different error environments and the corresponding CT images are generated following the currently popular timing protocol. The simulated studies are fed to a regular xenon/CT CBF system for calculation and evaluation. A quantitative comparison is given to reveal the behavior and effect of individual error sources. Mixed error testing is also provided to inspect the combined effect of errors. The experiment shows that CT noise is still a major error source. The motion artifact affects the CBF results more geometrically than quantitatively. Lower xenon supply has a lesser effect on the results, but will reduce the signal/noise ratio. Lower xenon enhancement will lower the flow values in all areas of the brain. (author)
Sherrod, Sonya Ellouise; Wilhelm, Jennifer
2009-01-01
Research indicates that student understanding is either confirmed or reformed when given opportunities to share what they know. This study was conducted to answer the research question: Will classroom dialogue facilitate students' understanding of lunar concepts related to geometric spatial visualisation? Ninety-two middle school students engaged…
Hannawa, Annegret F
2017-05-03
The question is no longer whether to disclose an error to a patient. Many studies have established that medical errors are co-owned by providers and patients and thus must be disclosed. However, little evidence is available on the concrete communication skills and contextual features that contribute to patients' perceptions of "competent disclosures" as a key predictor of objective disclosure outcomes. This study operationalises a communication science model to empirically characterise what messages, behaviours and contextual factors Swiss patients commonly consider "competent" during medical error disclosures, and what symptoms and behaviours they experience in response to competent and incompetent disclosures. For this purpose, ten focus groups were conducted at five hospitals across Switzerland. Sixty-three patients participated in the meetings. Qualitative analysis of the focus group transcripts revealed concrete patient expectations regarding providers' motivations, knowledge and skills. The analysis also illuminated under what circumstances to disclose, what to disclose, how to disclose and the effects of competent and incompetent disclosures on patients' symptoms and behaviours. Patients expected that providers enter a disclosure informed and with approach-oriented motivations. In line with previous research, they preferred a remorseful declaration of responsibility and apology, a clear and honest account, and a discussion of reparation and future forbearance. Patients expected providers to display attentiveness, composure, coordination, expressiveness and interpersonal adaptability as core communication skills. Furthermore, numerous functional, relational, chronological and environmental contextual considerations evolved as critical features of competent disclosures. While patients agreed on a number of preferences, there is no one-size-fits-all approach to competent disclosures. Thus, error disclosures do not lend themselves to a checklist approach
Studies on National Preparatory Students’English Oral Errors and Corrections
Institute of Scientific and Technical Information of China (English)
李媛媛
2014-01-01
This paper, based on theory and teaching practice, presents a tentative analysis of English oral errors commonly made by university national preparatory students. First, I analyze the causes of oral errors, then review teachers' different attitudes towards oral errors, and finally propose some main principles, factors, and possible strategies for oral error correction.
Xiao, Yi; Ma, Feng; Lv, Yixuan; Cai, Gui; Teng, Peng; Xu, FengGang; Chen, Shanguang
2015-01-01
Attention is important in error processing. Few studies have examined the link between sustained attention and error processing. In this study, we examined how error-related negativity (ERN) of a four-choice reaction time task was reduced in the mental fatigue condition and investigated the role of sustained attention in error processing. Forty-one recruited participants were divided into two groups. In the fatigue experiment group, 20 subjects performed a fatigue experiment and an additional continuous psychomotor vigilance test (PVT) for 1 h. In the normal experiment group, 21 subjects only performed the normal experimental procedures without the PVT test. Fatigue and sustained attention states were assessed with a questionnaire. Event-related potential results showed that ERN (p attention and fatigue states in electrodes Fz, FC1, Cz, and FC2. These findings indicated that sustained attention was related to error processing and that decreased attention is likely the cause of error processing impairment.
Thompson, W C
1995-01-01
This article discusses two factors that may profoundly affect the value of DNA evidence for proving that two samples have a common source: uncertainty about the interpretation of test results and the possibility of laboratory error. Three case studies are presented to illustrate the importance of the analyst's subjective judgments in interpreting some RFLP-based forensic DNA tests. In each case, the likelihood ratio describing the value of DNA evidence is shown to be dramatically reduced by uncertainty about the scoring of bands and the possibility of laboratory error. The article concludes that statistical estimates of the frequency of matching genotypes can be a misleading index of the value of DNA evidence, and that more adequate indices are needed. It also argues that forensic laboratories should comply with the National Research Council's recommendation that forensic test results be scored in a blind or objective manner.
A study of symplectic integrators for planetary system problems: error analysis and comparisons
Hernandez, David M.; Dehnen, Walter
2017-07-01
The symplectic Wisdom-Holman map revolutionized long-term integrations of planetary systems. There is freedom in such methods of how to split the Hamiltonian and which coordinate system to employ, and several options have been proposed in the literature. These choices lead to different integration errors, which we study analytically and numerically. The Wisdom-Holman method in Jacobi coordinates and the method of Hernandez, H16, compare favourably and avoid problems of some of the other maps, such as incorrect centre-of-mass position or truncation errors even in the one-planet case. We use H16 to compute the evolution of Pluto's orbital elements over 500 million years in a new calculation.
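The defining advantage of symplectic maps such as Wisdom-Holman is that their energy error stays bounded over long integrations rather than drifting. A minimal sketch of that property (a plain kick-drift-kick leapfrog on a circular Kepler orbit, not the Wisdom-Holman splitting itself; units and step size chosen for illustration):

```python
import math

def accel(x, y, gm=1.0):
    # Keplerian acceleration toward the origin
    r3 = (x * x + y * y) ** 1.5
    return -gm * x / r3, -gm * y / r3

def leapfrog(x, y, vx, vy, dt, steps):
    # kick-drift-kick leapfrog: second order and symplectic,
    # so the energy error oscillates instead of accumulating
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        x += dt * vx; y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    return x, y, vx, vy

def energy(x, y, vx, vy, gm=1.0):
    return 0.5 * (vx * vx + vy * vy) - gm / math.hypot(x, y)

# circular orbit of radius 1 with GM = 1: speed 1, energy -0.5
e0 = energy(1.0, 0.0, 0.0, 1.0)
x, y, vx, vy = leapfrog(1.0, 0.0, 0.0, 1.0, dt=0.01, steps=10000)
print(abs(energy(x, y, vx, vy) - e0))  # stays small after ~16 orbits
```

A non-symplectic scheme of the same order (e.g. classical Runge-Kutta) would instead show a secular energy drift over the same span, which is why splitting methods dominate long-term planetary integrations.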
Stability Comparison of Recordable Optical Discs—A Study of Error Rates in Harsh Conditions
Slattery, Oliver; Lu, Richang; Zheng, Jian; Byers, Fred; Tang, Xiao
2004-01-01
The reliability and longevity of any storage medium is a key issue for archivists and preservationists as well as for the creators of important information. This is particularly true in the case of digital media such as DVD and CD where a sufficient number of errors may render the disc unreadable. This paper describes an initial stability study of commercially available recordable DVD and CD media using accelerated aging tests under conditions of increased temperature and humidity. The effect of prolonged exposure to direct light is also investigated and shown to have an effect on the error rates of the media. Initial results show that high quality optical media have very stable characteristics and may be suitable for long-term storage applications. However, results also indicate that significant differences exist in the stability of recordable optical media from different manufacturers. PMID:27366630
Directory of Open Access Journals (Sweden)
Claudimar Pereira da Veiga
2012-08-01
Full Text Available The importance of demand forecasting as a management tool is a well documented issue. However, it is difficult to measure the costs generated by forecasting errors and to find a model that adequately captures the detailed operation of each company. In general, when linear models fail in the forecasting process, more complex nonlinear models are considered. Although some studies comparing traditional models and neural networks have been conducted in the literature, the conclusions are usually contradictory. In this sense, the objective was to compare the accuracy of linear methods and neural networks with the current method used by the company. The results of this analysis also served as input to evaluate the influence of demand forecasting errors on the financial performance of the company. The study was based on historical data from five groups of food products, from 2004 to 2008. In general, one can affirm that all models tested presented good results (much better than the current forecasting method used), with a mean absolute percent error (MAPE) of around 10%. The total financial impact for the company was 6.05% of annual sales.
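The MAPE figure quoted above is a standard accuracy metric. A minimal sketch of its computation (the demand and forecast values below are invented for illustration, not the study's data):

```python
def mape(actual, forecast):
    """Mean absolute percent error, in percent; zero actuals are skipped
    because the percentage error is undefined there."""
    pairs = [(a, f) for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * sum(abs((a - f) / a) for a, f in pairs) / len(pairs)

demand   = [120, 135, 150, 110, 160]   # hypothetical weekly demand
forecast = [110, 140, 138, 121, 150]   # hypothetical model output
print(round(mape(demand, forecast), 2))  # → 7.26
```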
Directory of Open Access Journals (Sweden)
A. G. Obolenskov
2016-09-01
Full Text Available Subject of Research. The paper presents theoretical and experimental analysis of the dependence of the determination error of a modulated optical signal under intense background illumination on the value of the mutual shift of two current-voltage characteristics when using a double synthesized aperture on a multiscan position-sensitive detector. Method. The studies were carried out on a specially designed setup that allows scanning the photosensitive area of the multiscan position-sensitive detector with an optical beam imitating intense solar illumination. At the same time, the error in determining the coordinate of a weak modulated optical signal is measured at different relative positions of the signal and background illumination and at different background powers. Main Results. Experimental studies have confirmed the theoretical conclusions. It is shown that the use of a double synthesized aperture of the multiscan position-sensitive detector, with a voltage shift of the current-voltage characteristics equal to 0.4 V, makes it possible to reduce the position determination error of a weak modulated signal by an order of magnitude. Practical Relevance. The research results open the possibility of increasing the accuracy of position-sensitive systems operating under background illumination exceeding the level of the information optical signal.
Wind Power Forecasting Error Frequency Analyses for Operational Power System Studies: Preprint
Energy Technology Data Exchange (ETDEWEB)
Florita, A.; Hodge, B. M.; Milligan, M.
2012-08-01
The examination of wind power forecasting errors is crucial for optimal unit commitment and economic dispatch of power systems with significant wind power penetrations. This scheduling process includes both renewable and nonrenewable generators, and the incorporation of wind power forecasts will become increasingly important as wind fleets constitute a larger portion of generation portfolios. This research considers the Western Wind and Solar Integration Study database of wind power forecasts and numerical actualizations. This database comprises more than 30,000 locations spread over the western United States, with a total wind power capacity of 960 GW. Error analyses for individual sites and for specific balancing areas are performed using the database, quantifying the fit to theoretical distributions through goodness-of-fit metrics. Insights into wind-power forecasting error distributions are established for various levels of temporal and spatial resolution, contrasts made among the frequency distribution alternatives, and recommendations put forth for harnessing the results. Empirical data are used to produce more realistic site-level forecasts than previously employed, such that higher resolution operational studies are possible. This research feeds into a larger work of renewable integration through the links wind power forecasting has with various operational issues, such as stochastic unit commitment and flexible reserve level determination.
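Quantifying the fit of forecast errors to a theoretical distribution, as described above, requires a goodness-of-fit metric. A generic sketch using a one-sample Kolmogorov-Smirnov distance against a fitted normal (synthetic stand-in errors, not the study's data or its specific metrics):

```python
import math
import random

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, mu, sigma):
    # one-sample Kolmogorov-Smirnov distance between the empirical CDF
    # and N(mu, sigma^2): max deviation over the sorted sample
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        c = normal_cdf(x, mu, sigma)
        d = max(d, abs((i + 1) / n - c), abs(c - i / n))
    return d

random.seed(1)
# stand-in for normalized wind power forecast errors
errors = [random.gauss(0.0, 0.1) for _ in range(2000)]
mu = sum(errors) / len(errors)
sigma = (sum((e - mu) ** 2 for e in errors) / len(errors)) ** 0.5
print(ks_statistic(errors, mu, sigma))  # small when the candidate distribution fits
```

In practice wind forecast errors are often heavier-tailed than normal, so the same statistic computed against alternative candidate distributions is what drives the choice among them.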
The error source analysis of oil spill transport modeling:a case study
Institute of Scientific and Technical Information of China (English)
LI Yan; ZHU Jiang; WANG Hui; KUANG Xiaodi
2013-01-01
Numerical modeling is an important tool to study and predict the transport of oil spills. However, the accuracy of numerical models is not always good enough to provide reliable information for oil spill transport. It is necessary to analyze and identify major error sources for the models. A case study was conducted to analyze error sources of a three-dimensional oil spill model that was used operationally for oil spill forecasting in the National Marine Environmental Forecasting Center (NMEFC), the State Oceanic Administration, China. On June 4, 2011, oil from the sea bed spilled into seawater in the Penglai 19-3 region, the largest offshore oil field of China, and polluted an area of thousands of square kilometers in the Bohai Sea. Satellite remote sensing images were collected to locate oil slicks. By performing a series of model sensitivity experiments with different wind and current forcings and comparing the model results with the satellite images, it was identified that the major errors of the long-term simulation for oil spill transport came from the wind fields and the wind-induced surface currents. An inverse model was developed to estimate the temporal variability of emission intensity at the oil spill source, which revealed the importance of accuracy in the oil spill source emission time function.
Sampling error study for rainfall estimate by satellite using a stochastic model
Shin, Kyung-Sup; North, Gerald R.
1988-01-01
In a parameter study of satellite orbits, sampling errors of area-time averaged rain rate due to temporal sampling by satellites were estimated. The sampling characteristics were studied by accounting for the varying visiting intervals and varying fractions of averaging area on each visit as a function of the latitude of the grid box for a range of satellite orbital parameters. The sampling errors were estimated by a simple model based on the first-order Markov process of the time series of area averaged rain rates. For a satellite of nominal Tropical Rainfall Measuring Mission (Thiele, 1987) carrying an ideal scanning microwave radiometer for precipitation measurements, it is found that sampling error would be about 8 to 12 pct of estimated monthly mean rates over a grid box of 5 X 5 degrees. It is suggested that an observation system based on a low inclination satellite combined with a sunsynchronous satellite simultaneously might be the best candidate for making precipitation measurements from space.
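The first-order Markov (AR(1)) idea behind the sampling-error estimate can be sketched directly: generate a correlated rain-rate series, subsample it at a satellite revisit interval, and compare the sampled mean with the full-series mean. The decorrelation time, revisit interval, and rain-rate model below are illustrative assumptions, not the paper's parameters:

```python
import math
import random

random.seed(42)
phi = math.exp(-1.0 / 12.0)   # hourly AR(1) coefficient, assumed ~12 h decorrelation
n_hours = 720                 # one month of hourly area-averaged rain rates
visit_every = 12              # assumed satellite revisit interval, hours

# AR(1) series with nonnegative, skewed innovations as a rain-like stand-in
truth = []
r = 1.0
for _ in range(n_hours):
    r = phi * r + (1.0 - phi) * random.expovariate(1.0)
    truth.append(r)

true_mean = sum(truth) / len(truth)          # "continuously observed" monthly mean
sampled = truth[::visit_every]               # what the satellite actually sees
sampled_mean = sum(sampled) / len(sampled)
rel_err = abs(sampled_mean - true_mean) / true_mean
print(rel_err)  # relative sampling error for this realization
```

Repeating this over many realizations and orbital geometries is what yields error statistics like the 8 to 12 percent figure quoted above.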
Determining the Errors in Output Kinematic Parameters of Planar Mechanisms with a Complex Structure
Directory of Open Access Journals (Sweden)
Trzaska W.
2014-11-01
Full Text Available The study is focused on determining the errors in output kinematic parameters (position, velocity, acceleration, jerk) of entire links or their selected points in complex planar mechanisms. The number of DOFs of the kinematic system is assumed to be equal to the number of drives, and the rigid links are assumed to be connected by ideal, clearance-free geometric constraints. Input data include basic parameters of the mechanism with the involved errors as well as kinematic parameters of the driving links and the involved errors. Output errors in kinematic parameters are determined based on the linear theory of errors.
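The linear theory of errors referred to above propagates small input errors through the first derivatives of the output function. A minimal sketch for the endpoint position of a hypothetical two-link planar mechanism (link lengths, angles, and error magnitudes are invented for illustration):

```python
import math

def endpoint_error(l1, l2, t1, t2, dl1, dl2, dt1, dt2):
    """First-order (linear) propagation of link-length and joint-angle errors
    to the endpoint x = l1*cos(t1) + l2*cos(t1+t2), y = l1*sin(t1) + l2*sin(t1+t2)."""
    s1, c1 = math.sin(t1), math.cos(t1)
    s12, c12 = math.sin(t1 + t2), math.cos(t1 + t2)
    # partial derivatives of x and y w.r.t. (l1, l2, t1, t2)
    dx = [c1, c12, -l1 * s1 - l2 * s12, -l2 * s12]
    dy = [s1, s12,  l1 * c1 + l2 * c12,  l2 * c12]
    errs = [dl1, dl2, dt1, dt2]
    ex = math.sqrt(sum((d * e) ** 2 for d, e in zip(dx, errs)))
    ey = math.sqrt(sum((d * e) ** 2 for d, e in zip(dy, errs)))
    return ex, ey

# 0.5 m and 0.3 m links, 0.1 mm length errors, 0.1 deg angle errors (all assumed)
ex, ey = endpoint_error(0.5, 0.3, math.radians(30), math.radians(45),
                        1e-4, 1e-4, math.radians(0.1), math.radians(0.1))
print(ex, ey)  # standard-deviation-style endpoint position errors, metres
```

Velocity, acceleration, and jerk errors follow the same pattern, with the partials taken of the corresponding time derivatives of the output coordinates.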
Directory of Open Access Journals (Sweden)
Kaspar Küng
2013-01-01
Full Text Available The purpose of this study was (1) to determine the frequency and type of medication errors (MEs), (2) to assess the number of MEs prevented by registered nurses, (3) to assess the consequences of MEs for patients, and (4) to compare the number of MEs reported by a newly developed medication error self-reporting tool to the number reported by the traditional incident reporting system. We conducted a cross-sectional study on MEs in the Cardiovascular Surgery Department of Bern University Hospital in Switzerland. Eligible registered nurses involved in the medication process were included. Data on MEs were collected using an investigator-developed medication error self-reporting tool (MESRT) that asked about the occurrence and characteristics of MEs. Registered nurses were instructed to complete a MESRT at the end of each shift even if there was no ME. All MESRTs were completed anonymously. During the one-month study period, a total of 987 MESRTs were returned. Of the 987 completed MESRTs, 288 (29%) indicated that there had been an ME. Registered nurses reported preventing 49 (5%) MEs. Overall, eight (2.8%) MEs had patient consequences. The high response rate suggests that this new method may be a very effective approach to detect, report, and describe MEs in hospitals.
Agogo, G.O.; Voet, van der H.; Veer, van 't P.; Ferrari, P.; Leenders, M.; Muller, D.C.; Sánchez-Cantalejo, E.; Bamia, C.; Braaten, T.; Knüppel, S.; Johansson, I.; Eeuwijk, van F.A.; Boshuizen, H.C.
2014-01-01
In epidemiologic studies, measurement error in dietary variables often attenuates association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference m
Winter, Samantha Lee; Forrest, Sarah Michelle; Wallace, Joanne; Challis, John H
2017-08-08
The purpose of this study was to validate a new geometric solids model, developed to address the lack of female specific models for body segment inertial parameter estimation. A second aim was to determine the effect of reducing the number of geometric solids used to model the limb segments on model accuracy. The 'full' model comprised 56 geometric solids, the 'reduced' 31, and the 'basic' 16. Predicted whole-body inertial parameters were compared with direct measurements (reaction board, scales), and predicted segmental parameters with those estimated from whole-body DXA scans for 28 females. The percentage root mean square error (%RMSE) for whole-body volume was geometric solids are required to more accurately model the trunk.
Soda, K J; Slice, D E; Naylor, G J P
2017-01-01
Over the past few decades, geometric morphometric methods have become increasingly popular and powerful tools to describe morphological data, while over the same period artificial neural networks have had a similar rise in the classification of specimens to preconceived groups. However, there has been little research into how well these two systems operate together, particularly in comparison to preexisting techniques. In this study, geometric morphometric data and multilayer perceptrons, a style of artificial neural network, were used to classify shark teeth from the genus Carcharhinus to species. Three datasets of varying size and species differences were used. We compared the performance of this combination with geometric morphometric data in a linear discriminant function analysis, linear measurements in a linear discriminant function analysis, and a preexisting methodology from the literature that incorporates linear measurements and a two-layered discriminant function analysis. Across datasets, geometric morphometric data in a multilayer perceptron tended to yield modest accuracies but accuracies that varied less across species, whereas other methods were able to achieve higher accuracies in some species at the expense of lower accuracies in others. Further, the performance of the two-layered discriminant function analysis illustrates that constraining what material is classified can increase the accuracy of a method. Based on this tradeoff, the best methodology will depend on the scope of the study and the amount of material available. J. Morphol. 278:131-141, 2017. © 2016 Wiley Periodicals, Inc.
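Geometric morphometric data of the kind fed to these classifiers is typically produced by Procrustes superimposition, which removes location, scale, and rotation from landmark configurations before analysis. A minimal two-dimensional ordinary-Procrustes sketch (a generic illustration of the preprocessing step, not the study's pipeline; the square landmarks are toy data):

```python
import math

def center_and_scale(pts):
    # translate the centroid to the origin and scale to unit centroid size
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    c = [(x - cx, y - cy) for x, y in pts]
    size = math.sqrt(sum(x * x + y * y for x, y in c))
    return [(x / size, y / size) for x, y in c]

def procrustes_distance(a, b):
    # rotate configuration b optimally onto a, then take the residual distance
    a, b = center_and_scale(a), center_and_scale(b)
    num = sum(ay * bx - ax * by for (ax, ay), (bx, by) in zip(a, b))
    den = sum(ax * bx + ay * by for (ax, ay), (bx, by) in zip(a, b))
    t = math.atan2(num, den)  # optimal rotation angle
    rb = [(x * math.cos(t) - y * math.sin(t),
           x * math.sin(t) + y * math.cos(t)) for x, y in b]
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                         for (ax, ay), (bx, by) in zip(a, rb)))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rotated = [(0, 0), (0, 1), (-1, 1), (-1, 0)]   # the same square, rotated 90°
print(procrustes_distance(square, rotated))    # ≈ 0: shapes are identical
```

The aligned coordinates (or distances like the one above) are then what a discriminant analysis or a multilayer perceptron consumes as shape variables.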
Geometric methods for discrete dynamical systems
Easton, Robert W
1998-01-01
This book looks at dynamics as an iteration process where the output of a function is fed back as an input to determine the evolution of an initial state over time. The theory examines errors which arise from round-off in numerical simulations, from the inexactness of mathematical models used to describe physical processes, and from the effects of external controls. The author provides an introduction accessible to beginning graduate students that emphasizes geometric aspects of the theory. Conley's ideas about rough orbits and chain-recurrence play a central role in the treatment. The book will be a useful reference for mathematicians, scientists, and engineers studying this field, and an ideal text for graduate courses in dynamical systems.
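The sensitivity to round-off error that motivates this theory is easy to exhibit with a one-line chaotic iteration: a perturbation near machine precision grows exponentially until the two trajectories bear no resemblance to each other (the logistic map is chosen purely for illustration; it is not from the book):

```python
def logistic(x, r=4.0):
    # chaotic for r = 4 on [0, 1]; nearby orbits separate exponentially
    return r * x * (1.0 - x)

a = 0.3
b = 0.3 + 1e-12   # perturbation at roughly round-off scale
max_diff = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_diff = max(max_diff, abs(a - b))
print(max_diff)  # grows to order one: the trajectories have fully decoupled
```

This is why the book's notions of rough orbits and chain-recurrence matter: individual numerically computed orbits are unreliable, but suitably coarsened (pseudo-orbit) structure survives small errors.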
Energy Technology Data Exchange (ETDEWEB)
Donis Gil, S.; Robayna Duque, B. E.; Jimenez Sosa, A.; Hernandez Armas, O.; Gonzalez Martin, A. E.; Hernandez Armas, J.
2013-07-01
The calculation of SM is based on positioning (set-up) errors, which are calculated from the patient's movements in 3D. This paper is an exploratory study of 20 patients with a prostate tumor location in which the set-up errors for two work protocols are evaluated. (Author)
Error and jitter effect studies on the SLED for the BEPC Ⅱ-linac
Institute of Scientific and Technical Information of China (English)
PEI Shi-Lun; LI Xiao-Ping; XIAO Ou-Zheng
2012-01-01
An RF pulse compressor is a device used to convert a long RF pulse to a short one with a much higher peak RF magnitude. SLED can be regarded as the earliest RF pulse compressor to be used in large-scale linear accelerators. It has been widely studied around the world and applied in the BEPC and BEPC Ⅱ linacs for many years. During routine operation, error and jitter effects deteriorate the performance of SLED, in either the amplitude or the phase of the output electromagnetic wave. The error effects mainly include the frequency drift induced by cooling water temperature variation, and the frequency/Qo/β unbalances between the two energy storage cavities caused by mechanical fabrication or microwave tuning. The jitter effects refer to the PSK switching phase and time jitters. In this paper, we re-derive the generalized formulae for the conventional SLED used in the BEPC Ⅱ linac, and the error and jitter effects on SLED performance are also investigated.
A study on mechanical errors in Cone Beam Computed Tomography (CBCT) System
Energy Technology Data Exchange (ETDEWEB)
Lee, Yi Seong; Yoo, Eun Jeong; Choi, Kyoung Sik [Dept. of Radiation Oncology, Anyang SAM Hospital, Anyang (Korea, Republic of); Lee, Jong Woo [Dept. of Radiation Oncology, Konkuk University Medical Center, Seoul (Korea, Republic of); Suh, Tae Suk [Dept. of Biomedical Engineering and Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Kim, Jeong Koo [Dept. of Radiological Science, Hanseo University, Seosan (Korea, Republic of)
2013-06-15
This study investigated the rate of setup variance caused by the rotating unbalance of the gantry in image-guided radiation therapy. The equipment used was a linear accelerator (Elekta Synergy™, UK) with a three-dimensional volume imaging mode (3D Volume View) in its cone beam computed tomography (CBCT) system. 2D images obtained by rotating 360° and 180° were reconstructed into 3D images. A Catphan503 phantom and a homogeneous phantom were used to measure the setup errors, and a ball-bearing phantom was used to check the rotation axis of the CBCT. The volume images from CBCT using the Catphan503 phantom and the homogeneous phantom were analyzed and compared to images from conventional CT in the six-dimensional view (X, Y, Z, Roll, Pitch, and Yaw). The setup errors differed by 0.6 mm in X, 0.5 mm in Y, and 0.5 mm in Z when the gantry rotated 360° in orthogonal coordinates, whereas when rotated 180° the errors measured 0.9 mm, 0.2 mm, and 0.3 mm in X, Y, and Z respectively. In the rotating coordinates, the greater the rotating unbalance, the greater the average setup error. The resolution of the CBCT images differed by two levels from the recommended table values. CBCT showed good agreement with each of the recommended values for mechanical safety, geometric accuracy and image quality. The rotating unbalance of the gantry hardly varied in orthogonal coordinates. However, in rotating coordinates the gantry exceeded the recommended value of ±1°. Therefore, six-dimensional correction is needed when sophisticated radiation therapy is performed.
Manwani, Naresh
2010-01-01
In this paper we present a new algorithm for learning oblique decision trees. Most of the current decision tree algorithms rely on impurity measures to assess the goodness of hyperplanes at each node while learning a decision tree in a top-down fashion. These impurity measures do not properly capture the geometric structures in the data. Motivated by this, our algorithm uses a strategy to assess the hyperplanes in such a way that the geometric structure in the data is taken into account. At each node of the decision tree, we find the clustering hyperplanes for both the classes and use their angle bisectors as the split rule at that node. We show through empirical studies that this idea leads to small decision trees and better performance. We also present some analysis to show that the angle bisectors of clustering hyperplanes that we use as the split rules at each node, are solutions of an interesting optimization problem and hence argue that this is a principled method of learning a decision tree.
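The split rule described above, the angle bisectors of the two clustering hyperplanes, has a simple closed form: points equidistant from two unit-normal hyperplanes satisfy w1·x + b1 = ±(w2·x + b2). A minimal sketch (toy planes chosen for illustration, not the paper's implementation):

```python
import math

def unit(w, b):
    # normalize a hyperplane w.x + b = 0 so that ||w|| = 1
    n = math.sqrt(sum(wi * wi for wi in w))
    return [wi / n for wi in w], b / n

def angle_bisectors(w1, b1, w2, b2):
    """Return the two bisecting hyperplanes of w1.x + b1 = 0 and w2.x + b2 = 0.
    With unit normals, equidistant points satisfy w1.x + b1 = +/-(w2.x + b2)."""
    w1, b1 = unit(w1, b1)
    w2, b2 = unit(w2, b2)
    bis1 = ([a - c for a, c in zip(w1, w2)], b1 - b2)   # the '+' branch
    bis2 = ([a + c for a, c in zip(w1, w2)], b1 + b2)   # the '-' branch
    return bis1, bis2

# clustering hyperplanes x = 0 and y = 0: the bisectors are y = x and y = -x
(bw1, bb1), (bw2, bb2) = angle_bisectors([1, 0], 0.0, [0, 1], 0.0)
print(bw1, bb1)  # [1.0, -1.0] 0.0  -> plane x - y = 0, i.e. y = x
print(bw2, bb2)  # [1.0, 1.0] 0.0   -> plane x + y = 0, i.e. y = -x
```

In the tree-learning setting, each class's clustering hyperplane is fit to that class's points first, and whichever of the two bisectors gives the better impurity reduction becomes the oblique split at the node.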
DEFF Research Database (Denmark)
af Rosenschöld, Per Munck; Aznar, Marianne C; Nygaard, Ditte E;
2010-01-01
Proton therapy of lung cancer holds the potential for a reduction of the volume of irradiated normal lung tissue. In this work we investigate the robustness of intensity modulated proton therapy (IMPT) plans to motion, and evaluate a geometrical tumour tracking method to compensate for tumour...
Geometrization of Trace Formulas
Frenkel, Edward
2010-01-01
Following our joint work arXiv:1003.4578 with Robert Langlands, we make the first steps toward developing geometric methods for analyzing trace formulas in the case of the function field of a curve defined over a finite field. We also suggest a conjectural framework of geometric trace formulas for curves defined over the complex field, which exploits the categorical version of the geometric Langlands correspondence.
A study on nurses' perception on the medication error at one of the hospitals in East Malaysia.
Hassan, H; Das, S; Se, H; Damika, K; Letchimi, S; Mat, S; Packiavathy, R; Zulkifli, S Z S
2009-01-01
Medication error is defined as any preventable event that might cause or lead to inappropriate use or harm of the patient. Such events could be due to compounding, dispensing, distribution, administration and monitoring. The aim of the present study was to determine the nurses' perceptions of medication errors that were related directly or indirectly to the process of drug administration. MATERIALS AND METHODS. This was a descriptive cross-sectional study conducted on 92 staff nurses working in selected wards in one of the hospitals in East Malaysia. Data were obtained through structured questionnaires. RESULTS. Analysis of data was done through the SPSS program for descriptive and inferential statistics. Out of a total of 92 subjects, sixty-eight (73.9%) indicated that medication errors occurred because the nurses were tired and exhausted. Seventy-nine subjects (85.9%) believed that any medication error should be reported to the doctors; another 74 (80.2%) knew that their colleagues had committed medication errors, and 52 (56.5%) did not report the case. Forty-eight (52.17%) subjects had committed a medication error at least once. Of these 48, 45 (93.75%) believed that the error committed was not serious, while 39 (81.25%) believed the error occurred during the first 5 years of their working experience. The findings showed that the incidence of medication error was due to defects in the organizational system itself and not solely due to mistakes on the part of any individual.
Despan, Daniela; Erard, S.; Barucci, M. A.; Josset, J. L.; Beauvivre, S.; Chevrel, S.; Pinet, P.; Koschny, D.; Almeida, M.; Foing, B. H.; AMIE Team
2007-10-01
AMIE, the Advanced Moon micro-Imager Experiment on board the ESA lunar mission SMART-1, is an imaging system to survey the terrain in visible and near-infrared light. AMIE provides high resolution images obtained using a tele-objective with a 5.3° x 5.3° field of view and a sensor of 1024 x 1024 pixels. The output images have a resolution of 45 m/pixel at 500 km altitude and are encoded with 10 bits/pixel. From the 300 km pericenter altitude, the same field of view corresponds to a spatial resolution of about 30 m/pixel. The FOV is shared by various filters, allowing reconstruction of mosaics of the surface in 3 colors, depending on pointing mode. Spot-pointing observations provide photometric sequences that allow study of the surface properties in restricted areas. One of the scientific objectives of the mission is to get high resolution imaging of the Moon surface, e.g. high latitude regions in the southern hemisphere. In order to map the lunar surface with AMIE, systematic analysis and processing is being carried out using the whole data set. Geometrical analysis of AMIE images relies on the SPICE system: image coordinates are computed to get precise projection at the surface, and illumination angles are computed to analyze the photometric sequences. High resolution mosaics were constructed and then compared to lower resolution Clementine UV-Vis and NIR images. Spot-pointing sequences are used to constrain the photometric and physical properties of surface materials in areas of interest, based on Hapke's modeling. Optical alignment parameters in the SPICE kernels have been refined and provide absolute coordinates in the IAU lunar frame (ULCN). These reveal discrepancies with the Clementine basemap ranging up to some 0.1° in the equatorial regions, as expected (e.g., Cook et al DPS 2002; Archinal et al. EPSC 2006). A progress report will be presented at the conference.
Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education
Directory of Open Access Journals (Sweden)
Dirk Hastedt
2015-06-01
This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments, on the premise that the average achievement scores from the latter can be linked to the international metric. In addition to raising issues associated with different testing conditions, administrative procedures, and the like, this approach also poses psychometric challenges. This paper endeavors to shed some light on the effects that can be expected, the linkage errors in particular, for countries using this practice. The ILSA selected for this simulation study was IEA TIMSS 2011, and the three countries used as the national assessment cases were Botswana, Honduras, and Tunisia, all of which participated in TIMSS 2011. The items selected as common to the simulated national tests and the international test came from the Grade 4 TIMSS 2011 mathematics items that IEA released into the public domain after completion of this assessment. The findings of the current study show that linkage errors reached acceptable levels if 30 or more items were used for the linkage, although the errors were still significantly higher than the TIMSS cutoffs. Comparison of the estimated country averages based on the simulated national surveys with the averages based on the international TIMSS assessment revealed only one instance across the three countries of the estimates approaching parity. Also, the percentages of students in these countries who actually reached the defined benchmarks on the TIMSS achievement scale differed significantly from the results based on TIMSS and the results for the simulated national assessments. In conclusion, we advise against using groups of released items from international assessments in national
Measurement error in epidemiologic studies of air pollution based on land-use regression models.
Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino
2013-10-15
Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
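The mechanism the abstract describes, a LUR model fitted by variable selection on a small set of monitoring sites whose predictions then enter a health model, can be illustrated with a toy simulation. This is not the authors' analysis: the sample sizes, coefficients, and selection rule below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_people, n_candidates = 30, 2000, 40

# True exposure is driven by 3 geographic covariates plus unexplained variability.
Z_sites = rng.normal(size=(n_sites, n_candidates))
Z_people = rng.normal(size=(n_people, n_candidates))
true_coef = np.zeros(n_candidates)
true_coef[:3] = 1.0
x_sites = Z_sites @ true_coef + rng.normal(scale=1.0, size=n_sites)
x_people = Z_people @ true_coef + rng.normal(scale=1.0, size=n_people)

# "LUR": keep the candidate covariates most correlated with the site
# measurements, fit OLS on them, and predict exposure at participant locations.
corr = np.abs([np.corrcoef(Z_sites[:, j], x_sites)[0, 1]
               for j in range(n_candidates)])
keep = np.argsort(corr)[-8:]                      # variable selection step
beta, *_ = np.linalg.lstsq(Z_sites[:, keep], x_sites, rcond=None)
x_hat = Z_people[:, keep] @ beta                  # error-prone exposure estimate

# Health model: the outcome depends on TRUE exposure with slope 1.
y = 1.0 * x_people + rng.normal(scale=2.0, size=n_people)
slope = np.polyfit(x_hat, y, 1)[0]
print(f"estimated health effect: {slope:.2f} (truth = 1.00)")
```

Because the LUR predictions carry error from both the small monitoring network and the selection step, the estimated slope generally deviates from the true value of 1; the magnitude depends on the number of sites, the number of candidate predictors, and the explainable exposure variability, as the abstract states.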
Cheng, Lu Pien
2015-01-01
In this study, ways in which 9-year-old students from one Singapore school solved 1-step and 2-step word problems based on the three semantic structures were examined. The students' work and diagrams provided insights into the range of errors in word problem solving for 1-step and 2-step word problems. In particular, the errors provided some…
Institute of Scientific and Technical Information of China (English)
陈思; 蔡丽慧
2014-01-01
The present study is a corpus-based error analysis of IL written production by Chinese EFL learners. Based on Interlanguage theory, the research uses the tagged WECCL as the database and makes an overall investigation of the high-frequency grammatical errors committed by Chinese EFL learners.
Primary School Teacher Candidates' Geometric Habits of Mind
Köse, Nilüfer Y.; Tanisli, Dilek
2014-01-01
Geometric habits of mind are productive ways of thinking that support learning and using geometric concepts. Identifying primary school teacher candidates' geometric habits of mind is important as they affect the development of their future students' geometric thinking. Therefore, this study attempts to determine primary school teachers' geometric…
Proof in geometry with "mistakes in geometric proofs"
Fetisov, A I
2006-01-01
This single-volume compilation of 2 books explores the construction of geometric proofs. It offers useful criteria for determining correctness and presents examples of faulty proofs that illustrate common errors. 1963 editions.
Evaluating Method Engineer Performance: an error classification and preliminary empirical study
Directory of Open Access Journals (Sweden)
Steven Kelly
1998-11-01
We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as a means for presenting the methods. These different paradigms may have their own effects on how easily and well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.
A Study of Non-English Majors’Errors in English Writing
Institute of Scientific and Technical Information of China (English)
殷慧智
2013-01-01
Researchers in the field of Second Language Acquisition (SLA) have long been interested in the teaching of college English writing. Their research topics range from how to improve college students' ability to write in English as their second language to how to correct wrong expressions in their writing. Such studies are abundant. This article mainly aims to explore college students' various writing errors and mistakes, trying to provide some hints for college students and teachers about English writing.
Li, Chiang-shan Ray; Huang, Cong; Yan, Peisi; Paliwal, Prashni; Constable, Robert Todd; Sinha, Rajita
2008-06-01
The ability to detect errors and adjust behavior accordingly is essential for maneuvering in an uncertain environment. Errors are particularly prone to occur when multiple, conflicting responses are registered in a situation that requires flexible behavioral outputs; for instance, when a go signal requires a response and a stop signal requires inhibition of the response during a stop signal task (SST). Previous studies employing the SST have provided ample evidence indicating the importance of the medial cortical brain regions in conflict/error processing. Other studies have also related these regional activations to postconflict/error behavioral adjustment. However, very few studies have directly explored the neural correlates of postconflict/error behavioral adjustment. Here we employed an SST to elicit errors in approximately half of the stop trials despite constant behavioral adjustment of the observers. Using functional magnetic resonance imaging, we showed that prefrontal loci including the ventrolateral prefrontal cortex are involved in post-error slowing in reaction time. These results delineate the neural circuitry specifically involved in error-associated behavioral modifications.
Learning from Errors at Work: A Replication Study in Elder Care Nursing
Leicher, Veronika; Mulder, Regina H.; Bauer, Johannes
2013-01-01
Learning from errors is an important way of learning at work. In this article, we analyse conditions under which elder care nurses use errors as a starting point for the engagement in social learning activities (ESLA) in the form of joint reflection with colleagues on potential causes of errors and ways to prevent them in future. The goal of our…
van den Bemt, P. M. L. A.; Robertz, R.; de Jong, A. L.; van Roon, E. N.; Leufkens, H. G. M.
2007-01-01
Background: Medication errors can result in harm, unless barriers to prevent them are present. Drug administration errors are less likely to be prevented, because they occur in the last stage of the drug distribution process. This is especially the case in non-alert patients, as patients often form the final barrier to prevention of errors.…
Quantification and handling of sampling errors in instrumental measurements: a case study
DEFF Research Database (Denmark)
Andersen, Charlotte Møller; Bro, R.
2004-01-01
Instrumental measurements are often used to represent a whole object even though only a small part of the object is actually measured. This can introduce an error due to the inhomogeneity of the product. Together with other errors resulting from the measuring process, such errors may have a serio...
Exploring New Geometric Worlds
Nirode, Wayne
2015-01-01
When students work with a non-Euclidean distance formula, geometric objects such as circles and segment bisectors can look very different from their Euclidean counterparts. Students and even teachers can experience the thrill of creative discovery when investigating these differences among geometric worlds. In this article, the author describes a…
Comparative Study of Communication Error between Conventional and Digital MCR Operators in NPPs
Energy Technology Data Exchange (ETDEWEB)
Kim, Seung Geun; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)
2015-05-15
In this regard, appropriate communication is directly related to efficient and safe system operation, and inappropriate communication is one of the main causes of accidents in various industries, since it can cause a lack of necessary information exchange between operators and lead to serious consequences in large process systems such as nuclear power plants. According to a study conducted by Y. Hirotsu in 2001, about 25 percent of human-error-caused incidents in NPPs were related to communication issues. Other studies have reported that 85 percent of human-error-caused incidents in the aviation industry and 92 percent in the railway industry were related to communication problems. Accordingly, the importance of efforts to reduce inappropriate communication has been emphasized in order to enhance the safety of such systems. As a result, the average ratio of inappropriate communication in digital MCRs was slightly higher than that in conventional MCRs, while the average ratio of no communication in digital MCRs was much smaller than that in conventional MCRs. Regarding the average ratio of inappropriate communication, it can be inferred that operators are still more familiar with conventional MCRs than with digital MCRs. More case studies are required for a more precise comparison, since only three cases were examined for digital MCRs. However, a similar result is expected because there are no differences in communication method, although there are many differences in the way procedures are carried out.
Kahle, Matthew
2009-01-01
We study the expected topological properties of Cech and Vietoris-Rips complexes built on randomly sampled points in R^d. These are, in some cases, analogues of known results for connectivity and component counts for random geometric graphs. However, an important difference in this setting is that homology is not monotone in the underlying parameter. In the sparse range, we compute the expectation and variance of the Betti numbers, and establish Central Limit Theorems and concentration of measure. In the dense range, we introduce Morse theoretic arguments to bound the expectation of the Betti numbers, which is the main technical contribution of this article. These results provide a detailed probabilistic picture to compare with the topological statistics of point cloud data.
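The component counts the abstract mentions for random geometric graphs (the Betti-0 analogue of the homological quantities studied) can be illustrated with a small union-find sketch; the point count, dimension, and radii below are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

def component_count(points, r):
    """Number of connected components (Betti_0) of the geometric graph
    linking points at distance < r, via union-find with path halving."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # pairwise squared distances; O(n^2) is fine for a small sketch
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        for j in range(i + 1, n):
            if d2[i, j] < r * r:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

rng = np.random.default_rng(1)
pts = rng.uniform(size=(200, 2))      # 200 random points in the unit square
for r in (0.01, 0.1, 0.5):
    print(r, component_count(pts, r))
```

Betti-0 is monotone nonincreasing in the radius (adding edges can only merge components); as the abstract notes, the higher Betti numbers of the Cech and Vietoris-Rips complexes are not monotone, which is what makes that setting harder.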
Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors.
Directory of Open Access Journals (Sweden)
Jian Weng
Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors, or diseased populations, and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly studies of children. The performance of templates based on a small sample size has not been evaluated in fMRI studies of children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created from the experimental population, a Chinese children template, and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. Thereby, we proposed and tested another method to reduce individual variation that included the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group.
Clinical Research Methodology 1: Study Designs and Methodologic Sources of Error.
Sessler, Daniel I; Imrey, Peter B
2015-10-01
Clinical research can be categorized by the timing of data collection: retrospective or prospective. Clinical research also can be categorized by study design. In case-control studies, investigators compare previous exposures (including genetic and other personal factors, environmental influences, and medical treatments) among groups distinguished by later disease status (broadly defined to include the development of disease or response to treatment). In cohort studies, investigators compare subsequent incidences of disease among groups distinguished by one or more exposures. Comparative clinical trials are prospective cohort studies that compare treatments assigned to patients by the researchers. Most errors in clinical research findings arise from 5 largely distinguishable classes of methodologic problems: selection bias, confounding, measurement bias, reverse causation, and excessive chance variation.
Linkage analysis of quantitative refraction and refractive errors in the Beaver Dam Eye Study.
Klein, Alison P; Duggal, Priya; Lee, Kristine E; Cheng, Ching-Yu; Klein, Ronald; Bailey-Wilson, Joan E; Klein, Barbara E K
2011-07-13
Refraction, as measured by spherical equivalent, is the need for an external lens to focus images on the retina. While genetic factors play an important role in the development of refractive errors, few susceptibility genes have been identified. However, several regions of linkage have been reported for myopia (2q, 4q, 7q, 12q, 17q, 18p, 22q, and Xq) and for quantitative refraction (1p, 3q, 4q, 7p, 8p, and 11p). To replicate previously identified linkage peaks and to identify novel loci that influence quantitative refraction and refractive errors, linkage analysis of spherical equivalent, myopia, and hyperopia in the Beaver Dam Eye Study was performed. Nonparametric, sibling-pair, genome-wide linkage analyses of refraction (spherical equivalent adjusted for age, education, and nuclear sclerosis), myopia and hyperopia in 834 sibling pairs within 486 extended pedigrees were performed. Suggestive evidence of linkage was found for hyperopia on chromosome 3, region q26 (empiric P = 5.34 × 10(-4)), a region that had shown significant genome-wide evidence of linkage to refraction and some evidence of linkage to hyperopia. In addition, the analysis replicated previously reported genome-wide significant linkages to 22q11 of adjusted refraction and myopia (empiric P = 4.43 × 10(-3) and 1.48 × 10(-3), respectively) and to 7p15 of refraction (empiric P = 9.43 × 10(-4)). Evidence was also found of linkage to refraction on 7q36 (empiric P = 2.32 × 10(-3)), a region previously linked to high myopia. The findings provide further evidence that genes controlling refractive errors are located on 3q26, 7p15, 7q36, and 22q11.
Geometric formula for prism deflection
Indian Academy of Sciences (India)
Apoorva G Wagh; Veer Chand Rakhecha
2004-08-01
While studying neutron deflections produced by a magnetic prism, we have stumbled upon a simple `geometric' formula. For a prism of refractive index n close to unity, the deflection simply equals the product of the refractive power n − 1 and the base-to-height ratio of the prism, regardless of the apex angle. The base and height of the prism are measured respectively along and perpendicular to the direction of beam propagation within the prism. The geometric formula greatly simplifies the optimisation of prism parameters to suit any specific experiment.
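Under the abstract's stated convention (B the base measured along the beam, H the height perpendicular to it, n the refractive index), the formula can be written as

```latex
\delta \;\approx\; (n - 1)\,\frac{B}{H}.
```

This is consistent with the familiar thin-prism deviation δ ≈ (n − 1)A, since for a thin prism the base-to-height ratio approaches the apex angle A in radians; the abstract's point is that the ratio form holds regardless of the apex angle.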
A Case Study of the Error Growth and Predictability of a Meiyu Frontal Heavy Precipitation Event
Institute of Scientific and Technical Information of China (English)
罗雨; 张立凤
2011-01-01
The Advanced Regional Eta-coordinate Model (AREM) is used to explore the predictability of a heavy rainfall event along the Meiyu front in China during 3-4 July 2003. Based on the sensitivity of precipitation prediction to initial data sources and initial uncertainties in different variables, the evolution of error growth and the associated mechanism are described and discussed in detail in this paper. The results indicate that the smaller-amplitude initial error presents a faster growth rate, and its growth is characterized by a transition from localized growth to widespread expansion of error. This modality of the error growth is closely related to the evolution of the precipitation episode, and consequently remarkable forecast divergence is found near the rainband, indicating that the rainfall area is a sensitive region for error growth. The initial error in the rainband contributes significantly to the forecast divergence, and its amplification and propagation are largely determined by the initial moisture distribution. The moisture condition also affects the error growth on smaller scales and the subsequent upscale error cascade. In addition, the error growth defined by an energy norm reveals that large error energy collocates well with the strong latent heating, implying that the occurrence of precipitation and error growth share the same energy source, the latent heat. This may impose an intrinsic predictability limit on the prediction of heavy precipitation.
Analytic and Experimental Studies of the Errors in Numerical Methods for the Valuation of Options
Institute of Scientific and Technical Information of China (English)
P.Lin; J. J. H. Miller; G. I. Shishkin
2008-01-01
The value of a European option satisfies the Black-Scholes equation with appropriately specified final and boundary conditions. We transform the problem to an initial boundary value problem in dimensionless form. There are two parameters in the coefficients of the resulting linear parabolic partial differential equation. For a range of values of these parameters, the solution of the problem has a boundary or an initial layer. The initial function has a discontinuity in the first-order derivative, which leads to the appearance of an interior layer. We construct analytically the asymptotic solution of the equation in a finite domain. Based on the asymptotic solution we can determine the size of the artificial boundary such that the required solution in a finite domain in x and at the final time is not affected by the boundary. Also, we study computationally the behaviour in the maximum norm of the errors in numerical solutions in cases such that one of the parameters varies from finite (or pretty large) to small values, while the other parameter is fixed and takes either finite (or pretty large) or small values. Crank-Nicolson explicit and implicit schemes using centered or upwind approximations to the derivative are studied. We present numerical computations, which determine experimentally the parameter-uniform rates of convergence. We note that this rate is rather weak, due probably to mixed sources of error such as initial and boundary layers and the discontinuity in the derivative of the solution.
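The setting can be illustrated with a minimal sketch: after the dimensionless transformation, the Black-Scholes problem reduces (up to coefficients) to a parabolic equation of heat type, and the ramp-shaped initial data below mimics the discontinuity in the first derivative that the abstract mentions. The scheme below is a plain Crank-Nicolson discretization of u_t = u_xx with a known closed-form solution for checking the error; the domain, grid sizes, and initial data are assumptions for the example, not the authors' parameter-uniform study.

```python
import numpy as np
from math import erf, sqrt, pi, exp

def exact(x, t):
    """Closed-form solution of u_t = u_xx with u(x,0) = max(x,0):
    the Gaussian kernel has variance 2t, so u = x*Phi(x/s) + s*phi(x/s)."""
    s = sqrt(2.0 * t)
    z = x / s
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)
    return x * Phi + s * phi

# Crank-Nicolson on [-4, 4] up to time T
J, N, T = 200, 200, 0.5
x = np.linspace(-4.0, 4.0, J + 1)
dx, dt = x[1] - x[0], T / N
lam = dt / dx**2
u = np.maximum(x, 0.0)                       # kinked initial data

# implicit-side tridiagonal matrix (I - lam/2 * D2) for the interior nodes
A = ((1 + lam) * np.eye(J - 1)
     + (-lam / 2) * np.eye(J - 1, k=1)
     + (-lam / 2) * np.eye(J - 1, k=-1))

for n in range(N):
    t_new = (n + 1) * dt
    # explicit half of the scheme: (I + lam/2 * D2) u^n
    rhs = u[1:-1] + (lam / 2) * (u[2:] - 2 * u[1:-1] + u[:-2])
    # Dirichlet boundary values taken from the exact solution
    bl, br = exact(x[0], t_new), exact(x[-1], t_new)
    rhs[0] += (lam / 2) * bl
    rhs[-1] += (lam / 2) * br
    u[1:-1] = np.linalg.solve(A, rhs)
    u[0], u[-1] = bl, br

err = np.abs(u - np.array([exact(xi, T) for xi in x])).max()
print(f"max error at T={T}: {err:.2e}")
```

With a kink in the initial derivative, the observed convergence rate of such schemes degrades relative to smooth data, which is the kind of behaviour the paper investigates experimentally.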
Mu, Dapeng; Yan, Haoming; Feng, Wei; Peng, Peng
2017-01-01
Filtering is a necessary step in Gravity Recovery and Climate Experiment (GRACE) data processing, but it leads to obvious signal leakage and attenuation, adversely affecting the quality of global and regional mass change estimates. We propose to use the Tikhonov regularization technique with the L-curve method to solve a correction equation that can reduce the leakage error caused by the filter used in GRACE data processing. We first demonstrate, with simulation studies in Greenland and Antarctica, that the leakage error caused by the Gaussian filter can be well corrected by our regularization technique. Furthermore, our regularization technique can restore the spatial distribution of the original mass changes. For example, after applying the regularization method to GRACE data (2003-2012), we find that GRACE mass changes tend to move from the interior to the coastal area in Greenland, consistent with other recent studies. After correction for the glacial isostatic adjustment (GIA) effect, our results show that the ice mass loss rates were 274 ± 30 and 107 ± 34 Gt/yr in Greenland and Antarctica from 2003 to 2012, respectively. An increase rate of 10 ± 4 Gt/yr in the Greenland interior is also detected.
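The correction idea, inverting a smoothing operator by Tikhonov regularization with the λ chosen at the L-curve corner, can be sketched under toy assumptions: a 1-D Gaussian smoothing matrix stands in for the GRACE filter, and the signal and noise levels are invented. This is an illustration of the generic technique, not the authors' spherical-harmonic implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
t = np.arange(n)

# Gaussian smoothing operator: a toy stand-in for the GRACE filter
sigma = 5.0
G = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
G /= G.sum(axis=1, keepdims=True)

x_true = np.zeros(n)
x_true[30:40] = 1.0                              # localized "mass change"
y = G @ x_true + rng.normal(scale=0.01, size=n)  # filtered + noisy observation

U, s, Vt = np.linalg.svd(G)
b = U.T @ y

def tikhonov(lam):
    """Minimizer of ||G x - y||^2 + lam^2 ||x||^2 via the SVD filter factors."""
    f = s / (s**2 + lam**2)
    return Vt.T @ (f * b)

lams = np.logspace(-6, 0, 60)
res = np.array([np.linalg.norm(G @ tikhonov(l) - y) for l in lams])
sol = np.array([np.linalg.norm(tikhonov(l)) for l in lams])

# L-curve corner: point of maximum curvature of (log residual, log solution norm)
lr, ls = np.log(res), np.log(sol)
d1r, d1s = np.gradient(lr), np.gradient(ls)
d2r, d2s = np.gradient(d1r), np.gradient(d1s)
curv = np.abs(d1r * d2s - d1s * d2r) / (d1r**2 + d1s**2) ** 1.5
i = 2 + np.argmax(curv[2:-2])                    # skip endpoint artifacts
lam_best = lams[i]

x_reg = tikhonov(lam_best)
print(lam_best,
      np.linalg.norm(tikhonov(lams[0]) - x_true),
      np.linalg.norm(x_reg - x_true))
```

The residual norm grows and the solution norm shrinks monotonically with λ; the corner balances the two, which is what makes a near-zero λ (amplified noise) and a very large λ (over-damped signal) both poor choices.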
Nazione, Samantha; Pace, Kristin
2015-01-01
Medical malpractice lawsuits are a growing problem in the United States, and there is much controversy regarding how to best address this problem. The medical error disclosure framework suggests that apologizing, expressing empathy, engaging in corrective action, and offering compensation after a medical error may improve the provider-patient relationship and ultimately help reduce the number of medical malpractice lawsuits patients bring to medical providers. This study provides an experimental examination of the medical error disclosure framework and its effect on amount of money requested in a lawsuit, negative intentions, attitudes, and anger toward the provider after a medical error. Results suggest empathy may play a large role in providing positive outcomes after a medical error.
Automated evaluation of setup errors in carbon ion therapy using PET: Feasibility study
Energy Technology Data Exchange (ETDEWEB)
Kuess, Peter, E-mail: peter.kuess@meduniwien.ac.at; Hopfgartner, Johannes; Georg, Dietmar [Department of Radiation Oncology, Division of Medical Radiation Physics, Comprehensive Cancer Center, Medical University Vienna, Vienna A-1090, Austria and Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Vienna A-1090 (Austria); Helmbrecht, Stephan [OncoRay - National Center for Radiation Research in Oncology, Medical Faculty Carl Gustav Carus, TU Dresden D-01307 (Germany); Fiedler, Fine [Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiation Physics, Dresden D-01307 (Germany); Birkfellner, Wolfgang [Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna A-1090, Austria and Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Vienna A-1090 (Austria); Enghardt, Wolfgang [OncoRay - National Center for Radiation Research in Oncology, Medical Faculty Carl Gustav Carus, TU Dresden D-01307, Germany and Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiation Physics, Dresden D-01307 (Germany)
2013-12-15
Purpose: To investigate the possibility of detecting patient mispositioning in carbon-ion therapy with particle therapy positron emission tomography (PET) in an automated, image-registration-based manner. Methods: Tumors in the head and neck (H and N), pelvic, lung, and brain regions were investigated. Biologically optimized carbon ion treatment plans were created with TRiP98. From these treatment plans, the reference β+-activity distributions were calculated using a Monte Carlo simulation. Setup errors were simulated by shifting or rotating the computed tomography (CT). The expected β+ activity was calculated for each plan with shifts. Finally, the reference particle therapy PET images were compared to the “shifted” β+-activity distribution simulations using Pearson's correlation coefficient (PCC). To account for different PET monitoring options, in-beam PET was compared to three different in-room scenarios. Additionally, the dosimetric effects of the CT misalignments were investigated. Results: Automated PCC detection of patient mispositioning was possible in the investigated indications for cranio-caudal shifts of 4 mm and more, except for prostate tumors. In the rather homogeneous pelvic region, the generated β+-activity distributions of the reference and compared PET images were too similar, so setup errors in this region could not be detected. Regarding lung lesions, detection strongly depended on the exact tumor location: in the center of the lung, tumor misalignments could be detected down to 2 mm shifts, while resolving shifts of tumors close to the thoracic wall was more challenging. Rotational shifts in the H and N and lung regions of +6° and more could be detected using in-room PET and partly using in-beam PET. Comparing in-room PET to in-beam PET, no obvious trend was found. However, among the in-room scenarios, a longer measurement time was found to be advantageous. Conclusions: This study scopes the use of
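The PCC comparison step can be sketched in a few lines. The 2-D Gaussian "activity" image and the pixel shifts below are invented stand-ins for the simulated β+ distributions; the point is only that the correlation with the reference decays as the misalignment grows, which is what makes a PCC threshold usable as an automated mispositioning flag.

```python
import numpy as np

def pcc(a, b):
    """Pearson's correlation coefficient between two images."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Toy beta+ activity: a smooth blob on a 64x64 grid (invented stand-in)
y, x = np.mgrid[0:64, 0:64]
ref = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 8.0 ** 2))

def shifted(img, dx):
    """Rigid shift along one axis, mimicking a setup error of dx pixels."""
    return np.roll(img, dx, axis=1)

scores = {dx: pcc(ref, shifted(ref, dx)) for dx in (0, 1, 2, 4, 8)}
for dx, s in scores.items():
    print(dx, round(s, 3))
```

In the real problem the decay rate depends on how structured the activity distribution is, which matches the abstract's finding that the homogeneous pelvic region defeats the method while heterogeneous regions do not.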
Katahira, Yu; Fukuta, Masahiko; Katsuki, Masahide; Momochi, Takeshi; Yamamoto, Yoshihiro
2016-09-01
Recently, there has been demand for improved quality in the aspherical lenses mounted in camera units. Optical lenses in high-volume production are generally made by a molding process using cemented carbide or Ni-P coated steel molds, selected according to the lens material, such as glass or plastic. Additionally, high quality of the cut or ground surface of the mold can be obtained thanks to developments in mold production technologies. As a result, form errors of less than 100 nm PV and surface roughness of 1 nm Ra can be achieved in molds. However, still higher quality is now required, covering not only form error (PV) and surface roughness (Ra) but also other surface characteristics. For instance, distorted shapes can appear in imaging because of middle-spatial-frequency undulations on the lens surface. In this study, we focused on several types of sinuous structures, which can be classified as form errors with respect to the designed surface and which deteriorate optical system performance, and we developed mold production processes that minimize undulations on the surface. The report describes an analysis process using the power spectral density (PSD) to evaluate micro-undulations on the machined surface quantitatively. In addition, a grinding process with circumferential velocity control proved effective for the fabrication of large-aperture lenses and could minimize undulations appearing on the outer area of the machined surface; an optical glass lens molding process using a high-precision press machine is also described.
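A PSD evaluation of this kind can be sketched as follows; the synthetic profile, sampling interval, and band limits are invented for illustration and are not the authors' measurement setup. The profile mixes a low-order form error, a mid-spatial-frequency undulation, and fine roughness, and the PSD separates them by spatial frequency.

```python
import numpy as np

n, dx = 4096, 1e-6                 # 4096 samples at 1 um spacing (assumed values)
x = np.arange(n) * dx
rng = np.random.default_rng(3)
profile = (50e-9 * np.sin(2 * np.pi * x / (n * dx))   # low-order form error
           + 5e-9 * np.sin(2 * np.pi * x / 100e-6)    # 100 um mid-spatial undulation
           + rng.normal(scale=1e-9, size=n))          # fine roughness

# One-sided power spectral density of the height profile
F = np.fft.rfft(profile - profile.mean())
psd = (np.abs(F) ** 2) * dx / n
freq = np.fft.rfftfreq(n, dx)      # spatial frequency, cycles/m

band = (freq > 5e3) & (freq < 2e4) # mid-spatial-frequency band of interest (assumed)
f_mid = freq[band][np.argmax(psd[band])]
print(f"mid-band undulation near {f_mid:.3g} cycles/m")
```

Quantifying the undulation as a peak in a chosen frequency band, rather than folding it into a single PV or Ra number, is what lets such structures be tracked through the machining process.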
Multi-satellite rainfall sampling error estimates – a comparative study
Directory of Open Access Journals (Sweden)
A. Loew
2012-10-01
This study focuses on quantifying sampling-related uncertainty in satellite rainfall estimates. We conduct an observing system simulation experiment to estimate the sampling error for various constellations of low-Earth-orbiting and geostationary satellites. Two types of microwave instruments are currently available: cross-track sounders and conical scanners. We evaluate the differences in sampling uncertainty for various satellite constellations that carry instruments of a common type, as well as in combination with geostationary observations. A precise orbital model is used to simulate realistic satellite overpasses, with orbital shifts taken into account. With this model we resampled rain gauge time series to simulate satellite rainfall estimates free of retrieval and calibration errors. We concentrate on two regions, Germany and Benin, areas with different precipitation regimes. Our results show that the sampling uncertainty for all satellite constellations does not differ greatly between the areas, despite the differences in local precipitation patterns. The addition of 3-hourly geostationary observations provides an equal performance improvement in Germany and Benin, reducing rainfall undersampling by 20-25% of the total rainfall amount. We do not find a significant difference in rainfall sampling between conical imagers and cross-track sounders.
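The resampling idea behind such an experiment can be sketched with a toy simulation: subsample a "gauge" time series at a satellite revisit interval and measure the error of the resulting mean rain rate. The rain statistics and revisit times below are invented; the real study uses a precise orbital model, not a fixed revisit period.

```python
import numpy as np

rng = np.random.default_rng(4)
hours = 24 * 90                            # one season of hourly "gauge" data

def sampling_rmse(revisit, trials=200):
    """RMS error of the seasonal-mean rain rate when the series is seen
    only every `revisit` hours (a toy observing system simulation)."""
    errs = []
    for _ in range(trials):
        # intermittent rain: wet with probability 0.1, exponential intensity
        rain = (rng.random(hours) < 0.1) * rng.exponential(1.0, hours)
        phase = rng.integers(revisit)      # random overpass phase
        errs.append(rain[phase::revisit].mean() - rain.mean())
    return float(np.sqrt(np.mean(np.square(errs))))

e3, e12, e24 = (sampling_rmse(r) for r in (3, 12, 24))
print(e3, e12, e24)   # denser revisits give smaller sampling error
```

Adding 3-hourly geostationary observations to a sparse low-Earth-orbit constellation plays the same role as shortening the revisit interval here: more of the intermittent rain process is seen, so the sampling error shrinks.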
Directory of Open Access Journals (Sweden)
Chua SS
2017-03-01
Siew-Siang Chua,1 Sim-Mei Choo,1 Che Zuraini Sulaiman,2 Asma Omar,3 Meow-Keong Thong3 1Department of Pharmacy, Faculty of Medicine, University of Malaya, 2Pharmacy Department, University Malaya Medical Centre, 3Department of Paediatrics, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia Background and purpose: Drug administration errors are more likely to reach the patient than other medication errors. The main aim of this study was to determine whether the sharing of information on drug administration errors among health care providers would reduce such problems. Patients and methods: This study involved direct, undisguised observations of drug administrations in two pediatric wards of a major teaching hospital in Kuala Lumpur, Malaysia. The study consisted of two phases: Phase 1 (pre-intervention) and Phase 2 (post-intervention). Data were collected by two observers over a 40-day period in both Phase 1 and Phase 2 of the study. Both observers were pharmacy graduates: Observer 1 had just completed her undergraduate pharmacy degree, whereas Observer 2 was doing her one-year internship as a provisionally registered pharmacist in the hospital under study. A drug administration error was defined as a discrepancy between the drug regimen received by the patient and that intended by the prescriber, and also drug administration procedures that did not follow standard hospital policies and procedures. Results from Phase 1 of the study were analyzed, presented and discussed with the ward staff before commencement of data collection in Phase 2. Results: A total of 1,284 and 1,401 doses of drugs were administered in Phase 1 and Phase 2, respectively. The rate of drug administration errors reduced significantly from Phase 1 to Phase 2 (44.3% versus 28.6%, respectively; P<0.001). Logistic regression analysis showed that the adjusted odds of drug administration errors in Phase 1 of the study were almost three times those in Phase 2 (P<0.001). The most
Metastable vacua and geometric deformations
Amariti, A; Girardello, L; Mariotti, A
2008-01-01
We study the geometric interpretation of metastable vacua for systems of D3 branes at non-isolated toric deformable singularities. Using the L^{aba} examples, we investigate the relations between the field-theoretic SUSY breaking and restoration and the complex deformations of the CY singularities.
Geometric and unipotent crystals
Berenstein, Arkady; Kazhdan, David
1999-01-01
In this paper we introduce geometric crystals and unipotent crystals which are algebro-geometric analogues of Kashiwara's crystal bases. Given a reductive group G, let I be the set of vertices of the Dynkin diagram of G and T be the maximal torus of G. The structure of a geometric G-crystal on an algebraic variety X consists of a rational morphism \gamma: X --> T and a compatible family e_i: G_m \times X --> X, i \in I, of rational actions of the multiplicative group G_m satisfying certain braid-like ...
Error Budgets for the Exoplanet Starshade (exo-s) Probe-Class Mission Study
Shaklan, Stuart B.; Marchen, Luis; Cady, Eric; Ames, William; Lisman, P. Douglas; Martin, Stefan R.; Thomson, Mark; Regehr, Martin
2015-01-01
Exo-S is a probe-class mission study that includes the Dedicated mission, a 30 m starshade co-launched with a 1.1 m commercial telescope in an Earth-leading deep-space orbit, and the Rendezvous mission, a 34 m starshade intended to work with a 2.4 m telescope in an Earth-Sun L2 orbit. A third design, referred to as the Rendezvous Earth Finder mission, is based on a 40 m starshade and is currently under study. This paper presents error budgets for the detection of Earth-like planets with each of these missions. The budgets include manufacture and deployment tolerances, the allowed thermal fluctuations and dynamic motions, formation flying alignment requirements, surface and edge reflectivity requirements, and the allowed transmission due to micrometeoroid damage.
Study on the geometric approach to the SVM algorithm: analysis of the SK algorithm
Institute of Scientific and Technical Information of China (English)
常振华; 陈伯成; 李英杰; 刘文煌; 闫学为
2011-01-01
The geometric approach to the Support Vector Machine (SVM) solves the SVM problem by exploiting the geometric meaning of its computation. Using these geometric characteristics, the construction of the SK (Schlesinger-Kozinec) algorithm is analyzed intuitively. The relative positions of the two convex hulls can be summarized into five categories, and the optimizing point reached in each iteration lies mostly at the hull vertices or on the boundary; the algorithm may reach the boundary (the optimal point) in the first iteration. Manual single-step simulation shows that, in many cases, the projection step of this class of algorithms is not successful; although this does not affect the final result, it weakens the algorithm's efficiency. Based on this analysis, two improvements to the soft SK algorithm (the Backward-SK and Forward-SK methods) are proposed and compared in simulations. The results show that the improved methods compute essentially the same results as the SK and soft SK algorithms, but their computing process is more intuitive.
Kim, ChungYun; Mazan, Jennifer L; Quiñones-Boex, Ana C
To determine pharmacists' attitudes and behaviors on medication errors and their disclosure and to compare community and hospital pharmacists on such views. An online questionnaire was developed from previous studies on physicians' disclosure of errors. Questionnaire items included demographics, environment, personal experiences, and attitudes on medication errors and the disclosure process. An invitation to participate along with the link to the questionnaire was electronically distributed to members of two Illinois pharmacy associations. A follow-up reminder was sent 4 weeks after the original message. Data were collected for 3 months, and statistical analyses were performed with the use of IBM SPSS version 22.0. The overall response rate was 23.3% (n = 422). The average employed respondent was a 51-year-old white woman with a BS Pharmacy degree working in a hospital pharmacy as a clinical staff member. Regardless of practice settings, pharmacist respondents agreed that medication errors were inevitable and that a disclosure process is necessary. Respondents from community and hospital settings were further analyzed to assess any differences. Community pharmacist respondents were more likely to agree that medication errors were inevitable and that pharmacists should address the patient's emotions when disclosing an error. Community pharmacist respondents were also more likely to agree that the health care professional most closely involved with the error should disclose the error to the patient and thought that it was the pharmacists' responsibility to disclose the error. Hospital pharmacist respondents were more likely to agree that it was important to include all details in a disclosure process and more likely to disagree on putting a "positive spin" on the event. Regardless of practice setting, responding pharmacists generally agreed that errors should be disclosed to patients. There were, however, significant differences in their attitudes and behaviors
Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael
2014-04-01
We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central $\chi^2$ distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct $p$-values.
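The inflation described in this abstract is easy to illustrate numerically: if the statistic actually follows a scaled non-central $\chi^2$ while the critical value is taken from the central $\chi^2$, the realized type-I error exceeds the nominal level. A minimal sketch of that calculation (the function name and the illustrative scale/non-centrality values are my own assumptions, not the paper's formulae):

```python
from scipy import stats

def true_type1_error(alpha, df, scale, noncentrality):
    """False-positive rate of a nominal level-alpha chi-square test when the
    statistic actually follows scale * ncx2(df, noncentrality)."""
    # Critical value under the assumed (error-free) central chi-square null
    q = stats.chi2.ppf(1 - alpha, df)
    # P(scale * X > q) with X ~ non-central chi-square
    return stats.ncx2.sf(q / scale, df, noncentrality)

# With no genotyping error (scale = 1, ncp = 0) the nominal level is recovered
print(true_type1_error(0.05, df=1, scale=1.0, noncentrality=0.0))  # ≈ 0.05
# A modest distortion (scale = 1.1, ncp = 0.5) already exceeds the nominal 5%
print(true_type1_error(0.05, df=1, scale=1.1, noncentrality=0.5))
```

Correcting a $p$-value then amounts to evaluating the same scaled non-central survival function at the observed statistic instead of the central one.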
Schmitt, Eberhard; Müller, Patrick; Stein, Stefan; Schwarz-Finsterle, Jutta; Hausmann, Michael
2010-01-01
Adopting worldwide accessible Grid computing power and data management structures enables the use of large image databases for individual diagnosis and therapy decisions. Here, we define several descriptors of the genome architecture of cell nuclei which are the basis of a detailed analysis for conclusions on the health state of an individual patient. All these descriptors can be accessed by automatic inspection of microscopic images of fluorescently labelled nuclei, obtained from cells from tissue sections or blood and subjected to standard biochemical protocols. We demonstrate how the combinatorial, geometrical and statistical parameters may be used in diagnosis and therapy monitoring.
Calignano, Flaviana; Vezzetti, Enrico
2010-04-01
To obtain the best surgical results in orthognathic surgery, treatment planning and evaluation of results should be performed. In these operations it is necessary to provide the physician with powerful tools that can underline the behavior of soft tissue. For this reason, considering the improvements provided by the use of 3D scanners in medical diagnosis, we propose a methodology for analyzing facial morphology working with geometrical features. The methodology has been tested on patients with malocclusion in order to analyze the reliability and efficiency of the provided diagnostic results.
Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton
2016-01-01
Aiming to estimate SEE soft error performance of complex electronic systems, a soft error propagation model based on cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main paramet...
Geometric and engineering drawing
Morling, K
2010-01-01
The new edition of this successful text describes all the geometric constructions and engineering drawing information likely to be needed by anyone preparing or interpreting drawings or designs, with plenty of exercises to practice these principles.
Differential geometric structures
Poor, Walter A
2007-01-01
This introductory text defines geometric structure by specifying parallel transport in an appropriate fiber bundle, focusing on the simplest cases of linear parallel transport in a vector bundle. 1981 edition.
Bledsoe, Gloria J
1987-01-01
The game of "Guess What" is described as a stimulating vehicle for students to consider the unifying or distinguishing features of geometric figures. Teaching suggestions as well as the gameboard are provided. (MNS)
Djernaes, Julie D; Nielsen, Jon V; Berg, Lise C
2017-03-01
The widths of spaces between the thoracolumbar processi spinosi (interspinous spaces) are frequently assessed using radiography in sports horses; however, the effects of varying X-ray beam angles and geometric distortion have not been previously described. The aim of this prospective, observational study was to determine whether X-ray beam angle has an effect on apparent widths of interspinous spaces. Thoracolumbar spine specimens were collected from six equine cadavers and left-right lateral radiographs and sagittal and dorsal reconstructed computed tomographic (CT) images were acquired. Sequential radiographs were acquired with each interspinous space in focus. Measurements were performed for each interspinous space in the focus position and up to eight angled positions as the interspinous space moved away from focus (±). Focus position measurements were compared to matching sagittal CT measurements. The effect of geometric distortion was evaluated by comparing the interspinous space in radiographs with sagittal and dorsal reconstructed CT images. A total of 49 interspinous spaces were sampled, yielding 274 measurements. X-ray beam angle significantly affected the measured width of interspinous spaces in position +3 (P = 0.038). Changes in width did not follow a consistent pattern. Interspinous space widths in the focus position were significantly smaller in radiographs compared to matching reconstructed CT images for backs diagnosed with kissing spine syndrome (P < …). Geometric distortion markedly affected the appearance of interspinous space width between planes. In conclusion, X-ray beam angle and geometric distortion influence radiographically measured widths of interspinous spaces in the equine thoracolumbar spine, and this should be taken into consideration when evaluating sport horses. © 2016 American College of Veterinary Radiology.
Saturation and geometrical scaling
Praszalowicz, Michal
2016-01-01
We discuss the emergence of geometrical scaling as a consequence of the nonlinear evolution equations of QCD, which generate a new dynamical scale known as the saturation momentum, Qs. In the kinematical region where no other energy scales exist, particle spectra exhibit geometrical scaling (GS), i.e. they depend on the ratio pT/Qs, and the energy dependence enters solely through the energy dependence of the saturation momentum. We confront the hypothesis of GS in different systems with experimental data.
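The scaling statement above can be made concrete with a toy model: if a spectrum depends on pT and x only through tau = pT/Qs(x), curves taken at different x collapse onto a single function of tau. A minimal sketch, assuming a GBW-style power-law parametrization of Qs (the parameter values Q0, x0, lambda and the exponential shape F are illustrative assumptions, not results from this work):

```python
import numpy as np

def Q_s(x, Q0=1.0, x0=3e-4, lam=0.3):
    """GBW-style saturation momentum Qs(x) = Q0 * (x0/x)^(lam/2) (assumed values)."""
    return Q0 * (x0 / x) ** (lam / 2)

def spectrum(pT, x, F=lambda t: np.exp(-t)):
    """Toy spectrum with exact geometrical scaling: it depends on pT and x
    only through the scaling variable tau = pT / Qs(x)."""
    return F(pT / Q_s(x))

# Two kinematics with very different x, sampled at the same tau values
tau = np.linspace(0.5, 4.0, 8)
s1 = spectrum(tau * Q_s(1e-3), x=1e-3)
s2 = spectrum(tau * Q_s(1e-5), x=1e-5)
print(np.allclose(s1, s2))  # → True: both curves collapse onto one function of tau
```

Real data exhibit this collapse only approximately, and testing its quality against measured spectra is precisely the subject of the abstract.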
Kuo, Grace M; Touchette, Daniel R; Marinac, Jacqueline S
2013-03-01
To describe and evaluate drug errors and related clinical pharmacist interventions. Cross-sectional observational study with an online data collection form. American College of Clinical Pharmacy practice-based research network (ACCP PBRN). A total of 62 clinical pharmacists from the ACCP PBRN who provided direct patient care in the inpatient and outpatient practice settings. Clinical pharmacist participants identified drug errors in their usual practices and submitted online error reports over a period of 14 consecutive days during 2010. The 62 clinical pharmacists submitted 924 reports; of these, 779 reports from 53 clinical pharmacists had complete data. Drug errors occurred in both the inpatient (61%) and outpatient (39%) settings. Therapeutic categories most frequently associated with drug errors were systemic antiinfective (25%), hematologic (21%), and cardiovascular (19%) drugs. Approximately 95% of drug errors did not result in patient harm; however, 33 drug errors resulted in treatment or medical intervention, 6 resulted in hospitalization, 2 required treatment to sustain life, and 1 resulted in death. The types of drug errors were categorized as prescribing (53%), administering (13%), monitoring (13%), dispensing (10%), documenting (7%), and miscellaneous (4%). Clinical pharmacist interventions included communication (54%), drug changes (35%), and monitoring (9%). Approximately 89% of clinical pharmacist recommendations were accepted by the prescribers: 5% with drug therapy modifications, 28% due to clinical pharmacist prescriptive authority, and 56% without drug therapy modifications. This study provides insight into the role clinical pharmacists play with regard to drug error interventions using a national practice-based research network. Most drug errors reported by clinical pharmacists in the United States did not result in patient harm; however, severe harm and death due to drug errors were reported. Drug error types, therapeutic categories, and
RANDOM AND SYSTEMATIC FIELD ERRORS IN THE SNS RING: A STUDY OF THEIR EFFECTS AND COMPENSATION
Energy Technology Data Exchange (ETDEWEB)
GARDNER,C.J.; LEE,Y.Y.; WENG,W.T.
1998-06-22
The Accumulator Ring for the proposed Spallation Neutron Source (SNS) [1] is to accept a 1 ms beam pulse from a 1 GeV Proton Linac at a repetition rate of 60 Hz. For each beam pulse, 10^14 protons (some 1,000 turns) are to be accumulated via charge-exchange injection and then promptly extracted to an external target for the production of neutrons by spallation. At this very high intensity, stringent limits (less than two parts in 10,000 per pulse) on beam loss during accumulation must be imposed in order to keep activation of ring components at an acceptable level. To stay within the desired limit, the effects of random and systematic field errors in the ring require careful attention. This paper describes the authors' studies of these effects and the magnetic corrector schemes for their compensation.
Translation of Mongolian Shamanic Praises- A Study of most Frequent Errors
Directory of Open Access Journals (Sweden)
Khishigsuren Dorj
2016-12-01
The paper explores the translation of Mongolian shamanic poetry into English. Many scholars, foreign and native alike, consider shamanic poems one of the early sources of Mongolian oral literature. Translating ancient poems presents linguistic and cultural challenges, along with issues arising from the differing poetic meters of the source and target languages. The study therefore attempted to identify the most frequent types of translation error in published works, and whether those translations convey the lexical and lyrical uniqueness of the original poems. By analyzing eight published works and comparing three selected translations of a Mongolian shamanic praise, an attempt was made to outline a methodology for the translation of Mongolian shamanic poetry. Based on the findings, and in line with the drafted methodology, the author's translation of the praise of the Darhad shamaness Ch.Batbayar of the Sharnuud clan is also presented.
Nevalainen, Maarit; Kuikka, Liisa; Pitkälä, Kaisu
2014-06-01
To study coping differences between young and experienced GPs in primary care who experience medical errors and uncertainty. Questionnaire-based survey (self-assessment) conducted in 2011. Finnish primary practice offices in Southern Finland. Finnish GPs engaged in primary health care from two different respondent groups: young (working experience ≤ 5 years, n = 85) and experienced (working experience > 5 years, n = 80). Outcome measures included experiences and attitudes expressed by the included participants towards medical errors and tolerance of uncertainty, their coping strategies, and factors that may influence (positively or negatively) sources of errors. In total, 165/244 GPs responded (response rate: 68%). Young GPs expressed significantly more often fear of committing a medical error (70.2% vs. 48.1%, p = 0.004) and admitted more often than experienced GPs that they had committed a medical error during the past year (83.5% vs. 68.8%, p = 0.026). Young GPs were less prone to apologize to a patient for an error (44.7% vs. 65.0%, p = 0.009) and found, more often than their more experienced colleagues, on-site consultations and electronic databases useful for avoiding mistakes. Experienced GPs seem to better tolerate uncertainty and also seem to fear medical errors less than their young colleagues. Young and more experienced GPs use different coping strategies for dealing with medical errors. When GPs become more experienced, they seem to get better at coping with medical errors. Means to support these skills should be studied in future research.
Dönni, A.; Ehlers, G.; Maletta, H.; Fischer, P.; Kitazawa, H.; Zolliker, M.
1996-12-01
The heavy-fermion compound CePdAl with the ZrNiAl-type crystal structure (hexagonal space group P-62m) was investigated by powder neutron diffraction. The triangular coordination symmetry of the magnetic Ce atoms on site 3f gives rise to geometrical frustration. CePdAl orders below its Néel temperature with an incommensurate antiferromagnetic propagation vector and a longitudinal sine-wave (LSW) modulated spin arrangement. Magnetically ordered moments at Ce(1) and Ce(3) coexist with frustrated disordered moments at Ce(2). The experimentally determined magnetic structure is in agreement with group-theoretical symmetry analysis considerations, calculated by the program MODY, which confirm that for Ce(2) an ordered magnetic moment parallel to the magnetically easy c-axis is forbidden by symmetry. Further low-temperature experiments give evidence for a second magnetic phase transition in CePdAl between 0.6 and 1.3 K. The magnetic structures of CePdAl are compared with those of the isostructural compound TbNiAl, where a non-zero ordered magnetic moment for the geometrically frustrated Tb(2) atoms is allowed by symmetry.
Zhang, Yong; Chen, Bin; Li, Dong
2016-04-01
To investigate the influence of polarization on the polarized light propagation in biological tissue, a polarized geometric Monte Carlo method is developed. The Stokes-Mueller formalism is expounded to describe the shifting of light polarization during propagation events, including scattering and interface interaction. The scattering amplitudes and optical parameters of different tissue structures are obtained using Mie theory. Through simulations of polarized light (pulsed dye laser at wavelength of 585 nm) propagation in an infinite slab tissue model and a discrete vessel tissue model, energy depositions in tissue structures are calculated and compared with those obtained through general geometric Monte Carlo simulation under the same parameters but without consideration of polarization effect. It is found that the absorption depth of the polarized light is about one half of that determined by conventional simulations. In the discrete vessel model, low penetrability manifests in three aspects: diffuse reflection became the main contributor to the energy escape, the proportion of epidermal energy deposition increased significantly, and energy deposition in the blood became weaker and more uneven. This may indicate that the actual thermal damage of epidermis during the real-world treatment is higher and the deep buried blood vessels are insufficiently damaged by consideration of polarization effect, compared with the conventional prediction.
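The Stokes-Mueller bookkeeping mentioned in this abstract amounts to left-multiplying a 4-component Stokes vector by a Mueller matrix at each event. A minimal single-event sketch (the Rayleigh single-scattering matrix is a standard textbook form used here as a stand-in; the specific angles and the unnormalized scaling are illustrative assumptions, not the paper's tissue model):

```python
import numpy as np

def rotation_mueller(phi):
    """Mueller matrix rotating the polarization reference frame by angle phi."""
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c,   s, 0.0],
                     [0.0,  -s,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def rayleigh_mueller(theta):
    """Unnormalized Mueller matrix for single Rayleigh scattering at angle theta."""
    c = np.cos(theta)
    return 0.5 * np.array([[c**2 + 1, c**2 - 1, 0.0,     0.0],
                           [c**2 - 1, c**2 + 1, 0.0,     0.0],
                           [0.0,      0.0,      2 * c,   0.0],
                           [0.0,      0.0,      0.0,   2 * c]])

# Linearly polarized input light, Stokes vector (I, Q, U, V)
S_in = np.array([1.0, 1.0, 0.0, 0.0])
# One scattering event: rotate into the scattering plane, then scatter at 90 deg
S_out = rayleigh_mueller(np.pi / 2) @ rotation_mueller(np.pi / 4) @ S_in
I, Q, U, V = S_out
dop = np.sqrt(Q**2 + U**2 + V**2) / I  # degree of polarization after the event
print(I, dop)  # → 0.5 1.0
```

In a full polarized Monte Carlo, a chain of such matrices (one per scattering or interface event, with Mie-derived amplitudes for each tissue structure) is accumulated along every photon path.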
Ohshimo, Keijiro; Norimasa, Naoya; Moriyama, Ryoichi; Misaizu, Fuminori
2016-05-01
Geometrical structures of titanium oxide cluster cations and anions have been investigated by ion mobility mass spectrometry and quantum chemical calculations based on density functional theory. Stable cluster compositions with respect to collision induced dissociation were also determined by changing ion injection energy to an ion drift cell for mobility measurements. The TinO2n-1+ cations and TinO2n- anions were predominantly observed at high injection energies, in addition to TinO2n+ for cations and TinO2n+1- for anions. Collision cross sections of TinO2n+ and TinO2n+1- for n = 1-7, determined by ion mobility mass spectrometry, were compared with those obtained theoretically as orientation-averaged cross sections for the optimized structures by quantum chemical calculations. All of the geometrical structures thus assigned have three-dimensional structures, which are in marked contrast with other oxides of late transition metals. One-oxygen atom dissociation processes from TinO2n+ and TinO2n+1- by collisions were also explained by analysis of spin density distributions.
The study of CD side to side error in line/space pattern caused by post-exposure bake effect
Huang, Jin; Guo, Eric; Ge, Haiming; Lu, Max; Wu, Yijun; Tian, Mingjing; Yan, Shichuan; Wang, Ran
2016-10-01
In semiconductor manufacturing, as the design rule has decreased, the ITRS roadmap requires increasingly tight critical dimension (CD) control. CD uniformity is one of the parameters necessary to assure good performance and reliable functionality of any integrated circuit (IC) [1] [2], and towards the advanced technology nodes it is a challenge to control CD uniformity well. Studies of CD uniformity achieved by tuning the post-exposure bake (PEB) and develop processes have made significant progress [3], but CD side-to-side errors in some line/space patterns are still found in practical applications, and the error has approached the uniformity tolerance. Detailed analysis showed that, even when several developer types were used, the CD side-to-side error had no significant relationship to developing. In addition, it is impossible to correct the CD side-to-side error by electron-beam correction, as the error does not appear in all line/space pattern masks. In this paper the root cause of the CD side-to-side error is analyzed, and the PEB module process is optimized as a main factor for the improvement of the CD side-to-side error.
Directory of Open Access Journals (Sweden)
Seo-Hee Kim
The present study used event-related potentials (ERPs) to investigate deficits in error-monitoring by college students with schizotypal traits. Scores on the Schizotypal Personality Questionnaire (SPQ) were used to categorize the participants into schizotypal-trait (n = 17) and normal control (n = 20) groups. The error-monitoring abilities of the participants were evaluated using the Simon task, which consists of congruent (locations of stimulus and response are the same) and incongruent (locations of stimulus and response are different) conditions. The schizotypal-trait group committed more errors on the Simon task and exhibited smaller error-related negativity (ERN) amplitudes than did the control group. Additionally, ERN amplitude measured at FCz was negatively correlated with the error rate on the Simon task in the schizotypal-trait group but not in the control group. The two groups did not differ in terms of correct-related potentials (CRN), error positivity (Pe) and correct-related positivity (Pc) amplitudes. The present results indicate that individuals with schizotypal traits have deficits in error-monitoring and that reduced ERN amplitudes may represent a biological marker of schizophrenia.
Institute of Scientific and Technical Information of China (English)
范晋伟; 宁堃; 金爱韦; 梅钦
2012-01-01
Focusing on the grinding accuracy of a precision crankshaft (the connecting-rod journal), the calculation method for the ideal grinding tool track is derived from the mathematical model of coordinated-grinding CNC machine tool motion. Using multi-body system theory, the coordinate transformation equations from the machine to the workpiece branch and from the machine to the tool branch are acquired, and the precision machining equation of the crankshaft is then derived. The error compensation technology based on the multi-body system is combined with the mathematical model of coordinated grinding. The calculation method for the ideal NC instructions and the precision iterative method for solving the instructions under error conditions are discussed in detail. The revised instructions guarantee the surface quality of the crankshaft (the connecting-rod journal) in production, and the required grinding accuracy of the crankshaft can be achieved.
The geometric semantics of algebraic quantum mechanics.
Cruz Morales, John Alexander; Zilber, Boris
2015-08-06
In this paper, we will present an ongoing project that aims to use model theory as a suitable mathematical setting for studying the formalism of quantum mechanics. We argue that this approach provides a geometric semantics for such a formalism by means of establishing a (non-commutative) duality between certain algebraic and geometric objects.
Nguyen, Huong; Nguyen, Tuan-Dung; Haaijer-Ruskamp, Flora M.; Taxis, Katja
2014-01-01
Background: Medication errors involving insulin are common, particularly during the administration stage, and may cause severe harm. Little is known about the prevalence of insulin administration errors in hospitals, especially in resource-restricted settings, where the burden of diabetes is growing
Error processing in heroin addicts:an event-related potential study
Institute of Scientific and Technical Information of China (English)
林彬
2012-01-01
Objective To investigate the relationship between impulsive behaviors and the error-related negativity (ERN) component of event-related potentials of error processing in heroin addicts. Methods Using the paradigms for psychological experiments, the Iowa gambling task (IGT) was performed both in heroin
van den Bemt, P M L A; Robertz, R; de Jong, A L; van Roon, E N; Leufkens, H G M
2007-01-01
BACKGROUND: Medication errors can result in harm, unless barriers to prevent them are present. Drug administration errors are less likely to be prevented, because they occur in the last stage of the drug distribution process. This is especially the case in non-alert patients, as patients often form
Total Survey Error & Institutional Research: A Case Study of the University Experience Survey
Whiteley, Sonia
2014-01-01
Total Survey Error (TSE) is a component of Total Survey Quality (TSQ) that supports the assessment of the extent to which a survey is "fit-for-purpose". While TSQ looks at a number of dimensions, such as relevance, credibility and accessibility, TSE has a more operational focus on accuracy and on minimising errors. Mitigating survey…
On chromatic and geometrical calibration
DEFF Research Database (Denmark)
Folm-Hansen, Jørgen
1999-01-01
The main subject of the present thesis is different methods for the geometrical and chromatic calibration of cameras in various environments. For the monochromatic issues of the calibration we present the acquisition of monochrome images, the classic monochrome aberrations and the various sources of non-uniformity of the illumination of the image plane. Only the image-deforming aberrations and the non-uniformity of illumination are included in the calibration models. The topics of the pinhole camera model and the extension to the Direct Linear Transform (DLT) are described. It is shown how the DLT can be extended with non-linear models of the common lens aberrations/errors, some of them caused by manufacturing defects like decentering and thin prism distortion. The relation between a warping and the non-linear defects is shown. The issue of making a good resampling of an image by using…
A Toolbox for Geometric Grain Boundary Characterization
Glowinski, Krzysztof; Morawiec, Adam
Properties of polycrystalline materials are affected by grain boundary networks. The most basic aspect of boundary analysis is boundary geometry. This paper describes a package of computer programs for geometric boundary characterization based on macroscopic boundary parameters. The program allows for determination of whether a boundary can be classified as near-tilt, -twist, -symmetric, et cetera. Since calculations on experimental, i.e., error-affected, data are assumed, the program also provides distances to the nearest geometrically characteristic boundaries. The software has a number of other functions helpful in grain boundary analysis. One of them is the determination of the planes of all characteristic boundaries for a given misorientation. The resulting diagrams of geometrically characteristic boundaries can be linked to experimentally determined grain boundary distributions. In the computations, all symmetrically equivalent representations of boundaries are taken into account. Cubic and hexagonal holohedral crystal symmetries are allowed.
Mikuls, TR; Curtis, [No Value; Allison, JJ; Hicks, RW; Saag, KG
2006-01-01
Objectives. To more closely assess medication errors in gout care, we examined data from a national, Internet-accessible error reporting program over a 5-year reporting period. Methods. We examined data from the MEDMARX (TM) database, covering the period from January 1, 1999 through December 31, 200
Recht, Benjamin
2012-01-01
Randomized algorithms that base iteration-level decisions on samples from some pool are ubiquitous in machine learning and optimization. Examples include stochastic gradient descent and randomized coordinate descent. This paper makes progress at theoretically evaluating the difference in performance between sampling with and without replacement in such algorithms. Focusing on least-mean-squares optimization, we formulate a noncommutative arithmetic-geometric mean inequality that would prove that the expected convergence rate of without-replacement sampling is faster than that of with-replacement sampling. We demonstrate that this inequality holds for many classes of random matrices and for some pathological examples as well. We provide a deterministic worst-case bound on the discrepancy between the two sampling models, and explore some of the impediments to proving this inequality in full generality. We detail the consequences of this inequality for stochastic gradient descent and the random...
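The with- versus without-replacement distinction is easy to exercise empirically: run SGD on a least-squares problem, drawing rows either i.i.d. with replacement or as a fresh random permutation each epoch. A minimal sketch (problem size, step size, and seeds are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Consistent least-squares problem: minimize ||A x - b||^2, one row per step
n, d = 100, 10
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

def sgd_error(with_replacement, epochs=30, lr=0.05, seed=1):
    """Final error ||x - x_true|| after SGD with the chosen sampling scheme."""
    local = np.random.default_rng(seed)
    x = np.zeros(d)
    for _ in range(epochs):
        idx = (local.integers(0, n, n) if with_replacement
               else local.permutation(n))   # without replacement = reshuffle each epoch
        for i in idx:
            g = (A[i] @ x - b[i]) * A[i]    # stochastic gradient from one row
            x -= lr * g
    return float(np.linalg.norm(x - x_true))

err_with = sgd_error(True)
err_without = sgd_error(False)
print(err_with, err_without)  # without-replacement sampling typically ends closer to x_true
```

In simulations of this kind, the reshuffled (without-replacement) runs usually converge at least as fast as the i.i.d. runs, which is the behavior the conjectured inequality would formalize.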
Matsueda, Hiroaki
2015-01-01
We examine the Hessian potential that generates the flat Minkowski spacetime in $(1+1)$ dimensions. The entanglement thermodynamics given by the Hessian geometry enables us to obtain the entanglement entropy of a corresponding quantum state by means of holography. We find that the positivity of the entropy leads to the presence of past and future causal cones in the Minkowski spacetime. We also find that the quantum state is equivalent to the thermofield-double state, and then the entropy is proportional to the temperature. The proportionality is consistent with previous holographic works. The present Hessian geometrical approach captures how the causality on the classical side is converted into quantum entanglement inherent in the thermofield dynamics.
Immagini e Concetti in Geometria=The Figural and the Conceptual Components of Geometrical Concepts.
Mariotti, Maria Alessandra
1992-01-01
Discusses geometrical reasoning in the framework of the theory of Figural Concepts to highlight the interaction between the figural and conceptual components of geometrical concepts. Examples of students' difficulties and errors in geometrical reasoning are interpreted according to the internal tension that appears in figural concepts resulting…
Geometric phases in discrete dynamical systems
Energy Technology Data Exchange (ETDEWEB)
Cartwright, Julyan H.E., E-mail: julyan.cartwright@csic.es [Instituto Andaluz de Ciencias de la Tierra, CSIC–Universidad de Granada, E-18100 Armilla, Granada (Spain); Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, E-18071 Granada (Spain); Piro, Nicolas, E-mail: nicolas.piro@epfl.ch [École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne (Switzerland); Piro, Oreste, E-mail: piro@imedea.uib-csic.es [Departamento de Física, Universitat de les Illes Balears, E-07122 Palma de Mallorca (Spain); Tuval, Idan, E-mail: ituval@imedea.uib-csic.es [Mediterranean Institute for Advanced Studies, CSIC–Universitat de les Illes Balears, E-07190 Mallorca (Spain)
2016-10-14
In order to study the behaviour of discrete dynamical systems under adiabatic cyclic variations of their parameters, we consider discrete versions of adiabatically-rotated rotators. Paralleling the studies in continuous systems, we generalize the concept of geometric phase to discrete dynamics and investigate its presence in these rotators. For the rotated sine circle map, we demonstrate an analytical relationship between the geometric phase and the rotation number of the system. For the discrete version of the rotated rotator considered by Berry, the rotated standard map, we further explore this connection as well as the role of the geometric phase at the onset of chaos. Further into the chaotic regime, we show that the geometric phase is also related to the diffusive behaviour of the dynamical variables and the Lyapunov exponent. - Highlights: • We extend the concept of geometric phase to maps. • For the rotated sine circle map, we demonstrate an analytical relationship between the geometric phase and the rotation number. • For the rotated standard map, we explore the role of the geometric phase at the onset of chaos. • We show that the geometric phase is related to the diffusive behaviour of the dynamical variables and the Lyapunov exponent.
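The rotation number that the highlights link to the geometric phase is easy to estimate numerically for the (unrotated) sine circle map; the parameters below are illustrative choices, not values from the paper:

```python
import math

def rotation_number(omega, K, n=20000, burn=1000):
    """Estimate the rotation number of the sine circle map
    x_{k+1} = x_k + omega - (K / (2*pi)) * sin(2*pi*x_k)
    as the mean displacement of the lift after a transient."""
    x = 0.0
    for _ in range(burn):  # discard transient
        x += omega - K / (2 * math.pi) * math.sin(2 * math.pi * x)
    x0 = x
    for _ in range(n):
        x += omega - K / (2 * math.pi) * math.sin(2 * math.pi * x)
    return (x - x0) / n

print(rotation_number(0.3, 0.0))  # K = 0: rotation number equals omega
print(rotation_number(0.5, 0.9))  # inside the 1/2 Arnold tongue: locks to 0.5
```

For K = 0 the map is a rigid rotation and the estimate returns omega; at omega = 0.5 with 0 < K < 1 the map mode-locks onto a period-2 orbit, so the estimate converges to exactly 1/2.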
An approach to error elimination for multi-axis CNC machining and robot manipulation
Institute of Scientific and Technical Information of China (English)
XIONG; CaiHua
2007-01-01
The geometrical accuracy of a machined feature on a workpiece during machining processes is mainly affected by the kinematic chain errors of multi-axis CNC machines and robots, locating precision of fixtures, and datum errors on the workpiece. It is necessary to find a way to minimize the feature errors on the workpiece. In this paper, the kinematic chain errors are transformed into the displacements of the workpiece. The relationship between the kinematic chain errors and the displacements of the position and orientation of the workpiece is developed. A mapping model between the displacements of workpieces and the datum errors, and adjustments of fixtures is established. The suitable sets of unit basis twists for each of the commonly encountered types of feature and the corresponding locating directions are analyzed, and an error elimination (EE) method of the machined feature is formulated. A case study is given to verify the EE method.
Analysis of Solar Two Heliostat Tracking Error Sources
Energy Technology Data Exchange (ETDEWEB)
Jones, S.A.; Stone, K.W.
1999-01-28
This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.
Analysis of Solar Two heliostat tracking error sources
Energy Technology Data Exchange (ETDEWEB)
Stone, K.W.; Jones, S.A.
1999-07-01
This paper explores the geometrical errors that reduce heliostat tracking accuracy at Solar Two. The basic heliostat control architecture is described. Then, the three dominant error sources are described and their effect on heliostat tracking is visually illustrated. The strategy currently used to minimize, but not truly correct, these error sources is also shown. Finally, a novel approach to minimizing error is presented.
Geometric Control of Patterned Linear Systems
Hamilton, Sarah C
2012-01-01
This monograph is aimed at researchers of systems control, especially those interested in multiagent systems, distributed and decentralized control, and structured systems. The book assumes no prior background in geometric control theory; however, a first-year graduate course in linear control systems is desirable. Since not all control researchers today are exposed to geometric control theory, the book also adopts a tutorial style by way of examples that illustrate the geometric and abstract algebra concepts used in linear geometric control. In addition, the matrix calculations required for the studied control synthesis problems of linear multivariable control are illustrated via a set of running design examples. As such, some of the design examples are of higher dimension than one may typically see in a text; this is so that all the geometric features of the design problem are illuminated.
An Empirical Study of End-User Behaviour in Spreadsheet Error Detection & Correction
Bishop, Brian
2008-01-01
Very little is known about the process by which end-user developers detect and correct spreadsheet errors. Any research pertaining to the development of spreadsheet testing methodologies or auditing tools would benefit from information on how end-users perform the debugging process in practice. Thirteen industry-based professionals and thirty-four accounting & finance students took part in an ongoing experiment designed to record and analyse end-user behaviour in spreadsheet error detection and correction. Professionals significantly outperformed students in correcting certain error types. Time-based cell activity analysis showed that a strong correlation exists between the percentage of cells inspected and the number of errors corrected. The cell activity data was gathered through a purpose-written VBA Excel plug-in that records the time and detail of all cell selection and cell change actions of individuals.
A POPULATION BASED STUDY OF REFRACTIVE ERRORS IN CHILDREN AMONG AGE GROUP OF 7-15 YEARS
Directory of Open Access Journals (Sweden)
Dhanya
2016-03-01
INTRODUCTION: Refractive error is the most common cause of visual impairment around the world and the second leading cause of treatable blindness. Very early detection and treatment of visual impairment in children results in a reduction in the number of school children with poor sight being uncorrected. AIM: To study the prevalence of uncorrected refractive errors among children of the 7-15 years age group. MATERIALS AND METHODS: A total of 958 children of the 7-15 years age group were examined during a period of 1 year, from June 2014 to May 2015. The examination included visual acuity, slit lamp examination, auto refractometer, keratometry, A-scan biometry and fundoscopic examination. Patients were then assessed for refractive error under the cycloplegic effect of 1% homatropine by streak retinoscopy. Hyperopia was defined as a spherical power of > +2.00 D, and myopia as < -0.50 D. RESULTS: Visual impairment (VA of 6/12 or worse in the better eye) was present in 8.14% of the children examined. The prevalence of myopia, hypermetropia and astigmatism was 4.70%, 1.24% and 2.2%, respectively; myopia was commonly seen in older children. CONCLUSION: Refractive error was the main cause of visual impairment in children between 7-15 years. Myopia was the most common refractive error, particularly in older children. Uncorrected refractive errors among children have a considerable impact on learning and academic achievement. Diagnosis and correction of refractive error is the most effective form of eye care. As it is an easily treatable cause of visual impairment, effective strategies should be developed to eliminate refractive error in children.
Directory of Open Access Journals (Sweden)
Mariusz Belka
A set of 15 new sulphonamide derivatives presenting antitumor activity has been subjected to a metabolic stability study. The results showed that, besides products of biotransformation, some additional peaks occurred in the chromatograms. Tandem mass spectrometry revealed the same mass and fragmentation pathway, suggesting that geometric isomerization had occurred. Thus, to support this hypothesis, quantitative structure-retention relationships were applied. Human liver microsomes were used as an in vitro model of metabolism. The biotransformation reactions were tracked by liquid chromatography assay and, additionally, fragmentation mass spectra were recorded. In silico molecular modeling at a semi-empirical level was conducted as a starting point for molecular descriptor calculations. A quantitative structure-retention relationship model was built applying multiple linear regression based on selected three-dimensional descriptors. The studied compounds revealed high metabolic stability, with a tendency to form hydroxylated biotransformation products. However, significant chemical instability in conditions simulating human body fluids was noticed. According to the literature and MS data, geometric isomerization was suggested. The developed in silico model was able to describe the relationship between the geometry of isomer pairs and their chromatographic retention properties, thus supporting the hypothesis that the observed pairs of peaks are most likely geometric isomers. However, extensive structural investigations are needed to fully identify the isomers' geometry. An effort to describe MS fragmentation pathways of novel chemical structures is often not enough to propose structures of potential metabolites and products of other chemical reactions that can be observed in compound solutions in early drug discovery studies. The results indicate that the relatively non-expensive and not time- and labor-consuming in silico approach could be a good supportive
Directory of Open Access Journals (Sweden)
Teresita M Porter
Nuclear large subunit ribosomal DNA is widely used in fungal phylogenetics and, to an increasing extent, also in amplicon-based environmental sequencing. The relatively short reads produced by next-generation sequencing, however, make primer choice and sequence error important variables for obtaining accurate taxonomic classifications. In this simulation study we tested the performance of three classification methods: (1) a similarity-based method (BLAST + Metagenomic Analyzer, MEGAN); (2) a composition-based method (Ribosomal Database Project naïve Bayesian classifier, NBC); and (3) a phylogeny-based method (Statistical Assignment Package, SAP). We also tested the effects of sequence length, primer choice, and sequence error on classification accuracy and perceived community composition. Using a leave-one-out cross validation approach, results for classifications to the genus rank were as follows: BLAST + MEGAN had the lowest error rate and was particularly robust to sequence error; SAP accuracy was highest when long LSU query sequences were classified; and NBC runs significantly faster than the other tested methods. All methods performed poorly with the shortest 50-100 bp sequences. Increasing simulated sequence error reduced classification accuracy. Community shifts were detected due to sequence error and primer selection even though there was no change in the underlying community composition. Short read datasets from individual primers, as well as pooled datasets, appear to only approximate the true community composition. We hope this work informs investigators of some of the factors that affect the quality and interpretation of their environmental gene surveys.
Ganushchak, Lesya Y; Schiller, Niels O
2008-01-01
During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g. during a public presentation, our verbal self-monitoring system may pay more attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants' performance using a picture naming task in a semantic blocking paradigm. Semantic context of to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rate during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses.
Panesar, Sukhmeet S; Ignatowicz, Agnieszka M; Donaldson, Liam J
2014-12-01
The aim of this qualitative study is to better understand the types of error occurring during the management of cardiac arrests that led to a death. All patient safety incidents involving management of cardiac arrests and resulting in death which were reported to a national patient safety database over a 17-month period were analysed. Structured data from each report were extracted and these, together with the free text, were subjected to content analysis, which was inductive, with the coding scheme emerging from continuous reading and re-reading of incidents. There were 30 patient safety incidents involving management of cardiac arrests and resulting in death. The reviewers identified a main shortfall in the management of each cardiac arrest and this resulted in 12 different factors being documented. These were grouped into four themes that highlighted systemic weaknesses: miscommunication involving crash number (4/30, 13%), shortfalls in staff attending the arrest (4/30, 13%), equipment deficits (11/30, 37%), and poor application of knowledge and skills (11/30, 37%). The factors identified represent serious shortfalls in the quality of response to cardiac arrests resulting in death in hospital. No firm conclusion can be drawn about how many deaths in the study population would have been averted if the emergency had been managed to a high standard. The effective management of cardiac arrests should be considered as one of the markers of safe care within a healthcare organisation.
Modeling misidentification errors that result from use of genetic tags in capture-recapture studies
Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.
2011-01-01
Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. © 2009 Springer Science+Business Media, LLC.
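The overestimation effect described in this abstract can be demonstrated with a small simulation. The sketch below (population size, capture probability, and misidentification rate are made-up illustrative values, not the paper's models) applies a two-occasion Lincoln-Petersen estimate when each genetic recapture is misread as a new "ghost" individual with some probability:

```python
import random

random.seed(7)

N = 500    # true population size (illustrative)
p = 0.3    # per-occasion capture probability (illustrative)

def mean_lincoln_petersen(misid_rate, reps=500):
    """Average two-occasion estimate N_hat = n1 * n2 / m2, where a true
    recapture is recognized only with probability 1 - misid_rate; otherwise
    the sample is assigned a new identity and not counted as a recapture."""
    estimates = []
    for _ in range(reps):
        occ1 = {i for i in range(N) if random.random() < p}  # first occasion
        n2 = m2 = 0
        for i in range(N):                                   # second occasion
            if random.random() < p:
                n2 += 1
                if i in occ1 and random.random() >= misid_rate:
                    m2 += 1  # correctly matched recapture
        estimates.append(len(occ1) * n2 / max(m2, 1))
    return sum(estimates) / reps

clean = mean_lincoln_petersen(0.0)
misid = mean_lincoln_petersen(0.2)
print(f"no misidentification:  {clean:.0f}")
print(f"20% misidentification: {misid:.0f}")
```

With misidentification the average estimate inflates well above the true size of 500, illustrating why, as the abstract notes, conventional estimators can seriously overestimate population size when this error is ignored.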
Huang, Yanhong; Xu, Honglei; Tu, Rong; Zhang, Xu; Huang, Min
2016-01-01
In this paper, common errors in medical device test reports are classified and analyzed, and the 11 main factors influencing these inspection report errors are summarized. A hierarchy model was developed and verified with sample data using MATLAB. The feasibility of quantitatively comparing comprehensive weights was analyzed using the analytic hierarchy process. Finally, the paper discusses directions for further research.
Geometric Hypergraph Learning for Visual Tracking.
Du, Dawei; Qi, Honggang; Wen, Longyin; Tian, Qi; Huang, Qingming; Lyu, Siwei
2016-11-18
Graph-based representation is widely used in visual tracking field by finding correct correspondences between target parts in different frames. However, most graph-based trackers consider pairwise geometric relations between local parts. They do not make full use of the target's intrinsic structure, thereby making the representation easily disturbed by errors in pairwise affinities when large deformation or occlusion occurs. In this paper, we propose a geometric hypergraph learning-based tracking method, which fully exploits high-order geometric relations among multiple correspondences of parts in different frames. Then visual tracking is formulated as the mode-seeking problem on the hypergraph in which vertices represent correspondence hypotheses and hyperedges describe high-order geometric relations among correspondences. Besides, a confidence-aware sampling method is developed to select representative vertices and hyperedges to construct the geometric hypergraph for more robustness and scalability. The experiments are carried out on three challenging datasets (VOT2014, OTB100, and Deform-SOT) to demonstrate that our method performs favorably against other existing trackers.
Use of Techno-Anthropologic Approaches in Studying Technology--induced Errors.
Borycki, Elizabeth M; Kushniruk, Andre W
2015-01-01
In this book chapter the authors review several Techno-Anthropologic approaches that can be used to improve the quality and safety of health information technology (HIT) by eliminating or reducing the incidence and occurrence of technology-induced errors. Technology-induced errors arise from interactions between health professionals, patients and/or HIT (i.e., software and hardware) and lead to a medical error. Techno-Anthropologic methods can be used to address these types of medical errors before they occur. In this book chapter they are discussed in the context of: (a) how they can be applied to identifying technology-induced errors and (b) how this information can be used to design and implement safer HIT. Important in this chapter is a review of several methods: traditional ethnography, rapid assessment of clinical information systems, video ethnography and photovoice as they are applied to the discovery of potential (i.e., near misses) and actual (i.e., mistakes) technology-induced errors.
Study for compensation of unexpected image placement error caused by VSB mask writer deflector
Lee, Hyun-joo; Choi, Min-kyu; Moon, Seong-yong; Cho, Han-Ku; Doh, Jonggul; Ahn, Jinho
2012-11-01
The Electron Optical System (EOS) is designed for an electron beam machine employing a vector-scanned variable shaped beam (VSB) with a deflector. Most VSB systems utilize a multi-stage deflection architecture to obtain high precision and high-speed deflection at the same time. Many companies use VSB mask writers and have extensive experience with Image Placement (IP) errors caused by a contaminated EOS deflector; most VSB mask writer users already face this error. To keep older VSB mask writers in use, we introduce a method to compensate for unexpected IP error from the VSB mask writer. There are two ways to mitigate this error due to the contaminated deflector: the use of a 2nd-stage grid correction in addition to the original stage grid, and the use of an uncontaminated area of the deflector. According to the results of this paper, 30% of the IP error can be reduced by 2nd-stage grid correction and the change of deflection area in the deflector. This is an effective method to reduce the deflector error in the VSB mask writer, and it can be one of the solutions for the long-term production of photomasks.
Condom-use errors and problems: a neglected aspect of studies assessing condom effectiveness.
Crosby, Richard; Sanders, Stephanie; Yarber, William L; Graham, Cynthia A
2003-05-01
To assess and compare condom-use errors and problems among condom-using university males and females. A convenience sample of 260 undergraduates was utilized. Males (n=118) and females (n=142) reported using condoms in the past 3 months for at least one episode of sex (penis in the mouth, vagina, or rectum) with a partner of the other sex. A questionnaire assessed 15 errors and problems associated with condom use that could be observed or experienced by females as well as males. About 44% reported lack of condom availability. Errors that could contribute to failure included using sharp instruments to open condom packages (11%), storing condoms in wallets (19%), and not using a new condom when switching from one form of sex to another (83%). Thirty-eight percent reported that condoms were applied after sex had begun, and nearly 14% indicated they removed condoms before sex was concluded. Problems included loss of erection during condom application (15%) or during sex (10%). About 28% reported that condoms had either slipped off or broken. Nearly 19% perceived, at least once, that their condom problems necessitated the use of a new condom. Few differences were observed in errors and problems between males and females. Findings suggest that condom-use errors and problems may be quite common and that assessments of errors and problems do not necessarily need to be gender specific. Findings also suggest that correcting "user failure" may represent an important challenge in the practice of preventive medicine.
Parallax error in long-axial field-of-view PET scanners—a simulation study
Schmall, Jeffrey P.; Karp, Joel S.; Werner, Matt; Surti, Suleman
2016-07-01
There is a growing interest in the design and construction of a PET scanner with a very long axial extent. One critical design challenge is the impact of the long axial extent on the scanner spatial resolution properties. In this work, we characterize the effect of parallax error in PET system designs having an axial field-of-view (FOV) of 198 cm (total-body PET scanner) using fully-3D Monte Carlo simulations. Two different scintillation materials were studied: LSO and LaBr3. The crystal size in both cases was 4 × 4 × 20 mm3. Several different depth-of-interaction (DOI) encoding techniques were investigated to characterize the improvement in spatial resolution when using a DOI capable detector. To measure spatial resolution we simulated point sources in a warm background in the center of the imaging FOV, where the effects of axial parallax are largest, and at several positions radially offset from the center. Using a line-of-response based ordered-subset expectation maximization reconstruction algorithm we found that the axial resolution in an LSO scanner degrades from 4.8 mm to 5.7 mm (full width at half max) at the center of the imaging FOV when extending the axial acceptance angle (α) from ±12° (corresponding to an axial FOV of 18 cm) to the maximum of ±67°—a similar result was obtained with LaBr3, in which the axial resolution degraded from 5.3 mm to 6.1 mm. For comparison we also measured the degradation due to radial parallax error in the transverse imaging FOV; the transverse resolution, averaging radial and tangential directions, of an LSO scanner was degraded from 4.9 mm to 7.7 mm, for a measurement at the center of the scanner compared to a measurement with a radial offset of 23 cm. Simulations of a DOI detector design improved the spatial resolution in all dimensions. The axial resolution in the LSO-based scanner, with α = ± 67°, was improved from 5.7 mm to 5.0 mm by
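The radial component of the parallax effect discussed above follows from simple geometry: a photon from an off-center source enters the crystal obliquely, so the unknown depth of interaction smears the measured position. The toy estimate below uses the paper's 20 mm crystal depth but an assumed ring radius, and the formula itself is a crude simplification of mine, not the authors' simulation model:

```python
import math

R = 400.0  # mm, assumed detector ring radius (not from the paper)
L = 20.0   # mm, crystal depth, as in the simulated 4 x 4 x 20 mm^3 crystals

def radial_blur(r):
    """Approximate extra radial positioning uncertainty for a source at
    radial offset r: the photon crosses the crystal at angle theta with
    sin(theta) = r / R, and the unknown interaction depth (up to L)
    projects to roughly L * sin(theta) of radial smear."""
    theta = math.asin(min(r / R, 1.0))
    return L * math.sin(theta)

for r_mm in (0, 100, 230):
    print(f"offset {r_mm:3d} mm -> ~{radial_blur(r_mm):.1f} mm DOI smear")
```

At the 230 mm offset used in the abstract this crude bound already exceeds the crystal width, which is qualitatively consistent with the reported transverse degradation and with DOI-capable detectors recovering much of the loss.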
The retreat from locative overgeneralisation errors: a novel verb grammaticality judgment study.
Bidgood, Amy; Ambridge, Ben; Pine, Julian M; Rowland, Caroline F
2014-01-01
Whilst some locative verbs alternate between the ground- and figure-locative constructions (e.g. Lisa sprayed the flowers with water/Lisa sprayed water onto the flowers), others are restricted to one construction or the other (e.g. *Lisa filled water into the cup/*Lisa poured the cup with water). The present study investigated two proposals for how learners (aged 5-6, 9-10 and adults) acquire this restriction, using a novel-verb-learning grammaticality-judgment paradigm. In support of the semantic verb class hypothesis, participants in all age groups used the semantic properties of novel verbs to determine the locative constructions (ground/figure/both) in which they could and could not appear. In support of the frequency hypothesis, participants' tolerance of overgeneralisation errors decreased with each increasing level of verb frequency (novel/low/high). These results underline the need to develop an integrated account of the roles of semantics and frequency in the retreat from argument structure overgeneralisation.
Gunz, Philipp; Bulygina, Ekaterina
2012-11-01
In the 1930s subadult hominin remains and Mousterian artifacts were discovered in the Teshik-Tash cave in South Uzbekistan. Since then, the majority of the scientific community has interpreted Teshik-Tash as a Neanderthal. However, some have considered aspects of the morphology of the Teshik-Tash skull to be more similar to fossil modern humans such as those represented at Skhūl and Qafzeh, or to subadult Upper Paleolithic modern humans. Here we present a 3D geometric morphometric analysis of the Teshik-Tash frontal bone in the context of developmental shape changes in recent modern humans, Neanderthals, and early modern humans. We assess the phenetic affinities of Teshik-Tash to other subadult fossils, and use developmental simulations to predict possible adult shapes. We find that the morphology of the frontal bone places the Teshik-Tash child close to other Neanderthal children and that the simulated adult shapes are closest to Neanderthal adults. Taken together with genetic data showing that Teshik-Tash carried mtDNA of the Neanderthal type, as well as its occipital bun, and its shovel-shaped upper incisors, these independent lines of evidence firmly place Teshik-Tash among Neanderthals.
Directory of Open Access Journals (Sweden)
Judah Paul Makonye
2016-12-01
The study focused on the errors and misconceptions that learners manifest in the addition and subtraction of directed numbers. Skemp’s notions of relational and instrumental understanding of mathematics and Sfard’s participation and acquisition metaphors of learning mathematics informed the study. Data were collected from 35 Grade 8 learners’ exercise book responses to directed numbers tasks as well as through interviews. Content analysis was based on Kilpatrick et al.’s strands of mathematical proficiency. The findings were as follows: 83.3% of learners have misconceptions, 16.7% have procedural errors, 67% have strategic errors, and 28.6% have logical errors on addition and subtraction of directed numbers. The sources of the errors seemed to be lack of reference to mediating artifacts such as number lines or other real contextual situations when learning to deal with directed numbers. Learners seemed obsessed with positive numbers and addition operation frames—the first number ideas they encountered in school. They could not easily accommodate negative numbers or the subtraction operation involving negative integers. Another stumbling block seemed to be poor proficiency in English, which is the language of teaching and learning mathematics. The study recommends that building conceptual understanding of directed numbers and operations on them be encouraged through the use of multiple representations and other contexts meaningful to learners. For that reason, we urge delayed use of calculators.
Directory of Open Access Journals (Sweden)
Francisco Cavas-Martínez
AIM: To establish a new procedure for 3D geometric reconstruction of the human cornea to obtain a solid model that represents a personalized and in vivo morphology of both the anterior and posterior corneal surfaces. This model is later analyzed to obtain geometric variables enabling the characterization of the corneal geometry and establishing a new clinical diagnostic criterion in order to distinguish between healthy corneas and corneas with keratoconus. METHOD: The method for the geometric reconstruction of the cornea consists of the following steps: capture and preprocessing of the spatial point clouds provided by the Sirius topographer that represent both anterior and posterior corneal surfaces, reconstruction of the corneal geometric surfaces and generation of the solid model. Later, geometric variables are extracted from the model obtained and statistically analyzed to detect deformations of the cornea. RESULTS: The variables that achieved the best results in the diagnosis of keratoconus were anterior corneal surface area (ROC area: 0.847, p<0.000, std. error: 0.038, 95% CI: 0.777 to 0.925), posterior corneal surface area (ROC area: 0.807, p<0.000, std. error: 0.042, 95% CI: 0.726 to 0.889), anterior apex deviation (ROC area: 0.735, p<0.000, std. error: 0.053, 95% CI: 0.630 to 0.840) and posterior apex deviation (ROC area: 0.891, p<0.000, std. error: 0.039, 95% CI: 0.8146 to 0.9672). CONCLUSION: Geometric modeling enables accurate characterization of the human cornea. Also, from a clinical point of view, the procedure described has established a new approach for the study of eye-related diseases.
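The ROC areas reported above are simply the probability that a randomly chosen keratoconus cornea scores higher on the variable than a randomly chosen healthy one. A minimal sketch of that computation follows; the scores below are invented placeholders, not the study's measurements:

```python
def auc(positives, negatives):
    """ROC area by the rank (Mann-Whitney) method: the fraction of
    (positive, negative) pairs ordered correctly, counting ties as 1/2."""
    wins = sum((p > n) + 0.5 * (p == n) for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

# Hypothetical anterior-surface-area readings (cm^2); purely illustrative.
keratoconus = [4.9, 5.1, 5.3, 5.0, 5.6]
healthy = [4.6, 4.8, 4.7, 5.0, 4.5]
print(auc(keratoconus, healthy))  # -> 0.94
```

An ROC area of 0.5 means the variable does not separate the groups at all, while values near the study's 0.847-0.891 indicate strong discrimination.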
Liu, Yufang; Ding, Junxia; Liu, Ruiqiong; Shi, Deheng; Sun, Jinfeng
2009-12-01
The geometric structures and infrared (IR) spectra in the electronically excited state of a novel doubly hydrogen-bonded complex formed by fluorenone (FN) and alcohols, which has been observed by IR spectra in experimental study, are investigated by the time-dependent density functional theory (TDDFT) method. The geometric structures and IR spectra in both the ground state and the S(1) state of this doubly hydrogen-bonded FN-2MeOH complex are calculated using the DFT and TDDFT methods, respectively. Two intermolecular hydrogen bonds are formed between FN and the methanol molecules in the doubly hydrogen-bonded FN-2MeOH complex. Moreover, the formation of the second intermolecular hydrogen bond slightly weakens the first. Furthermore, it is confirmed from our calculated results that the spectral shoulder at around 1700 cm(-1) observed in the IR spectra should be assigned to the doubly hydrogen-bonded FN-2MeOH complex. The electronic excited-state hydrogen bonding dynamics is also studied by monitoring vibrational modes related to the formation of hydrogen bonds in different electronic states. As a result, both intermolecular hydrogen bonds are significantly strengthened in the S(1) state of the doubly hydrogen-bonded FN-2MeOH complex. The hydrogen bond strengthening in the electronically excited state is similar to that found in the previous study of the singly hydrogen-bonded FN-MeOH complex and plays an important role in the photophysics of fluorenone in solutions.
Mahavira's Geometrical Problems
DEFF Research Database (Denmark)
Høyrup, Jens
2004-01-01
Analysis of the geometrical chapters of Mahavira's 9th-century Ganita-sara-sangraha reveals inspiration from several chronological levels of Near-Eastern and Mediterranean mathematics: (1) that known from Old Babylonian tablets, c. 1800-1600 BCE; (2) a Late Babylonian but pre-Seleucid stratum, probably...
Burgess, Claudia R.
2014-01-01
Designed for a broad audience, including educators, camp directors, afterschool coordinators, and preservice teachers, this investigation aims to help individuals experience mathematics in unconventional and exciting ways by engaging them in the physical activity of building geometric shapes using ropes. Through this engagement, the author…
Abu Dabrh, Abd Moain; Murad, Mohammad Hassan; Newcomb, Richard D; Buchta, William G; Steffen, Mark W; Wang, Zhen; Lovett, Amanda K; Steinkraus, Lawrence W
2016-09-02
Communication skills and professionalism are two competencies in graduate medical education that are challenging to evaluate. We aimed to develop, test and validate a de novo instrument to evaluate these two competencies. Using an Objective Structured Clinical Examination (OSCE) based on a medication error scenario, we developed an assessment instrument that focuses on distinctive domains [context of discussion, communication and detection of error, management of error, empathy, use of electronic medical record (EMR) and electronic medical information resources (EMIR), and global rating]. The aim was to test the feasibility, acceptability, and reliability of the method. Faculty and standardized patients (SPs) evaluated 56 trainees using the instrument. The inter-rater agreement between faculty was substantial (Fleiss k = 0.71) and the intraclass correlation coefficient was excellent (ICC = 0.80). The measured agreement between faculty and SP evaluations of residents was lower (Fleiss k = 0.36). The instrument showed good conformity (ICC = 0.74). The majority of the trainees (75 %) had satisfactory or higher performance in all six assessed domains and 86 % found the OSCE to be realistic. Sixty percent reported not receiving feedback on EMR use and asked for subsequent training. An OSCE-based instrument using a medical error scenario can be used to assess competency in professionalism, communication, using EMRs and managing medical errors.
Hamouda, Arafat
2011-01-01
There is no doubt that teacher written feedback plays an essential role in teaching writing skills. The present study, by use of a questionnaire, investigates Saudi EFL students' and teachers' preferences and attitudes towards written error correction. The study also aims at identifying the difficulties encountered by teachers and students during the…
Directory of Open Access Journals (Sweden)
Casey P Durand
Full Text Available INTRODUCTION: Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. METHODS: A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. RESULTS: In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. CONCLUSIONS: Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
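The kind of simulation described can be sketched for the continuous-by-dichotomous case: generate data with a known interaction, fit the regression, and count how often the interaction term is significant at each alpha level. The effect size, sample size and replication counts below are arbitrary illustrations, not the study's 240 scenarios:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def interaction_power(n, beta_int, alpha, reps=500):
    """Monte Carlo power to detect a continuous-by-dichotomous interaction
    in y = x + g + beta_int*(x*g) + noise (illustrative model)."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        g = rng.integers(0, 2, size=n).astype(float)
        y = x + g + beta_int * x * g + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x, g, x * g])  # design matrix
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - 4)                    # residual variance
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[3, 3]) # SE of interaction
        p = 2 * stats.t.sf(abs(beta[3] / se), df=n - 4)
        hits += p < alpha
    return hits / reps

for alpha in (0.05, 0.10, 0.20):
    print(alpha, interaction_power(n=100, beta_int=0.3, alpha=alpha))
```

Raising alpha increases the estimated power, but running the same loop with `beta_int=0.0` shows the price: the rejection rate (now a Type 1 error rate) rises in lockstep, which is the trade-off the study quantifies.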
Error analysis and feasibility study of dynamic stiffness matrix-based damping matrix identification
Ozgen, Gokhan O.; Kim, Jay H.
2009-02-01
Developing a method to formulate a damping matrix that represents the actual spatial distribution and mechanism of damping in a dynamic system has been an elusive goal. The dynamic stiffness matrix (DSM)-based damping identification method proposed by Lee and Kim is attractive and promising because it identifies the damping matrix from the measured DSM without relying on any unfounded assumptions. However, in ensuing works it was found that damping matrices identified by the method had unexpected forms and showed traces of large variance errors. The causes and possible remedies of the problem are sought in this work. The variance and leakage errors are identified as the major sources of the problem, and are then related to system parameters through numerical and experimental simulations. An improved experimental procedure is developed to reduce the effect of these errors in order to make the DSM-based damping identification method a practical option.
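For a viscous damping model, the core idea behind DSM-based identification can be illustrated in a few lines: the DSM is the inverse of the measured FRF matrix, and its imaginary part divided by frequency returns the damping matrix directly. The 2-DOF system below uses assumed values; measurement noise, variance and leakage errors are exactly what this idealized sketch omits:

```python
import numpy as np

# Illustrative 2-DOF system (assumed values): mass M, stiffness K, and the
# viscous damping matrix C we will try to recover.
M = np.diag([1.0, 2.0])
K = np.array([[300.0, -100.0], [-100.0, 200.0]])
C = np.array([[1.5, -0.5], [-0.5, 1.0]])

omega = 12.0  # rad/s, a frequency away from resonance
# Noise-free "measured" receptance FRF matrix H(w); the DSM is its inverse.
D_true = K - omega**2 * M + 1j * omega * C
H = np.linalg.inv(D_true)

# DSM-based identification: invert the FRF matrix and split off the
# imaginary part, which for viscous damping equals omega * C.
D = np.linalg.inv(H)
C_identified = D.imag / omega
print(np.round(C_identified, 6))
```

With noisy H, the matrix inversion amplifies the errors; this amplification is one route by which the variance errors discussed in the abstract enter the identified damping matrix.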
Directory of Open Access Journals (Sweden)
L. Luquot
2015-11-01
Full Text Available The aim of this study is to compare the structural, geometrical and transport parameters of a limestone rock sample determined from X-ray microtomography (XMT) images and from laboratory experiments. Total and effective porosity, surface-to-volume ratio, pore size distribution, permeability, tortuosity and effective diffusion coefficient have been estimated. Sensitivity analyses of the segmentation parameters have been performed. The limestone rock sample studied here has been characterized using both approaches before and after a reactive percolation experiment. A strong dissolution process occurred during the percolation, promoting wormhole formation. The strong heterogeneity formed after the percolation step allowed us to apply our methodology to two different samples and to weigh the use of experimental techniques against XMT images depending on rock heterogeneity. We established that for most of the parameters calculated here, the values obtained by computing XMT images agree with the classical laboratory measurements. We demonstrated that the computational porosity is more informative than the laboratory one. We observed that the pore size distributions obtained from XMT images and laboratory experiments are slightly different but complementary. Regarding the effective diffusion coefficient, we concluded that both approaches are valuable and give similar results. Nevertheless, we conclude that computing XMT images to determine transport, geometrical and petrophysical parameters provides results similar to those measured in the laboratory, but in much shorter time.
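Two of the parameters compared, total porosity and surface-to-volume ratio, can be computed directly from a segmented XMT volume. A minimal sketch on a synthetic binary volume; the voxel size and pore occupancy are assumptions, and face counting is a crude, resolution-limited surface estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def porosity_and_s2v(pore_mask, voxel_size):
    """Total porosity and pore surface-to-volume ratio from a segmented
    3D image (True = pore voxel). Surface area is estimated by counting
    pore/solid voxel faces inside the volume."""
    phi = pore_mask.mean()
    faces = 0
    for axis in range(3):
        a = pore_mask.swapaxes(0, axis)
        faces += np.sum(a[1:] != a[:-1])  # pore/solid interfaces along axis
    surface = faces * voxel_size**2
    pore_volume = pore_mask.sum() * voxel_size**3
    return phi, surface / pore_volume

# Toy segmented volume standing in for an XMT image (assumed 10 um voxels)
vol = rng.random((60, 60, 60)) < 0.25
phi, s2v = porosity_and_s2v(vol, voxel_size=10e-6)
print(round(phi, 3), s2v)
```

This is also where the abstract's segmentation-sensitivity analysis matters: shifting the 0.25 threshold directly changes both numbers.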
Vallejo Seco, Guillermo; Ato García, Manuel; Fernández García, María Paula; Livacic Rojas, Pablo Esteban; Tuero Herrero, Ellián
2016-01-01
Background: S. Usami (2014) describes a method for realistically determining sample size in longitudinal research using a multilevel model. The present study extends that work to situations in which the assumption of homogeneity of errors across groups is likely to be violated and the error-term structure is not scaled identity. Method: to that end, a procedure was followed based on transforming the compon...
Fixing Geometric Errors on Polygonal Models: A Survey
Institute of Scientific and Technical Information of China (English)
Tao Ju
2009-01-01
Polygonal models are popular representations of 3D objects. The use of polygonal models in computational applications often requires a model to properly bound a 3D solid. That is, the polygonal model needs to be closed, manifold, and free of self-intersections. This paper surveys a sizeable literature on repairing models that do not satisfy these criteria, categorizing the methods by their methodology and capability. We hope to offer pointers to further reading for researchers and practitioners, and suggestions of promising directions for future research.
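The closedness and edge-manifoldness criteria named above are straightforward to test on a triangle mesh by counting how many faces share each undirected edge; a sketch (self-intersection testing needs geometric predicates and is omitted):

```python
from collections import Counter

def check_closed_manifold(triangles):
    """A triangle mesh bounds a solid only if every undirected edge is
    shared by exactly two faces (closed and edge-manifold). Returns the
    verdict plus the offending edges."""
    edges = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted((u, v)))] += 1
    bad = [e for e, n in edges.items() if n != 2]
    return len(bad) == 0, bad

# A tetrahedron (closed) vs. the same mesh with one face removed.
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(check_closed_manifold(tet)[0])       # True: every edge used twice
print(check_closed_manifold(tet[:-1])[0])  # False: boundary edges appear once
```

Repair methods surveyed in the paper go beyond detection, e.g. stitching the once-used boundary edges this check reports.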
Directory of Open Access Journals (Sweden)
Yan Wang
Full Text Available Past event-related potential (ERP) research shows that, after exerting effortful emotion inhibition, the neural correlates of performance monitoring (e.g. the error-related negativity) were weakened. An undetermined issue is whether all forms of emotion regulation uniformly impair later performance monitoring. The present study compared the cognitive consequences of two emotion regulation strategies, namely suppression and reappraisal. Participants were instructed to suppress their emotions while watching a sad movie, or to adopt a neutral and objective attitude toward the movie, or to just watch the movie carefully. Then, after a mood scale, all participants completed an ostensibly unrelated Stroop task, during which ERPs (i.e. the error-related negativity (ERN), post-error positivity (Pe) and N450) were obtained. The reappraisal group successfully decreased their sad emotion relative to the other two groups. Compared with participants in the control group and the reappraisal group, those who suppressed their emotions during the sad movie showed a reduced ERN after error commission. Participants in the suppression group also made more errors on incongruent Stroop trials than the other two groups. There were no significant main effects or interactions of group for reaction time, Pe and N450. The results suggest that reappraisal is both more effective and less resource-depleting than suppression.
Nguyen, Huong-Thao; Pham, Hong-Tham; Vo, Dang-Khoa; Nguyen, Tuan-Dung; van den Heuvel, Edwin R; Haaijer-Ruskamp, Flora M; Taxis, Katja
2014-04-01
Little is known about interventions to reduce intravenous medication administration errors in hospitals, especially in low- and middle-income countries. To assess the effect of a clinical pharmacist-led training programme on clinically relevant errors during intravenous medication preparation and administration in a Vietnamese hospital. A controlled before-and-after study with baseline and follow-up measurements was conducted in an intensive care unit (ICU) and a post-surgical unit (PSU). The intervention comprised lectures, practical ward-based teaching sessions and protocols/guidelines, and was conducted by a clinical pharmacist and a nurse. Data on intravenous medication preparation and administration errors were collected by direct observation 12 h/day for seven consecutive days. Generalised estimating equations (GEE) were used to assess the effect of the intervention on the prevalence of clinically relevant erroneous doses, corrected for confounding factors. 1204 intravenous doses were included, 516 during the baseline period (236 on ICU and 280 on PSU) and 688 during the follow-up period (407 on ICU and 281 on PSU). The prevalence of clinically relevant erroneous doses decreased significantly on the intervention ward (ICU) from 64.0% to 48.9%; the intervention was associated with a significant reduction in clinically relevant errors (p=0.013). The pharmacist-led training programme was effective, but the error rate remained relatively high. Further quality improvement strategies are needed, including changes to the working environment and promotion of a safety culture.
A study on fatigue measurement of operators for human error prevention in NPPs
Energy Technology Data Exchange (ETDEWEB)
Ju, Oh Yeon; Il, Jang Tong; Meiling, Luo; Hee, Lee Young [KAERI, Daejeon (Korea, Republic of)
2012-10-15
The identification and analysis of individual factors of operators, which are among the various causes of adverse effects on human performance, is not easy in NPPs. Individual factors for operators include work type (including shifts), environment, personality, qualification, training, education, cognition, fatigue, job stress and workload. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burnout (extreme fatigue)' is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) for improving task efficiency and preventing human errors. 'Managing Fatigue' in 10CFR26 is presented as a requirement to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labor Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. In Korea, a systematic evaluation approach is presented in the Final Safety Analysis Report (FSAR) chapter 18, Human Factors, in the licensing process. However, it focuses mostly on interface design such as the HMI (Human Machine Interface), not on individual factors. In particular, because Korea is in the process of exporting NPPs to the UAE, the development and establishment of a fatigue management technique is important and urgent in order to present the technical standard and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, advanced research is investigated to find fatigue measurement and evaluation methods for operators in high-reliability industries. Also, this study tries to review the NRC report and discuss the causal factors and
Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred
2015-01-01
Human-robot interactions are often affected by error situations caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking) when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. Builders need to consider including modules for the recognition and classification of head movements in the robot's input channels. Evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.
Energy Technology Data Exchange (ETDEWEB)
Olivares Gomez, Edgardo; Cortez, Luis A. Barbosa [Universidade Estadual de Campinas (FEAGRI/UNICAMP), SP (Brazil). Fac. de Engenharia Agricola. Lab. de Termodinamica e Energia; Alarcon, Guillermo A. Rocca; Perez, Luis E. Brossard [Universidad de Oriente, Santiago de Cuba (Cuba)
2008-07-01
Two types of biomass solid particles, elephant grass (Pennisetum purpureum Schum. variety) and sugar cane trash, were studied in the laboratory in order to obtain information about several physical and geometrical properties. In both cases, the length, breadth, and thickness of fifty particles, selected randomly from each fraction of the size class obtained by mechanical fractionation through sieves, were measured manually. A geometric model of the rectangular-base prism type was adopted because observations showed that most of the measured particles exhibited a length significantly greater than their width (l >> a). From these measurements, average values for other properties were estimated, for example, the characteristic dimension of the particle, the projected area of the rectangular prism, the area of the prism's rectangular section, the volume of the rectangular prism, shape factors, sphericity, specific surface area of the particles and equivalent diameter. A statistical analysis was performed, and empirical and semi-empirical correlation models obtained by linear regression were proposed, showing the goodness of fit of these equations to the reported experimental data. (author)
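Several of the derived properties follow directly from the rectangular-prism model. A minimal sketch, using standard definitions (equivalent diameter as the diameter of the equal-volume sphere; sphericity as the ratio of that sphere's surface to the particle's surface) and an invented particle size, not the paper's measured data:

```python
import math

def prism_particle_properties(l, a, e):
    """Geometric properties of a particle modeled as a rectangular prism
    of length l, breadth a, thickness e (consistent units, l >> a)."""
    volume = l * a * e
    surface = 2 * (l * a + l * e + a * e)
    d_eq = (6 * volume / math.pi) ** (1 / 3)  # equal-volume sphere diameter
    sphericity = math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface
    specific_surface = surface / volume       # surface per unit volume
    return dict(volume=volume, surface=surface, d_eq=d_eq,
                sphericity=sphericity, specific_surface=specific_surface)

# Hypothetical elephant-grass particle: 12 mm long, 2 mm wide, 0.5 mm thick
props = prism_particle_properties(12.0, 2.0, 0.5)
print(round(props["d_eq"], 2), round(props["sphericity"], 2))  # 2.84 0.41
```

The low sphericity (far from 1.0) reflects exactly the elongated l >> a geometry the paper reports for these particles.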
Some Asymptotic Inference in Multinomial Nonlinear Models (a Geometric Approach)
Institute of Scientific and Technical Information of China (English)
WEI Bocheng
1996-01-01
A geometric framework is proposed for multinomial nonlinear models based on a modified version of the geometric structure presented by Bates & Watts [4]. We use this geometric framework to study some asymptotic inference in terms of curvatures for multinomial nonlinear models. Our previous results [15] for ordinary nonlinear regression models are extended to multinomial nonlinear models.
Non-adiabatic geometrical quantum gates in semiconductor quantum dots
Solinas, P; Zanghì, N; Rossi, F; Solinas, Paolo; Zanardi, Paolo; Zanghì, Nino; Rossi, Fausto
2003-01-01
In this paper we study the implementation of non-adiabatic geometrical quantum gates in semiconductor quantum dots. Different quantum information encoding/manipulation schemes exploiting excitonic degrees of freedom are discussed. By means of the Aharonov-Anandan geometrical phase one can avoid the limitations of adiabatic schemes relying on the adiabatic Berry phase; fast geometrical quantum gates can in principle be implemented.
On the a priori estimation of collocation error covariance functions: a feasibility study
DEFF Research Database (Denmark)
Arabelos, D.N.; Forsberg, René; Tscherning, C.C.
2007-01-01
Error covariance estimates are necessary information for the combination of solutions resulting from different kinds of data or methods, or for the assimilation of new results in already existing solutions. Such a combination or assimilation process demands proper weighting of the data, in order ...
A study and simulation of the impact of high-order aberrations to overlay error distribution
Sun, G.; Wang, F.; Zhou, C.
2011-03-01
With the reduction of design rules, a number of corresponding new technologies, such as i-HOPC, HOWA and DBO, have been proposed and applied to eliminate overlay error. When these technologies are in use, any high-order error distribution needs to be clearly distinguished in order to remove the underlying causes. Lens aberrations are normally thought to mainly impact the Matching Machine Overlay (MMO). However, when using Image-Based Overlay (IBO) measurement tools, aberrations become the dominant influence on Single Machine Overlay (SMO) and even on stage repeatability performance. In this paper, several measurements of the error distributions of the lens of the SMEE SSB600/10 prototype exposure tool are presented. Models that characterize the primary influence of lens magnification, high-order distortion, coma aberration and telecentricity are shown. The contribution to stage repeatability (as measured with IBO tools) from the above errors was predicted with a simulator and compared to experiments. Finally, the drift of each lens distortion affecting SMO was monitored over several days and matched with the measurement results.
A Developmental Study of Children's Ability to Adopt Perspectives and Find Errors in Text.
Walczyk, Jeffrey J.; Hall, Vernon C.
1991-01-01
Children's ability to adopt perspectives and then apply schematic information on-line while listening to stories was investigated in 2 experiments with 59 second graders and 60 fourth graders. Although both subject groups had the knowledge to identify errors, fourth graders were more likely to apply such knowledge on-line during comprehension.…
Are Divorce Studies Trustworthy? The Effects of Survey Nonresponse and Response Errors
Mitchell, Colter
2010-01-01
Researchers rely on relationship data to measure the multifaceted nature of families. This article speaks to relationship data quality by examining the ramifications of different types of error on divorce estimates, models predicting divorce behavior, and models employing divorce as a predictor. Comparing matched survey and divorce certificate…
Directory of Open Access Journals (Sweden)
Tushar Kanti Bera
2011-03-01
Full Text Available A Projection Error Propagation-based Regularization (PEPR) method is proposed to improve reconstructed image quality in Electrical Impedance Tomography (EIT). A projection error is produced by the misfit between the calculated and measured data in the reconstruction process. The variation of the projection error is integrated with the response matrix in each iteration, and the reconstruction is carried out in EIDORS. The PEPR method is studied with simulated boundary data for different inhomogeneity geometries. Simulated results demonstrate that the PEPR technique improves image reconstruction precision in EIDORS and hence can be successfully implemented to increase reconstruction accuracy in EIT. doi:10.5617/jeb.158 J Electr Bioimp, vol. 2, pp. 2-12, 2011
Open quantum systems and error correction
Shabani Barzegar, Alireza
Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracy in control forces. Engineering methods to combat errors in quantum devices is highly demanding. In this thesis, I focus on realistic formulations of quantum error correction methods; a realistic formulation is one that incorporates experimental challenges. This thesis is presented in two sections: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely-positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The section on quantum error correction is presented in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC
Probabilistic quantum error correction
Fern, J; Fern, Jesse; Terilla, John
2002-01-01
There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.
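The paper's codes are quantum stabilizer codes, but the notion of a code correcting errors only with some probability already appears in the simplest classical analogue: a three-bit repetition code with majority-vote decoding fails exactly when two or more bits flip. A sketch comparing the closed form with simulation (illustrative only; it does not capture phase errors or the paper's syndrome-quality concept):

```python
import random

def three_bit_failure(p):
    """Exact logical error probability of the 3-bit repetition code under
    independent bit flips with probability p: majority vote fails when
    two or three bits flip."""
    return 3 * p**2 * (1 - p) + p**3

def monte_carlo_failure(p, trials=100000, seed=7):
    """Estimate the same quantity by simulating flips and decoding."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))
        fails += flips >= 2  # majority vote decodes to the wrong bit
    return fails / trials

p = 0.1
print(round(three_bit_failure(p), 4))  # 0.028
print(monte_carlo_failure(p))
```

For p < 1/2 the coded failure rate 3p^2(1-p) + p^3 is below the bare rate p, but never zero: correction succeeds with probability less than one, which is the regime the weaker conditions in the abstract address.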
Estimation of geometrically undistorted B(0) inhomogeneity maps.
Matakos, A; Balter, J; Cao, Y
2014-09-01
Geometric accuracy of MRI is one of the main concerns for its use as a sole image modality in precision radiation therapy (RT) planning. In a state-of-the-art scanner, system-level geometric distortions are within acceptable levels for precision RT. However, subject-induced B0 inhomogeneity may vary substantially, especially at air-tissue interfaces. Recent studies have shown that distortion levels of more than 2 mm near the sinus and ear canal are possible due to subject-induced field inhomogeneity. These distortions can be corrected with the use of accurate B0 inhomogeneity field maps. Most existing methods estimate these field maps from dual gradient-echo (GRE) images acquired at two different echo times, under the assumption that the GRE images are practically undistorted. However, distortion that may exist in the GRE images can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate correction of clinical images. This work proposes a method for estimating undistorted field maps from GRE acquisitions using an iterative joint estimation technique. The proposed method yields geometrically corrected GRE images and undistorted field maps that can also be used for the correction of images acquired by other sequences. The proposed method is validated through simulation and phantom experiments and applied to patient data. Our simulation results show that our method reduces the root-mean-squared error of the estimated field map from the ground truth ten-fold compared to the distorted field map. Both the geometric distortion and the intensity corruption (artifact) in the images caused by the B0 field inhomogeneity are corrected almost completely. Our phantom experiment showed an improvement in geometric correction of approximately 1 mm at an air-water interface when using the undistorted field map compared to a distorted field map. The proposed method for undistorted field map estimation can lead to improved geometric
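The conventional dual-echo estimate that the proposed method improves upon can be sketched as a phase-difference computation. The data below are synthetic and noise-free, and the echo times are invented; note that this baseline inherits whatever geometric distortion is present in the GRE images, which is precisely the problem the abstract addresses:

```python
import numpy as np

rng = np.random.default_rng(3)

def dual_echo_fieldmap(phase1, phase2, te1, te2):
    """Conventional field-map estimate in Hz from two GRE phase images:
    f = wrapped phase difference / (2*pi*delta_TE). Phase wrapping limits
    the unambiguous range to +/- 1/(2*delta_TE)."""
    dphi = np.angle(np.exp(1j * (phase2 - phase1)))  # wrap to (-pi, pi]
    return dphi / (2 * np.pi * (te2 - te1))

# Synthetic off-resonance map (Hz) and the echo phases it would produce
te1, te2 = 2.0e-3, 4.5e-3
f_true = 40.0 * rng.standard_normal((8, 8))
phase1 = 2 * np.pi * f_true * te1
phase2 = 2 * np.pi * f_true * te2
f_est = dual_echo_fieldmap(phase1, phase2, te1, te2)
print(np.allclose(f_est, f_true))  # exact here; real data add wraps, noise,
                                   # and the geometric distortion at issue
```

With delta_TE = 2.5 ms the unambiguous range is +/- 200 Hz; larger off-resonance wraps, which is one reason practical field mapping needs more than this two-point formula.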
Chattopadhyay, Bhargab; Kelley, Ken
2016-01-01
The coefficient of variation is an effect size measure with many potential uses in psychology and related disciplines. We propose a general theory for a sequential estimation of the population coefficient of variation that considers both the sampling error and the study cost, importantly without specific distributional assumptions. Fixed sample size planning methods, commonly used in psychology and related fields, cannot simultaneously minimize both the sampling error and the study cost. The sequential procedure we develop is the first sequential sampling procedure developed for estimating the coefficient of variation. We first present a method of planning a pilot sample size after the research goals are specified by the researcher. Then, after collecting a sample size as large as the estimated pilot sample size, a check is performed to assess whether the conditions necessary to stop the data collection have been satisfied. If not an additional observation is collected and the check is performed again. This process continues, sequentially, until a stopping rule involving a risk function is satisfied. Our method ensures that the sampling error and the study costs are considered simultaneously so that the cost is not higher than necessary for the tolerable sampling error. We also demonstrate a variety of properties of the distribution of the final sample size for five different distributions under a variety of conditions with a Monte Carlo simulation study. In addition, we provide freely available functions via the MBESS package in R to implement the methods discussed.
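The collect-check-continue loop described above can be sketched generically. The stopping criterion below (a normal-theory standard-error approximation for the CV falling below a tolerance) is a simplified stand-in for the authors' risk-function rule, and the population parameters are invented:

```python
import math
import random

rng = random.Random(42)

def sample_cv(xs):
    """Sample coefficient of variation s / mean."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return s / m

def sequential_cv(draw, pilot_n, tol):
    """Collect a pilot sample, then add one observation at a time until
    an approximate standard error of the CV (normal-theory formula, used
    here purely as an illustrative stopping rule) drops below tol."""
    xs = [draw() for _ in range(pilot_n)]
    while True:
        n, k = len(xs), sample_cv(xs)
        se = k * math.sqrt(1 / (2 * (n - 1)) + k**2 / n)
        if se < tol:
            return n, k
        xs.append(draw())  # not yet precise enough: one more observation

draw = lambda: rng.gauss(100, 15)  # hypothetical population, CV = 0.15
n_final, cv_hat = sequential_cv(draw, pilot_n=20, tol=0.01)
print(n_final, round(cv_hat, 3))
```

The point of the sequential design shows up in `n_final`: the sample stops growing as soon as the precision target is met, rather than committing in advance to a fixed n that may be too small or wastefully large.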
Byun, Tara McAllister
2017-01-01
Purpose: This study documented the efficacy of visual-acoustic biofeedback intervention for residual rhotic errors, relative to a comparison condition involving traditional articulatory treatment. All participants received both treatments in a single-subject experimental design featuring alternating treatments with blocked randomization of…
Jodaie, Mina; Farrokhi, Farahman; Zoghi, Masoud
2011-01-01
This study was an attempt to compare EFL teachers' and intermediate high school students' perceptions of written corrective feedback on grammatical errors and also to specify their reasons for choosing comprehensive or selective feedback and some feedback strategies over some others. To collect the required data, the student version of…
Frè, Pietro Giuseppe
2013-01-01
‘Gravity, a Geometrical Course’ presents general relativity (GR) in a systematic and exhaustive way, covering three aspects that are homogenized into a single texture: i) the mathematical, geometrical foundations, exposed in a self consistent contemporary formalism, ii) the main physical, astrophysical and cosmological applications, updated to the issues of contemporary research and observations, with glimpses on supergravity and superstring theory, iii) the historical development of scientific ideas underlying both the birth of general relativity and its subsequent evolution. The book is divided in two volumes. Volume One is dedicated to the development of the theory and basic physical applications. It guides the reader from the foundation of special relativity to Einstein field equations, illustrating some basic applications in astrophysics. A detailed account of the historical and conceptual development of the theory is combined with the presentation of its mathematical foundations. Differe...
Dynamics in geometrical confinement
Kremer, Friedrich
2014-01-01
This book describes the dynamics of low-molecular-weight and polymeric molecules when they are constrained under conditions of geometrical confinement. It covers geometrical confinement in different dimensionalities: (i) in nanometer-thin layers or self-supporting films (1-dimensional confinement); (ii) in pores or tubes with nanometric diameters (2-dimensional confinement); (iii) as micelles embedded in matrices (3-dimensional) or as nanodroplets. The dynamics under such conditions have been a central and much-discussed topic of intense worldwide research over the last two decades. The present book discusses how the resulting molecular mobility is influenced by the subtle counterbalance between surface effects (typically slowing down molecular dynamics through attractive guest/host interactions) and confinement effects (typically increasing the mobility). It also explains how these influences can be modified and tuned, e.g. through appropriate surface coatings, film thicknesses or pore...
Progressive geometric algorithms
Directory of Open Access Journals (Sweden)
Sander P.A. Alewijnse
2015-01-01
Full Text Available Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms for two geometric problems: computing the convex hull of a planar point set, and finding popular places in a set of trajectories.
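The progressive idea can be illustrated with a toy sketch (not the paper's algorithm, which comes with efficiency guarantees): emit the convex hull of prefixes of doubling size, so that each intermediate hull approximates the final one increasingly well. The helper names below are hypothetical.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def progressive_hull(points):
    """Yield hulls of prefixes of doubling size; the last yield is the
    hull of the full point set (a naive progressive scheme)."""
    n = 1
    while n < len(points):
        yield convex_hull(points[:n])
        n *= 2
    yield convex_hull(points)
```

Each intermediate output is a valid hull of the data seen so far, which is the defining property of a progressive algorithm; the research question is how to get such intermediate outputs with provably small extra cost.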
Geometric Time Delay Interferometry
Vallisneri, Michele
2005-01-01
The space-based gravitational-wave observatory LISA, a NASA-ESA mission to be launched after 2012, will achieve its optimal sensitivity using Time Delay Interferometry (TDI), a LISA-specific technique needed to cancel the otherwise overwhelming laser noise in the inter-spacecraft phase measurements. The TDI observables of the Michelson and Sagnac types have been interpreted physically as the virtual measurements of a synthesized interferometer. In this paper, I present Geometric TDI, a new an...
Geometric Stochastic Resonance
Ghosh, Pulak Kumar; Savel'ev, Sergey E; Nori, Franco
2015-01-01
A Brownian particle moving across a porous membrane subject to an oscillating force exhibits stochastic resonance with properties which strongly depend on the geometry of the confining cavities on the two sides of the membrane. Such a manifestation of stochastic resonance requires neither energetic nor entropic barriers, and can thus be regarded as a purely geometric effect. The magnitude of this effect is sensitive to the geometry of both the cavities and the pores, thus leading to distinctive optimal synchronization conditions.
Geometrically Consistent Mesh Modification
Bonito, A.
2010-01-01
A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.
Geometric properties of eigenfunctions
Energy Technology Data Exchange (ETDEWEB)
Jakobson, D; Nadirashvili, N [McGill University, Montreal, Quebec (Canada); Toth, John [University of Chicago, Chicago, Illinois (United States)
2001-12-31
We give an overview of some new and old results on geometric properties of eigenfunctions of Laplacians on Riemannian manifolds. We discuss properties of nodal sets and critical points, the number of nodal domains, and asymptotic properties of eigenfunctions in the high-energy limit (such as weak-* limits, the rate of growth of L^p norms, and relationships between positive and negative parts of eigenfunctions).
Geometric theory of information
2014-01-01
This book brings together geometric tools and their applications for information analysis. It collects current uses of Information Geometry Manifolds in the interdisciplinary fields of Advanced Signal, Image & Video Processing, Complex Data Modeling and Analysis, Information Ranking and Retrieval, Coding, Cognitive Systems, Optimal Control, Statistics on Manifolds, Machine Learning, Speech/Sound Recognition, and Natural Language Treatment, which are also substantially relevant for industry.
Perspective: Geometrically frustrated assemblies
Grason, Gregory M.
2016-09-01
This perspective will overview an emerging paradigm for self-organized soft materials, geometrically frustrated assemblies, where interactions between self-assembling elements (e.g., particles, macromolecules, proteins) favor local packing motifs that are incompatible with uniform global order in the assembly. This classification applies to a broad range of material assemblies including self-twisting protein filament bundles, amyloid fibers, chiral smectics and membranes, particle-coated droplets, curved protein shells, and phase-separated lipid vesicles. In assemblies, geometric frustration leads to a host of anomalous structural and thermodynamic properties, including heterogeneous and internally stressed equilibrium structures, self-limiting assembly, and topological defects in the equilibrium assembly structures. The purpose of this perspective is to (1) highlight the unifying principles and consequences of geometric frustration in soft matter assemblies; (2) classify the known distinct modes of frustration and review corresponding experimental examples; and (3) describe outstanding questions not yet addressed about the unique properties and behaviors of this broad class of systems.
Lloyd, Seth
2012-01-01
This letter analyzes the limits that quantum mechanics imposes on the accuracy to which spacetime geometry can be measured. By applying the fundamental physical bounds on measurement accuracy to ensembles of clocks and signals moving in curved spacetime (e.g., the global positioning system), I derive a covariant version of the quantum geometric limit: the total number of ticks of clocks and clicks of detectors that can be contained in a four-volume of spacetime of radius r and temporal extent t is less than or equal to rt/(π x_P t_P), where x_P and t_P are the Planck length and time. The quantum geometric limit bounds the number of events or 'ops' that can take place in a four-volume of spacetime: each event is associated with a Planck-scale area. Conversely, I show that if each quantum event is associated with such an area, then Einstein's equations must hold. The quantum geometric limit is consistent with and complementary to the holographic bound, which limits the number of bits that can exist within a spat...
The use of a covariate reduces experimental error in nutrient digestion studies in growing pigs.
Jacobs, B M; Patience, J F; Lindemann, M D; Stalder, K J; Kerr, B J
2013-02-01
Covariance analysis limits error, the degree of nuisance variation, and overparameterizing factors to accurately measure treatment effects. Data dealing with growth, carcass composition, and genetics often use covariates in data analysis. In contrast, nutritional studies typically do not. The objectives of this study were to 1) determine the effect of feeding diets containing dehulled, degermed corn, corn-soybean meal, or distillers dried grains with solubles on nutrient digestibility coefficients, 2) evaluate potential interactive effects between initial and final treatment diets on the final treatment diet effects, and 3) determine if initial criterion (digestibility or physiological values) would effectively correct for variation among pigs that could thereby affect final treatment diet digestibility coefficients. Seventy-two crossbred barrows [(Yorkshire × Landrace × Duroc) × Chester White] were randomly assigned to 1 of the 3 diets within initial dietary treatment for Phase-2 (P2; 14 d). Fecal and blood samples were collected after feeding the Phase-1 (P1) diets for 14 d (trial d-14) and on d 28 after feeding the P2 diets for 14 d. Fecal samples were dried and analyzed for C, ether extract, GE, N, NDF, P, and S. Plasma samples were analyzed for plasma urea N and triacylglycerides. Pigs were fed diets that differed widely in CP, NDF, and P, resulting in an overall decrease in C, GE, NDF, N, P, and S digestibility and plasma urea N and triacylglycerides as dietary fiber increased in P1 and P2 (P < 0.10). There were no differences in P2 criteria due to blocking for the P1 diet. There tended (P = 0.10 to 0.20) to be P1 × P2 interactions for NDF and S, indicating that the response of pigs to the P2 diet may depend on the P1 diet. In contrast, when the P1 variable was used as a covariate for P2 data, it was statistically significant for GE, NDF, N, S, and plasma urea N (P < 0.10) whereas C and ether extract showed tendencies but not for P digestibility or plasma
Qiu, Weiliang; Rosner, Bernard
2010-01-01
The use of the cumulative average model to investigate the association between disease incidence and repeated measurements of exposures in medical follow-up studies can be dated back to the 1960s (Kahn and Dawber, J Chron Dis 19:611-620, 1966). This model takes advantage of all prior data and thus should provide a statistically more powerful test of disease-exposure associations. Measurement error in covariates is common for medical follow-up studies. Many methods have been proposed to correct for measurement error. To the best of our knowledge, no methods have been proposed yet to correct for measurement error in the cumulative average model. In this article, we propose a regression calibration approach to correct relative risk estimates for measurement error. The approach is illustrated with data from the Nurses' Health Study relating incident breast cancer between 1980 and 2002 to time-dependent measures of calorie-adjusted saturated fat intake, controlling for total caloric intake, alcohol intake, and baseline age.
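Regression calibration in its simplest univariate form rescales the attenuated slope by the reliability ratio λ = Var(X)/(Var(X) + Var(U)). The toy simulation below is an assumed setup for illustration, not the cumulative average model or the Nurses' Health Study analysis:

```python
import random

def ols_slope(xs, ys):
    """Simple least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def calibration_demo(n=20000, seed=0):
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]           # true exposure
    w = [xi + rng.gauss(0, 1) for xi in x]            # error-prone surrogate
    y = [2.0 * xi + rng.gauss(0, 0.5) for xi in x]    # true slope = 2
    naive = ols_slope(w, y)                           # attenuated toward 2 * lambda
    lam = 1.0 / (1.0 + 1.0)                           # reliability, here known to be 0.5
    return naive, naive / lam                         # regression-calibrated slope
```

The naive slope is biased toward zero by exactly the reliability ratio; dividing by λ (estimated from validation or reliability data in practice) recovers an approximately unbiased relative-risk-scale estimate.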
Deformable image registration with geometric changes
Institute of Scientific and Technical Information of China (English)
Yu LIU; Bo ZHU
2015-01-01
Geometric changes present a number of difficulties in deformable image registration. In this paper, we propose a global deformation framework to model geometric changes whilst promoting a smooth transformation between source and target images. To achieve this, we have developed an innovative model which significantly reduces the side effects of geometric changes in image registration, and thus improves the registration accuracy. Our key contribution is the introduction of a sparsity-inducing norm, which is typically L1 norm regularization targeting regions where geometric changes occur. This preserves the smoothness of global transformation by eliminating local transformation under different conditions. Numerical solutions are discussed and analyzed to guarantee the stability and fast convergence of our algorithm. To demonstrate the effectiveness and utility of this method, we evaluate it on both synthetic data and real data from traumatic brain injury (TBI). We show that the transformation estimated from our model is able to reconstruct the target image with lower instances of error than a standard elastic registration model.
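An L1 (sparsity-inducing) penalty of the kind described is typically handled with its proximal operator, soft thresholding, which zeroes out small local displacements and shrinks large ones. This is a generic sketch of that building block under assumed values, not the authors' full registration solver:

```python
def soft_threshold(v, t):
    """Proximal operator of t * |v|: shrink toward zero, zero out small values."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

# Applied to a toy 1D displacement field: only the large local
# deformations (where a geometric change plausibly occurred) survive,
# keeping the rest of the transformation smooth.
field = [0.05, 2.5, -0.1, -1.75]
sparse = [soft_threshold(d, 0.5) for d in field]   # -> [0.0, 2.0, 0.0, -1.25]
```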
Study on Cell Error Rate of a Satellite ATM System Based on CDMA
Institute of Scientific and Technical Information of China (English)
赵彤宇; 张乃通
2003-01-01
In this paper, the cell error rate (CER) of a CDMA-based satellite ATM system is analyzed. Two fading models, the partial fading model and the total fading model, are presented according to multi-path propagation fading and the shadow effect. Based on the total shadow model, the relation between CER and the number of subscribers at various elevations, under 2D-RAKE receiving and non-diversity receiving, is obtained. The impact of pseudo-noise (PN) code length on the cell error rate is also considered. The results show that maximum-likelihood combination of multi-path signals does not improve system performance when multiple access interference (MAI) is small; on the contrary, performance may even be degraded.
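The relation between bit-level errors and cell errors can be made concrete with a back-of-the-envelope model: an ATM cell is 53 bytes (424 bits), so with independent bit errors and no error correction, CER = 1 − (1 − BER)^424. This deliberately ignores the coding, fading and interference structure analyzed in the paper:

```python
def cell_error_rate(ber, bits_per_cell=53 * 8):
    """CER for a 53-byte ATM cell, assuming independent bit errors
    and no error correction (a simplified illustrative model)."""
    return 1.0 - (1.0 - ber) ** bits_per_cell
```

For small BER this is approximately 424 × BER, which is why even modest link-level error rates translate into noticeable cell loss.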
Body composition in young adults with inborn errors of protein metabolism--a pilot study.
Wilcox, G; Strauss, B J G; Francis, D E M; Upton, H; Boneh, A
2005-01-01
The natural history of inborn errors of protein metabolism and the long-term effects of prescribed semisynthetic therapeutic diets are largely unknown. We assessed body composition, measuring body-fat mass and distribution, fat-free mass, total body protein, total body potassium, bone density and skeletal muscle mass, in young adults (age > 18 years; 6 female, 5 male) with inborn errors of protein metabolism maintained on long-term low-protein diets, compared with controls. Female patients were significantly shorter (159.4 cm vs 169.2 cm, p = 0.013) and had higher BMI (25.3 vs 22.0 kg/m2, p metabolic syndrome and cardiovascular disease in this population.
Directory of Open Access Journals (Sweden)
Juan Mario Torres Nova
2010-05-01
Full Text Available Gaussian minimum shift keying (GMSK) and differential binary phase shift keying (DBPSK) are two digital modulation schemes which are frequently used in radio communication systems; however, their benefits (spectral efficiency, low bit error rate, low inter-symbol interference, etc.) are interdependent: optimising one parameter creates problems for another. For example, the GMSK scheme succeeds in reducing bandwidth by introducing a Gaussian filter into an MSK (minimum shift keying) modulator, in exchange for increased inter-symbol interference in the system. The DBPSK scheme leads to lower error probability while occupying more bandwidth; it likewise facilitates synchronous data transmission due to the receiver's bit delay when recovering a signal.
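The trade-off can be quantified with the standard AWGN bit-error-rate expressions: DBPSK has the exact closed form Pb = ½·exp(−Eb/N0), while GMSK is commonly approximated as Q(√(2αEb/N0)) with a degradation factor α that depends on the Gaussian filter's BT product (α ≈ 0.68 for BT = 0.25 is a commonly quoted value, taken here as an assumption):

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def dbpsk_ber(ebn0_db):
    """Exact DBPSK bit error rate in AWGN: 0.5 * exp(-Eb/N0)."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 0.5 * math.exp(-ebn0)

def gmsk_ber(ebn0_db, alpha=0.68):
    """Approximate GMSK BER, Q(sqrt(2 * alpha * Eb/N0));
    alpha ~ 0.68 assumed for BT = 0.25."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return q_func(math.sqrt(2.0 * alpha * ebn0))
```

Plotting both curves over Eb/N0 makes the interdependence visible: GMSK pays a fraction of a dB in required Eb/N0 for its narrower spectrum.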
Sarazan, R Dustan
2014-01-01
…these factors are understood, a pressure sensing and measurement system can be selected that is optimized for the experimental model being studied, thus eliminating errors or inaccurate results. Copyright © 2014. Published by Elsevier Inc.
Geometric Modelling by Recursively Cutting Vertices
Institute of Scientific and Technical Information of China (English)
吕伟; 梁友栋; et al.
1989-01-01
In this paper, a new method for curve and surface modelling is introduced which generates curves and surfaces by recursively cutting and grinding polygons and polyhedra. It is a generalization of the existing corner-cutting methods. Many properties, such as geometric continuity, representation and shape preservation, as well as the algorithm, are studied; these show that such curves and surfaces are suitable for geometric design in CAD, computer graphics and their application fields.
Refractive error study in young subjects: results from a rural area in Paraguay
Directory of Open Access Journals (Sweden)
Isabel Signes-Soler
2017-03-01
Full Text Available AIM: To evaluate the distribution of refractive error in young subjects in a rural area of Paraguay in the context of an international cooperation campaign for the prevention of blindness. METHODS: A sample of 1466 young subjects (ranging from 3 to 22 years old, with a mean age of 11.21±3.63 years) were examined to assess their distance visual acuity (VA) and refractive error. The first screening examination, performed by trained volunteers, included visual acuity testing, autokeratometry and non-cycloplegic autorefraction. Inclusion criteria for a second complete cycloplegic eye examination by an optometrist were VA <20/25 (0.10 logMAR or 0.8 decimal) and/or corneal astigmatism ≥1.50 D. RESULTS: An uncorrected distance VA of 0 logMAR (1.0 decimal) was found in 89.2% of children. VA <20/25 and/or corneal astigmatism ≥1.50 D was found in 3.9% of children (n=57), with a prevalence of hyperopia of 5.2% in this specific group (0.2% of the total). Furthermore, myopia (spherical equivalent ≤-0.5 D) was found in 37.7% of the refracted children (0.5% of the total). The prevalence of refractive astigmatism (cylinder ≤-1.50 D) was 15.8% (0.6% of the total). Visual impairment (VI) (0.05≤VA≤0.3) was found in 12/114 refracted eyes (0.4% of the total). Main causes of VI were refractive error (58%, 7/12), retinal problems (17%, 2/12), albinism (17%, 2/12) and unknown (8%, 1/12). CONCLUSION: A low prevalence of refractive error has been found in this rural area of Paraguay, with a higher prevalence of myopia than of hyperopia.
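The refraction-based definitions used in such studies reduce to simple arithmetic on the spherocylindrical prescription. In this sketch, the myopia cutoff (SE ≤ −0.5 D) comes from the study; the +0.5 D hyperopia cutoff is an assumed, commonly used threshold, and the function names are hypothetical:

```python
def spherical_equivalent(sphere_d, cylinder_d):
    """SE = sphere + cylinder / 2, all in diopters."""
    return sphere_d + cylinder_d / 2.0

def classify_refraction(sphere_d, cylinder_d):
    se = spherical_equivalent(sphere_d, cylinder_d)
    if se <= -0.5:
        return "myopia"      # cutoff used in the study
    if se >= 0.5:
        return "hyperopia"   # assumed cutoff, not stated in the abstract
    return "emmetropia"
```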
A research agenda: Does geocoding positional error matter in health GIS studies?
Jacquez, Geoffrey
2012-01-01
Until recently, little attention has been paid to geocoding positional accuracy and its impacts on accessibility measures; estimates of disease rates; findings of disease clustering; spatial prediction and modeling of health outcomes; and estimates of individual exposures based on geographic proximity to pollutant and pathogen sources. It is now clear that positional errors can result in flawed findings and poor public health decisions. Yet the current state-of-practice is to ignore geocoding...
Directory of Open Access Journals (Sweden)
Carlos de Castro
2008-06-01
Full Text Available In this study we analyze the errors made by preservice elementary teachers in adjusting positional value in computational estimation tasks involving multiplication and division with whole numbers and decimals. An estimation test composed of 24 direct computations, without context, was administered to 26 future teachers. We then interviewed them to determine the errors they made, finding 8 different types of error. The most frequent errors were those due to miscounting the positions used to establish the order of magnitude of the result, and those produced when dividing a number by a larger one by adding an extra zero to the quotient. Placing the decimal point in the result was, in all cases, a great source of difficulty.
Experimental and numerical study of error fields in the CNT stellarator
Hammond, K. C.; Anichowski, A.; Brenner, P. W.; Pedersen, T. S.; Raftopoulos, S.; Traverso, P.; Volpe, F. A.
2016-07-01
Sources of error fields were indirectly inferred in a stellarator by reconciling measured and computed flux surfaces. Sources considered so far include the displacements and tilts of the four circular coils featured in the simple CNT stellarator. The flux surfaces were measured by means of an electron beam and a fluorescent rod, and were computed by means of a Biot-Savart field-line tracing code. If the ideal coil locations and orientations are used in the computation, agreement with measurements is poor. Discrepancies are ascribed to errors in the positioning and orientation of the in-vessel interlocked coils. To that end, an iterative numerical method was developed. A Newton-Raphson algorithm searches for the coil displacements and tilts that minimize the discrepancy between the measured and computed flux surfaces. This method was verified by misplacing and tilting the coils in a numerical model of CNT, calculating the flux surfaces that they generated, and testing the algorithm's ability to deduce the coils' displacements and tilts. Subsequently, the numerical method was applied to the experimental data, arriving at a set of coil displacements whose resulting field errors exhibited significantly improved agreement with the experimental results.
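The iterative search can be illustrated with a one-parameter toy: Newton-Raphson on the derivative of a squared-discrepancy objective, using finite-difference derivatives. This is a schematic stand-in for the paper's multi-parameter coil displacement/tilt search, with hypothetical names and an assumed toy model:

```python
import math

def newton_min(f, x0, h=1e-5, tol=1e-10, max_iter=50):
    """Minimize a smooth scalar function by Newton-Raphson on f',
    using central finite differences for f' and f''."""
    x = x0
    for _ in range(max_iter):
        d1 = (f(x + h) - f(x - h)) / (2.0 * h)
        d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
        if abs(d2) < 1e-15:
            break
        step = d1 / d2
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy problem: the "measured" surface points equal the "computed" ones
# shifted by 0.3; recover the shift minimizing the squared discrepancy.
measured = [math.sin(0.1 * i) + 0.3 for i in range(50)]

def discrepancy(shift):
    return sum((m - (math.sin(0.1 * i) + shift)) ** 2
               for i, m in enumerate(measured))

best_shift = newton_min(discrepancy, 0.0)
```

In the real problem the unknowns are several displacements and tilts per coil and each objective evaluation requires field-line tracing, but the structure of the search is the same.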
Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek
2016-07-01
Effective and efficient medication reporting processes are essential in promoting patient safety. Few qualitative studies have explored reporting of medication errors by health professionals, and none have made reference to behavioural theories. The objective was to describe and understand the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE). This was a qualitative study comprising face-to-face, semi-structured interviews within three major medical/surgical hospitals of Abu Dhabi, the UAE. Health professionals were sampled purposively in strata of profession and years of experience. The semi-structured interview schedule focused on behavioural determinants around medication error reporting, facilitators, barriers and experiences. The Theoretical Domains Framework (TDF; a framework of theories of behaviour change) was used as a coding framework. Ethical approval was obtained from a UK university and all participating hospital ethics committees. Data saturation was achieved after interviewing ten nurses, ten pharmacists and nine physicians. Whilst it appeared that patient safety and organisational improvement goals and intentions were behavioural determinants which facilitated reporting, there were key determinants which deterred reporting. These included the beliefs of the consequences of reporting (lack of any feedback following reporting and impacting professional reputation, relationships and career progression), emotions (fear and worry) and issues related to the environmental context (time taken to report). These key behavioural determinants which negatively impact error reporting can facilitate the development of an intervention, centring on organisational safety and reporting culture, to enhance reporting effectiveness and efficiency.
DEFF Research Database (Denmark)
Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René;
2013-01-01
…injury in car crashes. The calculated odds ratios in these studies showed large variations, despite the use of uniform guidelines for the study designs. The main objective of the present article is to provide insight into the presence of random and systematic errors in the six DRUID case-control studies. Relevant information was gathered from the DRUID reports for eleven indicators of errors. The results showed that differences between the odds ratios in the DRUID case-control studies may indeed be (partially) explained by random and systematic errors. Selection bias and errors due to small sample sizes and cell counts were the most frequently observed errors in the six DRUID case-control studies. Therefore, it is recommended that epidemiological studies assessing the risk of psychoactive substances in traffic pay specific attention to avoiding these potential sources of random and systematic errors…
Wang, Tiansheng; Benedict, Neal; Olsen, Keith M; Luan, Rong; Zhu, Xi; Zhou, Ningning; Tang, Huilin; Yan, Yingying; Peng, Yao; Shi, Luwen
2015-10-01
Pharmacists are integral members of the multidisciplinary team for critically ill patients. Multiple nonrandomized controlled studies have evaluated the outcomes of pharmacist interventions in the intensive care unit (ICU). This systematic review focuses on controlled clinical trials evaluating the effect of pharmacist intervention on medication errors (MEs) in ICU settings. Two independent reviewers searched Medline, Embase, and Cochrane databases. The inclusion criteria were nonrandomized controlled studies that evaluated the effect of pharmacist services vs no intervention on ME rates in ICU settings. Four studies were included in the meta-analysis. Results suggest that pharmacist intervention has no significant contribution to reducing general MEs, although pharmacist intervention may significantly reduce preventable adverse drug events and prescribing errors. This meta-analysis highlights the need for high-quality studies to examine the effect of the critical care pharmacist.
Directory of Open Access Journals (Sweden)
Trimarchi Francesco
2006-02-01
Full Text Available Abstract Background Most patients with growth hormone deficiency (GHD) show a high body mass index. Overweight subjects, but also GHD patients, have been demonstrated to have a high left ventricular mass index (LVMi) and abnormal LV geometric remodeling. We sought to study these characteristics in a group of GHD patients, in an attempt to establish the BMI-independent role of GHD. Methods Fifty-four patients, 28 F and 26 M, aged 45.9 ± 13.1, with adult-onset GHD (pituitary adenomas 48.2%, empty sella 27.8%, pituitary inflammation 5.5%, craniopharyngioma 3.7%, unidentified pathogenesis 14.8%) were enrolled. To minimize any possible interference of BMI with the aim of this study, the control group included 20 age- and weight-matched healthy subjects. The LV geometry was identified by the relationship between LVMi (cut-off 125 g/m2) and relative wall thickness (cut-off 0.45) at echocardiography. Results There was no significant between-group difference in resting cardiac morphology and function, nor when considering age-related discrepancy. The majority of patients had normal-low LVM/LVMi, but about one fourth of them showed higher values. These findings correlated with relatively high circulating IGF-1 and systolic blood pressure at rest. The main LV geometric pattern was eccentric hypertrophy, found in 22% of the GHD population (26% of those with severe GHD) and in 15% of controls (p = NS). Conclusion Despite the lack of significant differences in resting LV morphology and function, about 25% of GHD patients showed high LVMi (consisting of eccentric hypertrophy), not dissimilarly to overweight controls. This finding, whose prognostic role is well known in obese and hypertensive patients, is worth investigating in GHD patients through wider controlled trials.
Bell, Thomas L.; Abdullah, A.; Martin, Russell L.; North, Gerald R.
1990-01-01
Estimates of monthly average rainfall based on satellite observations from a low earth orbit will differ from the true monthly average because the satellite observes a given area only intermittently. This sampling error inherent in satellite monitoring of rainfall would occur even if the satellite instruments could measure rainfall perfectly. The size of this error is estimated for a satellite system being studied at NASA, the Tropical Rainfall Measuring Mission (TRMM). First, the statistical description of rainfall on scales from 1 to 1000 km is examined in detail, based on rainfall data from the Global Atmospheric Research Project Atlantic Tropical Experiment (GATE). A TRMM-like satellite is flown over a two-dimensional time-evolving simulation of rainfall using a stochastic model with statistics tuned to agree with GATE statistics. The distribution of sampling errors found from many months of simulated observations is found to be nearly normal, even though the distribution of area-averaged rainfall is far from normal. For a range of orbits likely to be employed in TRMM, sampling error is found to be less than 10 percent of the mean for rainfall averaged over a 500 x 500 sq km area.
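The core idea, that intermittent satellite visits of a continuously evolving rain field produce a spread of monthly-mean estimates around the truth even with perfect instruments, can be mimicked with a toy Monte Carlo. The AR(1)-style positive rain proxy below is an illustrative assumption, not the GATE-tuned stochastic model used in the paper:

```python
import math
import random

def sampling_error_sd(n_months=500, steps=720, visit_every=12, seed=7):
    """Monte Carlo spread of the relative error made when a monthly mean
    is built only from intermittent 'overpass' samples of a toy rain
    series (illustrative proxy, not a calibrated rainfall model)."""
    rng = random.Random(seed)
    rel_errors = []
    for _ in range(n_months):
        r, series = 1.0, []
        for _ in range(steps):  # e.g. hourly steps in a 30-day month
            r = max(0.0, 0.9 * r + rng.gauss(0.1, 0.3))
            series.append(r)
        true_mean = sum(series) / steps
        visits = series[::visit_every]        # satellite overpasses only
        estimate = sum(visits) / len(visits)
        rel_errors.append((estimate - true_mean) / true_mean)
    m = sum(rel_errors) / len(rel_errors)
    return math.sqrt(sum((e - m) ** 2 for e in rel_errors) / len(rel_errors))
```

The histogram of these relative errors is close to normal for many choices of the underlying process, echoing the paper's finding that sampling errors are nearly normal even when area-averaged rainfall is far from it.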
Directory of Open Access Journals (Sweden)
Finch Stephen J
2005-04-01
Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
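The key quantity, the non-centrality parameter of the Pearson test as a function of the genotype frequencies in the two labeled groups, can be computed directly. The sketch below parametrizes misclassification as the fraction of each labeled group drawn from the other true phenotype, a simplification of the paper's prevalence-dependent misclassification probabilities; the names are hypothetical:

```python
def chisq_ncp(p_case, p_ctrl, n_case, n_ctrl):
    """Non-centrality parameter of the Pearson chi-square test comparing
    two multinomial genotype distributions with fixed group sizes."""
    n = n_case + n_ctrl
    pooled = [(n_case * a + n_ctrl * b) / n for a, b in zip(p_case, p_ctrl)]
    return sum(n_case * (a - pk) ** 2 / pk + n_ctrl * (b - pk) ** 2 / pk
               for a, b, pk in zip(p_case, p_ctrl, pooled))

def contaminate(p_true, p_other, frac_wrong):
    """Genotype frequencies in a labeled group in which a fraction
    frac_wrong of members truly has the other phenotype."""
    return [(1 - frac_wrong) * a + frac_wrong * u
            for a, u in zip(p_true, p_other)]
```

With p_aff = [0.5, 0.5], p_unaff = [0.3, 0.7] and 500 subjects per group, 10% contamination in each labeled group shrinks the non-centrality parameter from about 41.7 to about 26.7, which is exactly the kind of power loss the paper quantifies.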
Geometric transition in Non-perturbative Topological string
Sugimoto, Yuji
2016-01-01
We study a geometric transition in non-perturbative topological string. We consider two cases. One is the geometric transition from the closed topological string on the local $\mathcal{B}_{3}$ to the closed topological string on the resolved conifold. The other is the geometric transition from the closed topological string on the local $\mathcal{B}_{3}$ to the open topological string on the resolved conifold with a toric A-brane. We find that, in both cases, the geometric transition can be applied for the non-perturbative topological string. We also find the corrections to the values of the Kähler parameters at which the geometric transition occurs.
Geometric Number Systems and Spinors
Sobczyk, Garret
2015-01-01
The real number system is geometrically extended to include three new anticommuting square roots of plus one, each such root representing the direction of a unit vector along the orthonormal coordinate axes of Euclidean 3-space. The resulting geometric (Clifford) algebra provides a geometric basis for the famous Pauli matrices which, in turn, proves the consistency of the rules of geometric algebra. The flexibility of the concept of geometric numbers opens the door to new understanding of the nature of space-time, and of Pauli and Dirac spinors as points on the Riemann sphere, including Lorentz boosts.
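The abstract above notes that the Pauli matrices realize the three anticommuting square roots of +1 that generate the geometric algebra of Euclidean 3-space. A minimal, self-contained check of those defining relations (a sketch, not code from the paper):

```python
# Verify the defining relations of geometric algebra via the Pauli matrices:
# each sigma_k squares to the identity, and distinct pairs anticommute.

def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    """2x2 matrix sum."""
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

I    = [[1, 0], [0, 1]]
zero = [[0, 0], [0, 0]]
s1   = [[0, 1], [1, 0]]          # sigma_x
s2   = [[0, -1j], [1j, 0]]       # sigma_y
s3   = [[1, 0], [0, -1]]         # sigma_z

# Square roots of +1: sigma_k^2 = I
for s in (s1, s2, s3):
    assert matmul(s, s) == I

# Anticommutation: sigma_j sigma_k + sigma_k sigma_j = 0 for j != k
for a, b in ((s1, s2), (s1, s3), (s2, s3)):
    assert add(matmul(a, b), matmul(b, a)) == zero

print("Pauli matrices satisfy the geometric-algebra relations")
```

Passing both assertion loops is exactly the consistency statement the abstract makes: the matrix representation proves the algebraic rules are realizable.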
Ambrosetti, Antonio; Malchiodi, Andrea
2009-01-01
This volume contains lecture notes on some topics in geometric analysis, a growing mathematical subject which uses analytical techniques, mostly from partial differential equations, to treat problems in differential geometry and mathematical physics. The presentation of the material should be accessible to non-experts in the field, since it is didactic in nature. The reader is provided with a survey containing some of the most exciting topics in the field, together with a series of techniques used to treat such problems.
Corrochano, Eduardo Bayro
2010-01-01
This book presents contributions from a global selection of experts in the field. This useful text offers new insights and solutions for the development of theorems, algorithms and advanced methods for real-time applications across a range of disciplines. Written in an accessible style, the discussion of all applications is enhanced by the inclusion of numerous examples, figures and experimental analysis. Features: provides a thorough discussion of several tasks for image processing, pattern recognition, computer vision, robotics and computer graphics using the geometric algebra framework; int
Shapere, Alfred D
1989-01-01
During the last few years, considerable interest has been focused on the phase that waves accumulate when the equations governing the waves vary slowly. The recent flurry of activity was set off by a paper by Michael Berry, where it was found that the adiabatic evolution of energy eigenfunctions in quantum mechanics contains a phase of geometric origin (now known as 'Berry's phase') in addition to the usual dynamical phase derived from Schrödinger's equation. This observation, though basically elementary, seems to be quite profound. Phases with similar mathematical origins have been identified
A Study on Measurement Error during Alternating Current Induced Voltage Tests on Large Transformers
Institute of Scientific and Technical Information of China (English)
WANG Xuan; LI Yun-ge; CAO Xiao-long; LIU Ying
2006-01-01
The large transformer is pivotal equipment in an electric power supply system. Its partial discharge test and induced voltage withstand test are carried out at a frequency of about twice the working frequency. If the magnetizing inductance cannot compensate for the stray capacitance, the test sample turns into a capacitive load and a capacitive rise appears in the testing circuit. For self-restoring insulation, IEC 60-1 recommends that an unapproved measuring system be calibrated against an approved system at a voltage not less than 50% of the rated testing voltage, with the result then extrapolated linearly. It has been found that this method leads to great error due to the capacitive rise if it is not used correctly during a withstand voltage test under certain testing conditions, especially for tests on high-voltage transformers with large capacity. Since the withstand voltage test is the most important means of examining the operational reliability of a transformer, and it can be destructive to the insulation, precise measurement must be guaranteed. In this paper a factor, named the capacitive rise factor, is introduced to assess the rise. The voltage measurement error during calibration is determined by the parameters of the test sample and the testing facilities, as well as by the measuring point. Based on theoretical analysis, a novel method is suggested and demonstrated to estimate the error by using the capacitive rise factor and other known parameters of the testing circuit.
Theoretical and experimental studies of error in square-law detector circuits
Stanley, W. D.; Hearn, C. P.; Williams, J. B.
1984-01-01
Square-law detector circuits were investigated to determine errors from the ideal input/output characteristic function. The nonlinear circuit response is analyzed by a power-series expansion containing terms through the fourth degree, from which the significant deviation from square law can be predicted. Both fixed-bias-current and flexible-bias-current configurations are considered; the latter corresponds to the situation where the mean current can change with the application of a signal. Experimental investigations of the circuit arrangements are described. Agreement between the analytical models and the experimental results is established, and factors which contribute to differences under certain conditions are outlined.
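The power-series analysis described above can be sketched numerically: model the detector response through degree four and measure how far it departs from the ideal quadratic term. The coefficients below are purely illustrative, not values from the paper.

```python
# Sketch of a square-law-error calculation: the detector response is a power
# series through the fourth degree; the deviation from ideal square law is the
# fractional contribution of the non-square terms. Coefficients are hypothetical.

def detector_output(v, a1, a2, a3, a4):
    """Power-series model of the detector response through degree 4."""
    return a1 * v + a2 * v**2 + a3 * v**3 + a4 * v**4

def square_law_error(v, a1, a2, a3, a4):
    """Fractional deviation of the response from the ideal a2*v^2 term."""
    ideal = a2 * v**2
    return (detector_output(v, a1, a2, a3, a4) - ideal) / ideal

# Illustrative coefficients: small linear leakage plus quartic curvature.
err = square_law_error(v=0.1, a1=1e-3, a2=1.0, a3=0.0, a4=0.5)
print(f"fractional deviation from square law: {err:.4f}")
```

At small input amplitudes the linear leakage term dominates the error, while at large amplitudes the cubic and quartic terms take over, which is the qualitative behavior such an expansion predicts.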
Hruby, R. J.; Bjorkman, W. S.; Schmidt, S. F.; Carestia, R. A.
1979-01-01
Algorithms were developed that attempt to identify which sensor in a tetrad configuration has experienced a step failure. An algorithm is also described that provides a measure of the confidence with which the correct identification was made. Experimental results are presented from real-time tests conducted on a three-axis motion facility utilizing an ortho-skew tetrad strapdown inertial sensor package. The effects of prediction errors and of quantization on correct failure identification are discussed as well as an algorithm for detecting second failures through prediction.
Error Analysis and Its Implication
Institute of Scientific and Technical Information of China (English)
崔蕾
2007-01-01
Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of error analysis as both theory and approach, but also carry implications for second language learning.
Directory of Open Access Journals (Sweden)
Mpho M. Makola
2016-01-01
A potent plant-derived HIV-1 inhibitor, 3,5-dicaffeoylquinic acid (diCQA), has been shown to undergo isomerisation upon UV exposure, where the naturally occurring 3trans,5trans-diCQA isomer gives rise to the 3cis,5trans-diCQA, 3trans,5cis-diCQA, and 3cis,5cis-diCQA isomers. In this study, inhibition of HIV-1 integrase (INT) by the UV-induced isomers was investigated using molecular docking methods. Density functional theory (DFT) models were used for geometry optimization of the 3,5-diCQA isomers. The YASARA and AutoDock VINA software packages were then used to determine the binding interactions between the HIV-1 INT catalytic domain and the 3,5-diCQA isomers, and the Discovery Studio suite was used to visualise the interactions between the isomers and the protein. The geometrical isomers of 3,5-diCQA were all found to bind to the catalytic core domain of the INT enzyme. Moreover, the cis geometrical isomers were found to interact with the metal cofactor of HIV-1 INT, a phenomenon which has been linked to antiviral potency. Furthermore, the 3trans,5cis-diCQA isomer was also found to interact with both LYS156 and LYS159, which are important residues for viral DNA integration. The differences in binding modes of these naturally coexisting isomers may allow wider synergistic activity, which may be beneficial in comparison to the activities of each individual isomer.
Directory of Open Access Journals (Sweden)
Nasrin Zahmatkeshan
2010-09-01
Background: Medication errors, the inappropriate use of drugs, can lead to harmful and serious consequences, and many factors contribute to their incidence. To investigate these factors, a descriptive analytic study was conducted to assess clinical staff medication errors in Bushehr medical centers. Methods: The participants were 400 clinical staff, including nurses, midwives and nurse assistants, who completed a medication-errors questionnaire designed for the study. The questionnaire had two parts: part one collected demographic data, and part two assessed factors influencing medication errors in six domains. Results: Half of the participants (49.9%) had made medication errors, most often in dosage (37.7%) and then in type of drug (27.7%). 73.3% of participants reported their errors; in unreported cases the most common reason was fear of managers. According to the participants, the factors contributing to medication errors were physician factors, including illegible orders in the patient file (24.94%); nurse factors, including incorrect documentation (24.38%); interpersonal relationships (19.45%); an inappropriate environment (15.3%); knowledge deficit and lack of experience (11.23%); and stressful events (4.66%). There was no statistically significant correlation with job position or shift work. Conclusion: The results show that medication errors are common and that human factors are the most important contributors to these errors.
Energy Technology Data Exchange (ETDEWEB)
Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Tapparo, Andrea; Pastore, Paolo, E-mail: paolo.pastore@unipd.it
2015-05-01
In this paper the detection limit was estimated for signals affected by two error contributions, namely instrumental errors and operational (non-instrumental) errors. The detection limit was obtained theoretically following the hypothesis-testing scheme implemented with the calibration-curve methodology. The experimental calibration design was based on J standards measured I times, with non-instrumental errors affecting each standard systematically but randomly among the J levels. A two-component variance regression was performed to determine the calibration curve and to define the detection limit under these conditions. The detection limit values obtained from the calibration of 41 elements at trace levels by ICP-MS were larger than those obtainable from a one-component variance regression. The role of reagent impurities in the instrumental errors was ascertained and taken into account. Environmental pollution was studied as a source of non-instrumental errors; its role was evaluated by Principal Component Analysis (PCA) applied to a series of nine calibrations performed over fourteen months. The influence of the seasonality of environmental pollution on the detection limit was evident for many elements usually present in urban air particulate. The results clearly indicate the need to use the two-component variance regression approach for the calibration of all elements usually present in the environment at significant concentration levels. - Highlights: • The limit of detection was obtained from a two-variance-component regression. • Calibration data may be affected by instrumental and operational-condition errors. • The calibration model was applied to determine 41 elements at trace level by ICP-MS. • Non-instrumental errors were identified by PCA.
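The practical consequence of the two-component variance model can be sketched with the standard hypothesis-testing form of the detection limit: the blank standard deviation combines both independent error components in quadrature, so ignoring the operational component underestimates the limit. The formula and numbers below are an illustrative simplification, not the paper's exact regression-based expressions.

```python
# Sketch: detection limit x_D = (z_alpha + z_beta) * s_blank / slope, where the
# blank standard deviation combines instrumental and operational (non-
# instrumental) error components in quadrature. Values are illustrative.

import math

def detection_limit(slope, s_instr, s_oper, z_alpha=1.645, z_beta=1.645):
    """Detection limit for false-positive/false-negative rates alpha, beta."""
    s_blank = math.sqrt(s_instr**2 + s_oper**2)
    return (z_alpha + z_beta) * s_blank / slope

# One-component (instrumental only) vs two-component variance:
print(detection_limit(slope=2.0, s_instr=0.05, s_oper=0.0))
print(detection_limit(slope=2.0, s_instr=0.05, s_oper=0.12))
```

The two-component limit is strictly larger whenever the operational variance is nonzero, which is the qualitative result the abstract reports for the 41 ICP-MS elements.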
Kukush, Alexander
2011-01-16
With a binary response Y, the dose-response model under consideration is logistic in flavor, with pr(Y=1 | D) = R(1+R)^{-1}, R = λ_0 + EAR·D, where λ_0 is the baseline incidence rate and EAR is the excess absolute risk per gray. The calculated thyroid dose of a person i is expressed as D_i^mes = f_i Q_i^mes / M_i^mes. Here, Q_i^mes is the measured content of radioiodine in the thyroid gland of person i at time t^mes, M_i^mes is the estimate of the thyroid mass, and f_i is the normalizing multiplier. The Q_i and M_i are measured with multiplicative errors V_i^Q and V_i^M, so that Q_i^mes = Q_i^tr V_i^Q (a classical measurement error model) and M_i^tr = M_i^mes V_i^M (a Berkson measurement error model). Here, Q_i^tr is the true content of radioactivity in the thyroid gland, and M_i^tr is the true value of the thyroid mass. The error in f_i is much smaller than the errors in (Q_i^mes, M_i^mes) and is ignored in the analysis. By means of parametric full maximum likelihood and regression calibration (under the assumption that the set of true doses is lognormally distributed), nonparametric full maximum likelihood, nonparametric regression calibration, and a properly tuned SIMEX method, we study the influence of measurement errors in thyroid dose on the estimates of λ_0 and EAR. A simulation study is presented based on a real sample from the epidemiological studies. The doses were reconstructed in the framework of the Ukrainian-American project on the investigation of post-Chernobyl thyroid cancers in Ukraine, and the underlying subpopulation was artificially enlarged in order to increase the statistical power. The true risk parameters were set to values from earlier epidemiological studies, and the binary response was then simulated according to the dose-response model.
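The model structure above can be sketched in a few lines: the response probability R/(1+R), a classical multiplicative error on the measured activity Q (measurement scatters around the truth), and a Berkson multiplicative error on the mass M (the truth scatters around the measurement). All parameter values below are assumptions for illustration, not the study's reconstructed values.

```python
# Sketch of the dose-response model with the two multiplicative error
# structures described above. Parameter values are illustrative only.

import math
import random

def response_prob(dose, lam0, ear):
    """pr(Y=1 | D) = R / (1 + R), with R = lam0 + EAR * D."""
    r = lam0 + ear * dose
    return r / (1.0 + r)

random.seed(1)
lam0, ear = 1e-4, 0.3           # hypothetical baseline rate and excess risk
f_i = 1.0                       # normalizing multiplier (its error is ignored)
q_true = 50.0                   # true radioiodine content (arbitrary units)

# Classical error: the measurement Q^mes scatters around the true value.
q_mes = q_true * math.exp(random.gauss(0.0, 0.3))
# Berkson error: the true mass scatters around the measured mass M^mes.
m_mes = 7.5
m_true = m_mes * math.exp(random.gauss(0.0, 0.3))

d_mes = f_i * q_mes / m_mes     # calculated dose D_i^mes
print(f"measured dose {d_mes:.2f}, pr(Y=1) = {response_prob(d_mes, lam0, ear):.4f}")
```

Simulating the binary response from `response_prob` at true versus measured doses is the basic ingredient of the SIMEX and regression-calibration comparisons the abstract describes.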
Experimental and numerical study of error fields in the CNT stellarator
Hammond, K C; Brenner, P W; Pedersen, T S; Raftopoulos, S; Traverso, P; Volpe, F A
2016-01-01
Sources of error fields were indirectly inferred in a stellarator by reconciling measured and computed flux surfaces. Sources considered so far include the displacements and tilts (but not yet the deformations) of the four circular coils featured in the simple CNT stellarator. The flux surfaces were measured by means of an electron beam and phosphor rod, and were computed by means of a Biot-Savart field-line tracing code. If the ideal coil locations and orientations are used in the computation, agreement with measurements is poor. Discrepancies are ascribed to errors in the positioning and orientation of the in-vessel interlocked coils. To that end, an iterative numerical method was developed: a Newton-Raphson algorithm searches for the coils' displacements and tilts that minimize the discrepancy between the measured and computed flux surfaces. This method was verified by misplacing and tilting the coils in a numerical model of CNT, calculating the flux surfaces that they generated, and testing the algorith...
Feedback-based error monitoring processes during musical performance: an ERP study.
Katahira, Kentaro; Abla, Dilshat; Masuda, Sayaka; Okanoya, Kazuo
2008-05-01
Auditory feedback is important in detecting and correcting errors during sound production when a current performance is compared to an intended performance. In the context of vocal production, a forward model, in which a prediction of action consequence (corollary discharge) is created, has been proposed to explain the dampened activity of the auditory cortex while producing self-generated vocal sounds. However, it is unclear how auditory feedback is processed and what neural mechanism underlies the process during other sound production behavior, such as musical performances. We investigated the neural correlates of human auditory feedback-based error detection using event-related potentials (ERPs) recorded during musical performances. Keyboard players of two different skill levels played simple melodies using a musical score. During the performance, the auditory feedback was occasionally altered. Subjects with early and extensive piano training produced a negative ERP component N210, which was absent in non-trained players. When subjects listened to music that deviated from a corresponding score without playing the piece, N210 did not emerge but the imaginary mismatch negativity (iMMN) did. Therefore, N210 may reflect a process of mismatch between the intended auditory image evoked by motor activity, and actual auditory feedback.
Bidimensionality and Geometric Graphs
Fomin, Fedor V; Saurabh, Saket
2011-01-01
In this paper we use several of the key ideas from Bidimensionality to give a new generic approach to design EPTASs and subexponential time parameterized algorithms for problems on classes of graphs which are not minor closed, but instead exhibit a geometric structure. In particular we present EPTASs and subexponential time parameterized algorithms for Feedback Vertex Set, Vertex Cover, Connected Vertex Cover, Diamond Hitting Set, on map graphs and unit disk graphs, and for Cycle Packing and Minimum-Vertex Feedback Edge Set on unit disk graphs. Our results are based on the recent decomposition theorems proved by Fomin et al [SODA 2011], and our algorithms work directly on the input graph. Thus it is not necessary to compute the geometric representations of the input graph. To the best of our knowledge, these results are previously unknown, with the exception of the EPTAS and a subexponential time parameterized algorithm on unit disk graphs for Vertex Cover, which were obtained by Marx [ESA 2005] and Alber and...
Hitchcock, Elaine R; Harel, Daphna; Byun, Tara McAllister
2015-11-01
Children with residual speech errors face an increased risk of social, emotional, and/or academic challenges relative to their peers with typical speech. Previous research has shown that the effects of speech sound disorder may persist into adulthood and span multiple domains of activity limitations and/or participation restrictions, as defined by the World Health Organization's International Classification of Functioning, Disability and Health model. However, the nature and extent of these influences varies widely across children. This study aimed to expand the evidence base on the social, emotional, and academic impact of residual speech errors by collecting survey data from parents of children receiving treatment for /r/ misarticulation. By examining the relationship between an overall measure of impact (weighted summed score) and responses to 11 survey items, the present study offers preliminary suggestions for factors that could be considered when making decisions pertaining to treatment allocation in this population.
2014-01-01
Background: Nurses experience insufficient medication knowledge; particularly in drug dose calculations, but also in drug management and pharmacology. The weak knowledge could be a result of deficiencies in the basic nursing education, or lack of continuing maintenance training during working years. The aim of this study was to compare the medication knowledge, certainty and risk of error between graduating bachelor students in nursing and experienced registered nurses. Methods: Bac...
Geometric Quality Assessment of Bundle Block Adjusted Multi-Sensor Satellite Imageries
Ghosh, S.; Bhawani Kumar, P. S.; Radhadevi, P. V.; Srinivas, V.; Saibaba, J.; Varadan, G.
2014-11-01
The integration of multi-sensor earth observation data covering the same area has become one of the most important inputs for resource mapping and management. Geometric error and poor fidelity between adjacent scenes affect a large-area digital mosaic if the images/scenes are processed independently. A block triangulation approach, the Bundle Block Adjustment (BBA) system, has been developed at ADRIN for combined processing of multi-sensor, multi-resolution satellite imagery to achieve better geometric continuity. In this paper we present the evaluation results of the BBA software along with a performance assessment and the operational use of the products thus generated. The application evaluation deals with functional aspects of block adjustment of satellite imagery consisting of data from multiple sources, i.e. AWiFS, LISS-3, LISS-4 and Cartosat-1, in various combinations as a single block. It has provision for automatic generation of GCPs and tie-points using image metafiles/Rational Polynomial Coefficients (RPCs), and for ortho/merged/mosaicked product generation. The study is carried out with datasets covering different terrain types (ranging from high mountainous areas, moderately undulating terrain, coastal plains, agricultural fields and urban areas to water bodies) across the Indian subcontinent, with varying block sizes and spatial reference systems. Geometric accuracy assessment is carried out to quantify error propagation in scene-based ortho/merged products as well as at block level. The experimental results confirm that errors in pixel tagging, geometric fidelity and feature continuity across adjacent scenes, as well as across multiple sensors, are reduced to a great extent due to the high redundancy. The results demonstrate that this is one of the most effective geometric corrections for generating a large-area digital mosaic over high mountainous terrain using high-resolution, good-swath satellite imagery, like Cartosat-1, with minimum human intervention.
Geometric Rationalization for Freeform Architecture
Jiang, Caigui
2016-06-20
The emergence of freeform architecture provides interesting geometric challenges with regards to the design and manufacturing of large-scale structures. To design these architectural structures, we have to consider two types of constraints. First, aesthetic constraints are important because the buildings have to be visually impressive. Second, functional constraints are important for the performance of a building and its efficient construction. This thesis contributes to the area of architectural geometry. Specifically, we are interested in the geometric rationalization of freeform architecture with the goal of combining aesthetic and functional constraints and construction requirements. Aesthetic requirements typically come from designers and architects. To obtain visually pleasing structures, they favor smoothness of the building shape, but also smoothness of the visible patterns on the surface. Functional requirements typically come from the engineers involved in the construction process. For example, covering freeform structures using planar panels is much cheaper than using non-planar ones. Further, constructed buildings have to be stable and should not collapse. In this thesis, we explore the geometric rationalization of freeform architecture using four specific example problems inspired by real life applications. We achieve our results by developing optimization algorithms and a theoretical study of the underlying geometrical structure of the problems. The four example problems are the following: (1) The design of shading and lighting systems which are torsion-free structures with planar beams based on quad meshes. They satisfy the functionality requirements of preventing light from going inside a building as shading systems or reflecting light into a building as lighting systems. (2) The design of freeform honeycomb structures that are constructed based on hex-dominant meshes with a planar beam mounted along each edge. The beams intersect without
STUDY OF THE EFFECTS OF REDUCING SYSTEMATIC ERRORS ON MONTHLY REGIONAL CLIMATE DYNAMICAL FORECAST
Institute of Scientific and Technical Information of China (English)
ZENG Xin-min; XI Chao-li
2009-01-01
A nested-model system is constructed by embedding the regional climate model RegCM3 into a general circulation model for monthly-scale regional climate forecasts over East China. The systematic errors are formulated for the region on the basis of 10 years (1991-2000) of results from the nested-model system, and of the datasets of the Climate Prediction Center (CPC) Merged Analysis of Precipitation (CMAP) and the temperature analysis of the National Meteorological Center (NMC), U.S.A., which are then used for correcting the original forecast by the system for the period 2001-2005. After assessment of the original and corrected forecasts for monthly precipitation and surface air temperature, it is found that the corrected forecast is apparently better than the original, suggesting that the approach can be applied to improve monthly-scale regional climate dynamical forecasts.
Sampling errors in the estimation of empirical orthogonal functions. [for climatology studies
North, G. R.; Bell, T. L.; Cahalan, R. F.; Moeng, F. J.
1982-01-01
Empirical Orthogonal Functions (EOF's), eigenvectors of the spatial cross-covariance matrix of a meteorological field, are reviewed with special attention given to the necessary weighting factors for gridded data and the sampling errors incurred when too small a sample is available. The geographical shape of an EOF shows large intersample variability when its associated eigenvalue is 'close' to a neighboring one. A rule of thumb indicating when an EOF is likely to be subject to large sampling fluctuations is presented. An explicit example, based on the statistics of the 500 mb geopotential height field, displays large intersample variability in the EOF's for sample sizes of a few hundred independent realizations, a size seldom exceeded by meteorological data sets.
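The "rule of thumb" from this line of work states that the sampling error of an eigenvalue is roughly λ·sqrt(2/N) for N independent realizations, and an EOF is likely poorly resolved when that error is comparable to the gap to a neighboring eigenvalue. A minimal sketch of the rule (illustrative eigenvalues, not data from the paper):

```python
# North et al.-style rule of thumb: flag EOFs whose eigenvalue sampling error
# overlaps the gap to the next eigenvalue. Eigenvalues below are illustrative.

import math

def north_flags(eigenvalues, n_samples):
    """Return True for each EOF k whose sampling error >= gap to EOF k+1."""
    err = [lam * math.sqrt(2.0 / n_samples) for lam in eigenvalues]
    flags = []
    for k in range(len(eigenvalues) - 1):
        gap = eigenvalues[k] - eigenvalues[k + 1]
        flags.append(err[k] >= gap)   # True: EOF k likely mixed with EOF k+1
    return flags

# A well-separated leading mode and a near-degenerate pair (modes 2 and 3):
print(north_flags([10.0, 4.0, 3.8, 1.0], n_samples=300))
```

Only the near-degenerate pair is flagged, matching the abstract's observation that intersample variability is large when an eigenvalue is "close" to a neighbor.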
Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD
Energy Technology Data Exchange (ETDEWEB)
Smith, Thomas Michael; Shadid, John N; Pawlowski, Roger P; Cyr, Eric C; Wildey, Timothy Michael
2014-01-01
This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.
Bouaziz, Serge; Magnan, Annie
2007-01-01
The aim of this study was to determine the contribution of the visual perception and graphic production systems [Van Sommers, P. (1989). "A system for drawing and drawing-related neuropsychology." "Cognitive Neuropsychology," 6, 117-164] to the manifestation of the "Centripetal Execution Principle" (CEP), a graphic rule for the copying of drawings…
Directory of Open Access Journals (Sweden)
Kovacevic, Srdja
2016-12-01
This paper describes a two-step method used to analyse the factors and aspects influencing human error during the maintenance of mining machines. The first step is a cause-effect analysis, supported by brainstorming, in which five factors and 21 aspects are identified. In the second step, the group fuzzy analytic hierarchy process is used to rank the identified factors and aspects. A case study was done on mining companies in Serbia. The key aspects are ranked according to an analysis that included experts who assess risks in mining companies (a maintenance engineer, a technologist, an ergonomist, a psychologist, and an organisational scientist). Failure to follow technical maintenance instructions, poor organisation of the training process, inadequate diagnostic equipment, and a lack of understanding of the work process are identified as the most important causes of human error.
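The ranking step can be illustrated with the crisp analytic hierarchy process (AHP), of which the group fuzzy version used in the study is an extension: priority weights are obtained from a pairwise-comparison matrix, here via the common row geometric-mean approximation. The comparison matrix below is hypothetical, not the experts' actual judgments.

```python
# Sketch of AHP-style ranking: normalized row geometric means of a pairwise
# comparison matrix give priority weights. The matrix is hypothetical.

import math

def ahp_weights(matrix):
    """Normalized row geometric means of a pairwise comparison matrix."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical pairwise comparisons of three human-error factors
# (e.g. instructions not followed vs. training vs. diagnostic equipment):
pairwise = [[1.0,   3.0,   5.0],
            [1/3.0, 1.0,   2.0],
            [1/5.0, 1/2.0, 1.0]]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])
```

The weights sum to one and preserve the dominance order of the comparisons; the fuzzy group variant aggregates such matrices over several experts using fuzzy numbers instead of crisp ratios.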
Insights into the error bypass of 1-Nitropyrene DNA adduct by DNA polymerase ι: A QM/MM study
Li, Yanwei; Bao, Lei; Zhang, Ruiming; Tang, Xiaowen; Zhang, Qingzhu; Wang, Wenxing
2017-10-01
The error bypass mechanism of DNA polymerase ι toward N-(deoxyguanosin-8-yl)-1-aminopyrene adduction was studied by using quantum mechanics/molecular mechanics method. The most favorable mechanism highlights three elementary steps: proton transfer from dC to dATP, phosphoryl transfer, and deprotonation of dAMP. The phosphoryl transfer step was found to be rate-determining. The calculated average barrier (23.8 kcal mol-1) is in accordance with the experimental value (21.5 kcal mol-1). Electrostatic influence analysis indicates that residues Asp126 and Lys207 significantly suppress the error bypass while Glu127 facilitates the process. These results highlight the origins of the mutagenicity of nitrated polycyclic aromatic hydrocarbons in molecular detail.
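To see why the quoted calculated (23.8 kcal/mol) and experimental (21.5 kcal/mol) barriers count as "in accordance", one can map each to a transition-state-theory rate constant via the Eyring equation; the equation is standard background, not something the abstract itself invokes, and room temperature is an assumption here.

```python
# Eyring equation sketch: k = (kB*T/h) * exp(-dG / (R*T)), converting a
# free-energy barrier (kcal/mol) to a rate constant (s^-1). Standard constants;
# the temperature choice is an assumption for illustration.

import math

def eyring_rate(barrier_kcal_per_mol, temp_k=298.15):
    """Transition-state-theory rate constant for a free-energy barrier."""
    kB = 1.380649e-23     # Boltzmann constant, J/K
    h = 6.62607015e-34    # Planck constant, J*s
    R = 1.987204e-3       # gas constant, kcal/(mol*K)
    return (kB * temp_k / h) * math.exp(-barrier_kcal_per_mol / (R * temp_k))

# Barriers quoted in the abstract: calculated 23.8 vs experimental 21.5.
print(f"k(23.8) = {eyring_rate(23.8):.3e} s^-1, k(21.5) = {eyring_rate(21.5):.3e} s^-1")
```

A ~2 kcal/mol difference changes the rate by roughly one to two orders of magnitude at room temperature, which is within the accuracy normally expected of QM/MM barrier calculations.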
Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang
2017-06-01
The study of deterministic optimal reservoir operation can improve the utilization of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise inflow forecasts may lead to output error and hinder the implementation of power generation schedules. In this paper, the output error generated by the uncertainty of the forecasted inflow was treated as a variable in a short-term reservoir optimal operation model for reducing operational risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to represent the maximum possible loss of power generation schedules, and an extreme value theory-genetic algorithm (EVT-GA) was then proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China were selected as a case study to verify the model. According to the results, different assurance rates of schedules can be derived by the model, presenting more flexible options for decision makers; the highest assurance rate can reach 99%, much higher than the 48% obtained without considering output error. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
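The VaR concept applied above has a simple empirical form: at confidence level alpha, VaR is the loss threshold exceeded with probability at most 1 - alpha. A minimal sketch on simulated generation shortfalls (the numbers are invented, and the study itself fits extreme-value distributions rather than using a raw empirical quantile):

```python
# Empirical Value-at-Risk sketch: the alpha-quantile of sorted losses.
# Loss values are illustrative, not from the study.

def value_at_risk(losses, alpha):
    """Empirical VaR at confidence level alpha (0 <= alpha < 1)."""
    ordered = sorted(losses)
    idx = min(len(ordered) - 1, int(alpha * len(ordered)))
    return ordered[idx]

# Simulated generation shortfalls (arbitrary units) due to inflow forecast error:
losses = [0.0, 1.2, 0.5, 3.4, 2.1, 0.8, 5.0, 1.9, 0.3, 2.7]
print(value_at_risk(losses, alpha=0.9))
```

Fitting an extreme-value distribution to the tail, as EVT does, gives more stable high-alpha estimates than this empirical quantile when samples are few.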
Geometric Complexity Theory: Introduction
Mulmuley, Ketan D; Sohoni, Milind
2007-01-01
These are lecture notes for the introductory graduate courses on geometric complexity theory (GCT) in the computer science department of the University of Chicago. Part I consists of the lecture notes for the course given by the first author in the spring quarter, 2007; it gives an introduction to the basic structure of GCT. Part II consists of the lecture notes for the course given by the second author in the spring quarter, 2003; it gives an introduction to invariant theory with a view towards GCT. No background in algebraic geometry or representation theory is assumed. These lecture notes, in conjunction with the article \cite{GCTflip1}, which describes in detail the basic plan of GCT based on the principle called the flip, should provide a high-level picture of GCT assuming familiarity with only basic notions of algebra, such as groups, rings and fields.
The Geometric Transition Revisited
Gwyn, Rhiannon
2007-01-01
Our intention in this article is to review known facts and to summarise recent advances in the understanding of geometric transitions and the underlying open/closed duality in string theory. We aim to present a pedagogical discussion of the gauge theory underlying the Klebanov--Strassler model and review the Gopakumar--Vafa conjecture based on topological string theory. These models are also compared in the T-dual brane constructions. We then summarise a series of papers verifying both models on the supergravity level. An appendix provides extensive background material about conifold geometries. We pay special attention to their complex structures and re-evaluate the supersymmetry conditions on the background flux in constructions with fractional D3-branes on the singular (Klebanov--Strassler) and resolved (Pando Zayas--Tseytlin) conifolds. We agree with earlier results that only the singular solution allows a supersymmetric flux, but point out the importance of using the correct complex structure to reach th...
Geometrical Destabilization of Inflation
Renaux-Petel, Sébastien; Turzyński, Krzysztof
2016-09-01
We show the existence of a general mechanism by which heavy scalar fields can be destabilized during inflation, relying on the fact that the curvature of the field space manifold can dominate the stabilizing force from the potential and destabilize inflationary trajectories. We describe a simple and rather universal setup in which higher-order operators suppressed by a large energy scale trigger this instability. This phenomenon can prematurely end inflation, thereby leading to important observational consequences and sometimes excluding models that would otherwise perfectly fit the data. More generally, it modifies the interpretation of cosmological constraints in terms of fundamental physics. We also explain how the geometrical destabilization can lead to powerful selection criteria on the field space curvature of inflationary models.
Directory of Open Access Journals (Sweden)
Boulesteix Anne-Laure
2009-12-01
Full Text Available Abstract Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
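The optimistic bias described above can be reproduced in a few lines: with purely non-informative predictors, reporting the minimum error over many classifier variants yields a rate well below the 50% that random guessing actually achieves. A minimal sketch, in which simulated random guesses stand in for the study's 124 real classifier variants:

```python
import numpy as np

rng = np.random.default_rng(0)

# Non-informative setting: balanced binary labels and 124 "classifier
# variants" whose cross-validated predictions are pure guesses.
n_samples, n_classifiers = 100, 124
labels = rng.integers(0, 2, size=n_samples)
errors = np.array([
    np.mean(labels != rng.integers(0, 2, size=n_samples))
    for _ in range(n_classifiers)
])

# Reporting only the best variant is optimistically biased: the minimum
# lies well below the ~50% error that guessing actually achieves.
print(f"mean error: {errors.mean():.2f}, reported minimal error: {errors.min():.2f}")
```

The gap between the mean and the minimum is exactly the selection bias the paper quantifies.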
Influence of measurement errors and estimated parameters on combustion diagnosis
Energy Technology Data Exchange (ETDEWEB)
Payri, F.; Molina, S.; Martin, J. [CMT-Motores Termicos, Universidad Politecnica de Valencia, Camino de Vera s/n. 46022 Valencia (Spain); Armas, O. [Departamento de Mecanica Aplicada e Ingenieria de proyectos, Universidad de Castilla-La Mancha. Av. Camilo Jose Cela s/n 13071,Ciudad Real (Spain)
2006-02-01
Thermodynamic diagnosis models are valuable tools for the study of Diesel combustion. Inputs required by such models comprise measured mean and instantaneous variables, together with suitable values for adjustable parameters used in different submodels. In the case of measured variables, one may estimate the uncertainty associated with measurement errors; however, the influence of errors in model parameter estimation may not be so easily established on an experimental basis. In this paper, a simulated pressure cycle has been used along with known input parameters, so that any uncertainty in the inputs is avoided. Then, the influence of errors in measured variables and in geometric and heat transmission parameters on the results of a combustion diagnosis model for direct injection diesel engines has been studied. This procedure allowed us to establish the relative importance of these parameters and to set limits on the maximal errors of the model, accounting for both the maximal expected errors in the input parameters and the sensitivity of the model to those errors. (author)
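The paper's one-at-a-time perturbation of input parameters can be illustrated with a deliberately toy model: a single polytropic compression standing in for the full thermodynamic diagnosis model. The inputs and the ±2% error bound below are hypothetical, not the paper's values:

```python
def peak_pressure(p0, compression_ratio, n):
    """Polytropic peak compression pressure p0 * CR**n -- a toy stand-in
    for the full thermodynamic diagnosis model."""
    return p0 * compression_ratio ** n

# Hypothetical inputs: 1 bar intake pressure, compression ratio 17,
# polytropic index 1.35.
base = peak_pressure(1.0, 17.0, 1.35)

# Propagate a maximal expected error in the polytropic index (+/-2%),
# perturbing one input at a time as in the paper's procedure.
for err in (-0.02, 0.02):
    pert = peak_pressure(1.0, 17.0, 1.35 * (1 + err))
    print(f"index error {err:+.0%}: peak pressure changes by {pert / base - 1:+.1%}")
```

Even in this toy setting a small parameter error is amplified in the output, which is the kind of sensitivity the paper maps out systematically.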
Geometrical form factor calculation using Monte Carlo integration for lidar
Mao, Feiyue; Gong, Wei; Li, Jun
2012-06-01
We proposed a geometrical form factor (GFF) calculation using Monte Carlo integration (GFF-MC) for lidar that is practical and can be applied to any laser intensity distribution. Theoretical results have been calculated with our method based on the functions of measured, uniform and Gaussian laser intensity distributions. Two experimental GFF traces on clear days were obtained to verify the validity of the theoretical results. The results indicated that the measured distribution function outperformed the Gaussian and uniform functions. That means that the deviation of the measured laser intensity distribution from an ideal one can be too large to neglect. In addition, the theoretical GFF of the uniform distribution had a larger error than that of the Gaussian distribution. Furthermore, the effects of the inclination angle of the laser beam and of the central obstruction by the support structure of the telescope's secondary mirror are discussed in this study.
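The core of a GFF-MC-style calculation — estimating what fraction of the transmitted energy falls inside the receiver field of view by Monte Carlo integration over the beam intensity — can be sketched as follows. The Gaussian profile and geometry are illustrative, not the paper's measured distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

def gff_monte_carlo(intensity, fov_radius, beam_extent, n=200_000):
    """Fraction of beam energy inside the receiver field of view, estimated
    by Monte Carlo integration over a square window of half-side beam_extent."""
    xy = rng.uniform(-beam_extent, beam_extent, size=(n, 2))
    w = intensity(xy[:, 0], xy[:, 1])                  # intensity weights
    inside = np.hypot(xy[:, 0], xy[:, 1]) <= fov_radius
    return (w * inside).sum() / w.sum()

# Illustrative Gaussian beam profile with 1/e^2 half-width 1 (arbitrary
# transverse units); the paper's method accepts a measured profile instead.
gauss = lambda x, y: np.exp(-2.0 * (x ** 2 + y ** 2))
g = gff_monte_carlo(gauss, fov_radius=2.0, beam_extent=4.0)
print(g)   # analytically 1 - exp(-8), about 0.9997, for this geometry
```

Because `intensity` is an arbitrary callable, the same estimator works for measured, uniform or Gaussian profiles, which is the flexibility the abstract emphasizes.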
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, results in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
Charland, Jenna; Touboul, Julien; Rey, Vincent
2013-04-01
Wave propagation against current: a study of the effects of vertical shears of the mean current on the geometrical focusing of water waves. J. Charland*,** (jenna.charland@univ-tln.fr), J. Touboul**, V. Rey** (* Direction Générale de l'Armement, CNRS Délégation Normandie; ** Université de Toulon, 83957 La Garde, France; Mediterranean Institute of Oceanography (MIO), Aix Marseille Université, 13288 Marseille, France; CNRS/INSU, IRD, MIO, UM 110). In the nearshore area, both wave propagation and currents are influenced by the bathymetry. For a better understanding of wave-current interactions in the presence of a 3D bathymetry, a large-scale experiment was carried out in the Ocean Basin FIRST, Toulon, France. The 3D bathymetry consisted of two symmetric underwater mounds on both sides of the mean wave direction. The water depth at the top of the mounds was hm = 1.5 m, the slopes of the mounds were about 1:3, and the water depth was h = 3 m elsewhere. For opposing current conditions (U of order 0.30 m/s), a strong focusing of the wave, up to twice its incident amplitude, was observed in the central part of the basin for T = 1.4 s. Since deep-water conditions are verified, the wave amplification is ascribed to the current field. The mean velocity field at a water depth hC = 0.25 m was measured with an electromagnetic current meter. The results have been published in Rey et al. [4]. The elliptic form of the "mild slope" equation including a uniform current over the water column (Chen et al. [1]) was then used for the calculations. The calculated wave amplification of factor 1.2 is significantly smaller than that observed experimentally (factor 2). The purpose of this study is therefore to understand the physical processes which explain this gap. As demonstrated by Kharif & Pelinovsky [2], geometrical focusing of waves can modify the local wave amplitude significantly. We consider this process here. Since vertical velocity profiles measured at some locations have shown significant
DEFF Research Database (Denmark)
Persson, Johan Mikael; Wagner, Jakob Birkedal; Dunin-Borkowski, Rafal E.
2011-01-01
Semiconductor nanowires have been studied using electron microscopy since the early days of nanowire growth, e.g. [1]. A common approach for analysing nanowires using transmission electron microscopy (TEM) involves removing them from their substrate and subsequently transferring them onto carbon films. This sample preparation method is fast and usually results in little structural change in the nanowires [2]. However, it does not provide information about the interface between the nanowires and the substrate, whose physical and electrical properties are important for many modern applications of nanowires. In particular, strain and crystallographic defects can have a major influence on the electronic structure of the material. An improved method for the characterization of such interfaces would be valuable for optimizing and understanding the transport properties of devices based on nanowires. Here...
Mandal, Abhishek; Singh, Neera
2016-01-01
The aim of this study was to establish the bark of Eucalyptus tereticornis L. (EB) as a low-cost bio-adsorbent for the removal of imidacloprid and atrazine from aqueous medium. The pseudo-first-order (PFO), pseudo-second-order (PSO), Elovich and intra-particle diffusion (IPD) models were used to describe the kinetic data, and rate constants were evaluated. Adsorption data were analysed using ten 2-, 3- and 4-parameter models, viz. the Freundlich, Jovanovic, Langmuir, Temkin, Koble-Corrigan, Redlich-Peterson, Sips, Toth, Radke-Prausnitz, and Fritz-Schluender isotherms. Six error functions were used to compute the best-fit single-component isotherm parameters by nonlinear regression analysis. The results showed that the sorption of atrazine was better explained by the PSO model, whereas the sorption of imidacloprid followed the PFO kinetic model. Isotherm model optimization analysis suggested that the Freundlich, along with the Koble-Corrigan, Toth and Fritz-Schluender isotherms, were the best models to predict atrazine and imidacloprid adsorption onto EB. Error analysis suggested that minimization of the chi-square (χ(2)) error function provided the best determination of optimum parameter sets for all the isotherms.
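The workflow of fitting an isotherm and scoring it with an error function can be sketched for the Freundlich model and the chi-square criterion. The equilibrium data below are hypothetical, not the study's measurements, and a log-space linear fit stands in for the full nonlinear regression:

```python
import numpy as np

def freundlich(Ce, Kf, n):
    """Freundlich isotherm: qe = Kf * Ce**(1/n)."""
    return Kf * Ce ** (1.0 / n)

def chi_square(q_obs, q_calc):
    """Chi-square error function, one of the six used to rank fits:
    sum((q_obs - q_calc)^2 / q_calc)."""
    return float(np.sum((q_obs - q_calc) ** 2 / q_calc))

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g) -- not the study's.
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
qe = np.array([1.9, 2.6, 3.4, 4.6, 6.1])

# Freundlich is linear in log space: log qe = log Kf + (1/n) log Ce,
# so a least-squares line yields the parameter estimates.
slope, intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
Kf, n = np.exp(intercept), 1.0 / slope
chi2 = chi_square(qe, freundlich(Ce, Kf, n))
print(f"Kf = {Kf:.2f}, n = {n:.2f}, chi-square = {chi2:.4f}")
```

In the study the same chi-square value would be minimized directly by nonlinear regression and compared across the ten candidate isotherms.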
Kim, Chanmi; Kim, Eun-San; Hahn, Garam
2016-11-01
The Korea Heavy Ion Medical Accelerator consists of an injector and a synchrotron for an ion medical accelerator that is the first carbon-ion therapy system in Korea. The medium energy beam transport (MEBT) line connects the interdigital H-mode drift tube linac and the synchrotron. We investigated the beam conditions after the charge stripper by using the LISE++ and the SRIM codes. The beam was stripped from C4+ to C6+ by the charge stripper. We investigated the performance of a de-buncher in optimizing the energy spread and the beam distribution in the z-dW/W (direction of beam progress versus beam energy) phase space. We obtained the results of the tracking simulation and the error analysis by using the TRACK code. Possible misalignments and rotations of the magnets were considered in the simulations. States of the beam were examined when errors occurred in the magnets by applying the analytic fringe-field model in the TRACK code. The beam orbit was corrected and optimized by using correctors and profile monitors. In this paper, we focus on the beam dynamics and the error studies dedicated to the MEBT beam line and show the optimized beam parameters for the MEBT.
Directory of Open Access Journals (Sweden)
Prieto Iván
2014-07-01
Full Text Available The aim of this article was to suggest some changes in the methodology of the teaching-learning process for the judo osoto-guruma technique, establishing the action sequences and the most frequent technical errors committed when performing them. The study was carried out with the participation of 45 students with no experience in the fundamentals of judo (21 men and 24 women; age = 24.02 ± 3.98 years) from the Bachelor of Science of Physical Activity and Sport Science at the University of Vigo. The procedure consisted of a systematic observation of a video recording registered during the technique execution. Data obtained were analyzed by means of descriptive statistics and sequential analysis of T-Patterns (obtained with THEME v.5 software), identifying: (a) the presence of typical inaccuracies during the technique performance; (b) a number of chained errors affecting body balance, the position of the supporting foot, the blocking action and the final action of the arms. The findings allowed us to suggest some motor tasks to correct the identified inaccuracies, the proper sequential actions to make the execution more effective, and some recommendations for the use of feedback. Moreover, these findings could be useful for other professionals in correcting the key technical errors and preventing diverse injuries.
Heavner, Karyn; Burstyn, Igor
2015-08-24
Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to "small numbers." Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship.
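The mechanics of the simulation — dichotomize a mismeasured exposure at a cutoff and compute the OR from the resulting 2x2 table — can be sketched as below. The sample size, measurement error variance and logistic exposure-outcome curve are illustrative, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def or_at_cutoff(x_observed, outcome, cutoff):
    """Odds ratio for an exposure dichotomized at `cutoff`, via the 2x2 table."""
    exposed = x_observed > cutoff
    a = np.sum(exposed & (outcome == 1))       # exposed cases
    b = np.sum(exposed & (outcome == 0))       # exposed controls
    c = np.sum(~exposed & (outcome == 1))      # unexposed cases
    d = np.sum(~exposed & (outcome == 0))      # unexposed controls
    return (a * d) / (b * c)

# True exposure drives risk through a logistic curve; the observed exposure
# adds classical measurement error. All settings here are illustrative.
n = 50_000
x_true = rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, 0.5, n)             # mismeasured exposure
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-2.0 + x_true))))

# The estimated OR depends on where the cutoff is placed.
ors = {cut: or_at_cutoff(x_obs, y, cut) for cut in (-1.0, 0.0, 1.0)}
for cut, or_ in ors.items():
    print(f"cutoff {cut:+.1f}: OR = {or_:.2f}")
```

Sweeping the cutoff over a fine grid (61 values in the study) traces out the OR-versus-cutoff curve whose shape the authors analyze.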
Diagnostic errors in pediatric radiology
Energy Technology Data Exchange (ETDEWEB)
Taylor, George A.; Voss, Stephan D. [Children's Hospital Boston, Department of Radiology, Harvard Medical School, Boston, MA (United States); Melvin, Patrice R. [Children's Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Graham, Dionne A. [Children's Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Harvard Medical School, The Department of Pediatrics, Boston, MA (United States)
2011-03-15
Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)
The persistence of error: a study of retracted articles on the Internet and in personal libraries
Davis, Philip M.
2012-01-01
Objective: To determine the accessibility of retracted articles residing on non-publisher websites and in personal libraries. Methods: Searches were performed to locate Internet copies of 1,779 retracted articles identified in MEDLINE, published between 1973 and 2010, excluding the publishers' websites. Found copies were classified by article version and location. Mendeley (a bibliographic software) was searched for copies residing in personal libraries. Results: Non-publisher websites provided 321 publicly accessible copies for 289 retracted articles: 304 (95%) copies were the publisher's versions, and 13 (4%) were final manuscripts. PubMed Central had 138 (43%) copies; educational websites 94 (29%); commercial websites 24 (7%); advocacy websites 16 (5%); and institutional repositories 10 (3%). Just 15 (5%) full-article views included a retraction statement. Personal Mendeley libraries contained records for 1,340 (75%) retracted articles, shared by 3.4 users on average. Conclusions: The benefits of decentralized access to scientific articles may come with the cost of promoting incorrect, invalid, or untrustworthy science. Automated methods to deliver status updates to readers may reduce the persistence of error in the scientific literature. PMID:22879807
The persistence of error: a study of retracted articles on the Internet and in personal libraries.
Davis, Philip M
2012-07-01
To determine the accessibility of retracted articles residing on non-publisher websites and in personal libraries. Searches were performed to locate Internet copies of 1,779 retracted articles identified in MEDLINE, published between 1973 and 2010, excluding the publishers' websites. Found copies were classified by article version and location. Mendeley (a bibliographic software) was searched for copies residing in personal libraries. Non-publisher websites provided 321 publicly accessible copies for 289 retracted articles: 304 (95%) copies were the publisher's versions, and 13 (4%) were final manuscripts. PubMed Central had 138 (43%) copies; educational websites 94 (29%); commercial websites 24 (7%); advocacy websites 16 (5%); and institutional repositories 10 (3%). Just 16 [corrected] (5%) full-article views included a retraction statement. Personal Mendeley libraries contained records for 1,340 (75%) retracted articles, shared by 3.4 users on average. The benefits of decentralized access to scientific articles may come with the cost of promoting incorrect, invalid, or untrustworthy science. Automated methods to deliver status updates to readers may reduce the persistence of error in the scientific literature.
Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca
2015-09-01
Approximately 4 % of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20 % of reports. Fortunately, most of them are minor-degree errors or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation rate rises in the emergency setting and in the early phase of the learning curve, as during residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly to the treatment team.
Data filtering with support vector machines in geometric camera calibration.
Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C
2010-02-01
The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined with its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of the bundle adjustment. In this study, support vector machines (SVMs) with a radial basis function kernel are employed to model the distortions measured for the Olympus E10 camera system with an aspherical zoom lens, which are later used in the geometric calibration process. The intention is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach for the correction of image coordinates by modelling total distortions in the on-the-job calibration process using a limited number of images.
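The general idea — fitting measured distortion as a smooth function of image position with an RBF kernel — can be sketched with kernel ridge regression as a simpler stand-in for the paper's SVM solver (same kernel, different optimizer). The cubic barrel-distortion samples are hypothetical:

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """Gaussian (RBF) kernel matrix between 1-D sample vectors a and b."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def fit_rbf_model(r, distortion, gamma=5.0, ridge=1e-6):
    """Kernel ridge fit of distortion versus radial distance -- a simpler
    stand-in for the paper's SVM regression, using the same RBF kernel."""
    K = rbf_kernel(r, r, gamma)
    alpha = np.linalg.solve(K + ridge * np.eye(len(r)), distortion)
    return lambda r_new: rbf_kernel(np.atleast_1d(r_new), r, gamma) @ alpha

# Hypothetical radial lens distortion samples: dr = -k1 * r**3 (barrel).
r = np.linspace(0.05, 1.0, 40)
dr = -0.1 * r ** 3
model = fit_rbf_model(r, dr)
pred = float(model(0.5))
print(pred)   # close to -0.1 * 0.5**3 = -0.0125
```

Once such a model is learned from calibration images, image coordinates can be corrected by subtracting the predicted distortion before the bundle adjustment.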
Introduction to Dynamical Systems and Geometric Mechanics
Maruskin, Jared M.
2012-01-01
Introduction to Dynamical Systems and Geometric Mechanics provides a comprehensive tour of two fields that are intimately entwined: dynamical systems is the study of the behavior of physical systems that may be described by a set of nonlinear first-order ordinary differential equations in Euclidean space, whereas geometric mechanics explores similar systems that instead evolve on differentiable manifolds. In the study of geometric mechanics, however, additional geometric structures are often present, since such systems arise from the laws of nature that govern the motions of particles, bodies, and even galaxies. In the first part of the text, we discuss linearization and stability of trajectories and fixed points, invariant manifold theory, periodic orbits, Poincaré maps, Floquet theory, the Poincaré-Bendixson theorem, bifurcations, and chaos. The second part of the text begins with a self-contained chapter on differential geometry that introduces notions of manifolds, mappings, vector fields, the Jacobi-Lie bracket, and differential forms. The final chapters cover Lagrangian and Hamiltonian mechanics from a modern geometric perspective, mechanics on Lie groups, and nonholonomic mechanics via both moving frames and fiber bundle decompositions. The text can be reasonably digested in a single-semester introductory graduate-level course. Each chapter concludes with an application that can serve as a springboard project for further investigation or in-class discussion.
Error bounds for set inclusions
Institute of Scientific and Technical Information of China (English)
ZHENG; Xiyin(郑喜印)
2003-01-01
A variant of the Robinson-Ursescu Theorem is given in normed spaces. Several error bound theorems for convex inclusions are proved and, in particular, a positive answer to Li and Singer's conjecture is given under a weaker assumption than that required in their conjecture. Perturbation error bounds are also studied. As applications, we study error bounds for convex inequality systems.
Vinay BC; Nikhitha MK; Patel Sunil B
2015-01-01
This review article explains the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors, with clear tables that are easy to understand.
Franklin, Bryony Dean; Reynolds, Matthew; Sadler, Stacey; Hibberd, Ralph; Avery, Anthony J; Armstrong, Sarah J; Mehta, Rajnikant; Boyd, Matthew J; Barber, Nick
2014-01-01
Objectives To compare prevalence and types of dispensing errors and pharmacists’ labelling enhancements, for prescriptions transmitted electronically versus paper prescriptions. Design Naturalistic stepped wedge study. Setting 15 English community pharmacies. Intervention Electronic transmission of prescriptions between prescriber and pharmacy. Main outcome measures Prevalence of labelling errors, content errors and labelling enhancements (beneficial additions to the instructions), as identified by researchers visiting each pharmacy. Results Overall, we identified labelling errors in 5.4% of 16 357 dispensed items, and content errors in 1.4%; enhancements were made for 13.6%. Pharmacists also edited the label for a further 21.9% of electronically transmitted items. Electronically transmitted prescriptions had a higher prevalence of labelling errors (7.4% of 3733 items) than other prescriptions (4.8% of 12 624); OR 1.46 (95% CI 1.21 to 1.76). There was no difference for content errors or enhancements. The increase in labelling errors was mainly accounted for by errors (mainly at one pharmacy) involving omission of the indication, where specified by the prescriber, from the label. A sensitivity analysis in which these cases (n=158) were not considered errors revealed no remaining difference between prescription types. Conclusions We identified a higher prevalence of labelling errors for items transmitted electronically, but this was predominantly accounted for by local practice in a single pharmacy, independent of prescription type. Community pharmacists made labelling enhancements to about one in seven dispensed items, whether electronically transmitted or not. Community pharmacists, prescribers, professional bodies and software providers should work together to agree how items should be dispensed and labelled to best reap the benefits of electronically transmitted prescriptions. Community pharmacists need to ensure their computer systems are promptly updated
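The crude odds ratio and Wald confidence interval behind a comparison like this come straight from the 2x2 table. The counts below are reconstructed from the rounded percentages in the abstract (7.4% of 3733 electronic items, 4.8% of 12 624 paper items), so the crude result differs somewhat from the OR of 1.46 the authors report, which comes from their statistical model:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a/b = errors/no-errors (electronic), c/d = errors/no-errors (paper)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Counts reconstructed from rounded percentages -- approximate, not the
# study's exact tabulation.
a, b = 276, 3733 - 276
c, d = 606, 12624 - 606
or_, lo, hi = odds_ratio_ci(a, b, c, d)
print(f"crude OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A CI that excludes 1 is what supports the paper's conclusion that labelling errors were more prevalent for electronically transmitted items.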
Arrighi, Pieranna; Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno; Andre, Paolo
2016-01-01
Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigate the EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition, with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and then scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed, associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge this is the first study where the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations, putatively generated by anterior cingulate cortex activation, are implicated in error processing in semi-naturalistic motor
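The median split used to partition trials into low- and high-error conditions is simple to sketch; the simulated angular errors below are hypothetical, standing in for a participant's measured reaching errors:

```python
import numpy as np

rng = np.random.default_rng(3)

def median_split(errors):
    """Partition trial indices into low- and high-error conditions by the
    participant's median angular error, as in the study's EEG analysis."""
    med = np.median(errors)
    return np.flatnonzero(errors <= med), np.flatnonzero(errors > med)

# Hypothetical angular errors (degrees) for 60 reaching trials.
errors = rng.gamma(shape=2.0, scale=5.0, size=60)
low, high = median_split(errors)
print(len(low), len(high), round(errors[low].mean(), 1), round(errors[high].mean(), 1))
```

Each participant's own median is used, so the split is balanced within participants even when error magnitudes differ widely between them.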