Evaluation of subject contrast and normalized average glandular dose by semi-analytical models
International Nuclear Information System (INIS)
Tomal, A.; Poletti, M.E.; Caldas, L.V.E.
2010-01-01
In this work, two semi-analytical models are described to evaluate the subject contrast of nodules and the normalized average glandular dose in mammography. Both models were used to study the influence of some parameters, such as breast characteristics (thickness and composition) and incident spectra (kVp and target-filter combination) on the subject contrast of a nodule and on the normalized average glandular dose. From the subject contrast results, detection limits of nodules were also determined. Our results are in good agreement with those reported by other authors, who had used Monte Carlo simulation, showing the robustness of our semi-analytical method.
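The abstract does not give the model's equations. As an illustration only, the monoenergetic, narrow-beam approximation below shows how a nodule's subject contrast depends on the attenuation difference and nodule thickness; the attenuation coefficients are assumed values, not taken from the paper:

```python
import math

def subject_contrast(mu_bg, mu_nodule, t_nodule_cm):
    """Subject contrast of a nodule embedded in breast tissue, in the
    simplest monoenergetic, narrow-beam approximation: the fractional
    drop in transmitted intensity caused by replacing a thickness
    t_nodule_cm of background tissue with nodule tissue."""
    # I_with_nodule / I_without = exp(-(mu_nodule - mu_bg) * t)
    return 1.0 - math.exp(-(mu_nodule - mu_bg) * t_nodule_cm)

# Illustrative (hypothetical) linear attenuation coefficients near 20 keV
mu_gland = 0.80    # cm^-1, assumed value for glandular background
mu_nodule = 0.85   # cm^-1, assumed value for a slightly denser nodule
c = subject_contrast(mu_gland, mu_nodule, 0.5)   # 5 mm nodule
```

In this toy form the contrast grows with both the attenuation difference and the nodule size, which is why the paper can turn contrast curves into detection limits for nodules.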
A Framework for Control System Design Subject to Average Data-Rate Constraints
DEFF Research Database (Denmark)
Silva, Eduardo; Derpich, Milan; Østergaard, Jan
2011-01-01
This paper studies discrete-time control systems subject to average data-rate limits. We focus on a situation where a noisy linear system has been designed assuming transparent feedback and, due to implementation constraints, a source-coding scheme (with unity signal transfer function) has to be ...
How Uninformed is the Average Data Subject? A Quest for Benchmarks in EU Personal Data Protection
Directory of Open Access Journals (Sweden)
Gloria González Fuster
2014-11-01
Full Text Available
Information obligations have always been crucial in personal data protection law. Reinforcing these obligations is one of the priorities of the legislative package introduced in 2012 by the European Commission to redefine the personal data protection legal landscape of the European Union (EU). Those responsible for processing personal data (the data controllers) must imperatively convey certain pieces of information to those whose data is processed (the data subjects), and they are expected to do so in an increasingly transparent manner. Beyond these punctual information requirements, however, data subjects appear always to be, and inevitably to remain, in a state of relative ignorance, in almost constant need of further guidance. Data subjects are nowadays often depicted as unknowing consumers of online services, services which surreptitiously take from them personal data conceived as a valuable asset. In light of these developments, this contribution critically investigates how EU law envisages data subjects in terms of knowledge. The paper reviews the birth and evolution of information obligations as an element of European personal data protection law, and asks whether thinking of data subjects as consumers is consistent with the notion of the average consumer operating in EU consumer law. Finally, it argues that the time might have come to clarify openly when data subjects are unlawfully misinformed, and that, in the meantime, individuals might benefit not only from accessing more transparent information, but also from being made more aware of the limitations of the information available to them.
19 CFR 145.2 - Mail subject to Customs examination.
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Mail subject to Customs examination. 145.2 Section 145.2 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY (CONTINUED) MAIL IMPORTATIONS General Provisions § 145.2 Mail subject to Customs...
A critical reflection on subjectivity in examination of higher degrees
Directory of Open Access Journals (Sweden)
Collins C Ngwakwe
2015-11-01
Full Text Available This paper is a critical reflection on the subjectivity that appears embedded in the external examination of higher degrees. The paper is significant given that education is a vital pillar of sustainable development; identifying obscure obstacles to this goal is therefore imperative for an equitable and sustainable education devoid of class, race and gender bias. Adopting a critical review approach, the paper surveys related research that documents apparent subjectivity among some examiners of higher degrees. Findings show a regrettable and often obscured subjectivity and/or misjudgement that impedes the higher degrees examination process. The paper highlights that while misjudgement or error is innate in every human endeavour, including higher degree examination, an error caused by an examiner's partisanship, or by unfamiliarity with the research focus, may be avoidable. In conclusion, the paper stresses that prejudice or ineptitude in higher degree examination should be curbed by, inter alia, implementing a policy of alternative assessors, vetting the pedigree of an examiner's assessment experience, and giving the supervisor or supervisors an opportunity to present a rebuttal in circumstances where one examiner's opinion is fraught with apparent subjectivity.
Newton, Sarah E; Moore, Gary
2007-01-01
Graduate nursing programs frequently use undergraduate grade point average (UGPA) and Graduate Record Examination (GRE) scores for admission decisions. The literature indicates that both UGPA and GRE scores are predictive of graduate school success, but that UGPA may be the better predictor. If that is so, one must ask if both are necessary for graduate nursing admission decisions. This article presents research on one graduate nursing program's experience with UGPA and GRE scores and offers a perspective regarding their continued usefulness for graduate admission decisions. Data from 120 graduate students were examined, and regression analysis indicated that UGPA significantly predicted GRE verbal and quantitative scores (p < .05). Regression analysis also determined a UGPA score above which the GRE provided little additional useful data for graduate nursing admission decisions.
The Validity of Subjects in Korean Dental Technicians' Licensing Examination
Directory of Open Access Journals (Sweden)
Woong-chul Kim
2005-06-01
Full Text Available This study prepared a basic framework for the development and improvement of the Korean Dental Technicians' Licensing Examination, based on actual test questions. A peer review was conducted to ensure relevance to current practices in dental technology. For the statistical analysis, 1000 dental laboratory technicians were selected; specialists in dental laboratory technology (laboratory owners, educators, etc.) were involved in creating valid and reliable questions. Results indicated that examination subjects should be divided into three categories: basic dental laboratory theory, dental laboratory specialties, and a practical examination. To ensure relevance to current practice, there should be less emphasis on basic dental laboratory theory, including health-related laws, and more emphasis on dental laboratory specialties. Introduction to dental anatomy should be separated from oral anatomy and tooth morphology, and fixed prosthodontics should be separated from crown and bridge technology and dental ceramics technology. Removable orthodontic appliance technology should be renamed 'orthodontic laboratory technology'. There should be fewer questions on health-related law, oral anatomy, dental hygiene, dental materials science and inlay, while the distribution ratio of questions on tooth morphology should be maintained. The distribution ratio of questions on crown and bridge technology, dental ceramics technology, complete dentures and removable partial dentures technology, and orthodontic laboratory technology should be decreased. In the practical examination, the current multiple-choice test should be replaced with tooth carving using wax or plaster. In dental laboratory specialties, subjects related to contemporary dental laboratory technology should be included in the test items.
Carriaga, Benito T.
2012-01-01
This study evaluated the impact of the master schedule design on student attendance, discipline, and grade point averages. Unexcused and excused absences, minor and major infraction, and grade point averages in three high schools during the 2008-09 and 2009-10 school years were included in the study. The purpose was to examine if any difference…
Soury, Hamza
2012-06-01
This letter considers the average bit error probability of binary coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closed form expression in terms of Fox's H function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions such as generalized-K fading and Nakagami-m fading and special additive noise distributions such as Gaussian and Laplacian noise are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer based simulations for a variety of fading and additive noise parameters. © 2012 IEEE.
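The paper's Fox's H closed form covers the general noise and fading case; the simplest special case it reduces to (BPSK, Rayleigh fading, Gaussian noise) can be checked with a short Monte Carlo sketch against the classical closed form 0.5·(1 − sqrt(γ̄/(1+γ̄))). The parameters below are illustrative, not from the letter:

```python
import math
import random

random.seed(42)

snr_db = 10.0
gamma = 10 ** (snr_db / 10)          # average SNR per bit, E[|h|^2] = 1
sigma = math.sqrt(1 / (2 * gamma))   # noise std chosen so average SNR is gamma
n_bits = 200_000

errors = 0
for _ in range(n_bits):
    # Rayleigh fading amplitude: magnitude of a unit-power complex Gaussian
    h = math.hypot(random.gauss(0, math.sqrt(0.5)),
                   random.gauss(0, math.sqrt(0.5)))
    bit = random.choice((-1.0, 1.0))
    r = h * bit + random.gauss(0, sigma)   # coherent detection: sign decides
    if (r > 0) != (bit > 0):
        errors += 1

ber_sim = errors / n_bits
# Classical average BEP for coherent BPSK over Rayleigh fading
ber_closed_form = 0.5 * (1 - math.sqrt(gamma / (1 + gamma)))
```

At 10 dB average SNR the two agree closely; the letter's contribution is extending such closed forms to generalized Gaussian noise and extended generalized-K fading.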
Post-deformation examination of specimens subjected to SCC testing
Energy Technology Data Exchange (ETDEWEB)
Gussev, Maxim N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Field, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Busby, Jeremy T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Leonard, Keith J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-09-01
This report details the results of post-radiation and post-deformation characterizations performed during FY 2015–FY 2016 on a subset of specimens that had previously been irradiated at high displacement per atom (dpa) damage doses. The specimens, made of commercial austenitic stainless steels and alloys, were subjected to stress-corrosion cracking tests (constant extension rate testing and crack growth testing) at the University of Michigan under conditions typical of nuclear power plants. After testing, the specimens were returned to Oak Ridge National Laboratory (ORNL) for further analysis and evaluation.
Middlemas, David A.; Manning, James M.; Gazzillo, Linda M.; Young, John
2001-06-01
OBJECTIVE: To determine whether grade point average, hours of clinical education, or both are significant predictors of performance on the National Athletic Trainers' Association Board of Certification examination and whether curriculum and internship candidates' scores on the certification examination can be differentially predicted. DESIGN AND SETTING: Data collection forms and consent forms were mailed to the subjects to collect data for predictor variables. Subject scores on the certification examination were obtained from Columbia Assessment Services. SUBJECTS: A total of 270 first-time candidates for the April and June 1998 certification examinations. MEASUREMENTS: Grade point average, number of clinical hours completed, sex, route to certification eligibility (curriculum or internship), scores on each section of the certification examination, and pass/fail criteria for each section. RESULTS: We found no significant difference between the scores of men and women on any section of the examination. Scores for curriculum and internship candidates differed significantly on the written and practical sections of the examination but not on the simulation section. Grade point average was a significant predictor of scores on each section of the examination and the examination as a whole. Clinical hours completed did not add a significant increment for any section but did add a significant increment for the examination overall. Although no significant difference was noted between curriculum and internship candidates in predicting scores on sections of the examination, a significant difference by route was found in predicting whether candidates would pass the examination as a whole (P =.047). Proportion of variance accounted for was less than R(2) = 0.0723 for any section of the examination and R(2) = 0.057 for the examination as a whole. CONCLUSIONS: Potential predictors of performance on the certification examination can be useful to athletic training educators in…
Miuţescu, Bogdan; Sporea, Ioan; Popescu, Alina; Bota, Simona; Iovănescu, Dana; Burlea, Amelia; Mos, Liana; Miuţescu, Eftimie
2013-01-01
The aim of this study is to evaluate the usefulness of the fecal immunochemical test (FIT) in colorectal cancer screening and in the detection of precancerous lesions and early colorectal cancer. The study evaluated asymptomatic patients with average risk (no personal or family history of polyps or colorectal cancer), aged between 50 and 74 years. The presence of occult haemorrhage was tested with the immunochemical faecal test Hem Check 1 (Veda Lab, France). The subjects were not asked to observe any dietary or drug restrictions. Colonoscopy was recommended to all subjects who tested positive. A total of 1389 participants met the inclusion criteria, with a mean age of 61.2 ± 12.8 years: 565 (40.7%) men and 824 (59.3%) women. FIT was positive in 87 individuals (6.3%). Colonoscopy was performed in 57/87 subjects (65.5%) with positive FIT, while the remaining subjects refused or delayed the investigation. Five (8.8%) patients could not undergo a complete colonoscopy because of neoplastic stenosis. Relative to all study participants, colonoscopy revealed cancer in 10 cases (0.7%), advanced adenomas in 29 (2.1%) and non-advanced adenomas in 15 (1.1%). The colonoscopies performed revealed a greater percentage of advanced adenomas in the left colon than in the right colon, 74.1% vs. 28.6% (p<0.001). In our study, FIT had a positivity rate of 6.3%, and the detection rate for advanced neoplasia was 2.8% (0.7% for cancer, 2.1% for advanced adenomas). Adherence to colonoscopy among FIT-positive subjects was 65.5%.
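The reported rates follow directly from the raw counts given in the abstract; a quick arithmetic check:

```python
# Counts as reported in the abstract
participants = 1389
fit_positive = 87
colonoscopy_done = 57
cancers = 10
advanced_adenomas = 29

positivity_rate = fit_positive / participants                  # share of FIT-positive subjects
adherence = colonoscopy_done / fit_positive                    # FIT-positives who had colonoscopy
detection_advanced_neoplasia = (cancers + advanced_adenomas) / participants
```

Rounded to one decimal place these reproduce the 6.3% positivity, 65.5% adherence and 2.8% advanced-neoplasia detection rates reported in the study.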
Ouyang, Wenli; Cuddy, Monica M; Swanson, David B
2015-09-01
Prior to graduation, US medical students are required to complete clinical clerkship rotations, most commonly in the specialty areas of family medicine, internal medicine, obstetrics and gynecology (ob/gyn), pediatrics, psychiatry, and surgery. Within a school, the sequence in which students complete these clerkships varies. In addition, the length of these rotations varies, both within a school for different clerkships and between schools for the same clerkship. The present study investigated the effects of clerkship sequence and length on performance on the National Board of Medical Examiners' subject examination in internal medicine. The study sample included 16,091 students from 67 US Liaison Committee on Medical Education (LCME)-accredited medical schools who graduated in 2012 or 2013. Student-level measures included first-attempt internal medicine subject examination scores, first-attempt USMLE Step 1 scores, and five dichotomous variables capturing whether or not students completed rotations in family medicine, ob/gyn, pediatrics, psychiatry, and surgery prior to taking the internal medicine rotation. School-level measures included clerkship length and average Step 1 score. Multilevel models with students nested in schools were estimated with internal medicine subject examination scores as the dependent measure. Step 1 scores and the five dichotomous variables were treated as student-level predictors. Internal medicine clerkship length and average Step 1 score were used to predict school-to-school variation in average internal medicine subject examination scores. Completion of rotations in surgery, pediatrics and family medicine prior to taking the internal medicine examination significantly improved scores, with the largest benefit observed for surgery (coefficient = 1.58 points; p value …) … internal medicine subject examination performance. At the school level, longer internal medicine clerkships were associated with higher scores on the internal medicine subject examination…
Directory of Open Access Journals (Sweden)
Jesús Vega Encabo
2015-11-01
Full Text Available In this paper, I claim that subjectivity is a way of being that is constituted through a set of practices in which the self is subject to the dangers of fictionalizing and plotting her life and self-image. I examine some ways of becoming subject through narratives and through theatrical performance before others. Through these practices, a real and active subjectivity is revealed, capable of self-knowledge and self-transformation.
ACT, Inc., 2014
2014-01-01
Female students who graduated from high school in 2013 averaged higher grades than their male counterparts in all subjects, but male graduates earned higher scores on the math and science sections of the ACT. This information brief looks at high school grade point average and ACT test scores by subject and gender.
Directory of Open Access Journals (Sweden)
Amankwaa, Isaac; Agyemang-Dankwah, Anabella; Boateng, Daniel
2015-01-01
Introduction. Success in the licensure examination is the only legal prerequisite to practice as a nurse in Ghana. However, a large percentage of nursing students fail this examination at their first sitting. This study sought to determine whether prior education, sociodemographic characteristics, and nursing Cumulative Grade Point Average (CGPA) could predict performance in the licensure examinations. Methods. The study was a descriptive cross-sectional survey conducted from November 2014 to April 2015 in the Kumasi metropolis, Ghana on 176 past nursing students. Data were collected using questionnaires and analyzed using SPSS version 22. A logistic regression model was fitted to assess the influence of the explanatory variables on the odds of passing the licensure examinations. All statistical significances were tested at a p value of <0.05. Results. The majority (56.3%) were female and 86.4% were between the ages of 25 and 31 years. Most of the students (88.6%) entered the nursing training colleges with a WASSCE qualification and 38% read general science. Overall, 73.9% passed the licensure examinations and the mean CGPA of the students was 2.89 (SD = 0.37). Sociodemographic characteristics and previous education had no influence on performance in the licensure examinations. CGPA had a strong positive relationship with performance in the licensure examinations (AOR = 15.27; 95% CI = 6.28, 27.11). Conclusion. Students' CGPA could be a good predictor of their performance in the licensure examinations. On the other hand, students' sociodemographic and previous educational characteristics might not be important factors to consider in admitting students into the nursing training programme.
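The adjusted odds ratio of 15.27 for CGPA can be read through the logistic model it came from: each additional unit of CGPA multiplies the odds of passing by 15.27. A small sketch of that interpretation (only the odds ratio comes from the abstract; the baseline logit is a hypothetical reference point):

```python
import math

def pass_probability(base_logit, log_or_per_unit, cgpa_delta):
    """Predicted pass probability from a fitted logistic model:
    logit(p) = base_logit + log(OR) * cgpa_delta."""
    z = base_logit + log_or_per_unit * cgpa_delta
    return 1 / (1 + math.exp(-z))

log_aor = math.log(15.27)   # reported adjusted odds ratio for CGPA

# base_logit = 0 is a hypothetical reference: even odds at the reference CGPA
p0 = pass_probability(0.0, log_aor, 0.0)   # at the reference CGPA
p1 = pass_probability(0.0, log_aor, 0.5)   # half a grade point higher
```

With so large an odds ratio, even half a CGPA point sharply raises the predicted probability of passing, which is why the authors single out CGPA as the useful predictor.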
Kung, Justin W; Levine, Marc S; Glick, Seth N; Lakhani, Paras; Rubesin, Stephen E; Laufer, Igor
2006-09-01
To retrospectively determine the diagnostic yield of double-contrast barium enema examinations performed for colorectal cancer screening, in terms of neoplasms 1 cm or larger and advanced neoplastic lesions of any size, in average-risk adults older than 50 years. The Institutional Review Board at the affiliated Veterans Affairs Medical Center approved this HIPAA-compliant study protocol and did not require informed consent from patients. Computerized databases revealed 276 double-contrast barium enema examinations performed for colorectal cancer screening in average-risk adults older than 50 years. Radiographic and pathologic reports were reviewed to determine the number of patients who had polypoid lesions 1 cm or larger, polyps smaller than 1 cm, or advanced neoplastic lesions of any size. Forty-five (16.3%) of the 276 patients underwent follow-up sigmoidoscopy or colonoscopy. Medical, endoscopic, and pathologic records were reviewed and compared with radiographic findings. The results of double-contrast barium enema examination revealed 74 (26.8%) of 276 patients with 104 polypoid lesions in the colon, including 32 patients (11.6%) with 41 polypoid lesions 1 cm or larger, 15 patients (5.4%) with 19 polyps 6-9 mm, and 27 patients (9.8%) with 44 polyps 5 mm or smaller. Endoscopy was performed in 24 (75%) of 32 patients, the results of which confirmed 23 (72%) of 32 radiographically diagnosed lesions 1 cm or larger in 16 (67%) of 24 patients. In two of these individuals, the polyps were hyperplastic. The remaining 14 patients had a total of 21 neoplastic lesions 1 cm or larger, including 11 tubular adenomas, seven tubulovillous adenomas, one villous adenoma with marked dysplasia, and two cancers. The diagnostic yield of screening double-contrast barium enema examination was 5.1% (14 of 276 patients) for neoplastic lesions 1 cm or larger and 6.2% (17 of 276 patients) for advanced neoplastic lesions of any size. Double-contrast barium enema examinations performed in average-risk adults older than 50 years…
Immediate Memory for Haptically-Examined Braille Symbols by Blind and Sighted Subjects.
Newman, Slater E.; And Others
The paper reports on two experiments in Braille learning which compared blind and sighted subjects on the immediate recall of haptically-examined Braille symbols. In the first study, sighted subjects (N=64) haptically examined each of a set of Braille symbols with their preferred or nonpreferred hand and immediately recalled the symbol by drawing…
Bailey, J E; Yackle, K A; Yuen, M T; Voorhees, L I
2000-04-01
To evaluate preoptometry and optometry school grade point averages and Optometry Admission Test (OAT) scores as predictors of performance on the National Board of Examiners in Optometry (NBEO) Part I (Basic Science) (NBEOPI) examination. Simple and multiple correlation coefficients were computed from data obtained from a sample of three consecutive classes of optometry students (1995-1997; n = 278) at Southern California College of Optometry. The GPA after year two of optometry school showed the highest correlation (r = 0.75) among all predictor variables; the average of all scores on the OAT showed the highest correlation among preoptometry predictor variables (r = 0.46). Stepwise regression analysis indicated that a combination of the optometry GPA, the OAT Academic Average, and the GPA in certain optometry curricular tracks resulted in an improved correlation (multiple r = 0.81). Predicted NBEOPI scores were computed from the regression equation and then analyzed by receiver operating characteristic (ROC) and statistic of agreement (kappa) methods. From this analysis, we identified the predicted score that maximized identification of true and false NBEOPI failures (71% and 10%, respectively). Cross-validation of this result on a separate class of optometry students resulted in a slightly lower correlation between actual and predicted NBEOPI scores (r = 0.77) but showed the criterion-predicted score to be somewhat lax. The optometry school GPA after 2 years is a reasonably good predictor of performance on the full NBEOPI examination, but the prediction is enhanced by adding the Academic Average OAT score. However, prediction of performance in certain subject areas of the NBEOPI examination, for example Psychology and Ocular/Visual Biology, was rather insubstantial. Nevertheless, predicting NBEOPI performance from the best combination of year-two optometry GPAs and preoptometry variables is better than has been shown in previous studies predicting optometry GPA from the best…
NBME subject examination in surgery scores correlate with surgery clerkship clinical experience.
Myers, Jonathan A; Vigneswaran, Yalini; Gabryszak, Beth; Fogg, Louis F; Francescatti, Amanda B; Golner, Christine; Bines, Steven D
2014-01-01
Most medical schools in the United States use the National Board of Medical Examiners Subject Examinations as a method of at least partial assessment of student performance, yet there is still uncertainty of how well these examination scores correlate with clinical proficiency. Thus, we investigated which factors in a surgery clerkship curriculum have a positive effect on academic achievement on the National Board of Medical Examiners Subject Examination in Surgery. A retrospective analysis of 83 third-year medical students at our institution with 4 unique clinical experiences on the general surgery clerkship for the 2007-2008 academic year was conducted. Records of the United States Medical Licensing Examination Step 1 scores, National Board of Medical Examiners Subject Examination in Surgery scores, and essay examination scores for the groups were compared using 1-way analysis of variance testing. Rush University Medical Center, Chicago IL, an academic institution and tertiary care center. Our data demonstrated National Board of Medical Examiners Subject Examination in Surgery scores from the group with the heavier clinical loads and least time for self-study were statistically higher than the group with lighter clinical services and higher rated self-study time (p = 0.036). However, there was no statistical difference of National Board of Medical Examiners Subject Examination in Surgery scores between the groups with equal clinical loads (p = 0.751). Students experiencing higher clinical volumes on surgical services, but less self-study time demonstrated statistically higher academic performance on objective evaluation, suggesting clinical experience may be of higher value than self-study and reading. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Bo Ahrén
2016-01-01
Full Text Available We hypothesized that the relative contribution of fasting plasma glucose (FPG) versus postprandial plasma glucose (PPG) to glycated haemoglobin (HbA1c) could be calculated using an algorithm developed by the A1c-Derived Average Glucose (ADAG) study group, to make HbA1c values more clinically relevant to patients. The algorithm estimates average glucose (eAG) exposure, which can be used to calculate apparent PPG (aPPG) by subtracting FPG. The hypothesis was tested in a large dataset (comprising 17 studies from the vildagliptin clinical trial programme). We found that 24 weeks of treatment with vildagliptin monotherapy (n=2523) reduced the relative contribution of aPPG to eAG from 8.12% to 2.95% (by 64%, p<0.001). In contrast, when vildagliptin was added to metformin (n=2752), the relative contribution of aPPG to eAG increased, though not significantly, from 1.59% to 2.56%. In conclusion, glucose peaks, which are often prominent in patients with type 2 diabetes, provide a small contribution to the total glucose exposure assessed by HbA1c, and the ADAG algorithm is not robust enough to assess this small relative contribution in patients receiving combination therapy.
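The ADAG conversion the abstract relies on is the published linear regression eAG (mg/dL) = 28.7 × HbA1c − 46.7; subtracting FPG then yields the apparent postprandial component. A minimal sketch of that chain (the patient values are hypothetical, and this is the spirit of the approach, not the study's exact algorithm):

```python
def eAG_mgdl(hba1c_percent):
    """ADAG study group regression: estimated average glucose (mg/dL)
    from HbA1c (%)."""
    return 28.7 * hba1c_percent - 46.7

def apparent_ppg_contribution(hba1c_percent, fpg_mgdl):
    """Relative contribution of apparent postprandial glucose (aPPG)
    to estimated average glucose (eAG): aPPG = eAG - FPG."""
    eag = eAG_mgdl(hba1c_percent)
    appg = eag - fpg_mgdl
    return appg / eag

# Hypothetical patient: HbA1c 7.5%, fasting plasma glucose 160 mg/dL
share = apparent_ppg_contribution(7.5, 160.0)
```

For this hypothetical patient the postprandial share of eAG comes out around 5%, the same order as the small contributions the study reports.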
Retamero, Carolina; Ramchandani, Dilip
2013-01-01
Objective: The authors compared the NBME subject examination scores and subspecialty profiles of 3rd-year medical students who were assigned to psychiatry subspecialties during their clerkship with those who were not. Method: The authors collated and analyzed the shelf examination scores, the clinical grades, and the child psychiatry and emergency…
Gratitude and Subjective Well-Being in Early Adolescence: Examining Gender Differences
Froh, Jeffrey J.; Yurkewicz, Charles; Kashdan, Todd B.
2009-01-01
Gratitude was examined among 154 students to identify benefits from its experience and expression. Students completed measures of subjective well-being, social support, prosocial behavior, and physical symptoms. Positive associations were found between gratitude and positive affect, global and domain specific life satisfaction, optimism, social…
Gaines, Carmen Veronica
The early stages of chemical tooth decay are governed by dynamic processes of demineralization and remineralization of dental enamel that initiate along the surface of the tooth. Conventional diagnostic techniques lack the spatial resolution required to analyze near-surface structural changes in enamel at the submicron level. In this study, slabs of highly polished, decay-free human enamel were subjected to 0.12 M EDTA and buffered lactic acid demineralizing agents and to MI Paste(TM) and calcifying (0.1 ppm F) remineralizing treatments in vitro. Grazing incidence x-ray diffraction (GIXD), a technique typically used for thin film analysis, provided depth profiles of crystallinity changes in surface enamel with a resolution better than 100 nm. In conjunction with nanoindentation, a technique gaining acceptance as a means of examining the mechanical properties of sound enamel, these results were corroborated with well-established microscopy and Raman techniques to assess the nanohardness, morphologies and chemical nature of treated enamel. Interestingly, the average crystallite size of surface enamel along its c-axis dimension increased by nearly 40% after a 60 min EDTA treatment as detected by GIXD. This result was in direct contrast to the obvious surface degradation observed by microscopic and confocal Raman imaging. A decrease in nanohardness from 4.86 +/- 0.44 GPa to 0.28 +/- 0.10 GPa was observed. Collectively, these results suggest that mineral dissolution characteristics evident on the micron scale may not be fully translated to the nanoscale in assessing the integrity of chemically-modified tooth enamel. While an intuitive decrease in enamel crystallinity was observed with buffered lactic acid-treated samples, demineralization was too slow to adequately quantify the enamel property changes seen. MI Paste(TM) treatment of EDTA-demineralized enamel showed preferential growth along the a-axis direction. Calcifying solution treatments of both demineralized sample types…
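The abstract does not state how crystallite size was extracted from the GIXD data; the conventional route from peak width to crystallite size is Scherrer broadening, sketched below with hypothetical peak parameters (the reflection angle, FWHM values, and shape factor are assumptions, not the dissertation's numbers):

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Scherrer estimate of crystallite size from a diffraction peak:
    tau = K * lambda / (beta * cos(theta)), beta = FWHM in radians.
    Defaults assume Cu K-alpha radiation and a shape factor of 0.9."""
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)
    return K * wavelength_nm / (beta * math.cos(theta))

# Hypothetical (002) apatite reflection near 2-theta = 25.9 degrees
before = scherrer_size_nm(25.9, 0.30)   # broader peak: smaller crystallites
after = scherrer_size_nm(25.9, 0.21)    # narrower peak after treatment
```

A peak that narrows from 0.30 to 0.21 degrees corresponds to roughly a 40% increase in apparent crystallite size, the magnitude of change the study reports along the c-axis.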
Kimbrough, Tiffany N; Heh, Victor; Wijesooriya, N Romesh; Ryan, Michael S
2016-01-01
To determine the association between family-centered rounds (FCR) and medical student knowledge acquisition as assessed by the National Board of Medical Examiners (NBME) pediatric subject (shelf) exam. A retrospective cohort study was conducted of third-year medical students who graduated from Virginia Commonwealth University School of Medicine between 2009 and 2014. This timeframe represented the transition from 'traditional' rounds to FCR on the pediatric inpatient unit. Data collected included demographics, United States Medical Licensing Examination (USMLE) Step 1 and 2 scores, and NBME subject examinations in pediatrics (PSE), medicine (MSE), and surgery (SSE). Eight hundred and sixteen participants were included in the analysis. Student performance on the PSE could not be statistically differentiated from performance on the MSE for any year except 2011 (z-score=-0.17, p=0.02). Average scores on PSE for years 2009, 2010, 2013, and 2014 were significantly higher than for SSE, but not significantly different for all other years. The PSE was highly correlated with USMLE Step 1 and Step 2 examinations (correlation range 0.56-0.77) for all years. Our results showed no difference in PSE performance during a time in which our institution transitioned to FCR. These findings should be reassuring for students, attending physicians, and medical educators.
Roy, Banibrata; Ripstein, Ira; Perry, Kyle; Cohen, Barry
2016-01-01
To determine whether the pre-medical Grade Point Average (GPA), Medical College Admission Test (MCAT), internal examination (Block) and National Board of Medical Examiners (NBME) scores are correlated with and predict the Medical Council of Canada Qualifying Examination Part I (MCCQE-1) scores. Data from 392 admitted students in the graduating classes of 2010-2013 at the University of Manitoba (UofM) College of Medicine were considered. Pearson's correlation was used to assess the strength of the relationships, multiple linear regression to estimate MCCQE-1 scores, and stepwise linear regression to investigate the amount of variance explained. Complete data from 367 (94%) students were studied. The MCCQE-1 had a moderate-to-large positive correlation with NBME scores and Block scores but a low correlation with GPA and MCAT scores. The multiple linear regression model gave a good estimate of the MCCQE-1 (R2 = 0.604). Stepwise regression analysis demonstrated that 59.2% of the variation in the MCCQE-1 was accounted for by the NBME, but only 1.9% by the Block exams, and negligible variation came from the GPA and the MCAT. Amongst all the examinations used at UofM, the NBME is most closely correlated with the MCCQE-1.
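The variance accounting reported above (an R2 of 0.604, with 59.2% attributable to the NBME) rests on ordinary least squares. A minimal single-predictor sketch of how R2 is computed follows; the function names and all score values are invented for illustration, not the Manitoba data:

```python
def linear_fit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return my - b * mx, b

def r_squared(x, y):
    """Fraction of the variance in y accounted for by the linear fit."""
    a, b = linear_fit(x, y)
    my = sum(y) / len(y)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical NBME and licensing-exam scores for six students.
nbme = [65, 70, 72, 78, 83, 88]
mccqe = [480, 500, 505, 530, 545, 570]
r2 = r_squared(nbme, mccqe)  # close to 1.0 for this nearly linear toy data
```

Stepwise regression extends this idea by adding predictors one at a time and attributing the increase in R2 to each newly entered variable.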
Luce, David
2011-01-01
The purpose of this study was to develop an effective screening tool for identifying physician assistant (PA) program applicants at highest risk for poor academic performance. Prior to reviewing applications for the class of 2009, a retrospective analysis of preadmission data took place for the classes of 2006, 2007, and 2008. A single composite score was calculated for each student who matriculated (number of subjects, N=228) incorporating the total undergraduate grade point average (UGPA), the science GPA (SGPA), and the three component Graduate Record Examination (GRE) scores: verbal (GRE-V), quantitative (GRE-Q), analytical (GRE-A). Individual applicant scores for each of the five parameters were ranked in descending quintiles. Each applicant's five quintile scores were then added, yielding a total quintile score ranging from 25, which indicated an excellent performance, to 5, which indicated poorer performance. Thirteen of the 228 students had academic difficulty (dismissal, suspension, or one-quarter on academic warning or probation). Twelve of the 13 students having academic difficulty had a preadmission total quintile score 12 (range, 6-14). In response to this descriptive analysis, when selecting applicants for the class of 2009, the admissions committee used the total quintile score for screening applicants for interviews. Analysis of correlations in preadmission, graduate, and postgraduate performance data for the classes of 2009-2013 will continue and may help identify those applicants at risk for academic difficulty. Establishing a threshold total quintile score of applicant GPA and GRE scores may significantly decrease the number of entering PA students at risk for poor academic performance.
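The composite screening procedure described above (ranking each of five preadmission parameters into descending quintiles and summing the ranks to a 5-25 score) can be sketched in a few lines. The parameter keys and applicant values below are illustrative assumptions, not the program's actual data or code:

```python
def quintile_rank(value, cohort_values):
    """Return the descending-quintile rank (1-5) of `value` within the
    cohort: 5 = top fifth of applicants, 1 = bottom fifth."""
    below = sum(1 for v in cohort_values if v < value)
    return min(below * 5 // len(cohort_values) + 1, 5)

def total_quintile_score(applicant, cohort):
    """Sum the five quintile ranks (UGPA, SGPA, GRE-V, GRE-Q, GRE-A).
    The total ranges from 5 (weakest) to 25 (strongest)."""
    params = ["ugpa", "sgpa", "gre_v", "gre_q", "gre_a"]
    return sum(
        quintile_rank(applicant[p], [c[p] for c in cohort]) for p in params
    )

# Illustrative cohort of five applicants (values are made up).
cohort = [
    {"ugpa": 3.9, "sgpa": 3.8, "gre_v": 160, "gre_q": 165, "gre_a": 5.0},
    {"ugpa": 3.5, "sgpa": 3.4, "gre_v": 155, "gre_q": 158, "gre_a": 4.5},
    {"ugpa": 3.2, "sgpa": 3.0, "gre_v": 150, "gre_q": 152, "gre_a": 4.0},
    {"ugpa": 2.9, "sgpa": 2.8, "gre_v": 146, "gre_q": 148, "gre_a": 3.5},
    {"ugpa": 2.6, "sgpa": 2.5, "gre_v": 142, "gre_q": 144, "gre_a": 3.0},
]
scores = [total_quintile_score(a, cohort) for a in cohort]  # [25, 20, 15, 10, 5]
```

A real admissions screen would rank each parameter against the full applicant pool rather than a five-person toy cohort; the scoring logic is otherwise the same.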
A statistical observation on some subjects in the whole body computed tomographic examination
International Nuclear Information System (INIS)
Murakami, Shozo; Matsumoto, Shigekazu; Murakawa, Yasuhiro; Morimoto, Mitsuo; Nakai, Toshio
1983-01-01
Since the whole body CT (computed tomography) unit (GE, CT/T) was installed in our hospital in April 1982, a total of 2884 cases were examined by this whole body scanner in the year from April 1982 to March 1983. An analysis of the relationship between the circumstances of the subjects and the results of whole body CT examinations disclosed some very interesting facts. Such a study has scarcely been made up to the present time, which is why we present this report. The results obtained are as follows: 1. Whole body CT examinations were performed more frequently on patients of advanced age than on young patients, and most frequently on patients in their sixties. 2. CT examinations of the head and abdomen accounted for 86.7% of the total of 2884 cases. 3. Enhanced CT examinations were performed on 26.1% of the 2884 cases, most frequently on patients in their teens. 4. The percentage of abnormal findings in the 2884 cases was 61.5%, higher than the rates shown in our reports of 1980 and 1982, respectively. (author)
SARIÇAM, Hakan
2016-01-01
The basic purpose of this study is to examine the mediating and moderating role of subjective vitality in the relationship between rumination and subjective happiness. The participants were 420 university students. In this research, the Self-rumination Scale (SRS), the Subjective Vitality Scale and the Short Form of the Oxford Happiness Questionnaire were used. The relationships between rumination, subjective vitality, and happiness were examined using correlation analysis and hierarchical regression a...
Transforming the Subject Matter: Examining the Intellectual Roots of Pedagogical Content Knowledge
Deng, Zongyi
2007-01-01
This article questions the basic assumptions of pedagogical content knowledge by analyzing the ideas of Jerome Bruner, Joseph Schwab, and John Dewey concerning transforming the subject matter. It argues that transforming the subject matter is not only a pedagogical but also a complex curricular task in terms of developing a school subject or a…
Prochaska, John D; Buschmann, Robert N; Jupiter, Daniel; Mutambudzi, Miriam; Peek, M Kristen
2018-06-01
Research suggests a linkage between perceptions of neighborhood quality and the likelihood of engaging in leisure-time physical activity. Often in these studies, intra-neighborhood variance is viewed as something to be controlled for statistically. However, we hypothesized that intra-neighborhood variance in perceptions of neighborhood quality may be contextually relevant. We examined the relationship between intra-neighborhood variance of subjective neighborhood quality and neighborhood-level reported physical inactivity across 48 neighborhoods within a medium-sized city, Texas City, Texas, using survey data from 2706 residents collected between 2004 and 2006. Neighborhoods where the aggregated perception of neighborhood quality was poor also had a larger proportion of residents reporting being physically inactive. However, a higher degree of disagreement among residents within a neighborhood about their neighborhood quality was significantly associated with a lower proportion of residents reporting being physically inactive (p=0.001). Our results suggest that intra-neighborhood variability may be contextually relevant in studies seeking to better understand the relationship between neighborhood quality and behaviors sensitive to neighborhood environments, like physical activity. Copyright © 2017 Elsevier Inc. All rights reserved.
DePasquale, Nicole; Zarit, Steven H; Mogle, Jacqueline; Moen, Phyllis; Hammer, Leslie B; Almeida, David M
2018-04-01
Based on the stress process model of family caregiving, this study examined subjective stress appraisals and perceived schedule control among men employed in the long-term care industry (workplace-only caregivers) who concurrently occupied unpaid family caregiving roles for children (double-duty child caregivers), older adults (double-duty elder caregivers), and both children and older adults (triple-duty caregivers). Survey responses from 123 men working in nursing home facilities in the United States were analyzed using multiple linear regression models. Results indicated that workplace-only and double- and triple-duty caregivers appraised primary stress similarly. However, several differences emerged with respect to secondary role strains, specifically work-family conflict, emotional exhaustion, and turnover intentions. Schedule control also constituted a stress buffer for double- and triple-duty caregivers, particularly among double-duty elder caregivers. These findings contribute to the scarce literature on double- and triple-duty caregiving men and have practical implications for recruitment and retention strategies in the health care industry.
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
DeWald, Janice P; Gutmann, Marylou E; Solomon, Eric S
2004-01-01
Passing the National Board Dental Hygiene Examination is a requirement for licensure in all but one state. There are a number of preparation courses for the examination sponsored by corporations and dental hygiene programs. The purpose of this study was to determine if taking a board review course significantly affected student performance on the board examination. Students from the last six dental hygiene classes at Baylor College of Dentistry (n = 168) were divided into two groups depending on whether they took a particular review course. Mean entering college grade point averages (GPA), exiting dental hygiene program GPAs, and National Board scores were compared for the two groups using a t-test for independent samples (p < 0.05). No significant differences were found between the two groups for entering GPA and National Board scores. Exiting GPAs, however, were slightly higher for those not taking the course compared to those taking the course. In addition, a strong correlation (0.71, Pearson Correlation) was found between exiting GPA and National Board score. Exiting GPA was found to be a strong predictor of National Board performance. These results do not appear to support this program's participation in an external preparation course as a means of increasing students' performance on the National Board Dental Hygiene Examination.
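The group comparison described above uses a t-test for independent samples. A pooled-variance (Student's) version can be sketched as follows; the score lists are invented stand-ins, not the Baylor data:

```python
from math import sqrt

def t_independent(a, b):
    """Student's t statistic for two independent samples (pooled variance),
    as used to compare group means; returns (t, degrees of freedom)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical board scores: review-course takers vs. non-takers.
took_course = [82, 85, 79, 88, 84]
no_course = [83, 86, 81, 87, 85]
t, df = t_independent(took_course, no_course)
```

In practice the t statistic is compared against the t distribution with the returned degrees of freedom to obtain the p-value (the study used p < 0.05 as its threshold).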
Alexander, Helen A.
1996-01-01
A study investigated the role of subjective assessment in the evaluation of physiotherapy students in clinical programs. Clinical teachers, visiting lecturers, and students recorded perceptions of daily events and interactions in journals. Analysis suggests that assessors make subjective judgments about students that influence grades, and…
Colbow, Alexander James
2017-01-01
The aim of this study was to examine the relations between aspects of subjective social class, academic performance, and subjective wellbeing in first-generation and veteran students. In recent years, both student veterans and first-generation students have become topics of interest for universities, counselors, and researchers, as they are…
Gender Differences in the Psychosomatic Reactions of Students Subjected to Examination Stress
Kosmala-Anderson, Joanna; Wallace, Louise M.
2007-01-01
Introduction: The study investigated pre-examination anxiety and emotional control strategies as possible mediators of gender differences in self reported intensity and type of psychosomatic reactions to examination stress. Method: Sample comprised 150 male and 150 female high school senior students and university students who voluntarily…
Directory of Open Access Journals (Sweden)
Saraih Ummi Naiemah
2018-01-01
The purpose of this research is to investigate the relationships between attitude towards behaviour, subjective norm and entrepreneurial intention among engineering students from a Public Higher Educational Institution (PHEI) in Malaysia. This research was carried out using a quantitative method (questionnaire). Data were gathered from 345 respondents, consisting of final-year students from one PHEI in Malaysia. Results showed that entrepreneurial intention is positively associated with attitude towards behaviour (β=.62, p<.01) and subjective norm (β=.25, p<.01). Thus, it is confirmed that both factors of the Theory of Planned Behaviour (TPB), namely attitude towards behaviour and subjective norm, are significantly related to entrepreneurial intention among the engineering students in this institution. Elevating the degree of attitude towards behaviour and subjective norm is among the best strategies to enhance the level of entrepreneurial intention among the engineering students in this institution. Theoretical and practical implications of the results are discussed, and recommendations for the institution's management are provided.
Sevensma, Kara
that students employed a variety of online reading comprehension strategies in complex and dynamic ways. Among the many strategies revealed, the group of self-regulatory strategies (planning, predicting, monitoring, and evaluating) played a significant role, influencing students' use of all other strategies for locating and generating meaning from science websites. Second, the results also suggested that patterns of strategy use could be examined as unique navigational profiles. Rather than remaining fixed, the navigational profiles of each student altered in response to tasks and research methods. Importantly, all at-risk readers revealed more effective navigational profiles on Day 3, when the design of the task forced them to attend to project goals and employ more self-regulatory strategies. Third, the results revealed that traditional reading comprehension strategies and prior knowledge of the rainforest also influenced online reading comprehension. Specifically, the at-risk readers with the lowest reading comprehension, oral reading fluency, and prior knowledge scores were more likely than the average-achieving readers to encounter issues in online texts that resulted in constructing ineffective traversals, or online reading paths, and to spend significant time on online reading that was irrelevant to the research project. Ultimately, this study advanced understanding of online reading comprehension for average-achieving and at-risk readers in science classrooms, addressing a gap in the research, suggesting implications for practice, and promoting future research questions.
Examining subjective wellbeing and health-related quality of life in women with endometriosis.
Rush, Georgia; Misajon, RoseAnne
2018-03-01
The purpose of this study was to explore the subjective wellbeing, health-related quality of life and lived experience of women living with endometriosis. In 2015, five hundred participants aged 18-63 years (M = 30.5, SD = 7.46) were recruited through Endometriosis Australia and social media, completing an online questionnaire comprising the Personal Wellbeing Index, the Endometriosis Health Profile-30 and various open-ended questions. Women with endometriosis reported low levels of subjective wellbeing (mean PWI total score of 51.5 ± 2.03), considerably below the normative range of 70-80 for western populations. The mean Endometriosis Health Profile total score indicated a very low health-related quality of life amongst the women in this sample (78.9 ± 13.14). There was also a significant relationship between scores on the Endometriosis Health Profile and the Personal Wellbeing Index. The findings from the qualitative data suggest that endometriosis impacts negatively on women's lives in several areas, such as social life, relationships and future plans, which in turn affects women's overall quality of life. The study highlights the strong negative impact that endometriosis can have on women's subjective wellbeing and health-related quality of life, contributing to productivity issues, relationship difficulties and social dissatisfaction, and increasing the risk of psychological comorbidities.
Apparatus for radiological examination of a subject through a solid angle
International Nuclear Information System (INIS)
Grady, J.K.; Rice, D.B.
1975-01-01
A framework supporting a radiation source, such as an x-ray tube, and a radiation receptor, such as an x-ray film plate holder, comprises four arms pivotally connected to form a regular parallelogram, a parallel pair of the arms extending outside the parallelogram to pivot points for the radiation source and receptor. The parallelogram is mounted on a rotor whose central axis is parallel to the parallel pair of arms. Two links between another one of the arms and the source and receptor respectively, and parallel to the central axis, hold the axis of the source and receptor aligned on a radiation axis which passes through an isocenter on the central axis as the parallelogram is angularly adjusted in planes parallel to the central axis. The angular adjustment of the parallelogram combined with turning of the parallelogram on the rotor permit the source to radiate through a subject at the isocenter, for example the human heart or brain, from throughout a solid angle while maintaining constant radiological distance between the source and subject and a constant axial alignment of the source and receptor. The radiological magnification may also be kept constant, or the receptor may be adjusted along its axis, in which case a counterweight reciprocating along the transverse arm and connected to the receptor by two cables counterbalances the receptor in all solid angle positions. (auth)
Concerning 1991 basic plan for atomic energy development and application (subjected to examination)
International Nuclear Information System (INIS)
1990-01-01
The Prime Minister developed a draft 1991 Basic Plan for Atomic Energy Development and Application and sent it to the Nuclear Safety Commission for examination. The Commission started the examination at its 14th meeting. The report outlines the results of the examination. A Basic Plan is developed each year to promote efforts at atomic energy development and application systematically and efficiently. In particular, it identifies specific activities required to realize the basic policies set out in the Long Term Program for Atomic Energy Development and Application. In the present report, activities required for improving safety measures in general are described first, with special emphasis placed on the improvement of nuclear safety regulations and the promotion of nuclear safety research. Activities required for promoting nuclear power generation are then outlined. The report also states that the nuclear fuel cycle should be established by promoting measures for uranium resources, uranium enrichment, spent fuel reprocessing, and radioactive waste disposal. Other required efforts include the development of improved power reactors, implementation of major projects, and development of basic technology. (N.K.)
Lloyd, Katrina; Emerson, Lesley
2017-01-01
In recent years wellbeing has been linked increasingly with children's rights, often characterised as central to their realisation. Indeed it has been suggested that the two concepts are so intertwined that their pairing has become something of a mantra in the literature on childhood. This paper seeks to explore the nature of the relationship between wellbeing and participation rights, using a recently developed 'rights-based' measure of children's participation in school and community, the Children's Participation Rights Questionnaire (CPRQ), and an established measure of subjective wellbeing - KIDSCREEN-10. The data for the study came from the Kids' Life and Times (KLT) which is an annual online survey of Primary 7 children carried out in Northern Ireland. In 2013 approximately 3800 children (51 % girls; 49 % boys) from 212 schools participated in KLT. The findings showed a statistically significant positive correlation between children's overall scores on the KIDSCREEN-10 subjective wellbeing measure and their perceptions that their participation rights are respected in school and community settings. Further, the results indicated that it is the social relations/autonomy questions on KIDSCREEN-10 which are most strongly related to children's perceptions that their participation rights are respected. Exploration of the findings by gender showed that there were no significant differences in overall wellbeing; however girls had higher scores than boys on the social relations/autonomy domain of KIDSCREEN-10. Girls were also more positive than boys about their participation in school and community. In light of the findings from this study, it is suggested that what lies at the heart of the relationship between child wellbeing and children's participation rights is the social/relational aspects of both participation and wellbeing.
Vera, Elizabeth M.; Vacek, Kimberly; Coyle, Laura D.; Stinson, Jennifer; Mull, Megan; Doud, Katherine; Buchheit, Christine; Gorman, Catherine; Hewitt, Amber; Keene, Chesleigh; Blackmon, Sha'kema; Langrehr, Kimberly J.
2011-01-01
This study explored relations between culturally relevant stressors (i.e., urban hassles, perceived discrimination) and subjective well-being (SWB; i.e., positive/ negative affect, life satisfaction) to examine whether ethnic identity and/or coping strategies would serve as moderators of the relations between stress and SWB for 157 urban, ethnic…
Directory of Open Access Journals (Sweden)
Светлана Евгеньевна Боброва
2013-12-01
The author of the article has been writing English entry examinations for PFUR for over a decade. In this article she analyses the structure and contents of the English language entry examination for prospective students of Linguistics at the Faculty of Philology. The requirements for the written entry test are set by the State standards of complete secondary education for foreign languages at the level of a major subject. The PFUR entry examination has always been written in accordance with the recommendations of the Education and Science Ministry and the Federal Institute of Pedagogical Assessment.
Loacker, Bernadette Isabel; Sliwa, Martyna
2016-01-01
This paper examines how discursive codes and demands associated with ‘bureaucratic and entrepreneurial regimes’ of work and career organization shape the work, careers and subjectivities of management graduates. The study is based on an analysis of 30 narratives of management professionals who graduated from an Austrian business school in the early 1970s or 2000s. Its insights suggest that variegated discursive codes manifest in the graduates’ articulated professional practices and subjectivi...
Ye, Yinghua; Lin, Lin
2015-02-01
The unprecedented popularity of online communication has raised interests and concerns among the public as well as in scholarly circles. Online communications have pushed people farther away from one another. This study is a further examination of the effects of online communications on well-being, in particular: Locus of control, Loneliness, Subjective well-being, and Preference for online social interaction. Chinese undergraduate students (N = 260; 84 men, 176 women; M age = 20.1 yr., SD = 1.2) were questioned about demographic information and use of social media as well as four previously validated questionnaires related to well-being. Most participants used QQ, a popular social networking program, as the major channel for online social interactions. Locus of control was positively related to Loneliness and Preference for online social interaction, but negatively related to Subjective well-being; Loneliness (positively) and Subjective well-being (negatively) were related to Preference for online social interaction; and Loneliness and Subjective well-being had a full mediating effect between the relationships of Locus of control and Preference for online social interaction. The findings of the study showed that more lonely, unhappy, and externally controlled students were more likely to be engaged in online social interaction. Improving students' locus of control, loneliness, and happiness may help reduce problematic Internet use.
International Nuclear Information System (INIS)
Fukuoka, Kazuya; Uesaka, Ayuko; Kuribayashi, Kozo
2007-01-01
We evaluated the efficacy of respiratory endoscopy on subjects requiring further detailed examinations as a result of initial asbestos-related disease screening. The subjects consisted of 132 participants who underwent asbestos-related disease screening in our hospital between July 2005 and March 2006. According to their history of screening, the participants were classified into the initial screening group and the second screening group. The former consisted of 76 participants without prior screening, while the latter consisted of 56 participants who were referred to our hospital for the detailed examinations as a result of initial screening undergone elsewhere. The participants were examined concerning their history of asbestos exposure, and then underwent chest X-ray followed by chest computed tomography (CT). Respiratory endoscopic examinations were mainly performed in participants with suspected chest malignancies. There were no significant differences in the distribution of age or gender between the two screening groups. In both screening groups, more than 70% of the participants had a history of occupational exposure to asbestos. Radiological abnormalities were observed in 110 (83%) of all participants. Asbestos-related diseases were detected in a total of 90 (68%) cases. The breakdown of the 90 cases by disease was as follows: 60 cases had pleural plaque, 13 pulmonary fibrosis, 5 lung cancer (LC), 4 benign asbestos pleurisy, 4 round atelectasis, 2 diffuse pleural thickening, and 2 malignant pleural mesothelioma (MPM). The disease detection rate of LC and MPM was 3.8% and 1.5%, respectively. Respiratory endoscopic examinations were performed in a total of 15 cases. The breakdown of the 15 cases by examination was as follows: bronchoscopy was performed in 6 cases, thoracoscopy including video-assisted thoracoscopic surgery (VATS) in 8, and mediastinoscopy in 4. Two cases with early LC were diagnosed by videothoracoscopic lung biopsy. A diagnosis of MPM was
Liu, Nai-Yu; Lee, Hsiao-Hui; Chang, Zee-Fen; Tsay, Yeou-Guang
2015-09-10
It has been observed that a modified peptide and its non-modified counterpart, when analyzed with reverse phase liquid chromatography, usually share a very similar elution property [1-3]. Inasmuch as this property is common to many different types of protein modifications, we propose an informatics-based approach, featuring the generation of segmental average mass spectra ((sa)MS), that is capable of locating different types of modified peptides in two-dimensional liquid chromatography-mass spectrometric (LC-MS) data collected for regular protease digests from proteins in gels or solutions. To enable the localization of these peptides in the LC-MS map, we have implemented a set of computer programs, or the (sa)MS package, that perform the needed functions, including generating a complete set of segmental average mass spectra, compiling the peptide inventory from the Sequest/TurboSequest results, searching modified peptide candidates and annotating a tandem mass spectrum for final verification. Using ROCK2 as an example, our programs were applied to identify multiple types of modified peptides, such as phosphorylated and hexosylated ones, which particularly include those peptides that could have been ignored due to their peculiar fragmentation patterns and consequent low search scores. Hence, we demonstrate that, when complemented with peptide search algorithms, our approach and the entailed computer programs can add the sequence information needed for bolstering the confidence of data interpretation by the present analytical platforms and facilitate the mining of protein modification information out of complicated LC-MS/MS data. Copyright © 2015 Elsevier B.V. All rights reserved.
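The core of the (sa)MS approach, as the abstract describes it, is averaging the mass spectra that fall within each retention-time segment of an LC-MS run. The sketch below is one illustrative reading of that idea under simplifying assumptions (uniform time segments, spectra as m/z-to-intensity maps); it is not the authors' implementation:

```python
def segmental_average_spectra(scans, n_segments):
    """Average the spectra within each retention-time segment.

    `scans` is a list of (retention_time, {mz: intensity}) pairs sorted by
    time; returns one averaged {mz: mean intensity} dict per segment.
    """
    per_segment = [[] for _ in range(n_segments)]
    t0, t1 = scans[0][0], scans[-1][0]
    width = (t1 - t0) / n_segments or 1.0   # guard against a zero-length run
    for rt, spectrum in scans:
        idx = min(int((rt - t0) / width), n_segments - 1)
        per_segment[idx].append(spectrum)
    averaged = []
    for group in per_segment:
        mzs = {mz for s in group for mz in s}   # all m/z seen in the segment
        averaged.append(
            {mz: sum(s.get(mz, 0.0) for s in group) / len(group) for mz in mzs}
            if group else {}
        )
    return averaged

# Four toy scans spanning 0-30 s, averaged into two segments.
scans = [
    (0.0, {500.3: 10.0}),
    (10.0, {500.3: 30.0}),
    (20.0, {762.4: 8.0}),
    (30.0, {762.4: 12.0}),
]
segments = segmental_average_spectra(scans, 2)
```

Averaging over a segment boosts the signal of peptides that elute there, which is what lets a modified peptide be searched for near the elution position of its unmodified counterpart.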
Directory of Open Access Journals (Sweden)
Yu-Hui Huang
This study compares heart rate variability (HRV) and systolic blood pressure (SBP) changes of spinal cord injury (SCI) patients during urodynamic study (UDS) with able-bodied controls. Twenty-four complete suprasacral SCI patients (12 tetraplegic and 12 paraplegic) and 12 age-matched able-bodied volunteers received BP and HRV evaluation throughout urodynamic examination. We chose seven time points during the examinations: resting, Foley catheter insertion, start of infusion, and infused volume reaching 1/4, 2/4, 3/4 and 4/4 of maximal capacity. At each time point, an electrocardiogram of 5 min duration was used for power spectral density analysis of HRV. Only control subjects displayed significant elevation of SBP during Foley catheter insertion compared to resting values. Both control and tetraplegic groups experienced significant elevation of SBP at maximal bladder capacity compared to resting values; tetraplegic values were also significantly greater than those of the other two groups. Control subjects displayed significant elevation of low frequency/high frequency (LF/HF) ratios during Foley catheter insertion and when approaching maximum bladder capacity. These findings were not seen in the paraplegic and tetraplegic groups. However, subgroup analysis of tetraplegic subjects with SBP elevation >50 mmHg demonstrated a similar LF/HF response to the able-bodied controls. Tetraplegic patients experienced BP elevation but did not experience significant changes in HRV during bladder distension. This finding may imply that different neurological pathways contribute to the autonomic dysreflexia (AD) reaction and to HRV changes during bladder distension. However, profound AD during UDS in tetraplegic patients was associated with corresponding changes in HRV. Whether HRV monitoring would be beneficial in SCI patients presenting with significant AD requires further study.
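The LF/HF ratio reported above is obtained from the power spectral density of the heart-rate signal by summing power over two conventional HRV frequency bands. A minimal sketch, assuming a PSD has already been estimated by some spectral method (the toy PSD values are invented):

```python
LF_BAND = (0.04, 0.15)   # low-frequency band, Hz (conventional HRV definition)
HF_BAND = (0.15, 0.40)   # high-frequency band, Hz

def band_power(psd, band):
    """Sum spectral power over a frequency band; `psd` is a list of
    (frequency_hz, power) pairs from any spectral estimator."""
    lo, hi = band
    return sum(p for f, p in psd if lo <= f < hi)

def lf_hf_ratio(psd):
    """LF power divided by HF power, the sympathovagal-balance index
    used in HRV studies."""
    return band_power(psd, LF_BAND) / band_power(psd, HF_BAND)

# Toy PSD with a dominant low-frequency component.
psd = [(0.05, 4.0), (0.10, 6.0), (0.20, 2.0), (0.30, 3.0)]
ratio = lf_hf_ratio(psd)  # (4 + 6) / (2 + 3) = 2.0
```

An elevated LF/HF ratio is commonly read as a shift toward sympathetic dominance, which is why the controls' rise in LF/HF at catheter insertion and near maximal capacity is interpretable as an autonomic stress response.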
Gillett, Jade E; Crisp, Dimity A
2017-09-01
The sandwich generation represents adults, often in midlife, who care for both children and ageing parents/relatives. While the stress they experience has received some attention, little research has investigated the subjective well-being (SWB) of this population. This study examined the relationship between perceived stress and SWB and the moderating effect of coping style. Ninety-three participants (80 women), aged 23-63 years, completed an online survey measuring perceived stress, coping strategies, life satisfaction and positive and negative affect. Stress was negatively associated with SWB. While emotion- and problem-focused coping were directly associated with SWB outcomes, the only moderating effect found was for avoidance-focused coping (AFC). Specifically, AFC was associated with higher positive affect for those reporting lower stress. This study highlights the need to recognise the distinct circumstances that exist for the sandwich generation. Limitations and suggestions for future research are discussed. © 2017 AJA Inc.
Wu, Mengjun; Brazier, John; Relton, Clare; Cooper, Cindy; Smith, Christine; Blackburn, Joanna
2014-04-29
Generic preference-based measures such as the EQ-5D and SF-6D have been criticised for being narrowly focused on a subset of dimensions of health. Our study aims to explore whether long-standing health conditions have an incremental impact on subjective well-being alongside the EQ-5D. Using data from the South Yorkshire Cohort study (N = 13,591), collected between 2010 and 2012, on the EQ-5D, long-standing health conditions (self-reported), and a subjective well-being measure (life satisfaction, rated from 0, completely dissatisfied, to 10, completely satisfied), we employed generalised logit regression models. We assessed the impact of the EQ-5D and long-standing health conditions together on life satisfaction by examining the size and significance of their estimated odds ratios. The EQ-5D had a significant association with life satisfaction, in which anxiety/depression and then self-care had the largest weights. Some long-standing health conditions were significant in some models, but most did not have an independent impact on life satisfaction. Overall, none of the health conditions had a consistent impact on life satisfaction alongside the EQ-5D. Our study suggests that the impact of long-standing health conditions on life satisfaction is adequately captured by the EQ-5D, although the findings are limited by reliance on self-reported conditions and a single-item life satisfaction measure.
Alea, Nicole; Bluck, Susan
2013-01-01
Two studies in different cultures (Study 1: USA, N=174; Study 2: Trinidad, N=167) examined whether meaning making (i.e., both searching for meaning and directing behaviour) is positively related to subjective well-being (SWB) by age group (younger, older adults). In both studies, participants self-reported engagement in meaning making and SWB (e.g., affect, future time perspective, psychological well-being). In Study 1, young Americans (compared to older) more frequently used their past to direct behaviour, but doing so was unrelated to SWB. In older Americans, both types of meaning making were positively associated with SWB. In Study 2, Trinidadian younger adults were again more likely than older adults to engage in meaning making. Unlike in the American sample, however, directing behaviour was positively related to SWB for both young and older adults. The studies demonstrate that whether meaning making shows benefits for SWB may depend on type of meaning, age and culture. Note that although meaning making was sometimes unrelated to SWB, no detrimental relations to meaning making were found. The discussion focuses on the role of moderators in understanding when meaning making should lead to benefits versus costs to SWB.
Kaakinen, Markus; Keipi, Teo; Räsänen, Pekka; Oksanen, Atte
2018-02-01
The wealth of beneficial tools for online interaction, consumption, and access to others also brings new risks of harmful experiences online. This study examines the association between cybercrime victimization and subjective well-being (SWB) and, based on the buffering effect hypothesis, tests the assumption of the protective function of social belonging in cybercrime victimization. Cross-national data from the United States, United Kingdom, Germany, and Finland (N = 3,557; Internet users aged 15-30 years; 49.85 percent female) were analyzed using descriptive statistics and main and moderation effect models. Results show that cybercrime victimization has a negative association with SWB after adjusting for a number of confounding factors. This association concerns both general cybercrime victimization and subcategories such as victimization by offensive cybercrime and cyberfraud. In line with the buffering effect hypothesis, social belonging to offline groups was shown to moderate the negative association between SWB and cybercrime victimization. The same effect was not found for social belonging to online groups. Overall, the study indicates that, analogously to crime victimization in the offline context, cybercrime is a harmful experience whose negative effects mainly concern those users who have weak offline social ties to aid in coping with such stressors.
Niewiadomski, Piotr; Zielińska-Bliźniewskaw, Hanna; Miloński, Jarosław; Pietkiewicz, Piotr; Olszewski, Jurek
2015-01-01
The aim of this work was to evaluate the diagnostic value of the neck torsion test in VNG, Doppler ultrasonography and brainstem auditory evoked potentials in patients with vertigo and/or hearing loss due to intracranial vascular malformations. The study covered 47 patients, 30 female and 17 male (mean age, 55.5 years; range, 19-74 years), with vertigo and/or hearing disorders and asymmetry of the vertebral arteries. Each patient underwent a subjective examination, an otolaryngological examination, otoneurological diagnostics, VNG with gaze tracking in the straight-ahead position and in 60° left and right neck torsion, the neck torsion test, audiological diagnostics including I-, III- and V-wave latency of the brainstem evoked potentials in the straight-ahead position as well as right-ear stimulation in 60° right neck torsion and left-ear stimulation in neck torsion to the left, and Doppler ultrasonography measuring the diameter of the vertebral arteries and the velocity of blood flow in these vessels with the use of the neck torsion test. In our study, a positive neck torsion test in VNG was observed in 76.5% of the study patients, while square waves were found in both directions in 46.5% and in one direction in 10.6%. Cervical nystagmus was noticed in 19.1% of these patients. In the auditory evoked potentials test, the differences in I-, III- and V-wave latency time were not statistically significant, either at rest or in neck torsion. In the Doppler ultrasound examination, small asymmetries of the vertebral arteries (below 25%) were present in 7 women (14.9%) and 4 men (8.5%), whereas large asymmetries (above 25%) were observed in 23 women (48.9%) and 13 men (27.7%) (range, 25%-215%); the difference was statistically insignificant. The resting blood flow velocity in vertebral arteries with large asymmetries, in both the systolic and diastolic heart phases, was significantly higher in the artery with larger asymmetry. The neck torsion test can be diagnostically useful
Kroll, Christian
2011-01-01
This paper addresses a number of key challenges in current subjective well-being (SWB) research: A new wave of studies should take into account that different things may make different people happy, thus going beyond a unitary "happiness formula". Furthermore, empirical results need to be connected to broader theoretical narratives.…
Osborne, Shona Elizabeth
2009-01-01
This multivariate study aimed to further understand student stress. Associations between personality, emotional intelligence, coping and subjective well-being with perceived stress (trait and state) were examined in 238 undergraduate students, using self-report measures. Gender differences in these variables were also investigated. The results showed that students low in emotional stability, extraversion, emotional intelligence, subjective well-being and those with a tendency to use emotion...
Energy Technology Data Exchange (ETDEWEB)
Trubey, D.K.; Roussin, R.W.; Gustin, A.B.
1983-08-01
An indexed bibliography of open literature selected by the Radiation Shielding Information Center since the previous volume was published in 1980 is presented in the area of radiation transport and shielding against radiation from nuclear reactors (fission and fusion), x-ray machines, radioisotopes, nuclear weapons (including fallout), and low-energy accelerators (e.g., neutron generators). The bibliography was typeset from computer files constituting the RSIC Storage and Retrieval Information System. In addition to lists of literature titles by subject categories (accessions 6201-10156), an author index is given. Most of the literature selected for Volume VII was published in the years 1977 to 1981.
International Nuclear Information System (INIS)
Bolving, L.; Noer, I.; Soegaard, P.; Christensen, T.; Funch-Jensen, P.; Thommesen, P.; Kommunehospital, Aarhus
1990-01-01
In ten healthy subjects final gastric emptying of solid food was measured by a new scintigraphic method, employing 99mTc-labelled pellets, and compared to a radiologic method employing food with incorporated barium suspension. Final gastric emptying of solid food was 5.2 hours measured by the scintigraphic technique and 5.5 hours by the radiographic technique, with no significant difference. It is concluded that significant information concerning gastric emptying of solid food can be obtained by the radiological method. (orig.)
Directory of Open Access Journals (Sweden)
Shahanuma Shaik
2017-12-01
BACKGROUND Appendicitis is one of the commonest surgical emergencies, with a lifetime risk of 7-8%. Appendicectomy specimens from clinically suspected appendicitis often appear normal on gross examination, but histopathological evaluation may reveal a diverse underlying pathology. Therefore, for accurate diagnosis, histopathological examination of all appendicectomy specimens is mandatory. MATERIALS AND METHODS A retrospective study of 175 appendicectomy cases operated over a period of two years. The clinical data and histopathological reports were reviewed and the various histopathological findings were categorised. RESULTS Out of the total 175 appendicectomies, 155 emergency appendicectomy cases were included in the study, while 20 cases of incidental appendicectomy were excluded. The peak incidence was found in the 2nd and 3rd decades, with male predominance. Among the 155 specimens, 96.8% had histological features of appendicitis and 1.9% were normal appendices. The unusual histopathological findings were carcinoid tumour and Enterobius vermicularis. CONCLUSION The definitive diagnosis of appendicitis, as well as the unusual incidental findings that were missed intraoperatively, is established by histopathological examination. The study supports the histological examination of all resected appendicectomy specimens.
Wells, Kevin Eugene; Morgan, Grant; Worrell, Frank C.; Sumnall, Harry; McKay, Michael Thomas
2018-01-01
The goal of the present study is to examine the stability of time attitudes profiles across a one-year period as well as the association between time attitudes profiles and several variables. These variables include attitudes towards alcohol, context of alcohol use, consumption of a full drink, and subjective life expectancy. We assessed the…
Koyio, L.N.; Kikwilu, E.N.; Mulder, J.; Frencken, J.E.F.M.
2013-01-01
Objectives: To assess attitudes, subjective norms, and intentions of primary health-care (PHC) providers in performing routine oral examination for oropharyngeal candidiasis (OPC) during outpatient consultations. Methods: A 47-item Theory of Planned Behaviour-based questionnaire was developed and
Drewery, Dave; Pretti, T. Judene; Barclay, Sage
2016-01-01
The purpose of this study was to examine the relationships between co-op students' perceived relevance of their work term, work-related subjective well-being (SWB), and individual performance at work. Data were collected using a survey of co-op students (n = 1,989) upon completion of a work term. Results of regression analyses testing a…
Parker, Amy T.; Grimmett, Eric S.; Summers, Sharon
2008-01-01
This review examines practices for building effective communication strategies for children with visual impairments, including those with additional disabilities, that have been tested by single-subject design methodology. The authors found 30 studies that met the search criteria and grouped intervention strategies to align any evidence of the…
International Nuclear Information System (INIS)
Garlick, A.
1985-01-01
A series of tests has been conducted in the National Research Universal (NRU) reactor, Chalk River, Canada, to investigate the behaviour of full-length 32-rod PWR fuel bundles during a simulated large-break loss of coolant accident (LOCA). In one of these tests (MT-3), 12 central rods were pre-pressurized in order to evaluate the ballooning and rupture of cladding in the Zircaloy high-α/α+β temperature region. All 12 rods ruptured after experiencing < 90% diametral strain but there was no suggestion of coplanar blockage. Post-irradiation examination was carried out on cross-sections of cladding from selected rods to determine the aximuthal distribution of wall thinning along the ballooned regions. These data are assessed to check whether they are consistent with a mechanism in which fuel stack eccentricity generates temperature gradients around the ballooning cladding and leads to premature rupture during a LOCA. After anodizing, the cladding microstructures were examined for the presence of prior-beta phase that would indicate the α/α+β transformation temperature (1078K) had been exceeded. These results were compared with isothermal annealing test data on unirradiated cladding from the same manufacturing batch
Ye, Jiawen; Yeung, Dannii Y; Liu, Elaine S C; Rochelle, Tina L
2018-04-03
Past research has often focused on the effects of emotional intelligence and received social support on subjective well-being yet paid limited attention to the effects of provided social support. This study adopted a longitudinal design to examine the sequential mediating effects of provided and received social support on the relationship between trait emotional intelligence and subjective happiness. A total of 214 Hong Kong Chinese undergraduates were asked to complete two assessments with a 6-month interval in between. The results of the sequential mediation analysis indicated that the trait emotional intelligence measured in Time 1 indirectly influenced the level of subjective happiness in Time 2 through a sequential pathway of social support provided for others in Time 1 and social support received from others in Time 2. These findings highlight the importance of trait emotional intelligence and the reciprocal exchanges of social support in the subjective well-being of university students. © 2018 International Union of Psychological Science.
Directory of Open Access Journals (Sweden)
Arun Sedhain
2015-01-01
Objective To assess the nutritional status of patients on maintenance hemodialysis using the modified quantitative subjective global assessment (MQSGA) and anthropometric measurements. Method We conducted a cross-sectional descriptive analytical study of fifty-four patients with chronic kidney disease undergoing maintenance hemodialysis, using MQSGA and different anthropometric and laboratory measurements such as body mass index (BMI), mid-arm circumference (MAC), mid-arm muscle circumference (MAMC), triceps skin fold (TSF), biceps skin fold (BSF), serum albumin, C-reactive protein (CRP) and lipid profile, in a government tertiary hospital in Kathmandu, Nepal. Results Based on MQSGA criteria, 66.7% of the patients suffered from mild to moderate malnutrition and 33.3% were well nourished. None of the patients were severely malnourished. CRP was positive in 56.3% of patients. Serum albumin, MAC and BMI were (mean ± SD) 4.0 ± 0.3 mg/dl, 22 ± 2.6 cm and 19.6 ± 3.2 kg/m² respectively. MQSGA showed negative correlations with MAC (r = −0.563; P < 0.001), BMI (r = −0.448; P < 0.001), MAMC (r = −0.506; P < 0.0001), TSF (r = −0.483; P < 0.0002), and BSF (r = −0.508; P < 0.0001). Negative correlations of MQSGA were also found with total cholesterol, triglyceride, LDL cholesterol and HDL cholesterol, without statistical significance. Conclusion Mild to moderate malnutrition was present in two thirds of the patients undergoing hemodialysis. Anthropometric measurements such as BMI, MAC, MAMC, BSF and TSF were negatively correlated with MQSGA. Anthropometric and laboratory assessment tools could be used for nutritional assessment as they are relatively easy, cheap and practical markers of nutritional status.
Kang, Min-Hyeok; Kwon, Oh-Yun; Kim, Yong-Wook; Kim, Ji-Won; Kim, Tae-Ho; Oh, Tae-Young; Weon, Jong-Hyuk; Lee, Tae-Sik; Oh, Jae-Seop
2016-01-01
To determine the agreement among the items of the Korean physical therapist licensing examination, the learning objectives of class subjects, and physical therapists' job descriptions. The main tasks of physical therapists were classified, and university courses related to the main tasks were also classified. Frequency analysis was used to determine the proportions of credits for the classified courses out of the total credits of major subjects, exam items related to the classified courses out of the total number of exam items, and universities that offer courses related to the Korean physical therapist licensing examination among the surveyed universities. The proportions of credits for clinical decision making and physical therapy diagnosis-related courses out of the total number of credits for major subjects at universities were relatively low (2.06% and 2.58%, respectively). Although the main tasks of physical therapists are related to diagnosis and evaluation, the proportion of physiotherapy intervention-related items (35%) was higher than that of examination and evaluation-related items (25%) on the Korean physical therapist licensing examination. The percentages of universities that offer physical therapy diagnosis and clinical decision making-related courses were 58.62% and 68.97%, respectively. Both the proportion of physiotherapy diagnosis and evaluation-related items on the Korean physical therapist licensing examination, and the number of subjects related to clinical decision making and physical therapy diagnosis in the physical therapy curriculum, should be increased to ensure that the examination items and physical therapy curriculum reflect the practical tasks of physical therapists.
Reimer, R A; Pelletier, X; Carabin, I G; Lyon, M R; Gahler, R J; Wood, S
2012-08-01
Short-chain fatty acids (SCFA) are produced by the bacterial fermentation of dietary fibre and have been linked with intestinal health. The present study examined faecal SCFA concentrations in subjects consuming a novel soluble highly viscous polysaccharide (HVP) or control for 3 weeks. A total of 54 healthy adults participated in a randomised, double-blind, placebo-controlled study. Subjects were randomised to consume HVP or control (skim milk powder). A dose of 5 g day⁻¹ was consumed in the first week, followed by 10 g day⁻¹ in the second and third weeks (n = 27 per group). The primary outcome was SCFA concentrations in faecal samples collected at baseline (visit 1, V1), at 1 week (V2) and at 3 weeks (V3). The reduction in faecal acetate from V1 to V3 in control subjects was not observed in subjects consuming HVP. There were no differences in propionate, butyrate, valerate or caproate concentrations. There was a significant treatment effect (P = 0.03) for total SCFA, with higher concentrations observed in subjects consuming HVP versus control. HVP is a viscous functional fibre that may influence gut microbial fermentation. Further work is warranted to examine the fermentative properties of HVP and possible links with appetite regulation and reduced serum low-density lipoprotein cholesterol concentrations. © 2012 The Authors. Journal of Human Nutrition and Dietetics © 2012 The British Dietetic Association Ltd.
Hsiao, Fei-Hsiu; Lin, Shu-Mei; Liao, Hsiao-Yuan; Lai, Mei-Chih
2004-10-01
This study examined Chinese inpatients' views on what aspects of a nurses' focused, structured therapy group worked to help their psychological and interpersonal problems, and what traditional Chinese cultural values influenced their viewpoints. Nine Chinese inpatients with mental illness participated in the four-session nurses' focused, structured therapy group. After they completed the last session of therapy, they were invited to participate in a structured interview and a semi-structured interview regarding their perceptions of the change mechanisms in nurses' focused, structured group therapy. The semi-structured interviews were recorded and transcribed to be further analysed according to the principles of content analysis. The results indicate that (i) all patients believed that a nurses' focused, structured group psychotherapy enhanced their interpersonal learning and improved the quality of their lives, and (ii) traditional Chinese cultural values, those emphasizing the importance of maintaining harmonious interpersonal relationships, influenced the Chinese inpatients' expression of negative emotions in the group and their motivation for interpersonal learning. In conclusion, we found that transcultural modification was needed for applying Western group psychotherapy in Chinese culture. The modification included establishing a 'pseudo-kin' or 'own people' relationship among group members and the therapists, organizing warm-up exercises and structured activities, applying projective methods and focusing on the issues of interpersonal relationships and interpersonal problems. The small sample size of the present study raises questions regarding how representative the views of the sample are with respect to the majority of Chinese inpatients. Nevertheless, this preliminary study revealed a cultural aspect in nursing training that requires significant consideration in order to work effectively with Chinese patients. Copyright 2004 Blackwell Publishing Ltd
Haibin Qiu; Shanghong Shi; Tingdi Zhao; Yiwei Qiao; Jiangshi Zhang
2013-01-01
The aim of this paper is to recommend that the subjects and contents of the certified safety engineer examination draw on the safety engineering undergraduate curriculum system for reference. Human resources play an important role in accident prevention and loss control. Education in safety engineering is developing quickly in China. Moreover, the State Administration of Work Safety and the National Human Resources and Social Security Ministry have implemented a certified safety engineer qualification and examination sy...
International Nuclear Information System (INIS)
Chrien, R.E.
1986-10-01
The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs
International Nuclear Information System (INIS)
Yamada, Masayuki; Koga, Sukehiko; Sugie, Masami; Kinoshita, Kazuo; Anno, Hirofumi; Katada, Kazuhiro.
1996-01-01
Recently, as the fast spin echo technique has become prevalent, there has been increasing interest in the exposure of subjects to radiofrequency (RF) radiation during magnetic resonance imaging (MRI) examinations. On the other hand, there have been no reports about the safety of the MRI examination in Japan. For this reason, the authors aimed to evaluate the extent of the exposure of subjects to RF radiation during MRI examinations, and measured the specific absorption rate (SAR) of spherical phantoms, assumed to represent adult heads, using the procedures set forth in two safety guidelines: the 1988 guideline of the Food and Drug Administration (FDA) and the 1995 standards of the International Electrotechnical Commission (IEC). As a result of the measurement, the highest value of the SAR was found to be 1.361 W/kg, which stayed far below the upper limits set forth by the respective safety guidelines. However, the measured values of the SAR varied depending on the respective measuring procedures. As both measuring procedures are theoretically equivalent, the authors consider this variance to be very important. (author)
Miller, Michelle; Thomas, Jolene; Suen, Jenni; Ong, De Sheng; Sharma, Yogesh
2018-05-01
Undernourished patients discharged from the hospital require follow-up; however, attendance at return visits is low. Teleconsultations may allow remote follow-up of undernourished patients; however, no valid method exists to remotely perform physical examination, a critical component of assessing nutritional status. This study aims to compare agreement between photographs taken by trained dietitians and in-person physical examinations conducted by trained dietitians to rate the overall physical examination section of the scored Patient-Generated Subjective Global Assessment (PG-SGA). Nested cross-sectional study. Adults aged ≥60 years, admitted to the general medicine unit at Flinders Medical Centre between March 2015 and March 2016, were eligible. All components of the PG-SGA and photographs of muscle and fat sites were collected from 192 participants either in the hospital or at their place of residence after discharge. Validity of photograph-based physical examination was determined by collecting photographic and PG-SGA data from each participant at one encounter by trained dietitians. A dietitian blinded to data collection later assessed de-identified photographs on a computer. Percentage agreement, weighted kappa agreement, sensitivity, and specificity between the photographs and in-person physical examinations were calculated. All data collected were included in the analysis. Overall, the photograph-based physical examination rating achieved a percentage agreement of 75.8% against the in-person assessment, with a weighted kappa agreement of 0.526 (95% CI: 0.416, 0.637). Photograph-based physical examination by trained dietitians thus achieved a nearly acceptable percentage agreement, moderate weighted kappa, and fair sensitivity-specificity pair. Methodological refinement before field testing with other personnel may improve the agreement and accuracy of photograph-based physical examination. Copyright © 2018 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
When good = better than average
Directory of Open Access Journals (Sweden)
Don A. Moore
2007-10-01
People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage, during which people attempt to disambiguate subjective response scales in order to choose an answer: conflation has little effect on objective measures, which would be equally affected if it occurred at encoding.
Kim, Kyoung-Yun; Park, Jeong Seop
2018-06-01
The effects of fish consumption by subjects with prediabetes on the metabolic risk factors were examined based on the data from the 6th Korea National Health and Nutrition Examination Survey in 2015. A total of 1,520 subjects who agreed to participate in a blood test and dietary intake survey were divided into a prediabetes group and a normal blood glucose group, and the level of the subjects' fish consumption was divided into ≤17.0 g/day, 18.0-93.0 g/day, and ≥94 g/day. The correlation between the level of fish intake and the metabolic risk factors was evaluated by multinomial logistic regression analysis. A significant difference in the gender distribution was observed in the prediabetes group, which is a group with a high risk of non-communicable diseases, according to the fish intake, and there were significant differences in the total energy intake, protein intake, n-3 fatty acids intake, and the intakes of sodium and micro-nutrients according to the intake group (P < 0.05). In addition, the blood total cholesterol (TC) decreased 0.422-fold in model 1 (unadjusted) [95% confidence interval (CI): 0.211-0.845] and 0.422-fold in model 2 (adjusted for sex) (95% CI: 0.210-0.846) in those with a fish intake of 18.0-93.0 g/day (P < 0.05) compared to those with a fish intake of ≤17.0 g/day. The blood TC decreased 0.555-fold (95% CI: 0.311-0.989) in model 1 and 0.549-fold (95% CI: 0.302-0.997) in model 2 in those with a fish intake of ≥94 g/day compared to those with a fish intake of ≤17.0 g/day (P < 0.05). Subjects with prediabetes or the metabolic risk factors can maintain their blood low-density lipoprotein cholesterol (LDL-C) and blood TC concentrations at the optimal level by consuming fish (18.0-93.0 g/day).
DEFF Research Database (Denmark)
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a curved manifold. It is shown that the barycentric methods amount to approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
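As a rough illustration of the barycentric approach described above, the following sketch (not the authors' code; the sign-alignment step and the example angles are our own assumptions) averages unit quaternions by their component-wise barycenter and renormalizes. For tightly clustered rotations this closely approximates the Riemannian mean:

```python
import math

def normalize(q):
    """Scale q to unit length (unit quaternions live on the 3-sphere)."""
    n = math.sqrt(sum(x * x for x in q))
    return tuple(x / n for x in q)

def barycenter_mean(quats):
    """Naive rotation average: component-wise barycenter, renormalized.

    This ignores the curvature of the rotation manifold, but for tightly
    clustered rotations it closely approximates the Riemannian mean.
    """
    ref = quats[0]
    # q and -q represent the same rotation, so align signs with a reference.
    aligned = [q if sum(a * b for a, b in zip(q, ref)) >= 0
               else tuple(-x for x in q)
               for q in quats]
    return normalize(tuple(sum(c) for c in zip(*aligned)))

# Three small rotations about the z-axis (angles 0.0, 0.1, 0.2 rad),
# written as quaternions (cos(a/2), 0, 0, sin(a/2)).
quats = [(math.cos(a / 2), 0.0, 0.0, math.sin(a / 2)) for a in (0.0, 0.1, 0.2)]
mean_q = barycenter_mean(quats)
mean_angle = 2 * math.atan2(mean_q[3], mean_q[0])
print(round(mean_angle, 6))  # 0.1, the geodesic mean angle
```

For this symmetric set of angles the barycentric estimate coincides with the geodesic mean; for widely spread rotations the two diverge, which is the correction the Riemannian approach supplies.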
Ramlall, S; Chipps, J; Bhigjee, A I; Pillay, B J
2013-01-01
The effectiveness of dementia screening depends on the availability of suitable screening tools with good sensitivity and specificity to confidently distinguish normal age-related cognitive decline from dementia. The aim of this study was to evaluate the discriminant validity of 7 screening measures for dementia. A sample of 140 participants aged ≥60 years living in a residential facility for the aged were assessed clinically and assigned caseness for dementia using the Diagnostic and Statistical Manual of Mental Disorders, 4th edition, text revised diagnostic criteria. Sensitivity and specificity of a selection of the following screening measures were tested using receiver operating characteristic (ROC) analysis for individual and combined tests: the Mini-Mental State Examination (MMSE), Six-Item Screener (SIS), Subjective Memory Complaint, Subjective Memory Complaint Clinical (SMCC), Subjective Memory Rating Scale (SMRS), Deterioration Cognitive Observee (DECO) and the Clock Drawing Test (CDT). Using ROC analyses, the SMCC, MMSE and CDT were found to be 'moderately accurate' in screening for dementia with an area under the curve (AUC) >0.70. The AUCs for the SIS (0.526), SMRS (0.661) and DECO (0.687) classified these measures as being 'less accurate'. At recommended cutoff scores, the SMCC had a sensitivity of 90.9% and specificity of 45.7%; the MMSE had a sensitivity of 63.6% and a specificity of 76.0%, and the CDT had a sensitivity of 44.4% and a specificity of 88.9%. Combining the SMCC and MMSE did not improve their predictive power except for a modest increase when using the sequential rule. The SMCC is composed of valid screening questions that have high sensitivity, are simple to administer and ideal for administration at the community or primary health care level as a first level of 'rule-out' screening. The MMSE can be included at a second stage of screening at the general hospital level and the CDT in specialist clinical settings. Sequential use of the
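The sensitivity-specificity pairs reported at each cutoff follow directly from the 2 × 2 confusion table. A minimal sketch (hypothetical scores and labels, not the study's data; higher scores are taken here to indicate dementia):

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of a screening score at a given cutoff,
    treating scores >= cutoff as screen-positive (label 1 = has dementia).
    For instruments where LOW scores indicate impairment (e.g., the MMSE),
    the comparison would simply be flipped."""
    tp = sum(y == 1 and s >= cutoff for s, y in zip(scores, labels))
    fn = sum(y == 1 and s < cutoff for s, y in zip(scores, labels))
    tn = sum(y == 0 and s < cutoff for s, y in zip(scores, labels))
    fp = sum(y == 0 and s >= cutoff for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening scores (invented for illustration).
scores = [8, 7, 5, 9, 3, 6, 2, 4]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(sens_spec(scores, labels, cutoff=6))  # (0.75, 0.75)
```

Sweeping the cutoff over all observed scores and plotting sensitivity against 1 − specificity yields the ROC curve whose area under the curve the study reports.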
International Nuclear Information System (INIS)
Ichiguchi, Katsuji
1998-01-01
A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
Watson, Jane; Chick, Helen
2012-01-01
This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…
Averaging operations on matrices
Indian Academy of Sciences (India)
2014-07-03
Jul 3, 2014 ... Role of positive definite (pd) matrices: in diffusion tensor imaging, 3 × 3 pd matrices model water flow at each voxel of a brain scan; in elasticity, 6 × 6 pd matrices model stress tensors; in machine learning, n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...
Directory of Open Access Journals (Sweden)
Patricia Bouyer
2015-09-01
Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.
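As a toy illustration of the objective studied above (not code from the paper), the long-run average of the accumulated energy along a fixed periodic play can be computed directly. The play below is invented, with a cycle that sums to zero so the accumulated energy stays bounded and the average converges:

```python
# Minimal sketch of the average-energy value of a periodic play: the mean of
# the running accumulated energy over the play. Deltas are invented.

from itertools import cycle, islice

def average_energy(deltas, steps):
    """Mean of the running accumulated energy over `steps` steps."""
    level, total = 0, 0
    for d in islice(cycle(deltas), steps):
        level += d          # accumulated energy after this action
        total += level
    return total / steps

# Cycle +2, -1, -1: the accumulated energy repeats 2, 1, 0, so the
# long-run average is 1.0.
avg = average_energy([2, -1, -1], steps=3000)
```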
DEFF Research Database (Denmark)
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong ... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
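The quaternion-barycenter estimate the paper discusses can be sketched as follows. This is a generic pure-Python illustration with an invented example, not the paper's implementation; averaging componentwise and renormalizing is exactly the approximation that ignores the manifold structure of rotations:

```python
# Sketch of quaternion barycenter averaging (an approximation to the true
# Riemannian mean of rotations). Example rotations are invented.

import math

def qnorm(q):
    n = math.sqrt(sum(x * x for x in q))
    return tuple(x / n for x in q)

def barycenter_mean(quats):
    # Align signs first: q and -q represent the same rotation.
    ref = quats[0]
    aligned = [q if sum(a * b for a, b in zip(ref, q)) >= 0
               else tuple(-x for x in q) for q in quats]
    mean = tuple(sum(q[i] for q in aligned) / len(aligned) for i in range(4))
    return qnorm(mean)  # project the barycenter back onto the unit sphere

def rot_z(theta):
    """Unit quaternion (w, x, y, z) for a rotation of theta about the z-axis."""
    return (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))

# Averaging rotations of +10 and -10 degrees about z gives ~identity.
m = barycenter_mean([rot_z(math.radians(10)), rot_z(math.radians(-10))])
```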
Baiden, Philip; Tarshis, Sarah; Antwi-Boasiako, Kofi; den Dunnen, Wendy
2016-08-01
The purpose of this study was to examine the independent protective effect of subjective well-being on severe psychological distress among adult Canadians with a history of child maltreatment. Data for this study were obtained from the 2012 Canadian Community Health Survey-Mental Health (CCHS-MH). A sample of 8126 respondents aged 20-69 years who experienced at least one child maltreatment event was analyzed using binary logistic regression with severe psychological distress as the outcome variable. Of the 8126 respondents with a history of child maltreatment, 3.9% experienced severe psychological distress within the past month. Results from the multivariate logistic regression revealed that emotional and psychological well-being each had a significant effect on severe psychological distress. For each unit increase in emotional well-being, the odds of a respondent having severe psychological distress were predicted to decrease by 28%, and for each unit increase in psychological well-being, the odds were predicted to decrease by 10%, net of the effects of demographic, socioeconomic, and health factors. Other factors associated with psychological distress included younger age, poor self-perceived physical health, and chronic condition. Having post-secondary education, having a higher income, and being non-White predicted lower odds of severe psychological distress. Although child maltreatment is associated with stressful life events later in adulthood, subjective well-being could serve as a protective factor against severe psychological distress among adults who experienced maltreatment when they were children.
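The "odds decrease per unit increase" interpretation used above comes directly from the logistic-regression link function. A minimal sketch, with an invented coefficient rather than the study's estimates:

```python
# In a logistic regression, a coefficient b for a predictor means each unit
# increase multiplies the odds of the outcome by exp(b). An odds decrease of
# 28% therefore corresponds to an odds ratio of about 0.72. The coefficient
# below is invented for illustration.

import math

def odds_ratio(coef):
    return math.exp(coef)

def pct_change_in_odds(coef):
    """Percentage change in the odds per unit increase in the predictor."""
    return 100 * (odds_ratio(coef) - 1)

change = pct_change_in_odds(math.log(0.72))  # about -28% per unit increase
```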
Delgado, Ana; Saletti-Cuesta, Lorena; López-Fernández, Luis Andrés; Toro-Cárdenas, Silvia; Luna del Castillo, Juan de Dios
2016-03-01
Two components of professional success have been defined: objective career success (OCS) and subjective career success (SCS). Despite the increasing number of women practicing medicine, gender inequalities persist. The objectives of this descriptive, cross-sectional, and multicenter study were (a) to construct and validate OCS and SCS scales, (b) to determine the relationships between OCS and SCS and between each scale and professional/family characteristics, and (c) to compare these associations between male and female family physicians (FPs). The study sample comprised 250 female and 250 male FPs from urban health centers in Andalusia (Spain). Data were gathered over 6 months on gender, age, care load, professional/family variables, and family-work balance, using a self-administered questionnaire. OCS and SCS scales were examined by using exploratory factorial analysis and Cronbach's α, and scores were compared by gender-stratified bivariate and multiple regression analyses. Intraclass correlation coefficients were calculated using a multilevel analysis. The response rate was 73.6%. We identified three OCS factors and two SCS factors. Lower scores were obtained by female versus male FPs in the OCS dimensions, but there were no gender differences in either SCS dimension.
Should the average tax rate be marginalized?
Czech Academy of Sciences Publication Activity Database
Feldman, N. E.; Katuščák, Peter
-, č. 304 (2006), s. 1-65 ISSN 1211-3298 Institutional research plan: CEZ:MSM0021620846 Keywords : tax * labor supply * average tax Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp304.pdf
Eliazar, Iddo
2018-02-01
The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over." In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.
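The phase-shift the abstract describes, from a law with a well-behaved mean to one whose mean diverges, can be illustrated with a toy Pareto model (my example, not one of the paper's models): for P(X > x) = x^(-alpha) with x >= 1, the mean is alpha/(alpha - 1) for alpha > 1 and infinite for alpha <= 1.

```python
# Toy numerical illustration of a diverging mean in a heavy-tailed law.
# The distribution and parameters are chosen for demonstration only.

import random

def pareto_sample_mean(alpha, n, seed=0):
    rng = random.Random(seed)
    # Inverse-CDF sampling: X = U**(-1/alpha) for U uniform on (0, 1).
    return sum(rng.random() ** (-1.0 / alpha) for _ in range(n)) / n

stable = pareto_sample_mean(alpha=3.0, n=200_000)    # settles near 3/2
unstable = pareto_sample_mean(alpha=1.0, n=200_000)  # keeps growing with n
```

For alpha = 3 the sample mean stabilizes near 1.5; for alpha = 1 it drifts upward without bound as n grows, the 'mean diverges' regime.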
Directory of Open Access Journals (Sweden)
Mary Hokazono
Full Text Available CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sβ0 thalassemia) and group II (23 patients with SC hemoglobinopathy/Sβ+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. Time-averaged maximum mean velocity differed significantly between the genotypes and correlated with hematological characteristics.
Barnwell-Sanders, Pamela
2015-01-01
Graduates of associate degree (AD) nursing programs form the largest segment of first-time National Council Licensure Examination for Registered Nurses (NCLEX-RN®) test takers, yet also experience the highest rate of NCLEX-RN® failures. NCLEX-RN® failure delays entry into the profession, adding an emotional and financial toll to the unsuccessful…
Average nuclear surface properties
International Nuclear Information System (INIS)
Groote, H. von.
1979-01-01
The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also for the case that there is a neutron gas instead of vacuum on the one side of the plane surface. The calculations were performed with the Thomas-Fermi Model of Syler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetry of the nucleon-matter ranging from symmetry beyond the neutron-drip line until the system no longer can maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model which then was fitted to experimental masses. (orig.)
Americans' Average Radiation Exposure
International Nuclear Information System (INIS)
2000-01-01
We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body
Energy Technology Data Exchange (ETDEWEB)
Lipnharski, I; Quails, N; Carranza, C; Correa, N; Bidari, S; Bickelhaup, M; Rill, L; Arreola, M [University of Florida, Gainesville, FL (United States)
2016-06-15
Purpose: The imaging of pregnant patients is medically necessary in certain clinical situations. The purpose of this work was to directly measure uterine doses in a cadaver scanned with CT protocols commonly performed on pregnant patients in order to estimate fetal dose and assess potential risk. Method: One postmortem subject was scanned on a 320-slice CT scanner with standard pulmonary embolism, trauma, and appendicitis protocols. All protocols were performed with the scan parameters and ranges currently used in clinical practice. Exams were performed both with and without iterative reconstruction to highlight the dose savings potential. Optically stimulated luminescent dosimeters (OSLDs) were inserted into the uterus in order to approximate fetal doses. Results: In the pulmonary embolism CT protocol, the uterus is outside of the primary beam, and the dose to the uterus was under 1 mGy. In the trauma and appendicitis protocols, the uterus is in the primary beam; the fetal dose estimates were 30.5 mGy for the trauma protocol and 20.6 mGy for the appendicitis protocol. Iterative reconstruction reduced fetal doses by 30%, with uterine doses of 21.3 mGy for the trauma protocol and 14.3 mGy for the appendicitis protocol. Conclusion: Fetal doses were under 1 mGy when exposed to scatter radiation, and under 50 mGy when exposed to primary radiation with the trauma and appendicitis protocols. Consistent with the National Council on Radiation Protection & Measurements (NCRP) and the International Commission on Radiological Protection (ICRP), these doses pose a negligible risk to the fetus, with only a small increased risk of cancer. Still, CT scans are not recommended during pregnancy unless the benefits of the exam clearly outweigh the potential risk. Furthermore, when possible, pregnant patients should be examined on CT scanners equipped with iterative reconstruction in order to keep patient doses as low as reasonably achievable.
International Nuclear Information System (INIS)
Lipnharski, I; Quails, N; Carranza, C; Correa, N; Bidari, S; Bickelhaup, M; Rill, L; Arreola, M
2016-01-01
Purpose: The imaging of pregnant patients is medically necessary in certain clinical situations. The purpose of this work was to directly measure uterine doses in a cadaver scanned with CT protocols commonly performed on pregnant patients in order to estimate fetal dose and assess potential risk. Method: One postmortem subject was scanned on a 320-slice CT scanner with standard pulmonary embolism, trauma, and appendicitis protocols. All protocols were performed with the scan parameters and ranges currently used in clinical practice. Exams were performed both with and without iterative reconstruction to highlight the dose savings potential. Optically stimulated luminescent dosimeters (OSLDs) were inserted into the uterus in order to approximate fetal doses. Results: In the pulmonary embolism CT protocol, the uterus is outside of the primary beam, and the dose to the uterus was under 1 mGy. In the trauma and appendicitis protocols, the uterus is in the primary beam; the fetal dose estimates were 30.5 mGy for the trauma protocol and 20.6 mGy for the appendicitis protocol. Iterative reconstruction reduced fetal doses by 30%, with uterine doses of 21.3 mGy for the trauma protocol and 14.3 mGy for the appendicitis protocol. Conclusion: Fetal doses were under 1 mGy when exposed to scatter radiation, and under 50 mGy when exposed to primary radiation with the trauma and appendicitis protocols. Consistent with the National Council on Radiation Protection & Measurements (NCRP) and the International Commission on Radiological Protection (ICRP), these doses pose a negligible risk to the fetus, with only a small increased risk of cancer. Still, CT scans are not recommended during pregnancy unless the benefits of the exam clearly outweigh the potential risk. Furthermore, when possible, pregnant patients should be examined on CT scanners equipped with iterative reconstruction in order to keep patient doses as low as reasonably achievable.
Guo, Ling-Yu; Owen Van Horne, Amanda J; Tomblin, J Bruce
2011-12-01
Prior work (Guo, Owen, & Tomblin, 2010) has shown that at the group level, auxiliary "is" production by young English-speaking children was symmetrical across lexical noun and pronominal subjects. Individual data did not uniformly reflect these patterns. On the basis of the framework of the gradual morphosyntactic learning (GML) hypothesis, the authors tested whether the addition of a theoretically motivated developmental measure, tense productivity (TP), could assist in explaining these individual differences. Using archival data from 20 children between ages 2;8 and 3;4 (years;months), the authors tested the ability of 3 developmental measures (TP; finite verb morphology composite, FVMC; mean length of utterance, MLU) to predict use of auxiliary "is" with different subject types. TP, but not MLU or FVMC, significantly improved model fit. Children with low TP scores produced auxiliary "is" more accurately with pronominal subjects than with lexical subjects. The facilitative effect of pronominal subjects on the production of auxiliary "is", however, was not found in children with high TP scores. The finding that the effect of subject type on the production accuracy of auxiliary "is" changed with children's TP is consistent with the GML hypothesis.
LaMonica, Haley M; English, Amelia; Hickie, Ian B; Ip, Jerome; Ireland, Catriona; West, Stacey; Shaw, Tim; Mowszowski, Loren; Glozier, Nick; Duffy, Shantel; Gibson, Alice A; Naismith, Sharon L
2017-10-25
Interest in electronic health (eHealth) technologies to screen for and treat a variety of medical and mental health problems is growing exponentially. However, no studies to date have investigated the feasibility of using such e-tools for older adults with mild cognitive impairment (MCI) or dementia. The objective of this study was to describe patterns of Internet use, as well as interest in and preferences for eHealth technologies among older adults with varying degrees of cognitive impairment. A total of 221 participants (mean age=67.6 years) attending the Healthy Brain Ageing Clinic at the University of Sydney, a specialist mood and memory clinic for adults ≥50 years of age, underwent comprehensive clinical and neuropsychological assessment and completed a 20-item self-report survey investigating current technology use and interest in eHealth technologies. Descriptive statistics and Fisher exact tests were used to characterize the findings, including variability in the results based on demographic and diagnostic factors, with diagnoses including subjective cognitive impairment (SCI), MCI, and dementia. The sample comprised 27.6% (61/221) SCI, 62.0% (137/221) MCI, and 10.4% (23/221) dementia (mean Mini-Mental State Examination=28.2). The majority of participants reported using mobile phones (201/220, 91.4%) and computers (167/194, 86.1%) routinely, with most respondents having access to the Internet at home (204/220, 92.6%). Variability was evident in the use of computers, mobile phones, and health-related websites in relation to sociodemographic factors, with younger, employed respondents with higher levels of education being more likely to utilize these technologies. Whereas most respondents used email (196/217, 90.3%), the use of social media websites was relatively uncommon. The eHealth intervention of most interest to the broader sample was memory strategy training, with 82.7% (172/208) of participants reporting they would utilize this form of intervention
Kooij, D.T.A.M.; Lange, A.H. de; Jansen, P.G.W.; Dikkers, J.S.E.
2013-01-01
Since workforces across the world are aging, researchers and organizations need more insight into how and why occupational well-being, together with work-related attitudes and motivations, change with age. Lifespan theories point to subjective health and future time perspective (i.e. an individual's
Kooij, T.A.M.; de Lange, A.H.; Jansen, P.G.W.; Dikkers, J.S.E.
2013-01-01
Since workforces across the world are aging, researchers and organizations need more insight into how and why occupational well-being, together with work-related attitudes and motivations, change with age. Lifespan theories point to subjective health and future time perspective (i.e. an individual's
Guo, Ling-Yu; Van Horne, Amanda J. Owen; Tomblin, J. Bruce
2011-01-01
Purpose: Prior work (Guo, Owen, & Tomblin, 2010) has shown that at the group level, auxiliary "is" production by young English-speaking children was symmetrical across lexical noun and pronominal subjects. Individual data did not uniformly reflect these patterns. On the basis of the framework of the gradual morphosyntactic learning (GML)…
Average glandular dose in patients submitted to mammographic examinations
International Nuclear Information System (INIS)
Nogueira, M.S.; Silva, T.A. da; Oliveira, M. de; Joana, G.S.; Oliveira, A.L.K.
2008-01-01
Doses in mammography should be maintained as low as possible without reducing the high image quality needed for the early detection of breast cancer. Because the breast is composed of soft tissues with very similar compositions and densities, it is difficult to detect small changes in the normal anatomical structures that may be associated with breast cancer. To achieve the standards of resolution and contrast for mammography, the quality and intensity of the X-ray beam, the breast positioning and compression, the film-screen system, and the film processing must be in optimal operational conditions. This study evaluated the mean glandular dose of patients undergoing routine exams in one mammography unit. Patient images were analyzed by a radiologist, who took into account 10 evaluation criteria for each of the CC and MLO projections. For estimating each patient's glandular dose, the radiographic technique parameters (kV and mAs) and the thickness of the compressed breast were recorded. European image quality criteria were adopted by the radiologist to accept the image for diagnostic purposes. For a breast density of 50% adipose and 50% glandular tissue, the incident air kerma was measured and the glandular dose calculated considering the X-ray output during the exam. In the study of 50 patients, the mean glandular dose varied from 0.90 to 3.27 mGy, with a mean value of 1.98 mGy, for CC projections. For MLO projections, the mean glandular doses ranged from 0.97 to 3.98 mGy, with a mean value of 2.60 mGy. (author)
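The dose estimate described above, mean glandular dose from measured incident air kerma, can be sketched in a few lines. This is a generic illustration: the conversion factor g depends on beam quality and compressed breast thickness and is normally taken from tabulated values (e.g. Dance-style tables for a 50% adipose / 50% glandular breast); the value below is an invented placeholder, not a tabulated one.

```python
# Minimal sketch of the MGD estimate: MGD = K_i * g, where K_i is the
# incident air kerma and g is a tabulated conversion factor for the given
# beam quality and breast thickness. Both numbers below are invented.

def mean_glandular_dose(incident_air_kerma_mgy, g_factor):
    """MGD in mGy for a 50/50 adipose/glandular breast composition."""
    return incident_air_kerma_mgy * g_factor

mgd = mean_glandular_dose(incident_air_kerma_mgy=10.0, g_factor=0.2)
```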
Energy Technology Data Exchange (ETDEWEB)
1978-01-01
An indexed bibliography is presented of literature selected by the Radiation Shielding Information Center since the previous volume was published in 1974 in the area of radiation transport and shielding against radiation from nuclear reactors, x-ray machines, radioisotopes, nuclear weapons (including fallout), and low-energy accelerators (e.g., neutron generators). In addition to lists of literature titles by subject categories (accessions 3501-4950), author and keyword indexes are given. Most of the literature selected for Vol. V was published in the years 1973 to 1976.
International Nuclear Information System (INIS)
1980-05-01
An indexed bibliography is presented of literature selected by the Radiation Shielding Information Center since the previous volume was published in 1978 in the area of radiation transport and shielding against radiation from nuclear reactors, x-ray machines, radioisotopes, nuclear weapons (including fallout), and low energy accelerators (e.g., neutron generators). The bibliography was typeset from data processed by computer from magnetic tape files. In addition to lists of literature titles by subject categories (accessions 4951-6200), an author index is given
International Nuclear Information System (INIS)
1978-01-01
An indexed bibliography is presented of literature selected by the Radiation Shielding Information Center since the previous volume was published in 1974 in the area of radiation transport and shielding against radiation from nuclear reactors, x-ray machines, radioisotopes, nuclear weapons (including fallout), and low-energy accelerators (e.g., neutron generators). In addition to lists of literature titles by subject categories (accessions 3501-4950), author and keyword indexes are given. Most of the literature selected for Vol. V was published in the years 1973 to 1976
The average Indian female nose.
Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh
2011-12-01
This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.
DEFF Research Database (Denmark)
Poggenborg, René Panduro; Eshed, Iris; Østergaard, Mikkel
2015-01-01
and clinical examination, and compared. Three new WBMRI enthesitis indices were developed. RESULTS: WBMRI allowed evaluation of 888 (53%) of 1680 sites investigated, and 19 (54%) of 35 entheses had a readability >70%. The percentage agreement between WBMRI and clinical enthesitis was 49-100%, when compared...
Abbas, Richat; Leister, Cathie; Sonnichsen, Daryl
2013-08-01
Bosutinib is an orally bioavailable, dual Src and Abl tyrosine kinase inhibitor approved in the USA for the treatment of Philadelphia chromosome-positive chronic myeloid leukemia following development of resistance or intolerance to prior therapy. In vitro studies demonstrated that bosutinib displays pH-dependent aqueous solubility, suggesting that concomitant administration of agents that alter gastric pH could affect bosutinib absorption. The objectives of this study were to evaluate the effect of lansoprazole, a gastric proton pump inhibitor, on the pharmacokinetics and safety of bosutinib. This open-label, non-randomized, phase I study involved inpatients and outpatients at a single site. The study participants were healthy men or women of non-childbearing potential aged 18-50 years. Each subject received bosutinib 400 mg on Day 1, lansoprazole 60 mg on Day 14, and bosutinib 400 mg co-administered with lansoprazole 60 mg on Day 15 under fasting conditions. The main outcome measure was the effect of multiple doses of lansoprazole on the pharmacokinetic profile of a single oral dose of bosutinib. A total of 24 healthy male subjects were enrolled. Co-administration with lansoprazole decreased the mean maximum plasma concentration (C(max)) of bosutinib from 70.2 to 42.9 ng/mL, and the total area under the plasma concentration-time curve (AUC) from 1,940 to 1,470 ng·h/mL. Log-transformed bosutinib pharmacokinetic parameters indicated significant between-treatment differences; the least squares geometric mean ratio for C(max) was 54% (95% CI 42-70) and for AUC was 74% (95% CI 60-90). Mean apparent total body clearance from plasma after oral administration increased from 237 to 330 L/h, and the median time to reach C(max) increased from 5 to 6 h, although this change may be related to decreased bosutinib absorption when combined with lansoprazole. When co-administered with lansoprazole, bosutinib maintained an acceptable safety profile, which was primarily
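The geometric mean ratio statistic reported above can be sketched as follows. The per-subject values are invented, not the study's data; the point is only the form of the calculation (log-transform, average, difference, exponentiate):

```python
# Sketch of a bioequivalence-style geometric mean ratio:
# GMR = exp(mean(log(test)) - mean(log(reference))), quoted as a percentage.
# The AUC values below are invented for illustration.

import math

def geometric_mean_ratio(test_values, ref_values):
    lt = sum(math.log(v) for v in test_values) / len(test_values)
    lr = sum(math.log(v) for v in ref_values) / len(ref_values)
    return math.exp(lt - lr)

# Invented per-subject AUCs (ng*h/mL) with and without the interacting drug:
ratio = geometric_mean_ratio([1400, 1550, 1460], [1900, 2000, 1920])
pct = 100 * ratio  # a value near 74% would indicate a ~26% drop in exposure
```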
Spindle, Tory R; Breland, Alison B; Karaoghlanian, Nareg V; Shihadeh, Alan L; Eissenberg, Thomas
2015-02-01
Electronic cigarettes (ECIGs) heat a nicotine-containing solution; the resulting aerosol is inhaled by the user. Nicotine delivery may be affected by users' puffing behavior (puff topography), and little is known about the puff topography of ECIG users. Puff topography can be measured using mouthpiece-based computerized systems. However, the extent to which a mouthpiece influences nicotine delivery and subjective effects in ECIG users is unknown. Plasma nicotine concentration, heart rate, and subjective effects were measured in 13 experienced ECIG users who used their preferred ECIG and liquid (≥ 12 mg/ml nicotine) during 2 sessions (with or without a mouthpiece). In both sessions, participants completed an ECIG use session in which they were instructed to take 10 puffs with 30-second inter-puff intervals. Puff topography was recorded in the mouthpiece condition. Almost all measures of the effects of ECIG use were independent of topography measurement. Collapsed across sessions, mean plasma nicotine concentration increased by 16.8 ng/ml, and mean heart rate increased by 8.5 bpm (ps < .05). When using the topography measurement equipment, ECIG-using participants took larger and longer puffs with lower flow rates. In experienced ECIG users, measuring ECIG topography did not influence ECIG-associated nicotine delivery or most measures of withdrawal suppression. Topography measurement systems will need to account for the low flow rates observed for ECIG users.
The difference between alternative averages
Directory of Open Access Journals (Sweden)
James Vaupel
2012-09-01
Full Text Available BACKGROUND: Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE: We want to deepen understanding of how compositional change affects population averages. RESULTS: The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS: Other uses of covariances in formal demography are worth exploring.
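The identity stated in the abstract is easy to verify numerically. The data and weights below are invented; all averages and the covariance are computed under the second weighting function:

```python
# Numerical check of: A1 - A2 = Cov_2(x, r) / E_2(r), where r = w1/w2 is the
# ratio of the two weighting functions and subscript 2 denotes averaging
# under w2. Values and weights are invented.

def wmean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

x  = [1.0, 2.0, 4.0, 8.0]
w1 = [4.0, 3.0, 2.0, 1.0]   # first weighting function
w2 = [1.0, 1.0, 1.0, 1.0]   # second weighting function (unweighted here)

r = [a / b for a, b in zip(w1, w2)]
lhs = wmean(x, w1) - wmean(x, w2)
cov = wmean([xi * ri for xi, ri in zip(x, r)], w2) - wmean(x, w2) * wmean(r, w2)
rhs = cov / wmean(r, w2)
```

With these numbers both sides equal -1.15; because w1 weights small values of x more heavily, the covariance between x and r is negative and the w1-average falls below the unweighted average.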
Lange, Toni; Struyf, Filip; Schmitt, Jochen; Lützner, Jörg; Kopkow, Christian
2017-07-01
Systematic review. The aim of this systematic review was to summarize and evaluate intra- and interrater reliability research on physical examination tests used for the assessment of scapular dyskinesis. Scapular dyskinesis, defined as an alteration of normal scapular kinematics, is described as a non-specific response to different shoulder pathologies. A systematic literature search was conducted in MEDLINE, EMBASE, AMED and PEDro until March 20th, 2015. Methodological quality was assessed with the Quality Appraisal of Reliability Studies (QAREL) checklist by two independent reviewers. The search strategy revealed 3259 articles, of which 15 met the inclusion criteria. These studies evaluated the reliability of 41 tests and test variations used for the assessment of scapular dyskinesis. This review identified a lack of high-quality studies evaluating intra- as well as interrater reliability of tests used for the assessment of scapular dyskinesis. In addition, reliability measures differed between the included studies, hindering proper cross-study comparisons. The effect of manual correction of the scapula on shoulder symptoms was evaluated in only one study, which is striking, since symptom alteration tests are used in routine care to guide further treatment. Thus, there is a strong need for further research in this area. Diagnosis, level 3a.
International Nuclear Information System (INIS)
Ikeda, Eiji; Shiozaki, Kazumasa; Takahashi, Nobukazu; Togo, Takashi; Odawara, Toshinari; Oka, Takashi; Inoue, Tomio; Hirayasu, Yoshio
2008-01-01
The Mini-Mental State Examination (MMSE) is considered a useful supplementary method to diagnose dementia and evaluate the severity of cognitive disturbance. However, the region of the cerebrum that correlates with the MMSE score is not clear. Recently, a new method was developed to analyze regional cerebral blood flow (rCBF) using a Z-score imaging system (eZIS). This system shows changes of rCBF compared with a normal database. In addition, a three-dimensional stereotaxic region of interest (ROI) template (3DSRT), fully automated ROI analysis software, was developed. The objective of this study was to investigate the correlation between rCBF changes and total MMSE score using these new methods. The association between total MMSE score and rCBF changes was investigated in 24 patients (mean age ± standard deviation (SD) 71.5±9.2 years; 6 men and 18 women) with memory impairment using eZIS and 3DSRT. Step-wise multiple regression analysis was used for multivariate analysis, with the total MMSE score as the dependent variable and rCBF changes in 24 areas as the independent variables. Total MMSE score was significantly correlated only with reduced perfusion of the left hippocampus (P<0.01), not the right. Total MMSE score is an important indicator of left hippocampal function. (author)
Katz, Jeffrey N.; Smith, Savannah R.; Yang, Heidi Y.; Martin, Scott D.; Wright, John; Donnell-Fink, Laurel A.; Losina, Elena
2016-01-01
Objective To evaluate the utility of clinical history, radiographic and physical exam findings in the diagnosis of symptomatic meniscal tear (SMT) in patients over age 45, in whom concomitant osteoarthritis is prevalent. Methods In a cross-sectional study of patients from two orthopedic surgeons’ clinics we assessed clinical history, physical examination and radiographic findings in patients over 45 with knee pain. The orthopedic surgeons rated their confidence that subjects’ symptoms were due to MT; we defined the diagnosis of SMT as at least 70% confidence. We used logistic regression to identify factors independently associated with diagnosis of SMT and we used the regression results to construct an index of the likelihood of SMT. Results In 174 participants, six findings were associated independently with the expert clinician having ≥70% confidence that symptoms were due to MT: localized pain, ability to fully bend the knee, pain duration <1 year, lack of varus alignment, lack of pes planus, and absence of joint space narrowing on radiographs. The index identified a low risk group with 3% likelihood of SMT. Conclusion While clinicians traditionally rely upon mechanical symptoms in this diagnostic setting, our findings did not support the conclusion that mechanical symptoms were associated with the expert’s confidence that symptoms were due to MT. An index that includes history of localized pain, full flexion, duration <1 year, pes planus, varus alignment, and joint space narrowing can be used to stratify patients according to their risk of SMT and it identifies a subgroup with very low risk. PMID:27390312
How to average logarithmic retrievals?
Directory of Open Access Journals (Sweden)
B. Funke
2012-04-01
Calculations of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, are prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior; rather than suggesting simple recipes, we can do little more than create awareness of the traps involved in averaging mixing ratios obtained from logarithmic retrievals.
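One of the biasing mechanisms can be illustrated without a full system simulator: for lognormally distributed abundances, exponentiating the mean of the log-retrievals recovers the geometric mean, which sits systematically below the arithmetic mean of the abundances. A minimal noise-free sketch (Python; the lognormal variability `sigma` is an invented illustration, not a value from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical trace-gas abundances: lognormal, i.e., log-abundance is Gaussian.
sigma = 0.5                            # assumed natural variability of log(abundance)
x = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)

linear_avg = x.mean()                  # arithmetic mean of abundances
log_avg = np.exp(np.log(x).mean())     # exponentiated mean of log-retrievals

# For a lognormal, E[x] = exp(mu + sigma^2/2) > exp(mu): the exponentiated
# log-average (the geometric mean) is biased low relative to the linear average,
# and the gap grows with the natural variability sigma.
print(linear_avg, log_avg)
```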
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
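Ordered weighted averaging, one of the families the book covers, is easy to sketch: the weights attach to the sorted positions of the inputs rather than to the inputs themselves, so the same operator family spans the minimum, the maximum, and the arithmetic mean. A minimal illustration (Python; the example inputs and weights are arbitrary):

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: weights attach to ranks, not to sources."""
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and np.all(w >= 0)
    return float(np.sort(values)[::-1] @ w)  # sort descending, then weight

x = [0.3, 0.9, 0.5]
print(owa(x, [1, 0, 0]))        # 0.9 -> weight on the largest input: max
print(owa(x, [0, 0, 1]))        # 0.3 -> weight on the smallest input: min
print(owa(x, [1/3, 1/3, 1/3]))  # equal weights: arithmetic mean
```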
Aperture averaging in strong oceanic turbulence
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
Receiver aperture averaging is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations over a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor in strong oceanic turbulence is also presented.
Lagrangian averaging with geodesic mean.
Oliver, Marcel
2017-11-01
This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.
Weighted estimates for the averaging integral operator
Czech Academy of Sciences Publication Activity Database
Opic, Bohumír; Rákosník, Jiří
2010-01-01
Roč. 61, č. 3 (2010), s. 253-262 ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords : averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231
Averaging in spherically symmetric cosmology
International Nuclear Information System (INIS)
Coley, A. A.; Pelavas, N.
2007-01-01
The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form of the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant effect on the dynamics of the Universe and on cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution, as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis.
Averaging models: parameters estimation with the R-Average procedure
Directory of Open Access Journals (Sweden)
S. Noventa
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
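The averaging model itself is simple to state: the response is a weighted mean of scale values, including an initial impression. A minimal sketch (Python; the scale values, weights, and initial impression are invented for illustration, and this is the model's forward prediction, not the R-Average estimation procedure):

```python
import numpy as np

def averaging_response(scales, weights, s0=5.0, w0=1.0):
    """Anderson's averaging model with initial impression (s0, w0):
    R = (w0*s0 + sum(w_i * s_i)) / (w0 + sum(w_i))."""
    s = np.asarray(scales, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (w0 * s0 + np.sum(w * s)) / (w0 + np.sum(w))

# Adding a mildly positive attribute to a highly positive one LOWERS the
# judgement -- the crossover signature that distinguishes averaging from
# adding models without any extra interaction parameters.
high = averaging_response([9.0], [1.0])               # one strong attribute
high_plus_mild = averaging_response([9.0, 6.0], [1.0, 1.0])
print(high, high_plus_mild)   # 7.0, then 20/3: the response decreases
```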
Dall'Oglio, Federica; Tedeschi, Aurora; Guardabasso, Vincenzo; Micali, Giuseppe
2015-09-01
To evaluate if nonprescription topical agents may provide positive outcomes in the management of mild-to-moderate facial seborrheic dermatitis by reducing inflammation and scale production, through clinical evaluation and erythema-directed digital photography. Open-label, prospective, non-blinded, intra-patient, controlled clinical trial (target area). Twenty adult subjects affected by mild-to-moderate facial seborrheic dermatitis were enrolled and instructed to apply the study cream twice daily, initially on a selected target area only for seven days. If the subject showed visible improvement, they were advised to extend the application to all affected facial areas for 21 additional days. Efficacy was evaluated by measuring the grade of erythema (by clinical examination and by erythema-directed digital photography), desquamation (by clinical examination), and pruritus (by subject-completed visual analog scale). Additionally, at the end of the protocol, a Physician Global Assessment was carried out. Eighteen subjects completed the study, whereas two subjects were lost to follow-up for nonadherence and personal reasons, respectively. Day 7 data from target areas showed a significant reduction in erythema. At the end of the study, a significant improvement was recorded for erythema, desquamation, and pruritus compared to baseline. Physician Global Assessment showed improvement in 89 percent of patients, with a complete response in 56 percent of cases. These preliminary results indicate that the study cream may be a viable nonprescription therapeutic option for patients affected by facial seborrheic dermatitis, capable of producing early and significant improvement. This study also emphasizes the advantages of using an erythema-directed digital photography system to assist in simpler, more accurate erythema severity grading and therapeutic monitoring in patients affected by seborrheic dermatitis.
Evaluations of average level spacings
International Nuclear Information System (INIS)
Liou, H.I.
1980-01-01
The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables
Ergodic averages via dominating processes
DEFF Research Database (Denmark)
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.
High average power supercontinuum sources
Indian Academy of Sciences (India)
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.
Model averaging, optimal inference and habit formation
Directory of Open Access Journals (Sweden)
Thomas H B FitzGerald
2014-06-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
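The core of Bayesian model averaging is a few lines: convert model evidences into posterior model probabilities and weight each model's prediction accordingly. A minimal sketch (Python; the log-evidences and predictions are invented placeholders, and equal model priors are assumed):

```python
import numpy as np

# Hypothetical setup: two candidate models whose log-evidences were computed
# elsewhere, each making a prediction about the same quantity.
log_evidence = np.array([-10.0, -12.0])   # model 1 has higher evidence
predictions = np.array([0.8, 0.2])        # each model's prediction

# Posterior model probabilities (equal priors): a softmax of the log-evidences,
# shifted by the max for numerical stability.
w = np.exp(log_evidence - log_evidence.max())
w /= w.sum()

bma_prediction = float(w @ predictions)   # evidence-weighted average prediction
print(w, bma_prediction)
```

Because the weights come from the evidence, which penalises complexity as well as rewarding accuracy, the averaged prediction is pulled toward the better-supported model without discarding the alternative entirely.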
Autoregressive Moving Average Graph Filtering
Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert
2016-01-01
One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
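A first-order ARMA graph filter can be sketched in a few lines: iterating y ← ψSy + φx converges, whenever |ψ| times the spectral radius of the shift operator S is below one, to the rational graph frequency response φ/(1 − ψλ). A minimal illustration (Python; the small graph and the coefficients ψ, φ are invented for the example):

```python
import numpy as np

# Small undirected path graph; normalized adjacency as the shift operator S.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))        # eigenvalues lie in [-1, 1]

psi, phi = 0.5, 1.0                    # |psi| * rho(S) < 1 ensures convergence
x = np.array([1.0, -2.0, 3.0])         # graph signal to be filtered

y = np.zeros_like(x)
for _ in range(200):                   # ARMA recursion: y <- psi*S*y + phi*x
    y = psi * (S @ y) + phi * x

# The steady state implements (I - psi*S)^{-1} * phi * x, i.e. the rational
# per-eigenvalue response phi / (1 - psi*lambda).
y_exact = np.linalg.solve(np.eye(3) - psi * S, phi * x)
assert np.allclose(y, y_exact)
```

The recursion is naturally distributed: each node only needs its neighbors' current values to apply S.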
Averaging Robertson-Walker cosmologies
International Nuclear Information System (INIS)
Brown, Iain A.; Robbers, Georg; Behrend, Juliane
2009-01-01
The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the present-day backreaction is of the order of Ω_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models.
DEFF Research Database (Denmark)
Lundgaard Andersen, Linda; Soldz, Stephen
2012-01-01
A major theme in recent psychoanalytic thinking concerns the use of therapist subjectivity, especially “countertransference,” in understanding patients. This thinking converges with and expands developments in qualitative research regarding the use of researcher subjectivity as a tool. From both Anglo-Saxon and continental traditions, this special issue provides examples of the use of researcher subjectivity, informed by psychoanalytic thinking, in expanding research understanding.
Topological quantization of ensemble averages
International Nuclear Information System (INIS)
Prodan, Emil
2009-01-01
We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states
Flexible time domain averaging technique
Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng
2013-09-01
Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics that may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period cutting error (PCE) to differing extents. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
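For contrast with the proposed FTDA, classical TDA is easy to sketch: cut the record into whole periods and average them, which attenuates asynchronous noise while preserving the synchronous periodic waveform. A minimal illustration (Python; the signal is synthetic, and an exactly known integer period is assumed, i.e., the idealized case with no period cutting error):

```python
import numpy as np

rng = np.random.default_rng(2)
period = 100                         # samples per revolution (assumed known)
n_periods = 200
t = np.arange(period * n_periods)
clean = np.sin(2*np.pi*t/period) + 0.5*np.sin(2*np.pi*3*t/period)  # periodic part
signal = clean + rng.normal(scale=1.0, size=t.size)                # + noise

# Classical TDA: reshape the record into whole periods and average them.
tda = signal.reshape(n_periods, period).mean(axis=0)

# The noise power drops by ~1/n_periods while the periodic waveform survives.
err = np.abs(tda - clean[:period]).max()
print(err)   # small compared to the unit-variance noise
```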
Hart, Robert J; Zhupanska, Olesya I
2016-01-01
A new fully automated experimental setup has been developed to study the response of carbon fiber reinforced polymer (CFRP) composites subjected to a high-intensity pulsed electric field and low-velocity impact. The experimental setup allows for real-time measurements of the pulsed electric current, voltage, impact load, and displacements on the CFRP composite specimens. The setup includes a new custom-built current pulse generator that utilizes a bank of capacitor modules capable of producing a 20 ms current pulse with an amplitude of up to 2500 A. The setup enabled application of the pulsed current and impact load and successfully achieved coordination between the peak of the current pulse and the peak of the impact load. A series of electrical, impact, and coordinated electrical-impact characterization tests were performed on 32-ply IM7/977-3 unidirectional CFRP composites to assess their ability to withstand application of a pulsed electric current and determine the effects of the pulsed current on the impact response. Experimental results revealed that the electrical resistance of CFRP composites decreased with an increase in the electric current magnitude. It was also found that the electrified CFRP specimens withstood higher average impact loads compared to the non-electrified specimens.
Statistics on exponential averaging of periodograms
Energy Technology Data Exchange (ETDEWEB)
Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering
1994-11-01
The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
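The recursion itself is a one-line exponentially weighted moving average over successive periodograms. A minimal sketch (Python; the segment length and smoothing constant are invented for illustration, and the process is unit-variance white noise so the expected PSD is flat):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256                      # samples per segment
alpha = 0.1                  # smoothing constant; time constant ~ 1/alpha segments

psd = None
for _ in range(500):
    x = rng.normal(size=n)                     # one segment of white noise
    pgram = np.abs(np.fft.rfft(x))**2 / n      # raw periodogram of the segment
    # Exponential averaging of subsequent periodograms:
    psd = pgram if psd is None else (1 - alpha) * psd + alpha * pgram

# For unit-variance white noise the expected periodogram level is ~1.0 per bin,
# and the exponential average fluctuates tightly around it.
print(psd[1:-1].mean())
```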
Changing mortality and average cohort life expectancy
Directory of Open Access Journals (Sweden)
Robert Schoen
2005-10-01
Period life expectancy varies with changes in mortality and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of period life expectancy as the leading measure of survivorship. An aggregate measure of period mortality seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
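The generic idea of trajectory averaging can be illustrated on a plain Robbins-Monro recursion: the running average of the iterates is typically a smoother estimator than the final iterate itself. A minimal sketch (Python; the target root, gain sequence, and noise model are invented for the toy problem, and this is plain stochastic approximation, not the SAMCMC algorithm of the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
theta_star = 2.0                       # the unknown root of h(theta) = theta_star - theta

theta = 0.0
running_sum = 0.0
n = 50_000
for k in range(1, n + 1):
    noisy_h = (theta_star - theta) + rng.normal()   # noisy observation of h(theta)
    theta += k**-0.7 * noisy_h                      # slowly decaying gain a_k = k^-0.7
    running_sum += theta

theta_bar = running_sum / n            # trajectory (Polyak-Ruppert) average
print(theta, theta_bar)                # the average is the smoother estimate
```

With a slowly decaying gain, the raw iterate keeps a noticeable noise floor, while averaging the whole trajectory recovers near-optimal accuracy.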
Reynolds averaged simulation of unsteady separated flow
International Nuclear Information System (INIS)
Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.
2003-01-01
The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around square cylinder and over a wall-mounted cube are simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better concurrence with available experimental data than has been achieved with steady computation
Averaging of nonlinearity-managed pulses
International Nuclear Information System (INIS)
Zharnitsky, Vadim; Pelinovsky, Dmitry
2005-01-01
We consider the nonlinear Schroedinger equation with nonlinearity management, which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons.
Aktas, İdris; Bılgın, İbrahim
2015-01-01
Background: Many researchers agree that students, especially primary students, have learning difficulties with the 'Particulate Nature of Matter' unit. One reason for this difficulty is not considering individual differences when teaching science. In the 4MAT model, the learning environment is arranged according to individual differences. Purpose: The purpose of this study is to examine (1) the effects of the 4MAT learning model on 7th grade students' academic achievement and motivation in the 'Particulate Nature of Matter' unit and (2) student opinions on the 4MAT model. Sample: The sample consists of 235 students (115 experimental, 120 control) in Turkey. Design and methods: Experimental groups were instructed with the 4MAT model while control groups were instructed with a traditional method. An Achievement Test (AchToM) and a Motivation Scale (MotScl) were administered to students as pre- and post-tests. Moreover, the opinions of students in the experimental groups on the 4MAT model were ascertained through open-ended questions after the application. Results: According to independent t-test results, statistical difference in favour of the experimental groups was detected between the post-AchToM (ES = 1.43; p motivation and participation in the lesson, lessons are more amusing and enjoyable, and the self-confidence of the students increases. Besides these positive opinions, however, a few students stated that the method took too much time, they were not motivated, and it did not help them in understanding the subject. Conclusions: The 4MAT model is more effective than the traditional method in terms of increasing achievement and motivation. The model takes all learners into account. Thus, the teacher or educator should use the 4MAT model to ensure all students' learning in their classroom.
Unpredictable visual changes cause temporal memory averaging.
Ohyama, Junji; Watanabe, Katsumi
2007-09-01
Various factors influence the perceived timing of visual events. Yet, little is known about the ways in which transient visual stimuli affect the estimation of the timing of other visual events. In the present study, we examined how a sudden color change of an object would influence the remembered timing of another transient event. In each trial, subjects saw a green or red disk travel in circular motion. A visual flash (white frame) occurred at random times during the motion sequence. The color of the disk changed either at random times (unpredictable condition), at a fixed time relative to the motion sequence (predictable condition), or not at all (no-change condition). The subjects' temporal memory of the visual flash in the predictable condition was as veridical as that in the no-change condition. In the unpredictable condition, however, the flash was reported to occur closer to the timing of the color change than it actually did. Thus, an unpredictable visual change distorts the temporal memory of another visual event such that the remembered moment of the event is drawn toward the timing of the unpredictable change.
DEFF Research Database (Denmark)
Hjørland, Birger
2017-01-01
This article presents and discusses the concept "subject" or subject matter (of documents) as it has been examined in library and information science (LIS) for more than 100 years. Different theoretical positions are outlined, and it is found that the most important distinction is between document-oriented and request-oriented views. The document-oriented view conceives the subject as something inherent in documents, whereas the request-oriented view (or the policy-based view) understands the subject as an attribution made to documents in order to facilitate certain uses of them. Related concepts...
Domain-averaged Fermi-hole Analysis for Solids
Czech Academy of Sciences Publication Activity Database
Baranov, A.; Ponec, Robert; Kohout, M.
2012-01-01
Roč. 137, č. 21 (2012), s. 214109 ISSN 0021-9606 R&D Projects: GA ČR GA203/09/0118 Institutional support: RVO:67985858 Keywords : bonding in solids * domain averaged fermi hole * natural orbitals Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.164, year: 2012
The average size of ordered binary subgraphs
van Leeuwen, J.; Hartel, Pieter H.
To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a
Web-based pathology practice examination usage
Directory of Open Access Journals (Sweden)
Edward C Klatt
2014-01-01
Full Text Available Context: General and subject-specific practice examinations for students in health sciences studying pathology were placed onto a free public internet web site entitled WebPath and were accessed four clicks from the home web site menu. Subjects and Methods: Multiple choice questions were coded into .html files with JavaScript functions for web browser viewing in a timed format. A Perl script with common gateway interface (CGI) for web page forms scored examinations and placed results into a log file on an internet server. The four general review examinations of 30 questions each could be completed in up to 30 min. The 17 subject-specific examinations of 10 questions each, with accompanying images, could be completed in up to 15 min each. Scores and user educational field of study were compiled from log files from June 2006 to January 2014. Results: The four general review examinations had 31,639 accesses with completion of all questions, for a completion rate of 54% and an average score of 75%. A score of 100% was achieved by 7% of users, ≥90% by 21%, and ≥50% by 95% of users. In top-to-bottom web page menu order, review examination usage was 44%, 24%, 17%, and 15% of all accessions. The 17 subject-specific examinations had 103,028 completions, with a completion rate of 73% and an average score of 74%. A score of 100% was achieved by 20% of users overall, ≥90% by 37%, and ≥50% by 90% of users. The first three menu items on the web page accounted for 12.6%, 10.0%, and 8.2% of all completions, and the bottom three accounted for no more than 2.2% each. Conclusions: Completion rates were higher for the shorter 10-question subject examinations. Users identifying themselves as MD/DO scored higher than other users, averaging 75%. Usage was higher for examinations at the top of the web page menu. The scores achieved suggest that a cohort of serious users fully completing the examinations had sufficient preparation to use them to support
Operator licensing examiner standards
International Nuclear Information System (INIS)
1987-05-01
The Operator Licensing Examiner Standards provide policy and guidance to NRC examiners and establish the procedures and practices for examining and licensing of applicants for NRC operator licenses pursuant to Part 55 of Title 10 of the Code of Federal Regulations (10 CFR 55). They are intended to assist NRC examiners and facility licensees to understand the examination process better and to provide for equitable and consistent administration of examinations to all applicants by NRC examiners. These standards are not a substitute for the operator licensing regulations and are subject to revision or other internal operator examination licensing policy changes
Operator licensing examiner standards
International Nuclear Information System (INIS)
1993-01-01
The Operator Licensing Examiner Standards provide policy and guidance to NRC examiners and establish the procedures and practices for examining licensees and applicants for reactor operator and senior reactor operator licenses at power reactor facilities pursuant to Part 55 of Title 10 of the Code of Federal Regulations (10 CFR 55). The Examiner Standards are intended to assist NRC examiners and facility licensees to better understand the initial and requalification examination processes and to ensure the equitable and consistent administration of examinations to all applicants. These standards are not a substitute for the operator licensing regulations and are subject to revision or other internal operator licensing policy changes
Cruz-Orcutt, Noemi; Warren, John J.; Broffitt, Barbara; Levy, Steven M.; Weber-Gasparoni, Karin
2012-01-01
Objective To assess and compare examiner reliability of clinical and photographic fluorosis examinations using the Fluorosis Risk Index (FRI) among children in the Iowa Fluoride Study (IFS). Methods The IFS examined 538 children for fluorosis and dental caries at age 13 and obtained intra-oral photographs from nearly all of them. To assess examiner reliability, duplicate clinical examinations were conducted for 40 of the subjects. In addition, 200 of the photographs were scored independently for fluorosis by two examiners in a standardized manner. Fluorosis data were compared between examiners for the clinical exams and separately for the photographic exams, and a comparison was made between clinical and photographic exams. For all 3 comparisons, examiner reliability was assessed using kappa statistics at the tooth level. Results Inter-examiner reliability for the duplicate clinical exams on the sample of 40 subjects as measured by kappa was 0.59, while the repeat exams of the 200 photographs yielded a kappa of 0.64. For the comparison of photographic and clinical exams, inter-examiner reliability, as measured by weighted kappa, was 0.46. FRI scores obtained using the photographs were higher on average than those obtained from the clinical exams. Fluorosis prevalence was higher for photographs (33%) than found for clinical exam (18%). Conclusion Results suggest inter-examiner reliability is greater and fluorosis scores higher when using photographic compared to clinical examinations. PMID:22316120
Averaging for solitons with nonlinearity management
International Nuclear Information System (INIS)
Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.
2003-01-01
We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations
Subjective versus objective assessment of breast reconstruction.
Henseler, Helga; Smith, Joanna; Bowman, Adrian; Khambay, Balvinder S; Ju, Xiangyang; Ayoub, Ashraf; Ray, Arup K
2013-05-01
To date, breast assessment has been conducted mainly subjectively; only lately has a validated objective three-dimensional (3D) imaging method been developed. The study aimed to assess breast reconstruction subjectively and objectively and to compare the two. In forty-four patients after immediate unilateral breast reconstruction with solely the extended latissimus dorsi flap, the breast was captured by a validated 3D imaging method and standardized 2D photography. Breast symmetry was subjectively evaluated by six experts who applied the Harris score, giving a mark of 1-4 for a poor to excellent result. An error study was conducted by examination of the intra- and inter-observer agreement and agreement on controls. By Procrustes analysis an objective asymmetry score was obtained and compared to the subjective assessment. The subjective assessment showed good or substantial inter-observer agreement and fair (p-values: 0.159, 0.134, 0.099) to substantial (p-value: 0.005) intra-observer agreement. The objective assessment revealed that the reconstructed breast had a significantly smaller volume than the opposite side and that the average asymmetry score was 0.052, ranging from 0.019 to 0.136. When comparing the subjective and objective methods, the relationship between the two scores was highly significant. Subjective breast assessment lacked accuracy and reproducibility. This was the first error study of subjective breast assessment versus an objective validated 3D imaging method based on true 3D parameters. The substantial agreement between established subjective breast assessment and the new validated objective method supports the value of the latter, and we expect its future role to expand. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
DSCOVR Magnetometer Level 2 One Minute Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data
DSCOVR Magnetometer Level 2 One Second Averages
National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data
Spacetime averaging of exotic singularity universes
International Nuclear Information System (INIS)
Dabrowski, Mariusz P.
2011-01-01
Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.
NOAA Average Annual Salinity (3-Zone)
California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
Improving consensus structure by eliminating averaging artifacts
Directory of Open Access Journals (Sweden)
KC Dukka B
2009-03-01
Full Text Available Abstract Background Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo-energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structures becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structures (3.28 Å for refined structures and 3.36 Å for averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
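The averaging artifact this abstract targets is easy to reproduce: naively averaging atom coordinates across aligned conformers can shrink bond lengths below their physical values. A toy sketch, illustrative only (the two-atom "conformers" are invented, and this is not the paper's Monte Carlo refinement):

```python
import math

def average_coords(ensembles):
    """Naive consensus: average each atom's (x, y, z) across an
    ensemble of aligned structures."""
    n = len(ensembles)
    atoms = len(ensembles[0])
    return [tuple(sum(s[a][k] for s in ensembles) / n for k in range(3))
            for a in range(atoms)]

# Two aligned two-atom conformers, each with a 1.0-length bond,
# but with the bond pointing in different directions.
c1 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
c2 = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
avg = average_coords([c1, c2])

# The averaged "bond" is sqrt(0.5) ~ 0.707: unphysically short,
# which is exactly the kind of artifact refinement must repair.
bond_length = math.dist(avg[0], avg[1])
```

This is why the paper refines the averaged structure rather than using it directly.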
Research & development and growth: A Bayesian model averaging analysis
Czech Academy of Sciences Publication Activity Database
Horváth, Roman
2011-01-01
Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords : Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf
Memory and subjective workload assessment
Staveland, L.; Hart, S.; Yeh, Y. Y.
1986-01-01
Recent research suggested that subjective introspection of workload is not based upon specific retrieval of information from long-term memory, and only reflects the average workload that a particular task imposes upon the human operator. These findings are based upon global ratings of workload for the overall task, suggesting that subjective ratings are limited in their ability to retrieve specific details of a task from long-term memory. To clarify the limits memory imposes on subjective workload assessment, the difficulty of task segments was varied and the workload of specified segments was retrospectively rated. Ratings were collected across manipulations of three levels of segment difficulty. Subjects were assigned to one of two memory groups. In the Before group, subjects knew before performing a block of trials which segment to rate. In the After group, subjects did not know which segment to rate until after performing the block of trials. The subjective ratings, RTs (reaction times), and MTs (movement times) were compared for within-group and between-group differences. Performance measures and subjective evaluations of workload reflected the experimental manipulations. Subjects were sensitive to different difficulty levels and recalled the average workload of task components. Cueing did not appear to help recall, and memory-group differences possibly reflected variations in the groups of subjects, or an additional memory task.
40 CFR 76.11 - Emissions averaging.
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...
Determinants of College Grade Point Averages
Bailey, Paul Dean
2012-01-01
Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…
Computation of the bounce-average code
International Nuclear Information System (INIS)
Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.
1977-01-01
The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
Modeling methane emission via the infinite moving average process
Czech Academy of Sciences Publication Activity Database
Jordanova, D.; Dušek, Jiří; Stehlík, M.
2013-01-01
Roč. 122, - (2013), s. 40-49 ISSN 0169-7439 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA ČR(CZ) GAP504/11/1151 Institutional support: RVO:67179843 Keywords : Environmental chemistry * Pareto tails * t-Hill estimator * Weak consistency * Moving average process * Methane emission model Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013
Rotational averaging of multiphoton absorption cross sections
Energy Technology Data Exchange (ETDEWEB)
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximationMCMC algorithms, for example, a stochastic
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Average Bandwidth Allocation Model of WFQ
Directory of Open Access Journals (Sweden)
Tomáš Balogh
2012-01-01
Full Text Available We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained using the NS2 simulator.
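The abstract does not spell out the iteration itself; as a hedged illustration, a weighted max-min style allocation captures the idea of dividing link capacity among flows in proportion to their weights, with flows that demand less than their share keeping only their demand and returning the surplus for redistribution. The function name and redistribution rule are assumptions for the sketch, not the authors' exact model:

```python
def wfq_average_bandwidth(link_speed, weights, demands):
    """Iteratively assign bandwidth: each still-active flow gets a
    share of the remaining capacity proportional to its weight; any
    flow whose demand fits within its share is frozen at its demand,
    and the surplus is redistributed among the remaining flows."""
    alloc = {f: 0.0 for f in weights}
    active = set(weights)
    capacity = link_speed
    while active:
        total_w = sum(weights[f] for f in active)
        share = {f: capacity * weights[f] / total_w for f in active}
        satisfied = {f for f in active if demands[f] <= share[f]}
        if not satisfied:
            # Every remaining flow is backlogged: split by weight.
            for f in active:
                alloc[f] = share[f]
            break
        for f in satisfied:
            alloc[f] = demands[f]
            capacity -= demands[f]
        active -= satisfied
    return alloc
```

For example, on a 100 Mb/s link with weights 1:1:2 and demands 10/100/100, the light flow keeps 10 and the other two split the remaining 90 in a 1:2 ratio (30 and 60).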
Nonequilibrium statistical averages and thermo field dynamics
International Nuclear Information System (INIS)
Marinaro, A.; Scarpetta, Q.
1984-01-01
An extension of thermo field dynamics is proposed which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion, it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.
An approximate analytical approach to resampling averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.
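For context, the brute-force quantity that such an analytical approximation replaces is the plain Monte Carlo resampling (bootstrap) average, which requires re-evaluating the statistic on many resampled datasets. A generic sketch (names and parameters are illustrative, not from the paper):

```python
import random

def bootstrap_average(data, statistic, n_resamples=2000, seed=0):
    """Monte Carlo resampling average: draw bootstrap samples with
    replacement and average the statistic over them. This is the
    expensive loop that an analytical resampling average avoids."""
    rng = random.Random(seed)
    n = len(data)
    total = 0.0
    for _ in range(n_resamples):
        sample = [data[rng.randrange(n)] for _ in range(n)]
        total += statistic(sample)
    return total / n_resamples
```

When `statistic` is itself a model fit, each of the `n_resamples` evaluations means retraining the model, which is the cost the paper's analytic method sidesteps.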
Beer, Wine, Spirits and Subjective Health
DEFF Research Database (Denmark)
Grønbæk, Morten; Mortensen, Erik Lykke; Mygind, K.
1999-01-01
To examine the association between intake of different types of alcoholic beverages and self-reported subjective health.
Improved averaging for non-null interferometry
Fleig, Jon F.; Murphy, Paul E.
2013-09-01
Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
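A minimal version of per-pixel averaging with invalid-data masking can be sketched as follows. This is a simplified stand-in for the scheme described, not the authors' implementation; the `min_valid` threshold and the use of `None` to mark defective pixels are assumptions:

```python
import statistics

def robust_pixel_average(maps, min_valid=2):
    """Per-pixel mean and standard deviation over repeated phase maps,
    skipping invalid (None) samples. Pixels with too few valid samples
    are reported as None rather than biasing the result."""
    rows, cols = len(maps[0]), len(maps[0][0])
    mean = [[None] * cols for _ in range(rows)]
    std = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [m[r][c] for m in maps if m[r][c] is not None]
            if len(vals) >= min_valid:
                mean[r][c] = statistics.fmean(vals)
                std[r][c] = statistics.stdev(vals)
    return mean, std
```

The algorithm in the abstract goes further (rejecting whole maps with large-area defects and removing alignment drift), but the masking idea is the same: exclude unreliable samples before averaging instead of letting one bad map spoil the result.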
Estimating average glandular dose by measuring glandular rate in mammograms
International Nuclear Information System (INIS)
Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru
2003-01-01
The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
Perceptual learning in Williams syndrome: looking beyond averages.
Directory of Open Access Journals (Sweden)
Patricia Gervan
Full Text Available Williams Syndrome is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e., baseline) and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of the poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa: poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.
Asynchronous Gossip for Averaging and Spectral Ranking
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
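The classical pairwise scheme that both variants build on can be sketched in a few lines: each asynchronous exchange replaces two nodes' values with their mean, preserving the sum and hence driving all values toward the global average. A generic illustration of that baseline, not the authors' reinforcement-learning variant:

```python
import random

def gossip_average(values, rounds=10000, seed=1):
    """Classical asynchronous pairwise gossip: at each step a random
    pair of nodes replaces both values with their mean. Every exchange
    preserves the total, so all nodes converge to the global average."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)  # two distinct nodes
        m = (x[i] + x[j]) / 2.0
        x[i] = x[j] = m
    return x
```

The difficulty the paper highlights arises when the pair-selection process is biased (e.g. by link activation constraints), in which case this simple scheme may converge to something other than the desired average.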
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses statistically averaged descriptions of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
An approach to averaging digitized plantagram curves.
Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B
1994-07-01
The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English-language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German and obtained very similar correlations with the German economic misery index. The results suggest that the millions of books published every year average the authors' shared economic experiences over the past decade.
Books Average Previous Decade of Economic Misery
Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
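The trailing moving average that relates a given year's literary index to the preceding decade of economic misery can be sketched generically. This is illustrative only; the actual index construction in the paper is more involved:

```python
def trailing_moving_average(series, window):
    """Trailing moving average: entry t is the mean of the previous
    `window` values (indices t-window .. t-1), mirroring how a year's
    literary index is compared with the preceding decade's economic
    misery index."""
    out = []
    for t in range(window, len(series) + 1):
        out.append(sum(series[t - window:t]) / window)
    return out
```

With annual misery-index values and `window=11`, each output value summarizes the eleven years preceding (and including the last of) that span, which is where the paper reports the peak goodness of fit.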
Exploiting scale dependence in cosmological averaging
International Nuclear Information System (INIS)
Mattsson, Teppo; Ronkainen, Maria
2008-01-01
We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering, and analysis of, bacterial convergence by chemotaxis, and in applying similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth and globally exponentially stable average models, require vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
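The core idea of gradient-based extremum seeking, perturbing the input with a random dither, demodulating the measured output to estimate the gradient of an unknown map, and ascending that estimate, can be sketched minimally as follows. This is an illustrative toy (two-sided Bernoulli dither on a static quadratic map), not the book's algorithms:

```python
import random

def stochastic_extremum_seek(J, theta0, gain=0.05, amp=0.1, steps=500, seed=0):
    """Gradient-type extremum seeking: apply a random +/- perturbation,
    demodulate the two measured outputs to estimate the gradient of the
    unknown map J, and take a gradient-ascent step."""
    rng = random.Random(seed)
    theta = theta0
    for _ in range(steps):
        eta = rng.choice((-1.0, 1.0))  # random dither sign
        grad_est = (J(theta + amp * eta) - J(theta - amp * eta)) * eta / (2 * amp)
        theta += gain * grad_est       # ascend the gradient estimate
    return theta

# Unknown static map with its maximum at theta* = 2; the seeker only
# observes J's output, never its formula.
theta_hat = stochastic_extremum_seek(lambda t: 5.0 - (t - 2.0) ** 2, theta0=0.0)
```

For this quadratic map the demodulated estimate equals the true gradient, so `theta_hat` converges to 2.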
Regional averaging and scaling in relativistic cosmology
International Nuclear Information System (INIS)
Buchert, Thomas; Carfora, Mauro
2002-01-01
Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities, and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that the cosmological parameters evaluated on the smoothed spatial domain B̄ obey Ω̄_m + Ω̄_R + Ω̄_Λ + Ω̄_Q = 1, where Ω̄_m, Ω̄_R and Ω̄_Λ correspond to the standard Friedmannian parameters, while Ω̄_Q is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias'
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
Average-case analysis of numerical problems
2000-01-01
The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.
Grassmann Averages for Scalable Robust PCA
DEFF Research Database (Denmark)
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
International Nuclear Information System (INIS)
Mletzko, U.
1980-01-01
Visual examination is treated as a method for the control of size and shape of components, surface quality and weld performance. Dye penetrant, magnetic particle and eddy current examinations are treated as methods for the evaluation of surface defects and material properties. The limitations to certain materials, defect sizes and types are shown. (orig./RW)
Operator licensing examiner standards
International Nuclear Information System (INIS)
1983-10-01
The Operator Licensing Examiner Standards provide policy and guidance to NRC examiners and establish the procedures and practices for examining and licensing of applicants for NRC operator licenses pursuant to Part 55 of Title 10 of the Code of Federal Regulations (10 CFR 55). They are intended to assist NRC examiners and facility licensees to understand the examination process better and to provide for equitable and consistent administration of examinations to all applicants by NRC examiners. These standards are not a substitute for the operator licensing regulations and are subject to revision or other internal operator examination licensing policy changes. As appropriate, these standards will be revised periodically to accommodate comments and reflect new information or experience
Politics of modern muslim subjectivities
DEFF Research Database (Denmark)
Jung, Dietrich; Petersen, Marie Juul; Sparre, Sara Lei
Examining modern Muslim identity constructions, the authors introduce a novel analytical framework to Islamic Studies, drawing on theories of successive modernities, sociology of religion, and poststructuralist approaches to modern subjectivity, as well as the results of extensive fieldwork...
Generalized Jackknife Estimators of Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...
Average beta measurement in EXTRAP T1
International Nuclear Information System (INIS)
Hedin, E.R.
1988-12-01
Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_θ in Extrap T1 is described. The results of a series of measurements yielding β_θ as a function of externally applied toroidal field are presented. (author)
HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS
International Nuclear Information System (INIS)
2005-01-01
Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERLs) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL-based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier are simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.
Bayesian Averaging is Well-Temperated
DEFF Research Database (Denmark)
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation is l...
Gibbs equilibrium averages and Bogolyubov measure
International Nuclear Information System (INIS)
Sankovich, D.P.
2011-01-01
Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
Function reconstruction from noisy local averages
International Nuclear Information System (INIS)
Chen Yu; Huang Jianguo; Han Weimin
2008-01-01
A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.
A singularity theorem based on spatial averages
Indian Academy of Sciences (India)
Journal of Physics, July 2007, pp. 31–47. In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no.
Multiphase averaging of periodic soliton equations
International Nuclear Information System (INIS)
Forest, M.G.
1979-01-01
The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
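A moving-average rule of the kind studied in such models can be sketched in a few lines: go long when a short-window MA is above a long-window MA, otherwise go short. This is a minimal illustration with invented prices, not the paper's model:

```python
def ma_crossover_signals(prices, short=3, long=5):
    """Return +1 (long) when the short moving average is above the long
    moving average, else -1. Signals start once the long window is full."""
    def ma(i, w):
        # trailing moving average of the w prices ending at index i
        return sum(prices[i - w + 1:i + 1]) / w
    return [1 if ma(i, short) > ma(i, long) else -1
            for i in range(long - 1, len(prices))]

prices = [10, 11, 12, 13, 12, 11, 10, 9, 10, 12]   # illustrative price path
signals = ma_crossover_signals(prices)
```

On this rising-then-falling path the rule starts long and flips short as the trend reverses.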
Essays on model averaging and political economics
Wang, W.
2013-01-01
This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. the West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple
2010-01-01
7 CFR § 1209.12 (2010-01-01) - On average. Agriculture Regulations of the Department of Agriculture (Continued), AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS...), ... CONSUMER INFORMATION ORDER, Mushroom Promotion, Research, and Consumer Information Order, Definitions, § 1209...
High average-power induction linacs
International Nuclear Information System (INIS)
Prono, D.S.; Barrett, D.; Bowles, E.
1989-01-01
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs
Average Costs versus Net Present Value
E.A. van der Laan (Erwin); R.H. Teunter (Ruud)
2000-01-01
While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives
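The EOQ model mentioned above is the standard setting for the AC approach: the average cost per unit time is ordering cost plus holding cost, minimized by the classic square-root lot size. A minimal sketch (parameter values invented for illustration):

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic EOQ: the order quantity minimizing average cost per unit time."""
    return math.sqrt(2 * demand_rate * order_cost / holding_cost)

def average_cost(Q, demand_rate, order_cost, holding_cost):
    """Average ordering-plus-holding cost per unit time for lot size Q."""
    return demand_rate * order_cost / Q + holding_cost * Q / 2

# Illustrative parameters: 1000 units/year demand, $50 per order, $4/unit/year holding
Q_star = eoq(1000, 50, 4)
cost_at_optimum = average_cost(Q_star, 1000, 50, 4)
```

An NPV analysis would instead discount each cash flow; the abstract's point is that the two frameworks only agree under certain conditions.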
Average beta-beating from random errors
Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department
2018-01-01
The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.
Reliability Estimates for Undergraduate Grade Point Average
Westrick, Paul A.
2017-01-01
Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…
Tendon surveillance requirements - average tendon force
International Nuclear Information System (INIS)
Fulton, J.F.
1982-01-01
Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, is stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
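The check described above, normalize each sampled lift-off force, average the sample, and compare with the required group average, can be sketched as follows. The correction factors are taken as given here (the paper derives them); all numbers are illustrative:

```python
def group_average_check(lift_off_forces, correction_factors, required_average):
    """Normalize each sampled lift-off force with its correction factor,
    average the sample, and compare against the required average tendon
    force for the group (vertical, hoop, or dome tendons)."""
    corrected = [f * c for f, c in zip(lift_off_forces, correction_factors)]
    group_average = sum(corrected) / len(corrected)
    return group_average >= required_average, group_average
```

The boolean result flags whether the sampled group meets the minimum prestress design requirement.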
Directory of Open Access Journals (Sweden)
Luis Ulpiano Pérez Marqués
2013-03-01
Full Text Available Introduction: the difficulty index and the discrimination power are easy-to-calculate indicators, useful for analysing the correspondence between the expected and obtained results of an evaluation instrument. Objective: to evaluate the quality of the questions in the regular final examination of Human Morphophysiology V. Methods: the 265 theory examinations taken by second-year students at Medical Faculty No. 2 of the Medical University of Santiago de Cuba during the 2011-2012 academic year were included in this investigation; the difficulty index and the discrimination power were calculated for each of the 7 questions administered. Results: the alternative-response questions, which assessed content on the blood and the heart, showed a difficulty index below 0.1 and a discrimination power below 0.2, making their reformulation necessary in future evaluation instruments. The highest values for the two indicators were 0.34 and 0.86, respectively, corresponding to a multiple-choice question on blood and lymphatic vessels, followed in order by the open-response questions. Conclusions: the pertinence of most of the questions was demonstrated, notably the capacity of 5 of them to distinguish between high- and low-performing students.
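The two indicators used above are routinely computed from item-level exam data: the difficulty index is the proportion of students answering an item correctly, and the discrimination power is classically the difference in that proportion between the top and bottom 27% of examinees. A minimal sketch (the exact formulas used in the study may differ; data invented):

```python
def item_analysis(scores):
    """scores: list of (total_exam_score, item_correct) pairs, one per student,
    item_correct in {0, 1}. Returns (difficulty index, discrimination power)
    using the classic upper/lower 27% groups."""
    n = len(scores)
    difficulty = sum(c for _, c in scores) / n          # proportion correct
    ranked = sorted(scores, key=lambda s: s[0], reverse=True)
    k = max(1, round(0.27 * n))                         # group size
    upper = sum(c for _, c in ranked[:k]) / k           # top performers
    lower = sum(c for _, c in ranked[-k:]) / k          # bottom performers
    return difficulty, upper - lower

# Ten students: (total exam score, whether they got this item right)
students = [(95, 1), (90, 1), (85, 1), (80, 1), (75, 0),
            (70, 1), (65, 0), (60, 0), (55, 0), (50, 0)]
difficulty, discrimination = item_analysis(students)
```

Here the item is of medium difficulty (0.5) and discriminates perfectly between the top and bottom groups.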
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Directory of Open Access Journals (Sweden)
Carmen BOGHEAN
2013-12-01
Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity by the factors affecting it is conducted by means of the u-substitution method.
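Average work productivity here is gross value added per economically active person, and a substitution-type decomposition splits its change into a value-added effect and a labour effect. A minimal chain-substitution sketch (this is one common variant; the paper's exact method may differ, and the numbers are invented):

```python
def productivity_factor_decomposition(gva0, labour0, gva1, labour1):
    """Chain-substitution decomposition of the change in average
    productivity P = GVA / L between two periods into a GVA effect
    (substituting GVA first) and a labour effect."""
    p0, p1 = gva0 / labour0, gva1 / labour1
    gva_effect = gva1 / labour0 - gva0 / labour0
    labour_effect = gva1 / labour1 - gva1 / labour0
    # the two effects sum exactly to the total change in productivity
    assert abs((gva_effect + labour_effect) - (p1 - p0)) < 1e-9
    return gva_effect, labour_effect

# Illustrative: GVA rises 100 -> 120, active population falls 50 -> 48
g_eff, l_eff = productivity_factor_decomposition(100.0, 50.0, 120.0, 48.0)
```

Both effects are positive here: output grew and the same output was spread over fewer workers.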
Average Transverse Momentum Quantities Approaching the Lightfront
Boer, Daniel
2015-01-01
In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...
Average configuration of the geomagnetic tail
International Nuclear Information System (INIS)
Fairfield, D.H.
1979-01-01
Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (B̄_z = 3.γ) than near midnight (B̄_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed
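The fitted aberration relation above is a simple linear function of position across the tail and is trivial to evaluate; a sketch of it as stated in the abstract:

```python
def tail_field_angle(y_sm_re):
    """Average deviation (degrees) of the projected tail field from the
    x axis, as a function of Y_SM in Earth radii, per the fitted relation
    alpha = -0.9 * Y_SM - 2.7."""
    return -0.9 * y_sm_re - 2.7
```

Note the fit crosses zero near Y_SM = -3 R_E, i.e. the average projected field is aligned with the x axis slightly dawnward of midnight, consistent with solar wind aberration.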
Unscrambling The "Average User" Of Habbo Hotel
Directory of Open Access Journals (Sweden)
Mikael Johnson
2007-01-01
Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.
Association between objective and subjective measurements of comfort and discomfort in hand tools
Kuijt-Evers, L.F.M.; Bosch, T.; Huysmans, M.A.; Looze, M.P.de; Vink, P.
2007-01-01
In the current study, the relationship between objective measurements and subjective experienced comfort and discomfort in using handsaws was examined. Twelve carpenters evaluated five different handsaws. Objective measures of contact pressure (average pressure, pressure area and pressure-time (P-t)
Average of delta: a new quality control tool for clinical laboratories.
Jones, Graham R D
2016-01-01
Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
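The average-of-delta statistic described above is straightforward to compute from a laboratory's repeat-testing stream: take each patient's consecutive result differences, then average the most recent n of them. A minimal sketch (data layout and numbers invented for illustration):

```python
def deltas(results_by_patient):
    """Delta values: each patient's current result minus their previous one."""
    out = []
    for series in results_by_patient.values():
        out.extend(b - a for a, b in zip(series, series[1:]))
    return out

def average_of_delta(results_by_patient, n=10):
    """Mean of the most recent n delta values. In a stable assay this
    hovers near zero; a step change in assay bias shifts it by that bias."""
    recent = deltas(results_by_patient)[-n:]
    return sum(recent) / len(recent)

# Stable assay: repeat results match, so all deltas are zero.
stable = {"p1": [5.0, 5.0], "p2": [4.8, 4.8], "p3": [5.2, 5.2]}
# Same patients after a +0.5 assay bias appears on the repeat measurements.
biased = {p: [r[0], r[1] + 0.5] for p, r in stable.items()}
```

Comparing `average_of_delta(stable)` with `average_of_delta(biased)` shows the statistic moving from ~0 to ~0.5, the added bias.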
International Nuclear Information System (INIS)
Lentle, B.C.
1989-01-01
This paper reports on radionuclide examinations of the pancreas. The pancreas, situated retroperitoneally high in the epigastrium, was a particularly difficult organ to image noninvasively before ultrasonography and computed tomography (CT) became available. Indeed the organ still remains difficult to examine in some patients, a fact reflected in the variety of methods available to evaluate pancreatic morphology. It is something of a paradox that the pancreas is metabolically active and physiologically important but that its examination by radionuclide methods has virtually ceased to have any role in day-to-day clinical practice. To some extent this is caused by the tendency of the pancreas's commonest gross diseases (carcinoma and pancreatitis, for example) to result in nonfunction of the entire organ. Disorders of pancreatic endocrine function have generally not required imaging methods for diagnosis, although an understanding of diabetes mellitus and its nosology has been advanced by radioimmunoassay of plasma insulin concentrations
International Nuclear Information System (INIS)
Thoeni, R.F.
1989-01-01
The radiographic examination of the upper and lower gastrointestinal tract has been changed drastically by the introduction of endoscopic procedures that are now widely available. However, the diagnostic approach to the small bowel remains largely unchanged. Ultrasonography, computed tomography (CT), and magnetic resonance imaging (MRI) are occasionally employed but are not primary imaging modalities for small bowel disease. Even though small bowel endoscopes are available, they are infrequently used, and no scientific paper on their employment has been published. Barium studies are still the mainstay for evaluating patients with suspected small bowel abnormalities. This paper discusses the anatomy and physiology of the small bowel and lists the various types of barium and pharmacologic aids used for examining it. The different radiographic methods for examining the small bowel with barium, including the small bowel follow-through (SBFT), dedicated SBFT, enteroclysis, peroral pneumocolon (PPC), and retrograde small bowel examination, are described and put into perspective. To some degree such an undertaking must be a personal opinion, but certain conclusions can be made based on the available literature and practical experience. This analysis is based on the assumption that all the various barium techniques are performed with equal expertise by the individual radiologist, thus excluding bias from unfamiliarity with certain aspects of a procedure, such as intubation or skilled compression during fluoroscopy. Also, the use of water-soluble contrast material, CT, and MRI for evaluating suspected small bowel abnormalities is outlined
Operator product expansion and its thermal average
Energy Technology Data Exchange (ETDEWEB)
Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)
1998-05-01
QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.
Fluctuations of wavefunctions about their classical average
International Nuclear Information System (INIS)
Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H
2003-01-01
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics
Phase-averaged transport for quasiperiodic Hamiltonians
Bellissard, J; Schulz-Baldes, H
2002-01-01
For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.
Baseline-dependent averaging in radio interferometry
Wijnholds, S. J.; Willis, A. G.; Salvini, S.
2018-05-01
This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
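The scaling at the heart of baseline-dependent averaging is easy to illustrate. The sketch below uses a generic sinc-type fringe-smearing estimate and made-up observing parameters, not the paper's closed-form expressions:

```python
import numpy as np

OMEGA_E = 7.292e-5  # Earth's rotation rate [rad/s]

def smearing_loss(baseline_m, t_avg_s, wavelength_m, l=0.01):
    """Approximate amplitude loss from time-averaging a rotating fringe
    at direction cosine l from the phase centre (sinc-type estimate)."""
    fringe_rate_turns = OMEGA_E * (baseline_m / wavelength_m) * l
    return np.sinc(fringe_rate_turns * t_avg_s)  # sin(pi x)/(pi x)

# Baseline-dependent averaging: scale the dump time inversely with the
# baseline length so every baseline suffers the same decorrelation loss,
# while short baselines are averaged much longer (smaller data volume).
lam, t_ref, b_ref = 0.21, 8.0, 1_000.0
for b in (1_000.0, 10_000.0, 100_000.0):
    t = t_ref * b_ref / b
    print(f"B = {b:>9.0f} m, t_avg = {t:6.2f} s, "
          f"loss = {smearing_loss(b, t, lam):.6f}")
```

Because the sinc argument depends only on the product of baseline length and averaging time, all three baselines report the identical loss.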
Multistage parallel-serial time averaging filters
International Nuclear Information System (INIS)
Theodosiou, G.E.
1980-01-01
Here, a new time-averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter can be considered a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter. The main advantages of such a filter over a serial one are much lower electronic gate jitter and time delay for the same amount of total time-uncertainty reduction. (orig.)
Time-averaged MSD of Brownian motion
International Nuclear Information System (INIS)
Andreanov, Alexei; Grebenkov, Denis S
2012-01-01
We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution
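The quantity under study is straightforward to compute from a single trajectory. A minimal sketch, assuming a simulated 1-D Brownian path (the mapping onto chains of coupled oscillators used for the exact Laplace transform is not reproduced here):

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged MSD of trajectory x at integer lag n:
    (1/(N-n)) * sum_k (x[k+n] - x[k])**2."""
    d = x[lag:] - x[:-lag]
    return np.mean(d * d)

rng = np.random.default_rng(0)
dt, D, N = 0.01, 1.0, 100_000
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), N))  # Brownian path

# For Brownian motion the TAMSD fluctuates around 2*D*(n*dt); the size
# of those fluctuations grows with the lag, which is what the paper's
# distributional results quantify.
for n in (1, 10, 100):
    print(n, tamsd(x, n) / (2 * D * n * dt))
```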
Time-dependent angularly averaged inverse transport
International Nuclear Information System (INIS)
Bal, Guillaume; Jollivet, Alexandre
2009-01-01
This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain
Independence, Odd Girth, and Average Degree
DEFF Research Database (Denmark)
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.
Bootstrapping Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust…
Average nuclear properties based on the statistical model
International Nuclear Information System (INIS)
El-Jaick, L.J.
1974-01-01
The rough properties of nuclei were investigated with the statistical model, in systems with equal and with different numbers of protons and neutrons treated separately, taking the Coulomb energy into account in the latter case. Some average nuclear properties were calculated from the energy density of nuclear matter, using the Weizsäcker-Bethe semiempirical mass formula generalized for compressible nuclei. In the study of the surface energy coefficient, the strong influence exerted by the Coulomb energy and the nuclear compressibility was verified. For a good fit of the beta-stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.)
Construction of average adult Japanese voxel phantoms for dose assessment
International Nuclear Information System (INIS)
Sato, Kaoru; Takahashi, Fumiaki; Satoh, Daiki; Endo, Akira
2011-12-01
The International Commission on Radiological Protection (ICRP) adopted adult reference voxel phantoms based on the physiological and anatomical reference data of Caucasians in October 2007. The organs and tissues of these phantoms were segmented on the basis of ICRP Publication 103. In the future, dose coefficients for internal dose and dose conversion coefficients for external dose calculated using the adult reference voxel phantoms will be widely used in radiation protection. On the other hand, the body sizes and organ masses of adult Japanese are generally smaller than those of adult Caucasians. In addition, there are cases in which anatomical characteristics such as body size, organ mass and posture of the subjects influence the organ doses in dose assessments for medical treatments and radiation accidents. Therefore, human phantoms with the average anatomical characteristics of the Japanese were needed. The authors constructed average adult Japanese male and female voxel phantoms by modifying the previously developed high-resolution adult male (JM) and female (JF) voxel phantoms in the following three aspects: (1) the heights and weights were brought into agreement with the Japanese averages; (2) the masses of organs and tissues were adjusted to the Japanese averages within 10%; (3) the organs and tissues newly added for evaluation of the effective dose in ICRP Publication 103 were modeled. In this study, the organ masses, distances between organs, specific absorbed fractions (SAFs) and dose conversion coefficients of these phantoms were compared with those evaluated using the ICRP adult reference voxel phantoms. This report provides valuable information on the anatomical and dosimetric characteristics of the average adult Japanese male and female voxel phantoms developed as reference phantoms for adult Japanese. (author)
Comprehensive time average digital holographic vibrometry
Czech Academy of Sciences Publication Activity Database
Psota, Pavel; Lédl, Vít; Doleček, Roman; Mokrý, P.; Vojtíšek, Petr; Václavík, J.
2016-01-01
Roč. 55, č. 12 (2016), č. článku 121726. ISSN 0091-3286 R&D Projects: GA ČR(CZ) GA16-11965S Institutional support: RVO:61389021 Keywords : vibration analysis * digital holography * frequency shifting * phase modulation * acousto-optic modulators Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 1.082, year: 2016 http://dx.doi.org/10.1117/1.oe.55.12.121726
De Luca, G.; Magnus, J.R.
2011-01-01
In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.
Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.
Dirks, Jean; And Others
1983-01-01
Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)
Increase in average foveal thickness after internal limiting membrane peeling
Directory of Open Access Journals (Sweden)
Kumagai K
2017-04-01
Kazuyuki Kumagai,1 Mariko Furukawa,1 Tetsuyuki Suetsugu,1 Nobuchika Ogino2 1Department of Ophthalmology, Kami-iida Daiichi General Hospital, 2Department of Ophthalmology, Nishigaki Eye Clinic, Aichi, Japan Purpose: To report the findings in three cases in which the average foveal thickness increased after a thin epiretinal membrane (ERM) was removed by vitrectomy with internal limiting membrane (ILM) peeling. Methods: The foveal contour was normal preoperatively in all eyes. All cases underwent successful phacovitrectomy with ILM peeling for a thin ERM. Optical coherence tomography (OCT) images were examined before and after surgery. The changes in the average foveal (1 mm) thickness and the foveal areas within 500 µm of the foveal center were measured. The postoperative changes in the inner and outer retinal areas determined from the cross-sectional OCT images were analyzed. Results: The average foveal thickness and the inner and outer foveal areas increased significantly after surgery in each of the three cases. The percentage increase in the average foveal thickness relative to baseline was 26% in Case 1, 29% in Case 2, and 31% in Case 3. The percentage increase in the foveal inner retinal area was 71% in Case 1, 113% in Case 2, and 110% in Case 3, and the percentage increase in the foveal outer retinal area was 8% in Case 1, 13% in Case 2, and 18% in Case 3. Conclusion: The increase in the average foveal thickness and the inner and outer foveal areas suggests that a centripetal movement of the inner and outer retinal layers toward the foveal center probably occurred due to the ILM peeling. Keywords: internal limiting membrane, optical coherence tomography, average foveal thickness, epiretinal membrane, vitrectomy
Averaged null energy condition from causality
Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein
2017-07-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.
Beta-energy averaging and beta spectra
International Nuclear Information System (INIS)
Stamatelatos, M.G.; England, T.R.
1976-07-01
A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by ''exact'' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.
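As a back-of-the-envelope illustration of spectrum averaging (not the authors' method, which accounts for the Fermi-function effects this toy version deliberately drops), the average energy of an allowed beta spectrum can be obtained by direct numerical averaging:

```python
import numpy as np

def average_beta_energy(q_mev, n=100_000):
    """Mean kinetic energy of an allowed beta spectrum with endpoint
    q_mev, neglecting the Fermi (Coulomb) correction:
    N(E) ~ p * W * (Q - E)^2, where W = E + m_e c^2 and
    p = sqrt(W^2 - (m_e c^2)^2)."""
    me = 0.511                                   # electron rest mass [MeV]
    e = np.linspace(1e-6, q_mev - 1e-6, n)       # kinetic-energy grid
    w = e + me
    p = np.sqrt(w * w - me * me)
    spec = p * w * (q_mev - e) ** 2              # spectral shape
    return np.sum(e * spec) / np.sum(spec)       # spectrum-weighted mean

print(round(average_beta_energy(1.0), 3))        # a fraction of the endpoint
```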
Asymptotic Time Averages and Frequency Distributions
Directory of Open Access Journals (Sweden)
Muhammad El-Taha
2016-01-01
Consider an arbitrary nonnegative deterministic process (in a stochastic setting, {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (−∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
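For a finite discrete-time sample path, the identity the paper extends is elementary: the time average of f along the path equals the expectation of f under the path's empirical frequency distribution. A small sketch with toy data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.integers(0, 5, size=10_000)   # one fixed sample path

def f(s):
    return s * s                      # any measurable function

# Long-run time average of f(X(t)) along the path
time_avg = np.mean(f(x))

# Expectation of f under the empirical frequency distribution of the path
values, counts = np.unique(x, return_counts=True)
freq_avg = np.sum(f(values) * counts) / counts.sum()

print(time_avg, freq_avg)  # equal: both regroup the same finite sum
```

The paper's contribution is the unbounded-time, general-state version of this bookkeeping, under conditions weaker than uniform integrability.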
Chaotic Universe, Friedmannian on the average 2
Energy Technology Data Exchange (ETDEWEB)
Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij
1980-11-01
The cosmological solutions are found for the equations for correlators describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h >> 1 are excited. For the equation of state of matter p = nε, the kind of solution depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one at t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor lies higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained on the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.
Averaging in the presence of sliding errors
International Nuclear Information System (INIS)
Yost, G.P.
1991-08-01
In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
Operator licensing examiner standards
International Nuclear Information System (INIS)
1994-06-01
The Operator Licensing Examiner Standards provide policy and guidance to NRC examiners and establish the procedures and practices for examining licensees and applicants for reactor operator and senior reactor operator licenses at power reactor facilities pursuant to Part 55 of Title 10 of the Code of Federal Regulations (10 CFR 55). The Examiner Standards are intended to assist NRC examiners and facility licensees to better understand the initial and requalification examination processes and to ensure the equitable and consistent administration of examinations to all applicants. These standards are not a substitute for the operator licensing regulations and are subject to revision or other internal operator licensing policy changes. Revision 7 was published in January 1993 and became effective in August 1993. Supplement 1 is being issued primarily to implement administrative changes to the requalification examination program resulting from the amendment to 10 CFR 55 that eliminated the requirement for every licensed operator to pass an NRC-conducted requalification examination as a condition for license renewal. The supplement does not substantially alter either the initial or requalification examination processes and will become effective 30 days after its publication is noticed in the Federal Register. The corporate notification letters issued after the effective date will provide facility licensees with at least 90 days notice that the examinations will be administered in accordance with the revised procedures
Preparing Empirical Methodologies to Examine Enactive Subjects Experiencing Musical Emotions
DEFF Research Database (Denmark)
Christensen, Justin
2016-01-01
in listeners. Many of these theories search for universal emotional essences and cause-and-effect relationships that often result in erasing the body from these experiences. Still, after reducing these emotional responses to discrete categories or localized brain functions, these theories have not been very successful in finding universal emotional essences in response to music. In this paper, I argue that we need to bring the body back into this research, to allow for listener variability, and to include multiple levels of focus to help find meaningful relationships of emotional responses. I also appeal…
Subjective Oral Health in Dutch Adults
Directory of Open Access Journals (Sweden)
Gijsbert H.W. Verrips
2013-05-01
Aim: To determine whether the subjective oral health (SOH) of the Dutch adult population was associated with clinical and demographic variables. Methods: A clinical examination was conducted in a sample of 1,018 people from the Dutch city of ‘s-Hertogenbosch. SOH was measured using the Dutch translation of the short form of the Oral Health Impact Profile (OHIP-NL14). Results: The average score on the OHIP-NL14 was 2.8 ± 5.9, and 51% of the respondents had a score of 0. Dental status was the most important predictor of SOH. Conclusions: The SOH in the Dutch adult population was much better than in groups of adults in Australia, the United Kingdom and New Zealand. Nevertheless, there were important variations in SOH related to dental and socio-economic status.
Relationship of Compressed Breast Thickness and Average Glandular Dose According to Focus/Filter
International Nuclear Information System (INIS)
Lee, In Ja
2009-01-01
The study examined the relationship between compressed breast thickness and average glandular dose (AGD) among 1,969 outpatients who underwent breast X-ray examinations in a university hospital over the 10 months from July 1st, 2007 to April 30th, 2008. It then analyzed the results acquired from 3,900 cases of cranio-caudal (CC) views taken with the breasts compressed (13-15 daN). The following conclusions were drawn from the analysis. 1. Subjects in their 40s and 50s accounted for 2,679 of the 3,900 cases, or 68.69% of the total. 2. In terms of distribution by focus/filter, 41.0% was Mo/Mo, 34.8% Mo/Rh, and 24.2% Rh/Rh. 3. The average compressed breast thickness was 26.91 mm at Mo/Mo, 38.84 mm at Mo/Rh, and 48.80 mm at Rh/Rh; the average over all cases was 36.27 mm. 4. AGD was 1.27 mGy at Mo/Mo, 1.55 mGy at Mo/Rh, and 1.42 mGy at Rh/Rh; the average over all cases was 1.43 mGy. 5. The relationship of AGD to compressed breast thickness was y=0.0318x + 0.470 at Mo/Mo, y=0.0206x + 0.709 at Mo/Rh, and y=0.0248x + 0.335 at Rh/Rh. AGD was strongly influenced by the compressed breast thickness, with the greatest variation with thickness observed at Mo/Mo.
New Nordic diet versus average Danish diet
DEFF Research Database (Denmark)
Khakimov, Bekzod; Poulsen, Sanne Kellebjerg; Savorani, Francesco
2016-01-01
and 3-hydroxybutanoic acid were related to a higher weight loss, while higher concentrations of salicylic, lactic and N-aspartic acids, and 1,5-anhydro-D-sorbitol were related to a lower weight loss. Specific gender and seasonal differences were also observed. The study strongly indicates that healthy… metabolites reflecting specific differences in the diets, especially intake of plant foods and seafood, and in energy metabolism related to ketone bodies and gluconeogenesis, formed the predominant metabolite pattern discriminating the intervention groups. Among NND subjects higher levels of vaccenic acid…
High average power linear induction accelerator development
International Nuclear Information System (INIS)
Bayless, J.R.; Adler, R.J.
1987-07-01
There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs
FEL system with homogeneous average output
Energy Technology Data Exchange (ETDEWEB)
Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph
2018-01-16
A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest and accelerating the particles to full energy, resulting in distinct and independently controlled (by the choice of phase offset) phase-energy correlations, or chirps, on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy-recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions (M56 and higher-order terms), which are selected to compress all three bunch trains at the FEL with the higher-order terms managed.
Quetelet, the average man and medical knowledge.
Caponi, Sandra
2013-01-01
Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorships. We also show the utility of our approach in devising a ratings scheme, which we apply to the data from the NetFlix prize, finding a significant improvement using our method over a baseline.
Angle-averaged Compton cross sections
International Nuclear Information System (INIS)
Nickel, G.H.
1983-01-01
The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; αₛ = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
Average Gait Differential Image Based Human Recognition
Directory of Open Access Journals (Sweden)
Jinyan Chen
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption for gait-based recognition.
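The construction of the feature image is straightforward to sketch (toy data below, not the CASIA silhouettes):

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """Accumulate the absolute differences between adjacent binary
    silhouette frames and average them over the sequence (AGDI)."""
    s = np.asarray(silhouettes, dtype=float)   # shape (T, H, W), values 0/1
    diffs = np.abs(s[1:] - s[:-1])             # frame-to-frame changes
    return diffs.mean(axis=0)

# Toy sequence: a single "moving pixel" sweeping across a 1x5 frame
frames = [np.eye(1, 5, k=i) for i in range(5)]
agdi = average_gait_differential_image(frames)
print(agdi)  # pixels in mid-sweep change more often than the endpoints
```

2DPCA would then be applied to such feature images for recognition; that step is omitted here.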
The balanced survivor average causal effect.
Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken
2013-05-07
Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.
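The core comparison, between equal fractions of the longest survivors in each arm, can be sketched as follows (a bare-bones illustration on synthetic data; the paper's bias expressions and sensitivity analyses are omitted):

```python
import numpy as np

def balanced_sace_estimate(y_treat, surv_treat, y_ctrl, surv_ctrl, frac=0.5):
    """Difference in mean longitudinal outcome between the top `frac`
    longest-surviving patients of the treatment and control groups."""
    def top_mean(y, s):
        k = max(1, int(round(frac * len(s))))
        idx = np.argsort(s)[-k:]        # indices of the k longest survivors
        return y[idx].mean()
    return top_mean(y_treat, surv_treat) - top_mean(y_ctrl, surv_ctrl)

# Toy check: identical survival in both arms, outcome shifted by +2.0,
# so the estimate should recover the shift
rng = np.random.default_rng(4)
surv = rng.exponential(5.0, size=200)
y_ctrl = rng.normal(0.0, 1.0, size=200)
y_treat = y_ctrl + 2.0
print(balanced_sace_estimate(y_treat, surv, y_ctrl, surv))  # ~2.0
```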
The Pulsair 3000 tonometer--how many readings need to be taken to ensure accuracy of the average?
McCaghrey, G E; Matthews, F E
2001-07-01
Manufacturers of non-contact tonometers recommend that a number of readings are taken on each eye, and an average obtained. With the Keeler Pulsair 3000 it is advised to take four readings, and average these. This report analyses readings in 100 subjects, and compares the first reading, and the averages of the first two and first three readings with the "machine standard" of the average of four readings. It is found that, in the subject group investigated, the average of three readings is not different from the average of four in 95% of individuals, with equivalence defined as +/- 1.0 mmHg.
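The comparison of cumulative averages against the four-reading "machine standard" can be illustrated with a small simulation (the noise levels and subject count below are assumed for illustration, not taken from the report):

```python
import numpy as np

rng = np.random.default_rng(1)

true_iop = rng.normal(16.0, 3.0, 100)                           # per-subject true IOP, mmHg
readings = true_iop[:, None] + rng.normal(0.0, 1.5, (100, 4))   # four noisy readings each

avg3 = readings[:, :3].mean(axis=1)      # average of the first three readings
avg4 = readings.mean(axis=1)             # the four-reading "machine standard"
agree = float(np.mean(np.abs(avg3 - avg4) <= 1.0))  # fraction within +/- 1.0 mmHg
print(round(agree, 2))
```

Because the fourth reading shifts the average by only a quarter of its own deviation, three-reading averages agree with the standard within the ±1.0 mmHg equivalence band for the large majority of simulated subjects, mirroring the study's finding.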
Dental state and subjective chewing ability of institutionalized elderly people.
Ekelund, R
1989-02-01
The purpose of the present study was to investigate the dental state of the elderly, to provide a subjective appraisal of their chewing ability and their inability to eat certain foods because of their poor dental state. The subjects were 480 residents of 24 municipal old people's homes in different parts of Finland. Of the subjects, 153 were men and 327 women, and their ages ranged from 65 to 100 years. The methods used were clinical examination and interview. The clinical examination revealed that 68% of the subjects had no natural teeth, and 22% had neither natural nor artificial teeth. The number of teeth in dentate subjects was small (average 7.6), and the condition mostly poor. Only 2% had any serviceable counterparts. 51% of the subjects wore dentures: 57 subjects in the maxilla alone, three in the mandible alone and 186 in both maxilla and mandible. 41% said that because of their teeth they were unable to eat some foods they would have liked to eat, crisp bread being mentioned most often as such a food (85% of those with chewing difficulties). Edentulous subjects and dentate subjects wearing both maxillary and mandibular dentures said more often than those without dentures that they could eat everything; those without any teeth had most often (59%) to avoid some foods. More attention should be given to the dental condition and the masticatory function of the elderly, especially of those living in institutions, to ensure that they are comfortable physically, psychologically, and socially for the rest of their lives.
Industrial Applications of High Average Power FELs
Shinn, Michelle D
2005-01-01
The use of lasers for material processing continues to expand, and annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts; such processes have been demonstrated but are not yet commercial. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost per photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
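The basic idea, recovering a free energy profile by integrating the average force along the selected coordinate, can be illustrated with a one-dimensional toy model where the mean constrained force is known analytically (the harmonic potential and spring constant below are illustrative assumptions, not the systems studied in the paper):

```python
import numpy as np

k = 2.0                              # spring constant of a toy potential U(x) = k*x**2/2
xs = np.linspace(0.0, 1.0, 101)      # grid along the selected coordinate
mean_force = -k * xs                 # average force at each fixed coordinate value (analytic here)

# free energy change = minus the integral of the average force (trapezoidal rule)
dA = -float(np.sum(0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xs)))
print(round(dA, 3))                  # exact answer for this potential: k/2 = 1.0
```

In a real simulation the analytic `mean_force` line is replaced by time averages of the instantaneous force sampled at each coordinate value, constrained or unconstrained, which is exactly the quantity the paper's formula supplies.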
Geographic Gossip: Efficient Averaging for Sensor Networks
Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
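Standard pairwise gossip, the baseline this paper improves upon, can be sketched in a few lines: each step picks a random node and averages its value with a ring neighbor, so all values converge to the global mean (the node count and iteration budget below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 20
x = rng.normal(size=n)                 # initial sensor measurements
target = x.mean()                      # pairwise averaging preserves the sum, hence the mean

# standard randomized gossip on a ring: a random node averages with its neighbor
for _ in range(50_000):
    i = int(rng.integers(n))
    j = (i + 1) % n                    # ring neighbor
    x[i] = x[j] = 0.5 * (x[i] + x[j])

max_dev = float(np.max(np.abs(x - target)))
print(max_dev < 1e-3)
```

The many iterations needed even for this tiny ring reflect the slow random-walk mixing the abstract mentions; geographic gossip reduces that cost by routing averages over longer distances.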
High-average-power solid state lasers
International Nuclear Information System (INIS)
Summers, M.A.
1989-01-01
In 1987, a broad-based, aggressive R&D program was begun, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and application of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high-quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wavefront aberrations in zig-zag slabs; understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs; and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs
The concept of average LET values determination
International Nuclear Information System (INIS)
Makarewicz, M.
1981-01-01
The concept of average LET (linear energy transfer) values determination, i.e. ordinary moments of LET in absorbed dose distribution vs. LET of ionizing radiation of any kind and any spectrum (even the unknown ones) has been presented. The method is based on measurement of ionization current with several values of voltage supplying an ionization chamber operating in conditions of columnar recombination of ions or ion recombination in clusters while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on LET of radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)
On spectral averages in nuclear spectroscopy
International Nuclear Information System (INIS)
Verbaarschot, J.J.M.
1982-01-01
In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed-angular-momentum projection traces into fixed-angular-momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)
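The centroid and variance of an eigenvalue distribution, the low moments that statistical spectroscopy works with, can be obtained from traces without any diagonalization, as a quick numerical check illustrates (the random symmetric matrix below merely stands in for a model-space Hamiltonian):

```python
import numpy as np

rng = np.random.default_rng(3)

d = 200
A = rng.normal(size=(d, d))
H = (A + A.T) / np.sqrt(2)             # random symmetric matrix as a stand-in Hamiltonian

# low moments of the eigenvalue distribution directly from traces
centroid = np.trace(H) / d             # first moment: mean eigenvalue
variance = np.trace(H @ H) / d - centroid**2   # second central moment

# cross-check against explicit diagonalization
eig = np.linalg.eigvalsh(H)
print(bool(np.isclose(centroid, eig.mean())), bool(np.isclose(variance, eig.var())))
```

The propagation methods of the thesis generalize exactly this trick: moments over restricted configuration spaces are propagated from simple traces instead of being computed from explicit eigensolutions.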
Bildirici, Melike; Sonustun, Fulya Ozaksoy; Sonustun, Bahri
2018-01-01
Within the framework of chaos theory, concepts such as complexity, determinism, quantum mechanics, relativity, multiple equilibria, (continuous) instability, nonlinearity, heterogeneous agents, and irregularity have been widely questioned in economics. It has been noticed that linear models are insufficient for analyzing the unpredictable, irregular and noncyclical oscillations of economies, and for predicting bubbles, financial crises, and business cycles in financial markets. Therefore, economists have attached great importance to appropriate tools for modelling the non-linear dynamical structures and chaotic behaviors of economies, especially in macroeconomics and financial economics. In this paper, we aim to model the chaotic structure of exchange rates (USD-TL and EUR-TL). To determine non-linear patterns in the selected time series, daily returns of the exchange rates were tested with the BDS test over the period from January 01, 2002 to May 11, 2017, which covers the era after the 2001 financial crisis. After establishing the non-linear structure of the selected time series, the chaotic characteristics of the selected time period were examined by means of Lyapunov exponents. The findings verify the existence of a chaotic structure in the exchange rate returns over the analyzed period.
Sensibility and Subjectivity: Levinas’ Traumatic Subject
Directory of Open Access Journals (Sweden)
Rashmika Pandya
2011-02-01
Full Text Available The importance of Levinas' notions of sensibility and subjectivity is evident in the revision of phenomenological method by current phenomenologists such as Jean-Luc Marion and Michel Henry. The criticisms of key tenets of classical phenomenology, intentionality and reduction, are of particular note. However, there are problems with Levinas' characterization of subjectivity as essentially sensible. In "Totality and Infinity" and "Otherwise than Being", Levinas criticizes and recasts the traditional notion of subjectivity, particularly the notion of the subject as first and foremost a rational subject. The subject in Levinas' works is characterized more by its sensibility and affectedness than by its capacity to reason or affect its world. Levinas ties rationality to economy and suggests an alternative notion of reason that leads to his analysis of the ethical relation as the face-to-face encounter. The 'origin' of the social relation is located not in our capacity to know but rather in a sensibility that is diametrically opposed to reason understood as economy. I argue that the opposition in Levinas' thought between reason and sensibility is problematic and essentially leads to a self-conflicted subject. In fact, it would seem that violence characterizes the subject's self-relation and, thus, is also inscribed at the base of the social relation. Rather than overcoming a problematic tendency to dualistic thought in philosophy, Levinas merely reverses the traditional hierarchies of reason/emotion, subject/object and self/other.
Characterizing individual painDETECT symptoms by average pain severity
Directory of Open Access Journals (Sweden)
Sadosky A
2016-07-01
Full Text Available Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY, 2Eliassen Group, New London, CT, 3Outcomes Research Consultant, New York, NY, 4Pfizer Inc, Groton, CT, USA Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness for mild vs moderate pain) and the highest probability was 76.4% (on cold/heat for mild vs severe pain). The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner.
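The ridit-style quantity reported here, the probability that a randomly selected subject from one severity level has a more favorable outcome than one from another level, can be computed directly from two samples of ordinal scores, counting ties as half (the scores below are made up for illustration):

```python
import numpy as np

def prob_better(a, b):
    """Probability that a random draw from `a` is lower (less severe)
    than a random draw from `b`, with ties counted as half."""
    a = np.asarray(a)[:, None]
    b = np.asarray(b)[None, :]
    return float(((a < b) + 0.5 * (a == b)).mean())

mild = [1, 2, 2, 3]        # hypothetical item-severity scores, mild-pain group
severe = [3, 4, 4, 5]      # hypothetical item-severity scores, severe-pain group
p = prob_better(mild, severe)
print(p)                   # 15.5 favorable half-weighted pairs out of 16 = 0.96875
```

This probability of superiority is the same quantity that underlies the Wilcoxon rank sum statistic, which is why the paper pairs the two analyses.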
Comparison of examination grades using item response theory : a case study
Korobko, O.B.
2007-01-01
In item response theory (IRT), mathematical models are applied to analyze data from tests and questionnaires used to measure abilities, proficiency, personality traits and attitudes. This thesis is concerned with comparison of subjects, students and schools based on average examination grades using
Time-dependence and averaging techniques in atomic photoionization calculations
International Nuclear Information System (INIS)
Scheibner, K.F.
1984-01-01
Two distinct problems in the development and application of averaging techniques in photoionization calculations are considered. The first part of the thesis is concerned with the specific problem of near-resonant three-photon ionization in hydrogen, a process for which no cross section exists. Effects of the inclusion of the laser pulse characteristics (both temporal and spatial) on the dynamics of the ionization probability and of the metastable state probability are examined. It is found, for example, that the ionization probability can decrease with increasing field intensity. The temporal profile of the laser pulse is found to affect the dynamics very little, whereas the spatial character of the pulse can affect the results drastically. In the second part of the thesis, techniques are developed for calculating averaged cross sections directly, without first calculating a detailed cross section as an intermediate step. A variation of the moment technique and a new method based on the stabilization technique are applied successfully to atomic hydrogen and helium.
Aarthi, G.; Ramachandra Reddy, G.
2018-03-01
In this paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization Shift Keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. To further enhance the ASE, we incorporate aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA scheme improves ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and achieves an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence; adding aperture averaging raises this to 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.
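The qualitative ranking reported, ORA outperforming CIFR, follows from how the two schemes use the fading distribution: ORA adapts the rate to every channel state, while CIFR spends power pre-inverting the channel and then sends one fixed rate. A Monte Carlo sketch makes this concrete (the lognormal turbulence model, SNR scaling, and parameter values are illustrative assumptions, not the paper's channel model):

```python
import numpy as np

rng = np.random.default_rng(4)

# lognormal irradiance fluctuations as a weak-turbulence stand-in, normalized to E[h] = 1
sigma = 0.5
h = rng.lognormal(-sigma**2 / 2, sigma, 1_000_000)
snr = 10.0 * h**2                      # instantaneous electrical SNR (illustrative scaling)

ora = float(np.mean(np.log2(1 + snr)))            # rate adapted to each fading state
cifr = float(np.log2(1 + 1 / np.mean(1 / snr)))   # fixed rate after channel inversion
print(ora > cifr)
```

The gap between the two averages widens as the turbulence variance grows, which is consistent with the paper's observation that ORA's advantage shows up under moderate and strong turbulence.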
Uysal, Recep; Satici, Seydi Ahmet; Akin, Ahmet
2013-12-01
This study examined the mediating effects of Facebook addiction on the relationship between subjective vitality and subjective happiness. 297 university students (157 women, 140 men; M age = 20.1 yr., SD = 1.3) were administered the Facebook Addiction Scale, the Subjective Vitality Scale, and the Subjective Happiness Scale. Hierarchical regression analysis showed that Facebook addiction partially mediated the relationship between subjective vitality and subjective happiness.
Geum-Hee Jeong
2008-01-01
A discriminant analysis was conducted to investigate how an essay, a mathematics/science type of essay, a college scholastic ability test, and grade point average affect acceptance to a pre-med course at a Korean medical school. Subjects included 122 and 385 applicants for, respectively, early and regular admission to a medical school in Korea. The early admission examination was conducted in October 2007, and the regular admission examination was conducted in January 2008. The analysis of ea...
Directory of Open Access Journals (Sweden)
Miri Shonfeld
2015-09-01
Full Text Available The perceived contribution of a science education online course to pre-service students (N=121) from diverse backgrounds, students with learning disabilities (25 LD students), 28 excellent students, and 68 average students, is presented in this five-year research. During the online course students were asked to choose a scientific subject, to map it and to plan teaching activities, to carry out the proposed activities with students in a classroom experience, and to reflect on the process. The assumption was that adapting the online course by using information and communication technology together with formative assessment would improve students' self-learning ability as well as broaden their science knowledge, their lab performance and their teaching skills. Data were collected using quantitative and qualitative tools, including pre- and post-questionnaires and nine in-depth interviews (three students from each group) upon completion of the course. Findings, based on students' perceived evaluation, pointed to the advantages of the online course for students of all three groups. LD students' achievements were not inferior to those of their peers, the excellent and average students. Yet the study reports a slightly lower perceived evaluation among the LD students in comparison to the excellent and average students regarding forum participation, the authentic task, and water-lab performance. The article discusses the affordances of the online course via additional features that can be grouped into two categories: knowledge construction, and flexibility in time, interaction and knowledge. Further research is suggested to extend the current study by examining the effect of other courses and different contents, and by considering various evaluation methods for online courses, such as observation, think-aloud protocols, text and task analysis, and reflection.
Health examination for A-bomb survivors
International Nuclear Information System (INIS)
Ito, Chikako
1996-01-01
The health examination for A-bomb survivors administered by the national, prefectural and city governments is described, and its general concept, history, changes in examinee numbers over time, improvements to the examination, prevalence of individual diseases, significance of cancer examinations, the examinees' point of view and future problems are discussed. Subjects were the survivors living in Hiroshima city: in 1994 their number was 100,188, with an average age of 63 y for males (39.5%) and 67 y for females (60.5%). The examination was begun in 1957, first under the law for medical care for the survivors, and then systematically from 1961. From 1965 it was performed four times a year, and in 1988 one of the four examinations was dedicated to cancer. The authors' center previously examined 90% of the examinees, but recently 70%; the remainder underwent examination in other medical facilities. Tests include blood analysis, electrocardiography and computed radiography of the chest with imaging plates, the data being accumulated either on photodisc or in a host computer. From 1973 to 1993, cardiovascular diseases increased from 6.1% to 26.9%, metabolic and endocrine diseases such as diabetes from 3.6% to 19.7%, and bowel diseases from 0.9% to 12.3%. Correlations of these diseases with A-bomb irradiation have not been elucidated and are possibly weak. Five classes of cancer examinations are performed, but the examinee rate among the survivors is as low as 7.6-21.8% (1993). Cancer of the large intestine is increasing. The overall examinee rates among the survivors were 70.6% in 1965-1967, 69.5% in 1976-1977 and 58.2% in 1990. In conclusion, how to examine as many as possible of the survivors, who are getting older, is the future problem. (H.O.)
Average chewing pattern improvements following Disclusion Time reduction.
Kerstein, Robert B; Radke, John
2017-05-01
Studies involving electrognathographic (EGN) recordings of chewing improvements obtained following occlusal adjustment therapy are rare, as most studies lack 'chewing' within the research. The objectives of this study were to determine whether reducing a long Disclusion Time to a short Disclusion Time with the immediate complete anterior guidance development (ICAGD) coronoplasty in symptomatic subjects altered their average chewing pattern (ACP) and their muscle function. Twenty-nine muscularly symptomatic subjects underwent simultaneous EMG and EGN recordings of right and left gum chewing, before and after the ICAGD coronoplasty. Statistical differences in the mean Disclusion Time, the mean muscle contraction cycle, and the mean ACP resulting from ICAGD were assessed with Student's paired t-test (α = 0.05). Disclusion Time reductions from ICAGD were significant (2.11 to 0.45 s, p = 0.0000). Post-ICAGD muscle changes were significant in the mean area (p = 0.000001) and the peak amplitude (p = 0.00005); the time to peak contraction shortened, the chewing position became closer to centric occlusion, and chewing velocities increased, all statistically significant. Average chewing pattern (ACP) shape, speed, consistency, muscular coordination, and vertical opening can be significantly improved in muscularly dysfunctional TMD patients within one week of undergoing the ICAGD enameloplasty. Computer-measured and guided occlusal adjustments quickly and physiologically improved chewing, without requiring the patients to wear pre- or post-treatment appliances.
To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space
International Nuclear Information System (INIS)
Khrennikov, Andrei
2007-01-01
We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Fréchet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of a scaling of the covariance operator of a Gaussian measure and the second (Fréchet) derivative of a functional. In this way we couple the classical average (given by an infinite-dimensional Gaussian integral) and the quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'
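The coupling between a classical Gaussian average and a trace formula can be checked in finite dimension: for a quadratic functional f(x) = ⟨Ax, x⟩ and a centered Gaussian measure with covariance B, the classical average equals Tr(AB), which is (1/2)Tr(B f'') since f'' = 2A. A small numerical sketch (the dimension and matrices are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

d = 3
M = rng.normal(size=(d, d))
B = M @ M.T + np.eye(d)                  # covariance operator of the Gaussian measure
A = rng.normal(size=(d, d))
A = (A + A.T) / 2                        # symmetric operator defining f(x) = <Ax, x>

# classical average: Monte Carlo estimate of the Gaussian integral of f
x = rng.multivariate_normal(np.zeros(d), B, 500_000)
classical = float(np.mean(np.einsum('ij,jk,ik->i', x, A, x)))

# trace side: E f = Tr(A B) = (1/2) Tr(B f''), with f'' = 2A
trace_side = float(np.trace(A @ B))
print(round(abs(classical - trace_side), 2))
```

For quadratic functionals the coupling is exact; the paper's asymptotic expansion extends this to general analytic functionals, where the trace term is the leading contribution.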
Determining average path length and average trapping time on generalized dual dendrimer
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on the trapping efficiency.
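The average path length at the heart of the analysis can be computed for any unweighted graph by running a breadth-first search from every node and averaging the pairwise distances (the ring example below is purely illustrative; it is not the dual-dendrimer topology studied in the paper):

```python
from collections import deque

def average_path_length(adj):
    """Average shortest-path length over all connected node pairs,
    via a BFS from each node of an unweighted graph."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# a 6-cycle: distances from any node are 1, 1, 2, 2, 3
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
apl = average_path_length(ring)
print(apl)   # (1+1+2+2+3)/5 = 1.8
```

For the self-similar networks of the paper, this brute-force O(n²) computation is what the exact recursive formulas replace, which is how the logarithmic APL scaling is established analytically.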
Pyryt, Michael C.; Sandals, Lauran H.; Begoray, John
1998-01-01
Compared learning-style preferences of intellectually gifted, average-ability, and special-needs students on the Learning Style Inventory. Also examined the general differences among ability level and gender. Analyses indicated that gifted students preferred learning alone, being self-motivated, and using tactile learning approaches, and that…
Duration of the pubertal peak in skeletal Class I and Class III subjects.
Kuc-Michalska, Małgorzata; Baccetti, Tiziano
2010-01-01
To estimate and compare the duration of the pubertal growth peak in Class I and Class III subjects. The data examined consisted of pretreatment lateral cephalometric records of 218 skeletal Class I or Class III subjects (93 female and 125 male subjects) of white ancestry. The duration of the pubertal peak was calculated from the average chronological age intervals between stages CS3 and CS4 of the cervical vertebral maturation in Class I vs Class III groups (t-test). In skeletal Class I subjects, the pubertal peak had a mean duration of 11 months, whereas in Class III subjects it lasted 16 months. The average difference (5 months) was statistically significant (P < .001). The growth interval corresponding to the pubertal growth spurt (CS3-CS4) was longer in Class III subjects than in subjects with normal skeletal relationships; the larger increases in mandibular length during the pubertal peak reported in the literature for Class III subjects may be related to the longer duration of the pubertal peak.
[Comparisons of manual and automatic refractometry with subjective results].
Wübbolt, I S; von Alven, S; Hülssner, O; Erb, C
2006-11-01
Refractometry is very important in everyday clinical practice. The aim of this study was to compare the precision of three objective methods of refractometry with subjective dioptometry (phoropter), and to identify the objective method with the smallest deviation from the subjective refractometry results. The objective methods/instruments used were retinoscopy, the Prism Refractometer PR 60 (Rodenstock) and the Auto Refractometer RM-A 7000 (Topcon). The results of monocular dioptometry (sphere, cylinder and axis) for each objective method were compared to the results of the subjective method. The examination was carried out on 178 eyes, divided into three age-related groups: 6-12 years (103 eyes), 13-18 years (38 eyes) and older than 18 years (37 eyes). All measurements were made in cycloplegia. The smallest standard deviation of the measurement error was found for the Auto Refractometer RM-A 7000. Both the PR 60 and retinoscopy had a clearly higher standard deviation. Furthermore, the RM-A 7000 showed a significant bias in the measurement error in three of the nine comparisons, and retinoscopy in four. The Auto Refractometer provides measurements with the smallest deviation from the subjective method. It has to be taken into account, however, that the measurements for the sphere have an average deviation of +0.2 dpt. Compared to retinoscopy, examining children with the RM-A 7000 is difficult. An advantage of the Auto Refractometer is its fast and easy handling, so that measurements can be performed by medical staff.
A subjective scheduler for subjective dedicated networks
Suherman; Fakhrizal, Said Reza; Al-Akaidi, Marwan
2017-09-01
Multiple access is one of the important techniques within the medium access layer of the TCP/IP protocol stack. Each network technology implements its selected access method. Priority can be implemented in those methods to differentiate services. Some internet networks are dedicated to a specific purpose. Educational browsing or tutorial video access is preferred in a library hotspot, while entertainment and sport contents could be subject to limitation. Current solutions may use IP address filters or access lists. This paper proposes that subjective properties of users or applications be used for priority determination in multiple access techniques. The NS-2 simulator is employed to evaluate the method. A video surveillance network using WiMAX is chosen as the object. Subjective priority is implemented in the WiMAX scheduler based on traffic properties. Three different traffic sources from monitoring video, palace, park, and market, are evaluated. The proposed subjective scheduler prioritizes the palace monitoring video, which results in better quality (xx dB) than the other monitoring spots.
Zonally averaged chemical-dynamical model of the lower thermosphere
International Nuclear Information System (INIS)
Kasting, J.F.; Roble, R.G.
1981-01-01
A zonally averaged numerical model of the thermosphere is used to examine the coupling between neutral composition (including N₂, O₂ and O), temperature, and winds at solstice for solar minimum conditions. The meridional circulation forced by solar heating results in a summer-to-winter flow, with a winter enhancement in atomic oxygen density that is a factor of about 1.8 greater than in the summer hemisphere at 160 km. The O₂ and N₂ variations are associated with a latitudinal gradient in total number density, which is required to achieve pressure balance in the presence of large zonal jets. Latitudinal profiles of OI (5577 Å) green-line emission intensity are calculated using both the Chapman and Barth mechanisms. The composition of the lower thermosphere is shown to be strongly influenced by circulation patterns initiated in the stratosphere and lower mesosphere, below the lower boundary used in the model
Suicide attempts, platelet monoamine oxidase and the average evoked response
International Nuclear Information System (INIS)
Buchsbaum, M.S.; Haier, R.J.; Murphy, D.L.
1977-01-01
The relationship between suicides and suicide attempts and two biological measures, platelet monoamine oxidase levels (MAO) and average evoked response (AER) augmenting was examined in 79 off-medication psychiatric patients and in 68 college student volunteers chosen from the upper and lower deciles of MAO activity levels. In the patient sample, male individuals with low MAO and AER augmenting, a pattern previously associated with bipolar affective disorders, showed a significantly increased incidence of suicide attempts in comparison with either non-augmenting low MAO or high MAO patients. Within the normal volunteer group, all male low MAO probands with a family history of suicide or suicide attempts were AER augmenters themselves. Four completed suicides were found among relatives of low MAO probands whereas no high MAO proband had a relative who committed suicide. These findings suggest that the combination of low platelet MAO activity and AER augmenting may be associated with a possible genetic vulnerability to psychiatric disorders. (author)
20 CFR 404.221 - Computing your average monthly wage.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the average...
Average and local structure of α-CuI by configurational averaging
International Nuclear Information System (INIS)
Mohn, Chris E; Stoelen, Svein
2007-01-01
Configurational Boltzmann averaging together with density functional theory are used to study in detail the average and local structure of the superionic α-CuI. We find that the coppers are spread out with peaks in the atom-density at the tetrahedral sites of the fcc sublattice of iodines. We calculate Cu-Cu, Cu-I and I-I pair radial distribution functions, the distribution of coordination numbers and the distribution of Cu-I-Cu, I-Cu-I and Cu-Cu-Cu bond-angles. The partial pair distribution functions are in good agreement with experimental neutron diffraction-reverse Monte Carlo, extended x-ray absorption fine structure and ab initio molecular dynamics results. In particular, our results confirm the presence of a prominent peak at around 2.7 Å in the Cu-Cu pair distribution function as well as a broader, less intense peak at roughly 4.3 Å. We find highly flexible bonds and a range of coordination numbers for both iodines and coppers. This structural flexibility is of key importance in order to understand the exceptional conductivity of coppers in α-CuI; the iodines can easily respond to changes in the local environment as the coppers diffuse, and a myriad of different diffusion-pathways is expected due to the large variation in the local motifs.
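Pair radial distribution functions like the Cu-Cu and Cu-I curves above are typically built by histogramming pair distances and normalizing each shell by the ideal-gas expectation. A minimal Python sketch under the minimum-image convention; the random coordinates and box size below are made-up illustrations, not CuI data:

```python
import math
import random

def pair_distances(coords, box):
    """All pair distances under cubic periodic boundary conditions
    (minimum-image convention)."""
    dists = []
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = 0.0
            for a, b in zip(coords[i], coords[j]):
                dx = a - b
                dx -= box * round(dx / box)  # wrap to nearest image
                r2 += dx * dx
            dists.append(math.sqrt(r2))
    return dists

def rdf(dists, box, n_atoms, nbins=50):
    """Histogram pair distances up to box/2 and normalize each shell
    by the pair count expected for an ideal (uncorrelated) gas."""
    rmax = box / 2.0
    dr = rmax / nbins
    hist = [0] * nbins
    for r in dists:
        if r < rmax:
            hist[int(r / dr)] += 1
    rho = n_atoms / box ** 3
    g = []
    for k, h in enumerate(hist):
        r_lo = k * dr
        shell_vol = 4.0 / 3.0 * math.pi * ((r_lo + dr) ** 3 - r_lo ** 3)
        expected = 0.5 * n_atoms * rho * shell_vol  # ideal-gas pair count
        g.append(h / expected if expected > 0 else 0.0)
    return g

rng = random.Random(2)
box, n_atoms = 10.0, 200
coords = [(rng.uniform(0, box), rng.uniform(0, box), rng.uniform(0, box))
          for _ in range(n_atoms)]
g = rdf(pair_distances(coords, box), box, n_atoms)
# For uncorrelated points g(r) fluctuates around 1; real Cu-Cu data
# would instead show peaks (e.g., near 2.7 Å) and troughs.
print(g[25])
```

For a real structure the input coordinates would come from the configurational ensemble, and peaks in g(r) mark preferred neighbor distances.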
A radiographic examination system
International Nuclear Information System (INIS)
Cable, A.P.; Cable, W.S.
1983-01-01
A system for performing radiographic examination, particularly of large items such as international container units, is disclosed. The system is formed as an installation comprising housings for respective linear accelerators transmitting a beam of radiation across the path of a conveyor along which the units can be displaced continuously or incrementally. At either end of the installation are container handling areas including roller conveyors with drag chains and transverse manipulators, and the whole installation is secured within automatically operated doors which seal the high-energy region while a container on the conveyor is being subjected to examination. The radiation transmitted through a container is detected by a detector system incorporating a fluoroscopic screen, the light output from which is detected by a camera system such as a television camera and transmitted as coded pulsed signals by a coding transfer unit to display screens, where an image of the transmitted information can be displayed and/or recorded for further use. (author)
Wang, Ling; Abdel-Aty, Mohamed; Wang, Xuesong; Yu, Rongjie
2018-02-01
There have been plenty of traffic safety studies based on average daily traffic (ADT), average hourly traffic (AHT), or microscopic traffic at 5 min intervals. Nevertheless, little research has compared the performance of these three types of safety studies, and few previous studies have examined whether the results of one type of study are transferable to the other two. First, this study built three models: a Bayesian Poisson-lognormal model to estimate the daily crash frequency using ADT, a Bayesian Poisson-lognormal model to estimate the hourly crash frequency using AHT, and a Bayesian logistic regression model for real-time safety analysis using microscopic traffic. The model results showed that the crash contributing factors found by the different models were comparable but not the same. Four variables, i.e., the logarithm of volume, the standard deviation of speed, the logarithm of segment length, and the existence of a diverge segment, were positively significant in all three models. Additionally, weaving segments experienced higher daily and hourly crash frequencies than merge and basic segments. Then, each of the ADT-based, AHT-based, and real-time models was used to estimate safety conditions at the daily and hourly levels; the real-time model was also applied at 5 min intervals. The results uncovered that the ADT- and AHT-based safety models performed similarly in predicting daily and hourly crash frequencies, and the real-time safety model was able to provide hourly crash frequency estimates. Copyright © 2017 Elsevier Ltd. All rights reserved.
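The Poisson-lognormal form used in the first two models handles the overdispersion typical of crash counts: a lognormal random effect inflates the variance above the Poisson mean. A small simulation sketch of that mechanism (the coefficients b0, b1 and sigma are hypothetical, chosen only for illustration):

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's multiplicative Poisson sampler (fine for modest lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)

# Hypothetical coefficients: log expected daily crashes grows with
# log traffic volume; a lognormal random effect (sigma) captures the
# unobserved heterogeneity that motivates the Poisson-lognormal form.
b0, b1, sigma = -6.0, 0.8, 0.5
log_adts = [rng.uniform(9.0, 11.0) for _ in range(5000)]  # log(ADT) per segment

counts = []
for log_adt in log_adts:
    eps = rng.gauss(0.0, sigma)              # segment-level random effect
    lam = math.exp(b0 + b1 * log_adt + eps)  # conditional Poisson mean
    counts.append(poisson_sample(lam, rng))

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(f"mean={mean:.2f}, variance={var:.2f}")  # variance > mean: overdispersion
```

A plain Poisson model would force variance to equal the mean; the lognormal term lets the simulated counts show the extra-Poisson variation that real crash data exhibit.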
International Nuclear Information System (INIS)
2002-01-01
This document is one in a series of publications known as the ETDE/INIS Joint Reference Series and also constitutes a part of the ETDE Procedures Manual. It presents the rules, guidelines and procedures to be adopted by centers submitting input to the International Nuclear Information System (INIS) or the Energy Technology Data Exchange (ETDE). It is a manual for the subject analysis part of input preparation, meaning the selection, subject classification, abstracting and subject indexing of relevant publications, and is to be used in conjunction with the Thesauruses, Subject Categories documents and the documents providing guidelines for the preparation of abstracts. The concept and structure of the new manual are intended to describe in a logical and efficient sequence all the steps comprising the subject analysis of documents to be reported to INIS or ETDE. The manual includes new chapters on preparatory analysis, subject classification, abstracting and subject indexing, as well as rules, guidelines, procedures, examples and a special chapter on guidelines and examples for subject analysis in particular subject fields. (g.t.; a.n.)
Estimating Subjective Probabilities
DEFF Research Database (Denmark)
Andersen, Steffen; Fountain, John; Harrison, Glenn W.
2014-01-01
either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake 'calibrating adjustments' to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments...... that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still...
The dynamics of multimodal integration: The averaging diffusion model.
Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; L McClelland, James
2017-12-01
We combine extant theories of evidence accumulation and multi-modal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process where noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of evidence samples and use it as a base for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
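The key modeling move, a decision variable that is the mean rather than the sum of evidence samples, can be illustrated with a toy simulation: the summed accumulator's trial-to-trial variance grows with the number of samples, while the averaged accumulator's variance shrinks. A sketch under assumed Gaussian evidence noise (drift and noise values are arbitrary):

```python
import random

def accumulate(drift, noise_sd, n_samples, rng):
    """Draw noisy evidence samples; return (summed, averaged) totals."""
    samples = [drift + rng.gauss(0.0, noise_sd) for _ in range(n_samples)]
    total = sum(samples)
    return total, total / n_samples

def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

rng = random.Random(0)
sums, means = [], []
for _ in range(2000):  # 2000 simulated trials
    s, m = accumulate(drift=0.1, noise_sd=1.0, n_samples=100, rng=rng)
    sums.append(s)
    means.append(m)

# The summed decision variable gets noisier as evidence accrues
# (variance ~ n * sd**2); the averaged one stabilizes (~ sd**2 / n).
print(variance(sums), variance(means))
```

This is only the variance intuition behind an averaging accumulator, not a re-implementation of the paper's Averaging Diffusion Model, which also specifies the within-trial dynamics and the multimodal combination rules.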
Examinations in radiology with commented answers
International Nuclear Information System (INIS)
Wittmaack, F.M.
1980-01-01
This book is meant to be a help in the preparation for the examination in the subject of radiology. Original questions from the examinations of the past years and questions put by the author cover all subjects of the catalogue. All questions are answered, with additional comments in order to ensure the understanding of the subject, thus creating best preconditions for a successful examination. (orig./HP) [de
Subjective sleep quality in sarcoidosis.
Bosse-Henck, Andrea; Wirtz, Hubert; Hinz, Andreas
2015-05-01
Poor sleep is common among patients with medical disorders. Sleep disturbances can be a cause of fatigue and poor quality of life for patients suffering from sarcoidosis. Studies on subjective sleep quality or prevalence of insomnia have not been reported so far. The aim of this study was to investigate the subjectively reported sleep quality and its relation to psychological and physical factors in sarcoidosis patients. 1197 patients from Germany diagnosed with sarcoidosis were examined using the Pittsburgh Sleep Quality Index (PSQI), the Medical Research Council (MRC) dyspnea scale, the Hospital Anxiety and Depression Scale (HADS) and the Multidimensional Fatigue Inventory (MFI). 802 patients (67%) had PSQI global scores >5, indicating subjectively poor quality of sleep. The mean PSQI score was 7.79 ± 4.00. Women reported a significantly inferior individual quality of sleep than men. The subjective quality of sleep was lowered significantly with increasing dyspnea for men and women. 294 patients (25%) had PSQI global scores >10 usually found in patients with clinically relevant insomnia. In this group 86% had high values for fatigue, 69% for anxiety, and 59% for depression. The prevalence of known sleep apnea was 8.7% and 15.7% for restless legs. Poor subjective sleep quality in sarcoidosis patients is about twice as common as in the general population and is associated with fatigue, anxiety, depression and dyspnea. Questions about sleep complaints should therefore be included in the management of sarcoidosis. Copyright © 2014 Elsevier B.V. All rights reserved.
Thijs, Jochem T.; Koomen, Helma M. Y.; Van Der Leij, Aryan
2006-01-01
This study examined teachers' self-reported pedagogical practices toward socially inhibited, hyperactive, and average kindergartners. A self-report instrument was developed and examined in three samples of kindergartners and their teachers. Principal components analyses were conducted in four datasets pertaining to 1 child per teacher. Two…
Subjective poverty line definitions
J. Flik; B.M.S. van Praag (Bernard)
1991-01-01
In this paper we will deal with definitions of subjective poverty lines. To measure a poverty threshold value in terms of household income, which separates the poor from the non-poor, we take into account the opinions of all people in society. Three subjective methods will be discussed.
Analytical expressions for conditional averages: A numerical test
DEFF Research Database (Denmark)
Pécseli, H.L.; Trulsen, J.
1991-01-01
Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...
Experimental demonstration of squeezed-state quantum averaging
DEFF Research Database (Denmark)
Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...
North Korean refugee doctors' preliminary examination scores
Directory of Open Access Journals (Sweden)
Sung Uk Chae
2016-12-01
Purpose: Although there have been studies emphasizing the re-education of North Korean (NK) doctors for the post-unification of the Korean Peninsula, a study on the content and scope of such re-education has yet to be conducted. The researchers intended to set the content and scope of re-education through a comparative analysis of the scores of the preliminary examination, which is comparable to the Korean Medical Licensing Examination (KMLE). Methods: The scores of the first and second preliminary exams were analyzed by subject using the Wilcoxon signed rank test. The passing status of the group of NK doctors for the KMLE in the recent 3 years was investigated. The multiple-choice-question (MCQ) items for which the difficulty indexes of NK doctors were lower than those of South Korean (SK) medical students by two times the standard deviation of the scores of SK medical students were selected to investigate the relevant reasons. Results: The average scores of nearly all subjects improved in the second exam compared with the first exam. The passing rate of the group of NK doctors was 75%. The number of MCQ items for which the difficulty indexes of NK doctors were lower than those of SK medical students was 51 (6.38%). NK doctors' lack of understanding of Diagnostic Techniques and Procedures, Therapeutics, Prenatal Care, and Managed Care Programs was suggested as the possible reason. Conclusion: The education of integrated courses focusing on Diagnostic Techniques and Procedures and Therapeutics, and apprenticeship-style training for clinical practice of core subjects, are needed. Special lectures on Preventive Medicine are likely to be required as well.
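The item-selection rule described above can be sketched as follows, where a difficulty index is the proportion of examinees answering an item correctly and an item is flagged when the NK index falls more than two SK standard deviations below the SK index. All numbers are made up for illustration, and the exact form of the published rule may differ:

```python
def difficulty_index(responses):
    """Proportion of examinees answering an item correctly
    (responses: 1 = correct, 0 = incorrect)."""
    return sum(responses) / len(responses)

def flag_items(nk_indexes, sk_indexes, sk_sd):
    """Flag items whose NK difficulty index falls more than two SK
    standard deviations below the SK difficulty index (one reading
    of the selection rule in the abstract)."""
    flagged = []
    for item, (nk, sk) in enumerate(zip(nk_indexes, sk_indexes)):
        if nk < sk - 2 * sk_sd:
            flagged.append(item)
    return flagged

# Illustrative, invented numbers for four items.
nk = [0.40, 0.85, 0.30, 0.90]
sk = [0.80, 0.88, 0.75, 0.92]
print(flag_items(nk, sk, sk_sd=0.10))  # → [0, 2]: items 0 and 2 lag by > 2 SD
```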
Similarity-based distortion of visual short-term memory is due to perceptual averaging.
Dubé, Chad; Zhou, Feng; Kahana, Michael J; Sekuler, Robert
2014-03-01
A task-irrelevant stimulus can distort recall from visual short-term memory (VSTM). Specifically, reproduction of a task-relevant memory item is biased in the direction of the irrelevant memory item (Huang & Sekuler, 2010a). The present study addresses the hypothesis that such effects reflect the influence of neural averaging under conditions of uncertainty about the contents of VSTM (Alvarez, 2011; Ball & Sekuler, 1980). We manipulated subjects' attention to relevant and irrelevant study items whose similarity relationships were held constant, while varying how similar the study items were to a subsequent recognition probe. On each trial, subjects were shown one or two Gabor patches, followed by the probe; their task was to indicate whether the probe matched one of the study items. A brief cue told subjects which Gabor, first or second, would serve as that trial's target item. Critically, this cue appeared either before, between, or after the study items. A distributional analysis of the resulting mnemometric functions showed an inflation in probability density in the region spanning the spatial frequency of the average of the two memory items. This effect, due to an elevation in false alarms to probes matching the perceptual average, was diminished when cues were presented before both study items. These results suggest that (a) perceptual averages are computed obligatorily and (b) perceptual averages are relied upon to a greater extent when item representations are weakened. Implications of these results for theories of VSTM are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
The flattening of the average potential in models with fermions
International Nuclear Information System (INIS)
Bornholdt, S.
1993-01-01
The average potential is a scale-dependent scalar effective potential. In a phase with spontaneous symmetry breaking its inner region becomes flat as the averaging extends over infinite volume and the average potential approaches the convex effective potential. Fermion fluctuations affect the shape of the average potential in this region and its flattening with decreasing physical scale. They have to be taken into account to find the true minimum of the scalar potential, which determines the scale of spontaneous symmetry breaking. (orig.)
Czech Academy of Sciences Publication Activity Database
Dušek, Libor; Kalíšková, Klára; Münich, Daniel
2013-01-01
Roč. 63, č. 6 (2013), s. 474-504 ISSN 0015-1920 R&D Projects: GA TA ČR(CZ) TD010033 Institutional support: RVO:67985998 Keywords : TAXBEN models * average tax rates * marginal tax rates Subject RIV: AH - Economics Impact factor: 0.358, year: 2013 http://journal.fsv.cuni.cz/storage/1287_dusek.pdf
Czech Academy of Sciences Publication Activity Database
Hynek, M.; Smetanová, D.; Stejskal, D.; Zvárová, Jana
2014-01-01
Roč. 34, č. 4 (2014), s. 367-376 ISSN 0197-3851 Institutional support: RVO:67985807 Keywords : nuchal translucency * exponentially weighted moving average model * statistics Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.268, year: 2014
A One-Electron Approximation to Domain Averaged Fermi hole Analysis
Czech Academy of Sciences Publication Activity Database
Cooper, D.L.; Ponec, Robert
2008-01-01
Roč. 10, č. 9 (2008), s. 1319-1329 ISSN 1463-9076 R&D Projects: GA AV ČR(CZ) IAA4072403 Institutional research plan: CEZ:AV0Z40720504 Keywords : domain-averaged fermi hole * comparisons Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 4.064, year: 2008
Model of averaged turbulent flow around cylindrical column for simulation of the saltation
Czech Academy of Sciences Publication Activity Database
Kharlamova, Irina; Kharlamov, Alexander; Vlasák, Pavel
2014-01-01
Roč. 21, č. 2 (2014), s. 103-110 ISSN 1802-1484 R&D Projects: GA ČR GA103/09/1718 Institutional research plan: CEZ:AV0Z20600510 Institutional support: RVO:67985874 Keywords : sediment transport * flow around cylinder * logarithmic profile * dipole line * averaged turbulent flow Subject RIV: BK - Fluid Dynamics
Prediction of LVH from average of R wave amplitude in leads I and ...
African Journals Online (AJOL)
Aim: The aim of this study was to determine the sensitivity, specificity, accuracy, and positive and negative predictive values of the average of R wave amplitude in leads I and V5 in predicting LVH. Methodology: This is a cross-sectional descriptive study of adult hypertensive subjects. Participants were assessed for LVH using the ...
Imdat, Yarim
2014-01-01
The aim of the study is to find the correlation that exists between physical activity level and the grade point averages of faculty of education students. The subjects consist of 359 (172 female and 187 male) undergraduate students. To determine the physical activity levels of the students in this research, the International Physical Activity…
Examination stress at unified state examination: student destabilization or success factor?
Directory of Open Access Journals (Sweden)
Svetlana N. Kostromina
2017-01-01
Stress reactions were already recorded when the students read the instructions, and their number increased in the course of the examination, reaching a maximum by the end of the work. The analysis of the students' subjective evaluation of their own state reveals that they often evaluate it inadequately. Significant statistical differences were found between the levels of stress at examinations in different subjects, the highest stress level being recorded at the examination in Russian. It was revealed that students who chose a free strategy of completing the tasks and followed it had fewer stress reactions than those who completed the tasks one by one. Differences in stress levels were also registered between students who received different grades: the students graded "good" had the most stress reactions, while those graded "satisfactory" had the fewest. The results of the study show that the General and Unified State Examinations are highly stressful events: on average, the students were under stress for about one third of the time spent completing the exam tasks. These events have a significant impact on school leavers' psychophysiological state. Additional factors that cause stress reactions at the examination were also determined.
20 CFR 404.220 - Average-monthly-wage method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You must...
A time-averaged cosmic ray propagation theory
International Nuclear Information System (INIS)
Klimas, A.J.
1975-01-01
An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.) [de
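The distinction between ensemble-averaging and the time-averaging advocated above can be made concrete with an ergodic toy process, for which the two averages agree: average one long realization over time, or average many independent realizations at a fixed time. A sketch using an AR(1) process (all parameters are arbitrary):

```python
import random

def ar1(n, phi, rng):
    """A zero-mean AR(1) process: x[t] = phi * x[t-1] + gaussian noise."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

rng = random.Random(7)

# Time average: one long realization of the process.
long_run = ar1(200_000, phi=0.5, rng=rng)
time_avg = sum(long_run) / len(long_run)

# Ensemble average: many independent realizations, sampled at a fixed time.
ensemble = [ar1(200, phi=0.5, rng=rng)[-1] for _ in range(2000)]
ens_avg = sum(ensemble) / len(ensemble)

print(time_avg, ens_avg)  # both near 0 for this ergodic process
```

The paper's point is precisely that for the interplanetary magnetic field such an ergodic equivalence cannot be taken for granted, which is why the choice between the two averaging procedures matters.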
7 CFR 51.2561 - Average moisture content.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...
Averaging in SU(2) open quantum random walk
International Nuclear Information System (INIS)
Ampadu Clement
2014-01-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT
Averaging in SU(2) open quantum random walk
Clement, Ampadu
2014-03-01
We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.
Average glandular dose in digital mammography and breast tomosynthesis
Energy Technology Data Exchange (ETDEWEB)
Olgar, T. [Ankara Univ. (Turkey). Dept. of Engineering Physics; Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Kahn, T.; Gosch, D. [Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie
2012-10-15
Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2D imaging mode) and in breast tomosynthesis (3D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2D mammograms and 984 mammograms in 3D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between AGD and CBT in 2D imaging mode, whereas a good correlation (coefficient 0.98) was found in 3D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3D imaging mode was on average 34 % higher than in 2D imaging mode for patients examined with the same CBT.
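In its general form, the Boone-type calculation referenced in Materials and Methods multiplies the entrance (incident) air kerma by a normalized glandular dose conversion coefficient (DgN) that depends on the x-ray spectrum, breast thickness and glandular composition. A minimal sketch; the numeric values below are illustrative placeholders, not tabulated DgN data:

```python
def average_glandular_dose(entrance_air_kerma_mgy, dgn_mgy_per_mgy):
    """AGD as entrance air kerma times a normalized glandular dose
    (DgN) conversion coefficient. In practice DgN is interpolated
    from published tables as a function of kVp, target/filter,
    compressed breast thickness and glandularity; the single value
    used below is a made-up placeholder."""
    return entrance_air_kerma_mgy * dgn_mgy_per_mgy

# Hypothetical exposure: 8 mGy entrance kerma, DgN of 0.22 mGy/mGy.
agd = average_glandular_dose(8.0, 0.22)
print(f"{agd:.2f} mGy")  # 1.76 mGy, within the range reported above
```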
Average glandular dose in digital mammography and breast tomosynthesis
International Nuclear Information System (INIS)
Olgar, T.; Universitaetsklinikum Leipzig AoeR; Kahn, T.; Gosch, D.
2012-01-01
Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2D imaging mode) and in breast tomosynthesis (3D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2D mammograms and 984 mammograms in 3D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between AGD and CBT in 2D imaging mode, whereas a good correlation (coefficient 0.98) was found in 3D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3D imaging mode was on average 34 % higher than in 2D imaging mode for patients examined with the same CBT.
International Nuclear Information System (INIS)
Fehlau, P.E.
1993-01-01
The author compared a recursive digital filter, proposed as a detection method for French special nuclear material monitors, with the author's own detection methods, which employ a moving-average scaler or a sequential probability-ratio test. Nine test subjects each repeatedly carried a test source through a walk-through portal monitor that had the same nuisance-alarm rate with each method. He found that the average detection probability for the test source is also the same for each method. However, the recursive digital filter may have one drawback: its exponentially decreasing response to past radiation intensity prolongs the impact of any interference from radiation-producing machinery. He also examined the influence of each test subject on the monitor's operation by measuring individual attenuation factors for background and source radiation, then ranked the subjects' attenuation factors against their individual probabilities for detecting the test source. The one inconsistent ranking was probably caused by that subject's unusually long stride when passing through the portal.
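A moving-average scaler of the kind mentioned above can be sketched as a fixed window of recent count intervals whose mean is compared with an alarm threshold. The counts, window length and threshold below are invented for illustration; note how alarms can linger briefly after the simulated source passage, a windowed analogue of the filter-memory effect discussed in the abstract:

```python
import random
from collections import deque

def moving_average_alarm(counts, window, threshold):
    """Alarm at every interval where the moving average of the last
    `window` gross counts exceeds a fixed threshold."""
    buf = deque(maxlen=window)
    alarms = []
    for i, c in enumerate(counts):
        buf.append(c)
        if len(buf) == window and sum(buf) / window > threshold:
            alarms.append(i)
    return alarms

rng = random.Random(3)
background = [rng.randint(8, 12) for _ in range(50)]   # ~10 counts/interval
counts = (background[:20]
          + [c + 15 for c in background[20:30]]        # source passage
          + background[30:])

alarms = moving_average_alarm(counts, window=4, threshold=18)
print(alarms)  # alarm indices cluster around the source passage (intervals 20-29)
```

A sequential probability-ratio test would instead accumulate log-likelihood ratios until a decision boundary is crossed; the moving average is the simpler of the two schemes to reason about.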
Directory of Open Access Journals (Sweden)
D.N. Bakhrakh
2006-03-01
The question of the subjects of the branches of law is among the most important and difficult in legal science. Its correct resolution influences the subject of legal regulation, the precise definition of the addressees of legal norms, the scope of their rights and duties, and the limits of action of the norms of the Main Part of the branch and its principles. Scientific investigations dedicated to the system of legal subjects promote the development of recommendations for legislative and law-applying activity; they are needed for the organization of scientific work and student training, and for preparing qualified lawyers.
DEFF Research Database (Denmark)
Greve, Charlotte
/page. It is, moreover, an index pointing to the painting/writing subject; it is a special deictic mode of painting/writing. The handwriting of the Russian avant-garde books, the poetics of handwriting, and the way handwriting is represented in poetry emphasize the way the subject (the speaking and the viewing...... in the early as well as the contemporary avant-garde, it becomes clear that the ‘subject’ is an unstable category that can be exposed to manipulation and play. Handwriting is performing as a signature (as an index), but is at the same time similar to the signature of a subject (an icon) and a verbal construct...
Waldhauer, Julia; Kuntz, Benjamin; Lampert, Thomas
2018-04-01
Social inequalities in health can already be found among children and adolescents, to the disadvantage of socially deprived population groups. This paper aims to detect whether differences in subjective health, mental health and health behavior among young people are due to the secondary school type attended and whether these associations exist independently of the family's socioeconomic position (SEP). The data basis was the German Health Interview and Examination Survey for Children and Adolescents (KiGGS Wave 1, 2009-2012). Data of 11- to 17-year-old girls and boys (n = 4665) who attend different types of secondary schools in Germany were analyzed. The dependent variables were self-rated health, findings of the Strengths and Difficulties Questionnaire (SDQ) for the detection of psychological abnormalities, and self-reported information regarding leisure sport, tobacco, and alcohol consumption. Prevalence and odds ratios (ORs) based on logistic regressions are shown. For the majority of the examined indicators, adolescents in lower secondary schools are more likely to report worse self-rated health and mental problems and to engage in unhealthy behavior than peers in grammar schools ("Gymnasium"). The differences decrease after controlling for the family's SEP but mostly remain statistically significant. Adolescents who do not attend grammar schools are most strongly disadvantaged in terms of inattention/hyperactivity for both genders (OR: 2.29 [1.70-3.08]), smoking among girls (OR: 2.91 [1.85-4.57]) and physical inactivity (no leisure sport) among boys (OR: 2.71 [1.85-3.95]). Unequal health opportunities should be viewed in relation to people's living conditions. For adolescents, school constitutes an important setting for learning, experience, and health. The results indicate divergent needs of school-based health promotion and prevention with regard to differences in gender and type of school.
SUBJECTIVE MEMORY IN OLDER AFRICAN AMERICANS
Sims, Regina C.; Whitfield, Keith E.; Ayotte, Brian J.; Gamaldo, Alyssa A.; Edwards, Christopher L.; Allaire, Jason C.
2011-01-01
The current analysis examined (a) if measures of psychological well-being predict subjective memory, and (b) if subjective memory is consistent with actual memory. Five hundred seventy-nine older African Americans from the Baltimore Study of Black Aging completed measures assessing subjective memory, depressive symptomatology, perceived stress, locus of control, and verbal and working memory. Higher levels of perceived stress and greater externalized locus of control predicted poorer subjecti...
Dina Verdín; Allison Godwin; Adam Kirn; Lisa Benson; Geoff Potvin
2018-01-01
Women’s participation in engineering remains well below that of men at all degree levels. However, despite the low enrollment of women in engineering as a whole, some engineering disciplines report above average female enrollment. We used multiple linear regression to examine the attitudes, beliefs, career outcome expectations, and career choice of first-year female engineering students enrolled in below average, average, and above average female representation disciplines in engineering. Our...
Directory of Open Access Journals (Sweden)
María Angélica Garzón Martínez
2015-07-01
More concretely, this article presents the idea of remembrance subjectivity, which becomes a political platform for reclaiming the right to remember and to bring about change on the basis of those recollections.
Directory of Open Access Journals (Sweden)
Peihua Wang
Full Text Available After the implementation of the universal salt iodization (USI) program in 1996, seven cross-sectional school-based surveys have been conducted to monitor iodine deficiency disorders (IDD) among children in eastern China. This study aimed to examine the correlation of total goiter rate (TGR) with average thyroid volume (Tvol) and urinary iodine concentration (UIC) in Jiangsu province after IDD elimination. Probability-proportional-to-size sampling was applied to select 1,200 children aged 8-10 years in 30 clusters for each survey in 1995, 1997, 1999, 2001, 2002, 2005, 2009 and 2011. We measured Tvol using ultrasonography in 8,314 children and measured UIC (4,767 subjects) and salt iodine (10,184 samples) using methods recommended by the World Health Organization. Tvol was used to calculate TGR based on the reference criteria specified for sex and body surface area (BSA). TGR decreased from 55.2% in 1997 to 1.0% in 2009, and geometric means of Tvol decreased from 3.63 mL to 1.33 mL, while the UIC increased from 83 μg/L in 1995 to 407 μg/L in 1999, decreased to 243 μg/L in 2005, and then increased to 345 μg/L in 2011. In the low-goiter population, a UIC above 300 μg/L was associated with a smaller average Tvol in children. After IDD elimination in Jiangsu province in 2001, lower TGR was associated with smaller average Tvol. Average Tvol was more sensitive than TGR in detecting fluctuations in UIC. A UIC of 300 μg/L may be defined as a critical value for population-level iodine status monitoring.
Directory of Open Access Journals (Sweden)
Gabriela Brůhová
2017-07-01
Full Text Available The paper analyses English sentences with thematic locative subjects. These subjects were detected as translation counterparts of Czech sentence-initial locative adverbials realized by prepositional phrases with the prepositions do (into), na (on), v/ve (in), z/ze (from), complemented by a noun. In the corresponding English structure, the initial scene-setting adverbial is reflected in the thematic subject, which results in the locative semantics of the subject. The sentences are analysed from syntactic, semantic and FSP aspects. From the syntactic point of view, we found five syntactic patterns of the English sentences with a locative subject (SV, SVA, SVO, SVpassA and SVCs) that correspond to Czech sentences with initial locative adverbials. On the FSP level the paper studies the potential of the sentences to implement the Presentation or Quality Scale. Since it is the "semantic content of the verb that actuates the presentation semantics of the sentence" (Dušková, 2015a: 260), major attention is paid to the syntactic-semantic structure of the verb. The analysis of the semantics of the English sentences results in the identification of two semantic classes of verbs which co-occur with the English locative subject.
Recruiting phobic research subjects: effectiveness and cost.
Kaakko, T; Murtomaa, H; Milgrom, P; Getz, T; Ramsay, D S; Coldwell, S E
2001-01-01
Efficiently enrolling subjects is one of the most important and difficult aspects of a clinical trial. This prospective study evaluated strategies used in the recruitment of 144 dental injection phobics for a clinical trial evaluating the effectiveness of combining alprazolam with exposure therapy. Three types of recruitment strategies were evaluated: paid advertising, free publicity, and professional referral. Sixty-three percent of subjects were enrolled using paid advertising (the majority of them from bus advertisements [27.0%], posters on the University of Washington campus [20.1%], and newspaper advertisements [13.2%]). Free publicity (e.g., television coverage, word of mouth) yielded 18.8% of enrolled subjects and professional referrals 14.6% of subjects. The average cost (1996 dollars) of enrolling 1 subject was $79. Bus and poster advertising attracted more initial contacts and yielded the greatest enrollment.
Sedimentological regimes for turbidity currents: Depth-averaged theory
Halsey, Thomas C.; Kumar, Amit; Perillo, Mauricio M.
2017-07-01
Turbidity currents are one of the most significant means by which sediment is moved from the continents into the deep ocean; their properties are interesting both as elements of the global sediment cycle and due to their role in contributing to the formation of deep water oil and gas reservoirs. One of the simplest models of the dynamics of turbidity current flow was introduced three decades ago, and is based on depth-averaging of the fluid mechanical equations governing the turbulent gravity-driven flow of relatively dilute turbidity currents. We examine the sedimentological regimes of a simplified version of this model, focusing on the role of the Richardson number Ri [dimensionless inertia] and Rouse number Ro [dimensionless sedimentation velocity] in determining whether a current is net depositional or net erosional. We find that for large Rouse numbers, the currents are strongly net depositional due to the disappearance of local equilibria between erosion and deposition. At lower Rouse numbers, the Richardson number also plays a role in determining the degree of erosion versus deposition. The currents become more erosive at lower values of the product Ro × Ri, due to the effect of clear water entrainment. At higher values of this product, the turbulence becomes insufficient to maintain the sediment in suspension, as first pointed out by Knapp and Bagnold. We speculate on the potential for two-layer solutions in this insufficiently turbulent regime, which would comprise substantial bedload flow with an overlying turbidity current.
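The qualitative regimes described above can be caricatured in a few lines of code. This is a hypothetical sketch: the threshold values `Ro_crit` and `prod_crit` are illustrative placeholders, not values from the paper.

```python
def current_regime(Ro, Ri, Ro_crit=2.5, prod_crit=1.0):
    """Illustrative classification of a depth-averaged turbidity current.
    Thresholds are made-up placeholders, not published values."""
    if Ro > Ro_crit:
        # Large Rouse number: local erosion/deposition equilibria disappear.
        return "net depositional"
    if Ro * Ri < prod_crit:
        # Low Ro x Ri: clear-water entrainment makes the current erosive.
        return "net erosional"
    # High Ro x Ri: turbulence too weak to keep sediment in suspension.
    return "net depositional"
```

The point of the sketch is only the ordering of the regimes in the (Ro, Ri) plane, not the numerical boundaries.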
Variation in the annual average radon concentration measured in homes in Mesa County, Colorado
International Nuclear Information System (INIS)
Rood, A.S.; George, J.L.; Langner, G.H. Jr.
1990-04-01
The purpose of this study is to examine the variability in the annual average indoor radon concentration. The TMC has been collecting annual average radon data for the past 5 years in 33 residential structures in Mesa County, Colorado. This interim report presents the data collected to date; the study is planned to continue. 62 refs., 3 figs., 12 tabs
Subject search study. Final report
International Nuclear Information System (INIS)
Todeschini, C.
1995-01-01
The study gathered information on how users search the database of the International Nuclear Information System (INIS), using indicators such as subject categories, controlled terms, subject headings, free-text words, and combinations of the above. Users participated from the Australian, French, Russian and Spanish INIS Centres, which have different national languages. Participants, both intermediaries and end users, replied to a questionnaire and executed search queries. The INIS Secretariat at the IAEA also participated. A protocol of all search strategies used in actual searches in the database was kept. Both the thought process and the actual initial search formulation are predominantly non-English among Russian and Spanish users, while French users tend to formulate searches more often in English. A total of 1002 searches were executed by the five INIS centres, including the IAEA. The search protocols indicate the following search behaviour: 1) free-text words represent about 40% of search points in an average query; 2) descriptors used as search keys have the widest range as a percentage of search points, from a low of 25% to a high of 48%; 3) search keys consisting of free text that coincides with a descriptor account for about 15% of search points; 4) subject categories are not used in many searches; 5) free-text words are present as search points in about 80% of all searches; 6) controlled terms (descriptors) are used very extensively and appear in about 90% of all searches; 7) subject headings were used in only a few percent of searches. From the results of the study one can conclude that non-native English speakers are more reluctant to initiate their searches with free-text words. Also: subject categories are little used in searching the database; both free-text terms and controlled terms are the predominant types of search keys used, whereby the controlled terms are used more
Reproducibility of blood pressure variation in older ambulatory and bedridden subjects.
Tsuchihashi, Takuya; Kawakami, Yasunobu; Imamura, Tsuyoshi; Abe, Isao
2002-06-01
We investigated the influence of ambulation on the reproducibility of circadian blood pressure variation in older nursing home residents. Ambulatory blood pressure monitoring was performed twice in 37 older residents of a nursing home in Japan. Subjects included 18 ambulatory residents who had no limitation on physical activity and 19 bedridden residents who did not participate in physical activity. Outcome measures were the 24-hour, daytime, and nighttime blood pressure levels and their variability. The 24-hour and daytime variability of systolic blood pressure (SBP) was significantly greater in ambulatory than in bedridden subjects, whereas nighttime variability was similar. Significant correlations in SBP averaged for the whole day, daytime, and nighttime were observed between the two examinations in ambulatory (r =.80-.83) and bedridden (r =.83-.91) subjects, but the variabilities of SBP for the whole day and during the daytime of the first measurement were correlated with those of the second measurement in bedridden (r =.67 and r =.47, respectively) but not in ambulatory (r =.39 and r =.28, respectively) subjects. Significant correlations were found between the nocturnal SBP changes on the two occasions in both ambulatory (r =.50) and bedridden (r =.51) subjects, but the dipper versus nondipper profiles, defined as a reduction in SBP of greater than 10% versus not, showed low reproducibility in ambulatory subjects; five ambulatory subjects (28%) and one bedridden subject (5%) showed divergent profiles between the two examinations. The reproducibility of blood pressure variation in nursing home residents is influenced by ambulation.
Diedrichs, Phillippa C; Lee, Christina
2010-06-01
Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.
Averaging and sampling for magnetic-observatory hourly data
Directory of Open Access Journals (Sweden)
J. J. Love
2010-11-01
Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
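The contrast between spot sampling and boxcar averaging is easy to reproduce on synthetic data. A minimal sketch, assuming a made-up 1-min signal composed of a slow daily variation plus a faster oscillation (all amplitudes and periods are illustrative, not observatory data):

```python
import numpy as np

# One day of synthetic 1-min data: slow daily variation + 0.7-h oscillation.
t = np.arange(24 * 60) / 60.0                      # time in hours
slow = 40.0 * np.sin(2 * np.pi * t / 24.0)         # daily variation
fast = 5.0 * np.sin(2 * np.pi * t / 0.7)           # sub-hourly variation
x = slow + fast

spot = x[::60]                                     # instantaneous "spot" hourly values
boxcar = x.reshape(24, 60).mean(axis=1)            # simple 1-h "boxcar" averages

# Fast-signal content surviving in each hourly product:
spot_resid = spot - slow[::60]
boxcar_resid = boxcar - slow.reshape(24, 60).mean(axis=1)
```

The spot values carry the fast oscillation through at essentially full amplitude (aliasing), while the 1-h boxcar strongly attenuates it, illustrating the trade-off between aliasing and amplitude distortion discussed in the abstract.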
Directory of Open Access Journals (Sweden)
G. H. de Rooij
2009-07-01
Full Text Available Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression for the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.
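Why an upscaled conductivity depends on the flow configuration can be illustrated with the classic layered-medium bounds, a textbook special case rather than the paper's general derivation: the thickness-weighted arithmetic mean applies to flow along the layers, the harmonic mean to flow across them (all values below are invented).

```python
import numpy as np

K = np.array([1.0, 10.0, 100.0])   # layer conductivities (illustrative units)
d = np.array([0.5, 0.3, 0.2])      # layer thicknesses

# Flow parallel to the layers: thickness-weighted arithmetic mean.
K_parallel = np.sum(d * K) / d.sum()

# Flow perpendicular to the layers: thickness-weighted harmonic mean.
K_series = d.sum() / np.sum(d / K)
```

The two means can differ by more than an order of magnitude for the same medium, which is exactly why a single closed-form effective conductivity is hard to obtain in general.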
Safety Impact of Average Speed Control in the UK
DEFF Research Database (Denmark)
Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert
2016-01-01
Early automatic speed control was point-based, but in recent years a potentially more effective alternative automatic speed control method has been introduced. This method is based upon records of drivers' average travel speed over selected sections of the road and is normally called average speed control in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in the number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control.
on the performance of Autoregressive Moving Average Polynomial
African Journals Online (AJOL)
Timothy Ademakinwa
Distributed Lag (PDL) model, Autoregressive Polynomial Distributed Lag ... Moving Average Polynomial Distributed Lag (ARMAPDL) model. ..... Global Journal of Mathematics and Statistics. Vol. 1. ... Business and Economic Research Center.
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
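The sequential-optimization idea can be sketched by brute-force dynamic programming over sets of permutations still consistent with the comparisons made so far; for n = 8 the quoted result corresponds to an average of 620160/8! ≈ 15.38 comparisons. The sketch below is a simplified illustration, tractable only for very small n: it computes the minimum total leaf depth (external path length), which divided by n! gives the minimum average depth.

```python
from functools import lru_cache
from itertools import permutations

def min_total_depth(n):
    """Minimum external path length over all comparison decision trees
    that sort n distinct elements (brute force, very small n only)."""
    full = frozenset(permutations(range(n)))

    @lru_cache(maxsize=None)
    def best(state):
        if len(state) == 1:
            return 0  # ordering determined: leaf reached
        costs = []
        for i in range(n):
            for j in range(i + 1, n):
                # Split the consistent orderings by the answer to "i < j?"
                lo = frozenset(p for p in state if p.index(i) < p.index(j))
                hi = state - lo
                if lo and hi:  # only informative comparisons
                    costs.append(best(lo) + best(hi))
        # Every permutation in `state` passes through this node once.
        return len(state) + min(costs)

    return best(full)
```

For n = 3 the optimum tree has leaf depths 2, 3, 3, 2, 3, 3, giving a total of 16 and an average of 16/6 comparisons; the extensions of dynamic programming used in the paper make the n = 8 case feasible.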
Comparison of Interpolation Methods as Applied to Time Synchronous Averaging
National Research Council Canada - National Science Library
Decker, Harry
1999-01-01
Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
Light-cone averaging in cosmology: formalism and applications
International Nuclear Information System (INIS)
Gasperini, M.; Marozzi, G.; Veneziano, G.; Nugier, F.
2011-01-01
We present a general gauge invariant formalism for defining cosmological averages that are relevant for observations based on light-like signals. Such averages involve either null hypersurfaces corresponding to a family of past light-cones or compact surfaces given by their intersection with timelike hypersurfaces. Generalized Buchert-Ehlers commutation rules for derivatives of these light-cone averages are given. After introducing some adapted "geodesic light-cone" coordinates, we give explicit expressions for averaging the redshift to luminosity-distance relation and the so-called "redshift drift" in a generic inhomogeneous Universe.
Interaction, transference, and subjectivity
DEFF Research Database (Denmark)
Lundgaard Andersen, Linda
2012-01-01
Fieldwork is one of the important methods in educational, social, and organisational research. In fieldwork, the researcher takes residence for a shorter or longer period amongst the subjects and settings to be studied. The aim of this is to study the culture of people: how people seem to make sense of their lives and which moral, professional, and ethical values seem to guide their behaviour and attitudes. In fieldwork, the researcher has to balance participation and observation in her attempts at representation. Consequently, the researcher's academic and life-historical subjectivity is also subjected to psychodynamic processes. In this article, I draw upon a number of research inquiries to illustrate how psychodynamic processes influence research processes: data production, research questions and methodology, relations to informants, as well as interpretation and analysis. I further
Are average and symmetric faces attractive to infants? Discrimination and looking preferences.
Rhodes, Gillian; Geddes, Keren; Jeffery, Linda; Dziurawiec, Suzanne; Clark, Alison
2002-01-01
Young infants prefer to look at faces that adults find attractive, suggesting a biological basis for some face preferences. However, the basis for infant preferences is not known. Adults find average and symmetric faces attractive. We examined whether 5-8-month-old infants discriminate between different levels of averageness and symmetry in faces, and whether they prefer to look at faces with higher levels of these traits. Each infant saw 24 pairs of female faces. Each pair consisted of two versions of the same face differing either in averageness (12 pairs) or symmetry (12 pairs). Data from the mothers confirmed that adults preferred the more average and more symmetric versions in each pair. The infants were sensitive to differences in both averageness and symmetry, but showed no looking preference for the more average or more symmetric versions. On the contrary, longest looks were significantly longer for the less average versions, and both longest looks and first looks were marginally longer for the less symmetric versions. Mean looking times were also longer for the less average and less symmetric versions, but those differences were not significant. We suggest that the infant looking behaviour may reflect a novelty preference rather than an aesthetic preference.
Digital Pupillometry in Normal Subjects
Rickmann, Annekatrin; Waizel, Maria; Kazerounian, Sara; Szurman, Peter; Wilhelm, Helmut; Boden, Karl T.
2017-01-01
The aim of this study was to evaluate the pupil size of normal subjects at different illumination levels with a novel pupillometer. The pupil size of healthy study participants was measured with an infrared-video PupilX pupillometer (MEye Tech GmbH, Alsdorf, Germany) at five different illumination levels (0, 0.5, 4, 32, and 250 lux). Measurements were performed by the same investigator. Ninety images were acquired during a measurement period of 3 seconds. The absolute linear camera resolution was approximately 20 pixels per mm. This cross-sectional study analysed 490 eyes of 245 subjects (mean age: 51.9 ± 18.3 years, range: 6–87 years). On average, pupil diameter decreased with increasing light intensities for both eyes, with a mean pupil diameter of 5.39 ± 1.04 mm at 0 lux, 5.20 ± 1.00 mm at 0.5 lux, 4.70 ± 0.97 mm at 4 lux, 3.74 ± 0.78 mm at 32 lux, and 2.84 ± 0.50 mm at 250 lux illumination. Furthermore, it was found that anisocoria increased by 0.03 mm per life decade for all illumination levels (R2 = 0.43). Anisocoria was higher under scotopic and mesopic conditions. This study provides additional information to the current knowledge concerning age- and light-related pupil size and anisocoria as a baseline for future patient studies. PMID:28228832
Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection
DEFF Research Database (Denmark)
Bork, Lasse; Møller, Stig Vinther
2015-01-01
We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantially.
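The weight recursion at the heart of Dynamic Model Averaging can be sketched in a toy form; the forgetting factor, likelihood values, and two-model setup below are all illustrative assumptions, not the paper's specification.

```python
import numpy as np

alpha = 0.99                       # forgetting factor (illustrative)
w = np.array([0.5, 0.5])           # initial model probabilities, two models

# Invented one-step predictive likelihoods for three periods;
# model 1 fits consistently better in this toy example.
for lik in ([0.9, 0.2], [0.8, 0.3], [0.7, 0.4]):
    w = w ** alpha                 # forgetting: flattens weights toward 1/K
    w /= w.sum()
    w = w * np.array(lik)          # Bayes update with predictive likelihoods
    w /= w.sum()
```

The forgetting step is what lets the model probabilities (and thus the averaged forecast) drift over time instead of locking onto one model permanently.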
Subjective Wellbeing Among Adults with Diabetes
DEFF Research Database (Denmark)
Holmes-Truscott, Elizabeth; Browne, Jessica L; Pouwer, Frans
2016-01-01
This study examines the subjective wellbeing of Australian adults with diabetes who completed the Diabetes MILES—Australia survey, investigating by diabetes type and treatment, and comparing with the subjective wellbeing of the general Australian adult population, taking into account factors such as diabetes duration, body mass index, number of diabetes-related complications, and depression. Adults with type 2 diabetes using insulin to manage their condition report the lowest levels of subjective wellbeing, and are also most likely to report dissatisfaction with their current health. These findings suggest that living with diabetes, and in particular, living with type 2 diabetes and using insulin, strongly challenges the maintenance of subjective wellbeing.
Delineation of facial archetypes by 3d averaging.
Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G
2004-10-01
The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A 3D surface scanner Fiore and its software were used to acquire the 3D scans of the faces while 3D Rugle3 and locally-developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups; European and Japanese and from children with three previous genetic disorders; Williams syndrome, achondroplasia and Sotos syndrome as well as the normal control group. The method included averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face averaging techniques there was not any warping or filling in the spaces by interpolation; however, this facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have a great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
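The averaging step described above (averaging the corresponding depth coordinates) amounts to a point-wise mean over registered depth maps. A minimal sketch, assuming the scans are already registered on a common x-y grid; the arrays below are flat stand-ins, not real scan data:

```python
import numpy as np

# Two "registered 3D scans" as depth maps: z-values on a shared 4x4 grid.
faces = np.stack([
    np.full((4, 4), 10.0),   # stand-in for subject 1's z coordinates
    np.full((4, 4), 14.0),   # stand-in for subject 2's z coordinates
])

# Holistic average face: per-grid-point mean of z, with no warping
# or interpolation between faces.
average_face = faces.mean(axis=0)
```

Real depth maps would of course vary across the grid; the point is only that the archetype is a per-coordinate mean, which is why no gap-filling by interpolation is needed.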
Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)
2016-05-01
subject to code matrices that follow the structure given by (113):

$$
\begin{bmatrix} \vec{y}_R \\ \vec{y}_I \end{bmatrix}
= \sqrt{\frac{E_s}{2L}}
\begin{bmatrix} G_{R1} & -G_{I1} \\ G_{I2} & G_{R2} \end{bmatrix}
\begin{bmatrix} Q_R & -Q_I \\ Q_I & Q_R \end{bmatrix}
\begin{bmatrix} \vec{b}_R \\ \vec{b}_I \end{bmatrix}
+
\begin{bmatrix} \vec{n}_R \\ \vec{n}_I \end{bmatrix}
$$

with the corresponding form in the sum/difference basis $[\vec{b}_+\ \vec{b}_-]^T$, $[\vec{n}_+\ \vec{n}_-]^T$ given by (115). The average likelihood for type 4 CDMA (116) is a special case of type 1 CDMA with twice the code length and ...
Oppugning the assumptions of spatial averaging of segment and joint orientations.
Pierrynowski, Michael Raymond; Ball, Kevin Arthur
2009-02-09
Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data possessing low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
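One matrix-based alternative of the kind alluded to above is the Euclidean (chordal) mean: average the rotation matrices entry-wise, then project the result back onto SO(3) with an SVD. A minimal sketch (an illustration of the general technique, not the authors' exact implementation):

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z-axis by `deg` degrees, as a 3x3 matrix."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def chordal_mean(rotations):
    """Euclidean (chordal) mean of rotation matrices: arithmetic mean
    of the matrices, projected back onto SO(3) via SVD."""
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:   # enforce a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    return R
```

For two rotations about a common axis this recovers the rotation by the mean angle, whereas naive Euler-parameter averaging can fail badly once the rotations are large or about different axes.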
Czech Academy of Sciences Publication Activity Database
Novotný, Karel
2014-01-01
Vol. 4, No. 1 (2014), pp. 187-195. ISSN 1804-624X. R&D Projects: GA ČR (CZ) GAP401/10/1164. Institutional support: RVO:67985955. Keywords: Levinas; phenomenology; factivity; body; experience. Subject RIV: AA - Philosophy; Religion
Miscellaneous subjects, ch. 18
International Nuclear Information System (INIS)
Brussaard, P.J.; Glaudemans, P.W.M.
1977-01-01
Attention is paid to a variety of subjects related to shell-model applications, e.g. the Lanczos method for matrix diagonalization and truncation methods (seniority truncation, single-particle energy truncation, and diagonal energy truncation), which can be used to reduce the configuration space. Coulomb energies and spurious states are briefly discussed. Finally, attention is paid to the particle-vibrator model.
Jansen, MA, Robert
2016-01-01
Includes one diagnostic test and three complete tests, all questions answered and explained, self-assessment guides, and subject reviews. Also features test strategies, QR codes to short instructional videos, and a detailed appendix with equations, physical constants, and a basic math review.
Gallistel, C R
2012-05-01
Except under unusually favorable circumstances, one can infer from functions obtained by averaging across the subjects neither the form of the function that describes the behavior of the individual subject nor the central tendencies of descriptive parameter values. We should restore the cumulative record to the place of honor as our means of visualizing behavioral change, and we should base our conclusions on analyses that measure where the change occurs in these response-by-response records of the behavior of individual subjects. When that is done, we may find that the extinction of responding to a continuously reinforced stimulus is faster than the extinction of responding to a partially reinforced stimulus in a within-subject design because the latter is signaled extinction. Copyright © 2012 Elsevier B.V. All rights reserved.
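The averaging artifact described above is easy to simulate: if every individual acquires the response abruptly but at a different trial, the group-average curve still rises smoothly. A hedged sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(0)
trials = np.arange(1, 201)

# Each simulated subject acquires the response abruptly (a step function),
# but the trial of acquisition varies across subjects.
n_subjects = 50
onsets = rng.integers(20, 180, size=n_subjects)
individual = (trials[None, :] >= onsets[:, None]).astype(float)

# The group average at trial t is the fraction of subjects that have
# acquired by t: a gradual curve that describes no single subject.
group_mean = individual.mean(axis=0)
```

Every individual record is a 0-to-1 step, yet the averaged "learning curve" rises gradually, which is exactly the inferential trap the abstract warns about.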
Digital mammography screening: average glandular dose and first performance parameters
International Nuclear Information System (INIS)
Weigel, S.; Girnus, R.; Czwoydzinski, J.; Heindel, W.; Decker, T.; Spital, S.
2007-01-01
Purpose: The Radiation Protection Commission demanded structured implementation of digital mammography screening in Germany. The main requirements were the installation of digital reference centers and separate evaluation of the fully digitized screening units. Digital mammography screening must meet the quality standards of the European guidelines and must be compared to analog screening results. We analyzed early surrogate indicators of effective screening and dosage levels for the first German digital screening unit in a routine setting after the first half of the initial screening round. Materials and Methods: We used three digital mammography screening units (one full-field digital scanner [DR] and two computed radiography systems [CR]). Each system has been proven to fulfill the requirements of the National and European guidelines. The radiation exposure levels, the medical workflow and the histological results were documented in a central electronic screening record. Results: In the first year 11,413 women were screened (participation rate 57.5 %). The parenchymal dosages for the three mammographic X-ray systems, averaged for the different breast sizes, were 0.7 (DR), 1.3 (CR), 1.5 (CR) mGy. 7 % of the screened women needed to undergo further examinations. The total number of screen-detected cancers was 129 (detection rate 1.1 %). 21 % of the carcinomas were classified as ductal carcinomas in situ, 40 % of the invasive carcinomas had a histological size ≤ 10 mm and 61 % < 15 mm. The frequency distribution of pT-categories of screen-detected cancer was as follows: pTis 20.9 %, pT1 61.2 %, pT2 14.7 %, pT3 2.3 %, pT4 0.8 %. 73 % of the invasive carcinomas were node-negative. (orig.)
The Lake Wobegon effect: are all cancer patients above average?
Wolf, Jacqueline H; Wolf, Kevin S
2013-12-01
When elderly patients face a terminal illness such as lung cancer, most are unaware that what we term in this article "the Lake Wobegon effect" taints the treatment advice imparted to them by their oncologists. In framing treatment plans, cancer specialists tend to intimate that elderly patients are like the children living in Garrison Keillor's mythical Lake Wobegon: above average and thus likely to exceed expectations. In this article, we use the story of our mother's death from lung cancer to investigate the consequences of elderly people's inability to reconcile the grave reality of their illness with the overly optimistic predictions of their physicians. In this narrative analysis, we examine the routine treatment of elderly, terminally ill cancer patients through alternating lenses: the lens of a historian of medicine who also teaches ethics to medical students and the lens of an actuary who is able to assess physicians' claims for the outcome of medical treatments. We recognize that a desire to instill hope in patients shapes physicians' messages. We argue, however, that the automatic optimism conveyed to elderly, dying patients by cancer specialists prompts those patients to choose treatment that is ineffective and debilitating. Rather than primarily prolong life, treatments most notably diminish patients' quality of life, weaken the ability of patients and their families to prepare for their deaths, and contribute significantly to the unsustainable costs of the U.S. health care system. The case described in this article suggests how physicians can better help elderly, terminally ill patients make medical decisions that are less damaging to them and less costly to the health care system. © 2013 Milbank Memorial Fund.
Interpreting Bivariate Regression Coefficients: Going beyond the Average
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
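The idea in this note can be sketched concretely; a minimal illustration (assuming NumPy; the data are made up), where each classical average falls out of an intercept-only least-squares fit on a suitable transformation of the data:

```python
import numpy as np

x = np.array([2.0, 4.0, 8.0])          # illustrative data
X = np.ones((len(x), 1))               # intercept-only design matrix

def ols_intercept(y):
    # Least-squares fit of y on a constant: the estimate is the sample mean
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

arithmetic = ols_intercept(x)                  # plain mean
geometric = np.exp(ols_intercept(np.log(x)))   # mean of logs, exponentiated
harmonic = 1.0 / ols_intercept(1.0 / x)        # mean of reciprocals, inverted
```

The geometric and harmonic means arise from the same regression applied to log-transformed and reciprocal data, which is the kind of reframing the note describes.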
Average stress in a Stokes suspension of disks
Prosperetti, Andrea
2004-01-01
The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is
47 CFR 1.959 - Computation of average terrain elevation.
2010-10-01
... 47 CFR Part 1 (2010-10-01), Telecommunication: Federal Communications Commission, General Practice and Procedure, Wireless Radio Services Applications and Proceedings, Application Requirements and Procedures. § 1.959 Computation of average terrain elevation. Except a...
47 CFR 80.759 - Average terrain elevation.
2010-10-01
... 47 CFR Part 80 (2010-10-01), Telecommunication: Federal Communications Commission (continued), Safety and Special Radio Services, Stations in the Maritime Services, Standards for Computing Public Coast Station VHF Coverage. § 80.759 Average terrain elevation. (a)(1) Draw radials...
The average covering tree value for directed graph games
Khmelnitskaya, Anna Borisovna; Selcuk, Özer; Talman, Dolf
We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all covering
The Average Covering Tree Value for Directed Graph Games
Khmelnitskaya, A.; Selcuk, O.; Talman, A.J.J.
2012-01-01
Abstract: We introduce a single-valued solution concept, the so-called average covering tree value, for the class of transferable utility games with limited communication structure represented by a directed graph. The solution is the average of the marginal contribution vectors corresponding to all
18 CFR 301.7 - Average System Cost methodology functionalization.
2010-04-01
... 18 CFR Part 301 (2010-04-01), Conservation of Power and Water Resources: Federal Energy Regulatory Commission, Department of Energy, Regulations for Federal Power Marketing Administrations, Average System Cost Methodology for Sales from Utilities to Bonneville Power Administration under Northwest Power...
Analytic computation of average energy of neutrons inducing fission
International Nuclear Information System (INIS)
Clark, Alexander Rich
2016-01-01
The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
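The average energy of neutrons inducing fission is a flux- and cross-section-weighted mean; a generic numerical sketch (not the report's actual method; the function name, grid, and flat test spectrum are illustrative assumptions):

```python
import numpy as np

def average_inducing_energy(E, flux, sigma_f):
    # Flux-weighted mean energy of neutrons inducing fission:
    # <E> = integral(E * phi(E) * sigma_f(E) dE) / integral(phi(E) * sigma_f(E) dE),
    # evaluated with the trapezoidal rule on the grid E
    w = flux * sigma_f                          # fission reaction-rate density weight
    trap = lambda y: float(np.sum((y[1:] + y[:-1]) * np.diff(E)) / 2.0)
    return trap(E * w) / trap(w)

E = np.linspace(0.0, 2.0, 101)   # energy grid (illustrative units)
flat = np.ones_like(E)
# A flat flux and flat cross section give the interval midpoint
```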
An alternative scheme of the Bogolyubov's average method
International Nuclear Information System (INIS)
Ortiz Peralta, T.; Ondarza R, R.; Camps C, E.
1990-01-01
In this paper the average energy and the magnetic moment conservation laws in the drift theory of charged-particle motion are obtained in a simple way. The approach starts from the energy and magnetic moment conservation laws, and afterwards the average is performed. This scheme is more economical, in terms of time and algebraic calculation, than the usual procedure of Bogolyubov's method. (Author)
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan M.; Chikalov, Igor; Moshkov, Mikhail
2015-01-01
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees
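The stated value is easy to check numerically and to compare against the entropy lower bound (the comparison is our own illustrative addition):

```python
from math import factorial, log2

min_avg_depth = 620160 / factorial(8)   # 620160/8! = 620160/40320 comparisons
entropy_bound = log2(factorial(8))      # information-theoretic lower bound log2(8!)

# The optimal tree's average depth exceeds the entropy bound only slightly
```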
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
Self-similarity of higher-order moving averages
Arianos, Sergio; Carbone, Anna; Türk, Christian
2011-10-01
In this work, higher-order moving average polynomials are defined by straightforward generalization of the standard moving average. The self-similarity of the polynomials is analyzed for fractional Brownian series and quantified in terms of the Hurst exponent H by using the detrending moving average method. We prove that the exponent H of the fractional Brownian series and of the detrending moving average variance asymptotically agree for the first-order polynomial. Such asymptotic values are compared with the results obtained from simulations. The higher-order polynomials correspond to trend estimates at shorter time scales as the degree of the polynomial increases. Importantly, increasing the polynomial degree does not require changing the moving average window. Thus trends at different time scales can be obtained on data sets of the same size. These polynomials could be interesting for applications relying on trend estimates over different time horizons (financial markets) or on filtering at different frequencies (image analysis).
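A minimal sketch of the first-order detrending moving average variance σ²(n), assuming NumPy and a trailing window (the helper name is ours; the paper's higher-order polynomial detrending is not reproduced here):

```python
import numpy as np

def dma_variance(y, n):
    # sigma^2(n): mean squared deviation of the series from its
    # trailing moving average of window n (first-order DMA)
    ma = np.convolve(y, np.ones(n) / n, mode="valid")
    resid = y[n - 1:] - ma
    return float(np.mean(resid ** 2))

# Sanity check on a pure linear trend y = t: the residual is the constant
# (n - 1)/2, so sigma^2(n) = ((n - 1)/2)**2 exactly.
```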
Anomalous behavior of q-averages in nonextensive statistical mechanics
International Nuclear Information System (INIS)
Abe, Sumiyoshi
2009-01-01
A generalized definition of average, termed the q-average, is widely employed in the field of nonextensive statistical mechanics. Recently, it has however been pointed out that such an average value may behave unphysically under specific deformations of probability distributions. Here, the following three issues are discussed and clarified. Firstly, the deformations considered are physical and may be realized experimentally. Secondly, in view of the thermostatistics, the q-average is unstable in both finite and infinite discrete systems. Thirdly, a naive generalization of the discussion to continuous systems misses a point, and a norm better than the L^1-norm should be employed for measuring the distance between two probability distributions. Consequently, stability of the q-average is shown not to be established in all of the cases.
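The q-average in question is the normalized (escort) average of nonextensive statistics; a small sketch, assuming NumPy:

```python
import numpy as np

def q_average(p, a, q):
    # Normalized q-average (escort average):
    # <A>_q = sum_i p_i**q * a_i / sum_i p_i**q.
    # q = 1 recovers the ordinary expectation value; for q != 1 small
    # deformations of p can shift <A>_q strongly, the instability at issue.
    pq = np.asarray(p, dtype=float) ** q
    return float(np.dot(pq, a) / pq.sum())
```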
Bootstrapping pre-averaged realized volatility under market microstructure noise
DEFF Research Database (Denmark)
Hounyo, Ulrich; Goncalves, Sílvia; Meddahi, Nour
The main contribution of this paper is to propose a bootstrap method for inference on integrated volatility based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaging is done over all possible overlapping blocks of consecutive observations. The overlapping nature of the pre-averaged returns implies that these are k_n-dependent with k_n growing slowly with the sample size n. This motivates the application of a blockwise bootstrap method. We show that the "blocks of blocks" bootstrap method suggested by Politis and Romano (1992) (and further studied by Bühlmann and Künsch (1995)) is valid only when volatility is constant. The failure of the blocks of blocks bootstrap is due to the heterogeneity of the squared pre-averaged returns when volatility is stochastic. To preserve both the dependence and the heterogeneity of squared pre-averaged returns, we propose a novel procedure…
A Conceptual Framework for Leisure and Subjective Well-Being
Kim, Byunggook
2009-01-01
The purpose of this study was to examine a conceptual framework for an individual's subjective perception of leisure that contributes to Subjective Well-Being (SWB). More specifically, this study was an attempt to examine causal relationships among social cognitive variables, subjective perception of leisure, and SWB. A survey was administered to…
Perceived Social Policy Fairness and Subjective Wellbeing: Evidence from China
Sun, Feng; Xiao, Jing Jian
2012-01-01
This study examined the relationship between perceived fairness of social policies and subjective well-being. Two types of policies examined were related to income distribution and social security. Subjective well-being was measured by work and life satisfaction. In addition, subjective well-beings between different income, age, and education…
Cardiovascular risk factors in subjects with psoriasis
DEFF Research Database (Denmark)
Jensen, Peter; Thyssen, Jacob P; Zachariae, Claus
2013-01-01
Background Epidemiological data have established an association between cardiovascular disease and psoriasis. Only one general population study has so far compared prevalences of cardiovascular risk factors among subjects with psoriasis and control subjects. We aimed to determine the prevalence of cardiovascular risk factors in subjects with and without psoriasis in the general population. Methods During 2006-2008, a cross-sectional study was performed in the general population in Copenhagen, Denmark. A total of 3471 subjects participated in a general health examination that included assessment of current … between subjects with and without psoriasis with regard to traditional cardiovascular risk factors. Conclusions Our results contrast with the hitherto-reported increased prevalence of metabolic syndrome in subjects with psoriasis in the general US population. However, our results agree with those of other…
Medical examination of A-bomb survivors on Nagasaki A-bomb Casualty
International Nuclear Information System (INIS)
Tagawa, Masuko
1996-01-01
Medical examination of A-bomb survivors was described and discussed with respect to its history, changes in examinee numbers over time, actions for subjects not examined, changes in prevalence, cancer examination, examination of the second generation, and education and enlightenment. Free examination of the survivors began in 1953, and the present casualty council was established in 1958 under the law for medical care for the survivors. Systematic examination started in 1967, examination of the 2nd generation in 1974, and cancer examination in 1988. The number of survivors peaked at 82,439 in 1974 and decreased to 61,388 in 1994, when the actual number of examinees, which has recently stabilized, was 32,294 with an average age of 64 y. The examination is done by tour or at the Center. Subjects receive information about the examination twice by mail. Hematopoietic diseases such as anemia, hepatic diseases, metabolic and endocrine diseases such as diabetes, renal impairment and others (mostly hyperlipidemia) have been increasing recently. The number of examinees for cancer is increasing. Lung cancer is examined by direct roentgenography, gastric cancer by transillumination, and other cancers, such as myeloma and those of the large bowel, uterus and mammary gland, by the respective suitable methods. Health education and enlightenment appear to have been effective. (H.O.)
Parameterized examination in econometrics
Malinova, Anna; Kyurkchiev, Vesselin; Spasov, Georgi
2018-01-01
The paper presents a parameterization of basic types of exam questions in Econometrics. This algorithm is used to automate and facilitate the process of examination, assessment and self-preparation of a large number of students. The proposed parameterization of testing questions reduces the time required to author tests and course assignments. It enables tutors to generate a large number of different but equivalent dynamic questions (with dynamic answers) on a certain topic, which are automatically assessed. The presented methods are implemented in DisPeL (Distributed Platform for e-Learning) and provide questions in the areas of filtering and smoothing of time-series data, forecasting, building and analysis of single-equation econometric models. Questions also cover elasticity, average and marginal characteristics, product and cost functions, measurement of monopoly power, supply, demand and equilibrium price, consumer and product surplus, etc. Several approaches are used to enable the required numerical computations in DisPeL - integration of third-party mathematical libraries, developing our own procedures from scratch, and wrapping our legacy math codes in order to modernize and reuse them.
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of decision tree is bounded from below by the entropy of probability distribution (with a multiplier 1/log_2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study in the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of decision tree exceeds the lower bound by at most one. The minimum average depth reaches the maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table and the problem of implementing the modulo 2 summation function). These problems have the minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
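The entropy lower bound mentioned above is simple to compute; a sketch (our own helper, for a problem over a k-valued information system):

```python
from math import log2

def entropy_lower_bound(probs, k=2):
    # Lower bound on the minimum average decision-tree depth:
    # H(p) / log2(k), where H is the Shannon entropy of the problem's
    # probability distribution and k is the number of attribute values
    h = -sum(p * log2(p) for p in probs if p > 0)
    return h / log2(k)

# A uniform distribution over 8 outcomes with binary attributes gives bound 3,
# matching the intuition that 3 yes/no questions are needed.
```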
Lateral dispersion coefficients as functions of averaging time
International Nuclear Information System (INIS)
Sheih, C.M.
1980-01-01
Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate among various processes in studies of plume dispersion.
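Turner's power-law adjustment between averaging times can be sketched as follows (a hedged illustration; the exponent 0.17 is a commonly quoted value, not taken from this paper):

```python
def scale_concentration(c_ref, t_ref, t, p=0.17):
    # Power-law relation between concentrations at two averaging times:
    # C(t) = C_ref * (t_ref / t)**p, so longer averaging lowers the peak
    return c_ref * (t_ref / t) ** p
```

For example, scaling a 15 min average to a 60 min averaging time multiplies the concentration by (15/60)**0.17, roughly a 20 % reduction.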
Electronic cigarettes: abuse liability, topography and subjective effects.
Evans, Sarah E; Hoffman, Allison C
2014-05-01
To review the available evidence evaluating the abuse liability, topography, subjective effects, craving and withdrawal suppression associated with e-cigarette use in order to identify information gaps and provide recommendations for future research. Literature searches were conducted between October 2012 and January 2014 using five electronic databases. Studies were included in this review if they were peer-reviewed scientific journal articles evaluating clinical laboratory studies, national surveys or content analyses. A total of 15 peer-reviewed articles regarding behavioural use and effects of e-cigarettes published between 2010 and 2014 were included in this review. Abuse liability studies are limited in their generalisability. Topography (consumption behaviour) studies found that, compared with traditional cigarettes, e-cigarette average puff duration was significantly longer, and e-cigarette use required stronger suction. Data on e-cigarette subjective effects (such as anxiety, restlessness, concentration, alertness and satisfaction) and withdrawal suppression are limited and inconsistent. In general, study data should be interpreted with caution, given limitations associated with comparisons of novel and usual products, as well as the possible effects associated with subjects' previous experience/inexperience with e-cigarettes. Currently, very limited information is available on abuse liability, topography and subjective effects of e-cigarettes. Opportunities to examine extended e-cigarette use in a variety of settings with experienced e-cigarette users would help to more fully assess topography as well as behavioural and subjective outcomes. In addition, assessment of 'real-world' use, including amount and timing of use and responses to use, would clarify behavioural profiles and potential adverse health effects.
On averaging the Kubo-Hall conductivity of magnetic Bloch bands leading to Chern numbers
International Nuclear Information System (INIS)
Riess, J.
1997-01-01
The authors re-examine the topological approach to the integer quantum Hall effect in its original form, where an average of the Kubo-Hall conductivity of a magnetic Bloch band has been considered. For the precise definition of this average it is crucial to make a sharp distinction between the discrete Bloch wave numbers k_1, k_2 and the two continuous integration parameters α_1, α_2. The average over the parameter domain of α_1, α_2 is performed at fixed k_1, k_2. They show how this can be transformed into a single integral over the continuous magnetic Brillouin zone in the parameters α_j (j = 1, 2; n_j denotes the number of unit cells in the j-direction), keeping k_1, k_2 fixed. This average prescription for the Hall conductivity of a magnetic Bloch band is exactly the same as the one used for a many-body system in the presence of disorder.
2010-07-01
... volume of gasoline produced or imported in batch i. Si=The sulfur content of batch i determined under § 80.330. n=The number of batches of gasoline produced or imported during the averaging period. i=Individual batch of gasoline produced or imported during the averaging period. (b) All annual refinery or...
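The excerpted definitions describe a volume-weighted average of batch sulfur contents over the averaging period; a minimal sketch (variable and function names are ours):

```python
def average_sulfur(batches):
    # batches: iterable of (volume, sulfur) pairs, one per batch i in the
    # averaging period. Volume-weighted average: sum(V_i * S_i) / sum(V_i)
    total_volume = sum(v for v, _ in batches)
    return sum(v * s for v, s in batches) / total_volume
```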
2010-07-01
... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...
Directory of Open Access Journals (Sweden)
Aneta Rita Borkowska
2014-05-01
Full Text Available BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors’ experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in such indicators of the visual attention process pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average, in terms of concentration, work pace, efficiency and perception.
Average inactivity time model, associated orderings and reliability properties
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.
Average L-shell fluorescence, Auger, and electron yields
International Nuclear Information System (INIS)
Krause, M.O.
1980-01-01
The dependence of the average L-shell fluorescence and Auger yields on the initial vacancy distribution is shown to be small. By contrast, the average electron yield pertaining to both Auger and Coster-Kronig transitions is shown to display a strong dependence. Numerical examples are given on the basis of Krause's evaluation of subshell radiative and radiationless yields. Average yields are calculated for widely differing vacancy distributions and are intercompared graphically for the 3 subshell yields in most cases of inner-shell ionization
Simultaneous inference for model averaging of derived parameters
DEFF Research Database (Denmark)
Jensen, Signe Marie; Ritz, Christian
2015-01-01
Model averaging is a useful approach for capturing uncertainty due to model selection. Currently, this uncertainty is often quantified by means of approximations that do not easily extend to simultaneous inference. Moreover, in practice there is a need for both model averaging and simultaneous inference for derived parameters calculated in an after-fitting step. We propose a method for obtaining asymptotically correct standard errors for one or several model-averaged estimates of derived parameters and for obtaining simultaneous confidence intervals that asymptotically control the family-wise error rate…
Salecker-Wigner-Peres clock and average tunneling times
International Nuclear Information System (INIS)
Lunardi, Jose T.; Manzoni, Luiz A.; Nystrom, Andrew T.
2011-01-01
The quantum clock of Salecker-Wigner-Peres is used, by performing a post-selection of the final state, to obtain average transmission and reflection times associated to the scattering of localized wave packets by static potentials in one dimension. The behavior of these average times is studied for a Gaussian wave packet, centered around a tunneling wave number, incident on a rectangular barrier and, in particular, on a double delta barrier potential. The regime of opaque barriers is investigated and the results show that the average transmission time does not saturate, showing no evidence of the Hartman effect (or its generalized version).
Time average vibration fringe analysis using Hilbert transformation
International Nuclear Information System (INIS)
Kumar, Upputuri Paul; Mohan, Nandigana Krishna; Kothiyal, Mahendra Prasad
2010-01-01
Quantitative phase information from a single interferogram can be obtained using the Hilbert transform (HT). We have applied the HT method for quantitative evaluation of Bessel fringes obtained in time average TV holography. The method requires only one fringe pattern for the extraction of vibration amplitude and reduces the complexity in quantifying the data experienced in the time average reference bias modulation method, which uses multiple fringe frames. The technique is demonstrated for the measurement of out-of-plane vibration amplitude on a small scale specimen using a time average microscopic TV holography system.
Average multiplications in deep inelastic processes and their interpretation
International Nuclear Information System (INIS)
Kiselev, A.V.; Petrov, V.A.
1983-01-01
Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. With increasing energy of the final hadron state, the leading contribution to the average multiplicity comes from a parton subprocess due to the production of massive quark and gluon jets and their further fragmentation, while the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e+e- annihilation at high energies tends to unity.
Fitting a function to time-dependent ensemble averaged data
DEFF Research Database (Denmark)
Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders
2018-01-01
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). … We apply the method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
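The core of such a fit is least squares with a full error covariance; a generic generalized-least-squares sketch (not the authors' WLS-ICE implementation, which also handles estimation of the covariance itself):

```python
import numpy as np

def gls_fit(X, y, cov):
    # Generalized least squares: beta = (X^T C^-1 X)^-1 X^T C^-1 y,
    # where C captures the correlations between the averaged data points
    Ci = np.linalg.inv(cov)
    A = X.T @ Ci @ X
    b = X.T @ Ci @ y
    return np.linalg.solve(A, b)
```

With C set to the identity this reduces to ordinary least squares; using the true covariance of the ensemble-averaged observables is what corrects the error estimates.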
Average wind statistics for SRP area meteorological towers
International Nuclear Information System (INIS)
Laurinat, J.E.
1987-01-01
A quality-assured set of average wind statistics for the seven SRP area meteorological towers has been calculated for the five-year period 1982--1986 at the request of DOE/SR. A similar set of statistics was previously compiled for the years 1975--1979. The updated wind statistics will replace the old statistics as the meteorological input for calculating atmospheric radionuclide doses from stack releases, and will be used in the annual environmental report. This report details the methods used to average the wind statistics and to screen out bad measurements, and presents wind roses generated by the averaged statistics.
Czech Academy of Sciences Publication Activity Database
Dušek, Libor; Kalíšková, Klára; Münich, Daniel
2013-01-01
Roč. 63, č. 6 (2013), s. 474-504 ISSN 0015-1920 R&D Projects: GA MŠk(CZ) SVV 267801/2013 Institutional support: PRVOUK-P23 Keywords : TAXBEN models * average tax rates * marginal tax rates Subject RIV: AH - Economics Impact factor: 0.358, year: 2013 http://journal.fsv.cuni.cz/storage/1287_dusek.pdf
2012-09-13
AFIT/DS/ENS/12-09: The Average Network Flow Problem. This report is a work of the U.S. Government and is not subject to copyright protection in the United States. … value-focused thinking (VFT) methods are used sparingly, as is the case across the entirety of the supply chain literature. We provide a VFT tutorial for supply chain
Czech Academy of Sciences Publication Activity Database
Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel
2015-01-01
Roč. 52, č. 2 (2015), s. 419-440 ISSN 0021-9002 Grant - others:GA AV ČR(CZ) 171396 Institutional support: RVO:67985556 Keywords : Dominated Convergence theorem for the expected average criterion * Discrepancy function * Kolmogorov inequality * Innovations * Strong sample-path optimality Subject RIV: BC - Control Systems Theory Impact factor: 0.665, year: 2015 http://library.utia.cas.cz/separaty/2015/E/sladky-0449029.pdf
Taylor, Julie Lounds; Henninger, Natalie A.; Mailick, Marsha R.
2015-01-01
This study examined correlates of participation in postsecondary education and employment over 12 years for 73 adults with autism spectrum disorders and average-range IQ whose families were part of a larger, longitudinal study. Correlates included demographic (sex, maternal education, paternal education), behavioral (activities of daily living,…
Thermodynamic Integration Methods, Infinite Swapping and the Calculation of Generalized Averages
Doll, J. D.; Dupuis, P.; Nyquist, P.
2016-01-01
In the present paper we examine the risk-sensitive and sampling issues associated with the problem of calculating generalized averages. By combining thermodynamic integration and Stationary Phase Monte Carlo techniques, we develop an approach for such problems and explore its utility for a prototypical class of applications.
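Thermodynamic integration for a generalized average can be illustrated with a toy Gaussian model (entirely our own example, not the paper's method): for the weight exp(-λx²), ⟨x²⟩_λ = 1/(2λ), and integrating this over λ recovers the free-energy difference ½ ln(λ₂/λ₁).

```python
import numpy as np

def free_energy_difference(lam1, lam2, npts=20001):
    # Thermodynamic integration for the Gaussian weight exp(-lam * x^2):
    # dA/dlam = <x^2>_lam = 1/(2*lam), so
    # A(lam2) - A(lam1) = integral of 1/(2*lam) from lam1 to lam2
    lam = np.linspace(lam1, lam2, npts)
    integrand = 1.0 / (2.0 * lam)
    # trapezoidal rule over the lambda grid
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(lam)) / 2.0)
```

In practice the exact average ⟨x²⟩_λ is unknown and must itself be sampled, which is where the Stationary Phase Monte Carlo machinery of the paper enters.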
Ismail, Siti Noor
2014-01-01
Purpose: This study attempted to determine whether the dimensions of TQM practices are predictors of school climate. It aimed to identify the level of TQM practices and school climate in three different categories of schools, namely high, average and low performance schools. The study also sought to examine which dimensions of TQM practices…
Hunt, S. Jane; Krueger, Lacy E.; Limberg, Dodie
2017-01-01
Interparental conflict has been shown to have a negative effect on the academic success of children and adolescents. This study examined the relationship between college students' (N = 143) perceived levels of interparental conflict, their living arrangement, and their current self-reported grade point average. Participants who experienced more…
Wicherts, Jelte M.; Dolan, Conor V.; Carlson, Jerry S.; van der Maas, Han L. J.
2010-01-01
This paper presents a systematic review of published data on the performance of sub-Saharan Africans on Raven's Progressive Matrices. The specific goals were to estimate the average level of performance, to study the Flynn Effect in African samples, and to examine the psychometric meaning of Raven's test scores as measures of general intelligence.…
Griffin, Tyler J.; Hilton, John, III.; Plummer, Kenneth; Barret, Devynne
2014-01-01
One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations…
Lee, Jennifer
2012-01-01
The intent of this study was to examine the relationship between media multitasking orientation and grade point average. The study utilized a mixed-methods approach to investigate the research questions. In the quantitative section of the study, the primary method of statistical analyses was multiple regression. The independent variables for the…
Grade point average and biographical data in personal resumes: Predictors of finding employment
Sulastri, A.; Handoko, M.; Janssens, J.M.A.M.
2015-01-01
This study aimed to examine relationships between graduates' grade point average (GPA), biographical data and success in finding a job in general and a psychology-based job in particular. Two hundred six psychology graduates participated in a two-wave longitudinal study. Biographical data assessed
Cohen-Schotanus, Janke; Muijtjens, Arno M. M.; Reinders, Jan J.; Agsteribbe, Jessica; van Rossum, Herman J. M.; van der Vleuten, Cees P. M.
2006-01-01
PURPOSE To ascertain whether the grade point average (GPA) of school-leaving examinations is related to study success, career development and scientific performance. The problem of restriction of range was expected to be partially reduced due to the use of a national lottery system weighted in
Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock
Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.
2001-01-01
Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, the probability distribution P̄(log E) of the wave field E is observed to be a power law, with the bar denoting this averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain a power-law spatially-averaged distribution P̄(log E) when the observed power-law variations of the mean and standard deviation of log E with position are combined with the lognormal statistics predicted by SGT at each location.
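The mechanism described, locally lognormal statistics in log E whose mean and standard deviation vary as power laws with position, pooled over the foreshock, can be sketched numerically. All functional forms and parameter values below are illustrative assumptions, not those fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_log_e(x, n=1000):
    """Sample log E at position x: SGT predicts lognormal field statistics
    locally, i.e. Gaussian log E; the power-law exponents here are assumed."""
    mu = -2.0 * x**0.5     # assumed power-law variation of the mean
    sigma = 0.5 * x**0.3   # assumed power-law variation of the std deviation
    return rng.normal(mu, sigma, size=n)

# Spatially averaged distribution: pool samples over many foreshock positions.
positions = np.linspace(0.1, 10.0, 100)
pooled = np.concatenate([sample_log_e(x) for x in positions])
hist, edges = np.histogram(pooled, bins=50, density=True)
# hist now approximates the position-averaged distribution of log E.
```

Pooling many narrow Gaussians whose centers and widths drift as power laws is what lets the spatial average develop a much broader tail than any single location exhibits.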
Endogenous Information, Risk Characterization, and the Predictability of Average Stock Returns
Directory of Open Access Journals (Sweden)
Pradosh Simlai
2012-09-01
In this paper we provide a new type of risk characterization of the predictability of two widely known abnormal patterns in average stock returns: momentum and reversal. The purpose is to illustrate the relative importance of common risk factors and endogenous information. Our results demonstrate that in the presence of zero-investment factors, spreads in average momentum and reversal returns correspond to spreads in the slopes of the endogenous information. The empirical findings support the view that various classes of firms react differently to volatility risk, and that endogenous information harbors important sources of potential risk loadings. Taken together, our results suggest that returns are influenced by random endogenous information flow, which is asymmetric in nature and can be used as a performance attribution factor. If one fails to incorporate the existing asymmetric endogenous information hidden in the historical behavior, any attempt to explore average stock return predictability will be subject to an unquantified specification bias.
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
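The Hargreaves step and the final water balance can be sketched for a single 1 km cell. The standard Hargreaves formula is ET0 = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax − Tmin); all input values below are illustrative, not Bolivian station data:

```python
import numpy as np

def hargreaves_et0(ra, tmax, tmin):
    """Hargreaves reference evapotranspiration (mm/day).
    ra: exoatmospheric (clear-sky) solar radiation expressed as mm/day of
    equivalent evaporation; tmax, tmin: monthly average temperatures in deg C."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * np.sqrt(np.maximum(tmax - tmin, 0.0))

# Illustrative monthly values for one grid cell (assumed, not real data):
days_in_month = 30
precip_mm = 80.0                                           # monthly precipitation
aed_mm = hargreaves_et0(12.0, 25.0, 12.0) * days_in_month  # evaporative demand
balance_mm = precip_mm - aed_mm                            # monthly water balance
```

A negative balance, as here, marks a month in which atmospheric demand exceeds precipitation, which is the quantity the study maps at 1 km resolution.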
Medicare Part B Drug Average Sales Pricing Files
U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturer's ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...
High Average Power Fiber Laser for Satellite Communications, Phase I
National Aeronautics and Space Administration — Very high average power lasers with high electrical-to-optical (E-O) efficiency, which also support pulse position modulation (PPM) formats in the MHz-data rate...
A time averaged background compensator for Geiger-Mueller counters
International Nuclear Information System (INIS)
Bhattacharya, R.C.; Ghosh, P.K.
1983-01-01
The GM tube compensator described stores background counts to cancel an equal number of pulses from the measuring channel, providing time averaged compensation. The method suits portable instruments. (orig.)
Time averaging, ageing and delay analysis of financial time series
Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf
2017-06-01
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
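The first of the three observables, the time averaged MSD, has a standard definition that can be sketched directly; the GBM parameters below are illustrative, not values fitted to the Dow Jones series:

```python
import numpy as np

def tamsd(x, lag):
    """Time averaged mean squared displacement of a series x at a given lag:
    delta^2(lag) = average over t of (x[t+lag] - x[t])^2."""
    d = x[lag:] - x[:-lag]
    return float(np.mean(d * d))

# Apply to simulated geometric Brownian motion, the process underlying the
# Black-Scholes-Merton model (mu, sigma assumed for illustration):
rng = np.random.default_rng(1)
n, mu, sigma, dt = 5000, 0.05, 0.2, 1.0 / 252.0
log_ret = (mu - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
log_price = np.log(100.0) + np.cumsum(log_ret)
msd_curve = [tamsd(log_price, lag) for lag in range(1, 50)]
```

For GBM the log-price is Brownian motion with drift, so the time averaged MSD grows essentially linearly in the lag; analytical results of this kind are the benchmark the paper compares the market data against.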
Historical Data for Average Processing Time Until Hearing Held
Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...
GIS Tools to Estimate Average Annual Daily Traffic
2012-06-01
This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...
A high speed digital signal averager for pulsed NMR
International Nuclear Information System (INIS)
Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.
1978-01-01
A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 x 12 bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
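The abstract does not spell out the 'stable averaging' algorithm; one common scheme with the stated property, that the stored value is a correctly scaled average after every sweep so the CRT display stays calibrated throughout, is the incremental-mean update sketched below (a software analogue, with assumed signal parameters):

```python
import numpy as np

def stable_average(sweeps):
    """Running ('stable') average: after sweep k the buffer holds the true
    mean of sweeps 1..k, so the displayed signal never needs rescaling.
    sweeps: array of shape (n_sweeps, n_channels)."""
    avg = np.zeros(sweeps.shape[1])
    for k, sweep in enumerate(sweeps, start=1):
        avg += (sweep - avg) / k   # incremental mean update
    return avg

# Simulated 256-channel NMR free-induction decay buried in unit noise,
# averaged over 2^12 sweeps (the instrument's maximum):
rng = np.random.default_rng(2)
t = np.arange(256)
signal = np.exp(-t / 80.0) * np.cos(2 * np.pi * t / 16.0)
sweeps = signal + rng.normal(0.0, 1.0, size=(4096, 256))
enhanced = stable_average(sweeps)
```

Averaging N sweeps of uncorrelated noise improves the amplitude S/N by sqrt(N); for N = 2^12 that is a factor of 64, i.e. 20·log10(64) ≈ 36 dB, matching the instrument's quoted maximum.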
The average-shadowing property and topological ergodicity for flows
International Nuclear Information System (INIS)
Gu Rongbao; Guo Wenjing
2005-01-01
In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied, and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic.
An evolutionary computation approach to examine functional brain plasticity
Directory of Open Access Journals (Sweden)
Arnab eRoy
2016-04-01
One common research goal in systems neurosciences is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI-based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost by averaging heterogeneous voxels, so functional relationships between an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC-procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in
Application of Bayesian approach to estimate average level spacing
International Nuclear Information System (INIS)
Huang Zhongfu; Zhao Zhixiang
1991-01-01
A method to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach is given. Using the information contained in the distributions of both level spacings and neutron widths, missing levels in the measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained. The calculation for s-wave resonances has been done and a comparison with other work was carried out.
Annual average equivalent dose of workers from the health area
International Nuclear Information System (INIS)
Daltro, T.F.L.; Campos, L.L.
1992-01-01
Personnel monitoring data from 1985 to 1991 for workers in the health area were studied, giving a general overview of the change in annual average equivalent dose. Two different aspects are presented: the analysis of the annual average equivalent dose in the different sectors of a hospital, and the comparison of these doses across the same sectors in different hospitals. (C.G.C.)
A precise measurement of the average b hadron lifetime
Buskulic, Damir; De Bonis, I; Décamp, D; Ghez, P; Goy, C; Lees, J P; Lucotte, A; Minard, M N; Odier, P; Pietrzyk, B; Ariztizabal, F; Chmeissani, M; Crespo, J M; Efthymiopoulos, I; Fernández, E; Fernández-Bosman, M; Gaitan, V; Garrido, L; Martínez, M; Orteu, S; Pacheco, A; Padilla, C; Palla, Fabrizio; Pascual, A; Perlas, J A; Sánchez, F; Teubert, F; Colaleo, A; Creanza, D; De Palma, M; Farilla, A; Gelao, G; Girone, M; Iaselli, Giuseppe; Maggi, G; Maggi, M; Marinelli, N; Natali, S; Nuzzo, S; Ranieri, A; Raso, G; Romano, F; Ruggieri, F; Selvaggi, G; Silvestris, L; Tempesta, P; Zito, G; Huang, X; Lin, J; Ouyang, Q; Wang, T; Xie, Y; Xu, R; Xue, S; Zhang, J; Zhang, L; Zhao, W; Bonvicini, G; Cattaneo, M; Comas, P; Coyle, P; Drevermann, H; Engelhardt, A; Forty, Roger W; Frank, M; Hagelberg, R; Harvey, J; Jacobsen, R; Janot, P; Jost, B; Knobloch, J; Lehraus, Ivan; Markou, C; Martin, E B; Mato, P; Meinhard, H; Minten, Adolf G; Miquel, R; Oest, T; Palazzi, P; Pater, J R; Pusztaszeri, J F; Ranjard, F; Rensing, P E; Rolandi, Luigi; Schlatter, W D; Schmelling, M; Schneider, O; Tejessy, W; Tomalin, I R; Venturi, A; Wachsmuth, H W; Wiedenmann, W; Wildish, T; Witzeling, W; Wotschack, J; Ajaltouni, Ziad J; Bardadin-Otwinowska, Maria; Barrès, A; Boyer, C; Falvard, A; Gay, P; Guicheney, C; Henrard, P; Jousset, J; Michel, B; Monteil, S; Montret, J C; Pallin, D; Perret, P; Podlyski, F; Proriol, J; Rossignol, J M; Saadi, F; Fearnley, Tom; Hansen, J B; Hansen, J D; Hansen, J R; Hansen, P H; Nilsson, B S; Kyriakis, A; Simopoulou, Errietta; Siotis, I; Vayaki, Anna; Zachariadou, K; Blondel, A; Bonneaud, G R; Brient, J C; Bourdon, P; Passalacqua, L; Rougé, A; Rumpf, M; Tanaka, R; Valassi, Andrea; Verderi, M; Videau, H L; Candlin, D J; Parsons, M I; Focardi, E; Parrini, G; Corden, M; Delfino, M C; Georgiopoulos, C H; Jaffe, D E; Antonelli, A; Bencivenni, G; Bologna, G; Bossi, F; Campana, P; Capon, G; Chiarella, V; Felici, G; Laurelli, P; Mannocchi, G; Murtas, F; Murtas, G P; Pepé-Altarelli, M; 
Dorris, S J; Halley, A W; ten Have, I; Knowles, I G; Lynch, J G; Morton, W T; O'Shea, V; Raine, C; Reeves, P; Scarr, J M; Smith, K; Smith, M G; Thompson, A S; Thomson, F; Thorn, S; Turnbull, R M; Becker, U; Braun, O; Geweniger, C; Graefe, G; Hanke, P; Hepp, V; Kluge, E E; Putzer, A; Rensch, B; Schmidt, M; Sommer, J; Stenzel, H; Tittel, K; Werner, S; Wunsch, M; Beuselinck, R; Binnie, David M; Cameron, W; Colling, D J; Dornan, Peter J; Konstantinidis, N P; Moneta, L; Moutoussi, A; Nash, J; San Martin, G; Sedgbeer, J K; Stacey, A M; Dissertori, G; Girtler, P; Kneringer, E; Kuhn, D; Rudolph, G; Bowdery, C K; Brodbeck, T J; Colrain, P; Crawford, G; Finch, A J; Foster, F; Hughes, G; Sloan, Terence; Whelan, E P; Williams, M I; Galla, A; Greene, A M; Kleinknecht, K; Quast, G; Raab, J; Renk, B; Sander, H G; Wanke, R; Van Gemmeren, P; Zeitnitz, C; Aubert, Jean-Jacques; Bencheikh, A M; Benchouk, C; Bonissent, A; Bujosa, G; Calvet, D; Carr, J; Diaconu, C A; Etienne, F; Thulasidas, M; Nicod, D; Payre, P; Rousseau, D; Talby, M; Abt, I; Assmann, R W; Bauer, C; Blum, Walter; Brown, D; Dietl, H; Dydak, Friedrich; Ganis, G; Gotzhein, C; Jakobs, K; Kroha, H; Lütjens, G; Lutz, Gerhard; Männer, W; Moser, H G; Richter, R H; Rosado-Schlosser, A; Schael, S; Settles, Ronald; Seywerd, H C J; Stierlin, U; Saint-Denis, R; Wolf, G; Alemany, R; Boucrot, J; Callot, O; Cordier, A; Courault, F; Davier, M; Duflot, L; Grivaz, J F; Heusse, P; Jacquet, M; Kim, D W; Le Diberder, F R; Lefrançois, J; Lutz, A M; Musolino, G; Nikolic, I A; Park, H J; Park, I C; Schune, M H; Simion, S; Veillet, J J; Videau, I; Abbaneo, D; Azzurri, P; Bagliesi, G; Batignani, G; Bettarini, S; Bozzi, C; Calderini, G; Carpinelli, M; Ciocci, M A; Ciulli, V; Dell'Orso, R; Fantechi, R; Ferrante, I; Foà, L; Forti, F; Giassi, A; Giorgi, M A; Gregorio, A; Ligabue, F; Lusiani, A; Marrocchesi, P S; Messineo, A; Rizzo, G; Sanguinetti, G; Sciabà, A; Spagnolo, P; Steinberger, Jack; Tenchini, Roberto; Tonelli, G; Triggiani, G; Vannini, C; 
Verdini, P G; Walsh, J; Betteridge, A P; Blair, G A; Bryant, L M; Cerutti, F; Gao, Y; Green, M G; Johnson, D L; Medcalf, T; Mir, L M; Perrodo, P; Strong, J A; Bertin, V; Botterill, David R; Clifft, R W; Edgecock, T R; Haywood, S; Edwards, M; Maley, P; Norton, P R; Thompson, J C; Bloch-Devaux, B; Colas, P; Duarte, H; Emery, S; Kozanecki, Witold; Lançon, E; Lemaire, M C; Locci, E; Marx, B; Pérez, P; Rander, J; Renardy, J F; Rosowsky, A; Roussarie, A; Schuller, J P; Schwindling, J; Si Mohand, D; Trabelsi, A; Vallage, B; Johnson, R P; Kim, H Y; Litke, A M; McNeil, M A; Taylor, G; Beddall, A; Booth, C N; Boswell, R; Cartwright, S L; Combley, F; Dawson, I; Köksal, A; Letho, M; Newton, W M; Rankin, C; Thompson, L F; Böhrer, A; Brandt, S; Cowan, G D; Feigl, E; Grupen, Claus; Lutters, G; Minguet-Rodríguez, J A; Rivera, F; Saraiva, P; Smolik, L; Stephan, F; Apollonio, M; Bosisio, L; Della Marina, R; Giannini, G; Gobbo, B; Ragusa, F; Rothberg, J E; Wasserbaech, S R; Armstrong, S R; Bellantoni, L; Elmer, P; Feng, P; Ferguson, D P S; Gao, Y S; González, S; Grahl, J; Harton, J L; Hayes, O J; Hu, H; McNamara, P A; Nachtman, J M; Orejudos, W; Pan, Y B; Saadi, Y; Schmitt, M; Scott, I J; Sharma, V; Turk, J; Walsh, A M; Wu Sau Lan; Wu, X; Yamartino, J M; Zheng, M; Zobernig, G
1996-01-01
An improved measurement of the average b hadron lifetime is performed using a sample of 1.5 million hadronic Z decays, collected during the 1991-1993 runs of ALEPH, with the silicon vertex detector fully operational. This uses the three-dimensional impact parameter distribution of lepton tracks coming from semileptonic b decays and yields an average b hadron lifetime of 1.533 ± 0.013 ± 0.022 ps.
Bivariate copulas on the exponentially weighted moving average control chart
Directory of Open Access Journals (Sweden)
Sasigarn Kuvattana
2016-10-01
This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used, with dependence measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
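The ARL machinery the comparison rests on can be sketched for the independent (no-copula) baseline: an EWMA recursion Z_i = λX_i + (1−λ)Z_{i−1} on exponential observations, with the run length counted until the statistic crosses a control limit. The λ, limit, and shift below are assumed illustrative values, not the paper's design:

```python
import numpy as np

def ewma_arl(lam, limit, mean_shift=0.0, n_sim=2000, seed=0):
    """Monte Carlo Average Run Length of a one-sided EWMA chart for
    exponential observations (in-control mean 1.0, independent case)."""
    rng = np.random.default_rng(seed)
    run_lengths = []
    for _ in range(n_sim):
        z = 1.0                                   # start at in-control mean
        t = 0
        while True:
            t += 1
            x = rng.exponential(1.0 + mean_shift)
            z = lam * x + (1.0 - lam) * z         # EWMA recursion
            if z > 1.0 + limit or t >= 10_000:    # signal or safety cap
                break
        run_lengths.append(t)
    return float(np.mean(run_lengths))

arl_in = ewma_arl(lam=0.1, limit=0.5)                   # in-control ARL
arl_out = ewma_arl(lam=0.1, limit=0.5, mean_shift=1.0)  # after a mean shift
```

A good chart has a large in-control ARL and a small out-of-control ARL; the paper repeats this comparison with observations made dependent through each of the four copulas, with dependence strength measured by Kendall's tau.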
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
Directory of Open Access Journals (Sweden)
Tellier Yoann
2018-01-01
The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien
2018-04-01
The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
The average action for scalar fields near phase transitions
International Nuclear Information System (INIS)
Wetterich, C.
1991-08-01
We compute the average action for fields in two, three and four dimensions, including the effects of wave function renormalization. A study of the one-loop evolution equations for the scale dependence of the average action gives a unified picture of the qualitatively different behaviour in various dimensions for discrete as well as abelian and nonabelian continuous symmetry. The different phases and the phase transitions can be inferred from the evolution equation. (orig.)
Wave function collapse implies divergence of average displacement
Marchewka, A.; Schuss, Z.
2005-01-01
We show that propagating a truncated discontinuous wave function by Schrödinger's equation, as asserted by the collapse axiom, gives rise to non-existence of the average displacement of the particle on the line. It also implies that there is no Zeno effect. On the other hand, if the truncation is done so that the reduced wave function is continuous, the average coordinate is finite and there is a Zeno effect. Therefore the collapse axiom of measurement needs to be revised.
Average geodesic distance of skeleton networks of Sierpinski tetrahedron
Yang, Jinjin; Wang, Songjing; Xi, Lifeng; Ye, Yongchao
2018-04-01
The average distance is of interest in the research of complex networks and is related to the Wiener sum, a topological invariant in chemical graph theory. In this paper, we study the skeleton networks of the Sierpinski tetrahedron, an important self-similar fractal, and obtain an asymptotic formula for their average distances. To obtain the formula, we develop a technique of finite patterns of integrals of geodesic distance on the self-similar measure for the Sierpinski tetrahedron.
Drummond, Aaron; Halsey, R. John
2013-01-01
The journal "Education in Rural Australia" (now the "Australian and International Journal of Rural Education") has been in existence since 1991. During the Excellence in Research Australia (ERA) period, the journal maintained a B ranking, indicating that it was a quality journal within a specialised field. With the abolishment…
Turner, Jacob Stephen; Croucher, Stephen Michael
2014-01-01
The current study uses survey methods to understand how US college students' use of various types of social media, such as social networking websites and text messaging on smart phones, as well as consumption of traditional media, such as watching television and reading books for pleasure, is (or is not) related to intellectual cognitive…
Amsterdam, Jay D; Lorenzo-Luaces, Lorenzo; DeRubeis, Robert J
2017-02-01
We examined differences in treatment outcome between Diagnostic and Statistical Manual Fourth Edition (DSM-IV)-defined rapid cycling and average lifetime-defined rapid cycling in subjects with bipolar II disorder. We hypothesized that, compared with the DSM-IV definition, the average lifetime definition of rapid cycling may better identify subjects with a history of more mood lability and a greater likelihood of hypomanic symptom induction during long-term treatment. Subjects ≥18 years old with a bipolar II major depressive episode (n=129) were categorized into DSM-IV- and average lifetime-defined rapid cycling and prospectively treated with either venlafaxine or lithium monotherapy for 12 weeks. Responders (n=59) received continuation monotherapy for six additional months. These exploratory analyses found moderate agreement between the two rapid-cycling definitions (κ=0.56). The lifetime definition captured subjects with more chronic courses of bipolar II depression, whereas the DSM-IV definition captured subjects with more acute symptoms of hypomania. There was no difference between rapid-cycling definitions with respect to the response to acute venlafaxine or lithium monotherapy. However, the lifetime definition was slightly superior to the DSM-IV definition in identifying subjects who went on to experience hypomanic symptoms during continuation therapy. Although sample sizes were limited, the findings suggest that the lifetime definition of rapid cycling may identify individuals with a chronic rapid-cycling course and may also be slightly superior to the DSM-IV definition in identifying individuals with hypomania during relapse-prevention therapy. These findings are preliminary in nature and need replication in larger, prospective, bipolar II studies. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Vision as subjective perception
International Nuclear Information System (INIS)
Reppas, J.B.; Dale, A.; Sereno, M.; Tootell, R.
1996-01-01
The human brain is not very different from the monkey's: at least, its visual cortex is organized along a similar scheme. Areas specialized in motion analysis are found, and others in the perception of forms. In this work, the author tries to answer the following questions: why are there so many visual areas, and what exactly is their role in vision? Thirteen years of experimentation have not sufficed to answer these questions. Cerebral NMR imaging gives the opportunity to understand the subjective perception of the visual world. One step, described in particular in this work, is to know how the visual cortex reacts to optical illusions. (O.M.)
The Subjectivity of Participation
DEFF Research Database (Denmark)
Nissen, Morten
What is a 'we' – a collective – and how can we use such communal self-knowledge to help people? This book is about collectivity, participation, and subjectivity – and about the social theories that may help us understand these matters. It also seeks to learn from the innovative practices and ideas of a community of social/youth workers in Copenhagen between 1987 and 2003, who developed a pedagogy through creating collectives and mobilizing young people as participants. The theoretical and practical traditions are combined in a unique methodology viewing research as a contentious modeling of prototypical...
Microbes make average 2 nanometer diameter crystalline UO2 particles.
Suzuki, Y.; Kelly, S. D.; Kemner, K. M.; Banfield, J. F.
2001-12-01
It is well known that phylogenetically diverse groups of microorganisms are capable of catalyzing the reduction of highly soluble U(VI) to highly insoluble U(IV), which rapidly precipitates as uraninite (UO2). Because biological uraninite is highly insoluble, microbial uranyl reduction is being intensively studied as the basis for a cost-effective in-situ bioremediation strategy. Previous studies have described UO2 biomineralization products as amorphous or poorly crystalline. The objective of this study is to characterize the nanocrystalline uraninite in detail in order to determine the particle size, crystallinity, and size-related structural characteristics, and to examine the implications of these for reoxidation and transport. In this study, we obtained U-contaminated sediment and water from an inactive U mine and incubated them anaerobically with nutrients to stimulate reductive precipitation of UO2 by indigenous anaerobic bacteria, mainly Gram-positive spore-forming Desulfosporosinus and Clostridium spp. as revealed by RNA-based phylogenetic analysis. Desulfosporosinus sp. was isolated from the sediment and UO2 was precipitated by this isolate from a simple solution that contains only U and electron donors. We characterized UO2 formed in both of the experiments by high-resolution TEM (HRTEM) and X-ray absorption fine structure analysis (XAFS). The results from HRTEM showed that both the pure and the mixed cultures of microorganisms precipitated around 1.5 - 3 nm crystalline UO2 particles. Some particles as small as around 1 nm could be imaged. Rare particles around 10 nm in diameter were also present. Particles adhere to cells and form colloidal aggregates with low fractal dimension. In some cases, coarsening by oriented attachment on {111} is evident. Our preliminary results from XAFS for the incubated U-contaminated sample also indicated an average diameter of UO2 of 2 nm. In nanoparticles, the U-U distance obtained by XAFS was 0.373 nm, 0.012 nm
Nondestructive examination development and demonstration plan
International Nuclear Information System (INIS)
Weber, J.R.
1991-01-01
Nondestructive examination (NDE) of waste matrices using penetrating radiation is by nature very subjective. Two candidate systems of examination have been identified for use in WRAP 1. This test plan describes a method for a comparative evaluation of different x-ray examination systems and techniques
40 CFR 80.67 - Compliance on average.
2010-07-01
... RVP (in the case of a refinery or importer subject to the simple model standards) or the standards for... or importer subject to the simple model standards, each gallon of reformulated gasoline and RBOB... credits are transferred, either through inter-company or intra-company transfers, directly from the...
Examination Management and Examination Malpractice: The Nexus
Ogunji, James A.
2011-01-01
Examination malpractice or cheating has become a global phenomenon. In different countries of the world today, developed and developing, academic dishonesty, especially cheating in examinations, has heightened and taken on a frightening dimension. In many countries of the world this phenomenon has become a serious matter of concern that has left many…
Subjective Life Horizon and Portfolio Choice
Spaenjers , Christophe; Spira, Sven Michael
2013-01-01
Using data from a U.S. household survey, we examine the empirical relation between subjective life horizon (i.e., the self-reported expectation of remaining life span) and portfolio choice. We find that equity portfolio shares are higher for investors with longer horizons, ceteris paribus, in line with theoretical predictions. This result is robust to controlling for optimism and health status, accounting for the endogeneity of equity market participation, or instrumenting subjective life hor...
PSA, subjective probability and decision making
International Nuclear Information System (INIS)
Clarotti, C.A.
1989-01-01
PSA is the natural way to make decisions in the face of uncertainty about potentially dangerous plants; subjective probability, subjective utility and Bayes statistics are the ideal tools for carrying out a PSA. To support this statement, the various stages of the PSA procedure are examined in detail, and step by step the superiority of Bayes techniques over sampling-theory machinery is proven.
Average Soil Water Retention Curves Measured by Neutron Radiography
Energy Technology Data Exchange (ETDEWEB)
Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD
2011-01-01
Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel-by-pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents, the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
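The per-pixel conversion step relies on Beer-Lambert's law, I_wet = I_dry · exp(−μ_w · θ · d), inverted for the volumetric water content θ. The attenuation coefficient and thickness below are assumed illustrative values, not calibrated beam constants, and the beam-hardening and scattering corrections described above are omitted:

```python
import numpy as np

def water_content(i_wet, i_dry, mu_w=0.38, thickness_cm=1.0):
    """Volumetric water content per pixel from neutron transmission images:
    theta = -ln(i_wet / i_dry) / (mu_w * d). mu_w is an effective water
    attenuation coefficient (cm^-1 per unit water content, assumed here)."""
    return -np.log(i_wet / i_dry) / (mu_w * thickness_cm)

# Synthetic check: build a 'wet' image from a known theta and recover it.
i_dry = np.full((4, 4), 1000.0)     # transmitted flux through the dry column
theta_true = np.full((4, 4), 0.25)
i_wet = i_dry * np.exp(-0.38 * theta_true * 1.0)
theta = water_content(i_wet, i_dry)
```

Dividing the wet image by the dry (or saturated) image, as the paper does to obtain relative saturations, cancels pixel-to-pixel beam nonuniformity in the same way the i_wet / i_dry ratio does here.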
A living wage for research subjects.
Phillips, Trisha B
2011-01-01
Offering cash payments to research subjects is a common recruiting method, but this practice continues to be controversial because of its potential to compromise the protection of human subjects. Federal regulations and guidelines currently allow researchers to pay subjects for participation, but they say very little about how much researchers can pay their subjects. This paper argues that the federal regulations and guidelines should implement a standard payment formula. It argues for a wage payment model, and critically examines three candidates for a base wage: the nonfarm production wage, the FLSA minimum wage, and a living wage. After showing that the nonfarm production wage is too high to satisfy ethical criteria, and the minimum wage is too low, this paper concludes that the wage payment model with a base wage equivalent to a living wage is the best candidate for a standard payment formula in human subjects research. © 2011 American Society of Law, Medicine & Ethics, Inc.
Belo, Luciana Rodrigues; Gomes, Nathália Angelina Costa; Coriolano, Maria das Graças Wanderley de Sales; de Souza, Elizabete Santos; Moura, Danielle Albuquerque Alves; Asano, Amdore Guescel; Lins, Otávio Gomes
2014-08-01
The goal of this study was to obtain the limit of dysphagia and the average volume per swallow in patients with mild to moderate Parkinson's disease (PD) but without swallowing complaints and in normal subjects, and to investigate the relationship between them. We hypothesize there is a direct relationship between these two measurements. The study included 10 patients with idiopathic PD and 10 age-matched normal controls. Surface electromyography was recorded over the suprahyoid muscle group. The limit of dysphagia was obtained by offering increasing volumes of water until piecemeal deglutition occurred. The average volume per swallow was calculated by dividing 100 ml of water by the number of swallows used to drink it. The PD group showed a significantly lower dysphagia limit and lower average volume per swallow. There was a significantly moderate direct correlation and association between the two measurements. About half of the PD patients had an abnormally low dysphagia limit and average volume per swallow, although none had spontaneously reported swallowing problems. Both measurements may be used as a quick objective screening test for the early identification of swallowing alterations that may lead to dysphagia in PD patients, but the determination of the average volume per swallow is much quicker and simpler.
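The two screening measures reduce to simple arithmetic. This minimal sketch assumes the tested volumes are offered in increasing order; the function names are illustrative, not from the paper:

```python
def average_volume_per_swallow(total_volume_ml, n_swallows):
    """Average volume per swallow: total volume divided by swallow count."""
    return total_volume_ml / n_swallows

def dysphagia_limit(results):
    """Largest tested volume still swallowed in a single deglutition.

    results: (volume_ml, piecemeal) pairs in increasing volume order,
    where piecemeal=True marks the onset of piecemeal deglutition.
    """
    limit = 0
    for volume, piecemeal in results:
        if piecemeal:
            break
        limit = volume
    return limit

print(average_volume_per_swallow(100, 8))   # 12.5
print(dysphagia_limit([(1, False), (3, False), (5, False), (10, True)]))  # 5
```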
Translating HbA1c measurements into estimated average glucose values in pregnant women with diabetes
DEFF Research Database (Denmark)
Law, Graham R; Gilthorpe, Mark S; Secher, Anna L
2017-01-01
AIMS/HYPOTHESIS: This study aimed to examine the relationship between average glucose levels, assessed by continuous glucose monitoring (CGM), and HbA1c levels in pregnant women with diabetes to determine whether calculations of standard estimated average glucose (eAG) levels from HbA1c measureme...
Subjective performance evaluations and employee careers
DEFF Research Database (Denmark)
Frederiksen, Anders; Lange, Fabian; Kriechell, Ben
Firms commonly use supervisor evaluations to assess the performance of employees who work in complex environments. Doubts persist whether their subjective nature invalidates using these performance measures to learn about careers of individuals and to inform theory in personnel economics. We examine personnel data from six large companies and establish how subjective ratings, interpreted as ordinal rankings of employee performances within narrowly defined peer groups, correlate with objective career outcomes. We find many similarities across firms in how subjective ratings correlate with base...
Tracheobronchial calcification in adult health study subjects
International Nuclear Information System (INIS)
Fukuya, Tatsuro; Mihara, Futoshi; Kudo, Sho; Russell, W.J.; Delongchamp, R.R.; Vaeth, M.; Hosoda, Yutaka.
1988-04-01
Tracheobronchial calcification is reportedly more frequent in women than in men. Ten cases of extensive tracheobronchial calcification were identified on chest radiographs of 1,152 consecutively examined Adult Health Study subjects, for a prevalence of 0.87%. An additional 51 subjects having this coded diagnosis were identified among 11,758 members of this fixed population sample. Sixty of the 61 subjects were women. The manifestations and extent of this type of calcification and its correlations with clinical and histopathologic features, which have not been previously reported, are described here. (author)
Occlusal consequence of using average condylar guidance settings: An in vitro study.
Lee, Wonsup; Lim, Young-Jun; Kim, Myung-Joo; Kwon, Ho-Beom
2017-04-01
A simplified mounting technique that adopts an average condylar guidance has been advocated. Despite this, the experimental explanation of how average settings differ from individual condylar guidance remains unclear. The purpose of this in vitro study was to examine potential occlusal error by using average condylar guidance settings during nonworking side movement of the articulator. Three-dimensional positions of the nonworking side maxillary first molar at various condylar and incisal settings were traced using a laser displacement sensor attached to the motorized stages with biaxial freedom of movement. To examine clinically relevant occlusal consequences of condylar guidance setting errors, the vertical occlusal error was defined as the vertical-axis positional difference between the average setting trace and the other condylar guidance setting trace. In addition, the respective contribution of the condylar and incisal guidance to the position of the maxillary first molar area was analyzed by multiple regression analysis using the resultant coordinate data. Alteration from individual to average settings led to a positional difference in the maxillary first molar nonworking side movement. When the individual setting was lower than average, vertical occlusal error occurred, which might cause occlusal interference. The vertical occlusal error ranged from -2964 to 1711 μm. In addition, the occlusal effect of incisal guidance was measured as a partial regression coefficient of 0.882, which exceeded the effect of condylar guidance, 0.431. Potential occlusal error as a result of adopting an average condylar guidance setting was observed. The occlusal effect of incisal guidance doubled the effect of condylar guidance. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
O'Dwyer, Aidan
2011-01-01
The objective of this study is to examine if a link exists between lecture attendance and examination performance of Level 7, Year 1, Electrical Engineering students at Dublin Institute of Technology in the Electrical Systems subject. Lecture attendance was monitored and analysed over four academic years (2007-8, 2008-9, 2009-10 and 2010-11). The average lecture attendance for students in the three academic years from 2007-10 was 55%, increasing noticeably in the 2009-10 academic year. A stat...
Accurate phenotyping: Reconciling approaches through Bayesian model averaging.
Directory of Open Access Journals (Sweden)
Carla Chia-Ming Chen
Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
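A minimal sketch of the model-averaging step: posterior model weights are approximated from BIC values (a common BMA approximation, assumed here rather than taken from the paper) and used to combine per-model cluster-membership probabilities:

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values
    (exp(-BIC/2), normalized)."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (b - b.min()))   # subtract the min for numerical stability
    return w / w.sum()

def average_memberships(memberships, weights):
    """Model-averaged cluster-membership probabilities.

    memberships: one (n_subjects, n_clusters) array per model.
    """
    return sum(w * m for w, m in zip(weights, memberships))

# two hypothetical clusterings of the same two subjects
m_lca = np.array([[0.9, 0.1], [0.2, 0.8]])   # e.g. latent class analysis
m_gom = np.array([[0.7, 0.3], [0.4, 0.6]])   # e.g. grade of membership
w = bma_weights([1000.0, 1002.0])            # first model fits better
averaged = average_memberships([m_lca, m_gom], w)
```

The averaged memberships, rather than either clustering alone, then feed the downstream linkage analysis.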
Yearly, seasonal and monthly daily average diffuse sky radiation models
International Nuclear Information System (INIS)
Kassem, A.S.; Mujahid, A.M.; Turner, D.W.
1993-01-01
A daily average diffuse sky radiation regression model based on daily global radiation was developed utilizing two years of data taken near Blytheville, Arkansas (Lat. = 35.9° N, Long. = 89.9° W), U.S.A. The model has a determination coefficient of 0.91 and a standard error of estimate of 0.092. The data were also analyzed for a seasonal dependence and four seasonal average daily models were developed for the spring, summer, fall and winter seasons. The coefficient of determination is 0.93, 0.81, 0.94 and 0.93, whereas the standard error of estimate is 0.08, 0.102, 0.042 and 0.075 for spring, summer, fall and winter, respectively. A monthly average daily diffuse sky radiation model was also developed. The coefficient of determination is 0.92 and the standard error of estimate is 0.083. A seasonal monthly average model was also developed which has 0.91 coefficient of determination and 0.085 standard error of estimate. The developed monthly daily average and daily models compare well with a selected number of previously developed models. (author). 11 ref., figs., tabs
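A regression of this kind can be fitted by ordinary least squares; the sketch below uses hypothetical radiation values and reports the two statistics quoted in the abstract (determination coefficient and standard error of estimate). The linear model form is an assumption for illustration:

```python
import numpy as np

def fit_linear(x, y):
    """Least-squares fit y = a + b*x, returning the intercept, slope,
    coefficient of determination, and standard error of estimate."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    b, a = np.polyfit(x, y, 1)            # polyfit returns highest degree first
    resid = y - (a + b * x)
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    see = float(np.sqrt(ss_res / (len(y) - 2)))
    return a, b, r2, see

# hypothetical daily global (G) and diffuse (D) radiation, MJ/m^2/day
G = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
D = np.array([3.1, 4.8, 6.2, 7.1, 7.9])
a, b, r2, see = fit_linear(G, D)
```

Splitting the input data by season or by month before calling `fit_linear` reproduces the structure of the seasonal and monthly models described above.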
Praxis, subjectivity and sense
Directory of Open Access Journals (Sweden)
Alfredo Gómez-Muller
2006-07-01
Full Text Available A primordial aspect of the Sartrian critique of alienation concerns understanding the analytic ideology as the domination of materiality over the symbolic, in other words as the reification of the human, and therefore as anticulture. In the context of contemporary nihilism, the decoding of the mechanisms which consign praxis to the practico-inert requires a critique of the relations between the social sciences and philosophy, which in its turn implies a new theory of the relation between what Sartre calls the "notion" (the area of subjectivity and the "concept" (objectivity. From this perspective, the deconstruction of the established frontiers between the social sciences and philosophy, and between the conceptual and the narrative, is correlative to a redefinition of the relation between theory and practice.
DEFF Research Database (Denmark)
Rittenhofer, Iris
2010-01-01
This article contributes to the rethinking of qualitative interview research into intercultural issues. It suggests that the application of poststructuralist thought should not be limited to the analysis of the interview material itself, but incorporate the choice of interviewees and the modalities for the accomplishment of interviews. The paper focuses on a discussion of theoretical and methodological considerations of design, approach and research strategy. These discussions are specified in relation to a project on gender and ethnicity in cultural encounters at Universities. In the paper, I introduce a research design named Cultural interviewing, present an approach to the design of interviews named Interview without a subject, and offer an analytic strategy directed towards the analysis of interview transcripts named Interview on the level of the signifier. The paper concludes that even though it is relevant...
Effect of temporal averaging of meteorological data on predictions of groundwater recharge
Directory of Open Access Journals (Sweden)
Batalha Marcia S.
2018-06-01
Full Text Available Accurate estimates of infiltration and groundwater recharge are critical for many hydrologic, agricultural and environmental applications. Anticipated climate change in many regions of the world, especially in tropical areas, is expected to increase the frequency of high-intensity, short-duration precipitation events, which in turn will affect the groundwater recharge rate. Estimates of recharge are often obtained using monthly or even annually averaged meteorological time series data. In this study we employed the HYDRUS-1D software package to assess the sensitivity of groundwater recharge calculations to using meteorological time series of different temporal resolutions (i.e., hourly, daily, weekly, monthly and yearly averaged precipitation and potential evaporation rates). Calculations were applied to three sites in Brazil having different climatological conditions: a tropical savanna (the Cerrado), a humid subtropical area (the temperate southern part of Brazil), and a very wet tropical area (Amazonia). To simplify our current analysis, we did not consider any land use effects by ignoring root water uptake. Temporal averaging of meteorological data was found to lead to significant bias in predictions of groundwater recharge, with much greater estimated recharge rates in the case of very uneven temporal rainfall distributions during the year involving distinct wet and dry seasons. For example, at the Cerrado site, using daily averaged data produced recharge rates of up to 9 times greater than using yearly averaged data. In all cases, an increase in the time of averaging of meteorological data led to lower estimates of groundwater recharge, especially at sites having coarse-textured soils. Our results show that temporal averaging limits the ability of simulations to predict deep penetration of moisture in response to precipitation, so that water remains in the upper part of the vadose zone subject to upward flow and evaporation.
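The effect of temporal averaging can be illustrated with a deliberately simple bucket model (not HYDRUS-1D): when storm rainfall is smeared into a uniform rate below the evaporative demand, simulated recharge collapses to zero. All numbers are hypothetical:

```python
import numpy as np

def recharge(rain, pe):
    """Toy recharge model: at each time step evaporation removes up to `pe`;
    any surplus is assumed to percolate below the evaporation zone."""
    return float(np.maximum(np.asarray(rain, dtype=float) - pe, 0.0).sum())

rng = np.random.default_rng(0)
days = 365
daily = np.zeros(days)
storm_days = rng.choice(days, size=30, replace=False)
daily[storm_days] = 25.0                 # mm: 30 intense storms in the year
pe = 4.0                                 # mm/day potential evaporation

r_daily = recharge(daily, pe)                         # storms exceed PE
r_yearly = recharge(np.full(days, daily.mean()), pe)  # averaged rain never does
```

With daily forcing every storm delivers a surplus past the evaporation zone, while the yearly-averaged rate (~2 mm/day) never exceeds the 4 mm/day demand, mirroring the direction of the bias reported above.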
Average cross sections for the 252Cf neutron spectrum
International Nuclear Information System (INIS)
Dezso, Z.; Csikai, J.
1977-01-01
A number of average cross sections have been measured for 252Cf neutrons in (n,γ), (n,p), (n,2n) and (n,α) reactions by the activation method, and for fission by fission chamber. Cross sections have been determined for 19 elements and 45 reactions. The (n,γ) cross section values lie in the interval from 0.3 to 200 mb. As a function of target neutron number, the data increase up to about N=60, with minima near closed shells. The values lie between 0.3 mb and 113 mb. These cross sections decrease significantly with increasing threshold energy. The values are below 20 mb. The data do not exceed 10 mb. Average (n,p) cross sections as a function of the threshold energy and average fission cross sections as a function of Z^(4/3)/A are shown. The results obtained are summarized in tables.
Testing averaged cosmology with type Ia supernovae and BAO data
Energy Technology Data Exchange (ETDEWEB)
Santos, B.; Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Coley, A.A. [Department of Mathematics and Statistics, Dalhousie University, Halifax, B3H 3J5 Canada (Canada); Devi, N. Chandrachani, E-mail: thoven@on.br, E-mail: aac@mathstat.dal.ca, E-mail: chandrachaniningombam@astro.unam.mx, E-mail: alcaniz@on.br [Instituto de Astronomía, Universidad Nacional Autónoma de México, Box 70-264, México City, México (Mexico)
2017-02-01
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
Average contraction and synchronization of complex switched networks
International Nuclear Information System (INIS)
Wang Lei; Wang Qingguo
2012-01-01
This paper introduces an average contraction analysis for nonlinear switched systems and applies it to investigating the synchronization of complex networks of coupled systems with switching topology. For a general nonlinear system with a time-dependent switching law, a basic convergence result is presented according to average contraction analysis, and a special case where trajectories of a distributed switched system converge to a linear subspace is then investigated. Synchronization is viewed as the special case with all trajectories approaching the synchronization manifold, and is thus studied for complex networks of coupled oscillators with switching topology. It is shown that the synchronization of a complex switched network can be evaluated by the dynamics of an isolated node, the coupling strength and the time average of the smallest eigenvalue associated with the Laplacians of switching topology and the coupling fashion. Finally, numerical simulations illustrate the effectiveness of the proposed methods. (paper)
The Health Effects of Income Inequality: Averages and Disparities.
Truesdale, Beth C; Jencks, Christopher
2016-01-01
Much research has investigated the association of income inequality with average life expectancy, usually finding negative correlations that are not very robust. A smaller body of work has investigated socioeconomic disparities in life expectancy, which have widened in many countries since 1980. These two lines of work should be seen as complementary because changes in average life expectancy are unlikely to affect all socioeconomic groups equally. Although most theories imply long and variable lags between changes in income inequality and changes in health, empirical evidence is confined largely to short-term effects. Rising income inequality can affect individuals in two ways. Direct effects change individuals' own income. Indirect effects change other people's income, which can then change a society's politics, customs, and ideals, altering the behavior even of those whose own income remains unchanged. Indirect effects can thus change both average health and the slope of the relationship between individual income and health.
Perceived Average Orientation Reflects Effective Gist of the Surface.
Cha, Oakyoon; Chong, Sang Chul
2018-03-01
The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.
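Averaging orientations is itself a subtle operation: because orientations are axial (0° and 180° are the same), a mean is usually computed by doubling the angles, taking the circular mean, and halving. A minimal sketch of that convention (not from the paper):

```python
import math

def mean_orientation(degrees):
    """Mean of axial orientations (0-180 deg): double the angles, take the
    circular mean, then halve. A plain arithmetic mean fails near the
    0/180 wrap-around."""
    s = sum(math.sin(math.radians(2.0 * d)) for d in degrees)
    c = sum(math.cos(math.radians(2.0 * d)) for d in degrees)
    return (math.degrees(math.atan2(s, c)) / 2.0) % 180.0

print(mean_orientation([30, 60]))    # ~45
print(mean_orientation([10, 170]))   # ~0, where the arithmetic mean gives 90
```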
Object detection by correlation coefficients using azimuthally averaged reference projections.
Nicholson, William V
2004-11-01
A method of computing correlation coefficients for object detection that takes advantage of using azimuthally averaged reference projections is described and compared with two alternative methods: computing a cross-correlation function or a local correlation coefficient versus the azimuthally averaged reference projections. Two examples of an application from structural biology involving the detection of projection views of biological macromolecules in electron micrographs are discussed. It is found that a novel approach to computing a local correlation coefficient versus azimuthally averaged reference projections, using a rotational correlation coefficient, outperforms using a cross-correlation function and a local correlation coefficient in object detection from simulated images with a range of levels of simulated additive noise. The three approaches perform similarly in detecting macromolecular views in electron microscope images of a globular macromolecular complex (the ribosome). The rotational correlation coefficient outperforms the other methods in detection of keyhole limpet hemocyanin macromolecular views in electron micrographs.
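An azimuthal (rotational) average of a 2-D reference can be sketched as a radial profile computed over annuli about the image centre; correlating that profile with the profile of a candidate window gives a rotation-insensitive score. This is a simplified illustration of the idea, not the author's implementation:

```python
import numpy as np

def azimuthal_average(img):
    """Average image values over annuli about the image centre,
    giving a 1-D radial profile (the azimuthally averaged reference)."""
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - (h - 1) / 2.0, x - (w - 1) / 2.0).astype(int)
    sums = np.bincount(r.ravel(), weights=img.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts

def radial_correlation(window, reference_profile):
    """Correlation coefficient between a window's radial profile and the
    azimuthally averaged reference; insensitive to in-plane rotation."""
    p = azimuthal_average(window)
    n = min(len(p), len(reference_profile))
    return float(np.corrcoef(p[:n], reference_profile[:n])[0, 1])

rng = np.random.default_rng(1)
img = rng.random((16, 16))
profile = azimuthal_average(img)
# a 90-degree-rotated copy has the same radial profile, so correlation is ~1
score = radial_correlation(np.rot90(img), profile)
```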
Measurement of average radon gas concentration at workplaces
International Nuclear Information System (INIS)
Kavasi, N.; Somlai, J.; Kovacs, T.; Gorjanacz, Z.; Nemeth, Cs.; Szabo, T.; Varhegyi, A.; Hakl, J.
2003-01-01
In this paper, results of measurements of average radon gas concentration at workplaces (schools, kindergartens and ventilated workplaces) are presented. It can be stated that one-month-long measurements show very high variation (as is obvious in the cases of the hospital cave and the uranium tailing pond). Consequently, at workplaces where considerable seasonal changes of radon concentration are expected, measurements should last 12 months. If that is not possible, the chosen six-month period should contain summer and winter months as well. The average radon concentration during working hours can differ considerably from the average over the whole time in cases of frequent opening of doors and windows or use of artificial ventilation. (authors)
A Martian PFS average spectrum: Comparison with ISO SWS
Formisano, V.; Encrenaz, T.; Fonti, S.; Giuranna, M.; Grassi, D.; Hirsh, H.; Khatuntsev, I.; Ignatiev, N.; Lellouch, E.; Maturilli, A.; Moroz, V.; Orleanski, P.; Piccioni, G.; Rataj, M.; Saggin, B.; Zasova, L.
2005-08-01
The evaluation of the planetary Fourier spectrometer performance at Mars is presented by comparing an average spectrum with the ISO spectrum published by Lellouch et al. [2000. Planet. Space Sci. 48, 1393.]. First, the average conditions of the Mars atmosphere are compared, then the mixing ratios of the major gases are evaluated. Major and minor bands of CO2 are compared, from the point of view of feature characteristics and band depths. The spectral resolution is also compared using several solar lines. The result indicates that PFS radiance is valid to better than 1% in the wavenumber range 1800-4200 cm⁻¹ for the average spectrum considered (1680 measurements). The PFS monochromatic transfer function generates an overshooting on the left-hand side of strong narrow lines (solar or atmospheric). The spectral resolution of PFS is of the order of 1.3 cm⁻¹ or better. A large number of narrow features, yet to be identified, are discovered.
Size and emotion averaging: costs of dividing attention after all.
Brand, John; Oriet, Chris; Tottenham, Laurie Sykes
2012-03-01
Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.
A virtual pebble game to ensemble average graph rigidity.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2015-01-01
The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
Exactly averaged equations for flow and transport in random media
International Nuclear Information System (INIS)
Shvidler, Mark; Karasaki, Kenzi
2001-01-01
It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood. For example, the convergence behavior and the accuracy of truncated perturbation series remain open questions, and the calculation of the high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do there exist exact, general and sufficiently universal forms of averaged equations? If the answer is positive, there arises the problem of constructing these equations and analyzing them. There exist many publications related to these problems, oriented to different applications: hydrodynamics, flow and transport in porous media, theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method of finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of conductivity, porosity and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases. 1. Steady-state flow with sources in porous media with random conductivity. 2. Transient flow with sources in compressible media with random conductivity and porosity. 3. Non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversely isotropic, orthotropic), and we analyze the hypothesis of the structure of the non-local equations in the general case of stochastically homogeneous fields. (author)
Positivity of the spherically averaged atomic one-electron density
DEFF Research Database (Denmark)
Fournais, Søren; Hoffmann-Ostenhof, Maria; Hoffmann-Ostenhof, Thomas
2008-01-01
We investigate the positivity of the spherically averaged atomic one-electron density. For a density which stems from a physical ground state we prove positivity for r ≥ 0. This article may be reproduced in its entirety for non-commercial purposes.
MAIN STAGES SCIENTIFIC AND PRODUCTION MASTERING THE TERRITORY AVERAGE URAL
Directory of Open Access Journals (Sweden)
V.S. Bochko
2006-09-01
Full Text Available Questions of the shaping of the Average Ural as an industrial territory, on the basis of its scientific study and production mastering, are considered in the article. It is shown that studies of Ural resources and the particularities of the vital activity of its population engaged Russian and foreign scientists in the XVIII-XIX centuries. It is noted that in the XX century there was a transition to the systematic organizational-economic study of the productive forces, society and nature of the Average Ural. More attention is now addressed to new problems of the region and to the need for their scientific solution.
High-Average, High-Peak Current Injector Design
Biedron, S G; Virgo, M
2005-01-01
There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to CSR and space charge effects within magnetic chicanes.
Non-self-averaging nucleation rate due to quenched disorder
International Nuclear Information System (INIS)
Sear, Richard P
2012-01-01
We study the nucleation of a new thermodynamic phase in the presence of quenched disorder. The quenched disorder is a generic model of both impurities and disordered porous media; both are known to have large effects on nucleation. We find that the nucleation rate is non-self-averaging. This is in a simple Ising model with clusters of quenched spins. We also show that non-self-averaging behaviour is straightforward to detect in experiments, and may be rather common. (fast track communication)
A note on moving average models for Gaussian random fields
DEFF Research Database (Denmark)
Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.
The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...... basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
Broderick, Ciaran; Matthews, Tom; Wilby, Robert L.; Bastola, Satish; Murphy, Conor
2016-10-01
Understanding hydrological model predictive capabilities under contrasting climate conditions enables more robust decision making. Using Differential Split Sample Testing (DSST), we analyze the performance of six hydrological models for 37 Irish catchments under climate conditions unlike those used for model training. Additionally, we consider four ensemble averaging techniques when examining interperiod transferability. DSST is conducted using 2/3 year noncontinuous blocks of (i) the wettest/driest years on record based on precipitation totals and (ii) years with a more/less pronounced seasonal precipitation regime. Model transferability between contrasting regimes was found to vary depending on the testing scenario, catchment, and evaluation criteria considered. As expected, the ensemble average outperformed most individual ensemble members. However, averaging techniques differed considerably in the number of times they surpassed the best individual model member. Bayesian Model Averaging (BMA) and the Granger-Ramanathan Averaging (GRA) method were found to outperform the simple arithmetic mean (SAM) and Akaike Information Criteria Averaging (AICA). Here GRA performed better than the best individual model in 51%-86% of cases (according to the Nash-Sutcliffe criterion). When assessing model predictive skill under climate change conditions we recommend (i) setting up DSST to select the best available analogues of expected annual mean and seasonal climate conditions; (ii) applying multiple performance criteria; (iii) testing transferability using a diverse set of catchments; and (iv) using a multimodel ensemble in conjunction with an appropriate averaging technique. Given the computational efficiency and performance of GRA relative to BMA, the former is recommended as the preferred ensemble averaging technique for climate assessment.
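The Granger-Ramanathan approach mentioned above amounts to choosing combination weights by least squares against the observations, whereas the simple arithmetic mean (SAM) fixes equal weights. The following is a minimal sketch with made-up streamflow numbers, using an unconstrained two-member regression as a stand-in for GRA (the paper's exact GRA variant is not specified here):

```python
# Illustrative sketch (not the paper's code): combining two model simulations
# with the simple arithmetic mean (SAM) versus Granger-Ramanathan-style
# averaging (GRA), i.e. least-squares weights obtained by regressing the
# observations on the member predictions. All data values are invented.

def sam(preds):
    """Simple arithmetic mean across ensemble members at each time step."""
    return [sum(p) / len(p) for p in zip(*preds)]

def gra_weights(preds, obs):
    """Unconstrained least-squares weights for two members (2x2 normal equations)."""
    m1, m2 = preds
    a11 = sum(x * x for x in m1)
    a12 = sum(x * y for x, y in zip(m1, m2))
    a22 = sum(y * y for y in m2)
    b1 = sum(x * o for x, o in zip(m1, obs))
    b2 = sum(y * o for y, o in zip(m2, obs))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det)

def combine(preds, weights):
    return [sum(w * x for w, x in zip(weights, p)) for p in zip(*preds)]

def rmse(sim, obs):
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
member1 = [1.2, 2.1, 3.3, 4.2, 5.4]   # slight positive bias
member2 = [0.5, 1.2, 1.9, 2.6, 3.2]   # strong negative bias
preds = [member1, member2]

err_sam = rmse(sam(preds), obs)
err_gra = rmse(combine(preds, gra_weights(preds, obs)), obs)
print(err_gra <= err_sam)
```

Because the equal-weight mean lies in the span of the linear combinations that the regression searches over, the in-sample GRA error can never exceed the SAM error, which illustrates why GRA tends to outperform the simple mean in calibration.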
DEFF Research Database (Denmark)
Hamre, Bjørn; Fristrup, Tine; Christensen, Gerd
2016-01-01
’s understanding of the relation between normality and deviancy. On the other hand, an examination of Danish Foucauldian disability research shows that this conception of ‘the deviant subject’ has changed over time. Hence, the present expectations of ‘the disabled’ are – more or less – influenced by contemporary...... discourses of general education. Thus, this article argues that Foucauldian disability studies could benefit from taking into account Foucauldian research in the field of general education. Until recently, the two research fields have been mutually isolated....
Averaging processes in granular flows driven by gravity
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate corresponds to the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
Political Subjects: Decision and Subjectivity from a Post-Fundational Perspective
Directory of Open Access Journals (Sweden)
Martín Retamozo
2011-12-01
Full Text Available The problem of decision and of political subjects was addressed in the field of 20th century political philosophy by authors such as Carl Schmitt, Hannah Arendt, and Jacques Derrida, who related it closely to the concepts of sovereignty, freedom, and contingency. The works of Ernesto Laclau and Slavoj Žižek have recently turned to the issue of decision in order to address the constitutive aspects of the political. In a context dominated by deconstruction, post-Marxism, and post-structuralism, the article inquires into the relation between decision and political subjects in a contemporary setting, examining in depth the difference between subjectivity, subjectivization, and subject.
Health Examination by PET. (1) Cancer Examination
International Nuclear Information System (INIS)
Uno, Koichi
2006-01-01
Cancer examination by positron emission tomography (PET) started in Japan in 1994 and has rapidly become popular. This paper describes the author's experience of the examination in his hospital, following the recent Japanese guideline for PET cancer examination. Fluorodeoxyglucose (FDG) is intravenously injected at 3.7 (or 4.6, for diabetic patients) MBq/kg after 4-5 hr fasting and, 40 min later, imaging is conducted, with an additional delayed scan at 2 hr to reduce possible false positives. Images are taken by equipment with a PET-specific camera, whose quality assurance (QA) is maintained according to the guideline, and the 3D image is reconstructed by the ordered-subset expectation maximization method. The number of examinees during 4.5 years was 18,210 (M/F = 9,735/8,475), of whom 236 (1.3%), together with the use of other test measures such as ultrasonography, computed tomography (CT), magnetic resonance imaging (MRI), biochemical markers and occult blood, were found to have cancer of the thyroid, large bowel, lung, breast and other organs. The false negative rate by PET alone was 78/236 (33%) for cancer. PET examination has problems of image reading and organ specificity, and the tasks of informed consent, test cost, increased exchange of information and radiation exposure. However, PET cancer examination will be established as a routine diagnostic tool when the accumulated evidence of early cancer detection is shown to be useful for improving the survival rate and for reducing medical care costs. (T.I.)
Laboratory instruction and subjectivity
Directory of Open Access Journals (Sweden)
Elisabeth Barolli
1998-09-01
Full Text Available The specific aspects which determined the way some groups of students conducted their work in a university laboratory made us understand the articulation of these groups' dynamics from elements that were beyond the reach of cognition. In more specific terms, the conduct and maintenance of the student groups' dynamics were explicated on the basis of an interplay between non-conscious strategies, shared anonymously, and the efforts of the individuals to work on their most objective task. The results and issues we have reached so far, using as a reference the work developed by W. R. Bion with therapeutic groups, gave us the possibility of understanding the dynamics of the students' experimental work through a new approach that brings together the fields of cognition and subjectivity. This approximation led us to a deeper reflection on the issues which may be involved in the teaching process, particularly in situations in which the teacher deals with a class organised in groups.
Vinogradov, G. P.
2017-01-01
The problem of constructing a choice model of an agent with endogenous purposes of evolution is discussed. It is demonstrated that its solution requires developing well-known decision-making methods while taking into account the relation between the motivation of an action mode and the agent's ambition to implement subjectively understood interests in a given environment state. The latter is considered as a purposeful state-situation model that exists only in the mind of the agent. It is this situation that forms the basis for insight into the agent's ideas about the possible results of the selected action mode. The agent's ambition to build confidence in the feasibility of the action mode and in the possibility of achieving the desired state requires him to use procedures that form an idea model based on measured values of the environment state. This leads to a gaming approach to the choice problem, whose solution can be obtained on a set of trade-off alternatives.
Small Bandwidth Asymptotics for Density-Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
This paper proposes (apparently) novel standard error formulas for the density-weighted average derivative estimator of Powell, Stock, and Stoker (1989). Asymptotic validity of the standard errors developed in this paper does not require the use of higher-order kernels and the standard errors...
High Average Power UV Free Electron Laser Experiments At JLAB
International Nuclear Information System (INIS)
Douglas, David; Benson, Stephen; Evtushenko, Pavel; Gubeli, Joseph; Hernandez-Garcia, Carlos; Legg, Robert; Neil, George; Powers, Thomas; Shinn, Michelle; Tennant, Christopher; Williams, Gwyn
2012-01-01
Having produced 14 kW of average power at ∼2 microns, JLAB has shifted its focus to the ultraviolet portion of the spectrum. This presentation will describe the JLab UV Demo FEL, present specifics of its driver ERL, and discuss the latest experimental results from FEL experiments and machine operations.
Average subentropy, coherence and entanglement of random mixed quantum states
Energy Technology Data Exchange (ETDEWEB)
Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)
2017-02-15
Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states by invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource: the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and the distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this class of mixed states.
Establishment of Average Body Measurement and the Development ...
African Journals Online (AJOL)
cce
body measurement for height and backneck to waist for ages 2,3,4 and 5 years. The ... average measurements of the different parts of the body must be established. ..... and OAU Charter on Rights of the child: Lagos: Nigeria Country office.
Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.
Brezis, Noam; Bronfman, Zohar Z; Usher, Marius
2015-06-04
We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
Determination of the average lifetime of bottom hadrons
Energy Technology Data Exchange (ETDEWEB)
Althoff, M; Braunschweig, W; Kirschfink, F J; Martyn, H U; Rosskamp, P; Schmitz, D; Siebke, H; Wallraff, W [Technische Hochschule Aachen (Germany, F.R.). Lehrstuhl fuer Experimentalphysik 1A und 1. Physikalisches Inst.; Eisenmann, J; Fischer, H M
1984-12-27
We have determined the average lifetime of hadrons containing b quarks produced in e⁺e⁻ annihilation to be τ_B = 1.83 × 10⁻¹² s. Our method uses charged decay products from both non-leptonic and semileptonic decay modes.
Time Series ARIMA Models of Undergraduate Grade Point Average.
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
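At its core, the estimation stage of the Box-Jenkins procedure fits autoregressive and moving-average coefficients to the series. As a minimal illustration (not taken from the paper), an AR(1) coefficient can be estimated from the lag-1 autocorrelation via the Yule-Walker relation; the simulated series and the chosen coefficient 0.7 are invented for the demonstration:

```python
import random

random.seed(1)

def simulate_ar1(phi, n, sigma=1.0):
    """Generate an AR(1) series x_t = phi * x_{t-1} + e_t with Gaussian noise."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, sigma)
        out.append(x)
    return out

def fit_ar1(series):
    """Yule-Walker estimate of the AR(1) coefficient: the lag-1 autocorrelation."""
    n = len(series)
    mean = sum(series) / n
    dev = [v - mean for v in series]
    num = sum(dev[t] * dev[t - 1] for t in range(1, n))
    den = sum(d * d for d in dev)
    return num / den

series = simulate_ar1(0.7, 500)
phi_hat = fit_ar1(series)
print(0.5 < phi_hat < 0.9)  # the estimate recovers the true coefficient approximately
```

A full ARIMA fit additionally involves differencing (the "integrated" part) and moving-average terms, for which the identification and diagnosis stages use the autocorrelation and partial autocorrelation functions.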
Crystallographic extraction and averaging of data from small image areas
Perkins, GA; Downing, KH; Glaeser, RM
The accuracy of structure factor phases determined from electron microscope images is determined mainly by the level of statistical significance, which is limited by the low level of allowed electron exposure and by the number of identical unit cells that can be averaged. It is shown here that
Reducing Noise by Repetition: Introduction to Signal Averaging
Hassan, Umer; Anwar, Muhammad Sabieh
2010-01-01
This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
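The averaging principle is easy to demonstrate numerically: over N repetitions of a repeatable signal, the signal adds coherently while uncorrelated noise partially cancels, so the averaged estimate improves roughly as 1/√N. The following self-contained sketch uses a synthetic sine "response" with Gaussian noise; it is an illustration of the technique, not code from the paper:

```python
import math
import random

random.seed(0)

def noisy_trial(signal, sigma=1.0):
    """One recording: the underlying signal plus independent Gaussian noise."""
    return [s + random.gauss(0.0, sigma) for s in signal]

def average_trials(trials):
    """Point-by-point average of repeated recordings."""
    return [sum(v) / len(trials) for v in zip(*trials)]

signal = [math.sin(2 * math.pi * k / 64) for k in range(64)]

def rms_error(est):
    return math.sqrt(sum((e - s) ** 2 for e, s in zip(est, signal)) / len(signal))

err_1 = rms_error(noisy_trial(signal))
err_100 = rms_error(average_trials([noisy_trial(signal) for _ in range(100)]))
print(err_100 < err_1)  # averaging 100 trials cuts the noise roughly tenfold
```

The same idea underlies, for example, averaging of evoked potentials in physiological measurements, where a stimulus-locked response is buried in much larger background activity.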
Environmental stresses can alleviate the average deleterious effect of mutations
Directory of Open Access Journals (Sweden)
Leibler Stanislas
2003-05-01
Full Text Available Abstract Background Fundamental questions in evolutionary genetics, including the possible advantage of sexual reproduction, depend critically on the effects of deleterious mutations on fitness. Limited existing experimental evidence suggests that, on average, such effects tend to be aggravated under environmental stresses, consistent with the perception that stress diminishes the organism's ability to tolerate deleterious mutations. Here, we ask whether there are also stresses with the opposite influence, under which the organism becomes more tolerant to mutations. Results We developed a technique, based on bioluminescence, which allows accurate automated measurements of bacterial growth rates at very low cell densities. Using this system, we measured growth rates of Escherichia coli mutants under a diverse set of environmental stresses. In contrast to the perception that stress always reduces the organism's ability to tolerate mutations, our measurements identified stresses that do the opposite – that is, despite decreasing wild-type growth, they alleviate, on average, the effect of deleterious mutations. Conclusions Our results show a qualitative difference between various environmental stresses ranging from alleviation to aggravation of the average effect of mutations. We further show how the existence of stresses that are biased towards alleviation of the effects of mutations may imply the existence of average epistatic interactions between mutations. The results thus offer a connection between the two main factors controlling the effects of deleterious mutations: environmental conditions and epistatic interactions.
The background effective average action approach to quantum gravity
DEFF Research Database (Denmark)
D’Odorico, G.; Codello, A.; Pagani, C.
2016-01-01
of an UV attractive non-Gaussian fixed-point, which we find characterized by real critical exponents. Our closure method is general and can be applied systematically to more general truncations of the gravitational effective average action. © Springer International Publishing Switzerland 2016....
Error estimates in horocycle averages asymptotics: challenges from string theory
Cardella, M.A.
2010-01-01
For modular functions of rapid decay, a classical result connects the error estimate in their long horocycle average asymptotics to the Riemann hypothesis. We study similar asymptotics for modular functions with less mild growth conditions, such as polynomial growth and exponential growth.
Moving average rules as a source of market instability
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
Despite the pervasiveness of the efficient markets paradigm in the academic finance literature, the use of various moving average (MA) trading rules remains popular with financial market practitioners. This paper proposes a stochastic dynamic financial market model in which demand for traded assets
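A typical moving average trading rule of the kind such models build on compares a short and a long trailing average of prices and holds a long position while the short average is on top. An illustrative sketch with invented prices (the window lengths are arbitrary choices, not the paper's):

```python
def moving_average(prices, window):
    """Trailing moving average; None until enough history is available."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def ma_crossover_signals(prices, short=3, long=5):
    """+1 = long (short MA above long MA), -1 = short, 0 = insufficient history."""
    s, l = moving_average(prices, short), moving_average(prices, long)
    return [0 if a is None or b is None else (1 if a > b else -1)
            for a, b in zip(s, l)]

prices = [10, 11, 12, 13, 12, 11, 10, 9, 10, 11, 12, 13]
print(ma_crossover_signals(prices))
```

Because the rule keeps buying into rising prices and selling into falling ones, demand generated this way is trend-following, which is exactly the destabilizing feedback the paper studies.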
arXiv Averaged Energy Conditions and Bouncing Universes
Giovannini, Massimo
2017-11-16
The dynamics of bouncing universes is characterized by violating certain coordinate-invariant restrictions on the total energy-momentum tensor, customarily referred to as energy conditions. Although there could be epochs in which the null energy condition is locally violated, it may perhaps be enforced in an averaged sense. Explicit examples of this possibility are investigated in different frameworks.
26 CFR 1.1301-1 - Averaging of farm income.
2010-04-01
... January 1, 2003, rental income based on a share of a tenant's production determined under an unwritten... the Collection of Income Tax at Source on Wages (Federal income tax withholding), or the amount of net... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Averaging of farm income. 1.1301-1 Section 1...
Implications of Methodist clergies' average lifespan and missional ...
African Journals Online (AJOL)
2015-06-09
Jun 9, 2015 ... The author of Genesis 5 paid meticulous attention to the lifespan of several people ... of Southern Africa (MCSA), and to argue that memories of the ... average ages at death were added up and the sum was divided by 12 (which represents the 12 ..... not explicit in how the departed Methodist ministers were.
Pareto Principle in Datamining: an Above-Average Fencing Algorithm
Directory of Open Access Journals (Sweden)
K. Macek
2008-01-01
Full Text Available This paper formulates a new datamining problem: which subset of input space has the relatively highest output where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem, and compares it with clustering of above-average individuals.
Average Distance Travelled To School by Primary and Secondary ...
African Journals Online (AJOL)
This study investigated average distance travelled to school by students in primary and secondary schools in Anambra, Enugu, and Ebonyi States and effect on attendance. These are among the top ten densely populated and educationally advantaged States in Nigeria. Research evidences report high dropout rates in ...
Trend of Average Wages as Indicator of Hypothetical Money Illusion
Directory of Open Access Journals (Sweden)
Julian Daszkowski
2010-06-01
Full Text Available Since 1998, the definition of wages in Poland has included the value of social security contributions. The changed definition produces a higher level of reported wages, but was expected not to affect take-home pay. Nevertheless, after a short period, the trend of average wages returned to its previous line. This effect is interpreted in terms of money illusion.
Computation of the average energy for LXY electrons
International Nuclear Information System (INIS)
Grau Carles, A.; Grau, A.
1996-01-01
The application of an atomic rearrangement model in which we consider only the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides, requires a refined averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, ¹²⁵I and ¹⁰⁹Cd. (Author) 4 refs
Bounding quantum gate error rate based on reported average fidelity
International Nuclear Information System (INIS)
Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C
2016-01-01
Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
75 FR 78157 - Farmer and Fisherman Income Averaging
2010-12-15
... to the averaging of farm and fishing income in computing income tax liability. The regulations...: PART 1--INCOME TAXES 0 Paragraph 1. The authority citation for part 1 continues to read in part as... section 1 tax would be increased if one-third of elected farm income were allocated to each year. The...
Characteristics of phase-averaged equations for modulated wave groups
Klopman, G.; Petit, H.A.H.; Battjes, J.A.
2000-01-01
The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).
A depth semi-averaged model for coastal dynamics
Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.
2017-05-01
The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.
Effect of tank geometry on its average performance
Orlov, Aleksey A.; Tsimbalyuk, Alexandr F.; Malyugin, Roman V.; Leontieva, Daria A.; Kotelnikova, Alexandra A.
2018-03-01
The mathematical model of the non-stationary filling of vertical submerged tanks with gaseous uranium hexafluoride is presented in the paper. Calculations are given of the average productivity, heat exchange area, and filling time of tanks of various volumes with smooth inner walls, depending on their height-to-radius (H/R) ratio, as well as of the average productivity, filling degree, and filling time of a horizontally ribbed tank of volume 6·10⁻² m³ as the central hole diameter of the ribs is varied. It has been shown that growth of the H/R ratio in tanks with smooth inner walls up to the limiting values significantly increases tank average productivity and reduces filling time. Growth of the H/R ratio of a tank of volume 1.0 m³ to the limiting values (in comparison with the standard tank having H/R equal to 3.49) augments tank productivity by 23.5% and the heat exchange area by 20%. Besides, we have demonstrated that the maximum average productivity and the minimum filling time are reached for the tank of volume 6·10⁻² m³ having a central hole diameter of the horizontal ribs of 6.4·10⁻² m.
An averaged polarizable potential for multiscale modeling in phospholipid membranes
DEFF Research Database (Denmark)
Witzke, Sarah; List, Nanna Holmgaard; Olsen, Jógvan Magnus Haugaard
2017-01-01
A set of average atom-centered charges and polarizabilities has been developed for three types of phospholipids for use in polarizable embedding calculations. The lipids investigated are 1,2-dimyristoyl-sn-glycero-3-phosphocholine, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine, and 1-palmitoyl...
Understanding coastal morphodynamic patterns from depth-averaged sediment concentration
Ribas, F.; Falques, A.; de Swart, H. E.; Dodd, N.; Garnier, R.; Calvete, D.
This review highlights the important role of the depth-averaged sediment concentration (DASC) to understand the formation of a number of coastal morphodynamic features that have an alongshore rhythmic pattern: beach cusps, surf zone transverse and crescentic bars, and shoreface-connected sand
Post-model selection inference and model averaging
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2011-07-01
Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
Determination of average activating thermal neutron flux in bulk samples
International Nuclear Information System (INIS)
Doczi, R.; Csikai, J.; Doczi, R.; Csikai, J.; Hassan, F. M.; Ali, M.A.
2004-01-01
A method previously used for the determination of the average neutron flux within bulky samples has been applied to the measurement of the hydrogen content of different samples. An analytical function is given for the description of the correlation between the activity of Dy foils and the hydrogen concentrations. Results obtained by the activation and the thermal neutron reflection methods are compared.
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
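The idea of a migration-velocity-adaptive window can be sketched as a moving average whose span widens with migration time, since later-migrating, lower-mobility analytes produce broader, lower-frequency peaks that tolerate a larger averaging window without distortion. This is an illustrative reconstruction, not the authors' implementation; `base_window` and `growth` are hypothetical tuning parameters:

```python
def adaptive_moving_average(signal, base_window=3, growth=0.05):
    """Smooth with a window that widens along the electropherogram: the window
    half-width grows linearly with the data-point index (a stand-in for
    migration time). Parameters are illustrative, not from the paper."""
    out = []
    n = len(signal)
    for i in range(n):
        half = int((base_window + growth * i) // 2)
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# Sanity check: a flat baseline must pass through the filter unchanged.
flat = [5.0] * 50
out = adaptive_moving_average(flat)
print(all(abs(v - 5.0) < 1e-9 for v in out))
```

A microcontroller-friendly variant would replace the per-point slice sum with a running sum updated incrementally, which keeps the cost per sample constant.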
Grade Point Average: What's Wrong and What's the Alternative?
Soh, Kay Cheng
2011-01-01
Grade point average (GPA) has been around for more than two centuries. However, it has created a great deal of confusion, frustration, and anxiety for GPA producers and users alike, especially when used across nations for different purposes. This paper looks into the reasons for this state of affairs from the perspective of educational measurement. It…
The Effect of Honors Courses on Grade Point Averages
Spisak, Art L.; Squires, Suzanne Carter
2016-01-01
High-ability entering college students give three main reasons for not choosing to become part of honors programs and colleges: they and/or their parents believe that honors classes at the university level require more work than non-honors courses, are more stressful, and will adversely affect their self-image and grade point average (GPA) (Hill;…
40 CFR 63.652 - Emissions averaging provisions.
2010-07-01
... emissions more than the reference control technology, but the combination of the pollution prevention... emissions average. This must include any Group 1 emission points to which the reference control technology... agrees has a higher nominal efficiency than the reference control technology. Information on the nominal...
An average salary: approaches to the index determination
Directory of Open Access Journals (Sweden)
T. M. Pozdnyakova
2017-01-01
Full Text Available The article "An average salary: approaches to the index determination" is devoted to studying the various methods of calculating this index, both those used by the official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, and to make certain additions that help to clarify this index. The information base of the research comprises laws and regulations of the Government of the Russian Federation, statistical and analytical materials of the Federal State Statistics Service of Russia for the section "Socio-economic indexes: living standards of the population", and scientific papers describing different approaches to the average salary calculation. Data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. The following methods were used: analytical, statistical, computational-mathematical and graphical. The main result of the research is an option for supplementing the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations mainly engaged in internal secondary jobs. The need for this correction factor arises from the current working conditions in a wide range of organizations, where an employee is forced to fulfill additional job duties on top of the main position. As a result, the average salary at an enterprise is often difficult to assess objectively, because it is computed from multiple rates per staff member. In other words, the average salary of…
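The distortion the correction factor targets can be illustrated with a toy calculation. The function and all numbers below are hypothetical; they only show how dividing payroll by the number of staff positions (rates) rather than by actual persons understates per-person earnings when internal secondary jobs are common.

```python
def average_salary(payroll_total, staff_positions, persons):
    """Contrast the naive per-position average with a corrected per-person one.

    When employees hold internal secondary jobs, headcount by rate
    (staff_positions) exceeds the number of real persons.  The assumed
    correction factor is simply positions per person; this is an
    illustration of the article's point, not its exact formula.
    """
    per_position = payroll_total / staff_positions
    correction = staff_positions / persons  # > 1 when secondary jobs exist
    per_person = per_position * correction
    return per_position, per_person
```

For example, a payroll of 1,200,000 spread over 40 rates held by 30 people gives 30,000 per position but 40,000 per actual person.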
Average use of Alcohol and Binge Drinking in Pregnancy: Neuropsychological Effects at Age 5
DEFF Research Database (Denmark)
Kilburn, Tina R.
Objectives: The objective of this PhD was to examine the relation between low weekly average maternal alcohol consumption and 'binge drinking' (defined as an intake of 5 or more drinks per occasion) during pregnancy and information processing time (IPT) in children aged five years. Since a method… that provided detailed information on maternal alcohol drinking patterns before and during pregnancy and other lifestyle factors. These women were categorized into groups by prenatal average alcohol intake and binge drinking (timing and number of episodes). At the age of five years the children of these women… and number of episodes) and between simple reaction time (SRT) and alcohol intake or binge drinking (timing and number of episodes) during pregnancy. Conclusion: This was one of the first studies investigating IPT and prenatal average alcohol intake and binge drinking in early pregnancy. Daily prenatal…
Metallographic examination in irradiated materials examination facility
Energy Technology Data Exchange (ETDEWEB)
Choo, Yong Sun; Lee, Key Soon; Park, Dae Gyu; Ahn, Sang Bok; Yoo, Byoung Ok
1998-01-01
It is very important to have metallographic examination equipment in a hot cell to observe the microstructure of nuclear fuels and materials irradiated in nuclear power and/or research reactors. This equipment must be operated with master-slave manipulators, so it is designed, manufactured and modified for easy, trouble-free handling. The metallographic examination equipment and techniques, as well as the operation procedure, are described so that an operator can practice metallography in the hot cell. (author). 5 refs., 7 tabs., 21 figs.
Gong, Qi; Schaubel, Douglas E
2017-03-01
Treatments are frequently evaluated in terms of their effect on patient survival. In settings where randomization of treatment is not feasible, observational data are employed, necessitating correction for covariate imbalances. Treatments are usually compared using a hazard ratio, and most existing methods that quantify the treatment effect through the survival function are applicable only to treatments assigned at time 0. In the data structure of interest here, subjects typically begin follow-up untreated; time until treatment and the pretreatment death hazard are both heavily influenced by longitudinal covariates; and subjects may experience periods of treatment ineligibility. We propose semiparametric methods for estimating the average difference in restricted mean survival time attributable to a time-dependent treatment: the average effect of treatment among the treated under current treatment-assignment patterns. The pre- and posttreatment models are partly conditional, in that they use the covariate history up to the time of treatment. The pretreatment model is estimated through recently developed landmark analysis methods. For each treated patient, fitted pre- and posttreatment survival curves are projected out, then averaged in a manner that accounts for the censoring of treatment times. Asymptotic properties are derived and evaluated through simulation. The proposed methods are applied to liver transplant data in order to estimate the effect of liver transplantation on survival among transplant recipients under current practice patterns. © 2016, The International Biometric Society.
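The quantity being contrasted, restricted mean survival time (RMST), is the area under the survival curve up to a truncation time τ. A generic numerical sketch (not the authors' landmark estimator; function name and grid handling are ours) might look like:

```python
import numpy as np

def restricted_mean_survival(times, surv, tau):
    """Restricted mean survival time: area under S(t) on [0, tau].

    Given a fitted survival curve S(t) evaluated on a time grid, the
    RMST is the integral of S up to tau, approximated here by the
    trapezoidal rule.  S(0) = 1 is prepended and the last available
    value is carried forward to tau.
    """
    times = np.asarray(times, dtype=float)
    surv = np.asarray(surv, dtype=float)
    keep = times <= tau
    t = np.concatenate(([0.0], times[keep], [tau]))
    last = surv[keep][-1] if keep.any() else 1.0
    s = np.concatenate(([1.0], surv[keep], [last]))
    # trapezoidal area under the curve
    return float(np.sum((t[1:] - t[:-1]) * (s[1:] + s[:-1]) / 2.0))
```

The treatment effect of interest would then be the difference between the RMST projected under the posttreatment curve and the RMST projected under the pretreatment curve, averaged over treated patients.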
High-average-power diode-pumped Yb: YAG lasers
International Nuclear Information System (INIS)
Avizonis, P V; Beach, R; Bibeau, C M; Emanuel, M A; Harris, D G; Honea, E C; Monroe, R S; Payne, S A; Skidmore, J A; Sutton, S B
1999-01-01
A scaleable diode end-pumping technology for high-average-power slab and rod lasers has been under development for the past several years at Lawrence Livermore National Laboratory (LLNL). This technology has particular application to high-average-power Yb:YAG lasers that utilize a rod-configured gain element. Previously, this rod-configured approach achieved average output powers in a single 5 cm long by 2 mm diameter Yb:YAG rod of 430 W cw and 280 W q-switched. High-beam-quality (M² = 2.4) q-switched operation has also been demonstrated at over 180 W of average output power. More recently, using a dual-rod configuration consisting of two 5 cm long by 2 mm diameter laser rods with birefringence compensation, we have achieved 1080 W of cw output with an M² value of 13.5 at an optical-to-optical conversion efficiency of 27.5%. With the same dual-rod laser operated in a q-switched mode, we have also demonstrated 532 W of average power with M² < 2.5 at 17% optical-to-optical conversion efficiency. These q-switched results were obtained at a 10 kHz repetition rate and resulted in 77 ns pulse durations. These improved levels of operational performance have been achieved as a result of technology advancements made in several areas that will be covered in this manuscript. These enhancements to our architecture include: (1) hollow lens ducts that enable the use of advanced cavity architectures permitting birefringence compensation and the ability to run in large aperture-filling, near-diffraction-limited modes; (2) compound laser rods with flanged nonabsorbing endcaps fabricated by diffusion bonding; and (3) techniques for suppressing amplified spontaneous emission (ASE) and parasitics in the polished-barrel rods.
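As a quick consistency check of the quoted figures, the optical-to-optical efficiency ties output power to delivered pump light. A hypothetical helper (not from the paper):

```python
def diode_pump_power(output_w, opt_efficiency):
    """Optical pump power implied by output power and efficiency.

    Illustrative arithmetic only: the abstract's 1080 W cw at 27.5%
    optical-to-optical efficiency implies roughly 3.9 kW of delivered
    pump light.
    """
    return output_w / opt_efficiency
```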
High average power diode pumped solid state lasers for CALIOPE
International Nuclear Information System (INIS)
Comaskey, B.; Halpin, J.; Moran, B.
1994-07-01
Diode pumping of solid state media offers the opportunity for very low maintenance, high efficiency, and compact laser systems. For remote sensing, such lasers may be used to pump tunable non-linear sources, or if tunable themselves, act directly or through harmonic crystals as the probe. The needs of long range remote sensing missions require laser performance in the several watts to kilowatts range. At these power performance levels, more advanced thermal management technologies are required for the diode pumps. The solid state laser design must now address a variety of issues arising from the thermal loads, including fracture limits, induced lensing and aberrations, induced birefringence, and laser cavity optical component performance degradation with average power loading. In order to highlight the design trade-offs involved in addressing the above issues, a variety of existing average power laser systems are briefly described. Included are two systems based on Spectra Diode Laboratory's water impingement cooled diode packages: a two times diffraction limited, 200 watt average power, 200 Hz multi-rod laser/amplifier by Fibertek, and TRW's 100 watt, 100 Hz, phase conjugated amplifier. The authors also present two laser systems built at Lawrence Livermore National Laboratory (LLNL) based on their more aggressive diode bar cooling package, which uses microchannel cooler technology capable of 100% duty factor operation. They then present the design of LLNL's first generation OPO pump laser for remote sensing. This system is specified to run at 100 Hz, 20 nsec pulses each with 300 mJ, less than two times diffraction limited, and with a stable single longitudinal mode. The performance of the first testbed version will be presented. The authors conclude with directions their group is pursuing to advance average power lasers. This includes average power electro-optics, low heat load lasing media, and heat capacity lasers
Erfanian, A.; Fomenko, L.; Wang, G.
2016-12-01
The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates. It has been a primary reference for conclusions in major coordinated studies, e.g. the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially prohibitive for regional climate modeling, as model uncertainties can originate from both the RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to the reduced computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: multi-model ensemble, ensemble analysis, ERF, regional climate modeling
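The core of the ERF construction is a plain average over the GCMs' forcing fields. A minimal sketch, assuming all GCM fields have already been regridded to a common grid and time axis (the function name and array layout are ours):

```python
import numpy as np

def ensemble_reconstructed_forcing(ibc_fields):
    """Average per-GCM initial/boundary-condition fields into one set.

    ibc_fields: sequence of arrays, one per GCM, all with identical
    shape, e.g. (time, lat, lon).  Instead of running the RCM once per
    GCM and averaging the outputs (MME), the ERF idea averages the
    forcings first and drives a single RCM run with the result.
    """
    stacked = np.stack([np.asarray(f, dtype=float) for f in ibc_fields])
    return stacked.mean(axis=0)  # one averaged IBC set
```

In practice each prognostic variable (winds, temperature, humidity, surface pressure, SSTs) would be averaged field by field before being written to the RCM's boundary-condition files.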
Energy Technology Data Exchange (ETDEWEB)
Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)
2011-04-07
The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak (Ū_P), the average (Ū), the effective (U_eff) or the maximum peak (U_P) tube voltage. This work proposes a method for determining the PPV from measurements with a kV-meter that measures the average (Ū) or the average peak (Ū_P) voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average-peak conversion factor k_PPV,kVp and the average conversion factor k_PPV,Uav were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. A regression equation and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, such as 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference PPV values and those calculated according to the proposed method were less than 2%. Practical aspects of the voltage ripple measurement are also addressed. The proposed method provides a rigorous basis for determining the PPV with kV-meters from Ū_P and Ū measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
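The conversion chain described, from meter reading via calibration coefficient and conversion factor to PPV, amounts to a product of three terms. A hypothetical sketch (coefficient values in the example are invented for illustration; the actual factors come from the paper's regression at the given tube voltage and ripple):

```python
def ppv_from_kvmeter(reading_kv, calib_coeff, k_ppv):
    """Convert a kV-meter reading to practical peak voltage (PPV).

    Following the structure described in the abstract: the reading
    (average or average-peak voltage) is first corrected with the
    meter's calibration coefficient, then multiplied by the conversion
    factor k_PPV appropriate for the tube voltage and ripple.
    """
    return reading_kv * calib_coeff * k_ppv
```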
DEFF Research Database (Denmark)
Hamre, Bjørn
This study originated in an observation of the return of medical and individual-oriented explanations in conceptualizing the problematization of students in Denmark. The study includes analyses of the files of 125 Danish students from a 'help school' focusing on children with special education… needs who were examined by the school psychologist and referred to special needs education in 1935-45, and, for 'the present' period, an analysis of the files of 100 students referred to special education by school psychologists in 2005-10. Through a Foucauldian lens, this study analyzes the ways… to capture the ways in which children with deviancy and other 'problems' have been categorized. The key research questions are: 1) What types of problematizations can be found in the files? 2) How do these problematizations reflect the educational, psychological and medical discourses of the time? and 3…