Evaluating and Estimating the WCET Criticality Metric
DEFF Research Database (Denmark)
Jordan, Alexander
2014-01-01
a programmer (or compiler) from targeting optimizations the right way. A possible resort is to use a metric that targets WCET and which can be efficiently computed for all code parts of a program. Similar to dynamic profiling techniques, which execute code with input that is typically expected for the application, based on WCET analysis we can indicate how critical a code fragment is in relation to the worst-case bound. Computing such a metric on top of static analysis incurs a certain overhead, though, which increases with the complexity of the underlying WCET analysis. We present our approach to estimate the Criticality metric by relaxing the precision of WCET analysis. Through this, we can reduce analysis time by orders of magnitude while introducing only minor error. To evaluate our estimation approach and share our garnered experience using the metric, we evaluate real-time programs, which…
DEFF Research Database (Denmark)
Hanxleden, Reinhard von; Holsti, Niklas; Lisper, Björn
Following the successful WCET Tool Challenges in 2006 and 2008, the third event in this series was organized in 2011, again with support from the ARTIST DESIGN Network of Excellence. Following the practice established in the previous Challenges, the WCET Tool Challenge 2011 (WCC'11) defined two kin…
WCET Tool Challenge 2011: Report
DEFF Research Database (Denmark)
Bonenfant, Armelle; Cassé, Hugues; Bünte, Sven
2011-01-01
Following the successful WCET Tool Challenges in 2006 and 2008, the third event in this series was organized in 2011, again with support from the ARTIST DESIGN Network of Excellence. Following the practice established in the previous Challenges, the WCET Tool Challenge 2011 (WCC’11) defined two k...
Extracting Loop Bounds for WCET Analysis Using the Instrumentation Point Graph
Betts, A.; Bernat, G.
2009-05-01
Every calculation engine proposed in the literature of Worst-Case Execution Time (WCET) analysis requires upper bounds on loop iterations. Existing mechanisms to procure this information are either error prone, because they are gathered from the end-user, or limited in scope, because automatic analyses target very specific loop structures. In this paper, we present a technique that obtains bounds completely automatically for arbitrary loop structures. In particular, we show how to employ the Instrumentation Point Graph (IPG) to parse traces of execution (generated by an instrumented program) in order to extract bounds relative to any loop-nesting level. With this technique, therefore, non-rectangular dependencies between loops can be captured, allowing more accurate WCET estimates to be calculated. We demonstrate the improvement in accuracy by comparing WCET estimates computed through our HMB framework against those computed with state-of-the-art techniques.
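The counting step behind such trace parsing can be sketched in a few lines. This is only an illustration, not the IPG technique itself: it assumes (hypothetically) that each instrumentation point in a trace carries the id of the loop header it belongs to, so a bound on an inner loop relative to one iteration of its parent falls out of a single scan.

```python
def loop_bounds(traces, header, parent):
    """Bound on iterations of loop `header` per iteration of loop
    `parent`, derived by scanning instrumented execution traces.
    A trace is a list of instrumentation-point ids; an occurrence of
    a loop-header id marks one iteration of that loop."""
    bound = 0
    for trace in traces:
        count = 0
        for point in trace:
            if point == parent:
                # a new iteration of the outer loop starts: close off
                # the inner count and reset it
                bound = max(bound, count)
                count = 0
            elif point == header:
                count += 1
        bound = max(bound, count)
    return bound

# Triangular nest: the inner loop runs i times on outer iteration i,
# so relative to one outer iteration the bound is 4, not 4 * 4.
trace = []
for i in range(1, 5):
    trace.append("outer")
    trace.extend(["inner"] * i)
print(loop_bounds([trace], "inner", "outer"))  # → 4
```

A real implementation would walk the Instrumentation Point Graph to attribute trace events to loop-nesting levels; the ids `outer`/`inner` are placeholders.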
Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.
Directory of Open Access Journals (Sweden)
Fan Ni
Full Text Available Caches play an important role in embedded systems to bridge the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors are often equipped with a locking mechanism to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and in turn the worst-case execution time. Estimations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking.
Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.
Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng
2013-01-01
Caches play an important role in embedded systems to bridge the performance gap between fast processors and slow memory, and prefetching mechanisms have been proposed to further improve cache performance. In real-time systems, however, the use of caches complicates Worst-Case Execution Time (WCET) analysis due to their unpredictable behavior. Modern embedded processors are often equipped with a locking mechanism to improve the timing predictability of the instruction cache. However, locking the whole cache may degrade cache performance and increase the WCET of the real-time application. In this paper, we propose an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and in turn the worst-case execution time. Estimations on typical real-time applications show that the partial cache locking mechanism yields remarkable WCET improvement over static analysis and full cache locking.
Reducing WCET Overestimations by Correcting Errors in Loop Bound Constraints
Directory of Open Access Journals (Sweden)
Fanqi Meng
2017-12-01
Full Text Available In order to reduce overestimations of worst-case execution time (WCET), in this article we first report a kind of specific WCET overestimation caused by non-orthogonal nested loops. Then, we propose a novel correction approach which has three basic steps. The first step is to locate the worst-case execution path (WCEP) in the control flow graph and then map it onto source code. The second step is to identify non-orthogonal nested loops from the WCEP by means of an abstract syntax tree. The last step is to recursively calculate the WCET errors caused by the loose loop bound constraints, and then subtract the total errors from the overestimations. The novelty lies in the fact that the WCET correction is only conducted on the non-branching part of the WCEP, thus avoiding potential safety risks caused by possible WCEP switches. Experimental results show that our approach reduces the specific WCET overestimation by an average of more than 82%, and 100% of the corrected WCETs are no less than the actual WCET. Thus, our approach is not only effective but also safe. It will help developers to design energy-efficient and safe real-time systems.
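The correction step described in this abstract can be illustrated on the classic non-orthogonal (triangular) nest `for i in range(n): for j in range(i + 1)`. A rectangular loop-bound constraint charges the inner body n times per outer iteration; the sketch below, with hypothetical names and a unit cost chosen purely for illustration, subtracts the resulting error in the spirit of the paper's third step.

```python
def rectangular_wcet(n, cost):
    # Loose constraint: inner loop bounded by n on every outer iteration.
    return n * n * cost

def triangular_iterations(n):
    # Actual worst case of `for i in range(n): for j in range(i + 1)`:
    # the inner body runs 1 + 2 + ... + n = n*(n+1)/2 times.
    return n * (n + 1) // 2

def corrected_wcet(wcet_estimate, n, cost):
    # Subtract the error introduced by the loose loop-bound constraint.
    error = rectangular_wcet(n, cost) - triangular_iterations(n) * cost
    return wcet_estimate - error

n, cost = 10, 5                         # 10 outer iterations, 5 cycles per inner body
loose = rectangular_wcet(n, cost)       # 500 cycles from the loose constraint
print(corrected_wcet(loose, n, cost))   # → 275 (= 55 iterations * 5 cycles)
```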
Beasley, Wyn
2014-12-01
This paper examines the career of Johann Sebastian Bach (1685-1750) and the role played by Wilhelm His I (who was, with Albert von Haller, a noted pioneer of physiology) in the exhumation of Bach's remains in 1894. His's examination of these remains allowed the sculptor Carl Seffner to produce the celebrated statue of Bach which stands outside the church of St Thomas in Leipzig, where the composer was employed from 1723 until his death. Modern forensic techniques have recently enabled Bach's image to be reconstructed in even more spectacular detail. © 2014 Royal Australasian College of Surgeons.
Persistence-Based Branch Misprediction Bounds for WCET Analysis
DEFF Research Database (Denmark)
Puffitsch, Wolfgang
2015-01-01
Branch prediction is an important feature of pipelined processors to achieve high performance. However, it can lead to overly pessimistic worst-case execution time (WCET) bounds when modeled too conservatively. This paper presents bounds on the number of branch mispredictions for local dynamic branch predictors. To handle interferences between branch instructions we use the notion of persistence, a concept that is also found in cache analyses. The bounds apply to branches in general, not only to branches that close a loop. Furthermore, the bounds can be easily integrated into integer linear programming formulations of the WCET problem. An evaluation on a number of benchmarks shows that with these bounds, dynamic branch prediction does not necessarily lead to higher WCET bounds than static prediction schemes.
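To illustrate how such a bound enters an IPET-style formulation: the objective is execution frequency times cost plus mispredictions times penalty, with the misprediction count constrained by the persistence bound instead of the branch frequency. The sketch below brute-forces this tiny integer program with plain Python rather than an ILP solver; all numbers are invented for illustration and are not from the paper.

```python
# Toy IPET model: a loop branch executes f_b times at `cost` cycles
# each; every misprediction adds `penalty` cycles.  A persistence-style
# analysis bounds mispredictions by a small constant (e.g. the predictor
# warming up once) instead of the pessimistic f_b.
def wcet_bound(f_b, cost, penalty, mispredict_bound):
    # maximize f_b*cost + m*penalty  subject to  0 <= m <= min(f_b, bound)
    best = 0
    for m in range(min(f_b, mispredict_bound) + 1):
        best = max(best, f_b * cost + m * penalty)
    return best

pessimistic = wcet_bound(100, 2, 5, 100)  # every execution mispredicts
persistence = wcet_bound(100, 2, 5, 2)    # at most 2 mispredictions
print(pessimistic, persistence)  # → 700 210
```

In a real formulation the maximization over all variables is delegated to an ILP solver; the point here is only that a tighter misprediction constraint directly tightens the WCET bound.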
WCET Analysis of Java Bytecode Featuring Common Execution Environments
DEFF Research Database (Denmark)
Luckow, Kasper Søe; Thomsen, Bent; Frost, Christian
2011-01-01
We present a novel tool for statically determining the Worst Case Execution Time (WCET) of Java Bytecode-based programs called Tool for Execution Time Analysis of Java bytecode (TetaJ). This tool differentiates itself from existing tools by separating the individual constituents of the execution environment into independent components. The prime benefit is that it can be used for execution environments featuring common embedded processors and software implementations of the JVM. TetaJ employs a model checking approach for statically determining WCET where the Java program, the JVM, and the hardware…
Indian Academy of Sciences (India)
Long interested in music of various kinds, ... other art form, it is impossible to adequately explain the appeal of Bach's music ... composer, does exhibit a full range of emotions such as joy, ... seem to be cerebral rather than emotional. Moreover ...
WCET Analysis of ARM Processors using Real-Time Model Checking
DEFF Research Database (Denmark)
Toft, Martin; Olesen, Mads Christian; Dalsgaard, Andreas
2009-01-01
This paper presents a flexible method that utilises real-time model checking to determine safe and sharp WCETs for processes running on hardware platforms featuring pipelining and caching.
High-level synthesis for reduction of WCET in real-time systems
DEFF Research Database (Denmark)
Kristensen, Andreas Toftegaard; Pezzarossa, Luca; Sparsø, Jens
2017-01-01
… Compared to executing the high-level language code on a processor, HLS can be used to create hardware that accelerates critical parts of the code. When discussing performance in the context of real-time systems, it is the worst-case execution time (WCET) of a task that matters. The WCET obviously benefits from hardware acceleration, but it may also benefit from a tighter bound on the WCET. This paper explores the use and integration of accelerators generated using HLS into a time-predictable processor intended for real-time systems. The high-level design tool, Vivado HLS, is used to generate hardware…
Poeetilise utoopia meister Aino Bach
2003-01-01
28 March - 11 May at the Adamson-Eric Museum, an overview exhibition of the work of graphic artist Aino Bach. Works by Eduard Wiiralt and Kaarel Liimand are exhibited for comparison. Mark Soosaar's film about A. Bach, "Liblikapüüdja" (1978), can also be seen. Curator Anne Lõugas, designer Virge Jõekalda. Museum lessons "How to draw a person?" led by Kai Tuvik. At the seminar "The woman artist and her time. Aino Bach" on 1 April, speakers and presentations
De ogen van Johann Sebastian Bach
Zegers, R. H. C.
2005-01-01
Limited vision seems to have been Johann Sebastian Bach's (1685-1750) only physical problem. Myopia seems the most likely cause and he probably developed cataracts later in life. In addition to the cataracts, his worsening vision may have been due in part to some other eye problem. In 1750 Bach's
Blindness of Johann Sebastian Bach.
Tarkkanen, Ahti
2013-03-01
Johann Sebastian Bach (1685-1750) was one of the greatest composers of all time. Apart from performing as a brilliant organist, he composed over 1,100 works in almost every musical genre. He was known as a hardworking, deeply Christian person, who had to support his family of 20 children and many students staying at his home. At the age of 64 years, his vision started to decline. Old biographies claim that it was the result of overstressing his vision in poor illumination. At the persuasion of his friends, he had both eyes operated on by a travelling British eye surgeon. A cataract couching was performed. After surgery, Bach was totally blind and unable to play the organ, compose or direct choirs and orchestras. He was confined to bed, suffering immense pain of the eyes and the body. He died less than 4 months after surgery. In this paper, intractable glaucoma due to pupillary block, or secondary to phacoanaphylactic endophthalmitis, is suggested as the plausible diagnosis. © 2012 The Author. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
Analysis list: Bach2 [Chip-atlas Archive]
Lifescience Database Archive (English)
Full Text Available Bach2 Blood + mm9 http://dbarchive.biosciencedbc.jp/kyushu-u/mm9/target/Bach2.1.tsv … http://dbarchive.biosciencedbc.jp/kyushu-u/mm9/target/Bach2.5.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/mm9/target/Bach2.10.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/mm9/colo/Bach2.Blood.tsv http://dbarchive.biosciencedbc.jp/kyushu-u/mm9/colo/Blood.gml …
Positive Feed-Bach: A Motivation Game.
Kennedy, William G.
1984-01-01
Classroom scrip emblazoned with a picture of Johann Sebastian Bach is used to motivate junior high school students. Students are paid for cooperation and performance. Two auctions are held during the school year, at which students can spend the scrip on musical items. (CS)
Rigidity of generalized Bach-flat vacuum static spaces
Yun, Gabjin; Hwang, Seungsu
2017-11-01
In this paper, we study the structure of generalized Bach-flat vacuum static spaces. Generalized Bach-flat metrics are considered as extensions of both Einstein and Bach-flat metrics. First, we prove that a compact Riemannian n-manifold with n ≥ 4 which is a generalized Bach-flat vacuum static space is Einstein. A generalized Bach-flat vacuum static space with potential function f having compact level sets is either Ricci-flat or a warped product with zero scalar curvature when n ≥ 5; when n = 4, it is Einstein if f attains its minimum. Second, we consider critical metrics for another quadratic curvature functional involving the Ricci tensor, and prove similar results. Lastly, by applying the technique developed above, we prove the Besse conjecture when the manifold is generalized Bach-flat.
USING BACH FLOWER IN HOLISTIC PSYCHOTHERAPY
Directory of Open Access Journals (Sweden)
Vagner Ferreira do Nascimento
2017-05-01
Full Text Available This is a narrative review of the scientific literature that aimed to describe concepts and approaches for the indication of the therapeutic use of Bach flower remedies in holistic psychotherapy. The review was developed in February 2016 from books, official documents and articles indexed in the Lilacs and SciELO databases. Bach flower therapy is a therapeutic method that aims to restore the balance of the human being, restoring vital energy through holistic care. Because the flower essences act on the psychic and emotional dimensions of the individual, when employed in holistic psychotherapy they can provide greater autonomy, self-care and effectiveness compared with other alternative methods. The literature indicated that flower essence therapy is a safe practice and can be used as a complement to health care, but should be performed by qualified professionals. It has also shown to be a promising and important area for nursing professionals, but it still requires greater investment in research to support the practice.
Sasakian manifolds with purely transversal Bach tensor
Ghosh, Amalendu; Sharma, Ramesh
2017-10-01
We show that a (2n + 1)-dimensional Sasakian manifold (M, g) with a purely transversal Bach tensor has constant scalar curvature ≥ 2n(2n + 1), equality holding if and only if (M, g) is Einstein. For dimension 3, M is locally isometric to the unit sphere S3. For dimension 5, if in addition (M, g) is complete, then it has positive Ricci curvature and is compact with finite fundamental group π1(M).
Cohort profile: the Boston Area Community Health (BACH) survey.
Piccolo, Rebecca S; Araujo, Andre B; Pearce, Neil; McKinlay, John B
2014-02-01
The Boston Area Community Health (BACH) Survey is a community-based, random sample, epidemiologic cohort of n = 5502 Boston (MA) residents. The baseline BACH Survey (2002-05) was designed to explore the mechanisms conferring increased health risks on minority populations with a particular focus on urologic signs/symptoms and type 2 diabetes. To this end, the cohort was designed to include adequate numbers of US racial/ethnic minorities (Black, Hispanic, White), both men and women, across a broad age distribution. Follow-up surveys were conducted ∼5 (BACH II, 2008) and 7 (BACH III, 2010) years later, which allows for both within- and between-person comparisons over time. The BACH Survey's measures were designed to cover the following seven broad categories: socio-demographics, health care access/utilization, lifestyles, psychosocial factors, health status, physical measures and biochemical parameters. The breadth of measures has allowed BACH researchers to identify disparities and quantify contributions to social disparities in a number of health conditions including urologic conditions (e.g. nocturia, lower urinary tract symptoms, prostatitis), type 2 diabetes, obesity, bone mineral content and density, and physical function. BACH I data are available through the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) Central Repositories (www.niddkrepository.org). Further inquiries can be made through the New England Research Institutes Inc. website (www.neriscience.com/epidemiology).
Instructional Practices: Teaching Speech Composition with J. S. Bach.
Merriam, Allen H.
1979-01-01
Describes the structural similarities inherent in musical and linguistic compositions and how they reflect basic human impulses and principles of effective composition. Uses Bach's "Passacaglia" to illustrate the characteristics of good composition. (JMF)
BACH transcription factors in innate and adaptive immunity.
Igarashi, Kazuhiko; Kurosaki, Tomohiro; Roychoudhuri, Rahul
2017-07-01
BTB and CNC homology (BACH) proteins are transcriptional repressors of the basic region leucine zipper (bZIP) transcription factor family. Recent studies indicate widespread roles of BACH proteins in controlling the development and function of the innate and adaptive immune systems, including the differentiation of effector and memory cells of the B and T cell lineages, CD4+ regulatory T cells and macrophages. Here, we emphasize similarities at a molecular level in the cell-type-specific activities of BACH factors, proposing that competitive interactions of BACH proteins with transcriptional activators of the bZIP family form a common mechanistic theme underlying their diverse actions. The findings contribute to a general understanding of how transcriptional repressors shape lineage commitment and cell-type-specific functions through repression of alternative lineage programmes.
Microfilming of the Bach Manuscripts in the Staatsbibliothek Berlin.
Penzold, Leonhard
1998-01-01
Describes a microfilming project to preserve music manuscripts of Johann Sebastian Bach at the Staatsbibliothek Berlin (Germany), highlighting project goals, problems and peculiarities encountered filming the collection, color micrography, and black-and-white filming. (PEN)
Anneli Remme soovitab : Ajaloo ilu - Johann Sebastian Bach / Anneli Remme
Remme, Anneli, 1968-
2005-01-01
Agentuuri Corelli Music ja Eesti Klavessiinisõprade Tsunfti korraldatavast klavessiinikontsertide sarjast "Ajaloo ilu - Johann Sebastian Bach" (avakontserdid 17. sept. Kadrioru lossis, 18. sept. Pärnu Eliisabeti kirikus)
BACH1 Ser919Pro variant and breast cancer risk
Directory of Open Access Journals (Sweden)
Eerola Hannaleena
2006-01-01
Full Text Available Abstract Background BACH1 (BRCA1-associated C-terminal helicase 1; also known as BRCA1-interacting protein 1, BRIP1) is a helicase protein that interacts in vivo with BRCA1, the protein product of one of the major genes for hereditary predisposition to breast cancer. Previously, two BACH1 germ line missense mutations have been identified in early-onset breast cancer patients with and without family history of breast and ovarian cancer. In this study, we aimed to evaluate whether there are BACH1 genetic variants that contribute to breast cancer risk in Finland. Methods The BACH1 gene was screened for germ line alterations among probands from 43 Finnish BRCA1/2 negative breast cancer families. Recently, one of the observed common variants, Ser-allele of the Ser919Pro polymorphism, was suggested to associate with an increased breast cancer risk, and was here evaluated in an independent, large series of 888 unselected breast cancer patients and in 736 healthy controls. Results Six BACH1 germ line alterations were observed in the mutation analysis, but none of these were found to associate with the cancer phenotype. The Val193Ile variant that was seen in only one family was further screened in an independent series of 346 familial breast cancer cases and 183 healthy controls, but no additional carriers were observed. Individuals with the BACH1 Ser919-allele were not found to have an increased breast cancer risk when the Pro/Ser heterozygotes (OR 0.90; 95% CI 0.70–1.16; p = 0.427) or Ser/Ser homozygotes (OR 1.02; 95% CI 0.76–1.35; p = 0.91) were compared to Pro/Pro homozygotes, and there was no association of the variant with any breast tumor characteristics, age at cancer diagnosis, family history of cancer, or survival. Conclusion Our results suggest that the BACH1 Ser919 is not a breast cancer predisposition allele in the Finnish study population. Together with previous studies, our results also indicate that although some rare germ line variants
BACH1 Ser919Pro variant and breast cancer risk
International Nuclear Information System (INIS)
Vahteristo, Pia; Yliannala, Kristiina; Tamminen, Anitta; Eerola, Hannaleena; Blomqvist, Carl; Nevanlinna, Heli
2006-01-01
BACH1 (BRCA1-associated C-terminal helicase 1; also known as BRCA1-interacting protein 1, BRIP1) is a helicase protein that interacts in vivo with BRCA1, the protein product of one of the major genes for hereditary predisposition to breast cancer. Previously, two BACH1 germ line missense mutations have been identified in early-onset breast cancer patients with and without family history of breast and ovarian cancer. In this study, we aimed to evaluate whether there are BACH1 genetic variants that contribute to breast cancer risk in Finland. The BACH1 gene was screened for germ line alterations among probands from 43 Finnish BRCA1/2 negative breast cancer families. Recently, one of the observed common variants, Ser-allele of the Ser919Pro polymorphism, was suggested to associate with an increased breast cancer risk, and was here evaluated in an independent, large series of 888 unselected breast cancer patients and in 736 healthy controls. Six BACH1 germ line alterations were observed in the mutation analysis, but none of these were found to associate with the cancer phenotype. The Val193Ile variant that was seen in only one family was further screened in an independent series of 346 familial breast cancer cases and 183 healthy controls, but no additional carriers were observed. Individuals with the BACH1 Ser919-allele were not found to have an increased breast cancer risk when the Pro/Ser heterozygotes (OR 0.90; 95% CI 0.70–1.16; p = 0.427) or Ser/Ser homozygotes (OR 1.02; 95% CI 0.76–1.35; p = 0.91) were compared to Pro/Pro homozygotes, and there was no association of the variant with any breast tumor characteristics, age at cancer diagnosis, family history of cancer, or survival. Our results suggest that the BACH1 Ser919 is not a breast cancer predisposition allele in the Finnish study population. Together with previous studies, our results also indicate that although some rare germ line variants in BACH1 may contribute to breast cancer development, the
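Odds ratios and Wald confidence intervals of the kind reported in these two records can be recomputed from a 2x2 table of genotype counts. The sketch below uses invented counts, not the study's data, purely to show the arithmetic.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and (1 - alpha) Wald CI for a 2x2 table:
         a = exposed cases,    b = exposed controls,
         c = unexposed cases,  d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical genotype counts (carriers vs non-carriers in cases/controls):
or_, lo, hi = odds_ratio_ci(440, 380, 448, 356)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```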
4. Passion in the Work of Johann Sebastian Bach
Directory of Open Access Journals (Sweden)
Medňanský Karol
2016-03-01
Full Text Available Passions are exceptionally important in the work of Johann Sebastian Bach. His passion compositions are rooted particularly in Luther's Reformation, chiefly in a developmental tendency based on the works of Johann Walter, Hans Leo Hassler and Michael Praetorius. The most significant forerunner of J. S. Bach was Heinrich Schütz. J. S. Bach's textual approach follows the model of the passion oratorio, whose main representative was the Hamburg librettist Heinrich Brockes. Interestingly, before Bach's arrival in 1723 there was no long tradition of passions in Leipzig; they were first performed there in 1721. J. S. Bach is demonstrably the author of two passions: the St Matthew Passion BWV 244 and the St John Passion BWV 245. His authorship of the St Lukas Passion BWV 246 is strongly called into question, and of the St Mark Passion BWV 247 only the text has been preserved.
Rigidity of complete noncompact bach-flat n-manifolds
Chu, Yawei; Feng, Pinghua
2012-11-01
Let (Mn,g) be a complete noncompact Bach-flat n-manifold with positive Yamabe constant and constant scalar curvature. Assume that the L2-norm of the trace-free Riemannian curvature tensor R∘m is finite. In this paper, we prove that (Mn,g) is a constant curvature space if the L-norm of R∘m is sufficiently small. Moreover, we obtain a gap theorem for (Mn,g) with positive scalar curvature. This can be viewed as a generalization of our earlier results on 4-dimensional Bach-flat manifolds with constant scalar curvature R ≥ 0 [Y.W. Chu, A rigidity theorem for complete noncompact Bach-flat manifolds, J. Geom. Phys. 61 (2011) 516-521]. Furthermore, when n > 9, we derive a rigidity result for R < 0.
The Bach equations in spin-coefficient form
Forbes, Hamish
2018-06-01
Conformal gravity theories are defined by field equations that determine only the conformal structure of the spacetime manifold. The Bach equations represent an early example of such a theory; we present them here in component form in terms of spin- and boost-weighted spin-coefficients using the compacted spin-coefficient formalism. These equations can be used as an efficient alternative to the standard tensor form. As a simple application we solve the Bach equations for pp-wave and static spherically symmetric spacetimes.
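For reference, the field equations in question set the Bach tensor to zero. In four dimensions it is commonly written in terms of the Weyl tensor W and the Ricci tensor R as below (sign and normalization conventions vary between authors, so treat this as one common convention rather than the paper's own):

```latex
B_{ab} \;=\; \nabla^{c}\nabla^{d} W_{acbd} \;+\; \tfrac{1}{2}\, R^{cd}\, W_{acbd},
\qquad B_{ab} = 0 \quad \text{(the Bach equations)}
```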
Are the alleged remains of Johann Sebastian Bach authentic?
Zegers, Richard H C; Maas, Mario; Koopman, A Ton G; Maat, George J R
2009-02-16
A skeleton alleged to be that of Johann Sebastian Bach (1685-1750) was exhumed from a graveyard in Leipzig, Germany, in 1894, but its authenticity is not established. In 1895, anatomist Wilhelm His concluded from his examination of the skeleton and reconstruction of the face that it most likely belonged to Bach. In 1949, surgeon Wolfgang Rosenthal noticed exostoses on the skeleton and on x-rays of 11 living organists and proposed a condition, Organistenkrankheit, which he interpreted as evidence that the skeleton was Bach's. However, our critical assessment of the remains analysis raises doubts: the localisation of the grave was dubious, and the methods used by His to reconstruct the face are controversial. Also, our study of the pelvic x-rays of 12 living professional organists failed to find evidence for the existence of Organistenkrankheit. We believe it is unlikely that the skeleton is that of Bach; techniques such as DNA analysis might help resolve the question but, to date, church authorities have not approved their use on the skeleton.
The Bach-Lanczos Lagrangian in matrix relativity
International Nuclear Information System (INIS)
Maluf, J.W.
1987-01-01
The author examines the generalisation of the Bach-Lanczos Lagrangian in matrix relativity, where it is no longer a topological invariant, and finds that for certain structures of the matrix affine connection a Yang-Mills type Lagrangian is obtained. Thus the possibility is considered of interpreting non-Abelian gauge fields as arising from an otherwise topological invariant. (author)
The normal conformal Cartan connection and the Bach tensor
International Nuclear Information System (INIS)
Korzynski, Mikolaj; Lewandowski, Jerzy
2003-01-01
The goal of this paper is to express the Bach tensor of a four-dimensional conformal geometry of an arbitrary signature by the Cartan normal conformal (CNC) connection. We show that the Bach tensor can be identified with the Yang-Mills current of the connection. It follows from that result that a conformal geometry whose CNC connection is reducible in an appropriate way has a degenerate Bach tensor. As an example we study the case of a CNC connection which admits a twisting covariantly constant twistor field. The class of conformal geometries with this property is known to be given by the Fefferman metric tensors. We use our result to calculate the Bach tensor of an arbitrary Fefferman metric and show that it is proportional to the tensorial square of the four-fold eigenvector of the Weyl tensor. Finally, we solve the Yang-Mills equations imposed on the CNC connection for all the homogeneous Fefferman metrics. The only solution is the Nurowski-Plebanski metric.
IV Bach conference on radiation chemistry. Abstracts of reports
International Nuclear Information System (INIS)
2005-01-01
The IV Bach conference on radiation chemistry was conducted in the framework of the conference "Physico-chemical foundations of new technologies of the XXI century". Problems of the radiolysis of modern polymeric materials, post-irradiation examination of different organic and inorganic compounds treated with ionizing radiation, the equipment used for these examinations, and applications of the data obtained in this research were represented in the reports.
Bach2 Controls Homeostasis of Eosinophils by Restricting the Type-2 Helper Function of T Cells.
Sato, Yuki; Kato, Hiroki; Ebina-Shibuya, Risa; Itoh-Nakadai, Ari; Okuyama, Ryuhei; Igarashi, Kazuhiko
2017-03-01
Bach2 is a transcription factor which represses its target genes and plays important roles in the differentiation of B and T lymphoid cells. Bach2-deficient (KO) mice develop severe pulmonary alveolar proteinosis, which is associated with increased numbers of granulocytes and T cells. Bach2 is essential for the regulation of T cells, but its role in the regulation of granulocytes is not clear. Here, we observed increased numbers of eosinophils but not neutrophils in the bone marrow, spleen, peripheral blood, and bronchoalveolar lavage fluids of Bach2 KO mice compared with those of wild-type (WT) mice. Upon co-transplantation of the bone marrow cells from CD45.2 Bach2 KO and CD45.1/CD45.2 double-positive WT mice to irradiated WT CD45.1/CD45.2 mice, the reconstituted numbers of eosinophils were similar between Bach2 KO and WT cells. These results showed that the deficiency of Bach2 in eosinophils did not directly drive the differentiation of eosinophils. To investigate the effect of Bach2 KO CD4+ T cells upon eosinophils, we analyzed Rag2/Bach2-double deficient (dKO) mice which lack lymphocytes including CD4+ T cells. Rag2/Bach2 dKO mice did not show any increase in the numbers of eosinophils. Importantly, Bach2 KO mice showed an increase of interleukin-5 (Il-5) in the sera compared with WT mice. These results suggest that up-regulated functions of CD4+ T cells including secretion of Il-5 resulted in proliferation and/or migration to peripheral tissues of eosinophils in Bach2 KO mice. We propose that Bach2 controls homeostasis of eosinophils via restricting the production of Il-5 in CD4+ T cells.
Extremal Kähler metrics and Bach-Merkulov equations
Koca, Caner
2013-08-01
In this paper, we study a coupled system of equations on oriented compact 4-manifolds which we call the Bach-Merkulov equations. These equations can be thought of as the conformally invariant version of the classical Einstein-Maxwell equations. Inspired by the work of C. LeBrun on Einstein-Maxwell equations on compact Kähler surfaces, we give a variational characterization of solutions to Bach-Merkulov equations as critical points of the Weyl functional. We also show that extremal Kähler metrics are solutions to these equations, although, contrary to the Einstein-Maxwell analogue, they are not necessarily minimizers of the Weyl functional. We illustrate this phenomenon by studying the Calabi action on Hirzebruch surfaces.
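For orientation, the Bach tensor underlying both the Bach-Merkulov equations and the variational picture described above can be written, in four dimensions, in its standard form (notation ours, not taken from the paper):

```latex
B_{ab} \;=\; \nabla^{c}\nabla^{d} W_{acbd} \;+\; \tfrac{1}{2}\, R^{cd} W_{acbd},
\qquad
\mathcal{W}(g) \;=\; \int_{M} \lvert W_{g} \rvert^{2} \, d\mu_{g},
```

where W is the Weyl curvature tensor and R the Ricci tensor. In dimension four the Bach tensor is conformally invariant and arises as the gradient of the Weyl functional W(g), so Bach-flat metrics (B = 0) are precisely its critical points.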
Directory of Open Access Journals (Sweden)
Noraima Contrera Vega
2012-04-01
A therapeutic intervention was performed in 15 patients with chronic alcoholism, belonging to the Working Basic Group No. 3 of the "Armando García Aspurú" Teaching Polyclinic in Santiago de Cuba, from June 2008 to February 2009, in order to evaluate the effectiveness of Bach flower therapy in the treatment of these patients; a general therapeutic regimen was applied first, followed by an individualized one. The arithmetic mean was used to quantify the data, the standard deviation to estimate variability, and the percentage as a summary measure, with a 95% confidence interval. The favorable course observed in most patients demonstrated the usefulness of this therapy in reducing psychosomatic manifestations of anxiety, depression and insomnia, thereby improving the quality of life of both patients and their families.
Bach2 is involved in neuronal differentiation of N1E-115 neuroblastoma cells
International Nuclear Information System (INIS)
Shim, Ki Shuk; Rosner, Margit; Freilinger, Angelika; Lubec, Gert; Hengstschlaeger, Markus
2006-01-01
Bach1 and Bach2 are evolutionarily related members of the BTB-basic region leucine zipper transcription factor family. We found that Bach2 downregulates cell proliferation of N1E-115 cells and negatively affects their potential to differentiate. Nuclear localization of the cyclin-dependent kinase inhibitor p21 is known to arrest cell cycle progression, and cytoplasmic p21 has been shown to promote neuronal differentiation of N1E-115 cells. We found that ectopic Bach2 causes upregulation of p21 expression in both the nucleus and the cytoplasm in undifferentiated N1E-115 cells. In differentiated cells, Bach2 specifically triggers upregulation of cytoplasmic p21. Our data suggest that Bach2 expression could represent a switch during the process of neuronal differentiation. Bach2 is not expressed in neuronal precursor cells, where it would have negative effects on proliferation and differentiation. In differentiated neuronal cells, Bach2 expression is upregulated, which could allow Bach2 to function as a gatekeeper of the differentiated status.
[Johann Sebastian Bach: life, oeuvre and his significance for cardiology].
Trappe, H-J
2014-12-01
Johann Sebastian Bach was born in 1685 in Eisenach. By the time he turned 10, Bach found himself an orphan after the death of both of his parents. After working in Weimar, Arnstadt, Mühlhausen, and Köthen, Bach signed a contract to become the new organist and teacher at St. Thomas Church in Leipzig in 1723 and stayed there until his death. In 1749, Bach's sight began to fail; he tried to fix it by having surgery the following year, but the operation left him completely blind. A few months later, Bach suffered a stroke. He died in Leipzig on July 28, 1750. In recent years, the question has been raised whether music of different styles can directly alter cardiovascular parameters, particularly using Bach's music. Several studies have shown that cardiovascular parameters (blood pressure, heart rate) are influenced by music. Listening to classical music (Bach) has positive effects, as does music by Italian composers. In contrast, "modern" music, vocal music and songs had no positive effects on cardiovascular parameters. In addition, a recent animal study showed positive effects of Bach's music on cardiovascular parameters and behavioural patterns. Recent studies show clearly that music influences cardiovascular parameters, and it is evident that classical music (Bach) has beneficial effects in both humans and animals. The music of the "Thomaskantor" may therefore improve both quality of life and cardiovascular health. © Georg Thieme Verlag KG Stuttgart · New York.
Dynamic changes in Bach1 expression in the kidney of rhabdomyolysis-associated acute kidney injury.
Directory of Open Access Journals (Sweden)
Masakazu Yamaoka
Free heme, a pro-oxidant released from myoglobin, is thought to contribute to the pathogenesis of rhabdomyolysis-associated acute kidney injury (RM-AKI), because renal overexpression of heme oxygenase-1 (HO-1), the rate-limiting enzyme in heme catabolism, confers protection against RM-AKI. BTB and CNC homology 1 (Bach1) is a heme-responsive transcription factor that represses HO-1. Here, we examined the changes over time in the gene expression of Bach1, HO-1, and δ-aminolevulinate synthase (ALAS1, a heme biosynthetic enzyme) in the rat kidney, using an RM-AKI model induced by the injection of 50% glycerol (10 mL/kg body weight) into bilateral limbs. We also examined the protein expression of Bach1 in the nucleus and cytosol, and of HO-1, in the rat kidney. Glycerol treatment induced significant elevation of serum creatine kinase and aspartate aminotransferase levels, followed by marked elevation of serum blood urea nitrogen and creatinine levels, reflecting serious damage to the renal tubules. Following glycerol treatment, HO-1 mRNA and protein levels were significantly up-regulated, while ALAS1 mRNA expression was down-regulated, suggesting an increase in the free renal heme concentration. The Bach1 mRNA level was drastically increased 3 h after glycerol treatment, and the increased level was maintained for 12 h. Nuclear Bach1 protein levels were significantly decreased 3 h after treatment. Conversely, cytosolic Bach1 protein levels abruptly increased after 6 h. In conclusion, we demonstrate dynamic changes in Bach1 expression in a rat model of RM-AKI. Our findings suggest that the increase in Bach1 mRNA and cytosolic Bach1 protein expression may reflect de novo Bach1 protein synthesis to compensate for the depletion of nuclear Bach1 protein caused by the induction of HO-1 by free heme.
Respek vir die liturgiese teks: J. Calvyn en J.S. Bach | Schulze | Acta ...
African Journals Online (AJOL)
Despite obvious differences in historical and theological terms between Calvin and Bach, both share a respect for the priority of the Biblical text in worship. The anomaly between Calvin's engagement in the versification of the Psalter and Bach's preference for the literal (rather than a versified) use of the Biblical ...
Jesu, meine Freude BWV 227 van Johann Sebastian Bach : een praedicatio sonora
Eikelboom, A.
2007-01-01
In the oeuvre of Johann Sebastian Bach the motets occupy only a modest position. According to Bach's early biographers he must have written many motets, but only a few have been preserved. Among these motets, Jesu, meine Freude BWV 227 occupies a special position. It is the only one which combines a
Eric Bach : ilustrador de Luis de Góngora
Directory of Open Access Journals (Sweden)
María Dolores Antigüedad del Castillo Olivares
1991-01-01
The Dutch painter Eric Bach [Erfurt (Germany), 1911] showed his work as an illustrator during 1991 in two exhibitions held in Córdoba and Madrid. The artist, who resides alternately in Munich and Barcelona, exhibited the illustrations he made for the German edition of Los veinticuatro sonetos amorosos, heroicos y fúnebres by don Luis de Góngora, an edition commissioned by Mr. Rüdiger Kampmann of Munich.
Seydel, Elena; Turley, Michael; Becht, Michael; Heckmann, Tobias
2013-04-01
On July 3rd, 2010, an extreme precipitation event occurred in the municipality of Wachtberg at the southern urban fringe of the Federal City of Bonn. The 30-min intensity of the torrential rain was estimated to represent a 1000-year event according to the KOSTRA dataset (German Meteorological Service, DWD). Rapid overland flow and the exceedance of the design values of the sewerage system caused a flash flood in the Mehlemer Bach catchment. Roughly 400 buildings were affected in its lower, urbanized part, and it took over two weeks to clear the damage. Similar flash flood events have been recorded in the same catchment repeatedly since the year 1693, three times in the last 80 years alone. The fact that, in the case of the 2010 event, the official weather warning was released almost simultaneously with the beginning of the downpour highlights the urgent need for preparative action in the longer term. Flash flood risk mitigation relies, among others, on risk awareness and preparedness of residents. One aim of this study is to analyse the current risk communication in the drainage area of the Mehlemer Bach through questionnaires and expert interviews, which will provide a good basis for an open dialogue between residents and the authorities. There is an urgent need for practical and accessible advice, and it must be ensured that the resources and capabilities of the individuals involved are taken into consideration. In addition, we compare a hazard map of the area to mental maps drawn by the local population in order to assess their risk perception.
Probability and Style in the Chorales of J. S. Bach
Directory of Open Access Journals (Sweden)
Matthew Woolhouse
2015-12-01
This paper discusses de Clercq's (2015) contribution to our understanding of the relationship between scale degree and cadence type within Bach chorales from the perspective of style and probability. De Clercq is applauded for the diligence of this research and for attempting to synthesize the findings into a practical, working model of benefit to music-theory students and educators. A literal interpretation of a premise underpinning his model, namely that more common musical events are more indicative of a style, is, however, found to be inconsistent. A test is described in which university students enrolled in a second-level harmony class were presented with pairs of cadences. The cadences were manipulated in various ways, primarily to investigate whether the inclusion of certain figurations would cause a perfect authentic cadence, the most ubiquitous cadence within Bach chorales, to be considered less stylistic than a never-occurring cadence. This proved to be the case, demonstrating the importance of figuration over scale degree and cadence for the accomplishment of style. De Clercq's model is further discussed with respect to probabilistic models of music and in relation to proscriptive approaches to teaching harmony.
Bach1 gene ablation reduces steatohepatitis in mouse MCD diet model.
Inoue, Motoki; Tazuma, Susumu; Kanno, Keishi; Hyogo, Hideyuki; Igarashi, Kazuhiko; Chayama, Kazuaki
2011-03-01
Bach1 is a transcriptional repressor of heme oxygenase-1 (HO-1, a.k.a. HSP-32), an inducible enzyme with anti-oxidative and anti-inflammatory properties demonstrated in various models of organ injury. Since oxidative stress plays a pivotal role in the pathogenesis of nonalcoholic steatohepatitis (NASH), HO-1 induction would be expected to prevent the development of NASH. In this study, we investigated the influence of Bach1 ablation in mice on the progression of NASH in the methionine-choline-deficient (MCD) diet model. Bach1 ablation resulted in significant induction of HO-1 mRNA and activity in the liver. When fed the MCD diet, Bach1(-/-) mice exhibited negligible hepatic steatosis, compared to pronounced steatohepatitis with a 6-fold increase in hepatic triglyceride content in wild-type mice. Whereas feeding of the MCD diet decreased mRNA expression of peroxisome proliferator-activated receptor (PPAR) α and microsomal triglyceride transfer protein (MTP) in wild-type mice, there was no change in Bach1(-/-) mice. In addition, the hepatic concentration of malondialdehyde (MDA), a biomarker of oxidative stress, as well as plasma alanine aminotransferase (ALT), was significantly lower in Bach1(-/-) mice. These findings suggest that Bach1 ablation exerts a hepatoprotective effect against steatohepatitis, presumably via HO-1 induction, and may be a potential therapeutic target.
The Performance of Bach: Study of Rhythmic Timing by Skilled Musicians.
Johnson, Christopher M.
1999-01-01
Analyzes 15 performances of "Bach's Suite Number 3 for Violoncello solo, Bourree Number 1" and determines what patterns of rhythmic variation (rubato) were used by soloists. Indicates that the soloists demonstrated four identifiable and similar trends in the performances. (CMK)
Pärt, Arvo: "Summa". Collage sur BACH. Fratres / Patric Wiklacz
Wiklacz, Patric
1997-01-01
On the new recording "Pärt, Arvo: "Summa". Collage sur BACH. Fratres. Cantus in Memoriam Benjamin Britten. Summa. Festina Lente. Tabula rasa. Tapiola Sinfonietta / Jean-Jacques Kantorow". BIS-CD-834. 62:30 DDD
Bach2 represses the AP-1-driven induction of interleukin-2 gene transcription in CD4+ T cells
Jang, Eunkyeong; Lee, Hye Rim; Lee, Geon Hee; Oh, Ah-Reum; Cha, Ji-Young; Igarashi, Kazuhiko; Youn, Jeehee
2017-01-01
The transcription repressor Bach2 has been proposed as a regulator of T cell quiescence, but the underlying mechanism is not fully understood. Given the importance of interleukin-2 in T cell activation, we investigated whether Bach2 is a component of the network of factors that regulates interleukin-2 expression. In primary and transformed CD4+ T cells, Bach2 overexpression counteracted T cell receptor/CD28- or PMA/ionomycin-driven induction of interleukin-2 expression, and silencing of Bach2...
The results of treatment for thyrotoxicosis at Bach Mai hospital
International Nuclear Information System (INIS)
Phan Sy An
2002-01-01
The authors evaluated the results of treating hyperthyroidism with 131I. Patient selection for the treatment is based on clinical features and laboratory test results such as thyroid uptake, scintigraphy and RIA determinations of thyroid hormones. The average dose is 6.2±1.1 mCi (233.1±40.7 MBq). The average number of doses is 1.3 per patient. The results are as follows: euthyroid status after 4 years of follow-up from the 131I dose administration, 72.3%; persistent or recurrent hyperthyroidism, 20%; hypothyroid complication appearing within 6 years after administration of the 131I dose, 14%, i.e. a cumulative hypothyroid rate of 2.3% per year. Serious complications were not observed in any patient. Hyperthyroidism is a common health problem in Vietnam (1). In the past, only antithyroid drugs and surgery were used. 131I was first introduced to Vietnam in the Nuclear Medicine Department of Bach Mai hospital in 1971 and thereafter widely applied in the country. (Author)
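The unit conversion and the cumulative rate quoted in the abstract can be sanity-checked with a short calculation (a sketch with variable names of our own choosing; the input values are taken from the text above):

```python
# Check the mCi-to-MBq conversion and the cumulative hypothyroidism
# rate quoted in the abstract.

MBQ_PER_MCI = 37.0  # 1 mCi = 37 MBq by definition

dose_mci = 6.2
dose_mbq = dose_mci * MBQ_PER_MCI
print(f"{dose_mci} mCi = {dose_mbq:.1f} MBq")  # 229.4 MBq, close to the quoted 233.1

hypothyroid_total_pct = 14.0  # % of patients, observed over 6 years
years = 6
rate_per_year = hypothyroid_total_pct / years
print(f"cumulative rate ~ {rate_per_year:.1f} % per year")  # ~ 2.3 % per year
```

The small discrepancy between 229.4 and the quoted 233.1 MBq suggests the mean was converted from a slightly less rounded mCi value.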
Optical band gap energy and Urbach tail of CdS:Pb2+ thin films
Energy Technology Data Exchange (ETDEWEB)
Chavez, M.; Juarez, H.; Pacio, M. [Universidad Autonoma de Puebla, Instituto de Ciencias, Centro de Investigacion en Dispositivos Semiconductores, Av. 14 Sur, Col. Jardines de San Manuel, Ciudad Universitaria, Puebla, Pue. (Mexico); Gutierrez, R.; Chaltel, L.; Zamora, M.; Portillo, O. [Universidad Autonoma de Puebla, Facultad de Ciencias Quimicas, Laboratorio de Materiales, Apdo. Postal 1067, 72001 Puebla, Pue. (Mexico); Mathew, X., E-mail: osporti@yahoo.mx [UNAM, Instituto de Energias Renovables, Temixco, Morelos (Mexico)
2016-11-01
PbS-doped CdS nanomaterials were successfully synthesized using chemical bath deposition. Transmittance measurements were used to estimate the optical band gap energy. Tailing in the band gap was observed and found to obey the Urbach rule. X-ray diffraction shows that the size of the crystallites is in the ∼12 nm to 33 nm range. The peaks belonging to the primary phases are identified at 2θ = 26.5° and 2θ = 26.00°, corresponding to CdS and PbS respectively. A shift in the maximum-intensity peak from 2θ = 26.4° to 28.2° is a clear indication of a possible transformation from the cubic to the hexagonal phase. Peaks at 2θ = 13.57° and 15.9° correspond to lead perchlorate thiourea. The effects of film thickness and doping on the band gap energy and the width of the tail were investigated. Increasing doping gives rise to a shift in the optical absorption edge of ∼0.4 eV. (Author)
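For orientation, the Urbach rule referred to above states that the absorption coefficient below the band edge decays exponentially with photon energy (standard textbook form; the symbols are ours, not taken from the paper):

```latex
\alpha(E) = \alpha_0 \exp\!\left(\frac{E - E_0}{E_U}\right)
```

Here α is the absorption coefficient, E the photon energy, α₀ and E₀ material constants, and E_U the Urbach energy characterizing the width of the band tail. A plot of ln α versus E is therefore linear in the tail region, with slope 1/E_U.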
Ohta, Ryo; Tanaka, Nobuhiro; Nakanishi, Kazuyoshi; Kamei, Naosuke; Nakamae, Toshio; Izumi, Bunichiro; Fujioka, Yuki; Ochi, Mitsuo
2012-09-01
Intervertebral disc degeneration is considered to be a major cause of low back pain. Oxidative stress has been shown to be an important factor in degenerative diseases such as osteoarthritis and is considered a cause of intervertebral disc degeneration. The purpose of this study was to clarify the correlation between oxidative stress and intervertebral disc degeneration using Broad complex-Tramtrack-Bric-a-brac and cap'n'collar homology 1-deficient (Bach1-/-) mice, which highly express heme oxygenase-1 (HO-1); HO-1 protects cells from oxidative stress. As an age-related model, caudal discs of 12-week-old and 1-year-old mice were evaluated; in each group and at each time point, 5 mice were evaluated (a total of 20 mice and 20 discs). As an annulus puncture model, C9-C10 caudal discs of 12-week-old Bach1-/- and wild-type mice were punctured using a 29-gauge needle; in each group and at each time point, 5 mice were evaluated (a total of 60 mice and 60 discs). The progress of disc degeneration was evaluated pre-puncture and at 1, 2, 4, 8 and 12 weeks post-puncture. Radiographic, histologic and immunohistologic analyses were performed to compare Bach1-/- and wild-type mice. In the age-related model, there were no significant differences between Bach1-/- and wild-type mice radiologically or histologically. In the annulus puncture model, however, histological scoring revealed significant differences at 8 and 12 weeks post-puncture. The number of HO-1-positive cells was significantly greater in Bach1-/- mice at every time point. The apoptosis rate was significantly lower at 1 and 2 weeks post-puncture in Bach1-/- mice. Preventing oxidative stress may thus slow the degenerative process of the intervertebral disc after puncture by reducing the number of apoptotic cells. High HO-1 expression may also inhibit oxidative stress and delay the process of intervertebral disc degeneration.
Spherically Symmetric Solutions of the Einstein-Bach Equations and a Consistent Spin-2 Field Theory
International Nuclear Information System (INIS)
Janda, A.
2006-01-01
We briefly present a relationship between General Relativity coupled to certain spin-0 and spin-2 field theories and higher-derivative metric theories of gravity. In a special case, described by the Einstein-Bach equations, the spin-0 field drops out of the theory and we obtain a consistent spin-2 field theory interacting gravitationally, which overcomes a well-known inconsistency of the theory of a linear spin-2 field coupled to Einstein's gravity. We then discuss basic properties of static spherically symmetric solutions of the Einstein-Bach equations. (author)
Zhu, Zhengwei; Yang, Chao; Wen, Leilei; Liu, Lu; Zuo, Xianbo; Zhou, Fusheng; Gao, Jinping; Zheng, Xiaodong; Shi, Yinjuan; Zhu, Caihong; Liang, Bo; Yin, Xianyong; Wang, Wenjun; Cheng, Hui; Shen, Songke; Tang, Xianfa; Tang, Huayang; Sun, Liangdan; Zhang, Anping; Yang, Sen; Cui, Yong; Zhang, Xuejun; Sheng, Yujun
2018-04-01
This study aimed to explore the effect of Bach2 on B cells in systemic lupus erythematosus (SLE), as well as the underlying mechanisms. Expression of Bach2, phosphorylated Bach2 (p-Bach2), Akt, p-Akt and BCR-ABL (p210) in B cells isolated from SLE patients and healthy persons was assessed by Western blot. Immunofluorescence staining was performed to assess the localization of Bach2 in B cells. Enzyme-linked immunosorbent assay (ELISA) was employed to detect IgG produced by B cells. Cell counting kit-8 (CCK-8) and Annexin-V FITC/PI double staining assays were adopted to evaluate proliferation and apoptosis of B cells, respectively. Compared to healthy controls, Bach2, p-Akt and p210 were significantly decreased, while nuclear translocation of Bach2 and levels of IgG, CD40 and CD86 were markedly up-regulated in B cells from SLE patients. Bach2 significantly inhibited the proliferation and promoted the apoptosis of B cells from SLE patients, whereas BCR-ABL dramatically reversed the cell changes induced by Bach2. BCR-ABL also inhibited nuclear translocation of Bach2 in B cells from SLE patients. Furthermore, LY294002 treatment had no effect on the decreased expression of Bach2 induced by BCR-ABL, but it significantly eliminated BCR-ABL-induced phosphorylation of Bach2 and restored the reduced nuclear translocation of Bach2 in B cells from SLE patients. Bach2 may play a suppressive role in B cells from SLE patients, and BCR-ABL may inhibit the nuclear translocation of Bach2 via serine phosphorylation through the PI3K pathway. Copyright © 2018 Elsevier Inc. All rights reserved.
Das Orgelbüchlein von Johann Sebastian Bach : Strukturen und innere Ordnung
Pachlatko, F.M.
2014-01-01
The 'Little Organ Book' (Orgelbüchlein, OB) is the first major cyclical work of Johann Sebastian Bach, and was probably written between 1703 and 1720. Hitherto it has been assumed that the OB was unfinished: of the 164 chorale titles listed in the autograph, only 46 chorales were composed. As a preliminary
Fra Bach til Beatles på tre måneder!
DEFF Research Database (Denmark)
Bonde, Anders
2006-01-01
From Bach to Beatles in three months! Didactic considerations and experiences from a new one-semester survey course. From September 2005, the music programme at Aalborg University, as part of the implementation of a new two-subject bachelor structure, has developed a new...
Johnson, Christopher M.; Madsen, Clifford K.; Geringer, John M.
2012-01-01
The purpose of this study was to investigate how instruction in the use of rhythmic nuances influences subsequent timings of a musical performance. Volunteer participants were asked to listen to and alter a performance of an excerpt from Mozart's "Concerto for Horn and Orchestra No. 2" and Bach's "Suite Number 3 for Violoncello solo, Bourree…
Bach music in preterm infants: no 'Mozart effect' on resting energy expenditure.
Keidar, H Rosenfeld; Mandel, D; Mimouni, F B; Lubetzky, R
2014-02-01
To study whether Johann Sebastian Bach's music has a lowering effect on resting energy expenditure (REE) similar to that of Wolfgang Amadeus Mozart's music. Prospective, randomized clinical trial with cross-over in 12 healthy, appropriate-weight-for-gestational-age (GA), gavage-fed, metabolically stable preterm infants. Infants were randomized to a 30-min period of either Mozart, Bach or no music over 3 consecutive days. REE was measured every minute by indirect calorimetry. Three REE measurements were performed in each of the 12 infants at age 20±15.8 days. Mean GA was 30.17±2.44 weeks and mean birthweight was 1246±239 g. REE was similar during the first 10 min of all three randomization periods. During the next 10-min period, infants exposed to music by Mozart had a trend toward lower REE than when not exposed to music; this trend became significant during the third 10-min period. In contrast, music by Bach or no music did not significantly affect REE during the whole study. On average, the effect size of Mozart's music upon REE was a reduction of 7.7% from baseline. Mozart's music significantly lowers REE in preterm infants, whereas Bach's music has no similar effect. We speculate that the 'Mozart effect' must be taken into account when incorporating music in the therapy of preterm infants, as not all types of music may have similar effects upon REE and growth.
Pärt: Collage sur B-A-C-H für Kammerorschester / Hans-Christian Dadelsen
Dadelsen, Hans-Christian
1993-01-01
On the new recording "Pärt: Collage sur B-A-C-H für Kammerorchester, Summa (1991) für Streichorchester, Fratres, Sinfonie Nr. 2, Festina lente, Wenn Bach Bienen gezüchtet hätte, Credo für Klavier, Chor und Orchester. Philharmonia Orchestra and Chorus, Neeme Järvi". Chandos/Koch CD 9134 (playing time: 63'02")
Igarashi, Kazuhiko; Watanabe-Matsui, Miki
2014-04-01
The connection between gene regulation and metabolism is an old issue that warrants revisiting in order to understand both normal as well as pathogenic processes in higher eukaryotes. Metabolites affect the gene expression by either binding to transcription factors or serving as donors for post-translational modification, such as that involving acetylation and methylation. The focus of this review is heme, a prosthetic group of proteins that includes hemoglobin and cytochromes. Heme has been shown to bind to several transcription factors, including Bach1 and Bach2, in higher eukaryotes. Heme inhibits the transcriptional repressor activity of Bach1, resulting in the derepression of its target genes, such as globin in erythroid cells and heme oxygenase-1 in diverse cell types. Since Bach2 is important for class switch recombination and somatic hypermutation of immunoglobulin genes as well as regulatory and effector T cell differentiation and the macrophage function, the heme-Bach2 axis may regulate the immune response as a signaling cascade. We discuss future issues regarding the topic of the iron/heme-gene regulation network based on current understanding of the heme-Bach axis, including the concept of "iron immunology" as the synthesis of the iron metabolism and the immune response.
Warnatz, Hans-Jörg; Schmidt, Dominic; Manke, Thomas; Piccini, Ilaria; Sultan, Marc; Borodina, Tatiana; Balzereit, Daniela; Wruck, Wasco; Soldatov, Alexey; Vingron, Martin; Lehrach, Hans; Yaspo, Marie-Laure
2011-07-01
The regulation of gene expression in response to environmental signals and metabolic imbalances is a key step in maintaining cellular homeostasis. BTB and CNC homology 1 (BACH1) is a heme-binding transcription factor repressing the transcription from a subset of MAF recognition elements at low intracellular heme levels. Upon heme binding, BACH1 is released from the MAF recognition elements, resulting in increased expression of antioxidant response genes. To systematically address the gene regulatory networks involving BACH1, we combined chromatin immunoprecipitation sequencing analysis of BACH1 target genes in HEK 293 cells with knockdown of BACH1 using three independent types of small interfering RNAs, followed by transcriptome profiling using microarrays. The 59 BACH1 target genes identified by chromatin immunoprecipitation sequencing were found to be highly enriched among genes showing expression changes after BACH1 knockdown, demonstrating the impact of BACH1 repression on transcription. In addition to known and new BACH1 targets involved in heme degradation (HMOX1, FTL, FTH1, ME1, and SLC48A1) and redox regulation (GCLC, GCLM, and SLC7A11), we also discovered BACH1 target genes affecting cell cycle and apoptosis pathways (ITPR2, CALM1, SQSTM1, TFE3, EWSR1, CDK6, BCL2L11, and MAFG) as well as subcellular transport processes (CLSTN1, PSAP, MAPT, and vault RNA). The newly identified impact of BACH1 on genes involved in neurodegenerative processes and proliferation provides an interesting basis for future dissection of BACH1-mediated gene repression in neurodegeneration and virus-induced carcinogenesis.
Extended exome sequencing identifies BACH2 as a novel major risk locus for Addison's disease.
Eriksson, D; Bianchi, M; Landegren, N; Nordin, J; Dalin, F; Mathioudaki, A; Eriksson, G N; Hultin-Rosenberg, L; Dahlqvist, J; Zetterqvist, H; Karlsson, Å; Hallgren, Å; Farias, F H G; Murén, E; Ahlgren, K M; Lobell, A; Andersson, G; Tandre, K; Dahlqvist, S R; Söderkvist, P; Rönnblom, L; Hulting, A-L; Wahlberg, J; Ekwall, O; Dahlqvist, P; Meadows, J R S; Bensing, S; Lindblad-Toh, K; Kämpe, O; Pielberg, G R
2016-12-01
Autoimmune disease is one of the leading causes of morbidity and mortality worldwide. In Addison's disease, the adrenal glands are targeted by destructive autoimmunity. Despite being the most common cause of primary adrenal failure, little is known about its aetiology. To understand the genetic background of Addison's disease, we utilized the extensively characterized patients of the Swedish Addison Registry. We developed an extended exome capture array comprising a selected set of 1853 genes and their potential regulatory elements, for the purpose of sequencing 479 patients with Addison's disease and 1394 controls. We identified BACH2 (rs62408233-A, OR = 2.01 (1.71-2.37), P = 1.66 × 10⁻¹⁵, MAF 0.46/0.29 in cases/controls) as a novel gene associated with Addison's disease development. We also confirmed the previously known associations with the HLA complex. Whilst BACH2 has been previously reported to associate with organ-specific autoimmune diseases co-inherited with Addison's disease, we have identified BACH2 as a major risk locus in Addison's disease, independent of concomitant autoimmune diseases. Our results may enable future research towards preventive disease treatment. © 2016 The Authors. Journal of Internal Medicine published by John Wiley & Sons Ltd on behalf of Association for Publication of The Journal of Internal Medicine.
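As a rough consistency check, the allelic odds ratio implied by the quoted minor-allele frequencies (MAF 0.46 in cases, 0.29 in controls) can be computed directly; this is a back-of-envelope sketch, not the study's own statistical model:

```python
# Approximate the allelic odds ratio from the minor-allele frequencies
# reported in the abstract (0.46 in cases vs 0.29 in controls).

maf_cases = 0.46
maf_controls = 0.29

odds_cases = maf_cases / (1 - maf_cases)          # odds of carrying the risk allele, cases
odds_controls = maf_controls / (1 - maf_controls)  # odds of carrying the risk allele, controls
odds_ratio = odds_cases / odds_controls

print(f"implied allelic OR ~ {odds_ratio:.2f}")  # ~ 2.09, in line with the reported OR = 2.01
```

The small difference from the reported OR = 2.01 is expected, since the published estimate comes from the study's full association model rather than raw allele counts.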
Surface and Subsurface Meltwater Ponding and Refreezing on the Bach Ice Shelf, Antarctic Peninsula
Willis, I.; Haggard, E.; Benedek, C. L.; MacAyeal, D. R.; Banwell, A. F.
2017-12-01
There is growing concern about the stability and fate of Antarctic ice shelves, as four major ice shelves on the Antarctic Peninsula have completely disintegrated since the 1950s. Their collapse has been linked to the southward movement of the −9 °C mean annual temperature isotherm. The proximal causes of ice shelf instability are not fully known, but an increase in surface melting leading to water ponding and ice flexure, fracture and calving has been implicated. Close to the recently collapsed Wilkins Ice Shelf, the Bach Ice Shelf (72°S 72°W) may be at risk of break-up in the near future. Here, we document the changing surface hydrology of the Bach Ice Shelf between 2001 and 2017 using Landsat 7 and 8 imagery. Extensive surface water is identified across the Bach Ice Shelf and its tributary glaciers. Two types of drainage system are observed: drainage into firn via simple stream networks, and drainage into the ocean via more complex networks. There are differences between the surface hydrology of the ice shelf and the tributary glaciers, as well as variations within and between summer seasons linked to surface air temperature fluctuations. We also document the changing subsurface hydrology of the ice shelf between 2014 and 2017 using Sentinel-1 A/B SAR imagery. Forty-five subsurface features are identified and analysed for their patterns and temporal evolution. Fourteen of the features show characteristics similar to previously identified buried lakes, and some occur in areas associated with surface lakes in previous years. The buried lakes show seasonal variability in area and surface backscatter, which varies with surface air temperature, and are consistent with the presence, enlargement and contraction of liquid water bodies. Buried lakes are an overlooked source of water loading on ice shelves, which may contribute to ice shelf flexure and potential fracture.
Bach: The highest expression of musical genes.
Directory of Open Access Journals (Sweden)
Alfredo Jácome Roca
2001-04-01
Full Text Available
The father of modern music, Johann Sebastian Bach, inaugurated the millennium with harmonious explosions across the planet, as 250 years since the end of his life cycle were commemorated.
As we have already indicated, the presidency of the Academy has sought to interleave its ordinary sessions with others of a cultural kind, and it took advantage of this important anniversary to make its contribution, as did other important institutions of the country.
For this reason, the academician Roso Alfredo Cala Hederich treated us to a pleasant anthology evening, in which he combined the playing of musical excerpts that, in his view, represent the best of the music of the German genius (at times serving as a "melopea", or background music, for his oral presentation) with extraordinary videos of landscapes and buildings through which passed the life of one of the three greatest composers in the world, together with Beethoven and, later, Mozart.
Bach was the highest musical expression of a family of many members whose artisanal trade was to play, teach, create and perfect music, a family that flourished "in crescendo" from the first Bach, reached its climax with J. Sebastian, and then gradually declined to merely good, though no longer extraordinary, quality. Bach, aware of the importance of his family in that art (which also represented the "modus vivendi" of them all), wrote a chronicle of 53 relatives, all male, with information on their musical activity, accompanied by a genealogical tree that unfortunately was lost...
J. S. Bach: Composition technique as explicatio textus
Directory of Open Access Journals (Sweden)
González Valle, José V.
2002-12-01
Full Text Available During the 17th and 18th centuries, musical theorists, especially in Protestant circles, compiled, classified and published diverse constructive procedures (figures) used by renaissance composers and their immediate precursors in order to express sub specie musicae the meaning of texts. In this way, composition techniques became "idiomatic". J. S. Bach's contribution in this field has been investigated and acknowledged by musicologists throughout history. For some time now, evangelical theology has been taking an interest in that particular aspect of Bach's music, as it is obvious that, in order to find an explanation for texts both devout and biblical, it is often useful to analyse artistic expressions of the said texts, given that, going down that path, theology will learn to understand and listen to them in a new light. That is where music, more so than other art forms, acquires special importance.
Effectiveness of Bach flower therapy in patients with chronic alcoholism
Contrera Vega, Noraima; Cedeño Rodríguez, Enriqueta; Vázquez Sánchez, Monserrat
2012-01-01
A therapeutic intervention was carried out in 15 patients with chronic alcoholism belonging to Basic Work Group No. 3 of the "Armando García Aspurú" Teaching Polyclinic in Santiago de Cuba, from June 2008 to February 2009, in order to evaluate the effectiveness of Bach flower therapy in the treatment of these patients, for which a general and then an individual therapeutic scheme was applied. The measures used included the arithmetic mean to quantify the data, the deviatio...
Harusato, Akihito; Naito, Yuji; Takagi, Tomohisa; Uchiyama, Kazuhiko; Mizushima, Katsura; Hirai, Yasuko; Higashimura, Yasuki; Katada, Kazuhiro; Handa, Osamu; Ishikawa, Takeshi; Yagi, Nobuaki; Kokura, Satoshi; Ichikawa, Hiroshi; Muto, Akihiko; Igarashi, Kazuhiko; Yoshikawa, Toshikazu
2013-01-01
BTB and CNC homolog 1 (Bach1) is a transcriptional repressor of heme oxygenase-1 (HO-1), which plays an important role in the protection of cells and tissues against acute and chronic inflammation. However, the role of Bach1 in the gastrointestinal mucosal defense system remains poorly understood. HO-1 supports the suppression of experimental colitis and localizes mainly in macrophages in colonic mucosa. This study was undertaken to elucidate the Bach1/HO-1 system's effects on the pathogenesis of experimental colitis. This study used C57BL/6 (wild-type) and homozygous Bach1-deficient C57BL/6 mice in which colonic damage was induced by the administration of an enema of 2,4,6-trinitrobenzene sulfonic acid (TNBS). Subsequently, they were evaluated macroscopically, histologically, and biochemically. Peritoneal macrophages from the respective mice were isolated and analyzed. Then, wild-type mice were injected with peritoneal macrophages from the respective mice. Acute colitis was induced similarly. TNBS-induced colitis was inhibited in Bach1-deficient mice. TNBS administration increased the expression of HO-1 messenger RNA and protein in colonic mucosa in Bach1-deficient mice. The expression of HO-1 localized mainly in F4/80-immunopositive and CD11b-immunopositive macrophages. Isolated peritoneal macrophages from Bach1-deficient mice highly expressed HO-1 and also manifested M2 macrophage markers, such as Arginase-1, Fizz-1, Ym1, and MRC1. Furthermore, TNBS-induced colitis was inhibited by the transfer of Bach1-deficient macrophages into wild-type mice. Deficiency of Bach1 ameliorated TNBS-induced colitis. Bach1-deficient macrophages played a key role in protection against colitis. Targeting of this mechanism is applicable to cell therapy for human inflammatory bowel disease.
International Nuclear Information System (INIS)
Tahara, Tsuyoshi; Sun Jiying; Igarashi, Kazuhiko; Taketani, Shigeru
2004-01-01
The transcription factor Bach1 forms a heterodimer with small Maf family proteins, and functions as a repressor of the Maf recognition element (MARE) in vivo. To investigate the involvement of Bach1 in the heme-dependent regulation of the expression of the α-globin gene, human erythroleukemia K562 cells were cultured with succinylacetone (SA), a heme biosynthesis inhibitor, and the level of α-globin mRNA was examined. A decrease of α-globin mRNA was observed in SA-treated cells, which was restored by the addition of hemin. The heme-dependent expression of α-globin occurred at the transcriptional level, since the expression of a human α-globin gene promoter-reporter gene containing hypersensitive site-40 (HS-40) was decreased when K562 cells were cultured with SA. Hemin treatment restored the decrease of the promoter activity by SA. The regulation of the HS-40 activity by heme was dependent on the NF-E2/AP-1 (NA) site, which is similar to MARE. The NA site-binding activity of Bach1 in K562 cells increased upon SA treatment, and the increase was diminished by the addition of hemin. The transient expression of Bach1 and of mutated Bach1 lacking CP motifs suppressed the HS-40 activity, and cancellation of the repressor activity by hemin was observed when wild-type Bach1 was expressed. The expression of NF-E2 strengthened the restoration of the Bach1 effect by hemin. Interestingly, nuclear localization of Bach1 increased when cells were treated with SA, while hemin induced the nuclear export of Bach1. These results indicated that heme plays an important role in the induction of α-globin gene expression through disrupting the interaction of Bach1 and the NA site in the HS-40 enhancer in erythroid cells.
WCET Analysis for Preemptive Scheduling
Altmeyer, Sebastian; Gebhard, Gernot
2008-01-01
Hard real-time systems induce strict constraints on the timing of the task set. Validation of these timing constraints is thus a major challenge during the design of such a system. Whereas the derivation of timing guarantees must already be considered complex if tasks are running to completion, it gets even more complex if tasks are scheduled preemptively -- especially due to caches, deployed to improve the average performance. In this paper we propose a new method to compu...
Zwitser, M.S.
2012-01-01
In the 1990s, the discussion about Bach and the Holy Spirit was revived after Renate Steiger's (re)discovery of an emblem by Johann Saubert, in which the alto is described as the voice of the Holy Spirit in the believer's heart. Despite the lively debate and some illuminating
Einarsdottir, Sigrun Lilja
2014-01-01
The purpose of this paper is to demonstrate how amateur choral singers experience collective group support as a method of learning "art music" choral work. Findings are derived from a grounded-theory based, socio-musical case study of an amateur "art music" Bach Choir, in the process of rehearsing and performing the Mass in B…
Johnson, Christopher M.
2000-01-01
Examines the effect of instruction in the use of specific rhythmic nuances on the timings of a musical performance. Forty volunteer upper-division and graduate students performed Johann Sebastian Bach's Suite no. 3 for Violoncello solo, Bouree no. 1, using a computer software program. Discusses the results. (CMK)
Heber, N.
2017-01-01
This PhD dissertation explores poverty and abundance in Bach’s life and sacred cantatas and inquires how he himself might have handled the tension between the material and spiritual aspects. Although his career was increasingly lucrative, Bach did not amass significant wealth. In 1730, he complained
Kuusk, Priit, 1938-
2000-01-01
The Baroque organ of Bach's era has been restored in Leipzig's Thomaskirche. On operas by 20th-century composers being staged at the Flanders Opera, the Hamburg State Opera and Brussels' Théâtre de la Monnaie. On the Huddersfield Contemporary Music Festival taking place in England, 16-26 November. On the "Liszt and Bach" festival in Weimar.
Resende, Margarida Maria de Carvalho; Costa, Francisco Eduardo de Carvalho; Gardona, Rodrigo Galvão Bueno; Araújo, Rochilan Godinho; Mundim, Fiorita Gonzales Lopes; Costa, Maria José de Carvalho
2014-08-01
To evaluate the effect of Bach flower Rescue Remedy on the control of risk factors for cardiovascular disease in rats. A randomized longitudinal experimental study. Eighteen Wistar rats were randomly divided into three groups of six animals each and orogastrically dosed with either 200 μl of water (group A, control), or 100 μl of water and 100 μl of Bach flower remedy (group B), or 200 μl of Bach flower remedy (group C) every 2 days, for 20 days. All animals were fed standard rat chow and water ad libitum. Urine volume, body weight, feces weight, and food intake were measured every 2 days. On day 20, tests of glycemia, hyperuricemia, triglycerides, high-density lipoprotein (HDL) cholesterol, and total cholesterol were performed, and the anatomy and histopathology of the heart, liver and kidneys were evaluated. Data were analyzed using Tukey's test at a significance level of 5%. No significant differences were found in food intake, feces weight, urine volume and uric acid levels between groups. Group C had a significantly lower body weight gain than group A and lower glycemia compared with groups A and B. Groups B and C had significantly higher HDL-cholesterol and lower triglycerides than controls. Animals had mild hepatic steatosis, but no cardiac or renal damage was observed in the three groups. Bach flower Rescue Remedy was effective in controlling glycemia, triglycerides, and HDL-cholesterol and may serve as a strategy for reducing risk factors for cardiovascular disease in rats. This study provides some preliminary "proof of concept" data that Bach Rescue Remedy may exert some biological effects. Copyright © 2014 Elsevier Ltd. All rights reserved.
Solutions of the linearized Bach-Einstein equation in the static spherically symmetric case
International Nuclear Information System (INIS)
Schmidt, H.J.
1985-01-01
The Bach-Einstein equation linearized around Minkowski space-time is completely solved. The set of solutions depends on three parameters; a two-parameter subset of it becomes asymptotically flat. In that region the gravitational potential is of the type φ = −m/r + ε exp(−r/l). Because of the different asymptotic behaviour of both terms, it became necessary to linearize also around the Schwarzschild solution φ = −m/r. The linearized equation resulting in this case is discussed using qualitative methods. The result is that for m = 2l, φ = −m/r + ε r⁻² exp(−r/l) u, where u is some bounded function; m is arbitrary and ε is again small. Further, the relation between the solution of the linearized and the full equation is discussed. (author)
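The two potentials stated in the abstract, set in display form (m the mass parameter, l the length scale, ε a small parameter, u a bounded function):

```latex
% Potential from the linearization around Minkowski space-time
% (the asymptotically flat two-parameter branch):
\[
  \phi(r) = -\frac{m}{r} + \epsilon\, e^{-r/l}
\]
% Potential from the linearization around the Schwarzschild background,
% valid for m = 2l, with u(r) a bounded function:
\[
  \phi(r) = -\frac{m}{r} + \epsilon\, r^{-2}\, e^{-r/l}\, u(r)
\]
```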
Numerical analysis of the flow around the Bach-type Savonius wind turbine
International Nuclear Information System (INIS)
Kacprzak, K; Sobczak, K
2014-01-01
The performance of the Bach-type Savonius wind turbine with a constant cross-section is examined by means of quasi 2D and 3D flow predictions obtained from ANSYS CFX. Simulations were performed in a way allowing for a comparison with the wind tunnel data presented by Kamoji et al. The comparison with the experiment has revealed that 2D solutions give much higher deviation from the reference data than the 3D ones, which guarantee a good solution quality. It can be stated that even simplified (lack of laminar-turbulence transition modelling and a coarser mesh) 3D simulations can yield more accurate results than complex 2D solutions for turbines with a low aspect ratio. The paper also presents a systematic analysis of the most characteristic flow structures which are identified in the rotor.
Bach Flower Remedies for psychological problems and pain: a systematic review
Directory of Open Access Journals (Sweden)
Langley Tessa
2009-05-01
Full Text Available Abstract Background Bach Flower Remedies are thought to help balance emotional state and are commonly recommended by practitioners for psychological problems and pain. We assessed whether Bach Flower Remedies (BFRs) are safe and efficacious for these indications by performing a systematic review of the literature. Methods We searched MEDLINE®, Embase, AMED, and the Cochrane Library from inception until June 2008 and performed a hand-search of references from relevant key articles. For efficacy, we included all prospective studies with a control group. For safety, we also included retrospective, observational studies with more than 30 subjects. Two authors abstracted data and determined risk of bias using a recognised rating system of trial quality. Results Four randomised controlled trials (RCTs) and two additional retrospective, observational studies were identified and included in the review. Three RCTs of BFRs for students with examination anxiety, and one RCT of BFRs for children with attention-deficit hyperactivity disorder (ADHD), showed no overall benefit in comparison to placebo. Due to the number and quality of the studies, the strength of the evidence is low or very low. We did not find any controlled prospective studies regarding the efficacy of BFRs for pain. Only four of the six studies included for safety explicitly reported adverse events. Conclusion Most of the available evidence regarding the efficacy and safety of BFRs has a high risk of bias. We conclude that, based on the reported adverse events in these six trials, BFRs are probably safe. Few controlled prospective trials of BFRs for psychological problems and pain exist. Our analysis of the four controlled trials of BFRs for examination anxiety and ADHD indicates that there is no evidence of benefit compared with a placebo intervention.
Fluvial sediment transport in a glacier-fed high-mountain river (Riffler Bach, Austrian Alps)
Morche, David; Weber, Martin; Faust, Matthias; Schuchardt, Anne; Baewert, Henning
2017-04-01
High-alpine environments have been strongly affected by glacier retreat since the Little Ice Age (LIA). Due to ongoing climate change, the hydrology of proglacial rivers is also influenced. It is expected that the growing proportions of snowmelt and rainfall events will change the runoff characteristics of proglacial rivers. Additionally, the importance of paraglacial sediment sources in recently deglaciating glacier forefields is increasing, while the role of glacial erosion is declining. These complex environmental conditions lead to a complex pattern of fluvial sediment transport in partly glaciated catchments of the European Alps. Under the umbrella of the joint PROSA project, the fluvial sediment transport of the river Riffler Bach (Kaunertal, Tyrol, Austria) was studied in 3 consecutive ablation seasons in order to quantify sediment yields. In June 2012 a probe for water level and an automatic water sampler (AWS) were installed at the outlet of the catchment (20 km²). In order to calculate annual stage-discharge relations by the rating-curve approach, discharge (Q) was repeatedly measured with current meters and by salt dilution. Concurrent to the discharge measurements, bed load was collected using a portable Helley-Smith sampler. Bed load samples were weighed and sieved in the laboratory to gain annual bed load rating curves and grain size distributions. In total 564 (2012: 154, 2013: 209, 2014: 201) water samples were collected and subsequently filtered to quantify suspended sediment concentrations (SSC). Q-SSC relations were calculated for single flood events due to the high variability of suspended sediment transport. The results show a high inter- and intra-annual variability of solid fluvial sediment transport, which can be explained by the characteristics of suspended sediment transport. Only 13 of 22 event-based Q-SSC relations show causal dependency. In 2012, during a period with multiple pluvial-induced peak discharges, most sediment was transported. On the
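The rating-curve approach named in the abstract fits a power law Q = a·(h − h0)^b to paired stage-discharge gaugings, usually by least squares in log-log space. A minimal sketch with invented gauging pairs (not data from the Riffler Bach study) and h0 assumed to be zero:

```python
import math

# Hypothetical stage (m) / discharge (m^3/s) gaugings -- invented for
# illustration, not data from the Riffler Bach study.
gaugings = [(0.20, 0.35), (0.30, 0.80), (0.45, 1.90), (0.60, 3.60), (0.80, 6.90)]

def fit_rating_curve(pairs, h0=0.0):
    """Fit Q = a * (h - h0)**b by linear least squares in log-log space."""
    xs = [math.log(h - h0) for h, _ in pairs]
    ys = [math.log(q) for _, q in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

a, b = fit_rating_curve(gaugings)

def predict_q(h):
    """Estimate discharge from a continuous stage record via the fitted curve."""
    return a * h ** b
```

Once fitted, the curve converts the continuous water-level record from the probe into a discharge time series, which is the step that makes event-based Q-SSC relations possible.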
Fang, Shona C; Rosen, Raymond C; Vita, Joseph A; Ganz, Peter; Kupelian, Varant
2015-01-01
Erectile dysfunction (ED) is associated with cardiovascular disease (CVD); however, the association between change in ED status over time and future underlying CVD risk is unclear. The aim of this study was to investigate the association between change in ED status and Framingham CVD risk, as well as change in Framingham risk. We studied 965 men free of CVD in the Boston Area Community Health (BACH) Survey, a longitudinal cohort study with three assessments. ED was assessed with the five-item International Index of Erectile Function at BACH I (2002-2005) and BACH II (2007-2010) and classified as no ED/transient ED/persistent ED. CVD risk was assessed with the 10-year Framingham CVD risk algorithm at BACH I and BACH III (2010-2012). Linear regression models controlled for baseline age, socio-demographic and lifestyle factors, as well as baseline Framingham risk. Models were also stratified by age (≥/< 50 years). Framingham CVD risk and change in Framingham CVD risk were the main outcome measures. Transient and persistent ED was significantly associated with increased Framingham risk and change in risk over time in univariate and age-adjusted models. In younger men, persistent ED was associated with a Framingham risk that was 1.58 percentage points higher (95% confidence interval [CI]: 0.11, 3.06) and in older men, a Framingham risk that was 2.54 percentage points higher (95% CI: -1.5, 6.59), compared with those without ED. Change in Framingham risk over time was also associated with transient and persistent ED in men <50 years, but not in older men. Data suggest that even after taking into account other CVD risk factors, transient and persistent ED is associated with Framingham CVD risk and a greater increase in Framingham risk over time, particularly in younger men. Findings further support clinical assessment of CVD risk in men presenting with ED, especially those under 50 years. © 2014 International Society for Sexual Medicine.
Disney explains Bach: a pedagogic unit on the Fifth Brandenburg Concerto
Directory of Open Access Journals (Sweden)
Giorgio Pagannone
2014-12-01
Full Text Available This article illustrates a pedagogic unit on music listening that is centered on the first movement of the Fifth Brandenburg Concerto by Bach and is aimed at primary school pupils. The selected concerto is of historical importance for the prominent role assumed by the harpsichord in comparison to the other two soloists (flute and violin). The didactic "bridge" that is useful to deal with this piece and carve out the cognitive contents ad hoc has been identified by Giorgio Pagannone in Walt Disney's animated cartoon, Three Little Pigs (1933), based on a fairy tale of the same name. First, we present and analyse the music piece and the animated cartoon. Next, we provide a detailed and articulated description of the pedagogic unit, which was adopted by Silvia Cancedda in a primary school in Bologna (Italy) and produced very good results, such as the intuitive reading of selected score fragments. The appendix includes some of the works done by the pupils.
Bach Is the Father of Harmony: Revealed by a 1/f Fluctuation Analysis across Musical Genres.
Wu, Dan; Kendrick, Keith M; Levitin, Daniel J; Li, Chaoyi; Yao, Dezhong
2015-01-01
Harmony is a fundamental attribute of music. Close connections exist between music and mathematics, since both pursue harmony and unity. In music, the consonance of notes played simultaneously partly determines our perception of harmony, is associated with aesthetic responses, and influences emotional expression. Consonance can therefore be considered a window through which to understand and analyze harmony. Here, for the first time, we used a 1/f fluctuation analysis to investigate whether the consonance fluctuation structure in music, across a wide range of composers and genres, followed the scale-free pattern that has been found for pitch, melody, rhythm, human body movements, brain activity, natural images and geographical features. We then used a network graph approach to investigate which composers were the most influential both within and across genres. Our results showed that patterns of consonance in music did follow scale-free characteristics, suggesting that this feature is a universally evolved one in both music and the living world. Furthermore, our network analysis revealed that Bach's harmony patterns had the most influence on those used by other composers, followed closely by Mozart's.
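A 1/f fluctuation analysis of the kind named above typically estimates the exponent β in S(f) ∝ 1/f^β from a log-log fit to the power spectrum of the series. A minimal pure-Python sketch run on synthetic signals (white noise, β ≈ 0, and an integrated random walk, β ≈ 2) rather than on the study's consonance series:

```python
import cmath
import math
import random

def spectral_slope(series):
    """Estimate beta in S(f) ~ 1/f**beta by a log-log least-squares fit
    to the periodogram (plain O(n^2) DFT, adequate for short series)."""
    n = len(series)
    mean = sum(series) / n
    xs = [v - mean for v in series]
    logf, logp = [], []
    for k in range(1, n // 2):                      # skip DC and Nyquist
        coeff = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(xs))
        power = abs(coeff) ** 2 / n
        if power > 0:
            logf.append(math.log(k / n))
            logp.append(math.log(power))
    m = len(logf)
    mf, mp = sum(logf) / m, sum(logp) / m
    slope = sum((f - mf) * (p - mp) for f, p in zip(logf, logp)) / \
            sum((f - mf) ** 2 for f in logf)
    return -slope                                   # S(f) ~ f**(-beta)

# Sanity checks on synthetic signals: white noise gives beta near 0,
# a random walk (integrated noise) gives beta near 2.
random.seed(1)
noise = [random.gauss(0, 1) for _ in range(512)]
walk = [sum(noise[:i + 1]) for i in range(len(noise))]
beta_noise = spectral_slope(noise)
beta_walk = spectral_slope(walk)
```

A consonance series with β near 1 would sit between these two extremes, which is the scale-free, 1/f-like regime the abstract refers to.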
Geometry of deformed black holes. II. Schwarzschild hole surrounded by a Bach-Weyl ring
Basovník, M.; Semerák, O.
2016-08-01
We continue to study the response of black-hole space-times to the presence of additional strong sources of gravity. Restricting ourselves to static and axially symmetric (electro)vacuum exact solutions of Einstein's equations, we first considered the Majumdar-Papapetrou solution for a binary of extreme black holes in a previous paper, while here we deal with a Schwarzschild black hole surrounded by a concentric thin ring described by the Bach-Weyl solution. The geometry is again revealed through the simplest invariants determined by the metric (lapse function) and its gradient (gravitational acceleration), and by curvature (Kretschmann scalar). Extending the metric inside the black hole along null geodesics tangent to the horizon, we mainly focus on the black-hole interior (specifically, on its sections at constant Killing time), where the quantities behave in a way indicating a surprisingly strong influence of the external source. Being already distinct on the level of potential and acceleration, this is still more pronounced on the level of curvature: for a sufficiently massive and/or nearby (small) ring, the Kretschmann scalar even becomes negative in certain toroidal regions mostly touching the horizon from inside. Such regions have been interpreted as those where magnetic-type curvature dominates, but here we deal with space-times which do not involve rotation, and the negative value is achieved due to the electric-type components of the Riemann/Weyl tensor. The Kretschmann scalar also shapes rather nontrivial landscapes outside the horizon.
Lefebvre, Jean-Pierre; Ouillon, Sylvain; Vinh, Vu Duy; Arfi, Robert; Panché, Jean-Yves; Mari, Xavier; Van Thuoc, Chu; Torréton, Jean-Pascal
2012-04-01
In the Bach Dang-Cam Estuary, northern Vietnam, mechanisms governing cohesive sediment aggregation were investigated in situ in 2008-2009. As part of the Red River delta, this estuary exhibits a marked contrast in hydrological conditions between the monsoon and dry seasons. The impact on flocculation processes was assessed by means of surveys of water discharge, suspended particulate matter concentration and floc size distributions (FSDs) conducted during a tidal cycle at three selected sites along the estuary. A method was developed for calculating the relative volume concentration for the modes of various size classes from FSDs provided by the LISST 100X (Sequoia Scientific Inc.). It was found that all FSDs comprised four modes identified as particles/flocculi, fine and coarse microflocs, and macroflocs. Under the influence of the instantaneous turbulent kinetic energy, their proportions varied but without significant modification of their median diameters. In particular, when the turbulence level corresponded to a Kolmogorov microscale of less than ~235 μm, a major breakup of flocs resulted in the formation of particles/flocculi and fine microflocs. Fluctuations in turbulence level were governed by seasonal variations in freshwater discharge and by the tidal cycle. During the wet season, strong freshwater input induced a high turbulent energy level that tended to generate sediment transfer from the coarser size classes (macroflocs, coarse microflocs) to finer ones (particles/flocculi and fine microflocs), and to promote a transport of sediment seawards. During the dry season, the influence of tides predominated. The turbulent energy level was then only episodically sufficiently high to generate transfer of sediment between floc size classes. At low turbulent energy, modifications in the proportions of floc size classes were due to differential settling. Tidal pumping produced a net upstream transport of sediment. Associated with the settling of sediment
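The Kolmogorov microscale used above as a floc-breakup threshold (~235 μm) follows from the standard relation η = (ν³/ε)^(1/4). A minimal sketch, assuming a textbook kinematic viscosity for water rather than any value measured in the surveys:

```python
# Kolmogorov microscale eta = (nu**3 / eps)**0.25 relates the turbulent
# dissipation rate to the smallest eddy size. Values are illustrative
# textbook numbers, not measurements from the Bach Dang-Cam surveys.
NU = 1.0e-6  # kinematic viscosity of water at ~20 degC, m^2/s

def kolmogorov_scale(eps):
    """Smallest eddy size (m) for a turbulent dissipation rate eps (W/kg)."""
    return (NU ** 3 / eps) ** 0.25

def dissipation_for_scale(eta):
    """Dissipation rate (W/kg) that yields a given microscale eta (m)."""
    return NU ** 3 / eta ** 4

# Dissipation rate at which eddies shrink to the ~235 um breakup
# threshold quoted in the abstract:
eps_crit = dissipation_for_scale(235e-6)
```

Whenever eddies become smaller than the flocs themselves, turbulent shear can tear macroflocs apart, which is why the microscale works as a breakup criterion.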
International Nuclear Information System (INIS)
Zangrando, M.; Zacchigna, M.; Bondino, F.; Finazzi, M.; Pardini, T.; Plate, M.; Rochow, R.; Cocco, D.; Parmigiani, F.
2004-01-01
BACH, the new soft x-ray beamline for polarization dependent experiments at the Italian synchrotron radiation facility ELETTRA, has been commissioned, characterized and opened to external users. Based on two APPLE II undulators, it covers an energy range between 35 eV and 1600 eV with control of the light polarization. The monochromator works either in high resolution or high flux mode. Resolving powers of 16000 at 50 eV, 12000 at 90 eV, more than 12000 at 400 eV, 15000 at 534 eV and 6600 at 867 eV have been achieved with the three high resolution gratings. The resolving powers of the high flux grating, which covers the 290-1600 eV range, have been measured, reaching 7000 at 400 eV and 2200 at 867 eV. The fluxes, in the high resolution mode, range between 4·10¹¹ photons/s at 125 eV and 2·10¹⁰ photons/s at about 1100 eV. Using the high flux grating with the best resolution achievable, 1.7·10¹¹ photons/s impinge on the sample at 900 eV. Two branches are installed after the monochromator, allowing the set-up of two different experimental stations. One of them, besides several facilities for surface preparation and analysis, hosts a compact inelastic soft x-ray spectrometer (ComIXS) dedicated to x-ray emission experiments exploiting the small spot (10 μm in the vertical direction) on the sample. The other branch hosts a liquid helium cryostat equipped with a superconducting coil to perform absorption and transmission experiments with temperatures down to 2 K and magnetic fields up to ±7 T.
Pazderska, Agnieszka; Oftedal, Bergithe E; Napier, Catherine M; Ainsworth, Holly F; Husebye, Eystein S; Cordell, Heather J; Pearce, Simon H S; Mitchell, Anna L
2016-11-01
Autoimmune Addison's disease (AAD) is a rare but highly heritable condition. The BACH2 protein plays a crucial role in T lymphocyte maturation, and allelic variation in its gene has been associated with a number of autoimmune conditions. We aimed to determine whether alleles of the rs3757247 single nucleotide polymorphism (SNP) in the BACH2 gene are associated with AAD. This case-control association study was performed in two phases using Taqman chemistry. In the first phase, the rs3757247 SNP was genotyped in 358 UK AAD subjects and 166 local control subjects. Genotype data were also available from 5154 healthy UK controls from the Wellcome Trust (WTCCC2) for comparison. In the second phase, the SNP was genotyped in a validation cohort comprising 317 Norwegian AAD subjects and 365 controls. The frequency of the minor T allele was significantly higher in subjects with AAD from the United Kingdom compared to both the local and WTCCC2 control cohorts (58% vs 45% and 48%, respectively) (local controls, P = 1.1 × 10⁻⁴; odds ratio [OR], 1.68; 95% confidence interval [CI], 1.29-2.18; WTCCC2 controls, P = 1.4 × 10⁻⁶; OR, 1.44; 95% CI, 1.23-1.69). This finding was replicated in the Norwegian validation cohort (P = .0015; OR, 1.41; 95% CI, 1.14-1.75). Subgroup analysis showed that this association is present in subjects with both isolated AAD (OR, 1.53; 95% CI, 1.22-1.92) and autoimmune polyglandular syndrome type 2 (OR, 1.37; 95% CI, 1.12-1.69) in the UK cohort, and with autoimmune polyglandular syndrome type 2 in the Norwegian cohort (OR, 1.58; 95% CI, 1.22-2.06). We have demonstrated, for the first time, that allelic variability at the BACH2 locus is associated with susceptibility to AAD. Given its association with multiple autoimmune conditions, BACH2 can be considered a "universal" autoimmune susceptibility locus.
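Odds ratios with Wald confidence intervals of the kind reported above come from a 2×2 allele-count table. A minimal sketch with counts only approximately reconstructed from the reported frequencies (358 UK cases at ~58% minor-allele frequency, 166 local controls at ~45%), so the result merely comes close to the published OR of 1.68 (95% CI 1.29-2.18):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 allele-count table:
    a/b = minor/major allele counts in cases, c/d = the same in controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Counts reconstructed approximately from the reported percentages, not
# taken from the study's genotype table: 358 cases -> 716 alleles at ~58% T,
# 166 local controls -> 332 alleles at ~45% T.
or_, lo, hi = odds_ratio_ci(415, 301, 149, 183)
```

With these approximate counts the function returns an OR of about 1.69 with a CI of roughly 1.30-2.20, in line with the published figures.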
Liu, Lu; Wei, Jianrong; Zhang, Huishu; Xin, Jianhong; Huang, Jiping
2013-01-01
Because classical music has greatly affected our life and culture in its long history, it has attracted extensive attention from researchers to understand laws behind it. Based on statistical physics, here we use a different method to investigate classical music, namely, by analyzing cumulative distribution functions (CDFs) and autocorrelation functions of pitch fluctuations in compositions. We analyze 1,876 compositions of five representative classical music composers across 164 years, from Bach, to Mozart, to Beethoven, to Mendelssohn, and to Chopin. We report that the biggest pitch fluctuations of a composer gradually increase as time evolves from Bach's time to Mendelssohn's/Chopin's time. In particular, for the compositions of a composer, the positive and negative tails of a CDF of pitch fluctuations are distributed not only in power laws (with the scale-free property), but also in symmetry (namely, the probability of a treble following a bass and that of a bass following a treble are basically the same for each composer). The power-law exponent decreases as time elapses. Further, we also calculate the autocorrelation function of the pitch fluctuation. The autocorrelation function shows a power-law distribution for each composer. Especially, the power-law exponents vary with the composers, indicating their different levels of long-range correlation of notes. This work not only suggests a way to understand and develop music from a viewpoint of statistical physics, but also enriches the realm of traditional statistical physics by analyzing music.
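The two statistics the study relies on, the empirical tail CDF of pitch fluctuations and their autocorrelation, can be sketched in a few lines. The random-walk pitch sequence below is a toy stand-in; a real analysis would parse note-by-note pitches from the compositions:

```python
import random
import statistics

random.seed(1)
# Toy stand-in for a score: a random-walk pitch sequence (MIDI numbers).
pitch = [60]
for _ in range(2000):
    pitch.append(pitch[-1] + random.choice([-5, -2, -1, 1, 2, 5]))

# Pitch fluctuations = successive pitch differences
fluct = [b - a for a, b in zip(pitch, pitch[1:])]

# Empirical CDF of the positive tail: P(fluctuation >= x)
pos = [f for f in fluct if f > 0]
def tail_cdf(x):
    return sum(1 for f in pos if f >= x) / len(pos)

# Autocorrelation of the fluctuation series at lag k
mu, var = statistics.mean(fluct), statistics.pvariance(fluct)
def autocorr(k):
    n = len(fluct)
    return sum((fluct[i] - mu) * (fluct[i + k] - mu)
               for i in range(n - k)) / ((n - k) * var)

print(tail_cdf(1), round(tail_cdf(5), 3), round(autocorr(1), 3))
```

Plotting `tail_cdf` on log-log axes over a real note sequence is what would reveal the power-law tails the abstract describes.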
Directory of Open Access Journals (Sweden)
Lu Liu
International Nuclear Information System (INIS)
Mai Trong Khoa; Tran Dinh Ha; Le Chinh Dai; Nguyen Quang Hung; Vu Huu Khiem
2011-01-01
Intensity modulated radiotherapy (IMRT) is one of the modern techniques in cancer treatment, in which dose is delivered optimally to the shape of the tumor and minimally to the surrounding benign tissues. In developed countries, this technique is performed routinely with Linacs equipped with MLC for tumors in critical areas. In Vietnam, because of the wet climate, Linacs with MLC are difficult to operate and maintain. However, IMRT can be implemented on Linacs without MLC via independent jaws, jaws-only IMRT (JO-IMRT), in which beams are separated into many segments with different weights to deliver the highest dose to the tumor and the lowest dose to the surrounding healthy organs. Methods: We describe the application of this new treatment technique and compare it with the conventional radiotherapy method (3D-CRT). Results: Since 7/2008, the Department of Nuclear Medicine and Oncology at Bach Mai Hospital has been conducting JO-IMRT to treat cancer patients. To date, 81 cases have been treated with IMRT, including head and neck cancers (NPC, larynx cancer, maxillary sinus cancer, brain tumor), cancers in the thorax (esophagus cancer, lung cancer, breast cancer), and cancers in the pelvis (prostate cancer, cervical cancer, rectal cancer). On average, 5 to 9 beams and 5-9 segments per beam were used. Treatment time for a fraction is 6 to 12 minutes, with 2.25 Gy for CTV1 per day. Discrepancies between planned and delivered doses were below 3% (0.15 to 2.84%). In planning, IMRT is clearly superior to 3D radiation therapy. Clinically, almost all patients responded well, with fewer side effects than conventional radiotherapy. Conclusions: JO-IMRT is a modern technique with clear advantages over conventional 3D-CRT: it concentrates the radiation dose on the treatment target while minimizing the exposure of sensitive surrounding tissues, and it is an advanced technique suited to the climatic conditions of Vietnam. (author)
Boydell, Barra
2004-01-01
When the 'Crucifixus' from Bach's Mass in B minor was performed for the first time in Ireland, by the University of Dublin Choral Society in May 1865, the Dublin Daily Express described it as 'this most crabbed of all earthly music'.
Jabłonowska, Elżbieta; Wójcik, Kamila; Szymańska, Bożena; Omulecka, Aleksandra; Cwiklińska, Hanna; Piekarska, Anna
2014-01-01
To analyze the expression of HMOX1 and miR-122 in liver biopsy samples obtained from HCV mono- and HIV/HCV co-infected patients in relation to selected clinical parameters, histological examination and IL-28B polymorphism, as well as to determine whether HMOX1 expression is dependent on Bach-1. The study group consisted of 90 patients with CHC: 69 with HCV mono-infection and 21 with HIV/HCV co-infection. RT-PCR was used in the analysis of HMOX1, Bach-1 and miR-122 expression in liver biopsy samples and in the assessment of the IL-28B single-nucleotide polymorphism C/T (rs12979860) in the blood. Moreover, in liver biopsy samples an analysis of HO-1 and Bach-1 protein levels by western blot was performed. HCV mono-infected patients with lower grading score and higher viral load (>600000 IU/mL) demonstrated higher expression of HMOX1. In patients with HIV/HCV co-infection, the expression of HMOX1 was lower in patients with lower lymphocyte CD4 count and higher HIV viral load. IL-28B polymorphism did not affect the expression of either HMOX1 or miR-122. Higher HMOX1 expression correlated with higher expression of Bach-1 (Spearman's ρ = 0.586, p = 0.000001) and miR-122 (Spearman's ρ = 0.270, p = 0.014059). HMOX1 and miR-122 play an important role in the pathogenesis of CHC in HCV mono- and HIV/HCV co-infected patients. Reduced expression of HMOX1 in patients with HIV/HCV co-infection may indicate a worse prognosis in this group. Our results do not support the importance of Bach-1 in repression of HMOX1 in patients with chronic hepatitis C.
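The reported associations are Spearman rank correlations. A self-contained sketch of Spearman's ρ (Pearson correlation of rank vectors, with average ranks for ties), applied to hypothetical paired expression values rather than the study's data:

```python
def rank(xs):
    """Ranks 1..n, with average ranks assigned to ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical paired expression values, NOT the study's data:
hmox1 = [1.2, 3.4, 2.1, 5.0, 4.2, 2.8]
bach1 = [0.9, 2.8, 1.5, 4.1, 4.5, 2.0]
print(round(spearman(hmox1, bach1), 3))
```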
Contemporary suspended sediment yield of a partly glaciated catchment, Riffler Bach (Tyrol, Austria)
Weber, Martin; Baewert, Henning; Morche, David
2015-04-01
Due to glacier retreat since the LIA (Little Ice Age), proglacial areas in high mountain landscapes are growing. These systems are characterized by high geomorphological activity, especially in the fluvial subsystem. Despite the long tradition of geomorphological research in the European Alps, there is still a lack of understanding of the interactions between hydrology, sediment sources, sediment sinks and suspended sediment transport. As emphasized by ORWIN ET AL. (2010), those problems can be solved by gathering data at a higher frequency and/or in a higher spatial resolution or density - both leading to a large amount of data. In 2012 a gauging station was installed at the outlet of the partly glaciated catchment of the Riffler Bach (Kaunertal valley, Tyrol). During the ablation seasons in 2012 and 2013 the water stage was logged automatically every 15 minutes. In both seasons discharge was measured at different water levels to calculate a stage-discharge relation. Additionally, water samples were taken by an automatic water sampler. Within 16 sampling cycles, with sampling frequencies ranging from 1 to 24 hours, 389 water samples were collected. The samples were filtered to calculate the suspended sediment concentration (SSC) of each sample. Furthermore, the climate station Weißsee provided meteorological data at a 15 minute interval. Due to the high variability of suspended sediment transport in proglacial rivers it is impossible to compute a robust annual Q-SSC relation. Hence, two other approaches were used to calculate the suspended sediment load (SSL) and the suspended sediment yield (SSY): A) Q-SSC relations for every single sampling cycle (e.g. GEILHAUSEN ET AL. 2013); B) Q-SSC relations based on a classification of dominant runoff-generating processes (e.g. ORWIN AND SMART 2004). The first approach uses commonly operated analysis methods that are well understood. While the hydro-climatic approach is more feasible to explain discharge generation and to
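Q-SSC rating curves of the kind used in approach A are commonly power laws, SSC = a·Q^b, fitted by least squares in log-log space. A sketch under that assumption, with hypothetical measurement pairs (not the Riffler Bach data):

```python
import math

def fit_rating_curve(Q, SSC):
    """Fit SSC = a * Q**b by ordinary least squares in log-log space."""
    x = [math.log(q) for q in Q]
    y = [math.log(s) for s in SSC]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical paired measurements (discharge in m3/s, SSC in g/m3);
# NOT the Riffler Bach data.
Q = [0.5, 1.0, 2.0, 3.5, 5.0]
SSC = [3.6, 12.5, 41.0, 113.0, 220.0]
a, b = fit_rating_curve(Q, SSC)
print(f"SSC = {a:.1f} * Q^{b:.2f}")
```

The suspended sediment load over a record then follows by summing a·Q^b·Q·dt over the 15-minute discharge series.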
Directory of Open Access Journals (Sweden)
Giovanni Emanuele Corazza
2014-12-01
Full Text Available The main message carried by this article is that counterpoint can be taken as a model approach for the introduction of contrasting elements, not only in musical composition but also in the generation of ideas in any domain of knowledge. We start with an interdisciplinary review of the power of opposite concepts as constituting elements in nature. This is followed by the description of the DIMAI model for creative thinking, which is founded upon the dual forces of convergent and divergent thinking modalities. The main body of our work is an extraction of divergent modifiers from the Contrapunctus composed by Bach and collected in the Art of Fugue, with simple examples of application to the diversified fields of education and computer science.
Aayadi, Hoda; Mittal, Smriti P K; Deshpande, Anjali; Gore, Makarand; Ghaskadbi, Saroj S
2017-11-01
Geraniin, a hydrolysable tannin used in traditional medicine in Southeast Asia, is known to exhibit various biological activities. As an antioxidant it is known to up-regulate the phase II enzyme heme oxygenase-1 (HO-1); however, its mechanism is not clearly understood. Nuclear factor erythroid-derived 2 related factor 2 (Nrf-2) is transcriptionally up-regulated by extracellular signal-regulated kinase (ERK) 1/2 and retained in the nucleus due to inactivated glycogen synthase kinase 3 beta (GSK-3β). Geraniin additionally down-regulates expression of microRNA 217 and 377 (miR-217 and miR-377), which target HO-1 mRNA. Expression of BTB and CNC homolog 1 (BACH-1), another regulator of HO-1, is also down-regulated by up-regulating microRNA 98 (miR-98), a negative regulator of BACH-1. Thus, geraniin up-regulates HO-1 expression both by activating its positive regulator Nrf-2 and by down-regulating its negative regulator BACH-1. Up-regulation of HO-1 also confers protection to HepG2 cells from tertiary butyl hydroperoxide (TBH) induced cytotoxicity. [BMB Reports 2017; 50(11): 560-565].
International Nuclear Information System (INIS)
Zangrando, M.; Zacchigna, M.; Finazzi, M.; Cocco, D.; Rochow, R.; Parmigiani, F.
2004-01-01
BACH, a soft x-ray beamline for polarization-dependent experiments at the Italian synchrotron radiation facility ELETTRA, was recently completed and characterized. Its performance, in terms of energy resolution, flux and polarization, is presented. Based on two APPLE II undulators, BACH covers the energy range between 35 and 1600 eV with control of the light polarization. The monochromator is equipped with four gratings and allows one to work either in a high-resolution or in a high-flux mode. After the monochromator, the beamline is split into two branches with different refocusing properties. One is optimized to exploit the performance of the soft x-ray spectrometer (ComIXS) available at the beamline. Resolving powers between 12000 at 90 eV photon energy and 6600 near 867 eV were achieved using the high-resolution gratings and the smallest available slit width (10 μm). For the high-brilliance grating, which works between 290 and 1600 eV, resolving powers between 7000 at 400 eV and 2200 at 867 eV were obtained. The flux in the experimental chamber, measured with the high-resolution gratings for linearly polarized light at the best achievable resolution, ranges between 4×10¹¹ photons/s at 125 eV and 2×10¹⁰ photons/s between 900 and 1250 eV. In circularly polarized mode the flux is two times larger for energies up to 380 eV. A gain of nearly one order of magnitude is obtained for the high-brilliance grating, in accordance with theoretical predictions. Flux beyond 1.3×10¹¹ photons/s was measured up to 1300 eV, and thus over nearly the complete energy range covered by this high-brilliance grating, with a maximum of 1.6×10¹¹ photons/s between 800 and 1100 eV. First results from polarization measurements confirm a polarization above 99.7% for both linearly and circularly polarized modes at low energies. Circular dichroism experiments indicate a circular polarization beyond 90% at the Fe L₂/L₃ edge near 720 eV.
Directory of Open Access Journals (Sweden)
Stella Almeida Rosa
2011-12-01
Full Text Available This paper points out contextual and musical elements, especially those related to expressiveness, that bring Wilhelm Friedemann Bach's keyboard works close to the German Sturm und Drang movement of the beginning of the second half of the eighteenth century, through the identification of the literary and musical procedures involved and the analysis of the Polonaise No. 4 in D minor as a work representative of this style.
Poon, Matthew; Schutz, Michael
2015-01-01
Acoustic cues such as pitch height and timing are effective at communicating emotion in both music and speech. Numerous experiments altering musical passages have shown that higher and faster melodies generally sound "happier" than lower and slower melodies, findings consistent with corpus analyses of emotional speech. However, equivalent corpus analyses of complex time-varying cues in music are less common, due in part to the challenges of assembling an appropriate corpus. Here, we describe a novel, score-based exploration of the use of pitch height and timing in a set of "balanced" major- and minor-key compositions. Our analysis included all 24 Preludes and 24 Fugues from Bach's Well-Tempered Clavier (book 1), as well as all 24 of Chopin's Preludes for piano. These three sets are balanced with respect to both modality (major/minor) and key chroma ("A," "B," "C," etc.). Consistent with predictions derived from speech, we found major-key (nominally "happy") pieces to be two semitones higher in pitch height and 29% faster than minor-key (nominally "sad") pieces. This demonstrates that our balanced corpus of major- and minor-key pieces uses low-level acoustic cues for emotion in a manner consistent with speech. A series of post hoc analyses illustrate interesting trade-offs, with sets featuring greater emphasis on timing distinctions between modalities exhibiting the least pitch distinction, and vice versa. We discuss these findings in the broader context of speech-music research, as well as recent scholarship exploring the historical evolution of cue use in Western music.
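The two cues compared across modalities reduce to simple per-piece summary statistics. A toy version of that comparison, on hypothetical per-piece values (median pitch in MIDI semitones, attack rate in notes per second) rather than the actual corpus measurements:

```python
import statistics

# Hypothetical per-piece summaries (median pitch in MIDI semitones,
# attack rate in notes/second); NOT the actual corpus measurements.
major = [(67, 5.2), (65, 4.8), (69, 6.0)]
minor = [(64, 4.0), (63, 3.7), (66, 4.6)]

pitch_gap = (statistics.mean(p for p, _ in major)
             - statistics.mean(p for p, _ in minor))      # in semitones
tempo_ratio = (statistics.mean(t for _, t in major)
               / statistics.mean(t for _, t in minor))    # >1 means faster

print(f"major pieces are {pitch_gap:.1f} semitones higher, "
      f"{(tempo_ratio - 1) * 100:.0f}% faster")
```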
DEFF Research Database (Denmark)
Rasch, Peter; Trapp, Stefan
2000-01-01
The organic pollution of the lower Belmer Bach can be attributed to its agriculturally intensively used drainage area. Already before entering the urbanized region, the macrozoobenthos is poor in species due to saprobic pollution and the structural poverty of the waterbed and the banks ... When comparing the species numbers of a reference sampling plot located before the urban region and a sampling plot situated within this region, hardly any differences could be ascertained. The longitudinal isolation and the rise in temperature of about 1 °C in the urban brook section negatively influence its fauna...
Aplicación de la terapia floral de Bach en niños con retardo del desarrollo psíquico
Directory of Open Access Journals (Sweden)
María Elena Francia Reyes
2003-08-01
Full Text Available At the beginning of the 20th century, Edward Bach (1886-1936), a doctor of Welsh origin, developed an original and effective healing system. He held that disease is the product of an imbalance between mind and body that alters the energy field of the living being: the organism falls ill in the face of psychological suffering and emotional disorder. He stated: "Disease is not an evil to be suppressed but a benefit to be understood." A prospective, cross-sectional study of 100 students was conducted in the "Turcios Lima" Special School for Psychical Development Retardation, located in the Cayo Hueso popular council of the Centro Habana municipality, with the objective of evaluating the results of applying Bach flower therapy to the learning of these children diagnosed with retardation. One hundred children from the school were randomly selected and divided into a study group and a control group. Their records were reviewed and a flower-therapy clinical history was taken, and parents and teachers were interviewed to cross-check the results. Flower remedies were applied, and it was concluded that their application facilitates the transition from special school to general education.
Nuclear Criticality Calculation for Determining the Bach Size in a Pyroprocessing Facility
International Nuclear Information System (INIS)
Ko, Won Il; Lee, Ho Hee; Chang, Hong Rae; Song, Dae Yong; Kwon, Eun Ha; Jung, Chang Jun; Yoon, Suk Kyun
2009-01-01
Criticality analysis in a pyroprocessing facility is a very important element of the R and D and the facility design, in terms of determining the batch size of the sub-processes as well as facility safety. In particular, determining the batch size is essential at the beginning stage of the R and D. In this report, a criticality analysis was carried out for sub-processes such as voloxidation, electrolytic reduction, electrorefining and electrowinning, in order to estimate the maximum batch size of each process using a Monte Carlo code (MCNP4/C2). On the whole, criticality did not strongly constrain the batch sizes in voloxidation, electrolytic reduction and electrorefining. However, the permissible amount of nuclear material to prevent a criticality accident in the electrowinning process was found to be about 10 kgHM.
Nuclear Criticality Calculation for Determining the Bach Size in a Pyroprocessing Facility
Energy Technology Data Exchange (ETDEWEB)
Ko, Won Il; Lee, Ho Hee; Chang, Hong Rae; Song, Dae Yong; Kwon, Eun Ha; Jung, Chang Jun; Yoon, Suk Kyun [KAERI, Daejeon (Korea, Republic of)
2009-01-15
Pujol, F.; Berner, Z.; Neumann, T.; Stüben, D.
2003-04-01
Trace element contents in authigenic pyrite were investigated in relation to the geochemistry of the host rocks in a 160 m deep drilling at Büdesheimer Bach (Prümer Mulde, Germany), in order to place constraints on possible changes in depositional conditions and seawater composition related to the Kellwasser events (Frasnian/Famennian transition). The approach is based on the observation that the trace element pattern of authigenic pyrite is controlled by genetic conditions (Stüben et al., 2002) and that the content of elements with a generally high degree of pyritization (DTMP, degree of trace metal pyritization; e.g. As, Mo, Co, Ni) depends on their availability at the site of pyrite formation (e.g. Huerta-Diaz and Morse, 1992). The distribution of trace elements in the bulk rock essentially reflects the mineralogical composition and redox conditions, which are mainly controlled by the flux of organic matter entering the sediment. The lower and upper Kellwasser horizons are marked by an increase in carbonate and organic carbon content (up to 2%), coupled with an increase in the degree of pyritization of Fe (DOP: 0.4-0.8), indicating a change from normal marine to suboxic/anoxic conditions. A simultaneous drop in the Ba content of the host lithology, which is usually used as a proxy for paleoproductivity, can be explained by the removal of Ba dissolved in pore water under anoxic conditions (McManus et al., 1998). While low in the host rock, the Ba content of authigenic pyrite is high in these horizons, suggesting that pyrite may preserve the initial composition of pore water even for some elements with a generally low DTMP, like Ba. Consequently, the Ba content of pyrite may serve as an indicator of productivity even when the Ba content of the sediment cannot be used due to its poor preservation. During these anoxic episodes a significant increase in the As, U and V content of pyrite was also registered. In contrast, others like Ni, Co and Ag show a decrease in their
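The DOP and DTMP values quoted above follow the standard definitions: the fraction of reactive Fe (or of a trace metal) that is fixed in pyrite. A minimal illustration with hypothetical concentrations, not values measured in the Büdesheimer Bach core:

```python
def degree_of_pyritization(fe_pyrite, fe_reactive):
    """DOP: fraction of reactive Fe fixed as pyrite,
    Fe_pyrite / (Fe_pyrite + non-pyrite reactive Fe)."""
    return fe_pyrite / (fe_pyrite + fe_reactive)

def dtmp(me_pyrite, me_reactive):
    """Degree of trace metal pyritization, same form as DOP
    (after Huerta-Diaz and Morse, 1992)."""
    return me_pyrite / (me_pyrite + me_reactive)

# Hypothetical concentrations (wt% for Fe, ppm for As):
dop = degree_of_pyritization(0.8, 0.4)   # falls in the 0.4-0.8 range noted above
as_dtmp = dtmp(12.0, 3.0)                # As: typically a high-DTMP element
print(round(dop, 2), round(as_dtmp, 2))
```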
DEFF Research Database (Denmark)
Dossing, Kristina B. V.; Binderup, Tina; Kaczkowski, Bogumil
2014-01-01
by miR-129-5p. let-7 overexpression inhibited growth of carcinoid cell lines, and let-7 inhibition increased protein content of the transcription factor BACH1 and its targets MMP1 and HMGA2, all known to promote bone metastases. Immunohistochemistry analysis revealed that let-7 targets are highly...
Bijma, R.J.
2017-01-01
The study is devoted to Johann Sebastian Bach and his musicians in Leipzig's two main churches, and to the question to what extent current opinions in musicology regarding historically-informed practice are actually correct. The core of Bach's first Sunday choir in Leipzig consisted of the eight
Indian Academy of Sciences (India)
Western music allows the idea of 'modulation' from one key to another. ... 'tonic' in Indian music the tonic 'sa' is played throughout by the tanpura, and ... rules and greater freedom. A fugue ..... theorem and artificial intelligence but an excellent.
J.S.Bach variações "Goldberg" : um guia para a formação do homem completo
Helena Jank
1988-01-01
Abstract: Among the many possible ways of approaching this monumental work by J. S. Bach, the present study takes an eminently humanistic view, through which the composer's intention becomes apparent: to structure, in the form of a theme and variations, a "guide for the formation of the complete man". This intention is manifested in the general organization of the work, with a clear division into three groups of variations, in which technical, compositional and interpretative aspects are explored. Based on the ...
Morche, David; Baewert, Henning; Weber, Martin; Schmidt, Karl-Heinz
2013-04-01
The hydrology of proglacial rivers is strongly affected by glacier melting. With ongoing glacier retreat, the proportion of glacier meltwater in proglacial rivers is declining over longer time periods; snowmelt and rainfall events will play a more important role as water sources. Due to glacial erosion, the glacier system is also an important player in the orchestra of sediment sources/processes contributing to proglacial sediment budgets. The consequence of increasing deglaciation is the growing importance of other sediment sources/processes, mainly known as paraglacial, for sediment budgets in glacier forefields. Sediment export out of proglacial areas occurs mainly as solid river load. Knowledge of the quantity of the exported sediments is important for reservoir management and torrent control. In order to measure fluvial sediment transport in the catchment area of the Gepatsch reservoir in the Ötztal Alps (Tyrol/Austria), we installed a gauging station at the proglacial river Riffler Bach in June 2012. The catchment area of this station is about 20 km², with an altitudinal range from 1929 m to 3518 m. The higher altitudes in the southern part of the area are covered by the glacier Weißseeferner. Our station is equipped with an automatic water sampler (AWS 2002) and probes for water level, turbidity and electrical conductivity. All parameters are recorded at 5-15 minute intervals during the ablation period. Discharge is measured with current meters during wadable stages and by salt dilution during higher floods. Bed load is measured concurrently with the discharge measurements using a Helley-Smith sampler. In 2012, 189 water samples were taken and will be analyzed for suspended sediment concentration and ion content. Additionally, the grain size distribution will be determined using a Malvern laser diffractometer. Rating curves will be used to calculate discharge from stage recordings. The solid load of the Riffler Bach will be quantified using the discharge data and
Bach, Rudolf; Weyl, Hermann
2012-03-01
This is the English translation of the third of a series of 3 papers by Hermann Weyl (the third one jointly with Rudolf Bach), first published in 1917-1922, in which the authors derived and discussed the now-famous Weyl two-body static axially symmetric vacuum solution of Einstein's equations. The English translations of the other two papers are published alongside this one. The papers have been selected by the Editors of General Relativity and Gravitation for re-publication in the Golden Oldies series of the journal. This republication is accompanied by an editorial note written by Gernot Neugebauer, David Petroff and Bahram Mashhoon, and by a brief biography of R. Bach, written by H. Goenner.
Benda, Susanne
1995-01-01
On the new CD "Stalin Cocktail": Shostakovich, Chamber Symphony No. 2 (from String Quartet No. 3, Op. 73); Pärt, Arvo: Collage über das Thema B-A-C-H and Cantus in memoriam B. Britten; Denisov, Variations on Haydn's canon "Tod ist ein langer Schlaf"; Shchedrin, Stalin Cocktail. Moscow Virtuosi (Moskauer Virtuosen), Vladimir Spivakov. RCA/BMG-Ariola CD 09026 68061 2 (playing time: 67'33").
Directory of Open Access Journals (Sweden)
J. E. Gonzalez-Zamora
2013-05-01
Full Text Available Twelve pesticides commonly used in citrus in Spain were tested on adults of Aphytis melinus DeBach to determine their effects on parasitoid survival and fecundity, and the duration of the residue of each pesticide. Six of these pesticides were found to be harmless to moderately harmful to this parasitoid in a laboratory assay in closed Petri dishes: spinosad (bait formulation), azadirachtin, fenbutatin, fosetyl-Al, copper oxychloride, and mancozeb, with their scores on the reduction of beneficial capacity (RBC) index being between 21.4 and 94.6% after one week. The other six pesticides, classified as harmful, were tested on citrus plants to study their persistence over time under greenhouse conditions: pirimicarb, pyriproxyfen, paraffinic oil, abamectin, chlorpyrifos, and lambda-cyhalothrin. Most of these products reduced their negative effect on adults of A. melinus between one and six weeks after treatment, although lambda-cyhalothrin was still harmful to parasitoids 11 weeks after application. This information can help growers and consultants make decisions about pesticide selection and application timing in citrus in order to support IPM implementation when A. melinus is present.
Rizzi, Malgorzata; Hemmingsen Schovsbo, Niels; Korte, Christoph; Bryld Wessel Fyhn, Michael
2017-04-01
To improve the understanding and interpretation of the depositional environment of a late Oligocene lacustrine, organic-rich, oil-prone source rock succession, 2464 hand-held (HH) XRF measurements were made systematically on the 500 m long continuous core from the fully cored Enreca-3 well. This core, drilled on the remote Bach Long Vi Island, northern Gulf of Tonkin, offshore Vietnam, represents a deep-lake succession of pelagic-dominated lacustrine sediments interrupted by hyperpycnal turbidites, high-density turbidites and debris flows [1, 2]. From a combined HH-XRF-XRD data set, multivariate data analysis and regression models are used to type the rock and to predict the XRD mineral composition from the HH-XRF composition. The rock types and the modelled mineral composition highlight the geochemical variations of the sediment and allow for direct comparison with sedimentological processes and facies changes. The modeling also depicts the cyclic alternation of rock types, present on many different scales ranging from centimeters to hundreds of meters [1, 2]. The sedimentological and geochemical variations observed throughout the cored section reflect fluctuating paleoclimate, tectonism and hinterland conditions controlling the depositional setting, which may provide a deeper understanding of the deposition of this and similar Paleogene syn-rift successions in the South China Sea region. It furthermore allows the development of a more generalized depositional model relevant for other deep-lacustrine syn-rift basins. [1] Petersen et al. (2014) Journal of Petroleum Geology, 37: 373-389. [2] Hovikoski et al. (2016) Journal of Sedimentary Research, 86(8): 982-1007.
Rizzi, M.; Schovsbo, N. H.; Fyhn, M. B. W.; Korte, C.
2017-12-01
We present a high-resolution stable isotope record based on bulk organic matter (δ13Corg) and fossil wood (δ13Cwood) originating from Oligocene deep lacustrine sediments cored on Bach Long Vi Island, northern Gulf of Tonkin, offshore Vietnam. The sediments are exceptionally well preserved and are thus excellently suited for a detailed stratigraphic analysis of the stable isotope record and as a proxy for environmental and climatic changes within this period. The sediments were deposited in a rapidly subsiding, narrow and elongated fault-bound graben (Fyhn and Phach, 2015) and are represented by deep pelagic lacustrine organic-rich mud interrupted by numerous density-flow deposits (Hovikoski et al., 2016). The density-flow deposits contain abundant fragments of fossil wood. It was therefore possible to obtain 262 coalified wood fragments together with 1063 bulk organic samples throughout the span of the core, which allowed a high-resolution stable C isotope record (δ13Corg and δ13Cwood) to be established. In addition, 2464 handheld XRF determinations were carried out to further characterize the depositional environment (Rizzi et al., 2017). The organic carbon isotope trend from the 500 m core succession provides insight into the palaeoenvironmental changes of the lake during the Oligocene. Both global and local factors control the δ13C variations. The aim of the study is to obtain pure global δ13Corg and δ13Cwood signals that would allow comparison of the studied sediments with coeval syn-rift successions in the South China Sea region and other parts of the world. [1] Fyhn and Phach (2015) Tectonics, 34(2): 290-312. [2] Hovikoski et al. (2016) Journal of Sedimentary Research, 86(8): 982-1007. [3] Rizzi et al. (2017) EGU General Assembly Abstract EGU 2017-17584.
International Nuclear Information System (INIS)
Mai Trong Khoa; Nguyen Quang Hung; Tran Dinh Ha
2011-01-01
This paper evaluates the results of treating brain tumors and some intracranial diseases with the rotating gamma knife (RGK) at the Nuclear Medicine and Oncology Center, Bach Mai Hospital, from July 2007 to August 2010, in 1200 patients treated with RGK. Among the 1200 patients (average age: 42.6 years; male/female ratio: 1/1.08), pituitary tumors accounted for 19.8%, meningioma 18.3%, arteriovenous malformations (AVM) 16.7%, acoustic neuroma 8.7%, brain metastases 7.5%, craniopharyngeal tumor 5.0%, pineal tumor 3.5%, cavernoma 6%, astrocytoma 5.2%, medulloblastoma 2.9%, ependymoma 2.6%, others 3.8%. Target volume: minimum 0.6 cm³, maximum 27.6 cm³, median 6.2 ± 4.6 cm³. The average radiosurgery dose varied with the nature of the tumor: pituitary tumor (12.4 Gy), meningioma (18.8 Gy), AVM (18 Gy), acoustic neuroma (14.6 Gy), brain metastases (18.2 Gy), craniopharyngeal tumor (12.8 Gy), pineal tumor (16.3 Gy), cavernoma (17.5 Gy), astrocytoma (14.6 Gy), medulloblastoma (16.1 Gy), ependymoma (16.3 Gy), others (15 Gy). Conclusions: Almost all cases showed significant improvement of clinical symptoms: 80.2% after 1 month (complete response 20.2%) and 100% at the 36th month (complete response: 94%). Tumor sizes were reduced remarkably. Treatment was safe; no deaths or severe complications were observed during or after radiosurgery. (author)
van Vuuren, Helize
2014-01-01
Van Niekerk's Memorandum narrative is a memorial to the elegiac-realist painter Adriaan van Zyl, who died in September 2006, like Walter Benjamin in his late forties and likewise with his artistic career cut short. In Memorandum, Van Niekerk sets herself a challenging task (as Wiid formulates it at the beginning of "Memorandum 3"): to "translate" Bach's Passacaglia and Fugue in C minor (BWV 582) (c. 1715) into prose, and to produce a smaller, Afrikaans counterpart of Benjamin's Das P...
Baewert, Henning; Weber, Martin; Morche, David
2015-04-01
The hydrology of a proglacial river is strongly affected by glacier melting. Due to glacier retreat, the effects of snow melt and rain storms will become more important in future decades. Additionally, the development of periglacial landscapes will play a more important role in the hydrology of proglacial rivers. The importance of paraglacial sediment sources in sediment budgets of glacier forefields is increasing, while the role of glacial erosion is declining. In two consecutive ablation seasons, the fluvial sediment transport of the river Riffler Bach in the Kaunertal (Tyrol/Austria) was quantified. The catchment area of this station is 20 km², with an altitudinal range from 1929 m to 3518 m above mean sea level. The "Weißseeferner" glacier (2.34 km² in 2012) is the largest of the remaining glaciers. An automatic water sampler (AWS 2002) and a water-level probe were installed at the outlet of the catchment. In order to calculate annual stage-discharge relations, discharge (Q) was repeatedly measured with current meters. Concurrent with the discharge measurements, bed load was collected using a portable Helley-Smith sampler. Bed load (BL) samples were weighed and sieved in the laboratory to gain annual bed load rating curves and grain size distributions. In 2012, 154 water samples were collected during 7 periods and subsequently filtered to quantify suspended sediment concentrations (SSC). A Q-SSC relation was calculated for every period because of the high variability in suspended sediment transport. In addition, the grain size distribution of the filtered material was determined by laser diffraction analysis. In 2013, the same procedure was performed for 232 water samples collected during 9 periods. Meteorological data were logged at the climate station "Weißsee", which is located in the centre of the study area. First results show a high variability of discharge and solid sediment transport both at the inter-annual as well as at the intra-annual scale.
The Influence of Bach-Type Music on Popper's Thought of Objective Knowledge
Institute of Scientific and Technical Information of China (English)
何超
2012-01-01
Abstract: Popper's "objective knowledge", in its objectivity, resembles Bach-type music, which transcends Beethoven's subjectivity and embodies an objective sense of beauty. In Bach's music there is a rigorous yet graceful objective order between the notes, a strict numerical-logical architecture that reason cannot dismiss. A genetic connection exists between this appeal to objectivity and Popper's construction of a type of knowledge that transcends the subjective and achieves unity in objective form, the doctrine of "objective knowledge", and it directly led to Popper's distinction between World 2 and World 3. The profound influence of Bach-type music on Popper's philosophy of science provides a typical case for the mutual commensurability of music, science and philosophy. Keywords: Popper; Bach-type music; Beethoven; objective knowledge; World 3. The objective style of Bach-type music deeply informs the thought of "objective knowledge" proposed by Karl Popper. Beethoven's music gives expression to "subjective" appeal, while Bach-type music expresses sober objectivity and reason in style. Karl Popper's "objective knowledge" is based on the partition of three worlds. As "World 3", it is characterized by objectivity, autonomy, shareability, etc. The isostructuralism and coherence in style between "objective knowledge" and Bach-type music provide an illustrative case for the commensuration of art and philosophy, in a certain sense, through Popper's reflective handling.
Om guldbiller, Bach og basepar
DEFF Research Database (Denmark)
Mogensen, T.E.
2003-01-01
In his strongly encyclopedic novel The Gold Bug Variations, the American author Richard Powers seeks, through countless details, synecdoches and intricate connections, to draw in the whole world and radically expand the space of the novel. But the novel is also, or above all, a story about love....
Wang, Shuai; Hannafon, Bethany N; Wolf, Roman F; Zhou, Jundong; Avery, Jori E; Wu, Jinchang; Lind, Stuart E; Ding, Wei-Qun
2014-05-01
The effect of docosahexaenoic acid (DHA) on heme oxygenase-1 (HO-1) expression in cancer cells has never been characterized. This study examines DHA-induced HO-1 expression in human cancer cell model systems. DHA enhanced HO-1 gene expression in a time- and concentration-dependent manner, with maximal induction at 21 h of treatment. This induction of HO-1 expression was confirmed in vivo using a xenograft nude mouse model fed a fish-oil-enriched diet. The increase in HO-1 gene transcription induced by DHA was significantly attenuated by the antioxidant N-acetyl cysteine, suggesting the involvement of oxidative stress. This was supported by direct measurement of lipid peroxide levels after DHA treatment. Using a human HO-1 gene promoter reporter construct, we identified two antioxidant response elements (AREs) that mediate the DHA-induced increase in HO-1 gene transcription. Knockdown of nuclear factor (erythroid-derived 2)-like 2 (Nrf2) expression compromised the DHA-induced increase in HO-1 gene transcription, indicating the importance of the Nrf2 pathway in this event. However, the nuclear protein levels of Nrf2 remained unchanged upon DHA treatment. Further studies demonstrated that DHA reduces nuclear Bach1 protein expression by promoting its degradation and attenuates Bach1 binding to the AREs in the HO-1 gene promoter. In contrast, DHA enhanced Nrf2 binding to the AREs without affecting nuclear Nrf2 expression levels, indicating a new cellular mechanism that mediates DHA's induction of HO-1 gene transcription. To our knowledge, this is the first characterization of DHA-induced HO-1 expression in human malignant cells. Copyright © 2014 Elsevier Inc. All rights reserved.
Worst-case execution time analysis-driven object cache design
DEFF Research Database (Denmark)
Huber, Benedikt; Puffitsch, Wolfgang; Schoeberl, Martin
2012-01-01
Hard real-time systems need a time-predictable computing platform to enable static worst-case execution time (WCET) analysis. All performance-enhancing features need to be WCET analyzable. However, standard data caches containing heap-allocated data are very hard to analyze statically. In this paper we explore a new object cache design, which is driven by the capabilities of static WCET analysis. Simulations of standard benchmarks estimating the expected average-case performance usually drive computer architecture design. The design decisions derived from this methodology do not necessarily result in a WCET analysis-friendly design. Aiming for a time-predictable design, we therefore propose to employ WCET analysis techniques for the design space exploration of processor architectures. We evaluated different object cache configurations using static analysis techniques. The number of field...
Papoutsoglou, Sofronios E; Karakatsouli, Nafsika; Psarrou, Anna; Apostolidou, Sofia; Papoutsoglou, Eustratios S; Batzina, Alkisti; Leondaritis, Georgios; Sakellaridis, N
2015-02-01
This study presents the results of the response of Sparus aurata to three different musical stimuli, derived from the transmission (4 h per day, 5 days per week) of particular music pieces by Mozart, Romanza and Bach (140 dB(rms) re 1 μPa), compared to the same transmission level of white noise, while the underwater ambient noise in all the experimental tanks was 121 dB(rms) re 1 μPa. Using recirculating sea water facilities, 10 groups, 2 for each treatment, of 20 specimens of 11.2 ± 0.02 g (S.E.), were reared for 94 days under 150 ± 10 lx and a 12L:12D photoperiod, and were fed an artificial diet three times per day. Fish body weight showed significant differences after 55 days, reaching its maximum level from the 69th day until the end of the experiment, with the highest value in the Mozart (M) groups, followed by those of Romanza (R), Bach (B), control (C) and white noise (WN). SGR (M = B), %WG (M = B) and FCR (all groups fed the same % b.w.) were also improved for the M group. Brain neurotransmitter results exhibited significant differences in DA-dopamine (M > B), 5HIAA (C > B), 5HIAA:5HT (WN > R), DOPAC (M > B), DOPAC:DA and (DOPAC + HVA):DA (C > M), while no significant differences were observed in 5HT, NA, HVA and HVA:DA. No differences were observed in biometric measurements, protease activity, % fatty acids of fillet, visceral fat and liver, while differences were observed regarding carbohydrase activity and the amount (mg/g w.w.) of some fatty acids in liver, fillet and visceral fat. In conclusion, the present results confirm those reported for S. aurata concerning the observed relaxing influence, due to its action on brain neurotransmitters, of the transmission of Mozart music (compared to R and B), which resulted in the achievement of maximum growth rate, body weight and improved FCR. This conclusion definitely supports the musical "understanding" and sensitivity of S. aurata to music stimuli, as well as suggesting a specific effect of white noise.
Time-Predictable Computer Architecture
Directory of Open Access Journals (Sweden)
Schoeberl Martin
2009-01-01
Full Text Available Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET. Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.
Directory of Open Access Journals (Sweden)
Medem Federico
1977-12-01
(Carr, op. cit.), an overlapping zone possibly exists somewhere in Nicaragua or Costa Rica. 22. In Colombia, acutirostris - locally known as "la bache" - is to be found within the following areas: (a) Along the Pacific coast between Bahía de Solano (Chocó) and Río Mataje (Nariño), the latter forming the border with Ecuador. (b) In the Cauca Valley proper (Cauca and Valle). (c) In the region of Armenia (Quindío), into which it evidently migrated from the Cauca Valley. Its migration into the Magdalena River system is blocked either by the Cañón del Cauca, a gorge situated between the Cordillera Occidental and Central, or by the western slopes of the Quindío Mountains, which belong to the Cordillera Central. (d) On the Atlantic sector of the Chocó, between Caño Negro, Río Tanela, the lower Río Atrato and its tributary, Río Truandó, as well as Río Nercua, an affluent of the latter. Its presence on the middle and upper reaches of the Atrato is not proved but can be suspected. (e) On the upper Río Sinú (Córdoba), between Caño Juí in the vicinity of the village of Tierralta and Río Manso, the uppermost tributary of the Sinú. 23. No studies exist about its presence between the eastern shore of the Gulf of Urabá or Darién and Río Sinú, but it certainly once migrated from the Chocó into the latter. 24. The center of evolution of the genus Chelydra was apparently situated in the United States, from where the migration took place during the Tertiary. Fossils are known from the Pleistocene of the United States (Carr, op. cit.), but no fossil material has been described from Colombia until now. 25. Many more studies, carried out with large series of adults of both sexes, juveniles, and especially with living specimens, are badly needed in order to clarify the still remaining problems concerning the taxonomy, ecology and geographical distribution of acutirostris and rossignonii. 1. A study of Chelydra serpentina acutirostris is presented, with emphasis on the morphol
Bach and Rock in the Music Classroom.
Ponick, F. S.
2000-01-01
Focuses on the use of popular music in music education, addressing issues such as defining popular music, approaches for using popular music in the classroom, and whether the National Standards for Music Education can be attained using popular music. Lists resources for teaching popular music. (CMK)
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian
2011-01-01
In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models.
WCET analysis in shared resources real-time systems with TDMA buses
Rihani, H.; Moy, M.; Maiza, C.; Altmeyer, S.
2015-01-01
Predictability is an important aspect in real-time and safety-critical systems, where non-functional properties -- such as the timing behavior -- have high impact on the system correctness. As many safety-critical systems have a growing performance demand, simple, but outdated architectures are not
DEFF Research Database (Denmark)
Arndt, Channing; Simler, Kenneth R.
2010-01-01
A fundamental premise of absolute poverty lines is that they represent the same level of utility through time and space. Disturbingly, a series of recent studies in middle- and low-income economies show that even carefully derived poverty lines rarely satisfy this premise. This article proposes an information-theoretic approach to estimating cost-of-basic-needs (CBN) poverty lines that are utility consistent. Applications to date illustrate that utility-consistent poverty measurements derived from the proposed approach and those derived from current CBN best practices often differ substantially, with the current approach tending to systematically overestimate (underestimate) poverty in urban (rural) zones.
Variance estimation for generalized Cavalieri estimators
Johanna Ziegel; Eva B. Vedel Jensen; Karl-Anton Dorph-Petersen
2011-01-01
The precision of stereological estimators based on systematic sampling is of great practical importance. This paper presents methods of data-based variance estimation for generalized Cavalieri estimators where errors in sampling positions may occur. Variance estimators are derived under perturbed systematic sampling, systematic sampling with cumulative errors and systematic sampling with random dropouts. Copyright 2011, Oxford University Press.
Bach in 2014: Music Composition with Recurrent Neural Network
Liu, I-Ting; Ramakrishnan, Bhiksha
2014-01-01
We propose a framework for computer music composition that uses resilient propagation (RProp) and long short-term memory (LSTM) recurrent neural networks. In this paper, we show that an LSTM network properly learns the structure and characteristics of music pieces by demonstrating its ability to recreate music. We also show that predicting existing music using RProp outperforms backpropagation through time (BPTT).
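The training comparison above rests on resilient propagation, which updates each weight from the sign of its gradient rather than its magnitude, adapting a per-weight step size. A minimal sketch of the standard RProp update rule follows; the paper's exact variant and hyperparameters are not given in the abstract, so the function and the values below are illustrative textbook choices:

```python
import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RProp update: adapt per-weight step sizes from gradient signs.

    Where the gradient sign is unchanged, the step grows (eta_plus);
    where it flips, the step shrinks (eta_minus) and the update is skipped.
    Returns (weight_delta, new_step, grad_to_store).
    """
    sign = grad * prev_grad
    step = np.where(sign > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign < 0, np.maximum(step * eta_minus, step_min), step)
    delta = np.where(sign >= 0, -np.sign(grad) * step, 0.0)
    grad_store = np.where(sign < 0, 0.0, grad)  # suppress backtracked gradients
    return delta, step, grad_store

# toy use: minimize f(w) = sum(w**2), whose gradient is 2*w
w = np.array([3.0, -2.0])
prev_g = np.zeros_like(w)
step = np.full_like(w, 0.1)
for _ in range(100):
    g = 2.0 * w
    delta, step, prev_g = rprop_step(g, prev_g, step)
    w = w + delta
```

Because only gradient signs are used, RProp is insensitive to the vanishing gradient magnitudes that plague BPTT in recurrent networks, which is one plausible reading of the reported advantage.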
Surprised by Bird, Bard, and Bach: Language, Silence, and Transcendence.
Suhor, Charles
1991-01-01
Argues the importance of the relationships among silence and literature, the arts, and other experiences that point toward transcendence. Suggests that English teachers can expand the repertoire of classroom activities and teaching techniques that make use of silence. (KEH)
Museum - market? Bach or Meie Mees? / Valner Valme
Valme, Valner, 1970-
2005-01-01
On the public discussion about the place and function of the museum in contemporary society, held on 28 September 2005 in the Knighthood House (Rüütelkonna hoone). Participants in the discussion included Jaak Kangilaski, Rein Raud, Eha Komissarov, Merike Lang, Johannes Saar, Anders Härm, Sirje Helme and others.
Remember Bach: an investigation in episodic memory for music.
Eschrich, Susann; Münte, Thomas F; Altenmüller, Eckart O
2005-12-01
Emotional events are remembered better than nonemotional ones, especially after a long period of time. In this study, we investigated whether emotional music is retained better in episodic long-term memory than less emotional music, and to what extent musical structure is important.
Variable Kernel Density Estimation
Terrell, George R.; Scott, David W.
1992-01-01
We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
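The second possibility mentioned above, varying the window by the point of the sample observation, is the sample-point (adaptive) kernel estimator. A minimal sketch follows; the k-nearest-neighbour bandwidth rule used here is one common choice, not necessarily the rule analyzed by the authors:

```python
import numpy as np

def sample_point_kde(x_eval, data, k=5):
    """Variable-bandwidth KDE: each observation carries its own window width,
    here set to its distance to the k-th nearest neighbour (illustrative rule)."""
    data = np.asarray(data, dtype=float)
    # per-point bandwidths from k-NN distances (column 0 of the sort is self)
    d = np.abs(data[:, None] - data[None, :])
    h = np.maximum(np.sort(d, axis=1)[:, k], 1e-12)
    # Gaussian kernel centred at each observation with its own width
    u = (x_eval[:, None] - data[None, :]) / h[None, :]
    kern = np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h[None, :])
    return kern.mean(axis=1)  # average of per-point kernels

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 200)
xs = np.linspace(-4, 4, 81)
dens = sample_point_kde(xs, data)
```

Each kernel integrates to one, so the estimate is a proper density; wider windows in sparse regions smooth the tails, which is the intended benefit of sample-point variation.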
Chatterji, Gano
2011-01-01
Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. An error in the assumed takeoff weight results in a similar error in the fuel estimate. Fuel estimation error bounds can be determined.
Optimal fault signal estimation
Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.
2002-01-01
We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By
DEFF Research Database (Denmark)
Jørgensen, Ivan Harald Holger; Bogason, Gudmundur; Bruun, Erik
1995-01-01
This paper proposes a new way to estimate the flow in a micromechanical flow channel. A neural network is used to estimate the delay of random temperature fluctuations induced in a fluid. The design and implementation of a hardware-efficient neural flow estimator is described. The system is implemented using the switched-current technique and is capable of estimating flow in the μl/s range. The neural estimator is built around a multiplierless neural network, containing 96 synaptic weights which are updated using the LMS1 algorithm. An experimental chip has been designed that operates at 5 V...
Adjusting estimative prediction limits
Masao Ueki; Kaoru Fueda
2007-01-01
This note presents a direct adjustment of the estimative prediction limit to reduce the coverage error from a target value to third-order accuracy. The adjustment is asymptotically equivalent to those of Barndorff-Nielsen & Cox (1994, 1996) and Vidoni (1998). It has a simpler form with a plug-in estimator of the coverage probability of the estimative limit at the target value. Copyright 2007, Oxford University Press.
Estimation of measurement variances
International Nuclear Information System (INIS)
Anon.
1981-01-01
In the previous two sessions, it was assumed that the measurement error variances were known quantities when the variances of the safeguards indices were calculated. These known quantities are actually estimates based on historical data and on data generated by the measurement program. Session 34 discusses how measurement error parameters are estimated for different situations. The various error types are considered. The purpose of the session is to enable participants to: (1) estimate systematic error variances from standards data; (2) estimate random error variances from replicate measurement data; (3) perform a simple analysis of variance to characterize the measurement error structure when biases vary over time
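Point (2) above, estimating random error variances from replicate measurement data, reduces in the simplest case to a pooled within-item variance. A minimal sketch under that assumption (the function name and the simulated numbers are illustrative, not from the session materials):

```python
import numpy as np

def random_error_variance(replicates):
    """Pooled within-item variance from replicate measurements.

    `replicates` is an (items x repeats) array of repeated measurements of
    the same items; the within-item spread estimates the random error
    variance (textbook pooled estimator, not the session's exact recipe)."""
    replicates = np.asarray(replicates, dtype=float)
    n_items, n_rep = replicates.shape
    within = replicates - replicates.mean(axis=1, keepdims=True)
    # pooled sum of squares divided by pooled degrees of freedom
    return (within**2).sum() / (n_items * (n_rep - 1))

rng = np.random.default_rng(1)
true_values = rng.uniform(90, 110, size=500)      # item "true" amounts
noise = rng.normal(0.0, 2.0, size=(500, 4))       # random error, sigma = 2
measurements = true_values[:, None] + noise
var_hat = random_error_variance(measurements)      # should be near 4.0
```

Centering each item on its own mean removes the item-to-item (and any constant bias) component, leaving only the random error, which is why replicate data isolate the random variance.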
Del Pico, Wayne J
2014-01-01
Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el
Fast, Interactive Worst-Case Execution Time Analysis With Back-Annotation
DEFF Research Database (Denmark)
Harmon, Trevor; Schoeberl, Martin; Kirner, Raimund
2012-01-01
into the development cycle, requiring WCET analysis to be postponed until a final verification phase. In this paper, we propose interactive WCET analysis as a new method to provide near-instantaneous WCET feedback to the developer during software programming. We show that interactive WCET analysis is feasible using...
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
DEFF Research Database (Denmark)
Andersen, C K; Andersen, K; Kragh-Sørensen, P
2000-01-01
on these criteria, a two-part model was chosen. In this model, the probability of incurring any costs was estimated using a logistic regression, while the level of the costs was estimated in the second part of the model. The choice of model had a substantial impact on the predicted health care costs, e...
Heemstra, F.J.
1992-01-01
The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be
Heemstra, F.J.; Heemstra, F.J.
1993-01-01
The paper gives an overview of the state of the art of software cost estimation (SCE). The main questions to be answered in the paper are: (1) What are the reasons for overruns of budgets and planned durations? (2) What are the prerequisites for estimating? (3) How can software development effort be
Coherence in quantum estimation
Giorda, Paolo; Allegra, Michele
2018-01-01
The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e. the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space into two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ-states. Finally we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.
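For reference, the standard quantities behind the discussion above, in conventional quantum estimation notation (the symbols are the usual textbook ones, not taken from this abstract):

```latex
% Symmetric logarithmic derivative (SLD) L_\theta of the probe state \rho_\theta:
\partial_\theta \rho_\theta
  = \tfrac{1}{2}\left( L_\theta \rho_\theta + \rho_\theta L_\theta \right)
% Quantum Fisher information and the quantum Cramer-Rao bound for \nu probes:
F_Q(\theta) = \operatorname{Tr}\!\left[ \rho_\theta L_\theta^{2} \right],
\qquad
\operatorname{Var}(\hat\theta) \;\ge\; \frac{1}{\nu\, F_Q(\theta)}
```

The SLD both sets the precision bound through $F_Q$ and, as the abstract notes, can suggest a factorization of the Hilbert space when its eigenstructure separates.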
Overconfidence in Interval Estimates
Soll, Jack B.; Klayman, Joshua
2004-01-01
Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…
Adaptive Spectral Doppler Estimation
DEFF Research Database (Denmark)
Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt
2009-01-01
In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram, to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch's method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set
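The non-adaptive baseline named above, the averaged periodogram, can be sketched in a few lines. This is a generic Welch estimator under common default assumptions (Hann window, 50% overlap); the paper's acquisition-specific parameters are not given in the abstract:

```python
import numpy as np

def welch_psd(x, seg_len, overlap=0.5):
    """Averaged periodogram (Welch): split the signal into overlapping
    windowed segments and average their periodograms."""
    x = np.asarray(x, dtype=float)
    step = int(seg_len * (1 - overlap))
    win = np.hanning(seg_len)
    scale = (win**2).sum()                     # window power normalization
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, step)]
    periodograms = [np.abs(np.fft.rfft(win * s))**2 / scale for s in segs]
    return np.mean(periodograms, axis=0)

fs = 1000.0                                     # illustrative sampling rate
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 100.0 * t) \
    + 0.1 * np.random.default_rng(2).normal(size=t.size)
psd = welch_psd(x, seg_len=256)
freqs = np.fft.rfftfreq(256, d=1 / fs)
peak = freqs[np.argmax(psd)]                    # near the 100 Hz tone
```

Averaging over segments lowers the variance of the estimate at the cost of frequency resolution (here fs/256 per bin), which is exactly the trade-off the adaptive BPC and BAPES methods aim to relax for short observation windows.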
Optomechanical parameter estimation
International Nuclear Information System (INIS)
Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P
2013-01-01
We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)
DEFF Research Database (Denmark)
2015-01-01
A method includes determining a sequence of first coefficient estimates of a communication channel based on a sequence of pilots arranged according to a known pilot pattern and based on a receive signal, wherein the receive signal is based on the sequence of pilots transmitted over the communication channel. The method further includes determining a sequence of second coefficient estimates of the communication channel based on a decomposition of the first coefficient estimates in a dictionary matrix and a sparse vector of the second coefficient estimates, the dictionary matrix including filter characteristics of at least one known transceiver filter arranged in the communication channel.
International Nuclear Information System (INIS)
Schull, W.J.; Texas Univ., Houston, TX
1992-01-01
Estimation of the risk of cancer following exposure to ionizing radiation remains largely empirical, and the models used to adduce risk incorporate few, if any, of the advances in molecular biology of the past decade or so. These facts compromise the estimation of risk where the epidemiological data are weakest, namely, at low doses and dose rates. Without a better understanding of the molecular and cellular events that ionizing radiation initiates or promotes, it seems unlikely that this situation will improve. Nor will the situation improve without further attention to the identification and quantitative estimation of the effects of those host and environmental factors that enhance or attenuate risk. (author)
DEFF Research Database (Denmark)
Bollerslev, Tim; Todorov, Victor
We propose a new and flexible non-parametric framework for estimating the jump tails of Itô semimartingale processes. The approach is based on a relatively simple-to-implement set of estimating equations associated with the compensator for the jump measure, or its "intensity", that only utilizes the weak assumption of regular variation in the jump tails, along with in-fill asymptotic arguments for uniquely identifying the "large" jumps from the data. The estimation allows for very general dynamic dependencies in the jump tails, and does not restrict the continuous part of the process and the temporal variation in the stochastic volatility. On implementing the new estimation procedure with actual high-frequency data for the S&P 500 aggregate market portfolio, we find strong evidence for richer and more complex dynamic dependencies in the jump tails than hitherto entertained in the literature.
Bridged Race Population Estimates
U.S. Department of Health & Human Services — Population estimates from "bridging" the 31 race categories used in Census 2000, as specified in the 1997 Office of Management and Budget (OMB) race and ethnicity...
Estimation of measurement variances
International Nuclear Information System (INIS)
Jaech, J.L.
1984-01-01
The estimation of measurement error parameters in safeguards systems is discussed. Both systematic and random errors are considered. A simple analysis of variance to characterize the measurement error structure with biases varying over time is presented
APLIKASI SPLINE ESTIMATOR TERBOBOT
Directory of Open Access Journals (Sweden)
I Nyoman Budiantara
2001-01-01
Full Text Available We consider the nonparametric regression model: Zj = X(tj) + ej, j = 1, 2, ..., n, where X(tj) is the regression curve and the random errors ej are independently normally distributed with zero mean and variance σ²/bj, bj > 0. The estimate of X is obtained by minimizing a weighted least squares criterion, and the solution of this optimization is a weighted polynomial spline. Further, we give an application of the weighted spline estimator in nonparametric regression. Abstract in Bahasa Indonesia (translated): Given the nonparametric regression model Zj = X(tj) + ej, j = 1, 2, ..., n, with X(tj) the regression curve and ej random errors assumed normally distributed with zero mean and variance σ²/bj, bj > 0. The estimate of the regression curve X that minimizes a weighted penalized least squares criterion is a weighted natural polynomial spline estimator. An application of the weighted spline estimator in nonparametric regression is then given. Keywords: weighted spline, nonparametric regression, penalized least squares.
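The weighted least squares criterion in the model above, sum of bj·(Zj − X(tj))², can be illustrated with a plain weighted polynomial fit. The paper's estimator is a penalized weighted natural spline; the unpenalized polynomial below is a simplified stand-in to show how the precision weights bj enter the normal equations:

```python
import numpy as np

def weighted_poly_fit(t, z, weights, degree=3):
    """Weighted least-squares polynomial fit: minimize
    sum_j b_j * (z_j - X(t_j))**2 over polynomials X of the given degree."""
    t, z, w = (np.asarray(a, dtype=float) for a in (t, z, weights))
    V = np.vander(t, degree + 1)                  # design matrix
    W = np.diag(w)
    # normal equations: (V' W V) beta = V' W z
    beta = np.linalg.solve(V.T @ W @ V, V.T @ W @ z)
    return lambda x: np.vander(np.asarray(x, dtype=float), degree + 1) @ beta

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 60)
b = rng.uniform(0.5, 2.0, t.size)                 # precision weights b_j
z = t**3 - t + rng.normal(0.0, 1.0, t.size) / np.sqrt(b)  # Var(e_j) = 1/b_j
X_hat = weighted_poly_fit(t, z, b, degree=3)
```

Weighting each squared residual by bj (the inverse of that observation's error variance, up to σ²) is exactly what makes the estimator efficient under the heteroscedastic error model of the abstract.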
Fractional cointegration rank estimation
DEFF Research Database (Denmark)
Lasak, Katarzyna; Velasco, Carlos
the parameters of the model under the null hypothesis of the cointegration rank r = 1, 2, ..., p-1. This step provides consistent estimates of the cointegration degree, the cointegration vectors, the speed of adjustment to the equilibrium parameters and the common trends. In the second step we carry out a sup......-likelihood ratio test of no-cointegration on the estimated p - r common trends that are not cointegrated under the null. The cointegration degree is re-estimated in the second step to allow for new cointegration relationships with different memory. We augment the error correction model in the second step...... to control for stochastic trend estimation effects from the first step. The critical values of the tests proposed depend only on the number of common trends under the null, p - r, and on the interval of the cointegration degrees b allowed, but not on the true cointegration degree b0. Hence, no additional...
Estimation of spectral kurtosis
Sutawanir
2017-03-01
Rolling bearings are the most important elements in rotating machinery. Bearings frequently fall out of service for various reasons: heavy loads, unsuitable lubrication, ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has attracted attention in the field of monitoring and fault diagnosis. Bearing vibration signals give rich information for early detection of bearing failures. Spectral kurtosis, SK, is a parameter in the frequency domain indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike faults, making SK potentially useful for determining frequency bands dominated by bearing fault signals. SK can provide a measure of the distance of the analyzed bearing from a healthy one, and provides information additional to that given by the power spectral density (psd). This paper aims to explore the estimation of spectral kurtosis using the short time Fourier transform, known as the spectrogram. The estimation of SK is similar to the estimation of the psd; the estimator is model-free and of plug-in type. Some numerical studies using simulations are discussed to support the methodology. The spectral kurtosis of some stationary signals is obtained analytically and used in the simulation study. Kurtosis in the time domain has been a popular tool for detecting non-normality; spectral kurtosis is its extension to the frequency domain. The relationship between time domain and frequency domain analysis is established through the power spectrum-autocovariance Fourier transform pair. The Fourier transform is the main tool for estimation in the frequency domain; the power spectral density is estimated through the periodogram. In this paper, the short time Fourier transform estimate of the spectral kurtosis is reviewed, and a bearing fault (inner ring and outer ring) is simulated. The bearing response, power spectrum, and spectral kurtosis are plotted to
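A spectrogram-based SK estimator of the kind the abstract describes can be sketched as follows (normalization conventions vary in the literature; this uses a common moment-ratio form with non-overlapping Hann-windowed frames):

```python
import numpy as np

def spectral_kurtosis(x, nperseg=256):
    """Estimate SK(f) from a spectrogram: SK = E|X|^4 / (E|X|^2)^2 - 2.

    For stationary Gaussian noise SK is near 0 at interior bins; impulsive
    fault bands produce SK > 0. (One of several normalization conventions.)
    """
    window = np.hanning(nperseg)
    n_frames = len(x) // nperseg
    frames = x[: n_frames * nperseg].reshape(n_frames, nperseg) * window
    X = np.fft.rfft(frames, axis=1)            # short-time Fourier transform
    s2 = np.mean(np.abs(X) ** 2, axis=0)       # 2nd-order spectral moment
    s4 = np.mean(np.abs(X) ** 4, axis=0)       # 4th-order spectral moment
    return s4 / s2 ** 2 - 2.0

rng = np.random.default_rng(2)
noise = rng.normal(size=256 * 200)
sk = spectral_kurtosis(noise)                  # ~0 everywhere except DC/Nyquist
```

The DC and Nyquist bins are real-valued rather than circularly complex, so their SK sits near 1 even for Gaussian noise; they are usually excluded.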
Approximate Bayesian recursive estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2014-01-01
Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
Ranking as parameter estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav; Guy, Tatiana Valentine
2009-01-01
Roč. 4, č. 2 (2009), s. 142-158 ISSN 1745-7645 R&D Projects: GA MŠk 2C06001; GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : ranking * Bayesian estimation * negotiation * modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/AS/karny- ranking as parameter estimation.pdf
Maximal combustion temperature estimation
International Nuclear Information System (INIS)
Golodova, E; Shchepakina, E
2006-01-01
This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models
Single snapshot DOA estimation
Häcker, P.; Yang, B.
2010-10-01
In array signal processing, direction of arrival (DOA) estimation has been studied for decades. Many algorithms have been proposed and their performance has been studied thoroughly. Yet, most of these works are focused on the asymptotic case of a large number of snapshots. In automotive radar applications like driver assistance systems, however, only a small number of snapshots of the radar sensor array or, in the worst case, a single snapshot is available for DOA estimation. In this paper, we investigate and compare different DOA estimators with respect to their single snapshot performance. The main focus is on the estimation accuracy and the angular resolution in multi-target scenarios including difficult situations like correlated targets and large target power differences. We will show that some algorithms lose their ability to resolve targets or do not work properly at all. Other sophisticated algorithms do not show a superior performance as expected. It turns out that the deterministic maximum likelihood estimator is a good choice under these hard conditions.
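A minimal single-snapshot sketch, using a conventional (Bartlett) beamformer rather than the deterministic maximum likelihood estimator the paper recommends (assumed: half-wavelength uniform linear array, one target, hypothetical noise level):

```python
import numpy as np

rng = np.random.default_rng(3)

M = 16                                  # ULA with half-wavelength spacing

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# One snapshot: a single target at +20 degrees plus complex noise.
x = steering(20.0) + 0.1 * (rng.normal(size=M) + 1j * rng.normal(size=M))

# Beamformer scan -- usable from a single snapshot, unlike subspace
# methods (e.g. MUSIC), which need a full-rank sample covariance.
grid = np.arange(-90.0, 90.5, 0.5)
power = np.array([np.abs(steering(g).conj() @ x) ** 2 for g in grid])
doa_hat = float(grid[np.argmax(power)])
```

In multi-target scenarios the beamformer's resolution is limited by the array aperture, which is precisely where the single-snapshot comparisons in the paper become interesting.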
Thermodynamic estimation: Ionic materials
International Nuclear Information System (INIS)
Glasser, Leslie
2013-01-01
Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations. • Heat capacity, entropy
Distribution load estimation - DLE
Energy Technology Data Exchange (ETDEWEB)
Seppaelae, A. [VTT Energy, Espoo (Finland)
1996-12-31
The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of load models is limited to a certain network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to make improvements in the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply as it utilises information that is already available in SCADA systems
Generalized estimating equations
Hardin, James W
2002-01-01
Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields.Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th
Hassani, Majid; Macchiavello, Chiara; Maccone, Lorenzo
2017-11-01
Quantum metrology calculates the ultimate precision of all estimation strategies, measuring what is their root-mean-square error (RMSE) and their Fisher information. Here, instead, we ask how many bits of the parameter we can recover; namely, we derive an information-theoretic quantum metrology. In this setting, we redefine "Heisenberg bound" and "standard quantum limit" (the usual benchmarks in the quantum estimation theory) and show that the former can be attained only by sequential strategies or parallel strategies that employ entanglement among probes, whereas parallel-separable strategies are limited by the latter. We highlight the differences between this setting and the RMSE-based one.
Distribution load estimation - DLE
Energy Technology Data Exchange (ETDEWEB)
Seppaelae, A [VTT Energy, Espoo (Finland)
1997-12-31
The load research project has produced statistical information in the form of load models to convert the figures of annual energy consumption to hourly load values. The reliability of load models is limited to a certain network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to make improvements in the load models. Distribution load estimation (DLE) is the method developed here to improve load estimates from the load models. The method is also quite cheap to apply as it utilises information that is already available in SCADA systems
Burke, Gary; Nesheiwat, Jeffrey; Su, Ling
1994-01-01
Verification is an important aspect of the process of designing an application-specific integrated circuit (ASIC). A design must not only be functionally accurate, but must also maintain correct timing. IFA, the Intelligent Front Annotation program, assists in verifying the timing of an ASIC early in the design process. This program speeds the design-and-verification cycle by estimating delays before layouts are completed. Written in the C language.
Organizational flexibility estimation
Komarynets, Sofia
2013-01-01
With the help of parametric estimation, an evaluation scale for organizational flexibility and its parameters was formed. Definite degrees of organizational flexibility and its parameters were determined for enterprises of the Lviv region. Grouping of the enterprises under the existing scale was carried out. Special recommendations to correct the enterprises' behaviour were given.
On Functional Calculus Estimates
Schwenninger, F.L.
2015-01-01
This thesis presents various results within the field of operator theory that are formulated in estimates for functional calculi. Functional calculus is the general concept of defining operators of the form $f(A)$, where f is a function and $A$ is an operator, typically on a Banach space. Norm
DEFF Research Database (Denmark)
2000-01-01
Using a pulsed ultrasound field, the two-dimensional velocity vector can be determined with the invention. The method uses a transversally modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation. The new...
Quantifying IT estimation risks
Kulk, G.P.; Peters, R.J.; Verhoef, C.
2009-01-01
A statistical method is proposed for quantifying the impact of factors that influence the quality of the estimation of costs for IT-enabled business projects. We call these factors risk drivers as they influence the risk of the misestimation of project costs. The method can effortlessly be
Numerical Estimation in Preschoolers
Berteletti, Ilaria; Lucangeli, Daniela; Piazza, Manuela; Dehaene, Stanislas; Zorzi, Marco
2010-01-01
Children's sense of numbers before formal education is thought to rely on an approximate number system based on logarithmically compressed analog magnitudes that increases in resolution throughout childhood. School-age children performing a numerical estimation task have been shown to increasingly rely on a formally appropriate, linear…
McDonald, Judith A.; Thornton, Robert J.
2011-01-01
Course research projects that use easy-to-access real-world data and that generate findings with which undergraduate students can readily identify are hard to find. The authors describe a project that requires students to estimate the current female-male earnings gap for new college graduates. The project also enables students to see to what…
Fast fundamental frequency estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom
2017-01-01
Modelling signals as being periodic is common in many applications. Such periodic signals can be represented by a weighted sum of sinusoids with frequencies being an integer multiple of the fundamental frequency. Due to its widespread use, numerous methods have been proposed to estimate the funda...
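The harmonic-sum structure described above suggests a simple grid-search estimator: score each candidate fundamental by the summed FFT magnitude at its harmonics. This is a sketch of generic harmonic summation, not the fast algorithm of this particular paper; the test signal's parameters are assumptions:

```python
import numpy as np

def estimate_f0(x, fs, f_min=50.0, f_max=400.0, n_harm=5):
    """Score each candidate f0 by the summed FFT magnitude at its first
    n_harm harmonics, and return the best-scoring candidate (in Hz)."""
    spec = np.abs(np.fft.rfft(x))
    n = len(x)
    candidates = np.arange(f_min, f_max, 0.5)
    scores = []
    for f0 in candidates:
        # FFT bin indices of the harmonics h*f0, h = 1..n_harm
        bins = np.round(np.arange(1, n_harm + 1) * f0 * n / fs).astype(int)
        bins = bins[bins < len(spec)]
        scores.append(spec[bins].sum())
    return float(candidates[int(np.argmax(scores))])

fs = 8000
t = np.arange(0, 0.5, 1.0 / fs)
# Assumed test signal: 120 Hz fundamental with two weaker harmonics.
x = (np.sin(2 * np.pi * 120 * t)
     + 0.5 * np.sin(2 * np.pi * 240 * t)
     + 0.25 * np.sin(2 * np.pi * 360 * t))
f0_hat = estimate_f0(x, fs)
```

Summing over harmonics is what protects the estimator against picking a single strong harmonic or a subharmonic instead of the true fundamental.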
Czech Academy of Sciences Publication Activity Database
Fabián, Zdeněk
2017-01-01
Roč. 56, č. 2 (2017), s. 125-132 ISSN 0973-1377 Institutional support: RVO:67985807 Keywords : gnostic theory * statistics * robust estimates Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability http://www.ceser.in/ceserp/index.php/ijamas/article/view/4707
Estimation of morbidity effects
International Nuclear Information System (INIS)
Ostro, B.
1994-01-01
Many researchers have related exposure to ambient air pollution to respiratory morbidity. To be included in this review and analysis, however, several criteria had to be met. First, a careful study design and a methodology that generated quantitative dose-response estimates were required. Therefore, there was a focus on time-series regression analyses relating daily incidence of morbidity to air pollution in a single city or metropolitan area. Studies that used weekly or monthly average concentrations or that involved particulate measurements in poorly characterized metropolitan areas (e.g., one monitor representing a large region) were not included in this review. Second, only studies that minimized confounding and omitted-variable bias were included. For example, research that compared two cities or regions and characterized them as 'high' and 'low' pollution areas was not included because of potential confounding by other factors in the respective areas. Third, concern for the effects of seasonality and weather had to be demonstrated. This could be accomplished by stratifying and analyzing the data by season, by examining the independent effects of temperature and humidity, and/or by correcting the model for possible autocorrelation. A fourth criterion for study inclusion was a reasonably complete analysis of the data. Such analysis would include a careful exploration of the primary hypothesis as well as an examination of the robustness and sensitivity of the results to alternative functional forms, specifications, and influential data points. When studies reported the results of these alternative analyses, the quantitative estimates judged most representative of the overall findings were those summarized in this paper. Finally, for inclusion in the review of particulate matter, the study had to provide a measure of particle concentration that could be converted into PM10, particulate matter below 10
Histogram Estimators of Bivariate Densities
National Research Council Canada - National Science Library
Husemann, Joyce A
1986-01-01
One-dimensional fixed-interval histogram estimators of univariate probability density functions are less efficient than the analogous variable-interval estimators which are constructed from intervals...
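The fixed-interval versus variable-interval contrast can be sketched with equal-width versus equal-count (quantile-edge) bins; both estimators integrate to one, but the quantile edges adapt bin width to the local data density:

```python
import numpy as np

rng = np.random.default_rng(4)
sample = rng.normal(size=2000)

def fixed_interval_density(sample, n_bins=20):
    """Classical histogram: equal-width bins, density = count / (n * width)."""
    counts, edges = np.histogram(sample, bins=n_bins, density=True)
    return counts, edges

def variable_interval_density(sample, n_bins=20):
    """Equal-count bins: edges at sample quantiles, so each bin holds
    roughly n/n_bins points; bins are narrower where the data are dense."""
    edges = np.quantile(sample, np.linspace(0.0, 1.0, n_bins + 1))
    counts, edges = np.histogram(sample, bins=edges, density=True)
    return counts, edges

fd, fe = fixed_interval_density(sample)
vd, ve = variable_interval_density(sample)
```

Near the support boundary the fixed-width estimator wastes bins on near-empty tails, which is one intuition behind the efficiency claim in the abstract.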
Vamoş, Călin
2013-01-01
Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
Distribution load estimation (DLE)
Energy Technology Data Exchange (ETDEWEB)
Seppaelae, A; Lehtonen, M [VTT Energy, Espoo (Finland)
1998-08-01
The load research has produced customer class load models to convert the customers' annual energy consumption to hourly load values. The reliability of load models applied from a nation-wide sample is limited in any specific network because many local circumstances are different from utility to utility and time to time. Therefore there is a need to find improvements to the load models or, in general, improvements to the load estimates. In Distribution Load Estimation (DLE) the measurements from the network are utilized to improve the customer class load models. The results of DLE will be new load models that better correspond to the loading of the distribution network but are still close to the original load models obtained by load research. The principal data flow of DLE is presented
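The load-model conversion and a DLE-style correction can be sketched as follows (the profiles, annual energies, and single proportional correction factor are all hypothetical; the actual DLE method is a statistical estimator, not a plain rescale):

```python
import numpy as np

# Hypothetical normalized hourly profiles for two customer classes
# (each sums to 1 over the day; real load models are per season/day type).
h = np.arange(24)
res_profile = np.exp(-0.5 * ((h - 19) / 3.0) ** 2)        # evening peak
ind_profile = np.where((h >= 7) & (h < 17), 1.0, 0.2)     # working hours
res_profile /= res_profile.sum()
ind_profile /= ind_profile.sum()

# Convert annual energies (kWh) to this day's hourly loads via the models.
daily_share = 1.0 / 365.0
load_model = (5_000_000 * daily_share * res_profile
              + 3_000_000 * daily_share * ind_profile)

# DLE-style correction: adjust the model so it reproduces a feeder-level
# SCADA measurement (here a single proportional factor for illustration).
scada_measurement = 1.1 * load_model.sum()
corrected = load_model * scada_measurement / load_model.sum()
```

The point of DLE is that the corrected curves stay close to the load-research models while matching what the network actually measured.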
Estimating ISABELLE shielding requirements
International Nuclear Information System (INIS)
Stevens, A.J.; Thorndike, A.M.
1976-01-01
Estimates were made of the shielding thicknesses required at various points around the ISABELLE ring. Both hadron and muon requirements are considered. Radiation levels at the outside of the shield and at the BNL site boundary are kept at or below 1000 mrem per year and 5 mrem/year respectively. Muon requirements are based on the Wang formula for pion spectra, and the hadron requirements on the hadron cascade program CYLKAZ of Ranft. A muon shield thickness of 77 meters of sand is indicated outside the ring in one area, and hadron shields equivalent to from 2.7 to 5.6 meters in thickness of sand above the ring. The suggested safety allowance would increase these values to 86 meters and 4.0 to 7.2 meters respectively. There are many uncertainties in such estimates, but these last figures are considered to be rather conservative
Variance Function Estimation. Revision.
1987-03-01
Aswath Damodaran
1999-01-01
Over the last three decades, the capital asset pricing model has occupied a central and often controversial place in most corporate finance analysts’ tool chests. The model requires three inputs to compute expected returns – a riskfree rate, a beta for an asset and an expected risk premium for the market portfolio (over and above the riskfree rate). Betas are estimated, by most practitioners, by regressing returns on an asset against a stock index, with the slope of the regression being the b...
Estimating Venezuela's Latent Inflation
Juan Carlos Bencomo; Hugo J. Montesinos; Hugo M. Montesinos; Jose Roberto Rondo
2011-01-01
Percent variation of the consumer price index (CPI) is the inflation indicator most widely used. This indicator, however, has some drawbacks. In addition to measurement errors of the CPI, there is a problem of incongruence between the definition of inflation as a sustained and generalized increase of prices and the traditional measure associated with the CPI. We use data from 1991 to 2005 to estimate a complementary indicator for Venezuela, the highest inflation country in Latin America. Late...
Chernobyl source term estimation
International Nuclear Information System (INIS)
Gudiksen, P.H.; Harvey, T.F.; Lange, R.
1990-09-01
The Chernobyl source term available for long-range transport was estimated by integration of radiological measurements with atmospheric dispersion modeling and by reactor core radionuclide inventory estimation in conjunction with WASH-1400 release fractions associated with specific chemical groups. The model simulations revealed that the radioactive cloud became segmented during the first day, with the lower section heading toward Scandinavia and the upper part heading in a southeasterly direction with subsequent transport across Asia to Japan, the North Pacific, and the west coast of North America. By optimizing the agreement between the observed cloud arrival times and duration of peak concentrations measured over Europe, Japan, Kuwait, and the US with the model predicted concentrations, it was possible to derive source term estimates for those radionuclides measured in airborne radioactivity. This was extended to radionuclides that were largely unmeasured in the environment by performing a reactor core radionuclide inventory analysis to obtain release fractions for the various chemical transport groups. These analyses indicated that essentially all of the noble gases, 60% of the radioiodines, 40% of the radiocesium, 10% of the tellurium and about 1% or less of the more refractory elements were released. These estimates are in excellent agreement with those obtained on the basis of worldwide deposition measurements. The Chernobyl source term was several orders of magnitude greater than those associated with the Windscale and TMI reactor accidents. However, the 137Cs from the Chernobyl event is about 6% of that released by the US and USSR atmospheric nuclear weapon tests, while the 131I and 90Sr released by the Chernobyl accident were only about 0.1% of that released by the weapon tests. 13 refs., 2 figs., 7 tabs
Estimating Corporate Yield Curves
Antionio Diaz; Frank Skinner
2001-01-01
This paper represents the first study of retail deposit spreads of UK financial institutions using stochastic interest rate modelling and the market comparable approach. By replicating quoted fixed deposit rates using the Black Derman and Toy (1990) stochastic interest rate model, we find that the spread between fixed and variable rates of interest can be modeled (and priced) using an interest rate swap analogy. We also find that we can estimate an individual bank deposit yield curve as a spr...
Estimation of inspection effort
International Nuclear Information System (INIS)
Mullen, M.F.; Wincek, M.A.
1979-06-01
An overview of IAEA inspection activities is presented, and the problem of evaluating the effectiveness of an inspection is discussed. Two models are described - an effort model and an effectiveness model. The effort model breaks the IAEA's inspection effort into components; the amount of effort required for each component is estimated; and the total effort is determined by summing the effort for each component. The effectiveness model quantifies the effectiveness of inspections in terms of probabilities of detection and quantities of material to be detected, if diverted over a specific period. The method is applied to a 200 metric ton per year low-enriched uranium fuel fabrication facility. A description of the model plant is presented, a safeguards approach is outlined, and sampling plans are calculated. The required inspection effort is estimated and the results are compared to IAEA estimates. Some other applications of the method are discussed briefly. Examples are presented which demonstrate how the method might be useful in formulating guidelines for inspection planning and in establishing technical criteria for safeguards implementation
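The effectiveness side (probabilities of detection under a sampling plan) can be illustrated with the standard hypergeometric attribute-sampling formula; this is a generic sketch, not necessarily the exact model used in the report:

```python
from math import comb

def detection_probability(N, d, n):
    """Probability that a random sample of n items, drawn without
    replacement from a population of N items of which d are defective
    (e.g. diverted), contains at least one defective item."""
    if n > N - d:          # sample exceeds the non-defective pool
        return 1.0
    return 1.0 - comb(N - d, n) / comb(N, n)
```

Sampling plans are then sized by choosing the smallest n that pushes this probability above a target detection goal for the assumed d.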
Qualitative Robustness in Estimation
Directory of Open Access Journals (Sweden)
Mohammed Nasser
2012-07-01
Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts used to judge an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article presents the concept of qualitative robustness as put forward by its first proponents, together with its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite-sample and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we compare fifteen estimators of the correlation coefficient using simulated as well as real data sets.
Estimating directional epistasis
Le Rouzic, Arnaud
2014-01-01
Epistasis, i.e., the fact that gene effects depend on the genetic background, is a direct consequence of the complexity of genetic architectures. Despite this, most of the models used in evolutionary and quantitative genetics pay scant attention to genetic interactions. For instance, the traditional decomposition of genetic effects models epistasis as noise around the evolutionarily-relevant additive effects. Such an approach is only valid if it is assumed that there is no general pattern among interactions—a highly speculative scenario. Systematic interactions generate directional epistasis, which has major evolutionary consequences. In spite of its importance, directional epistasis is rarely measured or reported by quantitative geneticists, not only because its relevance is generally ignored, but also due to the lack of simple, operational, and accessible methods for its estimation. This paper describes conceptual and statistical tools that can be used to estimate directional epistasis from various kinds of data, including QTL mapping results, phenotype measurements in mutants, and artificial selection responses. As an illustration, I measured directional epistasis from a real-life example. I then discuss the interpretation of the estimates, showing how they can be used to draw meaningful biological inferences. PMID:25071828
Adaptive Nonparametric Variance Estimation for a Ratio Estimator ...
African Journals Online (AJOL)
Kernel estimators for smooth curves require modifications when estimating near end points of the support, both for practical and asymptotic reasons. The construction of such boundary kernels as solutions of variational problem is a difficult exercise. For estimating the error variance of a ratio estimator, we suggest an ...
Estimation of Lung Ventilation
Ding, Kai; Cao, Kunlin; Du, Kaifang; Amelon, Ryan; Christensen, Gary E.; Raghavan, Madhavan; Reinhardt, Joseph M.
Since the primary function of the lung is gas exchange, ventilation can be interpreted as an index of lung function in addition to perfusion. Injury and disease processes can alter lung function on a global and/or a local level. MDCT can be used to acquire multiple static breath-hold CT images of the lung taken at different lung volumes, or with proper respiratory control, 4DCT images of the lung reconstructed at different respiratory phases. Image registration can be applied to this data to estimate a deformation field that transforms the lung from one volume configuration to the other. This deformation field can be analyzed to estimate local lung tissue expansion, calculate voxel-by-voxel intensity change, and make biomechanical measurements. The physiologic significance of the registration-based measures of respiratory function can be established by comparing to more conventional measurements, such as nuclear medicine or contrast wash-in/wash-out studies with CT or MR. An important emerging application of these methods is the detection of pulmonary function change in subjects undergoing radiation therapy (RT) for lung cancer. During RT, treatment is commonly limited to sub-therapeutic doses due to unintended toxicity to normal lung tissue. Measurement of pulmonary function may be useful as a planning tool during RT planning, may be useful for tracking the progression of toxicity to nearby normal tissue during RT, and can be used to evaluate the effectiveness of a treatment post-therapy. This chapter reviews the basic measures to estimate regional ventilation from image registration of CT images, the comparison of them to the existing golden standard and the application in radiation therapy.
Estimating Subjective Probabilities
DEFF Research Database (Denmark)
Andersen, Steffen; Fountain, John; Harrison, Glenn W.
2014-01-01
either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake 'calibrating adjustments' to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments...... that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still...
Buttrey, Samuel E.; Washburn, Alan R.; Price, Wilson L.; Operations Research
2011-01-01
The article of record as published may be located at http://dx.doi.org/10.2202/1559-0410.1334 We propose a model to estimate the rates at which NHL teams score and yield goals. In the model, goals occur as if from a Poisson process whose rate depends on the two teams playing, the home-ice advantage, and the manpower (power-play, short-handed) situation. Data on all the games from the 2008-2009 season was downloaded and processed into a form suitable for the analysis. The model...
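The simplest version of such a Poisson goal model can be sketched as follows (deliberately ignoring team identities and the manpower situations that the paper's model includes; the rates and season length are assumed values, and the data are simulated rather than the 2008-2009 download):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated season: home and away goals drawn from Poisson distributions
# with a hypothetical home-ice advantage built into the rates.
true_home, true_away = 3.1, 2.7
n_games = 1230
home_goals = rng.poisson(true_home, n_games)
away_goals = rng.poisson(true_away, n_games)

# Under an i.i.d. Poisson model the MLE of each rate is the sample mean;
# the home-ice advantage is the ratio of the two estimated rates.
lam_home = home_goals.mean()
lam_away = away_goals.mean()
advantage = lam_home / lam_away
```

The full model factorizes each game's rate into attacking-team, defending-team, home-ice, and manpower components, but the sample-mean MLE above is the degenerate one-parameter case.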
Risk estimation and evaluation
Energy Technology Data Exchange (ETDEWEB)
Ferguson, R A.D.
1982-10-01
Risk assessment involves subjectivity, which makes objective decision making difficult in the nuclear power debate. The author reviews the process and uncertainties of estimating risks as well as the potential for misinterpretation and misuse. Risk data from a variety of aspects cannot be summed because the significance of different risks is not comparable. A method for including political, social, moral, psychological, and economic factors, environmental impacts, catastrophes, and benefits in the evaluation process could involve a broad base of lay and technical consultants, who would explain and argue their evaluation positions. 15 references. (DCK)
Estimating Gear Teeth Stiffness
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2013-01-01
The estimation of gear stiffness is important for determining the load distribution between the gear teeth when two sets of teeth are in contact. Two factors have a major influence on the stiffness: firstly, the boundary condition through the gear rim size included in the stiffness calculation...... and secondly, the size of the contact. In the FE calculation the true gear tooth root profile is applied. The meshing stiffnesses of gears are highly non-linear; it is found, however, that the stiffness of an individual tooth can be expressed in a linear form, assuming that the contact length is constant....
Mixtures Estimation and Applications
Mengersen, Kerrie; Titterington, Mike
2011-01-01
This book uses the EM (expectation maximization) algorithm to simultaneously estimate the missing data and unknown parameter(s) associated with a data set. The parameters describe the component distributions of the mixture; the distributions may be continuous or discrete. The editors provide a complete account of the applications, mathematical structure and statistical analysis of finite mixture distributions along with MCMC computational methods, together with a range of detailed discussions covering the applications of the methods and features chapters from the leading experts on the subject
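The EM iteration the blurb refers to can be sketched for the simplest case, a two-component one-dimensional Gaussian mixture (an illustrative toy, far narrower than the settings covered in the book):

```python
import math

def em_gaussian_mixture(data, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture (illustrative toy)."""
    mu = [min(data), max(data)]        # crude but effective initialisation
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point.
        resp = []
        for x in data:
            w = [pi[k] / math.sqrt(2.0 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k])) for k in (0, 1)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step: re-estimate weights, means and variances.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return pi, mu, var
```

Each E-step computes the posterior probability that a point came from each component; each M-step re-estimates the mixture weights, means and variances from those responsibilities.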
Robust Wave Resource Estimation
DEFF Research Database (Denmark)
Lavelle, John; Kofoed, Jens Peter
2013-01-01
density estimates of the PDF as a function both of Hm0 and Tp, and Hm0 and T0;2, together with the mean wave power per unit crest length, Pw, as a function of Hm0 and T0;2. The wave elevation parameters, from which the wave parameters are calculated, are filtered to correct or remove spurious data....... An overview is given of the methods used to do this, and a method for identifying outliers of the wave elevation data, based on the joint distribution of wave elevations and accelerations, is presented. The limitations of using a JONSWAP spectrum to model the measured wave spectra as a function of Hm0 and T0;2 or Hm0 and Tp for the Hanstholm site data are demonstrated...... As an alternative, the non-parametric loess method, which does not rely on any assumptions about the shape of the wave elevation spectra, is used to accurately estimate Pw as a function of Hm0 and T0;2....
Estimations of actual availability
International Nuclear Information System (INIS)
Molan, M.; Molan, G.
2001-01-01
Adaptation of the working environment (social, organizational, and physical) should assure a higher level of workers' availability and, consequently, a higher level of workers' performance. A special theoretical model describing the connections between environmental factors, human availability and performance was developed and validated. The central part of the model is the evaluation of actual human availability in the real working situation, or fitness-for-duty self-estimation. The model was tested in different working environments. On a large sample (2,000 workers), standardized values and critical limits for an availability questionnaire were defined. The standardized method was used to identify the most important impacts of environmental factors. Identified problems were eliminated by investments in the organization, by modification of selection and training procedures, and by humanization of the working environment. For workers with behavioural and health problems, individual consultancy was offered. The described method is a tool for identification of impacts. In combination with behavioural analyses and mathematical analyses of connections, it offers possibilities to keep an adequate level of human availability and fitness for duty in each real working situation. The model should be a tool for achieving an adequate level of nuclear safety by keeping an adequate level of workers' availability and fitness for duty. For each individual worker, estimation of the level of actual fitness for duty is possible. Effects of prolonged work and additional tasks can be evaluated. Evaluations of the effects of health status and ageing are possible at the individual level. (author)
Comparison of variance estimators for metaanalysis of instrumental variable estimates
Schmidt, A. F.; Hingorani, A. D.; Jefferis, B. J.; White, J.; Groenwold, R. H H; Dudbridge, F.; Ben-Shlomo, Y.; Chaturvedi, N.; Engmann, J.; Hughes, A.; Humphries, S.; Hypponen, E.; Kivimaki, M.; Kuh, D.; Kumari, M.; Menon, U.; Morris, R.; Power, C.; Price, J.; Wannamethee, G.; Whincup, P.
2016-01-01
Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two
Introduction to variance estimation
Wolter, Kirk M
2007-01-01
We live in the information age. Statistical surveys are used every day to determine or evaluate public policy and to make important business decisions. Correct methods for computing the precision of the survey data and for making inferences to the target population are absolutely essential to sound decision making. Now in its second edition, Introduction to Variance Estimation has for more than twenty years provided the definitive account of the theory and methods for correct precision calculations and inference, including examples of modern, complex surveys in which the methods have been used successfully. The book provides instruction on the methods that are vital to data-driven decision making in business, government, and academe. It will appeal to survey statisticians and other scientists engaged in the planning and conduct of survey research, and to those analyzing survey data and charged with extracting compelling information from such data. It will appeal to graduate students and university faculty who...
Directory of Open Access Journals (Sweden)
Laurence Booth
2015-04-01
Full Text Available Discount rates are essential to applied finance, especially in setting prices for regulated utilities and valuing the liabilities of insurance companies and defined benefit pension plans. This paper reviews the basic building blocks for estimating discount rates. It also examines market risk premiums, as well as what constitutes a benchmark fair or required rate of return, in the aftermath of the financial crisis and the U.S. Federal Reserve’s bond-buying program. Some of the results are disconcerting. In Canada, utilities and pension regulators responded to the crash in different ways. Utilities regulators have not passed on the full impact of low interest rates, so consumers face higher prices than they should, whereas pension regulators have done the opposite and forced some contributors to pay more. In both cases this is opposite to the desired effect of monetary policy, which is to stimulate aggregate demand. A comprehensive survey of global finance professionals carried out last year provides some clues as to where adjustments are needed. In the U.S., the average required equity market return was estimated at 8.0 per cent; Canada’s is 7.40 per cent, due to the lower market risk premium and the lower risk-free rate. This paper adds a wealth of historic and survey data to conclude that the ideal base long-term interest rate used in risk premium models should be 4.0 per cent, producing an overall expected market return of 9-10.0 per cent. The same data indicate that allowed returns to utilities are currently too high, while the use of current bond yields in solvency valuations of pension plans and life insurers is unhelpful unless there is a realistic expectation that the plans will soon be terminated.
T-CREST: Time-predictable multi-core architecture for embedded systems
DEFF Research Database (Denmark)
Schoeberl, Martin; Abbaspourseyedi, Sahar; Jordan, Alexander
2015-01-01
-core architectures that are optimized for the WCET instead of the average-case execution time. The resulting time-predictable resources (processors, interconnect, memory arbiter, and memory controller) and tools (compiler, WCET analysis) are designed to ease WCET analysis and to optimize WCET performance. Compared...... domain shows that the WCET can be reduced for computation-intensive tasks when distributing the tasks on several cores and using the network-on-chip for communication. With three cores the WCET is improved by a factor of 1.8 and with 15 cores by a factor of 5.7.The T-CREST project is the result...
Toxicity Estimation Software Tool (TEST)
The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...
Sampling and estimating recreational use.
Timothy G. Gregoire; Gregory J. Buhyoff
1999-01-01
Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.
Flexible and efficient estimating equations for variogram estimation
Sun, Ying; Chang, Xiaohui; Guan, Yongtao
2018-01-01
Variogram estimation plays a vastly important role in spatial modeling. Different methods for variogram estimation can be largely classified into least squares methods and likelihood based methods. A general framework to estimate the variogram through a set of estimating equations is proposed. This approach serves as an alternative approach to likelihood based methods and includes commonly used least squares approaches as its special cases. The proposed method is highly efficient as a low dimensional representation of the weight matrix is employed. The statistical efficiency of various estimators is explored and the lag effect is examined. An application to a hydrology dataset is also presented.
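As a baseline for the least-squares family that the proposed estimating-equation framework generalises, Matheron's classical method-of-moments variogram on a one-dimensional transect can be sketched as follows (a textbook baseline, not the paper's method):

```python
def empirical_variogram(locs, vals, lags, tol):
    """Matheron's estimator: gamma(h) = (1 / (2 * N(h))) * sum of
    (z_i - z_j)**2 over pairs whose separation is within tol of lag h."""
    gamma = []
    for h in lags:
        sq, n = 0.0, 0
        for i in range(len(locs)):
            for j in range(i + 1, len(locs)):
                if abs(abs(locs[i] - locs[j]) - h) <= tol:
                    sq += (vals[i] - vals[j]) ** 2
                    n += 1
        gamma.append(sq / (2 * n) if n else float("nan"))
    return gamma
```

A parametric variogram model would then be fitted to these empirical values; the paper's estimating equations subsume such least-squares fits as special cases.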
Improved Estimates of Thermodynamic Parameters
Lawson, D. D.
1982-01-01
Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
State estimation in networked systems
Sijs, J.
2012-01-01
This thesis considers state estimation strategies for networked systems. State estimation refers to a method for computing the unknown state of a dynamic process by combining sensor measurements with predictions from a process model. The most well known method for state estimation is the Kalman
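The Kalman filter mentioned above alternates a model prediction with a measurement update; a scalar textbook sketch (not code from the thesis):

```python
def kalman_1d(z_seq, x0, p0, q, r, a=1.0):
    """Scalar Kalman filter for x[t+1] = a * x[t] + w (process noise var q),
    z[t] = x[t] + v (measurement noise var r). Returns the filtered states."""
    x, p = x0, p0
    estimates = []
    for z in z_seq:
        # Predict one step ahead.
        x = a * x
        p = a * p * a + q
        # Update with the new measurement z.
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

With q = 0 and a = 1 the filter behaves like a recursively computed average of the measurements.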
Global Polynomial Kernel Hazard Estimation
DEFF Research Database (Denmark)
Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch
2015-01-01
This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...
Uveal melanoma: Estimating prognosis
Directory of Open Access Journals (Sweden)
Swathi Kaliki
2015-01-01
Full Text Available Uveal melanoma is the most common primary malignant tumor of the eye in adults, predominantly found in Caucasians. Local tumor control of uveal melanoma is excellent, yet this malignancy is associated with relatively high mortality secondary to metastasis. Various clinical, histopathological, cytogenetic, and gene expression features help in estimating the prognosis of uveal melanoma. The clinical features associated with poor prognosis in patients with uveal melanoma include older age at presentation, male gender, larger tumor basal diameter and thickness, ciliary body location, diffuse tumor configuration, association with ocular/oculodermal melanocytosis, extraocular tumor extension, and advanced tumor staging by American Joint Committee on Cancer classification. Histopathological features suggestive of poor prognosis include epithelioid cell type, high mitotic activity, higher values of mean diameter of ten largest nucleoli, higher microvascular density, extravascular matrix patterns, tumor-infiltrating lymphocytes, tumor-infiltrating macrophages, higher expression of insulin-like growth factor-1 receptor, and higher expression of human leukocyte antigen Class I and II. Monosomy 3, 1p loss, 6q loss, and 8q gain, as well as classification as Class II by gene expression, are predictive of poor prognosis of uveal melanoma. In this review, we discuss the prognostic factors of uveal melanoma. A database search was performed on PubMed, using the terms "uvea," "iris," "ciliary body," "choroid," "melanoma," "uveal melanoma" and "prognosis," "metastasis," "genetic testing," "gene expression profiling." Relevant English language articles were extracted, reviewed, and referenced appropriately.
Approaches to estimating decommissioning costs
International Nuclear Information System (INIS)
Smith, R.I.
1990-07-01
The chronological development of methodology for estimating the cost of nuclear reactor power station decommissioning is traced from the mid-1970s through 1990. Three techniques for developing decommissioning cost estimates are described. The two viable techniques are compared by examining estimates developed for the same nuclear power station using both methods. The comparison shows that the differences between the estimates are due largely to differing assumptions regarding the size of the utility and operating contractor overhead staffs. It is concluded that the two methods provide bounding estimates on a range of manageable costs, and provide reasonable bases for the utility rate adjustments necessary to pay for future decommissioning costs. 6 refs
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
DEFF Research Database (Denmark)
Lunde, Asger; Brix, Anne Floor
In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared...... to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from...... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...
A new estimator for vector velocity estimation [medical ultrasonics
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2001-01-01
A new estimator for determining the two-dimensional velocity vector using a pulsed ultrasound field is derived. The estimator uses a transversely modulated ultrasound field for probing the moving medium under investigation. A modified autocorrelation approach is used in the velocity estimation...... be introduced, and the velocity estimation is done at a fixed depth in tissue to reduce the influence of a spatial velocity spread. Examples for different velocity vectors and field conditions are shown using both simple and more complex field simulations. A relative accuracy of 10.1% is obtained...
International Nuclear Information System (INIS)
Vetrinskaya, N.I.; Manasbayeva, A.B.
1998-01-01
Water has a particular ecological function and is an indicator of the general state of the biosphere. In this connection, the toxicological evaluation of water by biological testing methods is highly relevant. The peculiarity of biological testing information is that it integrally reflects the totality of properties of the examined environment as perceived by living objects. Rapid integral evaluation of the anthropogenic situation is a basic aim of biological testing. If this evaluation deviates from the normal state, a detailed analysis and identification of dangerous components can be conducted later. The quality of water from the Degelen gallery, where nuclear explosions were conducted, was investigated by bio-testing methods. Micro-organisms (Micrococcus luteus, Candida crusei, Pseudomonas alcaligenes) and the water plant elodea (Elodea canadensis Rich) were used as test objects. It is known that the transport functions of the cell membranes of living organisms are among the first to be disrupted under extreme conditions by various influences. Therefore, the ion permeability of elodea and micro-organism cells kept in the examined water with toxicants was used as the test function. Alteration of membrane permeability was estimated by measuring the electrical conductivity of electrolytes released from the cells of living objects into distilled water. The index of water toxicity is the ratio of electrical conductivity in the experiment to electrical conductivity in the control. Observations of the general state of the plants incubated in toxic water were also made. (The chronic experiment was conducted for 60 days.) The plants were incubated in water samples taken from the gallery in 1996 and 1997. The incubation time was 1-10 days. The results showed that the ion permeability of elodea and micro-organism cells changed substantially under the influence of the radionuclides contained in the test water. Changes are taking place even in
WAYS HIERARCHY OF ACCOUNTING ESTIMATES
Directory of Open Access Journals (Sweden)
ŞERBAN CLAUDIU VALENTIN
2015-03-01
Full Text Available Based on the premise that an estimate is an approximate evaluation, and on the fact that the term "estimate" is increasingly common across a variety of theoretical and practical areas, particularly in situations where we cannot decide with certainty, it must be said that we are in fact dealing with estimates, in our case with accounting estimates. Adding to this the phrase "estimated value", which implies a value obtained from an evaluation process whose size is not exact but approximate, that is, close to the actual size, it becomes obvious that the hierarchical relationship between evaluation and estimate needs to be delimited, while considering the context in which the evaluation activity is carried out at entity level.
Spring Small Grains Area Estimation
Palmer, W. F.; Mohler, R. J.
1986-01-01
SSG3 automatically estimates acreage of spring small grains from Landsat data. Report describes development and testing of a computerized technique for using Landsat multispectral scanner (MSS) data to estimate acreage of spring small grains (wheat, barley, and oats). Application of technique to analysis of four years of data from United States and Canada yielded estimates of accuracy comparable to those obtained through procedures that rely on trained analysis.
Parameter estimation in plasmonic QED
Jahromi, H. Rangani
2018-03-01
We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), and therefore the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe at any arbitrary time considerably enhances the precision of estimation in comparison with one-qubit estimation.
Quantity Estimation Of The Interactions
International Nuclear Information System (INIS)
Gorana, Agim; Malkaj, Partizan; Muda, Valbona
2007-01-01
In this paper we present some considerations about quantity estimations regarding the range of interactions and the conservation laws in various types of interactions. Our estimations are made from both classical and quantum points of view and concern the interaction carriers, the radius, the range of influence, and the intensity of the interactions.
CONDITIONS FOR EXACT CAVALIERI ESTIMATION
Directory of Open Access Journals (Sweden)
Mónica Tinajero-Bravo
2014-03-01
Full Text Available Exact Cavalieri estimation amounts to zero variance estimation of an integral with systematic observations along a sampling axis. A sufficient condition is given, both in the continuous and the discrete cases, for exact Cavalieri sampling. The conclusions suggest improvements on the current stereological application of fractionator-type sampling.
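The Cavalieri scheme referred to above samples an integral systematically along an axis with a single uniform random start; a generic sketch of the sampling scheme (the paper's exactness conditions are not encoded here):

```python
def cavalieri_estimate(f, a, b, n, u):
    """Systematic (Cavalieri) estimator of the integral of f over [a, b]:
    with period t = (b - a) / n and uniform random start u in [0, 1),
    sum t * f(a + (u + k) * t) over k = 0, ..., n - 1.
    Unbiased over u; 'exact' (zero variance) only for suitable integrands."""
    t = (b - a) / n
    return t * sum(f(a + (u + k) * t) for k in range(n))
```

For a constant integrand the estimate equals the integral for every start point u, which is the zero-variance situation the paper characterises in general.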
Optimization of Barron density estimates
Czech Academy of Sciences Publication Activity Database
Vajda, Igor; van der Meulen, E. C.
2001-01-01
Vol. 47, No. 5 (2001), pp. 1867-1883. ISSN 0018-9448. R&D Projects: GA ČR GA102/99/1137. Grant - others: Copernicus (XE) 579. Institutional research plan: AV0Z1075907. Keywords: Barron estimator; chi-square criterion; density estimation. Subject RIV: BD - Theory of Information. Impact factor: 2.077, year: 2001
Stochastic Estimation via Polynomial Chaos
2015-10-01
AFRL-RW-EG-TR-2015-108, Douglas V. Nance, Air Force Research..., 20-04-2015 – 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic
Bayesian estimates of linkage disequilibrium
Directory of Open Access Journals (Sweden)
Abad-Grau María M
2007-06-01
Full Text Available Abstract Background The maximum likelihood estimator of D' – a standard measure of linkage disequilibrium – is biased toward disequilibrium, and the bias is particularly evident in small samples and rare haplotypes. Results This paper proposes a Bayesian estimation of D' to address this problem. The reduction of the bias is achieved by using a prior distribution on the pair-wise associations between single nucleotide polymorphisms (SNPs) that increases the likelihood of equilibrium with increasing physical distances between pairs of SNPs. We show how to compute the Bayesian estimate using a stochastic estimation based on MCMC methods, and also propose a numerical approximation to the Bayesian estimates that can be used to estimate patterns of LD in large datasets of SNPs. Conclusion Our Bayesian estimator of D' corrects the bias toward disequilibrium that affects the maximum likelihood estimator. A consequence of this feature is a more objective view about the extent of linkage disequilibrium in the human genome, and a more realistic number of tagging SNPs to fully exploit the power of genome wide association studies.
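The classical plug-in estimator whose bias the paper corrects computes D' from haplotype and allele frequencies; a sketch of that standard formula (not the Bayesian estimator proposed in the paper):

```python
def d_prime(p_ab, p_a, p_b):
    """Plug-in D': D = p_AB - p_A * p_B, normalised by the maximum
    magnitude D can attain given the allele frequencies p_A and p_B."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1.0 - p_b), (1.0 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1.0 - p_a) * (1.0 - p_b))
    return d / d_max if d_max > 0 else 0.0
```

Because the frequencies are themselves estimated from the sample, small samples and rare haplotypes push |D'| toward 1, which is the bias the Bayesian prior is designed to shrink.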
Reactivity estimation using digital nonlinear H∞ estimator for VHTRC experiment
International Nuclear Information System (INIS)
Suzuki, Katsuo; Nabeshima, Kunihiko; Yamane, Tsuyoshi
2003-01-01
On-line, real-time estimation of time-varying reactivity in a nuclear reactor is necessary for early detection of reactivity anomalies and safe operation. Using a digital nonlinear H∞ estimator, an experiment on real-time dynamic reactivity estimation was carried out in the Very High Temperature Reactor Critical Assembly (VHTRC) of the Japan Atomic Energy Research Institute. Some technical issues of the experiment are described, such as reactivity insertion, data sampling frequency, the anti-aliasing filter, the experimental circuit, and the digitalized nonlinear H∞ reactivity estimator. We then discuss the experimental results obtained by the digital nonlinear H∞ estimator with sampled data of the nuclear instrumentation signal for the power responses under various reactivity insertions. Good performance of the estimated reactivity was observed, with almost no delay relative to the true reactivity and sufficient accuracy, between 0.05 cent and 0.1 cent. The experiment shows that real-time reactivity estimation with a data sampling period of 10 ms can certainly be realized. From the results of the experiment, it is concluded that the digital nonlinear H∞ reactivity estimator can be applied as an on-line, real-time reactivity meter for actual nuclear plants. (author)
DEFF Research Database (Denmark)
Tangmose, Sara; Thevissen, Patrick; Lynnerup, Niels
2015-01-01
A radiographic assessment of third molar development is essential for differentiating between juveniles and adolescents in forensic age estimations. As the developmental stages of third molars are highly correlated, age estimates based on a combination of a full set of third molar scores...... are statistically complicated. Transition analysis (TA) is a statistical method developed for estimating age at death in skeletons, which combines several correlated developmental traits into one age estimate including a 95% prediction interval. The aim of this study was to evaluate the performance of TA...... in the living on a full set of third molar scores. A cross sectional sample of 854 panoramic radiographs, homogenously distributed by sex and age (15.0-24.0 years), were randomly split in two; a reference sample for obtaining age estimates including a 95% prediction interval according to TA; and a validation...
UNBIASED ESTIMATORS OF SPECIFIC CONNECTIVITY
Directory of Open Access Journals (Sweden)
Jean-Paul Jernot
2011-05-01
Full Text Available This paper deals with the estimation of the specific connectivity of a stationary random set in R^d. It turns out that the "natural" estimator is only asymptotically unbiased. The example of a Boolean model of hypercubes illustrates the amplitude of the bias produced when the measurement field is relatively small with respect to the range of the random set. For that reason unbiased estimators are desired. Such an estimator can be found in the literature in the case where the measurement field is a right parallelotope. In this paper, this estimator is extended to apply to measurement fields of various shapes, and to possess a smaller variance. Finally an example from quantitative metallography (specific connectivity of a population of sintered bronze particles) is given.
Laser cost experience and estimation
International Nuclear Information System (INIS)
Shofner, F.M.; Hoglund, R.L.
1977-01-01
This report addresses the question of estimating the capital and operating costs for LIS (Laser Isotope Separation) lasers, which have performance requirements well beyond the state of the mature art. This question is seen from different perspectives by political leaders, ERDA administrators, scientists, and engineers concerned with reducing LIS to economically successful commercial practice on a timely basis. Accordingly, this report attempts to provide "ballpark" estimators for capital and operating costs, together with useful design and operating information for lasers based on mature technology and their LIS analogs. It is written at a basic level and is intended to address about equally the perspectives of administrators, scientists, and engineers. Its major contributions are establishing the track record of current, mature, industrialized lasers (including capital and operating cost estimators, reliability, types of application, etc.) and, especially, evolving generalized estimating procedures for capital and operating cost estimators for new laser designs
Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)
Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
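The effect of demanding a well-conditioned estimator can be sketched at the eigenvalue level: clip the sample eigenvalues from below so that the condition number lmax/lmin is at most kappa. Fixing the clipping level at lmax/kappa is a simplification; the paper instead derives the truncation level by maximum likelihood.

```python
def clip_eigs(eigs, kappa):
    """Clip covariance eigenvalues from below at max(eigs) / kappa, so the
    condition number of the reconstructed estimator is at most kappa.
    Simplified sketch; the paper chooses the level by maximum likelihood."""
    lmax = max(eigs)
    return [max(l, lmax / kappa) for l in eigs]

# Sample eigenvalues spanning a condition number of 400, bounded to 10.
regularized = clip_eigs([4.0, 0.01, 1.0], 10.0)
```

Reassembling the clipped eigenvalues with the original eigenvectors yields an invertible, well-conditioned covariance estimate.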
Radiation dose estimates for radiopharmaceuticals
International Nuclear Information System (INIS)
Stabin, M.G.; Stubbs, J.B.; Toohey, R.E.
1996-04-01
Tables of radiation dose estimates based on the Cristy-Eckerman adult male phantom are provided for a number of radiopharmaceuticals commonly used in nuclear medicine. Radiation dose estimates are listed for all major source organs, and several other organs of interest. The dose estimates were calculated using the MIRD Technique as implemented in the MIRDOSE3 computer code, developed by the Oak Ridge Institute for Science and Education, Radiation Internal Dose Information Center. In this code, residence times for source organs are used with decay data from the MIRD Radionuclide Data and Decay Schemes to produce estimates of radiation dose to organs of standardized phantoms representing individuals of different ages. The adult male phantom of the Cristy-Eckerman phantom series differs from the MIRD 5, or Reference Man, phantom in several aspects, the most important of which is the difference in the masses and absorbed fractions for the active (red) marrow. The absorbed fractions for low-energy photons striking the marrow are also different. Other minor differences exist, but are not likely to significantly affect dose estimates calculated with the two phantoms. Assumptions which support each of the dose estimates appear at the bottom of the table of estimates for a given radiopharmaceutical. In most cases, the model kinetics or organ residence times are explicitly given. The results presented here can easily be extended to include other radiopharmaceuticals or phantoms.
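The MIRD summation described above can be sketched in a few lines: the absorbed dose to a target organ is the sum, over source organs, of the residence time of the source multiplied by the S value for that target-source pair. The organ names, residence times, and S values below are purely illustrative placeholders, not values from MIRDOSE3.

```python
def mird_dose(residence_times, s_values):
    """MIRD-style organ dose sketch:
    D(target) = sum over sources of A~(source) * S(target <- source),
    where A~ is the residence time (e.g. MBq·h per MBq administered)
    and S is the dose per unit cumulated activity (e.g. mGy per MBq·h)."""
    doses = {}
    for (target, source), s in s_values.items():
        doses[target] = doses.get(target, 0.0) + residence_times.get(source, 0.0) * s
    return doses
```

With real decay data, the table for each radiopharmaceutical is just this computation repeated for every target organ of the chosen phantom.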
Risk estimation using probability machines
2014-01-01
Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
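The counterfactual effect-size idea generalizes to any fitted probability machine: the effect of a binary predictor is the average change in predicted probability when that predictor is switched from 0 to 1 for every observation, holding the other predictors fixed. The sketch below uses a hand-written probability function as a stand-in for the paper's random forest machinery.

```python
def average_effect(prob_machine, rows, idx):
    """Counterfactual effect size: mean of
    P(Y=1 | x_idx=1, rest) - P(Y=1 | x_idx=0, rest) over all observations."""
    total = 0.0
    for row in rows:
        hi = list(row); hi[idx] = 1   # counterfactual with the feature on
        lo = list(row); lo[idx] = 0   # counterfactual with the feature off
        total += prob_machine(hi) - prob_machine(lo)
    return total / len(rows)
```

For a linear-in-probability machine the recovered effect equals the coefficient exactly; for a nonparametric machine it is an averaged marginal effect.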
Boundary methods for mode estimation
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computation, to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for both the MOG and k-means techniques is the Akaike Information Criterion (AIC).
Generalized Centroid Estimators in Bioinformatics
Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi
2011-01-01
In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework to design MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
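For a concrete flavour of such estimators: the γ-centroid family marks a position as 1 exactly when its marginal probability exceeds 1/(γ+1), so that γ = 1 recovers the ordinary centroid estimator (threshold 0.5) and larger γ trades precision for sensitivity. This minimal sketch assumes the marginal probabilities have already been computed by some upstream model.

```python
def gamma_centroid(probs, gamma):
    """γ-centroid-style estimator on a binary space: predict 1 at position i
    when its marginal probability p_i exceeds 1/(γ+1)."""
    thr = 1.0 / (gamma + 1.0)
    return [1 if p > thr else 0 for p in probs]
```

Raising γ lowers the threshold, which matches accuracy measures (such as F-score) that reward recovering true positives more heavily.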
NASA Software Cost Estimation Model: An Analogy Based Estimation Model
Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James
2015-01-01
The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. For example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of its software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest-neighbor prediction model performance on the same data set.
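At its core, analogy-based estimation is a nearest-neighbor prediction: find the k past projects most similar to the new one in feature space and average their actual efforts. The features, distance metric, and data below are illustrative stand-ins, not the NASA model itself.

```python
def analogy_estimate(history, query, k=3):
    """Analogy-based effort sketch: history is a list of
    (feature_vector, actual_effort) pairs; predict the effort of the query
    project as the mean effort of its k nearest neighbors (squared
    Euclidean distance on the feature vectors)."""
    ranked = sorted(history,
                    key=lambda fe: sum((a - b) ** 2 for a, b in zip(fe[0], query)))
    nearest = ranked[:k]
    return sum(effort for _, effort in nearest) / len(nearest)
```

Clustering-based variants replace "k nearest projects" with "projects in the same cluster", but the prediction step is the same local averaging.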
Likelihood estimators for multivariate extremes
Huser, Raphaël; Davison, Anthony C.; Genton, Marc G.
2015-01-01
The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
Analytical estimates of structural behavior
Dym, Clive L
2012-01-01
Explicitly reintroducing the idea of modeling to the analysis of structures, Analytical Estimates of Structural Behavior presents an integrated approach to modeling and estimating the behavior of structures. With the increasing reliance on computer-based approaches in structural analysis, it is becoming even more important for structural engineers to recognize that they are dealing with models of structures, not with the actual structures. As tempting as it is to run innumerable simulations, closed-form estimates can be effectively used to guide and check numerical results, and to confirm phys
Phase estimation in optical interferometry
Rastogi, Pramod
2014-01-01
Phase Estimation in Optical Interferometry covers the essentials of phase-stepping algorithms used in interferometry and pseudointerferometric techniques. It presents the basic concepts and mathematics needed for understanding the phase estimation methods in use today. The first four chapters focus on phase retrieval from image transforms using a single frame. The next several chapters examine the local environment of a fringe pattern, give a broad picture of the phase estimation approach based on local polynomial phase modeling, cover temporal high-resolution phase evaluation methods, and pre
An Analytical Cost Estimation Procedure
National Research Council Canada - National Science Library
Jayachandran, Toke
1999-01-01
Analytical procedures that can be used to do a sensitivity analysis of a cost estimate, and to perform tradeoffs to identify input values that can reduce the total cost of a project, are described in the report...
Spectral unmixing: estimating partial abundances
CSIR Research Space (South Africa)
Debba, Pravesh
2009-01-01
Full Text Available Unmixing techniques are complicated when considering very similar spectral signatures. Iron-bearing oxide/hydroxide/sulfate minerals have similar spectral signatures. The study focuses on how one could estimate abundances of spectrally similar iron-bearing oxide...
50th Percentile Rent Estimates
Department of Housing and Urban Development — Rent estimates at the 50th percentile (or median) are calculated for all Fair Market Rent areas. Fair Market Rents (FMRs) are primarily used to determine payment...
LPS Catch and Effort Estimation
National Oceanic and Atmospheric Administration, Department of Commerce — Data collected from the LPS dockside (LPIS) and the LPS telephone (LPTS) surveys are combined to produce estimates of total recreational catch, landings, and fishing...
Exploratory shaft liner corrosion estimate
International Nuclear Information System (INIS)
Duncan, D.R.
1985-10-01
An estimate of expected corrosion degradation during the 100-year design life of the Exploratory Shaft (ES) is presented. The basis for the estimate is a brief literature survey of corrosion data, in addition to data taken by the Basalt Waste Isolation Project. The scope of the study covers the expected corrosion environment of the ES and the corrosion modes of general corrosion, pitting and crevice corrosion, dissimilar metal corrosion, and environmentally assisted cracking. The expected internal and external environment of the shaft liner is described in detail, and estimated effects of each corrosion mode are given. The maximum amount of general corrosion degradation was estimated to be 70 mils at the exterior and 48 mils at the interior, at the shaft bottom. Corrosion at welds or mechanical joints could be significant, depending on the design. Once the project has established a final corrosion allowance, it will be added to the design criteria. 10 refs., 6 figs., 5 tabs
Project Cost Estimation for Planning
2010-02-26
For Nevada Department of Transportation (NDOT), there are far too many projects that ultimately cost much more than initially planned. Because project nominations are linked to estimates of future funding and the analysis of system needs, the inaccur...
Robust estimation and hypothesis testing
Tiku, Moti L
2004-01-01
In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...
Estimating Emissions from Railway Traffic
DEFF Research Database (Denmark)
Jørgensen, Morten W.; Sorenson, Spencer C.
1998-01-01
Several parameters of importance for estimating emissions from railway traffic are discussed, and typical results presented. Typical emissions factors from diesel engines and electrical power generation are presented, and the effect of differences in national electrical generation sources...
Travel time estimation using Bluetooth.
2015-06-01
The objective of this study was to investigate the feasibility of using a Bluetooth Probe Detection System (BPDS) to : estimate travel time in an urban area. Specifically, the study investigated the possibility of measuring overall congestion, the : ...
Estimating uncertainty in resolution tests
CSIR Research Space (South Africa)
Goncalves, DP
2006-05-01
Full Text Available frequencies yields a biased estimate, and we provide an improved estimator. An application illustrates how the results derived can be incorporated into a larger uncertainty analysis. © 2006 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2202914] Subject terms: resolution testing; USAF 1951 test target; resolution uncertainty. Paper 050404R received May 20, 2005; revised manuscript received Sep. 2, 2005; accepted for publication Sep. 9, 2005; published online May 10, 2006.
Estimating solar radiation in Ghana
International Nuclear Information System (INIS)
Anane-Fenin, K.
1986-04-01
The estimates of global radiation on a horizontal surface for 9 towns in Ghana, West Africa, are deduced from their sunshine data using two methods developed by Angstrom and Sabbagh. An appropriate regional parameter is determined with the first method and used to predict solar irradiation in all 9 stations with an accuracy better than 15%. Estimates of diffuse solar irradiation are made using the correlations of Page, Liu and Jordan, and three other authors, and the results are examined. (author)
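The Angstrom-type method amounts to a linear regression of the clearness index H/H0 on the sunshine fraction n/N, i.e. fitting H/H0 = a + b(n/N) from station data and then using (a, b) to predict irradiation elsewhere. A minimal least-squares fit might look like this (the data values in the test are illustrative, not Ghanaian measurements):

```python
def fit_angstrom(sunshine_frac, clearness):
    """Ordinary least-squares fit of the Angstrom relation
    H/H0 = a + b*(n/N), given paired observations of sunshine fraction
    n/N and clearness index H/H0."""
    n = len(sunshine_frac)
    mx = sum(sunshine_frac) / n
    my = sum(clearness) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(sunshine_frac, clearness))
    sxx = sum((x - mx) ** 2 for x in sunshine_frac)
    b = sxy / sxx
    a = my - b * mx
    return a, b
```

Once (a, b) are fixed for a region, predicting global radiation at a new site only requires its recorded sunshine hours and the extraterrestrial radiation H0.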
The Psychology of Cost Estimating
Price, Andy
2016-01-01
Cost estimation for large (and even not so large) government programs is a challenge. The number and magnitude of cost overruns associated with large Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) programs highlight the difficulties in developing and promulgating accurate cost estimates. These overruns can be the result of inadequate technology readiness or requirements definition, the whims of politicians or government bureaucrats, or even failures of the cost estimating profession itself. However, there may be another reason for cost overruns that is right in front of us, but only recently have we begun to grasp it: the fact that cost estimators and their customers are human. The last 70+ years of research into human psychology and behavioral economics have yielded amazing findings into how we humans process and use information to make judgments and decisions. What these scientists have uncovered is surprising: humans are often irrational and illogical beings, making decisions based on factors such as emotion and perception, rather than facts and data. These built-in biases in our thinking directly affect how we develop our cost estimates and how those cost estimates are used. We cost estimators can use this knowledge of biases to improve our cost estimates and also to improve how we communicate and work with our customers. By understanding how our customers think, and more importantly, why they think the way they do, we can have more productive relationships and greater influence. By using psychology to our advantage, we can more effectively help the decision maker and our organizations make fact-based decisions.
Estimating emissions from railway traffic
Energy Technology Data Exchange (ETDEWEB)
Joergensen, M.W.; Sorenson, C.
1997-07-01
The report discusses methods that can be used to estimate the emissions from various kinds of railway traffic. The methods are based on the estimation of the energy consumption of the train, so that comparisons can be made between electric and diesel driven trains. Typical values are given for the necessary traffic parameters, emission factors, and train loading. Detailed models for train energy consumption are presented, as well as empirically based methods using average train speed and distance between stops. (au)
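The energy-based comparison reduces to one multiplication per traction mode: estimated traction energy times a fuel- or generation-mix-specific emission factor. The factors below are placeholders for illustration, not values from the report.

```python
# kg CO2 per kWh of traction energy -- illustrative placeholders only
EMISSION_FACTOR = {"diesel": 0.70, "electric": 0.45}

def train_emissions_kg(energy_kwh, mode):
    """Emission estimate: train energy consumption (from a detailed or
    empirical energy model) times the emission factor for the mode."""
    return energy_kwh * EMISSION_FACTOR[mode]
```

Because the same energy model feeds both modes, the diesel/electric comparison depends only on the two emission factors, which for electric traction reflect the national generation mix.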
Efficient, Differentially Private Point Estimators
Smith, Adam
2008-01-01
Differential privacy is a recent notion of privacy for statistical databases that provides rigorous, meaningful confidentiality guarantees, even in the presence of an attacker with access to arbitrary side information. We show that for a large class of parametric probability models, one can construct a differentially private estimator whose distribution converges to that of the maximum likelihood estimator. In particular, it is efficient and asymptotically unbiased. This result provides (furt...
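For a flavour of differentially private estimation, the generic Laplace mechanism below releases a private mean. This is a standard textbook construction under stated assumptions, not the paper's estimator (which perturbs a parametric likelihood-based estimator): clamping each value to a known range [lo, hi] bounds how much any single record can change the sum, and Laplace noise scaled to that sensitivity gives ε-differential privacy.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of a zero-mean Laplace variate with given scale.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_mean(data, lo, hi, epsilon):
    """ε-differentially private mean via the Laplace mechanism: clamp each
    value to [lo, hi] so one record moves the mean by at most (hi-lo)/n,
    then add noise with scale sensitivity/epsilon."""
    n = len(data)
    clamped = [min(max(x, lo), hi) for x in data]
    sensitivity = (hi - lo) / n
    return sum(clamped) / n + laplace_noise(sensitivity / epsilon)
```

As in the abstract's convergence result, the noise scale shrinks like 1/n, so the private estimate approaches the non-private one for large samples.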
Computer-Aided Parts Estimation
Cunningham, Adam; Smart, Robert
1993-01-01
In 1991, Ford Motor Company began deployment of CAPE (computer-aided parts estimating system), a highly advanced knowledge-based system designed to generate, evaluate, and cost automotive part manufacturing plans. CAPE is engineered on an innovative, extensible, declarative process-planning and estimating knowledge representation language, which underpins the CAPE kernel architecture. Many manufacturing processes have been modeled to date, but eventually every significant process in motor veh...
Guideline to Estimate Decommissioning Costs
Energy Technology Data Exchange (ETDEWEB)
Yun, Taesik; Kim, Younggook; Oh, Jaeyoung [KHNP CRI, Daejeon (Korea, Republic of)
2016-10-15
The primary objective of this work is to provide guidelines for estimating the decommissioning cost, as well as to provide the stakeholders with plausible information for understanding the decommissioning activities in a reasonable manner, which eventually contributes to acquiring public acceptance for the nuclear power industry. Although several decommissioning cost estimates have been made for a few commercial nuclear power plants, the different technical, site-specific, and economic assumptions used make it difficult to interpret those cost estimates and compare them with that of a relevant plant. Trustworthy cost estimates are crucial to planning a safe and economic decommissioning project. The typical approach is to break down the decommissioning project into a series of discrete and measurable work activities. Although plant-specific differences derived from the economic and technical assumptions make it difficult for a licensee to estimate reliable decommissioning costs, cost estimation is one of the most crucial processes, since it encompasses the full spectrum of activities from planning to the final evaluation of whether a decommissioning project has been carried out successfully from the safety and economic points of view. Hence, it is clear that tenacious efforts are needed to successfully perform the decommissioning project.
Comparison of density estimators. [Estimation of probability density functions
Energy Technology Data Exchange (ETDEWEB)
Kao, S.; Monahan, J.F.
1977-09-01
Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, and the study of asymptotic properties in depth. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed, as are some simulations. The object is to compare the performance of the various methods in small samples and their sensitivity to change in their parameters, and to attempt to discover at what point a sample is so small that density estimation can no longer be worthwhile. (RWR)
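Of the methods compared in such studies, the kernel estimator is the easiest to sketch. A fixed-bandwidth Gaussian kernel density estimate in one dimension places a scaled Gaussian bump at every sample and averages them:

```python
import math

def gaussian_kde(x, samples, bandwidth):
    """Fixed-bandwidth Gaussian kernel density estimate at point x:
    f_hat(x) = (1/(n*h)) * sum K((x - x_i)/h) with a standard normal kernel."""
    n = len(samples)
    c = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)
```

The small-sample behaviour discussed in the abstract is visible here directly: with few samples the estimate is dominated by the bandwidth choice, which is exactly the parameter-sensitivity question the comparison studies.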
Weldon Spring historical dose estimate
International Nuclear Information System (INIS)
Meshkov, N.; Benioff, P.; Wang, J.; Yuan, Y.
1986-07-01
This study was conducted to determine the estimated radiation doses that individuals in five nearby population groups and the general population in the surrounding area may have received as a consequence of activities at a uranium processing plant in Weldon Spring, Missouri. The study is retrospective and encompasses plant operations (1957-1966), cleanup (1967-1969), and maintenance (1969-1982). The dose estimates for members of the nearby population groups are as follows. Of the three periods considered, the largest doses to the general population in the surrounding area would have occurred during the plant operations period (1957-1966). Dose estimates for the cleanup (1967-1969) and maintenance (1969-1982) periods are negligible in comparison. Based on the monitoring data, if there was a person residing continually in a dwelling 1.2 km (0.75 mi) north of the plant, this person is estimated to have received an average of about 96 mrem/yr (ranging from 50 to 160 mrem/yr) above background during plant operations, whereas the dose to a nearby resident during later years is estimated to have been about 0.4 mrem/yr during cleanup and about 0.2 mrem/yr during the maintenance period. These values may be compared with the background dose in Missouri of 120 mrem/yr
An improved estimation and focusing scheme for vector velocity estimation
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Munk, Peter
1999-01-01
to reduce spatial velocity dispersion. Examples of different velocity vector conditions are shown using the Field II simulation program. A relative accuracy of 10.1 % is obtained for the lateral velocity estimates for a parabolic velocity profile for a flow perpendicular to the ultrasound beam and a signal...
Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of UFEs considering the constraint of integer harmonics. The MVDR filter is designed based on noise statistics making it robust...
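A simplified stand-in for this idea, assuming white noise and uncorrelated estimates, is the weighted least-squares fit of the unconstrained frequency estimates to the harmonic model ω_k ≈ k·ω0; the MVDR design in the paper generalizes this by using the full noise statistics.

```python
def fundamental_wls(freq_estimates, variances):
    """Weighted least-squares estimate of the fundamental frequency from
    harmonic frequency estimates w_k ~ k*w0 with variances v_k:
    w0_hat = sum(k*w_k/v_k) / sum(k^2/v_k)."""
    num = sum((k + 1) * w / v
              for k, (w, v) in enumerate(zip(freq_estimates, variances)))
    den = sum((k + 1) ** 2 / v for k, v in enumerate(variances))
    return num / den
```

Harmonics with lower variance (higher local SNR) automatically receive larger weight, which is the robustness property the optimal filter is after.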
Estimating formwork striking time for concrete mixes
African Journals Online (AJOL)
eobe
In this study, we estimated the time for strength development in concrete cured up to 56 days. Water. In this .... regression analysis using MS Excel 2016 Software performed on the .....
Moving Horizon Estimation and Control
DEFF Research Database (Denmark)
Jørgensen, John Bagterp
successful and applied methodology beyond PID-control for control of industrial processes. The main contribution of this thesis is introduction and definition of the extended linear quadratic optimal control problem for solution of numerical problems arising in moving horizon estimation and control...... problems. Chapter 1 motivates moving horizon estimation and control as a paradigm for control of industrial processes. It introduces the extended linear quadratic control problem and discusses its central role in moving horizon estimation and control. Introduction, application and efficient solution....... It provides an algorithm for computation of the maximal output admissible set for linear model predictive control. Appendix D provides results concerning linear regression. Appendix E discuss prediction error methods for identification of linear models tailored for model predictive control....
Heuristic introduction to estimation methods
International Nuclear Information System (INIS)
Feeley, J.J.; Griffith, J.M.
1982-08-01
The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems
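In the spirit of the simple example the report builds on, the core idea of optimal estimation fits in one formula: two independent estimates of the same quantity are combined with inverse-variance weights, which is the scalar heart of Kalman-type filtering. A minimal sketch:

```python
def combine(x1, var1, x2, var2):
    """Minimum-variance combination of two independent estimates of the
    same quantity: weight each by the inverse of its uncertainty.
    The combined variance is always below both input variances."""
    w1 = var2 / (var1 + var2)
    est = w1 * x1 + (1.0 - w1) * x2
    var = var1 * var2 / (var1 + var2)
    return est, var
```

Note how the noisier source (larger variance) is automatically down-weighted but never discarded; this is the "optimal way of using or allowing for" each information source.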
Estimation of effective wind speed
Østergaard, K. Z.; Brath, P.; Stoustrup, J.
2007-07-01
The wind speed has a huge impact on the dynamic response of a wind turbine. Because of this, many control algorithms use a measure of the wind speed to increase performance, e.g. by gain scheduling and feed-forward. Unfortunately, no accurate online measurement of the effective wind speed is available from direct measurements, which means that it must be estimated in order to make such control methods applicable in practice. In this paper a new method is presented for the estimation of the effective wind speed. First, the rotor speed and aerodynamic torque are estimated by a combined state and input observer. These two variables, combined with the measured pitch angle, are then used to calculate the effective wind speed by an inversion of a static aerodynamic model.
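The final inversion step can be sketched numerically. Everything below (air density, rotor radius, and the toy Cp surface) is illustrative; a real implementation would use the turbine's aerodynamic lookup tables. Given the observer's torque estimate, the measured rotor speed, and the pitch angle, the effective wind speed is found by bisection on the static model, assuming the search interval lies where the modeled torque is monotone in wind speed.

```python
import math

RHO, R = 1.225, 40.0  # air density [kg/m^3], rotor radius [m] -- illustrative

def cp(lam, pitch_deg):
    # Toy power coefficient surface peaking at tip-speed ratio 8.
    return 0.5 * math.exp(-((lam - 8.0) ** 2) / 10.0 - 0.05 * pitch_deg)

def aero_torque(v, omega, pitch_deg):
    """Static aerodynamic model: torque = power / rotor speed."""
    lam = omega * R / v
    power = 0.5 * RHO * math.pi * R ** 2 * cp(lam, pitch_deg) * v ** 3
    return power / omega

def effective_wind_speed(torque_est, omega, pitch_deg, lo=0.5, hi=30.0):
    """Invert the static model by bisection: find v such that the modelled
    torque matches the observer's aerodynamic torque estimate."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if aero_torque(mid, omega, pitch_deg) < torque_est:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In a controller this inversion would run every sample, fed by the observer's current torque and speed estimates.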
Estimation and valuation in accounting
Directory of Open Access Journals (Sweden)
Cicilia Ionescu
2014-03-01
Full Text Available The relationships of the enterprise with the external environment give rise to a range of informational needs. Satisfying those needs requires the production of coherent, comparable, relevant and reliable information included in the individual or consolidated financial statements. International Financial Reporting Standards (IAS/IFRS) aim to ensure the comparability and relevance of the accounting information, providing, among other things, details about the issue of accounting estimates and changes in accounting estimates. Valuation is a continually used process, employed in order to assign values to the elements that are to be recognised in the financial statements. Most of the time, the values reflected in the books are clear: they are recorded in the contracts with third parties, in the supporting documents, etc. However, the uncertainties under which a reporting entity operates mean that, sometimes, the values assigned or attributable to some of the items composing the financial statements must be determined by using estimates.
Integral Criticality Estimators in MCATK
Energy Technology Data Exchange (ETDEWEB)
Nolen, Steven Douglas [Los Alamos National Laboratory; Adams, Terry R. [Los Alamos National Laboratory; Sweezy, Jeremy Ed [Los Alamos National Laboratory
2016-06-14
The Monte Carlo Application ToolKit (MCATK) is a component-based software toolset for delivering customized particle transport solutions using the Monte Carlo method. Currently under development in the XCP Monte Carlo group at Los Alamos National Laboratory, the toolkit has the ability to estimate the k_eff and α eigenvalues for static geometries. This paper presents a description of the estimators and variance reduction techniques available in the toolkit and includes a preview of those slated for future releases. Along with the description of the underlying algorithms is a description of the available user inputs for controlling the iterations. The paper concludes with a comparison of the MCATK results with those provided by analytic solutions. The results match within expected statistical uncertainties and demonstrate MCATK’s usefulness in estimating these important quantities.
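To illustrate the kind of quantity such estimators compute, here is a toy analog Monte Carlo estimate of k-infinity in an infinite homogeneous medium. This is a generic textbook construction, not MCATK's algorithm, and the cross sections and ν below are illustrative; its expected value is ν·σf/(σf+σc), which the analytic comparison in the test exploits.

```python
import random

def k_infinity_analog(nu, sigma_f, sigma_c, sigma_s, histories=100000, seed=1):
    """Analog Monte Carlo k-infinity: follow each neutron from collision to
    collision until it is absorbed, and score nu when the absorption is a
    fission. The mean score per history estimates k = nu*sigma_f/(sigma_f+sigma_c)."""
    rng = random.Random(seed)
    sigma_t = sigma_f + sigma_c + sigma_s
    score = 0.0
    for _ in range(histories):
        while True:
            xi = rng.random() * sigma_t
            if xi < sigma_s:
                continue                  # scattering: keep following the neutron
            if xi < sigma_s + sigma_f:
                score += nu               # absorbed by fission: score nu new neutrons
            break                         # absorbed either way: history ends
    return score / histories
```

Production codes replace this analog scoring with collision, absorption, and track-length estimators plus variance reduction, but the estimated eigenvalue is the same.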
Order statistics & inference estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co
Methods for estimating the semivariogram
DEFF Research Database (Denmark)
Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle
2002-01-01
. In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation...... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions...
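For readers who want a concrete starting point, the classical method-of-moments (Matheron) empirical semivariogram that feeds the least squares approaches can be sketched as below; the synthetic coordinates, values and bin edges are our own illustration, not the Kattegat data:

```python
import numpy as np

def empirical_semivariogram(coords, values, bins):
    """Matheron estimator: gamma(h) = half the mean squared increment
    over point pairs whose separation distance falls in each bin."""
    n = len(values)
    d, sq = [], []
    for i in range(n):
        for j in range(i + 1, n):
            d.append(np.linalg.norm(coords[i] - coords[j]))
            sq.append((values[i] - values[j]) ** 2)
    d, sq = np.array(d), np.array(sq)
    h, gamma = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (d >= lo) & (d < hi)
        if mask.any():
            h.append(d[mask].mean())
            gamma.append(0.5 * sq[mask].mean())
    return np.array(h), np.array(gamma)

# Spatially uncorrelated data: the semivariogram should sit near the
# sill (the variance, here 1), with no rising range structure.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(80, 2))
values = rng.normal(size=80)
h, gamma = empirical_semivariogram(coords, values, np.linspace(0, 5, 6))
```

A least squares model fit would then minimize the discrepancy between `gamma` and a parametric model (spherical, exponential, etc.) evaluated at `h`.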
Albedo estimation for scene segmentation
Energy Technology Data Exchange (ETDEWEB)
Lee, C H; Rosenfeld, A
1983-03-01
Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3d information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.
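The compensation idea can be illustrated with a toy one-dimensional Lambertian model; the curvature profile, albedo levels and threshold below are invented purely for illustration:

```python
import numpy as np

# Synthetic 1-D "scene": one curved surface carrying two albedo levels.
x = np.linspace(-1, 1, 200)
cos_theta = np.sqrt(1 - 0.8 * x**2)        # assumed known/estimated orientation
albedo = np.where(x < 0, 0.3, 0.8)         # two true reflectivity classes
intensity = albedo * cos_theta             # Lambertian shading model

# Raw intensities vary smoothly with surface orientation; dividing out
# the shading yields piecewise-constant reflectivity estimates, so a
# single histogram threshold now separates the two classes cleanly.
reflectivity = intensity / cos_theta
labels = (reflectivity > 0.55).astype(int)  # threshold in the histogram valley
```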
Estimation of strong ground motion
International Nuclear Information System (INIS)
Watabe, Makoto
1993-01-01
A fault model has been developed to estimate strong ground motion in consideration of the characteristics of the seismic source and the propagation path of seismic waves. There are two different approaches in the model. The first is a theoretical approach, while the second is a semi-empirical approach. Though the latter is more practical than the former for estimating input motions, it requires at least the small-event records, the value of the seismic moment of the small event and the fault model of the large event
Multicollinearity and maximum entropy leuven estimator
Sudhanshu Mishra
2004-01-01
Multicollinearity is a serious problem in applied regression analysis. Q. Paris (2001) introduced the MEL estimator to resolve the multicollinearity problem. This paper improves the MEL estimator to the Modular MEL (MMEL) estimator and shows by Monte Carlo experiments that the MMEL estimator performs significantly better than both the OLS and MEL estimators.
Unrecorded Alcohol Consumption: Quantitative Methods of Estimation
Razvodovsky, Y. E.
2010-01-01
unrecorded alcohol; methods of estimation In this paper we focus on methods of estimating the level of unrecorded alcohol consumption. Present methods of estimation allow only an approximate estimation of the level of unrecorded alcohol consumption. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods for estimating unrecorded alcohol consumption.
Collider Scaling and Cost Estimation
International Nuclear Information System (INIS)
Palmer, R.B.
1986-01-01
This paper deals with collider cost and scaling. The main points of the discussion are the following ones: 1) scaling laws and cost estimation: accelerating gradient requirements, total stored RF energy considerations, peak power consideration, average power consumption; 2) cost optimization; 3) Bremsstrahlung considerations; 4) Focusing optics: conventional, laser focusing or super disruption. 13 refs
Helicopter Toy and Lift Estimation
Shakerin, Said
2013-01-01
A $1 plastic helicopter toy (called a Wacky Whirler) can be used to demonstrate lift. Students can make basic measurements of the toy, use reasonable assumptions and, with the lift formula, estimate the lift, and verify that it is sufficient to overcome the toy's weight. (Contains 1 figure.)
Estimation of potential uranium resources
International Nuclear Information System (INIS)
Curry, D.L.
1977-09-01
Potential estimates, like reserves, are limited by the information on hand at the time and are not intended to indicate the ultimate resources. Potential estimates are based on geologic judgement, so their reliability is dependent on the quality and extent of geologic knowledge. Reliability differs for each of the three potential resource classes. It is greatest for probable potential resources because of the greater knowledge base resulting from the advanced stage of exploration and development in established producing districts where most of the resources in this class are located. Reliability is least for speculative potential resources because no significant deposits are known, and favorability is inferred from limited geologic data. Estimates of potential resources are revised as new geologic concepts are postulated, as new types of uranium ore bodies are discovered, and as improved geophysical and geochemical techniques are developed and applied. Advances in technology that permit the exploitation of deep or low-grade deposits, or the processing of ores of previously uneconomic metallurgical types, also will affect the estimates
An Improved Cluster Richness Estimator
Energy Technology Data Exchange (ETDEWEB)
Rozo, Eduardo; /Ohio State U.; Rykoff, Eli S.; /UC, Santa Barbara; Koester, Benjamin P.; /Chicago U. /KICP, Chicago; McKay, Timothy; /Michigan U.; Hao, Jiangang; /Michigan U.; Evrard, August; /Michigan U.; Wechsler, Risa H.; /SLAC; Hansen, Sarah; /Chicago U. /KICP, Chicago; Sheldon, Erin; /New York U.; Johnston, David; /Houston U.; Becker, Matthew R.; /Chicago U. /KICP, Chicago; Annis, James T.; /Fermilab; Bleem, Lindsey; /Chicago U.; Scranton, Ryan; /Pittsburgh U.
2009-08-03
Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L{sub X}-richness relation, from {sigma}{sub lnL{sub X}}{sup 2} = (0.86 {+-} 0.02){sup 2} to {sigma}{sub lnL{sub X}}{sup 2} = (0.69 {+-} 0.02){sup 2}. Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the L{sub X}-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.
Estimation of Bridge Reliability Distributions
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...
Estimation of Motion Vector Fields
DEFF Research Database (Denmark)
Larsen, Rasmus
1993-01-01
This paper presents an approach to the estimation of 2-D motion vector fields from time-varying image sequences. We use a piecewise smooth model based on coupled vector/binary Markov random fields. We find the maximum a posteriori solution by simulated annealing. The algorithm generates sample...... fields by means of stochastic relaxation implemented via the Gibbs sampler....
Multispacecraft current estimates at swarm
DEFF Research Database (Denmark)
Dunlop, M. W.; Yang, Y.-Y.; Yang, J.-Y.
2015-01-01
During the first several months of the three-spacecraft Swarm mission all three spacecraft came repeatedly into close alignment, providing an ideal opportunity for validating the proposed dual-spacecraft method for estimating current density from the Swarm magnetic field data. Two of the Swarm...
Estimating Swedish biomass energy supply
International Nuclear Information System (INIS)
Johansson, J.; Lundqvist, U.
1999-01-01
Biomass is suggested to supply an increasing amount of energy in Sweden. There have been several studies estimating the potential supply of biomass energy, including that of the Swedish Energy Commission in 1995. The Energy Commission based its estimates of biomass supply on five other analyses which presented a wide variation in estimated future supply, in large part due to differing assumptions regarding important factors. In this paper, these studies are assessed, and the estimated potential biomass energy supplies are discussed with regard to prices, technical progress and energy policy. The supply of logging residues depends on the demand for wood products and is limited by ecological, technological, and economic restrictions. The supply of stemwood from early thinning for energy and of straw from cereal and oil seed production is mainly dependent upon economic considerations. One major factor for the supply of willow and reed canary grass is the size of arable land projected to be not needed for food and fodder production. Future supply of biomass energy depends on energy prices and technical progress, both of which are driven by energy policy priorities. Biomass energy has to compete with other energy sources as well as with alternative uses of biomass such as forest products and food production. Technical progress may decrease the costs of biomass energy and thus increase the competitiveness. Economic instruments, including carbon taxes and subsidies, and allocation of research and development resources, are driven by energy policy goals and can change the competitiveness of biomass energy
Estimates of wildland fire emissions
Yongqiang Liu; John J. Qu; Wanting Wang; Xianjun Hao
2013-01-01
Wildland fire emissions can significantly affect regional and global air quality, radiation, climate, and the carbon cycle. A fundamental and yet challenging prerequisite to understanding the environmental effects is to accurately estimate fire emissions. This chapter describes and analyzes fire emission calculations. Various techniques (field measurements, empirical...
State Estimation for Tensegrity Robots
Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas
2016-01-01
Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.
Fuel Estimation Using Dynamic Response
National Research Council Canada - National Science Library
Hines, Michael S
2007-01-01
...?s simulated satellite (SimSAT) to known control inputs. With an iterative process, the moment of inertia of SimSAT about the yaw axis was estimated by matching a model of SimSAT to the measured angular rates...
Empirical estimates of the NAIRU
DEFF Research Database (Denmark)
Madsen, Jakob Brøchner
2005-01-01
equations. In this paper it is shown that a high proportion of the constant term is a statistical artefact and suggests a new method which yields approximately unbiased estimates of the time-invariant NAIRU. Using data for OECD countries it is shown that the constant-term correction lowers the unadjusted...
Online Wavelet Complementary velocity Estimator.
Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin
2018-02-01
In this paper, we have proposed a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high and low resolutions of the data. The proposed complementary estimator combines the two resolutions of velocity, acquired from numerical differentiation of the position sensor and numerical integration of the acceleration sensor, by considering a fixed moving-horizon window as input to the wavelet filter. Because it uses wavelet filters, it can be implemented as a parallel procedure. With this method the velocity is estimated numerically without the high noise of differentiators or the drifting bias of integration, and with less delay, which makes it suitable for active vibration control in high-precision mechatronic systems using Direct Velocity Feedback (DVF) methods. This method allows us to make velocity sensors with fewer mechanically moving parts, which makes them suitable for fast miniature structures. We have compared this method with Kalman and Butterworth filters with respect to stability and delay, and benchmarked them by long-time integration of the velocity to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
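As a rough illustration of the complementary principle (though not of the wavelet filter banks themselves), a plain time-domain complementary blend of differentiated position and integrated acceleration can be sketched as follows; the gain `alpha` and the synthetic signals are our own assumptions:

```python
import numpy as np

def complementary_velocity(pos, acc, dt, alpha=0.98):
    """Blend integrated acceleration (clean at high frequency, but drifts)
    with differentiated position (noisy, but drift-free at low frequency)."""
    v = np.zeros(len(pos))
    for k in range(1, len(pos)):
        v_diff = (pos[k] - pos[k - 1]) / dt          # differentiation branch
        v_int = v[k - 1] + acc[k - 1] * dt           # integration branch
        v[k] = alpha * v_int + (1 - alpha) * v_diff  # complementary blend
    return v

# Sanity check on a clean sinusoid where the true velocity is known.
dt = 1e-3
t = np.arange(0, 2, dt)
true_v = np.sin(2 * np.pi * t)
pos = -np.cos(2 * np.pi * t) / (2 * np.pi)    # integral of true_v
acc = 2 * np.pi * np.cos(2 * np.pi * t)       # derivative of true_v
v_hat = complementary_velocity(pos, acc, dt)
```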
Load Estimation from Modal Parameters
DEFF Research Database (Denmark)
Aenlle, Manuel López; Brincker, Rune; Fernández, Pelayo Fernández
2007-01-01
In Natural Input Modal Analysis the modal parameters are estimated just from the responses while the loading is not recorded. However, engineers are sometimes interested in knowing some features of the loading acting on a structure. In this paper, a procedure to determine the loading from a FRF m...
Gini estimation under infinite variance
A. Fontanari (Andrea); N.N. Taleb (Nassim Nicholas); P. Cirillo (Pasquale)
2018-01-01
We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α∈(1,2)). We show that, in such a case, the Gini coefficient
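A small Monte Carlo sketch of the setting: the standard nonparametric Gini estimator applied to Pareto samples with tail index α = 1.5 (finite mean, infinite variance). The sample size and replication count are illustrative choices, not the authors' setup:

```python
import numpy as np

def gini(x):
    """Nonparametric Gini estimator via the sorted-sample formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return (2 * np.sum(i * x) / (n * np.sum(x))) - (n + 1) / n

rng = np.random.default_rng(1)
alpha = 1.5                      # tail index in (1, 2): finite mean, infinite variance
g_true = 1 / (2 * alpha - 1)     # theoretical Gini of a Pareto(alpha), here 0.5
# rng.pareto draws the Lomax form; adding 1 gives Pareto with minimum 1.
samples = [gini(rng.pareto(alpha, 1000) + 1) for _ in range(200)]
```

The spread of `samples` around `g_true` gives a feel for how noisy the plug-in estimator is when the variance is infinite.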
Software Cost-Estimation Model
Tausworthe, R. C.
1985-01-01
Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.
Correlation Dimension Estimation for Classification
Czech Academy of Sciences Publication Activity Database
Jiřina, Marcel; Jiřina jr., M.
2006-01-01
Roč. 1, č. 3 (2006), s. 547-557 ISSN 1895-8648 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : correlation dimension * probability density estimation * classification * UCI MLR Subject RIV: BA - General Mathematics
Molecular pathology and age estimation.
Meissner, Christoph; Ritz-Timme, Stefanie
2010-12-15
Over the course of our lifetime a stochastic process leads to gradual alterations of biomolecules on the molecular level, a process that is called ageing. Important changes are observed on the DNA-level as well as on the protein level and are the cause and/or consequence of our 'molecular clock', influenced by genetic as well as environmental parameters. These alterations on the molecular level may aid in forensic medicine to estimate the age of a living person, a dead body or even skeletal remains for identification purposes. Four such important alterations have become the focus of molecular age estimation in the forensic community over the last two decades. The age-dependent accumulation of the 4977bp deletion of mitochondrial DNA and the attrition of telomeres along with ageing are two important processes at the DNA-level. Among a variety of protein alterations, the racemisation of aspartic acid and advanced glycation endproducts have already been tested for forensic applications. At the moment the racemisation of aspartic acid represents the pinnacle of molecular age estimation for three reasons: an excellent standardization of sampling and methods, an evaluation of different variables in many published studies and highest accuracy of results. The three other mentioned alterations often lack standardized procedures, published data are sparse and often have the character of pilot studies. Nevertheless it is important to evaluate molecular methods for their suitability in forensic age estimation, because supplementary methods will help to extend and refine accuracy and reliability of such estimates. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
23 CFR 635.115 - Agreement estimate.
2010-04-01
... CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.115 Agreement estimate. (a) Following the award of contract, an agreement estimate based on the contract unit prices and estimated quantities shall be...
On semiautomatic estimation of surface area
DEFF Research Database (Denmark)
Dvorak, J.; Jensen, Eva B. Vedel
2013-01-01
and the surfactor. For ellipsoidal particles, it is shown that the flower estimator is equal to the pivotal estimator based on support function measurements along four perpendicular rays. This result makes the pivotal estimator a powerful approximation to the flower estimator. In a simulation study of prolate....... If the segmentation is correct the estimate is computed automatically, otherwise the expert performs the necessary measurements manually. In case of convex particles we suggest to base the semiautomatic estimation on the so-called flower estimator, a new local stereological estimator of particle surface area....... For convex particles, the estimator is equal to four times the area of the support set (flower set) of the particle transect. We study the statistical properties of the flower estimator and compare its performance to that of two discretizations of the flower estimator, namely the pivotal estimator...
Estimating sediment discharge: Appendix D
Gray, John R.; Simões, Francisco J. M.
2008-01-01
Sediment-discharge measurements usually are available on a discrete or periodic basis. However, estimates of sediment transport often are needed for unmeasured periods, such as when daily or annual sediment-discharge values are sought, or when estimates of transport rates for unmeasured or hypothetical flows are required. Selected methods for estimating suspended-sediment, bed-load, bed-material-load, and total-load discharges have been presented in some detail elsewhere in this volume. The purposes of this contribution are to present some limitations and potential pitfalls associated with obtaining and using the requisite data and equations to estimate sediment discharges and to provide guidance for selecting appropriate estimating equations. Records of sediment discharge are derived from data collected with sufficient frequency to obtain reliable estimates for the computational interval and period. Most sediment-discharge records are computed at daily or annual intervals based on periodically collected data, although some partial records represent discrete or seasonal intervals such as those for flood periods. The method used to calculate sediment-discharge records is dependent on the types and frequency of available data. Records for suspended-sediment discharge computed by methods described by Porterfield (1972) are most prevalent, in part because measurement protocols and computational techniques are well established and because suspended sediment composes the bulk of sediment discharges for many rivers. Discharge records for bed load, total load, or in some cases bed-material load plus wash load are less common. Reliable estimation of sediment discharges presupposes that the data on which the estimates are based are comparable and reliable. Unfortunately, data describing a selected characteristic of sediment were not necessarily derived—collected, processed, analyzed, or interpreted—in a consistent manner. For example, bed-load data collected with
Estimating Foreign Exchange Reserve Adequacy
Directory of Open Access Journals (Sweden)
Abdul Hakim
2013-04-01
Full Text Available Accumulating foreign exchange reserves, despite their cost and their impacts on other macroeconomic variables, provides some benefits. This paper models such foreign exchange reserves. To measure the adequacy of foreign exchange reserves for import, it uses the total reserves-to-import ratio (TRM). The chosen independent variables are gross domestic product growth, exchange rates, opportunity cost, and a dummy variable separating the pre- and post-1997 Asian financial crisis. To estimate the risky TRM value, this paper uses conditional Value-at-Risk (VaR), with the help of the Glosten-Jagannathan-Runkle (GJR) model to estimate the conditional volatility. The results suggest that all independent variables significantly influence TRM. They also suggest that the short and long run volatilities are evident, with the additional evidence of asymmetric effects of negative and positive past shocks. The VaR, which are calculated assuming both normal and t distributions, provide similar results, namely violations in 2005 and 2008.
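A minimal sketch of the volatility-then-VaR pipeline, assuming a GJR-type recursion with conditionally normal innovations; the parameter values and synthetic returns below are placeholders, not the paper's fitted estimates:

```python
import numpy as np
from math import sqrt

def gjr_var(returns, omega, alpha, gamma, beta, p_z=-1.645):
    """One-step-ahead 95% VaR under a GJR volatility recursion:
    s2[t+1] = omega + (alpha + gamma*I[r<0]) * r^2 + beta * s2[t],
    with conditionally normal innovations (z_0.05 = -1.645)."""
    s2 = np.var(returns)                      # initialize at sample variance
    for r in returns:
        lever = gamma if r < 0 else 0.0       # asymmetric (leverage) term
        s2 = omega + (alpha + lever) * r * r + beta * s2
    return -(np.mean(returns) + sqrt(s2) * p_z)

rng = np.random.default_rng(5)
rets = rng.normal(0, 0.02, 500)               # illustrative return series
var95 = gjr_var(rets, omega=1e-6, alpha=0.05, gamma=0.05, beta=0.90)
```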
Organ volume estimation using SPECT
Zaidi, H
1996-01-01
Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of absolute activity contained in the thyroid gland. In order to improve single-photon emission computed tomography (SPECT) quantitation, attenuation correction was performed according to Chang's algorithm. The dual-window method was used for scatter subtraction. We used a Monte Carlo simulation of the SPECT system to accurately determine the scatter multiplier factor k. Volume estimation using SPECT was performed by summing up the volume elements (voxels) lying within the contour of the object, determined by a fixed threshold and the gray level histogram (GLH) method. Thyroid phantom and patient studies were performed and the influence of 1) fixed thresholding, 2) automatic thresholding, 3) attenuation, 4) scatter, and 5) reconstruction filter were investigated. This study shows that accurate volume estimation of the thyroid gland is feasible when accurate corrections are perform...
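The fixed-threshold voxel-counting step can be sketched on a synthetic phantom; the threshold fraction, voxel size and noise level are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def threshold_volume(image, voxel_volume_ml, threshold_fraction=0.4):
    """Estimate organ volume by counting voxels above a fixed fraction
    of the maximum reconstructed intensity."""
    thr = threshold_fraction * image.max()
    return np.count_nonzero(image > thr) * voxel_volume_ml

# Synthetic phantom: a bright sphere of known volume inside a 64^3 volume.
n, r = 64, 10.0
z, y, x = np.mgrid[:n, :n, :n]
sphere = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) <= r ** 2
image = sphere * 100.0 + np.random.default_rng(2).normal(0, 5, (n, n, n))

vol = threshold_volume(image, voxel_volume_ml=0.1)
true_vol = (4 / 3) * np.pi * r ** 3 * 0.1     # analytic sphere volume in ml
```

In practice the threshold choice dominates the accuracy, which is why the paper compares fixed and automatic (gray level histogram) thresholding.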
Comments on mutagenesis risk estimation
International Nuclear Information System (INIS)
Russell, W.L.
1976-01-01
Several hypotheses and concepts have tended to oversimplify the problem of mutagenesis and can be misleading when used for genetic risk estimation. These include: the hypothesis that radiation-induced mutation frequency depends primarily on the DNA content per haploid genome, the extension of this concept to chemical mutagenesis, the view that, since DNA is DNA, mutational effects can be expected to be qualitatively similar in all organisms, the REC unit, and the view that mutation rates from chronic irradiation can be theoretically and accurately predicted from acute irradiation data. Therefore, direct determination of frequencies of transmitted mutations in mammals continues to be important for risk estimation, and the specific-locus method in mice is shown to be not as expensive as is commonly supposed for many of the chemical testing requirements
Bayesian estimation in homodyne interferometry
International Nuclear Information System (INIS)
Olivares, Stefano; Paris, Matteo G A
2009-01-01
We address phase-shift estimation by means of a squeezed vacuum probe and homodyne detection. We analyse the Bayesian estimator, which is known to asymptotically saturate the classical Cramer-Rao bound to the variance, and discuss convergence looking at the a posteriori distribution as the number of measurements increases. We also suggest two feasible adaptive methods, acting on the squeezing parameter and/or the homodyne local oscillator phase, which allow us to optimize homodyne detection and approach the ultimate bound to precision imposed by the quantum Cramer-Rao theorem. The performances of our two-step methods are investigated by means of Monte Carlo simulated experiments with a small number of homodyne data, thus giving a quantitative meaning to the notion of asymptotic optimality.
Parameter estimation and inverse problems
Aster, Richard C; Thurber, Clifford H
2005-01-01
Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...
Cost Estimates and Investment Decisions
International Nuclear Information System (INIS)
Emhjellen, Kjetil; Emhjellen Magne; Osmundsen, Petter
2001-08-01
When evaluating new investment projects, oil companies traditionally use the discounted cashflow method. This method requires expected cashflows in the numerator and a risk adjusted required rate of return in the denominator in order to calculate net present value. The capital expenditure (CAPEX) of a project is one of the major cashflows used to calculate net present value. Usually the CAPEX is given by a single cost figure, with some indication of its probability distribution. In the oil industry and many other industries, it is common practice to report a CAPEX that is the estimated 50/50 (median) CAPEX instead of the estimated expected (expected value) CAPEX. In this article we demonstrate how the practice of using a 50/50 (median) CAPEX, when the cost distributions are asymmetric, causes project valuation errors and therefore may lead to wrong investment decisions with acceptance of projects that have negative net present values. (author)
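The median-versus-expected-value gap is easy to demonstrate with any right-skewed cost distribution; here a lognormal is assumed purely for illustration:

```python
import numpy as np

# Lognormal cost distribution: right-skewed, as project cost overruns tend to be.
rng = np.random.default_rng(3)
capex = rng.lognormal(mean=np.log(100), sigma=0.5, size=100_000)

median_capex = np.median(capex)     # the "50/50" figure often reported
expected_capex = capex.mean()       # what NPV arithmetic actually requires

# For a right-skewed distribution the median sits below the mean, so
# plugging the 50/50 figure into the cashflow understates expected cost
# and inflates the apparent net present value.
understatement = expected_capex - median_capex
```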
Extended exome sequencing identifies BACH2 as a novel major risk locus for Addison's disease
Eriksson, D.; Bianchi, Matteo; Landegren, Nils; Nordin, Jessika; Dalin, Frida; Mathioudaki, Argyri; Eriksson, G. N.; Hultin-Rosenberg, Lina; Dahlqvist, Johanna; Zetterqvist, H.; Karlsson, Andreas; Hallgren, Anna; Farias, F. H. G.; Murén, Eva; Ahlgren, Kerstin M.
2016-01-01
Background: Autoimmune disease is one of the leading causes of morbidity and mortality worldwide. In Addison's disease, the adrenal glands are targeted by destructive autoimmunity. Despite being the most common cause of primary adrenal failure, little is known about its aetiology. Methods: To understand the genetic background of Addison's disease, we utilized the extensively characterized patients of the Swedish Addison Registry. We developed an extended exome capture array comprising a selected se...
BRIP1 (BACH1) variants and familial breast cancer risk: a case-control study
Directory of Open Access Journals (Sweden)
Bugert Peter
2007-05-01
Full Text Available Abstract Background Inactivating and truncating mutations of the nuclear BRCA1-interacting protein 1 (BRIP1) have been shown to be the major cause of Fanconi anaemia and, due to subsequent alterations of BRCA1 function, predispose to breast cancer (BC). Methods We investigated the effect of BRIP1 -64G>A and Pro919Ser on familial BC risk by means of TaqMan allelic discrimination, analysing BRCA1/BRCA2 mutation-negative index patients of 571 German BC families and 712 control individuals. Results No significant differences in genotype frequencies between BC cases and controls for BRIP1 -64G>A and Pro919Ser were observed. Conclusion We found no effect of the putatively functional BRIP1 variants -64G>A and Pro919Ser on the risk of familial BC.
Musical rhythm spectra from Bach to Joplin obey a 1/f power law.
Levitin, Daniel J; Chordia, Parag; Menon, Vinod
2012-03-06
Much of our enjoyment of music comes from its balance of predictability and surprise. Musical pitch fluctuations follow a 1/f power law that precisely achieves this balance. Musical rhythms, especially those of Western classical music, are considered highly regular and predictable, and this predictability has been hypothesized to underlie rhythm's contribution to our enjoyment of music. Are musical rhythms indeed entirely predictable and how do they vary with genre and composer? To answer this question, we analyzed the rhythm spectra of 1,788 movements from 558 compositions of Western classical music. We found that an overwhelming majority of rhythms obeyed a 1/f(β) power law across 16 subgenres and 40 composers, with β ranging from ∼0.5-1. Notably, classical composers, whose compositions are known to exhibit nearly identical 1/f pitch spectra, demonstrated distinctive 1/f rhythm spectra: Beethoven's rhythms were among the most predictable, and Mozart's among the least. Our finding of the ubiquity of 1/f rhythm spectra in compositions spanning nearly four centuries demonstrates that, as with musical pitch, musical rhythms also exhibit a balance of predictability and surprise that could contribute in a fundamental way to our aesthetic experience of music. Although music compositions are intended to be performed, the fact that the notated rhythms follow a 1/f spectrum indicates that such structure is no mere artifact of performance or perception, but rather, exists within the written composition before the music is performed. Furthermore, composers systematically manipulate (consciously or otherwise) the predictability in 1/f rhythms to give their compositions unique identities.
From the Cover: Musical rhythm spectra from Bach to Joplin obey a 1/f power law
Levitin, Daniel J.; Chordia, Parag; Menon, Vinod
2012-03-01
Much of our enjoyment of music comes from its balance of predictability and surprise. Musical pitch fluctuations follow a 1/f power law that precisely achieves this balance. Musical rhythms, especially those of Western classical music, are considered highly regular and predictable, and this predictability has been hypothesized to underlie rhythm's contribution to our enjoyment of music. Are musical rhythms indeed entirely predictable and how do they vary with genre and composer? To answer this question, we analyzed the rhythm spectra of 1,788 movements from 558 compositions of Western classical music. We found that an overwhelming majority of rhythms obeyed a 1/fβ power law across 16 subgenres and 40 composers, with β ranging from ∼0.5-1. Notably, classical composers, whose compositions are known to exhibit nearly identical 1/f pitch spectra, demonstrated distinctive 1/f rhythm spectra: Beethoven's rhythms were among the most predictable, and Mozart's among the least. Our finding of the ubiquity of 1/f rhythm spectra in compositions spanning nearly four centuries demonstrates that, as with musical pitch, musical rhythms also exhibit a balance of predictability and surprise that could contribute in a fundamental way to our aesthetic experience of music. Although music compositions are intended to be performed, the fact that the notated rhythms follow a 1/f spectrum indicates that such structure is no mere artifact of performance or perception, but rather, exists within the written composition before the music is performed. Furthermore, composers systematically manipulate (consciously or otherwise) the predictability in 1/f rhythms to give their compositions unique identities.
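Estimating the exponent β of a 1/f^β spectrum amounts to a log-log fit of spectral power against frequency. Below is a sketch under simple assumptions (periodogram estimate, ordinary least squares fit, synthetic 1/f noise for the sanity check), not the authors' analysis pipeline:

```python
import numpy as np

def spectral_exponent(signal, fs=1.0):
    """Estimate beta in S(f) ~ 1/f^beta by a log-log least squares fit
    to the periodogram (the DC bin is excluded)."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1 / fs)[1:]
    power = np.abs(np.fft.rfft(signal))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

# Sanity check on synthetic 1/f noise built by spectrally shaping white noise.
rng = np.random.default_rng(4)
n = 4096
f = np.fft.rfftfreq(n)[1:]
spectrum = rng.normal(size=len(f)) + 1j * rng.normal(size=len(f))
spectrum *= f ** -0.5                       # amplitude ~ f^-1/2, so power ~ 1/f
signal = np.fft.irfft(np.concatenate(([0], spectrum)), n)
beta = spectral_exponent(signal)            # should recover beta near 1
```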
A quantitative study of seven historically informed performances of Bach's BWV1007 Prelude
Vaquero, C.
2015-01-01
In the field of early music, the urge to realize historically informed interpretations has led to new perspectives about our musical legacy from scholars and performers alike. Consequently, different schools of early music performance practice have been developed through the 20th and 21st centuries.
If Bach Had Owned a Computer: Technology and Teaching the Novel.
Boyer, Jay
1987-01-01
Considers the changes the world has undergone (advances in technology) since World War II and uses this as a basis to analyze why students increasingly seem to find the novel a difficult form to handle. (NKA)
Legg, Robert
2012-01-01
This article applies Bourdieu's notion of "cultural capital" to historical, documentary research which investigates the construction of a scholastic canon within England's A-level music examinations. A digest of the ways in which this canon evolved between 1951 and 1986 is presented in support of the idea that examiners' responses to…
Location Estimation using Delayed Measurements
DEFF Research Database (Denmark)
Bak, Martin; Larsen, Thomas Dall; Nørgård, Peter Magnus
1998-01-01
When combining data from various sensors it is vital to acknowledge possible measurement delays. Furthermore, the sensor fusion algorithm, often a Kalman filter, should be modified in order to handle the delay. The paper examines different possibilities for handling delays and applies a new technique to a sensor fusion system for estimating the location of an autonomous guided vehicle. The system fuses encoder and vision measurements in an extended Kalman filter. Results from experiments in a real environment are reported.
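One simple way to handle a delayed measurement, shown only as an illustrative sketch (the paper's own technique may differ): buffer the filter history and re-run the filter once the late measurement arrives. All numbers below are hypothetical.

```python
class DelayAwareKalman1D:
    """Illustrative sketch only (not necessarily the paper's technique):
    a scalar random-walk Kalman filter that keeps its history so a
    delayed measurement can be fused by rewinding and re-filtering."""

    def __init__(self, x0, p0, q, r):
        self.q, self.r = q, r      # process / measurement noise variances
        self.hist = [(x0, p0)]     # (state, variance) after each step

    def predict(self):
        x, p = self.hist[-1]
        self.hist.append((x, p + self.q))

    def update_at(self, k, z):
        """Fuse measurement z that was taken at past step k, then
        propagate the corrected state through the later steps."""
        x, p = self.hist[k]
        g = p / (p + self.r)                     # Kalman gain
        self.hist[k] = (x + g * (z - x), (1 - g) * p)
        for i in range(k + 1, len(self.hist)):
            x, p = self.hist[i - 1]
            self.hist[i] = (x, p + self.q)

kf = DelayAwareKalman1D(x0=0.0, p0=1.0, q=0.01, r=0.1)
for _ in range(5):
    kf.predict()          # five time steps pass...
kf.update_at(2, z=1.0)    # ...then a measurement from step 2 arrives late
x_now, p_now = kf.hist[-1]
```

Rewinding trades memory for simplicity; state-augmentation methods avoid the re-run at the cost of a larger filter state.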
Prior information in structure estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav; Nedoma, Petr; Khailova, Natalia; Pavelková, Lenka
2003-01-01
Vol. 150, No. 6 (2003), pp. 643-653 ISSN 1350-2379 R&D Projects: GA AV ČR IBS1075102; GA AV ČR IBS1075351; GA ČR GA102/03/0049 Institutional research plan: CEZ:AV0Z1075907 Keywords: prior knowledge * structure estimation * autoregressive models Subject RIV: BC - Control Systems Theory Impact factor: 0.745, year: 2003 http://library.utia.cas.cz/separaty/historie/karny-0411258.pdf
Radiation in space: risk estimates
International Nuclear Information System (INIS)
Fry, R.J.M.
2002-01-01
The complexity of radiation environments in space makes estimation of risks more difficult than for the protection of terrestrial populations. In deep space, the duration of the mission, the position in the solar cycle, the number and size of solar particle events (SPE) and the spacecraft shielding are the major determinants of risk. In low-earth orbit missions there are the added factors of altitude and orbital inclination. Different radiation qualities, such as protons and heavy ions, and secondary radiations inside the spacecraft, such as neutrons of various energies, have to be considered. Radiation dose rates in space are low except for short periods during very large SPEs. Risk estimation for space activities is based on the human experience of exposure to gamma rays and, to a lesser extent, X rays. The doses of protons, heavy ions and neutrons are adjusted to take into account the relative biological effectiveness (RBE) of the different radiation types and thus derive equivalent doses. RBE values and factors to adjust for the effect of dose rate have to be obtained from experimental data. The influence of age and gender on the cancer risk is estimated from the data from atomic bomb survivors. Because of the large number of variables, the uncertainties in the probability of the effects are large. Information needed to improve the risk estimates includes: (1) the risk of cancer induction by protons, heavy ions and neutrons; (2) the influence of dose rate and protraction, particularly on potential tissue effects such as reduced fertility and cataracts; and (3) possible effects of heavy ions on the central nervous system. Risk cannot be eliminated and thus there must be a consensus on what level of risk is acceptable. (author)
Properties of estimated characteristic roots
Bent Nielsen; Heino Bohn Nielsen
2008-01-01
Estimated characteristic roots in stationary autoregressions are shown to give rather noisy information about their population equivalents. This is remarkable given the central role of the characteristic roots in the theory of autoregressive processes. In the asymptotic analysis the problems appear when multiple roots are present, as this implies a non-differentiability, so the δ-method does not apply, convergence rates are slow, and the asymptotic distribution is non-normal. In finite samples ...
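The characteristic roots discussed above are the roots of the autoregressive characteristic polynomial; a short sketch with a hypothetical AR(2) model:

```python
import numpy as np

# For an AR(p) process y_t = a1*y_{t-1} + ... + ap*y_{t-p} + e_t, the
# characteristic roots are the roots of z^p - a1*z^(p-1) - ... - ap = 0.
# Hypothetical AR(2): y_t = 1.2*y_{t-1} - 0.35*y_{t-2} + e_t
a = [1.2, -0.35]
roots = np.roots([1.0] + [-c for c in a])   # roots of z^2 - 1.2 z + 0.35
# Stationarity requires every root to lie strictly inside the unit circle.
stationary = bool(np.all(np.abs(roots) < 1))
```

For this example the roots are 0.7 and 0.5, so the process is stationary; in practice the roots would be computed from *estimated* coefficients, which is where the noise discussed above enters.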
Recent estimates of capital flight
Claessens, Stijn; Naude, David
1993-01-01
Researchers and policymakers have in recent years paid considerable attention to the phenomenon of capital flight. Researchers have focused on four questions: What concept should be used to measure capital flight? What figure for capital flight will emerge, using this measure? Can the occurrence and magnitude of capital flight be explained by certain (economic) variables? What policy changes can be useful to reverse capital flight? The authors focus strictly on presenting estimates of capital...
Effort Estimation in BPMS Migration
Drews, Christopher; Lantow, Birger
2018-01-01
Usually Business Process Management Systems (BPMS) are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation re...
Reactor core performance estimating device
International Nuclear Information System (INIS)
Tanabe, Akira; Yamamoto, Toru; Shinpuku, Kimihiro; Chuzen, Takuji; Nishide, Fusayo.
1995-01-01
The present invention autonomously simplifies a neural net model, thereby enabling convenient estimation of the various quantities that represent reactor core performance by a simple calculation in a short period of time. Namely, a reactor core performance estimation device comprises a nerve circuit net which divides the reactor core into a large number of spatial regions, receives various physical quantities for each region as input signals for input nerve cells, and outputs estimated values of each quantity representing reactor core performance as output signals of output nerve cells. In this case, the nerve circuit net (1) has the structure of an extended multi-layered model with direct coupling from an upstream layer to each downstream layer, (2) has a forgetting constant q in the correction equation for a joined load value ω using an inverse error propagation method, (3) learns various quantities representing reactor core performance determined using physical models as teacher signals, (4) sets the joined load value ω to 0 when it decreases below a predetermined value during the learning described above, and (5) eliminates elements of the nerve circuit net having all of their joined load values decreased to 0. As a result, the neural net model comprises an autonomously simplifying means. (I.S.)
Contact Estimation in Robot Interaction
Directory of Open Access Journals (Sweden)
Filippo D'Ippolito
2014-07-01
In the paper, safety issues are examined in a scenario in which a robot manipulator and a human perform the same task in the same workspace. During the task execution, the human should be able to physically interact with the robot, and in this case an estimation algorithm for both interaction forces and a contact point is proposed in order to guarantee safety conditions. The method, starting from residual joint torque estimation, allows both direct and adaptive computation of the contact point and force, based on a principle of equivalence of the contact forces. At the same time, all the unintended contacts must be avoided, and a suitable post-collision strategy is considered to move the robot away from the collision area or else to reduce impact effects. Proper experimental tests have demonstrated the applicability in practice of both the post-impact strategy and the estimation algorithms; furthermore, experiments demonstrate the different behaviour resulting from the adaptation of the contact point as opposed to direct calculation.
Statistical estimation of process holdup
International Nuclear Information System (INIS)
Harris, S.P.
1988-01-01
Estimates of potential process holdup and their random and systematic error variances are derived to improve the inventory difference (ID) estimate and its associated measure of uncertainty for a new process at the Savannah River Plant. Since the process is in a start-up phase, data have not yet accumulated for statistical modelling. The material produced in the facility will be a very pure, highly enriched 235U with very small isotopic variability. Therefore, data published in LANL's unclassified report on Estimation Methods for Process Holdup of Special Nuclear Materials was used as a starting point for the modelling process. LANL's data were gathered through a series of designed measurements of special nuclear material (SNM) holdup at two of their materials-processing facilities. Also, they had taken steps to improve the quality of data through controlled, larger scale, experiments outside of LANL at highly enriched uranium processing facilities. The data they have accumulated are on an equipment component basis. Our modelling has been restricted to the wet chemistry area. We have developed predictive models for each of our process components based on the LANL data. 43 figs
Abundance estimation and conservation biology
Nichols, J.D.; MacKenzie, D.I.
2004-01-01
Abundance is the state variable of interest in most population–level ecological research and in most programs involving management and conservation of animal populations. Abundance is the single parameter of interest in capture–recapture models for closed populations (e.g., Darroch, 1958; Otis et al., 1978; Chao, 2001). The initial capture–recapture models developed for partially (Darroch, 1959) and completely (Jolly, 1965; Seber, 1965) open populations represented efforts to relax the restrictive assumption of population closure for the purpose of estimating abundance. Subsequent emphases in capture–recapture work were on survival rate estimation in the 1970’s and 1980’s (e.g., Burnham et al., 1987; Lebreton et al.,1992), and on movement estimation in the 1990’s (Brownie et al., 1993; Schwarz et al., 1993). However, from the mid–1990’s until the present time, capture–recapture investigators have expressed a renewed interest in abundance and related parameters (Pradel, 1996; Schwarz & Arnason, 1996; Schwarz, 2001). The focus of this session was abundance, and presentations covered topics ranging from estimation of abundance and rate of change in abundance, to inferences about the demographic processes underlying changes in abundance, to occupancy as a surrogate of abundance. The plenary paper by Link & Barker (2004) is provocative and very interesting, and it contains a number of important messages and suggestions. Link & Barker (2004) emphasize that the increasing complexity of capture–recapture models has resulted in large numbers of parameters and that a challenge to ecologists is to extract ecological signals from this complexity. They offer hierarchical models as a natural approach to inference in which traditional parameters are viewed as realizations of stochastic processes. These processes are governed by hyperparameters, and the inferential approach focuses on these hyperparameters. Link & Barker (2004) also suggest that our attention
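The closed-population estimators cited above (e.g., Darroch, 1958; Otis et al., 1978) build on the classic two-occasion Lincoln-Petersen idea; a minimal sketch using Chapman's bias-corrected form with hypothetical counts:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected form of the Lincoln-Petersen
    closed-population abundance estimator: n1 animals marked on the
    first occasion, n2 caught on the second, m2 of those already marked."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical survey: 120 marked, 100 caught later, 30 recaptures.
n_hat = chapman_estimate(120, 100, 30)   # roughly 393 animals
```

The models discussed in the session generalize this two-sample estimator to unequal capture probabilities, open populations, and hierarchical formulations.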
Estimating the Costs of Preventive Interventions
Foster, E. Michael; Porter, Michele M.; Ayers, Tim S.; Kaplan, Debra L.; Sandler, Irwin
2007-01-01
The goal of this article is to improve the practice and reporting of cost estimates of prevention programs. It reviews the steps in estimating the costs of an intervention and the principles that should guide estimation. The authors then review prior efforts to estimate intervention costs using a sample of well-known but diverse studies. Finally,…
Thermodynamics and life span estimation
International Nuclear Information System (INIS)
Kuddusi, Lütfullah
2015-01-01
In this study, the life span of people living in seven regions of Turkey is estimated by applying the first and second laws of thermodynamics to the human body. The people living in different regions of Turkey have different food habits. The first and second laws of thermodynamics are used to calculate the entropy generation rate per unit mass of a human due to the food habits. The lifetime entropy generation per unit mass of a human was previously found statistically. The two entropy generations, lifetime entropy generation and entropy generation rate, enable one to determine the life span of people living in seven regions of Turkey with different food habits. In order to estimate the life span, some statistics of the Turkish Statistical Institute regarding the food habits of the people living in seven regions of Turkey are used. The life spans of people who live in the Central Anatolia and Eastern Anatolia regions are the longest and shortest, respectively. Generally, the following inequality regarding the life span of people living in seven regions of Turkey is found: Eastern Anatolia < Southeast Anatolia < Black Sea < Mediterranean < Marmara < Aegean < Central Anatolia. - Highlights: • The first and second laws of thermodynamics are applied to the human body. • The entropy generation of a human due to his food habits is determined. • The life span of Turks is estimated by using the entropy generation method. • Food habits of a human have an effect on his life span
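The estimation principle described above reduces to dividing a fixed lifetime entropy budget by an annual entropy generation rate. A sketch with hypothetical figures (not the paper's values):

```python
def estimated_life_span(lifetime_entropy, annual_entropy_rate):
    """Life span in years as (lifetime entropy generation per unit body
    mass) divided by (entropy generation rate per unit body mass per
    year). The figures below are hypothetical placeholders, not the
    paper's values for any region of Turkey."""
    return lifetime_entropy / annual_entropy_rate

years = estimated_life_span(lifetime_entropy=10000.0, annual_entropy_rate=135.0)
```

A diet producing a higher annual entropy generation rate thus yields a shorter estimated life span, which is the mechanism behind the regional ordering reported above.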
The estimation of genetic divergence
Holmquist, R.; Conroy, T.
1981-01-01
Consideration is given to the criticism of Nei and Tateno (1978) of the REH (random evolutionary hits) theory of genetic divergence in nucleic acids and proteins, and to their proposed alternative estimator of total fixed mutations designated X2. It is argued that the assumption of nonuniform amino acid or nucleotide substitution will necessarily increase REH estimates relative to those made for a model where each locus has an equal likelihood of fixing mutations, thus the resulting value will not be an overestimation. The relative values of X2 and measures calculated on the basis of the PAM and REH theories for the number of nucleotide substitutions necessary to explain a given number of observed amino acid differences between two homologous proteins are compared, and the smaller values of X2 are attributed to (1) a mathematical model based on the incorrect assumption that an entire structural gene is free to fix mutations and (2) the assumptions of different numbers of variable codons for the X2 and REH calculations. Results of a repeat of the computer simulations of Nei and Tateno are presented which, in contrast to the original results, confirm the REH theory. It is pointed out that while a negative correlation is observed between estimations of the fixation intensity per varion and the number of varions for a given pair of sequences, the correlation between the two fixation intensities and varion numbers of two different pairs of sequences need not be negative. Finally, REH theory is used to resolve a paradox concerning the high rate of covarion turnover and the nature of general function sites as permanent covarions.
Nonparametric e-Mixture Estimation.
Takano, Ken; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru
2016-12-01
This study considers the common situation in data analysis when there are few observations of the distribution of interest (the target distribution), while abundant observations are available from auxiliary distributions. In this situation, it is natural to compensate for the lack of data from the target distribution by using data sets from these auxiliary distributions; in other words, approximating the target distribution in a subspace spanned by a set of auxiliary distributions. Mixture modeling is one of the simplest ways to integrate information from the target and auxiliary distributions in order to express the target distribution as accurately as possible. There are two typical mixtures in the context of information geometry: the m- and e-mixtures. The m-mixture is applied in a variety of research fields because of the presence of the well-known expectation-maximization algorithm for parameter estimation, whereas the e-mixture is rarely used because of its difficulty of estimation, particularly for nonparametric models. The e-mixture, however, is a well-tempered distribution that satisfies the principle of maximum entropy. To model a target distribution with scarce observations accurately, this letter proposes a novel framework for nonparametric modeling of the e-mixture and a geometrically inspired estimation algorithm. As numerical examples of the proposed framework, a transfer learning setup is considered. The experimental results show that this framework works well for three types of synthetic data sets, as well as an EEG real-world data set.
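The two mixtures referred to above are, in standard information-geometry notation (textbook definitions, not quoted from the paper):

```latex
% Standard information-geometry definitions: component densities p_i,
% weights w_i with \sum_i w_i = 1, and a normalizing constant Z.
\begin{align*}
  \text{m-mixture:}\quad & p_m(x) = \sum_i w_i\, p_i(x) \\
  \text{e-mixture:}\quad & p_e(x) = \frac{1}{Z} \exp\!\Big( \sum_i w_i \log p_i(x) \Big),
  \qquad Z = \int \exp\!\Big( \sum_i w_i \log p_i(x) \Big)\, dx
\end{align*}
```

The normalizing integral Z is what makes e-mixture estimation hard, especially nonparametrically, as the abstract notes.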
Dose estimation by biological methods
International Nuclear Information System (INIS)
Guerrero C, C.; David C, L.; Serment G, J.; Brena V, M.
1997-01-01
The human being is exposed to strong artificial radiation sources, mainly in two forms: the first concerns occupationally exposed personnel (POE) and the second, persons who require radiological treatment. A third, less common, form is by accidents. In all these conditions it is very important to estimate the absorbed dose. Classical biological dosimetry is based on dicentric analysis. The present work is part of research to validate the fluorescence in situ hybridization (FISH) technique, which allows the analysis of aberrations on chromosomes. (Author)
Stochastic estimation of electricity consumption
International Nuclear Information System (INIS)
Kapetanovic, I.; Konjic, T.; Zahirovic, Z.
1999-01-01
Electricity consumption forecasting is part of the stable functioning of the power system. It is very important for rational operation, for increasing the efficiency of the control process, and for development planning in all aspects of society. On a scientific basis, forecasting is a possible way to solve such problems. Among the different models that have been used in the area of forecasting, the stochastic approach, as a part of quantitative modelling, takes a very important place in applications. ARIMA models and the Kalman filter, as stochastic estimators, have been treated together for electricity consumption forecasting. The main aim of this paper is therefore to present the stochastic forecasting aspect using short time series. (author)
Size Estimates in Inverse Problems
Di Cristo, Michele
2014-01-06
Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem very useful in practical applications. When only a finite number of measurements is available, we try to detect some information on the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work that can be expressed with the available boundary data.
Location Estimation of Mobile Devices
Directory of Open Access Journals (Sweden)
Kamil ŽIDEK
2009-06-01
This contribution describes a mathematical model (kinematics) of a mobile robot carriage. The model is fully parametric and designed universally for a three- or four-wheeled carriage of any dimensions, under the following conditions: the back wheels are the driving wheels, and the front wheels set the angle of the robot's turning. The position of the front wheel gives the actual position of the robot, described by coordinates x, y and by the angle α of the front wheel relative to a reference position. The main reason for implementing the model is indoor navigation: we need some estimate of the robot's position, especially after the robot turns. A further use is outdoor navigation, especially for refining GPS information.
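The front-wheel-steered kinematics described above can be sketched with a generic bicycle model (an assumption for illustration; the paper's exact parametrisation may differ):

```python
import math

def step(x, y, theta, v, alpha, L, dt):
    """One dead-reckoning step of a generic bicycle model: rear wheels
    drive at speed v, the front wheel is steered by angle alpha,
    wheelbase L. An assumed model, not the paper's exact equations."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / L) * math.tan(alpha) * dt
    return x, y, theta

# Hypothetical run: one second of driving straight ahead.
x = y = theta = 0.0
for _ in range(100):
    x, y, theta = step(x, y, theta, v=1.0, alpha=0.0, L=0.5, dt=0.01)
# x is now close to 1.0 m; y and theta remain 0.
```

Integrating these equations between GPS fixes, or after each turn, gives the indoor position estimate the abstract calls for.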
Estimation of the energy needs
International Nuclear Information System (INIS)
Ailleret
1955-01-01
The present report draws up a balance of present energy consumption and of that estimated for the next twenty years. Present energy comes mainly from the consumption of coal, oil products and, essentially, hydraulic electric energy. The growth of the market stems essentially from the development of industrial activity and from new applications dependent on the cost and the distribution of electric energy. To this end, atomic energy offers good industrial prospects, complementing the present energy resources in order to answer the new needs. (M.B.) [fr
Random Decrement Based FRF Estimation
DEFF Research Database (Denmark)
Brincker, Rune; Asmussen, J. C.
1997-01-01
to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...
Applied parameter estimation for chemical engineers
Englezos, Peter
2000-01-01
Formulation of the parameter estimation problem; computation of parameters in linear models-linear regression; Gauss-Newton method for algebraic models; other nonlinear regression methods for algebraic models; Gauss-Newton method for ordinary differential equation (ODE) models; shortcut estimation methods for ODE models; practical guidelines for algorithm implementation; constrained parameter estimation; Gauss-Newton method for partial differential equation (PDE) models; statistical inferences; design of experiments; recursive parameter estimation; parameter estimation in nonlinear thermodynam
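The Gauss-Newton method for algebraic models, listed in the contents above, can be sketched as follows for a hypothetical exponential-decay model (a minimal illustration, not the book's worked examples):

```python
import numpy as np

def gauss_newton(f, jac, theta, x, y, iters=30):
    """Minimal Gauss-Newton iteration for least-squares parameter
    estimation in an algebraic model y = f(x; theta)."""
    for _ in range(iters):
        r = y - f(x, theta)            # residual vector
        J = jac(x, theta)              # Jacobian w.r.t. the parameters
        theta = theta + np.linalg.solve(J.T @ J, J.T @ r)
    return theta

# Hypothetical model: y = k1 * exp(-k2 * x).
f = lambda x, th: th[0] * np.exp(-th[1] * x)
jac = lambda x, th: np.column_stack([np.exp(-th[1] * x),
                                     -th[0] * x * np.exp(-th[1] * x)])
x = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.5 * x)             # noise-free synthetic data
theta_hat = gauss_newton(f, jac, np.array([1.5, 1.2]), x, y)
```

Real applications add the safeguards the book covers: step damping, constraints, and statistical inference on the estimates.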
Graph Sampling for Covariance Estimation
Chepuri, Sundeep Prabhakar
2017-04-25
In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.
Note on demographic estimates 1979.
1979-01-01
Based on UN projections, national projections, and the South Pacific Commission data, the ESCAP Population Division has compiled estimates of the 1979 population and demographic figures for the 38 member countries and associate members. The 1979 population is estimated at 2,400 million, 55% of the world total of 4,336 million. China comprises 39% of the region; India, 28%. China, India, Indonesia, Japan, Bangladesh, and Pakistan comprise 6 of the 10 largest countries in the world. China and India are growing at the rate of 1 million people per month. Between 1978 and 1979 Hong Kong experienced the highest rate of growth, 6.2%, and Niue the lowest, 4.5%. Life expectancy at birth is 58.7 years in the ESCAP region, but is over 70 in Japan, Hong Kong, Australia, New Zealand, and Singapore. At 75.2 years, life expectancy in Japan is the highest in the world. By world standards, a high percentage of females aged 16-64 are economically active. More than half the women aged 15-64 are in the labor force in 10 of the ESCAP countries. The region is still 73% rural. By the end of the 20th century the population of the ESCAP region is projected at 3,272 million, a 36% increase over the 1979 total.
Practical global oceanic state estimation
Wunsch, Carl; Heimbach, Patrick
2007-06-01
The problem of oceanographic state estimation, by means of an ocean general circulation model (GCM) and a multitude of observations, is described and contrasted with the meteorological process of data assimilation. In practice, all such methods reduce, on the computer, to forms of least-squares. The global oceanographic problem is at the present time focussed primarily on smoothing, rather than forecasting, and the data types are unlike meteorological ones. As formulated in the consortium Estimating the Circulation and Climate of the Ocean (ECCO), an automatic differentiation tool is used to calculate the so-called adjoint code of the GCM, and the method of Lagrange multipliers used to render the problem one of unconstrained least-squares minimization. Major problems today lie less with the numerical algorithms (least-squares problems can be solved by many means) than with the issues of data and model error. Results of ongoing calculations covering the period of the World Ocean Circulation Experiment, and including among other data, satellite altimetry from TOPEX/POSEIDON, Jason-1, ERS- 1/2, ENVISAT, and GFO, a global array of profiling floats from the Argo program, and satellite gravity data from the GRACE mission, suggest that the solutions are now useful for scientific purposes. Both methodology and applications are developing in a number of different directions.
LOD estimation from DORIS observations
Stepanek, Petr; Filler, Vratislav; Buday, Michal; Hugentobler, Urs
2016-04-01
The difference between the astronomically determined duration of the day and 86400 seconds is called length of day (LOD). The LOD can also be understood as the daily rate of the difference between Universal Time UT1, based on the Earth's rotation, and International Atomic Time TAI. The LOD is estimated using various satellite geodesy techniques such as GNSS and SLR, while the absolute UT1-TAI difference is precisely determined by VLBI. Contrary to other IERS techniques, LOD estimation using DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) measurements did not achieve geodetic accuracy in the past, reaching a precision at the level of several ms per day. However, recent experiments performed by the IDS (International DORIS Service) analysis centre at Geodetic Observatory Pecny show a possibility to reach an accuracy around 0.1 ms per day, when not adjusting the cross-track harmonics in the satellite orbit model. The paper presents the long-term LOD series determined from the DORIS solutions. The series are compared with C04 as the reference. Results are discussed in the context of the accuracy achieved with GNSS and SLR. Besides the multi-satellite DORIS solutions, the LOD series from the individual DORIS satellite solutions are also analysed.
CONSTRUCTING ACCOUNTING UNCERTAINTY ESTIMATES VARIABLE
Directory of Open Access Journals (Sweden)
Nino Serdarevic
2012-10-01
This paper presents research results on the financial reporting quality of B&H firms, utilizing the empirical relation between accounting conservatism, generated in created critical accounting policy choices, and management's abilities in estimation and the predictive power of domestic private-sector accounting. The primary research is conducted on firms' financial statements, constructing the CAPCBIH (Critical Accounting Policy Choices relevant in B&H) variable, which represents a particular internal control system and risk assessment and influences financial reporting positions in accordance with the specific business environment. I argue that firms' management possesses no relevant capacity to determine risks and the true consumption of economic benefits, leading to the creation of hidden reserves in inventories and accounts payable, and latent losses for bad debt and asset revaluations. I draw special attention to recent IFRS convergences to US GAAP, especially in harmonizing with FAS 130 Reporting comprehensive income (in revised IAS 1) and FAS 157 Fair value measurement. The CAPCBIH variable resulted in very poor performance, indicating a considerable lack of recognition of environment specifics. Furthermore, I underline the importance of the revised ISAE and the re-enforced role of auditors in assessing the relevance of management estimates.
International Nuclear Information System (INIS)
Pochin, E.E.
1980-01-01
In an increasing number of situations, it is becoming possible to obtain and compare numerical estimates of the biological risks involved in different alternative courses of action. In some cases these risks are similar in kind, as for example when the risk of inducing fatal cancer of the breast or stomach by x-ray screening of a population at risk is compared with the risk of such cancers proving fatal if not detected by a screening programme. In other cases in which it is important to attempt a comparison, the risks are dissimilar in type, as when the safety of occupations involving exposure to radiation or chemical carcinogens is compared with that of occupations in which the major risks are from lung disease or from accidental injury and death. Similar problems of assessing the relative severity of unlike effects occur in any attempt to compare the total biological harm associated with a given output of electricity derived from different primary fuel sources, with its contributions both of occupational and of public harm. In none of these instances is the numerical frequency of harmful effects alone an adequate measure of total biological detriment, nor is such detriment the only factor which should influence decisions. Estimation of risk appears important, however, since otherwise public health decisions are likely to be made on more arbitrary grounds, and public opinion will continue to be affected predominantly by the type rather than also by the size of risk. (author)
Variance function estimation for immunoassays
International Nuclear Information System (INIS)
Raab, G.M.; Thompson, R.; McKenzie, I.
1980-01-01
A computer program is described which implements a recently described, modified likelihood method of determining an appropriate weighting function to use when fitting immunoassay dose-response curves. The relationship between the variance of the response and its mean value is assumed to have an exponential form, and the best fit to this model is determined from the within-set variability of many small sets of repeated measurements. The program estimates the parameter of the exponential function with its estimated standard error, and tests the fit of the experimental data to the proposed model. Output options include a list of the actual and fitted standard deviation of the set of responses, a plot of actual and fitted standard deviation against the mean response, and an ordered list of the 10 sets of data with the largest ratios of actual to fitted standard deviation. The program has been designed for a laboratory user without computing or statistical expertise. The test-of-fit has proved valuable for identifying outlying responses, which may be excluded from further analysis by being set to negative values in the input file. (Auth.)
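The within-set fitting idea can be sketched roughly as follows; the power-law variance-mean form and the duplicate measurements are illustrative assumptions, and neither the program's actual model parameterization nor its interface is reproduced here:

```python
import math

def fit_variance_function(replicate_sets):
    """Fit log(variance) = log(a) + b*log(mean) across many small sets of
    repeated measurements, by ordinary least squares on the logs. Each
    set contributes one (within-set mean, within-set variance) point."""
    xs, ys = [], []
    for reps in replicate_sets:
        n = len(reps)
        m = sum(reps) / n
        v = sum((r - m) ** 2 for r in reps) / (n - 1)  # within-set variance
        if m > 0 and v > 0:
            xs.append(math.log(m))
            ys.append(math.log(v))
    k = len(xs)
    xbar, ybar = sum(xs) / k, sum(ys) / k
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b  # variance is approximately a * mean**b

# Illustrative duplicate responses whose spread grows with the mean:
sets = [[10.0, 10.2], [50.0, 51.0], [100.0, 102.0], [200.0, 204.0]]
a, b = fit_variance_function(sets)
```

The fitted exponent b then determines the weights 1/variance used when fitting the dose-response curve.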
Information and crystal structure estimation
International Nuclear Information System (INIS)
Wilkins, S.W.; Commonwealth Scientific and Industrial Research Organization, Clayton; Varghese, J.N.; Steenstrup, S.
1984-01-01
The conceptual foundations of a general information-theoretic approach to X-ray structure estimation are reexamined with a view to clarifying some of the subtleties inherent in the approach and to enhancing the scope of the method. More particularly, general reasons for choosing the minimum of the Shannon-Kullback measure of information as the criterion for inference are discussed, and it is shown that the minimum-information (or maximum-entropy) principle enters the present treatment of the structure estimation problem in at least two quite separate ways, and that three formally similar but conceptually quite different expressions for relative information appear at different points in the theory. One of these is the general Shannon-Kullback expression, the second is a derived form pertaining only under the restrictive assumptions of the present stochastic model for allowed structures, and the third is a measure of the additional information involved in accepting a fluctuation relative to an arbitrary mean structure. (orig.)
PHAZE, Parametric Hazard Function Estimation
International Nuclear Information System (INIS)
2002-01-01
1 - Description of program or function: PHAZE performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions. 2 - Methods: PHAZE assumes that the failures of a component follow a time-dependent (or non-homogeneous) Poisson process and that the failure counts in non-overlapping time intervals are independent. Implicit in the independence property is the assumption that the component is restored to service immediately after any failure, with negligible repair time. The failures of one component are assumed to be independent of those of another component; a proportional hazards model is used. Data for a component are called time censored if the component is observed for a fixed time-period, or plant records covering a fixed time-period are examined, and the failure times are recorded. The number of these failures is random. Data are called failure censored if the component is kept in service until a predetermined number of failures has occurred, at which time the component is removed from service. In this case, the number of failures is fixed, but the end of the observation period equals the final failure time and is random. A typical PHAZE session consists of reading failure data from a file prepared previously, selecting one of the three models, and performing data analysis (i.e., performing the usual statistical inference about the parameters of the model, with special emphasis on the parameter(s) that determine whether the hazard function is increasing). The final goals of the inference are a point estimate
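For the simplest of the three models, a constant hazard (the exponential model) under time censoring, the maximum likelihood estimator has a closed form. The sketch below uses a normal-approximation interval and illustrative failure times; it is not PHAZE itself, which also handles the linear and Weibull models and exact confidence regions:

```python
import math

def constant_hazard_mle(failure_times, observation_period):
    """MLE of a constant hazard (exponential model) from time-censored
    data: the component is watched for a fixed period T and n failure
    times are recorded. Under the Poisson-process assumption the MLE is
    n/T; a normal-approximation 95% interval uses Var(lambda_hat) ~ n/T^2."""
    n = len(failure_times)
    T = observation_period
    lam = n / T
    half = 1.96 * math.sqrt(n) / T
    return lam, (max(0.0, lam - half), lam + half)

# Four failures observed over a 5-year record window (illustrative):
lam, ci = constant_hazard_mle([0.7, 1.9, 3.2, 4.4], 5.0)
```

Note that under failure censoring the same point estimate n/T applies with T equal to the final failure time, but the sampling distribution, and hence the interval, differs.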
Bayesian estimation methods in metrology
International Nuclear Information System (INIS)
Cox, M.G.; Forbes, A.B.; Harris, P.M.
2004-01-01
In metrology -- the science of measurement -- a measurement result must be accompanied by a statement of its associated uncertainty. The degree of validity of a measurement result is determined by the validity of the uncertainty statement. In recognition of the importance of uncertainty evaluation, the International Organization for Standardization in 1995 published the Guide to the Expression of Uncertainty in Measurement, and the Guide has been widely adopted. The validity of uncertainty statements is tested in interlaboratory comparisons in which an artefact is measured by a number of laboratories and their measurement results compared. Since the introduction of the Mutual Recognition Arrangement, key comparisons are being undertaken to determine the degree of equivalence of laboratories for particular measurement tasks. In this paper, we discuss the possible development of the Guide to reflect Bayesian approaches and the evaluation of key comparison data using Bayesian estimation methods.
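The Bayesian evaluation idea can be illustrated in the simplest conjugate setting: a Gaussian prior state of knowledge about a measurand combined with one Gaussian measurement result. All numbers below are illustrative:

```python
def gaussian_posterior(prior_mean, prior_sd, meas_value, meas_sd):
    """Conjugate Gaussian update: combine prior knowledge of a measurand
    with a new measurement result. Precisions (1/variance) add, and the
    posterior mean is the precision-weighted average of the two values."""
    w0 = 1.0 / prior_sd ** 2
    w1 = 1.0 / meas_sd ** 2
    post_var = 1.0 / (w0 + w1)
    post_mean = post_var * (w0 * prior_mean + w1 * meas_value)
    return post_mean, post_var ** 0.5

# Illustrative: prior 10.00 +/- 0.20, laboratory result 10.10 +/- 0.10
m, s = gaussian_posterior(10.00, 0.20, 10.10, 0.10)
```

The posterior standard deviation is always smaller than either input uncertainty, which is the formal sense in which the measurement adds information.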
International Nuclear Information System (INIS)
Anon.
1982-01-01
The way nuclear power plants are built practically excludes accidents with serious consequences. This is ensured by careful selection of materials, control of fabrication and regular retesting, as well as by several safety systems working independently. But the remaining risk, a 'hypothetical' uncontrollable incident with catastrophic effects, is the main subject of the discussion on the peaceful utilization of nuclear power. This year's 'Annual Meeting on Nuclear Engineering' in Mannheim and the meeting 'Reactor Safety Research' in Cologne showed that risk studies so far were too pessimistic. 'Best estimate' calculations suggest that core melt-down accidents only occur if almost all safety systems fail, that accidents take place much more slowly, and that the release of radioactive fission products is several orders of magnitude lower than was assumed until now. (orig.) [de]
Neutron background estimates in GESA
Directory of Open Access Journals (Sweden)
Fernandes A.C.
2014-01-01
Full Text Available The SIMPLE project looks for nuclear recoil events generated by rare dark matter scattering interactions. Nuclear recoils are also produced by more prevalent cosmogenic neutron interactions. While the rock overburden shields against (μ,n) neutrons to below 10⁻⁸ cm⁻² s⁻¹, it itself contributes via radio-impurities. Additional shielding of these is similar, both suppressing and contributing neutrons. We report on the Monte Carlo (MCNP) estimation of the on-detector neutron backgrounds for the SIMPLE experiment located in the GESA facility of the Laboratoire Souterrain à Bas Bruit, and its use in defining additional shielding for measurements, which has led to a reduction in the extrinsic neutron background to ∼ 5 × 10⁻³ evts/kgd. The calculated event rate induced by the neutron background is ∼ 0.3 evts/kgd, with a dominant contribution from the detector container.
International Nuclear Information System (INIS)
Carlberg, R.G.
1990-01-01
The redshift dependence of the fraction of galaxies which are merging or strongly interacting is a steep function of Omega and depends on the ratio of the cutoff velocity for interactions to the pairwise velocity dispersion. For typical galaxies the merger rate is shown to vary as (1 + z)^m, where m is about 4.51 Omega^0.42, for Omega near 1 and a CDM-like cosmology. The index m has a relatively weak dependence on the maximum merger velocity, the mass of the galaxy, and the background cosmology, for small variations around a cosmology with a low redshift, z of about 2, of galaxy formation. Estimates of m from optical and IRAS galaxies have found that m is about 3-4, but with very large uncertainties. If quasar evolution follows the evolution of galaxy merging and m for quasars is greater than 4, then Omega is greater than 0.8. 21 refs
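The quoted scaling can be turned into a small numeric sketch; the fit m ≈ 4.51 Omega^0.42 is taken directly from the abstract, and everything else is illustration:

```python
def merger_index(omega):
    """Merger-rate index m in rate ∝ (1+z)^m, using the fitted scaling
    m ≈ 4.51 * Omega**0.42 quoted in the abstract (valid for Omega near 1)."""
    return 4.51 * omega ** 0.42

def rate_ratio(z, omega):
    """Merger rate at redshift z relative to z = 0, for a given Omega."""
    return (1.0 + z) ** merger_index(omega)

# At Omega = 1 the index is 4.51, so the rate at z = 1 is 2**4.51 times
# the local rate; at Omega = 0.8 the index drops just below 4.11.
```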
2007 Estimated International Energy Flows
Energy Technology Data Exchange (ETDEWEB)
Smith, C A; Belles, R D; Simon, A J
2011-03-10
An energy flow chart or 'atlas' for 136 countries has been constructed from data maintained by the International Energy Agency (IEA) and estimates of energy use patterns for the year 2007. Approximately 490 exajoules (460 quadrillion BTU) of primary energy are used in aggregate by these countries each year. While the basic structure of the energy system is consistent from country to country, patterns of resource use and consumption vary. Energy can be visualized as it flows from resources (i.e. coal, petroleum, natural gas) through transformations such as electricity generation to end uses (i.e. residential, commercial, industrial, transportation). These flow patterns are visualized in this atlas of 136 country-level energy flow charts.
Data Handling and Parameter Estimation
DEFF Research Database (Denmark)
Sin, Gürkan; Gernaey, Krist
2016-01-01
Modelling is one of the key tools at the disposal of modern wastewater treatment professionals, researchers and engineers. It enables them to study and understand complex phenomena underlying the physical, chemical and biological performance of wastewater treatment plants at different temporal...... engineers, and professionals. However, it is also expected that they will be useful both for graduate teaching as well as a stepping stone for academic researchers who wish to expand their theoretical interest in the subject. For the models selected to interpret the experimental data, this chapter uses available models from...... literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii......
Model for traffic emissions estimation
Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.
A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.
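The aggregation described above, main roads as line sources of constant traffic plus distributed area sources tied to population and feeder-traffic density, can be sketched as follows. The function names, coefficients and traffic figures are illustrative assumptions, not the paper's calibration for Athens:

```python
def line_source_emissions(roads, emission_factor_g_per_veh_km):
    """Sum emissions over main roads treated as line sources of constant
    traffic in the interval considered: flow (veh/h) x length (km) x EF."""
    return sum(flow * length_km * emission_factor_g_per_veh_km
               for flow, length_km in roads)

def area_source_emissions(population_density, feeder_traffic, k_pop, k_traffic):
    """Distributed small (non-line) sources, assumed proportional to local
    population density and to traffic density on local main arteries.
    k_pop and k_traffic are illustrative calibration constants."""
    return k_pop * population_density + k_traffic * feeder_traffic

# Two illustrative road segments: (flow veh/h, length km), EF = 0.5 g/veh-km
total_g_per_h = line_source_emissions([(1000, 2.0), (500, 4.0)], 0.5)
```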
Effort Estimation in BPMS Migration
Directory of Open Access Journals (Sweden)
Christopher Drews
2018-04-01
Full Text Available Usually Business Process Management Systems (BPMS are highly integrated in the IT of organizations and are at the core of their business. Thus, migrating from one BPMS solution to another is not a common task. However, there are forces that are pushing organizations to perform this step, e.g. maintenance costs of legacy BPMS or the need for additional functionality. Before the actual migration, the risk and the effort must be evaluated. This work provides a framework for effort estimation regarding the technical aspects of BPMS migration. The framework provides questions for BPMS comparison and an effort evaluation schema. The applicability of the framework is evaluated based on a simplified BPMS migration scenario.
Supplemental report on cost estimates
International Nuclear Information System (INIS)
1992-01-01
The Office of Management and Budget (OMB) and the U.S. Army Corps of Engineers have completed an analysis of the Department of Energy's (DOE) Fiscal Year (FY) 1993 budget request for its Environmental Restoration and Waste Management (ERWM) program. The results were presented to an interagency review group (IAG) of senior-Administration officials for their consideration in the budget process. This analysis included evaluations of the underlying legal requirements and cost estimates on which the ERWM budget request was based. The major conclusions are contained in a separate report entitled, ''Interagency Review of the Department of Energy Environmental Restoration and Waste Management Program.'' This Corps supplemental report provides greater detail on the cost analysis
Age Estimation in Forensic Sciences
Alkass, Kanar; Buchholz, Bruce A.; Ohtani, Susumu; Yamamoto, Toshiharu; Druid, Henrik; Spalding, Kirsty L.
2010-01-01
Age determination of unknown human bodies is important in the setting of a crime investigation or a mass disaster because the age at death, birth date, and year of death as well as gender can guide investigators to the correct identity among a large number of possible matches. Traditional morphological methods used by anthropologists to determine age are often imprecise, whereas chemical analysis of tooth dentin, such as aspartic acid racemization, has shown reproducible and more precise results. In this study, we analyzed teeth from Swedish individuals using both aspartic acid racemization and radiocarbon methodologies. The rationale behind using radiocarbon analysis is that aboveground testing of nuclear weapons during the cold war (1955–1963) caused an extreme increase in global levels of carbon-14 (14C), which has been carefully recorded over time. Forty-four teeth from 41 individuals were analyzed using aspartic acid racemization analysis of tooth crown dentin or radiocarbon analysis of enamel, and 10 of these were split and subjected to both radiocarbon and racemization analysis. Combined analysis showed that the two methods correlated well (R2 = 0.66, p ...). Aspartic acid racemization also showed a good precision, with an overall absolute error of 5.4 ± 4.2 years. Whereas radiocarbon analysis gives an estimated year of birth, racemization analysis indicates the chronological age of the individual at the time of death. We show how these methods in combination can also assist in the estimation of date of death of an unidentified victim. This strategy can be of significant assistance in forensic casework involving dead victim identification. PMID:19965905
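The racemization-to-age step can be sketched with the standard first-order kinetics for the aspartic acid D/L ratio; the rate constant and the D/L ratio at birth used below are illustrative placeholders, not the Swedish calibration of this study:

```python
import math

def age_from_racemization(d_l_ratio, k_asp, ratio_at_birth=0.0):
    """Chronological age from the aspartic acid D/L ratio in dentin, via
    first-order racemization kinetics:
        ln((1 + D/L) / (1 - D/L)) = ln((1 + R0) / (1 - R0)) + 2 * k * t
    k_asp (per year) and R0 are population-calibrated constants; the
    values passed in here are illustrative only."""
    f = lambda r: math.log((1.0 + r) / (1.0 - r))
    return (f(d_l_ratio) - f(ratio_at_birth)) / (2.0 * k_asp)
```

Radiocarbon dating of enamel, by contrast, reads the birth year off the recorded atmospheric 14C "bomb pulse" curve, which is why the two methods complement each other.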
Runoff estimation in residential area
Directory of Open Access Journals (Sweden)
Meire Regina de Almeida Siqueira
2013-12-01
Full Text Available This study aimed to estimate the watershed runoff caused by extreme events that often result in the flooding of urban areas. The runoff of a residential area in the city of Guaratinguetá, São Paulo, Brazil was estimated using the Curve-Number method proposed by USDA-NRCS. The study also investigated current land use and land cover conditions, impermeable areas with pasture and indications of the reforestation of those areas. Maps and satellite images of Residential Riverside I Neighborhood were used to characterize the area. In addition to characterizing land use and land cover, the definition of the soil type infiltration capacity, the maximum local rainfall, and the type and quality of the drainage system were also investigated. The study showed that this neighborhood, developed in 1974, has an area of 792,700 m², a population of 1361 inhabitants, and a sloping area covered with degraded pasture (Guaratinguetá-Piagui Peak located in front of the residential area. The residential area is located in a flat area near the Paraiba do Sul River, and has a poor drainage system with concrete pipes, mostly 0.60 m in diameter, with several openings that capture water and sediments from the adjacent sloping area. The Low Impact Development (LID system appears to be a viable solution for this neighborhood drainage system. It can be concluded that the drainage system of the Guaratinguetá Riverside I Neighborhood has all of the conditions and characteristics that make it suitable for the implementation of a low impact urban drainage system. Reforestation of Guaratinguetá-Piagui Peak can reduce the basin’s runoff by 50% and minimize flooding problems in the Beira Rio neighborhood.
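The Curve-Number calculation used in the study follows the standard USDA-NRCS relations; below is a minimal SI-units sketch, where the storm depth and CN value are illustrative, not the study's figures for the Riverside I watershed:

```python
def scs_runoff_mm(precip_mm, curve_number):
    """Direct runoff depth by the USDA-NRCS Curve-Number method (SI units).
    S is the potential maximum retention; runoff begins once rainfall
    exceeds the initial abstraction Ia = 0.2 * S."""
    s = 25400.0 / curve_number - 254.0
    ia = 0.2 * s
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm + 0.8 * s)

# Illustrative storm: 100 mm of rain on a surface with CN = 75
q = scs_runoff_mm(100.0, 75)  # roughly 41 mm of direct runoff
```

Lowering the CN of the sloping pasture through reforestation raises S, which is the mechanism behind the reported runoff reduction.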
International Nuclear Information System (INIS)
2003-01-01
According to article 6 of the French law from February 10, 2000 relative to the modernization and development of the electric public utility, the manager of the public power transportation grid (RTE) has to produce, at least every two years and under the control of the French government, a pluri-annual estimated status. Then, the energy ministry uses this status to prepare the pluri-annual planning of power production investments. The estimated status aims at establishing a medium- and long-term diagnosis of the balance between power supply and demand and at evaluating the new production capacity needs to ensure a durable security of power supplies. The hypotheses relative to the power consumption and to the evolution of the power production means and trades are presented in chapters 2 to 4. Chapter 5 details the methodology and modeling principles retained for the supply-demand balance simulations. Chapter 6 presents the probabilistic simulation results at the 2006, 2010 and 2015 prospects and indicates the volumes of reinforcement of the production parks which would warrant an acceptable level of security. Chapter 7 develops the critical problem of winter demand peaks and evokes the possibilities linked with demand reduction, market resources and use of the existing park. Finally, chapter 8 makes a synthesis of the technical conclusions and recalls the determining hypotheses that have been retained. The particular situations of western France, of the Mediterranean and Paris region, and of Corsica and overseas territories are examined in chapter 9. The simulation results for all consumption-production scenarios and the wind-power production data are presented in appendixes. (J.S.)
Estimating location without external cues.
Directory of Open Access Journals (Sweden)
Allen Cheung
2014-10-01
Full Text Available The ability to determine one's location is fundamental to spatial navigation. Here, it is shown that localization is theoretically possible without the use of external cues, and without knowledge of initial position or orientation. With only error-prone self-motion estimates as input, a fully disoriented agent can, in principle, determine its location in familiar spaces with 1-fold rotational symmetry. Surprisingly, localization does not require the sensing of any external cue, including the boundary. The combination of self-motion estimates and an internal map of the arena provide enough information for localization. This stands in conflict with the supposition that 2D arenas are analogous to open fields. Using a rodent error model, it is shown that the localization performance which can be achieved is enough to initiate and maintain stable firing patterns like those of grid cells, starting from full disorientation. Successful localization was achieved when the rotational asymmetry was due to the external boundary, an interior barrier or a void space within an arena. Optimal localization performance was found to depend on arena shape, arena size, local and global rotational asymmetry, and the structure of the path taken during localization. Since allothetic cues including visual and boundary contact cues were not present, localization necessarily relied on the fusion of idiothetic self-motion cues and memory of the boundary. Implications for spatial navigation mechanisms are discussed, including possible relationships with place field overdispersion and hippocampal reverse replay. Based on these results, experiments are suggested to identify if and where information fusion occurs in the mammalian spatial memory system.
Estimation of Poverty in Small Areas
Directory of Open Access Journals (Sweden)
Agne Bikauskaite
2014-12-01
Full Text Available Quality techniques of poverty estimation are needed to better implement and monitor support, and to determine the national areas where it is most required. The problem of small area estimation (SAE) is the production of reliable estimates in areas with small samples. The precision of estimates in strata deteriorates (i.e., the standard deviation increases) as the sample size becomes smaller. In these cases traditional direct estimators may be imprecise and therefore pointless. Currently there are many indirect methods for SAE. The purpose of this paper is to analyze several different types of techniques which produce small area estimates of poverty.
Robust DOA Estimation of Harmonic Signals Using Constrained Filters on Phase Estimates
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
In array signal processing, distances between receivers, e.g., microphones, cause time delays depending on the direction of arrival (DOA) of a signal source. We can then estimate the DOA from the time-difference of arrival (TDOA) estimates. However, many conventional DOA estimators based on TDOA...... estimates are not optimal in colored noise. In this paper, we estimate the DOA of a harmonic signal source from multi-channel phase estimates, which relate to narrowband TDOA estimates. More specifically, we design filters to apply on phase estimates to obtain a DOA estimate with minimum variance. Using...
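For a two-receiver array the chain from a phase estimate to a narrowband TDOA and then a DOA can be sketched as below. This is only the elementary far-field relation; the paper's contribution, minimum-variance constrained filtering across the harmonics' phase estimates, is not reproduced here:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, illustrative propagation speed

def tdoa_from_phase(phase_rad, freq_hz):
    """Narrowband TDOA implied by an inter-channel phase difference."""
    return phase_rad / (2.0 * math.pi * freq_hz)

def doa_from_tdoa(tdoa_s, mic_distance_m, c=SPEED_OF_SOUND):
    """DOA (radians) for a far-field source and a two-receiver array:
    theta = asin(c * tau / d), clamped against rounding overshoot."""
    return math.asin(max(-1.0, min(1.0, c * tdoa_s / mic_distance_m)))

# Illustrative: a 30-degree source, 10 cm spacing, harmonic at 500 Hz.
true_tdoa = 0.10 * math.sin(math.radians(30.0)) / SPEED_OF_SOUND
phase = 2.0 * math.pi * 500.0 * true_tdoa  # what a phase estimator sees
theta = math.degrees(doa_from_tdoa(tdoa_from_phase(phase, 500.0), 0.10))
```

In practice each harmonic yields a noisy phase estimate, and the filters the paper designs weight these to minimize the variance of the combined DOA estimate in colored noise.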
On the relation between S-Estimators and M-Estimators of multivariate location and covariance
Lopuhaa, H.P.
1987-01-01
We discuss the relation between S-estimators and M-estimators of multivariate location and covariance. As in the case of the estimation of a multiple regression parameter, S-estimators are shown to satisfy first-order conditions of M-estimators. We show that the influence function IF(x; S, F) of
Estimation of the energy needs; Estimation des besoins energetiques
Energy Technology Data Exchange (ETDEWEB)
Ailleret, [Electricite de France (EDF), Dir. General des Etudes de Recherches, 75 - Paris (France)
1955-07-01
The present report draws up a balance of present energy consumption and of the consumption estimated for the next twenty years. Present energy comes mainly from the consumption of coal, oil products and electric energy, essentially hydraulic. Market development stems essentially from the growth of industrial activity and from new applications that depend on the cost and the distribution of electric energy. In this respect, atomic energy offers good industrial prospects, complementing the present energy resources in order to meet the new needs. (M.B.) [French] Le present rapport dresse le bilan sur la consommation energetique actuelle et previsionnelle pour les vingt prochaines annees. L'energie actuelle provient principalement consommation de charbon, de produits petroliers et d'energie electrique essentiellement hydraulique. l'evolution du marche provient essentielement du developpement l'activite industriel et de nouvelles applications tributaire du cout et de la distribution de l'energie electrique. A cet effet, l'energie atomique offre de bonne perspectives industrielles en complement des sources actuelles energetiques afin de repondre aux nouveaux besoins. (M.B.)
How Valid are Estimates of Occupational Illness?
Hilaski, Harvey J.; Wang, Chao Ling
1982-01-01
Examines some of the methods of estimating occupational diseases and suggests that a consensus on the adequacy and reliability of estimates by the Bureau of Labor Statistics and others is not likely. (SK)
State estimation for a hexapod robot
CSIR Research Space (South Africa)
Lubbe, Estelle
2015-09-01
Full Text Available This paper introduces a state estimation methodology for a hexapod robot that makes use of proprioceptive sensors and a kinematic model of the robot. The methodology focuses on providing reliable full pose state estimation for a commercially...
Access Based Cost Estimation for Beddown Analysis
National Research Council Canada - National Science Library
Pennington, Jasper E
2006-01-01
The purpose of this research is to develop an automated web-enabled beddown estimation application for Air Mobility Command in order to increase the effectiveness and enhance the robustness of beddown estimates...
Estimated annual economic loss from organ condemnation ...
African Journals Online (AJOL)
as a basis for the analysis of estimation of the economic significance of bovine .... percent involvement of each organ were used in the estimation of the financial loss from organ .... DVM thesis, Addis Ababa University, Faculty of Veterinary.
Velocity Estimate Following Air Data System Failure
National Research Council Canada - National Science Library
McLaren, Scott A
2008-01-01
.... A velocity estimator (VEST) algorithm was developed to combine the inertial and wind velocities to provide an estimate of the aircraft's current true velocity to be used for command path gain scheduling and for display in the cockpit...
On Estimating Quantiles Using Auxiliary Information
Directory of Open Access Journals (Sweden)
Berger Yves G.
2015-03-01
Full Text Available We propose a transformation-based approach for estimating quantiles using auxiliary information. The proposed estimators can be easily implemented using a regression estimator. We show that the proposed estimators are consistent and asymptotically unbiased. The main advantage of the proposed estimators is their simplicity. Despite the fact that the proposed estimators are not necessarily more efficient than their competitors, they offer a good compromise between accuracy and simplicity. They can be used under single- and multistage sampling designs with unequal selection probabilities. A simulation study supports our findings and shows that the proposed estimators are robust and of acceptable accuracy compared to alternative estimators, which can be more computationally intensive.
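One elementary way to handle unequal selection probabilities when estimating a quantile is to invert a weight-based empirical distribution function. The sketch below shows only this baseline design-weighted estimator, not the transformation-based estimator proposed in the paper:

```python
def weighted_quantile(values, weights, p):
    """Design-weighted quantile estimate: invert the empirical CDF built
    from survey weights (inverse selection probabilities). With equal
    weights this reduces to an ordinary sample quantile."""
    pairs = sorted(zip(values, weights))
    total = sum(w for _, w in pairs)
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= p * total:
            return v
    return pairs[-1][0]

# Equal weights: the weighted median is just the sample median.
median = weighted_quantile([3, 1, 4, 1, 5], [1, 1, 1, 1, 1], 0.5)
```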
On Estimation and Testing for Pareto Tails
Czech Academy of Sciences Publication Activity Database
Jordanova, P.; Stehlík, M.; Fabián, Zdeněk; Střelec, L.
2013-01-01
Roč. 22, č. 1 (2013), s. 89-108 ISSN 0204-9805 Institutional support: RVO:67985807 Keywords : testing against heavy tails * asymptotic properties of estimators * point estimation Subject RIV: BB - Applied Statistics, Operational Research
Estimating the NIH efficient frontier.
Directory of Open Access Journals (Sweden)
Dimitrios Bisias
Full Text Available BACKGROUND: The National Institutes of Health (NIH is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. METHODS AND FINDINGS: Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL. The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current or reduction in risk (22% to 35% vs. current are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. CONCLUSIONS: Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent
Estimating the NIH efficient frontier.
Bisias, Dimitrios; Lo, Andrew W; Watkins, James F
2012-01-01
The National Institutes of Health (NIH) is among the world's largest investors in biomedical research, with a mandate to: "…lengthen life, and reduce the burdens of illness and disability." Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions-one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Using data from 1965 to 2007, we provide estimates of the NIH "efficient frontier", the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent, repeatable, and expressly designed to reduce the burden of
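The mean-variance machinery behind the "efficient frontier" can be illustrated in the smallest possible setting: two hypothetical research portfolios standing in for the seven institute groups, with "return" read as expected reduction in years of life lost per dollar. All numbers below are illustrative:

```python
def two_asset_frontier_point(target, mu, sigma, rho):
    """Minimum-variance portfolio hitting a target expected return with
    two risky assets. With two assets the return constraint pins the
    weights down uniquely, so sweeping the target traces the frontier."""
    w1 = (target - mu[1]) / (mu[0] - mu[1])
    w2 = 1.0 - w1
    var = (w1 ** 2 * sigma[0] ** 2 + w2 ** 2 * sigma[1] ** 2
           + 2.0 * w1 * w2 * rho * sigma[0] * sigma[1])
    return (w1, w2), var ** 0.5

# Illustrative expected returns, volatilities, and correlation:
(w1, w2), risk = two_asset_frontier_point(0.07, mu=(0.10, 0.04),
                                          sigma=(0.20, 0.08), rho=0.2)
```

With seven groups, as in the paper, the constraint no longer fixes the weights, and the frontier is found by quadratic optimization over the covariance matrix of historical YLL outcomes.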
Estimating the NIH Efficient Frontier
2012-01-01
Background The National Institutes of Health (NIH) is among the world’s largest investors in biomedical research, with a mandate to: “…lengthen life, and reduce the burdens of illness and disability.” Its funding decisions have been criticized as insufficiently focused on disease burden. We hypothesize that modern portfolio theory can create a closer link between basic research and outcome, and offer insight into basic-science related improvements in public health. We propose portfolio theory as a systematic framework for making biomedical funding allocation decisions–one that is directly tied to the risk/reward trade-off of burden-of-disease outcomes. Methods and Findings Using data from 1965 to 2007, we provide estimates of the NIH “efficient frontier”, the set of funding allocations across 7 groups of disease-oriented NIH institutes that yield the greatest expected return on investment for a given level of risk, where return on investment is measured by subsequent impact on U.S. years of life lost (YLL). The results suggest that NIH may be actively managing its research risk, given that the volatility of its current allocation is 17% less than that of an equal-allocation portfolio with similar expected returns. The estimated efficient frontier suggests that further improvements in expected return (89% to 119% vs. current) or reduction in risk (22% to 35% vs. current) are available holding risk or expected return, respectively, constant, and that 28% to 89% greater decrease in average years-of-life-lost per unit risk may be achievable. However, these results also reflect the imprecision of YLL as a measure of disease burden, the noisy statistical link between basic research and YLL, and other known limitations of portfolio theory itself. Conclusions Our analysis is intended to serve as a proof-of-concept and starting point for applying quantitative methods to allocating biomedical research funding that are objective, systematic, transparent
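The mean-variance machinery behind an "efficient frontier" can be sketched in a few lines of NumPy. The returns and covariances below are made-up illustrative numbers, not the NIH/YLL data from the study; the two-fund construction is the textbook form, not the paper's exact procedure.

```python
import numpy as np

# Hypothetical expected "returns" (burden reduction per dollar, arbitrary
# units) and covariance for three research portfolios -- illustrative only.
mu = np.array([0.08, 0.12, 0.10])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.05]])

ones = np.ones(len(mu))
cov_inv = np.linalg.inv(cov)

# Global minimum-variance portfolio: w = C^-1 1 / (1' C^-1 1)
w_min = cov_inv @ ones / (ones @ cov_inv @ ones)

# A second efficient fund proportional to C^-1 mu (two-fund theorem)
w_mu = cov_inv @ mu / (ones @ cov_inv @ mu)

def frontier_point(alpha):
    """Efficient portfolios are combinations of the two funds."""
    w = alpha * w_min + (1 - alpha) * w_mu
    return w @ mu, np.sqrt(w @ cov @ w)   # (expected return, risk)

returns, risks = zip(*(frontier_point(a) for a in np.linspace(0, 1, 11)))
```

Sweeping `alpha` traces the frontier; comparing an actual allocation's risk against the frontier at the same expected return is the comparison the abstract describes.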
Estimation of population mean under systematic sampling
Noor-ul-amin, Muhammad; Javaid, Amjad
2017-11-01
In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of generalized estimator using different combinations of coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of proposed estimators. Numerical illustration is included using three populations to support the results.
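The classical ratio estimator that this class of estimators generalizes can be sketched as follows. This is the textbook form ȳ·(X̄/x̄) under systematic sampling, not the paper's generalized non-response estimator; the population is simulated.

```python
import numpy as np

def ratio_estimate(y, x, X_bar):
    """Classical ratio estimator of the population mean of y, using an
    auxiliary variable x whose population mean X_bar is known."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    return y.mean() * X_bar / x.mean()

# Simulated population with y strongly correlated with x.
rng = np.random.default_rng(0)
x_pop = rng.uniform(10, 20, size=1000)
y_pop = 3.0 * x_pop + rng.normal(0, 1, size=1000)

# Systematic sample: every k-th unit from the ordered frame.
k, start = 10, 3
idx = np.arange(start, 1000, k)
est = ratio_estimate(y_pop[idx], x_pop[idx], x_pop.mean())
```

Because y is nearly proportional to x, the ratio estimator exploits the auxiliary information and lands close to the true population mean.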
Fast and Statistically Efficient Fundamental Frequency Estimation
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom
2016-01-01
Fundamental frequency estimation is a very important task in many applications involving periodic signals. For computational reasons, fast autocorrelation-based estimation methods are often used despite parametric estimation methods having superior estimation accuracy. However, these parametric...... a recursive solver. Via benchmarks, we demonstrate that the computation time is reduced by approximately two orders of magnitude. The proposed fast algorithm is available for download online....
Kernel bandwidth estimation for non-parametric density estimation: a comparative study
CSIR Research Space (South Africa)
Van der Walt, CM
2013-12-01
Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
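One conventional bandwidth estimator of the kind compared in such studies is Silverman's rule of thumb for a Gaussian kernel. The sketch below is a minimal reference implementation, not the study's code.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel:
    h = 0.9 * min(std, IQR/1.34) * n^(-1/5)."""
    x = np.asarray(x, float)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    sigma = min(x.std(ddof=1), iqr / 1.34)
    return 0.9 * sigma * x.size ** (-0.2)

def kde(x_eval, data, h):
    """Gaussian kernel density estimate evaluated at the points x_eval."""
    u = (np.asarray(x_eval, float)[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (data.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 500)
h = silverman_bandwidth(data)
dens = kde(np.array([0.0]), data, h)
```

For unimodal, roughly Gaussian data this rule works well; the comparative studies cited above exist precisely because it can oversmooth multimodal pattern-recognition data.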
Development of Numerical Estimation in Young Children
Siegler, Robert S.; Booth, Julie L.
2004-01-01
Two experiments examined kindergartners', first graders', and second graders' numerical estimation, the internal representations that gave rise to the estimates, and the general hypothesis that developmental sequences within a domain tend to repeat themselves in new contexts. Development of estimation in this age range on 0-to-100 number lines…
Carleman estimates for some elliptic systems
International Nuclear Information System (INIS)
Eller, M
2008-01-01
A Carleman estimate for a certain first order elliptic system is proved. The proof is elementary and does not rely on pseudo-differential calculus. This estimate is used to prove Carleman estimates for the isotropic Lamé system as well as for the isotropic Maxwell system with C¹ coefficients
Estimating Canopy Dark Respiration for Crop Models
Monje Mejia, Oscar Alberto
2014-01-01
Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.
Estimating uncertainty of data limited stock assessments
DEFF Research Database (Denmark)
Kokkalis, Alexandros; Eikeset, Anne Maria; Thygesen, Uffe Høgsbro
2017-01-01
-limited. Particular emphasis is put on providing uncertainty estimates of the data-limited assessment. We assess four cod stocks in the North-East Atlantic and compare our estimates of stock status (F/Fmsy) with the official assessments. The estimated stock status of all four cod stocks followed the established stock...
Another look at the Grubbs estimators
Lombard, F.; Potgieter, C.J.
2012-01-01
of the estimate is to be within reasonable bounds and if negative precision estimates are to be avoided. We show that the two instrument Grubbs estimator can be improved considerably if fairly reliable preliminary information regarding the ratio of sampling unit
Load Estimation by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune
2007-01-01
When performing operational modal analysis the dynamic loading is unknown, however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained, and the loading can be estimated by inverse filtering. In this paper loads in frequency domain are estimated by ...
Non-Parametric Estimation of Correlation Functions
DEFF Research Database (Denmark)
Brincker, Rune; Rytter, Anders; Krenk, Steen
In this paper three methods of non-parametric correlation function estimation are reviewed and evaluated: the direct method, estimation by the Fast Fourier Transform and finally estimation by the Random Decrement technique. The basic ideas of the techniques are reviewed, sources of bias are point...
Bayesian techniques for surface fuel loading estimation
Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell
2016-01-01
A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...
International Nuclear Information System (INIS)
Okajima, Shunzo
1976-01-01
Radioactive fallout in the Nishiyama district of Nagasaki Prefecture is reported on the basis of surveys conducted since 1969. In 1969, the amount of ¹³⁷Cs in the bodies of 50 inhabitants of the Nishiyama district was measured using a human counter and compared with that of a non-exposed group. The average value of ¹³⁷Cs (pCi/kg) was higher in inhabitants of the Nishiyama district (38.5 in men and 24.9 in females) than in the controls (25.5 in men and 14.9 in females). The resurvey in 1971 showed that the amount of ¹³⁷Cs had decreased to 76% in men and 60% in females. When the amount of ¹³⁷Cs in the body was calculated from chemical analysis of urine, it was 29.0 ± 8.2 in men and 29.4 ± 26.2 in females in the Nishiyama district, and 29.9 ± 8.2 in men and 29.4 ± 11.7 in females in the controls. The content of ¹³⁷Cs in soils and crops (potato etc.) was higher in the Nishiyama district than in the controls. When the internal exposure dose per year was calculated from the amount of ¹³⁷Cs in the body in 1969, it was 0.29 mrad/year in men and 0.19 mrad/year in females. Finally, the internal exposure dose immediately after the explosion was estimated. (Serizawa, K.)
Inflation and cosmological parameter estimation
Energy Technology Data Exchange (ETDEWEB)
Hamann, J.
2007-05-15
In this work, we focus on two aspects of cosmological data analysis: inference of parameter values and the search for new effects in the inflationary sector. Constraints on cosmological parameters are commonly derived under the assumption of a minimal model. We point out that this procedure systematically underestimates errors and possibly biases estimates, due to overly restrictive assumptions. In a more conservative approach, we analyse cosmological data using a more general eleven-parameter model. We find that regions of the parameter space that were previously thought ruled out are still compatible with the data; the bounds on individual parameters are relaxed by up to a factor of two, compared to the results for the minimal six-parameter model. Moreover, we analyse a class of inflation models, in which the slow roll conditions are briefly violated, due to a step in the potential. We show that the presence of a step generically leads to an oscillating spectrum and perform a fit to CMB and galaxy clustering data. We do not find conclusive evidence for a step in the potential and derive strong bounds on quantities that parameterise the step. (orig.)
Quantum rewinding via phase estimation
Tabia, Gelo Noel
2015-03-01
In cryptography, the notion of a zero-knowledge proof was introduced by Goldwasser, Micali, and Rackoff. An interactive proof system is said to be zero-knowledge if any verifier interacting with an honest prover learns nothing beyond the validity of the statement being proven. With recent advances in quantum information technologies, it has become interesting to ask if classical zero-knowledge proof systems remain secure against adversaries with quantum computers. The standard approach to show the zero-knowledge property involves constructing a simulator for a malicious verifier that can be rewound to a previous step when the simulation fails. In the quantum setting, the simulator can be described by a quantum circuit that takes an arbitrary quantum state as auxiliary input, but rewinding becomes a nontrivial issue. Watrous proposed a quantum rewinding technique in the case where the simulation's success probability is independent of the auxiliary input. Here I present a more general quantum rewinding scheme that employs the quantum phase estimation algorithm. This work was funded by institutional research grant IUT2-1 from the Estonian Research Council and by the European Union through the European Regional Development Fund.
Global Warming Estimation from MSU
Prabhakara, C.; Iacovazzi, Robert, Jr.
1999-01-01
In this study, we have developed time series of global temperature from 1980-97 based on the Microwave Sounding Unit (MSU) Ch 2 (53.74 GHz) observations taken from polar-orbiting NOAA operational satellites. In order to create these time series, systematic errors (approx. 0.1 K) in the Ch 2 data arising from inter-satellite differences are removed objectively. On the other hand, smaller systematic errors (approx. 0.03 K) in the data due to orbital drift of each satellite cannot be removed objectively. Such errors are expected to remain in the time series and leave an uncertainty in the inferred global temperature trend. With the help of a statistical method, the error in the MSU-inferred global temperature trend resulting from orbital drifts and residual inter-satellite differences of all satellites is estimated to be 0.06 K/decade. Incorporating this error, our analysis shows that the global temperature increased at a rate of 0.13 ± 0.06 K/decade during 1980-97.
Estimates of LLEA officer availability
International Nuclear Information System (INIS)
Berkbigler, K.P.
1978-05-01
One element in the Physical Protection of Nuclear Material in Transit Program is a determination of the number of local law enforcement agency (LLEA) officers available to respond to an attack upon a special nuclear material (SNM) carrying convoy. A computer model, COPS, has been developed at Sandia Laboratories to address this problem. Its purposes are to help identify to the SNM shipper areas along a route which may have relatively low police coverage and to aid in the comparison of alternate routes to the same location. Data bases used in COPS include population data from the Bureau of Census and police data published by the FBI. Police are assumed to be distributed in proportion to the population, with adjustable weighting factors. Example results illustrating the model's capabilities are presented for two routes between Los Angeles, CA, and Denver, CO, and for two routes between Columbia, SC, and Syracuse, NY. The estimated police distribution at points along the route is presented. Police availability as a function of time is modeled based on the time-dependent characteristics of a trip. An example demonstrating the effects of jurisdictional restrictions on the size of the response force is given. Alternate routes between two locations are compared by means of cumulative plots
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. Then these two algorithms are equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternative utilization of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. Especially, the proposed algorithms are very promising for complex problems with many local optima.
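The core EDA loop that the multimodal variants build on can be sketched as follows. This minimal version omits the paper's clustering, niching, Cauchy sampling, and local search; it simply fits a Gaussian to the elite solutions and resamples.

```python
import numpy as np

def eda_minimize(f, dim, pop=100, elite=30, iters=60, seed=0):
    """Minimal Gaussian estimation-of-distribution algorithm: repeatedly
    fit a diagonal Gaussian to the best solutions and resample from it."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.full(dim, 2.0)
    for _ in range(iters):
        x = rng.normal(mean, std, size=(pop, dim))
        best = x[np.argsort([f(xi) for xi in x])[:elite]]   # elite selection
        mean = best.mean(axis=0)                            # model estimation
        std = best.std(axis=0) + 1e-12                      # avoid collapse
    return mean

sphere = lambda x: float(np.sum(x ** 2))
sol = eda_minimize(sphere, dim=3)
```

On a unimodal function this converges to the optimum; on multimodal landscapes the whole population collapses to one basin, which is exactly the failure mode the niche-level variants in the abstract address.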
Chaudhuri, Probal
1992-01-01
We consider a class of $U$-statistics type estimates for multivariate location. The estimates extend some $R$-estimates to multivariate data. In particular, the class of estimates includes the multivariate median considered by Gini and Galvani (1929) and Haldane (1948) and a multivariate extension of the well-known Hodges-Lehmann (1963) estimate. We explore large sample behavior of these estimates by deriving a Bahadur type representation for them. In the process of developing these asymptoti...
Indirect estimators in US federal programs
1996-01-01
In 1991, a subcommittee of the Federal Committee on Statistical Methodology met to document the use of indirect estimators - that is, estimators which use data drawn from a domain or time different from the domain or time for which an estimate is required. This volume comprises the eight reports which describe the use of indirect estimators, based on case studies from a variety of federal programs. As a result, many researchers will find that this book provides a valuable survey of how indirect estimators are used in practice, and one which addresses some of the pitfalls of these methods.
Parameter Estimation in Continuous Time Domain
Directory of Open Access Journals (Sweden)
Gabriela M. ATANASIU
2016-12-01
Full Text Available This paper aims to present the applications of a continuous-time parameter estimation method for estimating structural parameters of a real bridge structure. For the purpose of illustrating this method, two case studies of a bridge pile located in a high seismic risk area are considered, for which the structural parameters for mass, damping and stiffness are estimated. The estimation process is followed by validation of the analytical results and comparison with the measurement data. Further benefits and applications of the continuous-time parameter estimation method in civil engineering are presented in the final part of this paper.
Site characterization: a spatial estimation approach
International Nuclear Information System (INIS)
Candy, J.V.; Mao, N.
1980-10-01
In this report the application of spatial estimation techniques or kriging to groundwater aquifers and geological borehole data is considered. The adequacy of these techniques to reliably develop contour maps from various data sets is investigated. The estimator is developed theoretically in a simplified fashion using vector-matrix calculus. The practice of spatial estimation is discussed and the estimator is then applied to two groundwater aquifer systems and used also to investigate geological formations from borehole data. It is shown that the estimator can provide reasonable results when designed properly
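The ordinary kriging estimator described above can be sketched directly as a linear system. The linear semivariogram and the sample values below are illustrative assumptions, not data from the report; the system solved is the standard one with an unbiasedness constraint and Lagrange multiplier.

```python
import numpy as np

def ordinary_kriging(coords, values, target, slope=1.0):
    """Ordinary kriging with a linear semivariogram gamma(h) = slope*h.
    Solves the kriging system for the weights and Lagrange multiplier."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = slope * d            # semivariogram between samples
    A[n, :n] = A[:n, n] = 1.0        # weights must sum to one
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = slope * np.linalg.norm(coords - target, axis=1)
    b[n] = 1.0
    sol = np.linalg.solve(A, b)
    w = sol[:n]                      # kriging weights
    return w @ values, w

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0])
est, w = ordinary_kriging(coords, vals, np.array([0.2, 0.2]))
```

A useful sanity check is exactness: kriging at a sampled location reproduces the sampled value, since the semivariogram vanishes at zero distance.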
A Gaussian IV estimator of cointegrating relations
DEFF Research Database (Denmark)
Bårdsen, Gunnar; Haldrup, Niels
2006-01-01
In static single equation cointegration regression models the OLS estimator will have a non-standard distribution unless regressors are strictly exogenous. In the literature a number of estimators have been suggested to deal with this problem, especially by the use of semi-nonparametric estimators. ...... in cointegrating regressions. These instruments are almost ideal and simulations show that the IV estimator using such instruments alleviates the endogeneity problem extremely well in both finite and large samples.
Optimal estimation of the optomechanical coupling strength
Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André
2018-06-01
We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.
Bayesian estimation and tracking a practical guide
Haug, Anton J
2012-01-01
A practical approach to estimating and tracking dynamic systems in real-world applications. Much of the literature on performing estimation for non-Gaussian systems is short on practical methodology, while Gaussian methods often lack a cohesive derivation. Bayesian Estimation and Tracking addresses the gap in the field on both accounts, providing readers with a comprehensive overview of methods for estimating both linear and nonlinear dynamic systems driven by Gaussian and non-Gaussian noises. Featuring a unified approach to Bayesian estimation and tracking, the book emphasizes the derivation
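The simplest member of the Bayesian tracking family covered by such texts is the scalar Kalman filter. The sketch below tracks a noisy constant with a random-walk state model; the process and measurement variances are assumed values for illustration.

```python
import numpy as np

def kalman_1d(zs, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in noise:
    x_k = x_{k-1} + w_k (var q),  z_k = x_k + v_k (var r)."""
    x, p, out = x0, p0, []
    for z in zs:
        p = p + q                    # predict: variance grows by q
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the innovation
        p = (1.0 - k) * p            # posterior variance
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(2)
truth = 1.5
zs = truth + rng.normal(0.0, 0.5, 200)
est = kalman_1d(zs)
```

The filtered track settles near the true value with far less scatter than the raw measurements, which is the linear-Gaussian baseline against which the book's non-Gaussian methods are motivated.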
Budget estimates. Fiscal year 1998
International Nuclear Information System (INIS)
1997-02-01
The U.S. Congress has determined that the safe use of nuclear materials for peaceful purposes is a legitimate and important national goal. It has entrusted the Nuclear Regulatory Commission (NRC) with the primary Federal responsibility for achieving that goal. The NRC's mission, therefore, is to regulate the Nation's civilian use of byproduct, source, and special nuclear materials to ensure adequate protection of public health and safety, to promote the common defense and security, and to protect the environment. The NRC's FY 1998 budget requests new budget authority of $481,300,000 to be funded by two appropriations - one is the NRC's Salaries and Expenses appropriation for $476,500,000, and the other is NRC's Office of Inspector General appropriation for $4,800,000. Of the funds appropriated to the NRC's Salaries and Expenses, $17,000,000 shall be derived from the Nuclear Waste Fund and $2,000,000 shall be derived from general funds. The proposed FY 1998 appropriation legislation would also exempt the $2,000,000 for regulatory reviews and other assistance provided to the Department of Energy from the requirement that the NRC collect 100 percent of its budget from fees. The sums appropriated to the NRC's Salaries and Expenses and NRC's Office of Inspector General shall be reduced by the amount of revenues received during FY 1998 from licensing fees, inspection services, and other services and collections, so as to result in a final FY 1998 appropriation for the NRC of an estimated $19,000,000 - the amount appropriated from the Nuclear Waste Fund and from general funds. Revenues derived from enforcement actions shall be deposited to miscellaneous receipts of the Treasury
Optimal estimations of random fields using kriging
International Nuclear Information System (INIS)
Barua, G.
2004-01-01
Kriging is a statistical procedure for estimating the best weights of a linear estimator. Suppose there is a point or an area or a volume of ground over which we do not know a hydrological variable and wish to estimate it. In order to produce an estimator, we need some information to work on, usually available in the form of samples. There can be an infinite number of linear unbiased estimators for which the weights sum up to one. The problem is how to determine the best weights for which the estimation variance is the least. The resulting system of equations is generally known as the kriging system and the estimator produced is the kriging estimator. The variance of the kriging estimator can be found by substituting the weights in the general estimation variance equation. We assume here a linear model for the semi-variogram. Applying the model to the equation, we obtain a set of kriging equations. By solving these equations, we obtain the kriging variance. Thus, for the one-dimensional problem considered, kriging definitely gives a better estimation variance than the extension variance
Monte Carlo-based tail exponent estimator
Barunik, Jozef; Vacha, Lukas
2010-11-01
In this paper we propose a new approach to estimation of the tail exponent in financial stock markets. We begin the study with the finite sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on small samples. Utilizing our results, we introduce a Monte Carlo-based method of estimation for the tail exponent. Our proposed method is not sensitive to the choice of tail size and works well also on small data samples. The new estimator also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on the international world stock market indices over the two separate periods of 2002-2005 and 2006-2009.
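The Hill estimator that serves as the baseline in this study can be sketched in a few lines. The Pareto sample below is a synthetic illustration (exact Pareto with tail exponent 3), not the stock-market data; the paper's point is precisely that real, α-stable-like samples are less forgiving.

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the tail exponent alpha from the k largest
    order statistics of a positive sample."""
    xs = np.sort(np.asarray(x, float))[::-1]        # descending order
    return 1.0 / np.mean(np.log(xs[:k] / xs[k]))    # 1 / mean log-excess

rng = np.random.default_rng(3)
sample = 1.0 + rng.pareto(3.0, size=100_000)        # exact Pareto, alpha = 3
alpha_hat = hill_estimator(sample, k=1000)
```

For an exact Pareto tail the estimate is close to the true exponent; the choice of the tail size `k` is the sensitivity that the abstract's Monte Carlo-based method is designed to avoid.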
Robust bearing estimation for 3-component stations
International Nuclear Information System (INIS)
CLAASSEN, JOHN P.
2000-01-01
A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The algorithm was applied to a small but challenging set of events in a seismically active region. It demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings
Iterative Estimation in Turbo Equalization Process
Directory of Open Access Journals (Sweden)
MORGOS Lucian
2014-05-01
Full Text Available This paper presents iterative estimation in the turbo equalization process. Turbo equalization is a reception process in which equalization and decoding are performed together, not as separate steps. For the equalizer to work properly, it must receive, before equalization, accurate information about the channel impulse response. This estimation of the channel impulse response is done by transmitting a training sequence known at the receiver. Knowing both the transmitted and received sequences, the estimated value of the channel impulse response can be calculated using one of the well-known estimation algorithms. The estimate can also be iteratively recalculated based on the data sequence available at the channel output and the estimated data sequence coming from the turbo equalizer output, thereby refining the obtained results.
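The training-sequence step described above is, in its simplest form, a least-squares fit of an FIR channel. The sketch below is one of the "well-known estimation algorithms" the abstract alludes to, with assumed tap values and noise level; it is not the paper's iterative turbo loop.

```python
import numpy as np

def estimate_channel(train, received, taps):
    """Least-squares estimate of an FIR channel impulse response from a
    known training sequence and the corresponding received samples."""
    n = len(received)
    # Convolution (Toeplitz) matrix built from the training sequence.
    A = np.array([[train[i - j] if 0 <= i - j < len(train) else 0.0
                   for j in range(taps)] for i in range(n)])
    h, *_ = np.linalg.lstsq(A, received, rcond=None)
    return h

rng = np.random.default_rng(4)
train = rng.choice([-1.0, 1.0], size=64)            # known training symbols
h_true = np.array([1.0, 0.5, -0.2])                 # assumed 3-tap channel
rx = np.convolve(train, h_true) + rng.normal(0, 0.01, 64 + 2)
h_hat = estimate_channel(train, rx, taps=3)
```

In a turbo receiver this estimate would then be refined each iteration using the equalizer's re-estimated data sequence in place of the training symbols.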
Weighted conditional least-squares estimation
International Nuclear Information System (INIS)
Booth, J.G.
1987-01-01
A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered
COVARIANCE ASSISTED SCREENING AND ESTIMATION.
Ke, By Tracy; Jin, Jiashun; Fan, Jianqing
2014-11-01
Consider a linear model Y = X β + z , where X = X n,p and z ~ N (0, I n ). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X ' X is non-sparse but sparsifiable by a finite order linear filter. We focus on the regime where signals are both rare and weak so that successful variable selection is very challenging but is still possible. We approach this problem by a new procedure called the Covariance Assisted Screening and Estimation (CASE). CASE first uses a linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we know where they are!). Linear filtering also induces a so-called problem of information leakage , which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening , and then re-examine each candidate to remove false positives. For any procedure β̂ for variable selection, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.
Atmospheric Turbulence Estimates from a Pulsed Lidar
Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.
2013-01-01
Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.
Cosmochemical Estimates of Mantle Composition
Palme, H.; O'Neill, H. St. C.
2003-12-01
, and a crust. Both Daubrée and Boisse also expected that the Earth was composed of a similar sequence of concentric layers (see Burke, 1986; Marvin, 1996). At the beginning of the twentieth century, Harkins at the University of Chicago thought that meteorites would provide a better estimate of the bulk composition of the Earth than the terrestrial rocks collected at the surface, since we have access only to the "mere skin" of the Earth. Harkins made an attempt to reconstruct the composition of the hypothetical meteorite planet by compiling compositional data for 125 stony and 318 iron meteorites, and mixing the two components in ratios based on the observed falls of stones and irons. The results confirmed his prediction that elements with even atomic numbers are more abundant and therefore more stable than those with odd atomic numbers, and he concluded that the elemental abundances in the bulk meteorite planet are determined by nucleosynthetic processes. For his meteorite planet Harkins calculated Mg/Si, Al/Si, and Fe/Si atomic ratios of 0.86, 0.079, and 0.83, very closely resembling the corresponding ratios of the average solar system based on presently known element abundances in the Sun and in CI-meteorites (see Burke, 1986). If the Earth were similar compositionally to the meteorite planet, it should have a similarly high iron content, which requires that the major fraction of iron be concentrated in the interior of the Earth. The presence of a central metallic core to the Earth was suggested by Wiechert in 1897. The existence of the core was firmly established through the study of seismic wave propagation by Oldham in 1906, with the outer boundary of the core accurately located at a depth of 2,900 km by Beno Gutenberg in 1913. In 1926 the fluidity of the outer core was finally accepted.
The high density of the core and the high abundance of iron and nickel in meteorites led very early to the suggestion that iron and nickel are the dominant elements in the Earth's core (Brush
Entropy estimates of small data sets
Energy Technology Data Exchange (ETDEWEB)
Bonachela, Juan A; Munoz, Miguel A [Departamento de Electromagnetismo y Fisica de la Materia and Instituto de Fisica Teorica y Computacional Carlos I, Facultad de Ciencias, Universidad de Granada, 18071 Granada (Spain); Hinrichsen, Haye [Fakultaet fuer Physik und Astronomie, Universitaet Wuerzburg, Am Hubland, 97074 Wuerzburg (Germany)
2008-05-23
Estimating entropies from limited data series is known to be a non-trivial task. Naive estimations are plagued with both systematic (bias) and statistical errors. Here, we present a new 'balanced estimator' for entropy functionals (Shannon, Renyi and Tsallis) specially devised to provide a compromise between low bias and small statistical errors, for short data series. This new estimator outperforms other currently available ones when the data sets are small and the probabilities of the possible outputs of the random variable are not close to zero. Otherwise, other well-known estimators remain a better choice. The potential range of applicability of this estimator is quite broad, especially for biological and digital data series. (fast track communication)
Entropy estimates of small data sets
International Nuclear Information System (INIS)
Bonachela, Juan A; Munoz, Miguel A; Hinrichsen, Haye
2008-01-01
Estimating entropies from limited data series is known to be a non-trivial task. Naive estimations are plagued with both systematic (bias) and statistical errors. Here, we present a new 'balanced estimator' for entropy functionals (Shannon, Renyi and Tsallis) specially devised to provide a compromise between low bias and small statistical errors, for short data series. This new estimator outperforms other currently available ones when the data sets are small and the probabilities of the possible outputs of the random variable are not close to zero. Otherwise, other well-known estimators remain a better choice. The potential range of applicability of this estimator is quite broad, especially for biological and digital data series. (fast track communication)
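The bias that plagues naive estimators is easy to reproduce. The sketch below uses the classical plug-in Shannon estimator and the Miller-Madow correction as stand-ins (an assumption for illustration; it is not the balanced estimator proposed in the record above):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, trials = 4, 20, 2000
true_H = np.log(K)                 # Shannon entropy of a uniform K-symbol source

plugin, corrected = [], []
for _ in range(trials):
    counts = np.bincount(rng.integers(0, K, size=N), minlength=K)
    p = counts[counts > 0] / N
    H = -np.sum(p * np.log(p))     # naive plug-in estimate
    plugin.append(H)
    # Miller-Madow first-order bias correction: add (observed symbols - 1) / (2N)
    corrected.append(H + (np.count_nonzero(counts) - 1) / (2 * N))

print(round(true_H, 3))            # 1.386
print(round(np.mean(plugin), 3))   # systematically below true_H (negative bias)
print(round(np.mean(corrected), 3))  # much closer to true_H
```

For N = 20 samples the plug-in estimator underestimates the entropy by roughly (K-1)/(2N) nats on average, which is exactly the regime the balanced estimator targets.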
Relative Pose Estimation Algorithm with Gyroscope Sensor
Directory of Open Access Journals (Sweden)
Shanshan Wei
2016-01-01
This paper proposes a novel vision and inertial fusion algorithm, S2fM (Simplified Structure from Motion), for camera relative pose estimation. Different from current existing algorithms, our algorithm estimates the rotation parameter and the translation parameter separately. S2fM employs gyroscopes to estimate the camera rotation parameter, which is later fused with the image data to estimate the camera translation parameter. Our contributions are in two aspects. (1) Given that no inertial sensor can estimate the translation parameter accurately enough, we propose a translation estimation algorithm that fuses gyroscope sensor and image data. (2) Our S2fM algorithm is efficient and suitable for smart devices. Experimental results validate the efficiency of the proposed S2fM algorithm.
Nondestructive, stereological estimation of canopy surface area
DEFF Research Database (Denmark)
Wulfsohn, Dvora-Laio; Sciortino, Marco; Aaslyng, Jesper M.
2010-01-01
We describe a stereological procedure to estimate the total leaf surface area of a plant canopy in vivo, and address the problem of how to predict the variance of the corresponding estimator. The procedure involves three nested systematic uniform random sampling stages: (i) selection of plants from a canopy using the smooth fractionator, (ii) sampling of leaves from the selected plants using the fractionator, and (iii) area estimation of the sampled leaves using point counting. We apply this procedure to estimate the total area of a chrysanthemum (Chrysanthemum morifolium L.) canopy and evaluate both the time required and the precision of the estimator. Furthermore, we compare the precision of point counting for three different grid intensities with that of several standard leaf area measurement techniques. Results showed that the precision of the plant leaf area estimator based on point counting...
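Point counting, the third sampling stage above, estimates an area as (number of grid points hitting the object) × (area represented by each point). A minimal sketch, with a disc of known area standing in for a leaf (my own illustrative example, not the paper's protocol):

```python
import numpy as np

# Point counting: overlay a regular grid and count hits on the object.
d = 0.01                       # grid spacing; each point represents d*d area
ax = np.arange(-1.5, 1.5, d)
gx, gy = np.meshgrid(ax, ax)

hits = np.count_nonzero(gx**2 + gy**2 <= 1.0)  # grid points inside unit disc
area_estimate = hits * d * d                   # close to pi (~3.1416)
print(area_estimate)
```

Finer grids give higher precision at the cost of more counting, which is the trade-off the record above evaluates across three grid intensities.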
Resilient Distributed Estimation Through Adversary Detection
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2018-05-01
This paper studies resilient multi-agent distributed estimation of an unknown vector parameter when a subset of the agents is adversarial. We present and analyze a Flag Raising Distributed Estimator ($\\mathcal{FRDE}$) that allows the agents under attack to perform accurate parameter estimation and detect the adversarial agents. The $\\mathcal{FRDE}$ algorithm is a consensus+innovations estimator in which agents combine estimates of neighboring agents (consensus) with local sensing information (innovations). We establish that, under $\\mathcal{FRDE}$, either the uncompromised agents' estimates are almost surely consistent or the uncompromised agents detect compromised agents if and only if the network of uncompromised agents is connected and globally observable. Numerical examples illustrate the performance of $\\mathcal{FRDE}$.
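The consensus+innovations update underlying such estimators can be sketched in a few lines. This is a toy scalar version on a ring network of honest agents (my own construction; the FRDE algorithm adds flag-raising adversary detection on top of this kind of update):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2.0                        # unknown scalar parameter
n_agents, steps = 5, 2000
x = np.zeros(n_agents)             # agents' current estimates
beta = 0.2                         # consensus weight

for t in range(steps):
    y = theta + 0.5 * rng.standard_normal(n_agents)  # local noisy sensing
    alpha = 1.0 / (t + 1)                            # decaying innovation gain
    # consensus: pull toward ring neighbors; innovations: pull toward data
    consensus = (np.roll(x, 1) - x) + (np.roll(x, -1) - x)
    x = x + beta * consensus + alpha * (y - x)

print(np.max(np.abs(x - theta)))   # every agent ends close to theta
```

Connectivity matters here exactly as the record states: if the ring were disconnected, agents in a component without enough observations could not converge to the true parameter.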
ESTIMATION ACCURACY OF EXPONENTIAL DISTRIBUTION PARAMETERS
Directory of Open Access Journals (Sweden)
muhammad zahid rashid
2011-04-01
The exponential distribution is commonly used to model the behavior of units that have a constant failure rate. The two-parameter exponential distribution provides a simple but nevertheless useful model for the analysis of lifetimes, especially when investigating the reliability of technical equipment. This paper is concerned with estimation of the parameters of the two-parameter (location and scale) exponential distribution. We used the least squares method (LSM), relative least squares method (RELS), ridge regression method (RR), moment estimators (ME), modified moment estimators (MME), maximum likelihood estimators (MLE) and modified maximum likelihood estimators (MMLE). We used the mean square error (MSE) and total deviation (TD) as measures for the comparison between these methods, and determined the best method for estimation using different values of the parameters and different sample sizes.
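As a concrete reference point among the estimators compared above, the maximum likelihood estimators for the two-parameter exponential distribution have a closed form: the location estimate is the sample minimum and the scale estimate is the sample mean minus that minimum. A quick simulation (illustrative only; parameter values are my own choice):

```python
import numpy as np

rng = np.random.default_rng(7)
mu, theta, n = 2.0, 3.0, 5000       # location, scale, sample size

# Draw from the shifted exponential: density (1/theta) exp(-(x - mu)/theta)
x = mu + rng.exponential(theta, size=n)

mu_hat = x.min()                    # MLE of the location parameter
theta_hat = x.mean() - x.min()      # MLE of the scale parameter

print(round(mu_hat, 2), round(theta_hat, 2))  # close to (2.0, 3.0)
```

Note that mu_hat is biased upward by about theta/n, which is one reason the paper also considers modified moment and modified maximum likelihood estimators.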
Estimating the Doppler centroid of SAR data
DEFF Research Database (Denmark)
Madsen, Søren Nørvang
1989-01-01
After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found...
Science yield estimation for AFTA coronagraphs
Traub, Wesley A.; Belikov, Ruslan; Guyon, Olivier; Kasdin, N. Jeremy; Krist, John; Macintosh, Bruce; Mennesson, Bertrand; Savransky, Dmitry; Shao, Michael; Serabyn, Eugene; Trauger, John
2014-08-01
We describe the algorithms and results of an estimation of the science yield for five candidate coronagraph designs for the WFIRST-AFTA space mission. The targets considered are of three types, known radial-velocity planets, expected but as yet undiscovered exoplanets, and debris disks, all around nearby stars. The results of the original estimation are given, as well as those from subsequently updated designs that take advantage of experience from the initial estimates.
Estimating Elevation Angles From SAR Crosstalk
Freeman, Anthony
1994-01-01
Scheme for processing polarimetric synthetic-aperture-radar (SAR) image data yields estimates of elevation angles along radar beam to target resolution cells. By use of estimated elevation angles, measured distances along radar beam to targets (slant ranges), and measured altitude of aircraft carrying SAR equipment, one can estimate height of target terrain in each resolution cell. Monopulselike scheme yields low-resolution topographical data.
Robust motion estimation using connected operators
Salembier Clairon, Philippe Jean; Sanson, H
1997-01-01
This paper discusses the use of connected operators for robust motion estimation. The proposed strategy involves a motion estimation step extracting the dominant motion and a filtering step relying on connected operators that remove objects that do not follow the dominant motion. These two steps are iterated in order to obtain an accurate motion estimation and a precise definition of the objects following this motion. This strategy can be applied on the entire frame or on individual connected c...
Application of spreadsheet to estimate infiltration parameters
Zakwan, Mohammad; Muzzammil, Mohammad; Alam, Javed
2016-01-01
Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and design of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach ...
Dynamic Diffusion Estimation in Exponential Family Models
Czech Academy of Sciences Publication Activity Database
Dedecius, Kamil; Sečkárová, Vladimíra
2013-01-01
Vol. 20, No. 11 (2013), pp. 1114-1117. ISSN 1070-9908. R&D Projects: GA MŠk 7D12004; GA ČR GA13-13502S. Keywords: diffusion estimation * distributed estimation * parameter estimation. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 1.639, year: 2013. http://library.utia.cas.cz/separaty/2013/AS/dedecius-0396518.pdf
State energy data report 1994: Consumption estimates
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-10-01
This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included.
Self-learning estimation of quantum states
International Nuclear Information System (INIS)
Hannemann, Th.; Reiss, D.; Balzer, Ch.; Neuhauser, W.; Toschek, P.E.; Wunderlich, Ch.
2002-01-01
We report the experimental estimation of arbitrary qubit states using a succession of N measurements on individual qubits, where the measurement basis is changed during the estimation procedure conditioned on the outcome of previous measurements (self-learning estimation). Two hyperfine states of a single trapped 171Yb+ ion serve as a qubit. It is demonstrated that the difference in fidelity between this adaptive strategy and passive strategies increases in the presence of decoherence.
Estimation of Correlation Functions by Random Decrement
DEFF Research Database (Denmark)
Asmussen, J. C.; Brincker, Rune
This paper illustrates how correlation functions can be estimated by the random decrement technique. Several different formulations of the random decrement technique, estimating the correlation functions are considered. The speed and accuracy of the different formulations of the random decrement...... and the length of the correlation functions. The accuracy of the estimates with respect to the theoretical correlation functions and the modal parameters are both investigated. The modal parameters are extracted from the correlation functions using the polyreference time domain technique....
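The random decrement idea is simple to sketch: collect every time point where the response satisfies a triggering condition (here a level crossing) and average the segments that follow; the averaged signature is proportional to the correlation function. A toy version on a simulated AR(2) (damped-oscillator-like) response, my own illustration rather than any of the paper's specific formulations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a lightly damped SDOF-like response as an AR(2) process.
N = 200_000
rho, w = 0.97, 0.3
a1, a2 = 2 * rho * np.cos(w), -rho**2
x = np.zeros(N)
for t in range(2, N):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()

# Random decrement: average the L samples following each level trigger.
L = 100
level = 1.5 * x.std()
triggers = np.flatnonzero(x[:-L] >= level)
signature = x[triggers[:, None] + np.arange(L)].mean(axis=0)

print(signature[0] >= level)                    # True: starts at the trigger level
print(abs(signature[-1]) < 0.5 * signature[0])  # True: decays like R(tau)
```

Different trigger conditions (level crossing, positive point, zero crossing with positive slope) give the different formulations whose speed and accuracy the record above compares.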
State energy data report 1994: Consumption estimates
International Nuclear Information System (INIS)
1996-10-01
This document provides annual time series estimates of State-level energy consumption by major economic sector. The estimates are developed in the State Energy Data System (SEDS), operated by EIA. SEDS provides State energy consumption estimates to members of Congress, Federal and State agencies, and the general public, and provides the historical series needed for EIA's energy models. Division is made for each energy type and end use sector. Nuclear electric power is included
UAV State Estimation Modeling Techniques in AHRS
Razali, Shikin; Zhahir, Amzari
2017-11-01
An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and helps achieve the flight mission safely. One sensor configuration used for UAV state estimation is the Attitude and Heading Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
Improved diagnostic model for estimating wind energy
Energy Technology Data Exchange (ETDEWEB)
Endlich, R.M.; Lee, J.D.
1983-03-01
Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.
Outer planet probe cost estimates: First impressions
Niehoff, J.
1974-01-01
An examination was made of early estimates of outer planetary atmospheric probe cost by comparing the estimates with past planetary projects. Of particular interest is identification of project elements which are likely cost drivers for future probe missions. Data are divided into two parts: first, the description of a cost model developed by SAI for the Planetary Programs Office of NASA, and second, use of this model and its data base to evaluate estimates of probe costs. Several observations are offered in conclusion regarding the credibility of current estimates and specific areas of the outer planet probe concept most vulnerable to cost escalation.
Application of spreadsheet to estimate infiltration parameters
Directory of Open Access Journals (Sweden)
Mohammad Zakwan
2016-09-01
Infiltration is the process of flow of water into the ground through the soil surface. Although soil water contributes a negligible fraction of the total water present on the earth's surface, it is of utmost importance for plant life. Estimation of infiltration rates is of paramount importance for estimation of effective rainfall, groundwater recharge, and design of irrigation systems. Numerous infiltration models are in use for estimation of infiltration rates. The conventional graphical approach for estimation of infiltration parameters often fails to estimate the infiltration parameters precisely. The generalised reduced gradient (GRG) solver is reported to be a powerful tool for estimating the parameters of nonlinear equations and has therefore been implemented to estimate the infiltration parameters in the present paper. Field data of infiltration rates available in the literature for sandy loam soils of Umuahia, Nigeria, were used to evaluate the performance of the GRG solver. A comparative study of the graphical method and the GRG solver shows that the performance of the GRG solver is better than that of the conventional graphical method for estimation of infiltration rates. Further, the performance of the Kostiakov model was found to be better than that of the Horton and Philip models in most cases, under both approaches to parameter estimation.
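A minimal illustration of the fitting problem (my own sketch; the study uses the GRG nonlinear solver, while here ordinary least squares on the log-transformed model stands in): the Kostiakov cumulative infiltration F(t) = k·t^a becomes linear after taking logarithms, log F = log k + a·log t.

```python
import numpy as np

# Synthetic infiltration data from a Kostiakov model F(t) = k * t**a
# (k and a chosen arbitrarily for illustration, not field values).
k_true, a_true = 4.0, 0.6
t = np.array([5, 10, 20, 30, 60, 90, 120], dtype=float)  # elapsed time, minutes
F = k_true * t**a_true                                   # cumulative infiltration

# Linearize: log F = log k + a log t, then fit a line by least squares.
a_hat, logk_hat = np.polyfit(np.log(t), np.log(F), 1)
k_hat = np.exp(logk_hat)

print(round(k_hat, 3), round(a_hat, 3))  # recovers (4.0, 0.6) on noise-free data
```

With noisy field data the log transform distorts the error structure, which is why a direct nonlinear solver such as GRG can outperform this graphical/linearized approach.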
Estimation of Conditional Quantile using Neural Networks
DEFF Research Database (Denmark)
Kulczycki, P.; Schiøler, Henrik
1999-01-01
The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation, whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating the capabilities of the elaborated neural network are also given.
Track length estimation applied to point detectors
International Nuclear Information System (INIS)
Rief, H.; Dubi, A.; Elperin, T.
1984-01-01
The concept of the track length estimator is applied to the uncollided point flux estimator (UCF) leading to a new algorithm of calculating fluxes at a point. It consists essentially of a line integral of the UCF, and although its variance is unbounded, the convergence rate is that of a bounded variance estimator. In certain applications, involving detector points in the vicinity of collimated beam sources, it has a lower variance than the once-more-collided point flux estimator, and its application is more straightforward
OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS
Energy Technology Data Exchange (ETDEWEB)
Johnson, M. D.; Chou, H. H.; Gwinn, C. R., E-mail: michaeltdh@physics.ucsb.edu, E-mail: cgwinn@physics.ucsb.edu [Department of Physics, University of California, Santa Barbara, CA 93106 (United States)
2013-03-10
Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.
Linear Covariance Analysis and Epoch State Estimators
Markley, F. Landis; Carpenter, J. Russell
2014-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
Surface tensor estimation from linear sections
DEFF Research Database (Denmark)
Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel
From Crofton's formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design based setting we suggest three types of estimators. These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model based setting.
Surface tensor estimation from linear sections
DEFF Research Database (Denmark)
Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel
2015-01-01
From Crofton's formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design based setting we suggest three types of estimators. These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model based setting.
OPTIMAL CORRELATION ESTIMATORS FOR QUANTIZED SIGNALS
International Nuclear Information System (INIS)
Johnson, M. D.; Chou, H. H.; Gwinn, C. R.
2013-01-01
Using a maximum-likelihood criterion, we derive optimal correlation strategies for signals with and without digitization. We assume that the signals are drawn from zero-mean Gaussian distributions, as is expected in radio-astronomical applications, and we present correlation estimators both with and without a priori knowledge of the signal variances. We demonstrate that traditional estimators of correlation, which rely on averaging products, exhibit large and paradoxical noise when the correlation is strong. However, we also show that these estimators are fully optimal in the limit of vanishing correlation. We calculate the bias and noise in each of these estimators and discuss their suitability for implementation in modern digital correlators.
Load Estimation from Natural input Modal Analysis
DEFF Research Database (Denmark)
Aenlle, Manuel López; Brincker, Rune; Canteli, Alfonso Fernández
2005-01-01
One application of Natural Input Modal Analysis consists in estimating the unknown load acting on structures such as wind loads, wave loads, traffic loads, etc. In this paper, a procedure to determine loading from a truncated modal model, as well as the results of an experimental testing programme...... estimation. In the experimental program a small structure subjected to vibration was used to estimate the loading from the measurements and the experimental modal space. The modal parameters were estimated by Natural Input Modal Analysis and the scaling factors of the mode shapes obtained by the mass change...
Towards Greater Harmonisation of Decommissioning Cost Estimates
International Nuclear Information System (INIS)
O'Sullivan, Patrick; ); Laraia, Michele; ); LaGuardia, Thomas S.
2010-01-01
The NEA Decommissioning Cost Estimation Group (DCEG), in collaboration with the IAEA Waste Technology Section and the EC Directorate-General for Energy and Transport, has recently studied cost estimation practices in 12 countries - Belgium, Canada, France, Germany, Italy, Japan, the Netherlands, Slovakia, Spain, Sweden, the United Kingdom and the United States. Its findings are to be published in an OECD/NEA report entitled Cost Estimation for Decommissioning: An International Overview of Cost Elements, Estimation Practices and Reporting Requirements. This booklet highlights the findings contained in the full report. (authors)
Accuracy of prehospital transport time estimation.
Wallace, David J; Kahn, Jeremy M; Angus, Derek C; Martin-Gill, Christian; Callaway, Clifton W; Rea, Thomas D; Chhatwal, Jagpreet; Kurland, Kristen; Seymour, Christopher W
2014-01-01
Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as well as the proportion of estimated times that were within specified error thresholds. Based on the primary results, a regression estimate that incorporated population density, time of day, and season was used to assess improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant (p < …; … for Google Maps, and 11.6 [±10.9] minutes for ArcGIS). Estimates were within 5 minutes of observed transport time for 79% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in hospital catchment areas estimated by each method. Route-based transport time estimates demonstrate moderate accuracy. These methods can be valuable for informing a host of decisions related to system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
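The error metrics used above are straightforward to compute once observed and estimated transport times are paired; a minimal sketch with made-up numbers (the 5-minute threshold mirrors the study's "within 5 minutes" criterion):

```python
import numpy as np

# Hypothetical paired transport times in minutes (illustrative values only).
observed  = np.array([10.0, 20.0, 30.0, 15.0])
estimated = np.array([12.0, 18.0, 35.0, 15.0])

abs_error = np.abs(observed - estimated)
mae = abs_error.mean()                        # mean absolute error
within_5 = 100.0 * np.mean(abs_error <= 5.0)  # percent within 5 minutes

print(mae)        # 2.25
print(within_5)   # 100.0
```

The same two summaries (mean absolute error and percent within a threshold) are what allow the study to rank linear arc, Google Maps, and ArcGIS estimates.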
Cost Estimating Handbook for Environmental Restoration
International Nuclear Information System (INIS)
1993-01-01
Environmental restoration (ER) projects have presented the DOE and cost estimators with a number of properties that are not comparable to the normal estimating climate within DOE. These properties include: An entirely new set of specialized expressions and terminology. A higher than normal exposure to cost and schedule risk, as compared to most other DOE projects, due to changing regulations, public involvement, resource shortages, and scope of work. A higher than normal percentage of indirect costs to the total estimated cost due primarily to record keeping, special training, liability, and indemnification. More than one estimate for a project, particularly in the assessment phase, in order to provide input into the evaluation of alternatives for the cleanup action. While some aspects of existing guidance for cost estimators will be applicable to environmental restoration projects, some components of the present guidelines will have to be modified to reflect the unique elements of these projects. The purpose of this Handbook is to assist cost estimators in the preparation of environmental restoration estimates for Environmental Restoration and Waste Management (EM) projects undertaken by DOE. The DOE has, in recent years, seen a significant increase in the number, size, and frequency of environmental restoration projects that must be costed by the various DOE offices. The coming years will show the EM program to be the largest non-weapons program undertaken by DOE. These projects create new and unique estimating requirements since historical cost and estimating precedents are meager at best. It is anticipated that this Handbook will enhance the quality of cost data within DOE in several ways by providing: The basis for accurate, consistent, and traceable baselines. Sound methodologies, guidelines, and estimating formats. Sources of cost data/databases and estimating tools and techniques available to DOE cost professionals.
L'estime de soi : un cas particulier d'estime sociale ? (Self-esteem: a particular case of social esteem?)
Santarelli, Matteo
2016-01-01
One of the most original features of Axel Honneth's intersubjective theory of recognition is the way it discusses the relation between social esteem and self-esteem. In particular, Honneth presents self-esteem as a reflection of social esteem at the individual level. In this article, I discuss this conception by asking the following question: is self-esteem a particular case of social esteem? To do so, I focus on two crucial...
Generalized Jackknife Estimators of Weighted Average Derivatives
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...
The Problems of Multiple Feedback Estimation.
Bulcock, Jeffrey W.
The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that what is needed is use of a different estimating criterion than the least squares criterion. Theoretically the variance normalization criterion has…
Spectral Estimation by the Random Dec Technique
DEFF Research Database (Denmark)
Brincker, Rune; Jensen, Jacob L.; Krenk, Steen
1990-01-01
This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...
Spectral Estimation by the Random DEC Technique
DEFF Research Database (Denmark)
Brincker, Rune; Jensen, J. Laigaard; Krenk, S.
This paper contains an empirical study of the accuracy of the Random Dec (RDD) technique. Realizations of the response from a single-degree-of-freedom system loaded by white noise are simulated using an ARMA model. The Autocorrelation function is estimated using the RDD technique and the estimated...
Least-squares variance component estimation
Teunissen, P.J.G.; Amiri-Simkooei, A.R.
2007-01-01
Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight
Fuel Burn Estimation Using Real Track Data
Chatterji, Gano B.
2011-01-01
A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
Uncertainty Measures of Regional Flood Frequency Estimators
DEFF Research Database (Denmark)
Rosbjerg, Dan; Madsen, Henrik
1995-01-01
Regional flood frequency models have different assumptions regarding homogeneity and inter-site independence. Thus, uncertainty measures of T-year event estimators are not directly comparable. However, having chosen a particular method, the reliability of the estimate should always be stated, e...
Multisensor simultaneous vehicle tracking and shape estimation
Elfring, J.; Appeldoorn, R.P.W.; Kwakkernaat, M.R.J.A.E.
2016-01-01
This work focuses on vehicle automation applications that require the estimation of both kinematic and geometric information of surrounding vehicles, e.g., automated overtaking or merging. Rather than using one sensor that is able to estimate a vehicle's geometry from each sensor frame, e.g., a
Decommissioning Cost Estimating - The "PRICE" Approach
International Nuclear Information System (INIS)
Manning, R.; Gilmour, J.
2002-01-01
Over the past 9 years UKAEA has developed a formalized approach to decommissioning cost estimating. The estimating methodology and computer-based application are known collectively as the PRICE system. At the heart of the system is a database (the knowledge base) which holds resource demand data on a comprehensive range of decommissioning activities. This data is used in conjunction with project-specific information (the quantities of specific components) to produce decommissioning cost estimates. PRICE is a dynamic cost-estimating tool, which can satisfy both strategic planning and project management needs. With a relatively limited analysis a basic PRICE estimate can be produced and used for the purposes of strategic planning. This same estimate can be enhanced and improved, primarily by the improvement of detail, to support sanction expenditure proposals, and also as a tender assessment and project management tool. The paper will: describe the principles of the PRICE estimating system; report on the experiences of applying the system to a wide range of projects, from contaminated car parks to nuclear reactors; and provide information on the performance of the system in relation to historic estimates, tender bids, and outturn costs.
Estimation of biochemical variables using quantum-behaved particle ...
African Journals Online (AJOL)
To generate a more efficient neural network estimator, we employed the previously proposed quantum-behaved particle swarm optimization (QPSO) algorithm for neural network training. The experiment results of L-glutamic acid fermentation process showed that our established estimator could predict variables such as the ...
Estimated water use in Puerto Rico, 2010
Molina-Rivera, Wanda L.
2014-01-01
Water-use data were aggregated for the 78 municipios of the Commonwealth of Puerto Rico for 2010. Five major offstream categories were considered: public-supply water withdrawals and deliveries, domestic and industrial self-supplied water use, crop-irrigation water use, and thermoelectric-power freshwater use. One instream water-use category also was compiled: power-generation instream water use (thermoelectric saline withdrawals and hydroelectric power). Freshwater withdrawals for offstream use from surface-water [606 million gallons per day (Mgal/d)] and groundwater (118 Mgal/d) sources in Puerto Rico were estimated at 724 million gallons per day. The largest amount of freshwater withdrawn was by public-supply water facilities, estimated at 677 Mgal/d. Public-supply domestic water use was estimated at 206 Mgal/d. Fresh groundwater withdrawals by domestic self-supplied users were estimated at 2.41 Mgal/d. Industrial self-supplied withdrawals were estimated at 4.30 Mgal/d. Withdrawals for crop irrigation purposes were estimated at 38.2 Mgal/d, or approximately 5 percent of all offstream freshwater withdrawals. Instream freshwater withdrawals by hydroelectric facilities were estimated at 556 Mgal/d, and saline instream surface-water withdrawals for cooling purposes by thermoelectric-power facilities were estimated at 2,262 Mgal/d.
Statistical inference based on latent ability estimates
Hoijtink, H.J.A.; Boomsma, A.
The quality of approximations to first and second order moments (e.g., statistics like means, variances, regression coefficients) based on latent ability estimates is discussed. The ability estimates are obtained using either the Rasch or the two-parameter logistic model. Straightforward use
Uranium mill tailings and risk estimation
International Nuclear Information System (INIS)
Marks, S.
1984-04-01
Work done in estimating projected health effects for persons exposed to mill tailings at vicinity properties is described. The effect of the reassessment of exposures at Hiroshima and Nagasaki on the risk estimates for gamma radiation is discussed. A presentation of current results in the epidemiological study of Hanford workers is included. 2 references
New U.S. Foodborne Illness Estimate
Centers for Disease Control (CDC) Podcasts
This podcast discusses CDC's report on new estimates of illnesses due to eating contaminated food in the United States. Dr. Elaine Scallan, assistant professor at the University of Colorado and former lead of the CDC's FoodNet surveillance system, shares the details from the first new comprehensive estimates of foodborne illness in the U.S. since 1999.
Estimating light-vehicle sales in Turkey
Directory of Open Access Journals (Sweden)
Ufuk Demiroğlu
2016-09-01
This paper is motivated by the surprisingly rapid growth of new light-vehicle sales in Turkey in 2015. Domestic sales grew 25%, dramatically surpassing the industry estimates of around 8%. Our approach is to inform the sales trend estimate with the information obtained from the light-vehicle stock (the number of cars and light trucks officially registered in the country) and the scrappage data. More specifically, we improve the sales trend estimate by estimating the trend of its stock. Using household data, we show that an important reason for the rapid sales growth is that an increasing share of household budgets is spent on automobile purchases. The elasticity of light-vehicle sales to cyclical changes in aggregate demand is high and robust; its estimates are around 6 with a standard deviation of about 0.5. The price elasticity of light-vehicle sales is estimated to be about 0.8, but the estimates are imprecise and not robust. We estimate the trend level of light-vehicle sales to be roughly 7 percent of the existing stock. A remarkable out-of-sample forecast performance is obtained for horizons up to nearly a decade by a regression equation using only a cyclical gap measure, the time trend and obvious policy dummies. Various specifications suggest that the strong 2015 growth of light-vehicle sales was predictable in late 2014.
TP89 - SIRZ Decomposition Spectral Estimation
Energy Technology Data Exchange (ETDEWEB)
Seetho, Isacc M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Azevedo, Steve [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, Jerel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, William D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martz, Jr., Harry E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-12-08
The primary objective of this test plan is to provide X-ray CT measurements of known materials for the purposes of generating and testing MicroCT and EDS spectral estimates. These estimates are to be used in subsequent Ze/RhoE decomposition analyses of acquired data.
Efficient Estimating Functions for Stochastic Differential Equations
DEFF Research Database (Denmark)
Jakobsen, Nina Munkholt
The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over...
Estimating Conditional Distributions by Neural Networks
DEFF Research Database (Denmark)
Kulczycki, P.; Schiøler, Henrik
1998-01-01
Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and the consistency property is considered from a mild set of assumptions. A number of applications...
Velocity Estimation in Medical Ultrasound [Life Sciences
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt; Villagómez Hoyos, Carlos Armando; Holbek, Simon
2017-01-01
This article describes the application of signal processing in medical ultrasound velocity estimation. Special emphasis is on the relation among acquisition methods, signal processing, and estimators employed. The description spans from current clinical systems for one- and two-dimensional (1-D an...
Varieties of Quantity Estimation in Children
Sella, Francesco; Berteletti, Ilaria; Lucangeli, Daniela; Zorzi, Marco
2015-01-01
In the number-to-position task, with increasing age and numerical expertise, children's pattern of estimates shifts from a biased (nonlinear) to a formal (linear) mapping. This widely replicated finding concerns symbolic numbers, whereas less is known about other types of quantity estimation. In Experiment 1, Preschool, Grade 1, and Grade 3…
Estimating functions for inhomogeneous Cox processes
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2006-01-01
Estimation methods are reviewed for inhomogeneous Cox processes with tractable first and second order properties. We illustrate the various suggestions by means of data examples.
Kalman filter to update forest cover estimates
Raymond L. Czaplewski
1990-01-01
The Kalman filter is a statistical estimator that combines a time-series of independent estimates, using a prediction model that describes expected changes in the state of a system over time. An expensive inventory can be updated using model predictions that are adjusted with more recent, but less expensive and precise, monitoring data. The concepts of the Kalman...
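The scalar form of the update described above is compact enough to sketch: the model-predicted estimate is blended with a newer, noisier monitoring estimate, weighted by their variances. The numbers below are purely illustrative (e.g., percent forest cover), not values from an actual inventory.

```python
def kalman_update(pred, pred_var, obs, obs_var):
    """Combine a model prediction with an independent observation."""
    k = pred_var / (pred_var + obs_var)   # Kalman gain: trust in the new data
    est = pred + k * (obs - pred)         # blended estimate
    est_var = (1 - k) * pred_var          # uncertainty shrinks after the update
    return est, est_var

# Illustrative: precise-but-old inventory projected forward (variance 9)
# updated with cheap recent monitoring data (variance 3)
est, var = kalman_update(pred=62.0, pred_var=9.0, obs=58.0, obs_var=3.0)
```

The updated variance is smaller than either input variance, which is exactly why updating an expensive inventory with cheaper monitoring data pays off.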
Linearized motion estimation for articulated planes.
Datta, Ankur; Sheikh, Yaser; Kanade, Takeo
2011-04-01
In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
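The construction mentioned above, a linear least-squares problem with linear equality constraints solved through a stacked Karush-Kuhn-Tucker system, can be sketched in general form. The matrices here are random stand-ins, not the homography parametrization of the paper.

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Solve min ||Ax - b||^2 subject to Cx = d via the KKT system:
    [[A^T A, C^T], [C, 0]] [x; lam] = [A^T b; d]."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                      # drop the Lagrange multipliers

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)
C = np.array([[1.0, 1.0, 1.0]])         # example constraint: entries sum to 1
d = np.array([1.0])
x = constrained_lsq(A, b, C, d)
```

Solving all unknowns in one KKT system is what lets the paper share information across planes: each articulation simply contributes extra rows to C.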
Body composition estimation from selected slices
DEFF Research Database (Denmark)
Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara
2017-01-01
Background: Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total...
Differences between carbon budget estimates unravelled
Rogelj, Joeri; Schaeffer, Michiel; Friedlingstein, Pierre; Gillett, Nathan P.; Vuuren, Van Detlef P.; Riahi, Keywan; Allen, Myles; Knutti, Reto
2016-01-01
Several methods exist to estimate the cumulative carbon emissions that would keep global warming to below a given temperature limit. Here we review estimates reported by the IPCC and the recent literature, and discuss the reasons underlying their differences. The most scientifically robust
Nonparametric estimation in models for unobservable heterogeneity
Hohmann, Daniel
2014-01-01
Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.
Estimates of Uncertainty around the RBA's Forecasts
Peter Tulip; Stephanie Wallace
2012-01-01
We use past forecast errors to construct confidence intervals and other estimates of uncertainty around the Reserve Bank of Australia's forecasts of key macroeconomic variables. Our estimates suggest that uncertainty about forecasts is high. We find that the RBA's forecasts have substantial explanatory power for the inflation rate but not for GDP growth.
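The basic idea, sizing a confidence interval from the spread of historical forecast errors, can be sketched as follows. The error series and the point forecast are made-up numbers, not RBA data, and a normal approximation with a 1.645 multiplier stands in for the empirical quantiles the paper uses.

```python
import numpy as np

# Historical one-year-ahead forecast errors (actual minus forecast), illustrative
errors = np.array([-1.2, 0.4, 0.9, -0.3, 1.5, -0.8, 0.2, -1.9, 0.7, 1.1])
rmse = np.sqrt(np.mean(errors ** 2))    # typical size of past misses

forecast = 2.5                          # illustrative point forecast, in percent
lo = forecast - 1.645 * rmse            # ~90% interval under normal errors
hi = forecast + 1.645 * rmse
```

Wide intervals like this are the paper's point: past errors imply far more uncertainty than a single point forecast suggests.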
Cost-estimating for commercial digital printing
Keif, Malcolm G.
2007-01-01
The purpose of this study is to document current cost-estimating practices used in commercial digital printing. A research study was conducted to determine the use of cost-estimating in commercial digital printing companies. This study answers the questions: 1) What methods are currently being used to estimate digital printing? 2) What is the relationship between estimating and pricing digital printing? 3) To what extent, if at all, do digital printers use full-absorption, all-inclusive hourly rates for estimating? Three different digital printing models were identified: 1) Traditional print providers, who supplement their offset presswork with digital printing for short-run color and versioned commercial print; 2) "Low-touch" print providers, who leverage the power of the Internet to streamline business transactions with digital storefronts; 3) Marketing solutions providers, who see printing less as a discrete manufacturing process and more as a component of a complete marketing campaign. Each model approaches estimating differently. Understanding and predicting costs can be extremely beneficial. Establishing a reliable system to estimate those costs can be somewhat challenging though. Unquestionably, cost-estimating digital printing will increase in relevance in the years ahead, as margins tighten and cost knowledge becomes increasingly more critical.
Estimating Gender Wage Gaps: A Data Update
McDonald, Judith A.; Thornton, Robert J.
2016-01-01
In the authors' 2011 "JEE" article, "Estimating Gender Wage Gaps," they described an interesting class project that allowed students to estimate the current gender earnings gap for recent college graduates using data from the National Association of Colleges and Employers (NACE). Unfortunately, since 2012, NACE no longer…
Regression Equations for Birth Weight Estimation using ...
African Journals Online (AJOL)
In this study, Birth Weight has been estimated from anthropometric measurements of hand and foot. Linear regression equations were formed from each of the measured variables. These simple equations can be used to estimate Birth Weight of new born babies, in order to identify those with low birth weight and referred to ...
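A simple linear regression of birth weight on a single anthropometric measurement can be sketched as below. The foot-length and weight values are synthetic stand-ins; the study's actual coefficients would come from its own sample.

```python
import numpy as np

# Synthetic (foot length in cm, birth weight in kg) pairs for illustration
foot_cm = np.array([7.0, 7.2, 7.5, 7.8, 8.0, 8.3, 8.5, 8.8])
weight_kg = np.array([2.4, 2.5, 2.7, 2.9, 3.0, 3.2, 3.3, 3.5])

# Ordinary least-squares line: weight = slope * foot + intercept
slope, intercept = np.polyfit(foot_cm, weight_kg, 1)
predicted = slope * 8.0 + intercept     # estimated weight for an 8.0 cm foot
```

A fitted equation like this can then flag low-birth-weight newborns (e.g., below 2.5 kg) for referral when a scale is unavailable.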
Estimating Loan-to-value Distributions
DEFF Research Database (Denmark)
Korteweg, Arthur; Sørensen, Morten
2016-01-01
We estimate a model of house prices, combined loan-to-value ratios (CLTVs) and trade and foreclosure behavior. House prices are only observed for traded properties and trades are endogenous, creating sample-selection problems for existing approaches to estimating CLTVs. We use a Bayesian filtering...
MINIMUM VARIANCE BETA ESTIMATION WITH DYNAMIC CONSTRAINTS
...developed (at AFETR) and is being used to isolate the primary error sources in the beta estimation task. This computer program is additionally used to...determine what success in beta estimation can be achieved with foreseeable instrumentation accuracies. Results are included that illustrate the effects on
A method of estimating log weights.
Charles N. Mann; Hilton H. Lysons
1972-01-01
This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...
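The density-index procedure reads directly as arithmetic: weigh and scale a sample of truckloads to get pounds per cubic foot for the species, then apply that index to the measured volume of a new log. All numbers below are illustrative placeholders.

```python
# Sample truckloads of one species: total weight (lb) and scaled volume (ft^3)
sample_weights_lb = [42000.0, 39500.0, 44800.0]
sample_volumes_cf = [800.0, 760.0, 850.0]

# Local density index for the species, in pounds per cubic foot
density_index = sum(sample_weights_lb) / sum(sample_volumes_cf)

# Estimated weight of a new log from its measured volume
log_volume_cf = 95.0
estimated_weight_lb = density_index * log_volume_cf
```

With per-log weight estimates in hand, loads can be assembled close to the yarder's rated capacity without exceeding it.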
MCMC estimation of multidimensional IRT models
Beguin, Anton; Glas, Cornelis A.W.
1998-01-01
A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will
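The item response function underlying this procedure, the three-parameter normal ogive, can be sketched directly. The parametrization a*(theta - b) used here is a common variant (an assumption, not necessarily the paper's exact form); c is the guessing (lower-asymptote) parameter.

```python
import math

def three_param_normal_ogive(theta, a, b, c):
    """P(correct | ability theta) = c + (1 - c) * Phi(a * (theta - b)),
    where Phi is the standard normal CDF, written via math.erf."""
    phi = 0.5 * (1.0 + math.erf(a * (theta - b) / math.sqrt(2.0)))
    return c + (1.0 - c) * phi

# An examinee of average ability on an average-difficulty item with guessing 0.2
p = three_param_normal_ogive(theta=0.0, a=1.0, b=0.0, c=0.2)
```

MCMC estimation then treats theta, a, b, and c as unknowns and samples them from their posterior given the observed response matrix.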
Systematic Approach for Decommissioning Planning and Estimating
International Nuclear Information System (INIS)
Dam, A. S.
2002-01-01
Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High-quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended, using a knowledgebase system for information management. A systematic approach is also recommended to develop the decommissioning plan, cost estimate and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically, to support decommissioning fund provisioning, with the plan ready for use when the need arises.
On parameter estimation in deformable models
DEFF Research Database (Denmark)
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...
Introduction to quantum-state estimation
Teo, Yong Siah
2016-01-01
Quantum-state estimation is an important field in quantum information theory that deals with the characterization of states of affairs for quantum sources. This book begins with background formalism in estimation theory to establish the necessary prerequisites. This basic understanding allows us to explore popular likelihood- and entropy-related estimation schemes that are suitable for an introductory survey on the subject. Discussions on practical aspects of quantum-state estimation ensue, with emphasis on the evaluation of tomographic performances for estimation schemes, experimental realizations of quantum measurements and detection of single-mode multi-photon sources. Finally, the concepts of phase-space distribution functions, which compatibly describe these multi-photon sources, are introduced to bridge the gap between discrete and continuous quantum degrees of freedom. This book is intended to serve as an instructive and self-contained medium for advanced undergraduate and postgraduate students to gra...
Modified Weighted Kaplan-Meier Estimator
Directory of Open Access Journals (Sweden)
Mohammad Shafiq
2007-01-01
In many medical studies the majority of study subjects do not reach the event of interest during the study period. In such situations survival probabilities can be estimated for censored observations by the Kaplan-Meier estimator. However, in the case of heavy censoring these estimates are biased and overestimate the survival probabilities. For heavy censoring a new method was proposed (Bahrawar Jan, 2005) to estimate the survival probabilities by weighting the censored observations by the non-censoring rate. But the main defect of this weighted method is that it gives zero weight to the last censored observation. To overcome this difficulty a new weight is proposed which also gives a non-zero weight to the last censored observation.
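For contrast with the weighted variants discussed above, the standard (unweighted) Kaplan-Meier product-limit estimator can be sketched on a toy sample. Data are synthetic; with this sorted layout, an event at a tied time is listed before a censoring, matching the usual convention.

```python
# Sorted follow-up times and event indicators (1 = event, 0 = censored)
times = [2, 3, 3, 5, 6, 8]
events = [1, 1, 0, 1, 0, 1]

surv = 1.0
at_risk = len(times)
curve = []                      # (time, estimated survival probability)
for t, e in zip(times, events):
    if e:                       # each event multiplies survival by (n-1)/n
        surv *= (at_risk - 1) / at_risk
    curve.append((t, surv))     # censorings only shrink the risk set
    at_risk -= 1
```

Note how the censored observations at t = 3 and t = 6 leave the curve flat; the weighted variants above exist precisely because heavy censoring makes this plain estimator optimistic.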
Nonparametric Collective Spectral Density Estimation and Clustering
Maadooliat, Mehdi
2017-04-12
In this paper, we develop a method for the simultaneous estimation of spectral density functions (SDFs) for a collection of stationary time series that share some common features. Due to the similarities among the SDFs, the log-SDF can be represented using a common set of basis functions. The basis shared by the collection of the log-SDFs is estimated as a low-dimensional manifold of a large space spanned by a pre-specified rich basis. A collective estimation approach pools information and borrows strength across the SDFs to achieve better estimation efficiency. Also, each estimated spectral density has a concise representation using the coefficients of the basis expansion, and these coefficients can be used for visualization, clustering, and classification purposes. The Whittle pseudo-maximum likelihood approach is used to fit the model and an alternating blockwise Newton-type algorithm is developed for the computation. A web-based shiny App found at
A Developed ESPRIT Algorithm for DOA Estimation
Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed
2015-05-01
A novel algorithm for estimating the direction of arrival (DOA) of a target, which aims to increase estimation accuracy and decrease calculation costs, is presented. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity. The efficacy of the proposed algorithm is verified by Monte Carlo simulation; the DOA estimation accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimates are better than those of the normal ESPRIT methods, enhancing estimator performance.
Another look at the Grubbs estimators
Lombard, F.
2012-01-01
We consider estimation of the precision of a measuring instrument without the benefit of replicate observations on heterogeneous sampling units. Grubbs (1948) proposed an estimator which involves the use of a second measuring instrument, resulting in a pair of observations on each sampling unit. Since the precisions of the two measuring instruments are generally different, these observations cannot be treated as replicates. Very large sample sizes are often required if the standard error of the estimate is to be within reasonable bounds and if negative precision estimates are to be avoided. We show that the two instrument Grubbs estimator can be improved considerably if fairly reliable preliminary information regarding the ratio of sampling unit variance to instrument variance is available. Our results are presented in the context of the evaluation of on-line analyzers. A data set from an analyzer evaluation is used to illustrate the methodology. © 2011 Elsevier B.V.
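The two-instrument construction can be sketched as follows: because both instruments measure the same heterogeneous units, the unit-to-unit variance cancels in differences of sample (co)variances, leaving an estimate of each instrument's error variance. The sample size and variances below are illustrative, chosen large so the estimates are stable.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
true_vals = rng.normal(100.0, 5.0, size=n)       # heterogeneous sampling units
x = true_vals + rng.normal(0.0, 1.0, size=n)     # instrument 1, error var 1
y = true_vals + rng.normal(0.0, 2.0, size=n)     # instrument 2, error var 4

cov = np.cov(x, y)                               # 2x2 sample covariance matrix
var_e1 = cov[0, 0] - cov[0, 1]   # Grubbs estimate of instrument-1 error variance
var_e2 = cov[1, 1] - cov[0, 1]   # Grubbs estimate of instrument-2 error variance
```

As the abstract warns, with small samples these differences of large covariances are noisy and can even go negative, which is what the preliminary-information refinement addresses.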
Self-estimates of attention performance
Directory of Open Access Journals (Sweden)
CHRISTOPH MENGELKAMP
2007-09-01
In research on self-estimated IQ, gender differences are often found. The present study investigates whether these findings hold for self-estimation of attention, too. A sample of 100 female and 34 male students was asked to fill in the test of attention d2. After taking the test, the students estimated their results in comparison to their fellow students. The results show that the students underestimate their percent rank compared with the actual percent rank they achieved in the test, but estimate their rank order fairly accurately. Moreover, males estimate their performance distinctly higher than females do. This last result remains true even when the real test score is statistically controlled. The results are discussed with regard to research on positive illusions and gender stereotypes.
Cost-estimating relationships for space programs
Mandell, Humboldt C., Jr.
1992-01-01
Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed, examining the sources of errors in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture single-system methods; and (2) the Price paradigms that incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.
COST ESTIMATING RELATIONSHIPS IN ONSHORE DRILLING PROJECTS
Directory of Open Access Journals (Sweden)
Ricardo de Melo e Silva Accioly
2017-03-01
Cost estimating relationships (CERs) are very important tools in the planning phases of an upstream project. CERs are, in general, multiple regression models developed to estimate the cost of a particular item or scope of a project. They are based on historical data that should pass through a normalization process before fitting a model. In the early phases they are the primary tool for cost estimating. In later phases they are usually used as an estimation validation tool and sometimes for benchmarking purposes. As in any other modeling methodology, there are a number of important steps to build a model. In this paper the process of building a CER to estimate the drilling cost of onshore wells is addressed.
Channel Estimation in DCT-Based OFDM
Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing
2014-01-01
This paper derives the channel estimation of a discrete cosine transform- (DCT-) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been proved to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least square (LS) and minimum mean square error (MMSE) estimators are investigated. We also study a compressed sensing (CS) based channel estimation, which takes the sparse property of the wireless channel into account. Simulation results have shown that the CS based channel estimation is expected to have better performance than LS. However, MMSE can achieve optimal performance because of prior knowledge of the channel statistics. PMID:24757439
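The pilot-aided LS estimator mentioned above is simply a per-pilot division of the received symbol by the known transmitted pilot. A minimal sketch over a flat per-subcarrier channel with BPSK pilots (illustrative, not the DCT-OFDM system of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n_pilots = 64

# Random complex channel response at the pilot positions
h = (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2)
x = np.where(rng.random(n_pilots) < 0.5, 1.0, -1.0)      # known BPSK pilots
noise = 0.05 * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))

y = h * x + noise        # received pilot symbols
h_ls = y / x             # LS channel estimate: one division per pilot
mse = np.mean(np.abs(h_ls - h) ** 2)
```

The LS estimate inherits the noise directly (its MSE equals the noise variance), which is why the MMSE estimator, armed with the channel statistics, can do better.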
Sparse DOA estimation with polynomial rooting
DEFF Research Database (Denmark)
Xenaki, Angeliki; Gerstoft, Peter; Fernandez Grande, Efren
2015-01-01
Direction-of-arrival (DOA) estimation involves the localization of a few sources from a limited number of observations on an array of sensors. Thus, DOA estimation can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high-resolution imaging. Utilizing the dual optimal variables of the CS optimization problem, it is shown with Monte Carlo simulations that the DOAs are accurately reconstructed through polynomial rooting (Root-CS). Polynomial rooting is known to improve the resolution in several other DOA estimation methods...
Contractor-style tunnel cost estimating
International Nuclear Information System (INIS)
Scapuzzi, D.
1990-06-01
Keeping pace with recent advances in construction technology is a challenge for the cost estimating engineer. Using an estimating style that simulates the actual construction process and is similar in style to the contractor's estimate will give a realistic view of underground construction costs. For a contractor-style estimate, a mining method is chosen; labor crews, plant, and equipment are selected; and advance rates are calculated for the various phases of work, which are used to determine the length of time necessary to complete each phase. The durations are multiplied by the cost of labor and equipment per unit of time and, together with the costs for materials and supplies, combined to complete the estimate. Variations in advance rates, ground support, labor crew size, or other areas are more easily analyzed for their overall effect on the cost and schedule of a project. 14 figs
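The duration-times-rate arithmetic of a contractor-style estimate can be illustrated with entirely hypothetical figures:

```python
# Toy contractor-style estimate (all figures hypothetical).
tunnel_length_m = 1200.0
advance_rate_m_per_shift = 4.0                         # set by the chosen mining method
shifts = tunnel_length_m / advance_rate_m_per_shift    # duration: 300 shifts

labor_cost_per_shift = 6000.0        # crew cost per shift
equipment_cost_per_shift = 3500.0    # plant and equipment per shift
materials_per_m = 900.0              # supplies and ground support per meter

time_based = shifts * (labor_cost_per_shift + equipment_cost_per_shift)
materials = tunnel_length_m * materials_per_m
total = time_based + materials
print(total)   # 300*9500 + 1200*900 = 3_930_000.0
```

Changing the advance rate or crew size propagates through the duration, which is exactly the sensitivity analysis the abstract describes.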
A Method of Nuclear Software Reliability Estimation
International Nuclear Information System (INIS)
Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol
2011-01-01
A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects based on a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is identified that this method is capable of accurately estimating the remaining number of software defects in on-demand type software that directly affects safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed
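The abstract does not name the specific SRGM; as a sketch, the Goel-Okumoto model is a common NHPP instance whose mean value function gives expected and remaining defect counts (parameter values below are illustrative, not estimates from the paper):

```python
import math

# Goel-Okumoto NHPP mean value function: m(t) = a * (1 - exp(-b*t)),
# where a = expected total defects and b = detection rate (illustrative values).
a, b = 40.0, 0.1

def expected_defects(t):
    """Expected number of defects detected by test time t."""
    return a * (1.0 - math.exp(-b * t))

def remaining(t):
    """Expected defects still latent after t units of testing."""
    return a - expected_defects(t)

print(round(remaining(30.0), 2))   # 40*e^(-3) ≈ 1.99
```

In the paper's setting, a and b would be obtained by Bayesian inference from the observed failure data rather than fixed by hand.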
Spectral Velocity Estimation in the Transverse Direction
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2013-01-01
A method for estimating the velocity spectrum for a fully transverse flow at a beam-to-flow angle of 90° is described. The approach is based on the transverse oscillation (TO) method, where an oscillation across the ultrasound beam is made during receive processing. A fourth-order estimator based on the correlation of the received signal is derived. A Fourier transform of the correlation signal yields the velocity spectrum. Performing the estimation for short data segments gives the velocity spectrum as a function of time, as for ordinary spectrograms, and it also works for a beam-to-flow angle of 90°... The estimation scheme can reliably find the spectrum at 90°, where a traditional estimator yields zero velocity. Measurements have been conducted with the SARUS experimental scanner and a BK 8820e convex array transducer (BK Medical, Herlev, Denmark). A CompuFlow 1000 (Shelley Automation, Inc, Toronto, Canada...
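Only the spectrum step is sketched here: by the Wiener-Khinchin relation, Fourier-transforming a correlation estimate yields a power spectrum, which is how the TO correlation signal becomes a velocity spectrum. The simulated signal, tone frequency, and sampling rate are assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1000.0                                  # assumed sampling rate (Hz)
t = np.arange(2048) / fs
sig = np.cos(2 * np.pi * 80.0 * t) + 0.1 * rng.standard_normal(t.size)

# Autocorrelation (non-negative lags), then FFT -> power spectrum.
acf = np.correlate(sig, sig, mode="full")[sig.size - 1:]
spectrum = np.abs(np.fft.rfft(acf))
freqs = np.fft.rfftfreq(acf.size, 1 / fs)

peak = freqs[np.argmax(spectrum)]
print(round(peak, 1))
```

The peak of the spectrum recovers the oscillation frequency of the signal; in the TO method that frequency maps to the transverse velocity.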
Solar constant values for estimating solar radiation
International Nuclear Information System (INIS)
Li, Huashan; Lian, Yongwang; Wang, Xianlong; Ma, Weibin; Zhao, Liang
2011-01-01
There are many solar constant values given and adopted by researchers, leading to confusion in estimating solar radiation. In this study, solar constant values collected from the literature for estimating solar radiation with the Angstroem-Prescott correlation are tested in China using measured data from 1971 to 2000. According to a ranking method based on the t-statistic, a strategy to select the best solar constant value for estimating the monthly average daily global solar radiation with the Angstroem-Prescott correlation is proposed. -- Research highlights: → The effect of the solar constant on estimating solar radiation is investigated. → The investigation covers a diverse range of climate and geography in China. → A strategy to select the best solar constant for estimating radiation is proposed.
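The Angstroem-Prescott correlation itself is a one-line model; a sketch with illustrative coefficients (not values fitted in the study) shows where the solar constant enters:

```python
# Angstroem-Prescott correlation: H/H0 = a + b*(n/N), where H is global
# radiation, H0 extraterrestrial radiation, and n/N the relative sunshine
# duration. The coefficients a, b below are illustrative, not fitted values.
a, b = 0.25, 0.50

def global_radiation(h0, sunshine_fraction):
    """Estimated daily global radiation given H0 and n/N."""
    return h0 * (a + b * sunshine_fraction)

# H0 itself is computed from the adopted solar constant, which is why the
# choice of constant feeds directly into the radiation estimate.
print(global_radiation(30.0, 0.6))   # 30*(0.25 + 0.30) = 16.5 (MJ m^-2 day^-1)
```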
Parameter estimation for an expanding universe
Directory of Open Access Journals (Sweden)
Jieci Wang
2015-03-01
Full Text Available We study the parameter estimation for excitations of Dirac fields in the expanding Robertson–Walker universe. We employ quantum metrology techniques to demonstrate the possibility of high-precision estimation of the volume rate of the expanding universe. We show that the optimal precision of the estimation depends sensitively on the dimensionless mass m˜ and dimensionless momentum k˜ of the Dirac particles. The optimal precision for the rate estimation peaks at some finite dimensionless mass m˜ and momentum k˜. We find that the precision of the estimation can be improved by choosing the probe state as an eigenvector of the Hamiltonian. This occurs because the largest quantum Fisher information is obtained by performing projective measurements implemented by the projectors onto the eigenvectors of specific probe states.
Assessing the performance of dynamical trajectory estimates
Bröcker, Jochen
2014-06-01
Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3D-Var and 4D-Var, and various Kalman filter approaches). Numerical examples considering a high-gain observer confirm the theory.
Minimum Distance Estimation on Time Series Analysis With Little Data
National Research Council Canada - National Science Library
Tekin, Hakan
2001-01-01
.... Minimum distance estimation has been demonstrated to outperform standard approaches, including maximum likelihood estimators and least squares, in estimating statistical distribution parameters with very small data sets...
DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA
Huang, G; Nix, AR; Armour, SMD
2010-01-01
Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel and noise variance estimates are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ...
2015-2016 Palila abundance estimates
Camp, Richard J.; Brinck, Kevin W.; Banko, Paul C.
2016-01-01
The palila (Loxioides bailleui) population was surveyed annually during 1998−2016 on Mauna Kea Volcano to determine abundance, population trend, and spatial distribution. In the latest surveys, the 2015 population was estimated at 852−1,406 birds (point estimate: 1,116) and the 2016 population was estimated at 1,494−2,385 (point estimate: 1,934). Similar numbers of palila were detected during the first and subsequent counts within each year during 2012−2016; the proportion of the total annual detections in each count ranged from 46% to 56%; and there was no difference in the detection probability due to count sequence. Furthermore, conducting repeat counts improved the abundance estimates by reducing the width of the confidence intervals between 9% and 32% annually. This suggests that multiple counts do not affect bird or observer behavior and can be continued in the future to improve the precision of abundance estimates. Five palila were detected on supplemental survey stations in the Ka‘ohe restoration area, outside the core survey area but still within Palila Critical Habitat (one in 2015 and four in 2016), suggesting that palila are present in habitat that is recovering from cattle grazing on the southwest slope. The average rate of decline during 1998−2016 was 150 birds per year. Over the 18-year monitoring period, the estimated rate of change equated to a 58% decline in the population.
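As a quick plausibility check on the reported trend, using only numbers stated in the abstract (average decline of 150 birds per year over the 18-year period, 58% total decline):

```python
# Back-of-envelope check of the reported palila trend (figures from the
# abstract; the implied starting population is an inference, not a reported value).
years = 18
decline_per_year = 150
total_decline = years * decline_per_year      # 2700 birds

# A 58% decline over the period implies an initial population near
# total_decline / 0.58.
implied_start = total_decline / 0.58
print(round(implied_start))                   # ≈ 4655 birds
```

That implied late-1990s population is broadly consistent with the subsequent decline to the 2015-2016 point estimates of roughly 1,100-1,900 birds.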
Procedure for estimating permanent total enclosure costs
Energy Technology Data Exchange (ETDEWEB)
Lukey, M E; Prasad, C; Toothman, D A; Kaplan, N
1999-07-01
Industries that use add-on control devices must adequately capture emissions before delivering them to the control device. One way to capture emissions is to use permanent total enclosures (PTEs). By definition, an enclosure which meets the US Environmental Protection Agency's five-point criteria is a PTE and has a capture efficiency of 100%. Since costs play an important role in regulatory development, in selection of control equipment, and in control technology evaluations for permitting purposes, EPA has developed a Control Cost Manual for estimating costs of various items of control equipment. EPA's Manual does not contain any methodology for estimating PTE costs. In order to assist environmental regulators and potential users of PTEs, a methodology for estimating PTE costs was developed under contract with EPA, by Pacific Environmental Services, Inc. (PES) and is the subject of this paper. The methodology for estimating PTE costs follows the approach used for other control devices in the Manual. It includes procedures for sizing various components of a PTE and for estimating capital as well as annual costs. It contains verification procedures for demonstrating compliance with EPA's five-point criteria. In addition, procedures are included to determine compliance with Occupational Safety and Health Administration (OSHA) standards. Meeting these standards is an important factor in properly designing PTEs. The methodology is encoded in Microsoft Excel spreadsheets to facilitate cost estimation and PTE verification. Examples are given throughout the methodology development and in the spreadsheets to illustrate the PTE design, verification, and cost estimation procedures.
Quantifying Uncertainty in Soil Volume Estimates
International Nuclear Information System (INIS)
Roos, A.D.; Hays, D.C.; Johnson, R.L.; Durham, L.A.; Winters, M.
2009-01-01
Proper planning and design for remediating contaminated environmental media require an adequate understanding of the types of contaminants and the lateral and vertical extent of contamination. In the case of contaminated soils, this generally takes the form of volume estimates that are prepared as part of a Feasibility Study for Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites and/or as part of the remedial design. These estimates are typically single values representing what is believed to be the most likely volume of contaminated soil present at the site. These single-value estimates, however, do not convey the level of confidence associated with the estimates. Unfortunately, experience has shown that pre-remediation soil volume estimates often significantly underestimate the actual volume of contaminated soils that are encountered during the course of remediation. This underestimation has significant implications, both technically (e.g., inappropriate remedial designs) and programmatically (e.g., establishing technically defensible budget and schedule baselines). Argonne National Laboratory (Argonne) has developed a joint Bayesian/geostatistical methodology for estimating contaminated soil volumes based on sampling results that also provides upper and lower probabilistic bounds on those volumes. This paper evaluates the performance of this method in a retrospective study that compares volume estimates derived using this technique with actual excavated soil volumes for select Formerly Utilized Sites Remedial Action Program (FUSRAP) Maywood properties that have completed remedial action by the U.S. Army Corps of Engineers (USACE) New York District. (authors)
Observer-Based Human Knee Stiffness Estimation.
Misgeld, Berno J E; Luken, Markus; Riener, Robert; Leonhardt, Steffen
2017-05-01
We consider the problem of stiffness estimation for the human knee joint during motion in the sagittal plane. The new stiffness estimator uses a nonlinear reduced-order biomechanical model and a body sensor network (BSN). The developed model is based on a two-dimensional knee kinematics approach to calculate the angle-dependent lever arms and the torques of the muscle-tendon complex. To minimize errors in the knee stiffness estimation procedure that result from model uncertainties, a nonlinear observer is developed. The observer uses the electromyogram (EMG) of involved muscles as input signals and the segmental orientation as the output signal to correct the observer-internal states. Because of dominating model nonlinearities and nonsmoothness of the corresponding nonlinear functions, an unscented Kalman filter is designed to compute and update the observer feedback (Kalman) gain matrix. The observer-based stiffness estimation algorithm is subsequently evaluated in simulations and on a test bench specifically designed to provide robotic movement support for the human knee joint. In silico and experimental validation underline the good performance of the knee stiffness estimation even in cases of knee stiffening due to antagonistic coactivation. We have shown the principle function of an observer-based approach to knee stiffness estimation that employs EMG signals and segmental orientation provided by our own IPANEMA BSN. The presented approach makes real-time, model-based estimation of knee stiffness with minimal instrumentation possible.
Parametric cost estimation for space science missions
Lillie, Charles F.; Thompson, Bruce E.
2008-07-01
Cost estimation for space science missions is critically important in budgeting for successful missions. The process requires consideration of a number of parameters, where many of the values are only known to a limited accuracy. The results of cost estimation are not perfect, but must be calculated and compared with the estimates that the government uses for budgeting purposes. Uncertainties in the input parameters result from evolving requirements for missions that are typically the "first of a kind" with "state-of-the-art" instruments and new spacecraft and payload technologies that make it difficult to base estimates on the cost histories of previous missions. Even the cost of heritage avionics is uncertain due to parts obsolescence and the resulting redesign work. Through experience and use of industry best practices developed in participation with the Aerospace Industries Association (AIA), Northrop Grumman has developed a parametric modeling approach that can provide a reasonably accurate cost range and most probable cost for future space missions. During the initial mission phases, the approach uses mass- and power-based cost estimating relationships (CERs) developed with historical data from previous missions. In later mission phases, when the mission requirements are better defined, these estimates are updated with vendors' bids and "bottoms-up", "grass-roots" material and labor cost estimates based on detailed schedules and assigned tasks. In this paper we describe how we develop our CERs for parametric cost estimation and how they can be applied to estimate the costs for future space science missions like those presented to the Astronomy & Astrophysics Decadal Survey Study Committees.
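A mass- and power-based CER of the kind described can be sketched as a log-linear regression; the mission histories below are invented for illustration and do not come from the paper:

```python
import numpy as np

# Hypothetical mission history: (dry mass kg, power W) -> cost ($M).
mass = np.array([300, 500, 800, 1200, 2000], dtype=float)
power = np.array([250, 400, 700, 900, 1500], dtype=float)
cost = np.array([150, 220, 340, 450, 700], dtype=float)

# Log-linear CER: log(cost) = c0 + c1*log(mass) + c2*log(power).
A = np.column_stack([np.ones_like(mass), np.log(mass), np.log(power)])
coef, *_ = np.linalg.lstsq(A, np.log(cost), rcond=None)

def cer(m, p):
    """Most-probable cost ($M) predicted by the fitted CER."""
    return float(np.exp(coef @ [1.0, np.log(m), np.log(p)]))

print(round(cer(1000.0, 800.0), 1))
```

Spreading the residuals of such a fit around the prediction is one simple way to report a cost range rather than a single point value.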
Energy Technology Data Exchange (ETDEWEB)
Melaina, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Penev, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2013-09-01
This report compares hydrogen station cost estimates conveyed by expert stakeholders through the Hydrogen Station Cost Calculation (HSCC) to a select number of other cost estimates. These other cost estimates include projections based upon cost models and costs associated with recently funded stations.
7 CFR 1435.301 - Annual estimates and quarterly re-estimates.
2010-01-01
... CORPORATION, DEPARTMENT OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS SUGAR PROGRAM Flexible Marketing..., estimates, and re-estimates in this subpart will use available USDA statistics and estimates of production, consumption, and stocks, taking into account, where appropriate, data supplied in reports submitted pursuant...
2010-01-01
... consumption, estimated annual operating cost, and energy efficiency rating, and of water use rate. 305.5... RULE CONCERNING DISCLOSURES REGARDING ENERGY CONSUMPTION AND WATER USE OF CERTAIN HOME APPLIANCES AND... § 305.5 Determinations of estimated annual energy consumption, estimated annual operating cost, and...
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
...cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
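The factorization idea can be sketched with a toy chain-structured table; the data, attribute domains, and histogram construction below are invented for illustration and are not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic table with correlated attributes a -> b -> c (a Markov chain,
# which a tree-shaped graphical model can capture exactly).
n = 10_000
a = rng.integers(0, 4, n)
b = (a + rng.integers(0, 2, n)) % 4
c = (b + rng.integers(0, 2, n)) % 4

# Two small 2-D distributions instead of one 3-D joint.
p_ab = np.histogram2d(a, b, bins=(4, 4))[0] / n
p_bc = np.histogram2d(b, c, bins=(4, 4))[0] / n
p_b = p_ab.sum(axis=0)                     # marginal of b

def selectivity(av, bv, cv):
    """P(a,b,c) ≈ P(a,b) * P(c|b) under the chain factorization."""
    return p_ab[av, bv] * p_bc[bv, cv] / p_b[bv]

true_sel = np.mean((a == 1) & (b == 1) & (c == 1))
est = selectivity(1, 1, 1)
print(true_sel, est)
```

Because the dependencies here really form a chain, the two small distributions reproduce the three-way selectivity without storing the full joint, which is the storage/accuracy trade-off the abstract describes.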
Development of realtime cognitive state estimator
International Nuclear Information System (INIS)
Takahashi, Makoto; Kitamura, Masashi; Yoshikawa, Hidekazu
2004-01-01
A real-time cognitive state estimator based on a set of physiological measures has been developed in order to provide valuable information on human behavior during interaction through the man-machine interface. An artificial neural network has been adopted to categorize the cognitive states, using qualitative physiological data patterns as inputs. Laboratory experiments, in which the subjects' cognitive states were intentionally controlled by the task presented, were performed to obtain training data sets for the neural network. The developed system has been shown to be capable of estimating cognitive states with high accuracy, and its real-time estimation capability has also been confirmed through data processing experiments. (author)
The MIRD method of estimating absorbed dose
International Nuclear Information System (INIS)
Weber, D.A.
1991-01-01
The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution data and clearance data of radiopharmaceuticals and the physical properties of radionuclides to obtain dose estimates. This tutorial presents a review of the MIRD schema, the derivation of the equations used to calculate absorbed dose, and shows how the MIRD schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine
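At its core, the MIRD schema multiplies the cumulated activity in a source organ by an S value for each source-target pair; a toy calculation with purely illustrative numbers (not published S values):

```python
# MIRD schema in its simplest form:
#   D(target <- source) = A_tilde * S(target <- source),
# where A_tilde is the cumulated activity (from the biologic distribution
# and clearance data) and S bundles the radionuclide's physical properties
# and the source/target geometry. All numbers below are illustrative.
residence_time_h = 2.5                                 # tau = A_tilde / A0
administered_MBq = 200.0
a_tilde_MBq_h = administered_MBq * residence_time_h    # cumulated activity

s_value_mGy_per_MBq_h = 0.004                          # hypothetical S value
dose_mGy = a_tilde_MBq_h * s_value_mGy_per_MBq_h
print(dose_mGy)   # 200*2.5*0.004 = 2.0 mGy
```

A full estimate sums such contributions over all source organs for each target organ.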
Implementing Estimation of Capacity for Freeway Sections
Directory of Open Access Journals (Sweden)
Chang-qiao Shao
2011-01-01
Full Text Available Based on the stochastic concept of freeway capacity, a procedure for capacity estimation is developed. Because it is impossible to observe the value of the capacity directly or to obtain its probability distribution, the product-limit method is used in this paper to estimate the capacity. In order to implement capacity estimation with this technique, a lifetime table based on statistical methods for lifetime data analysis is introduced and the corresponding procedure is developed. Simulated data based on freeway sections in Beijing, China, were analyzed, and the results indicate that the methodology and procedure are applicable and valid.
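A minimal sketch of the product-limit (Kaplan-Meier) idea applied to capacity data, with hypothetical flow observations: a breakdown is treated as a "failure" at that volume, and a fluent observation is treated as censored:

```python
# Product-limit (Kaplan-Meier) sketch for capacity estimation.
# Hypothetical observations (veh/h): (volume, breakdown occurred?).
obs = [(4200, True), (4300, False), (4350, True), (4400, False),
       (4500, True), (4600, True)]

obs.sort()                    # process in increasing volume order
n = len(obs)
survival = 1.0                # P(capacity > volume)
curve = []
at_risk = n
for volume, breakdown in obs:
    if breakdown:
        survival *= (at_risk - 1) / at_risk
    curve.append((volume, survival))
    at_risk -= 1

for v, s in curve:
    print(v, round(s, 3))
```

The resulting step function is the estimated capacity distribution; censored (fluent) observations still shrink the risk set, which is what lets them inform the estimate without being treated as breakdowns.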
Semi-parametric estimation for ARCH models
Directory of Open Access Journals (Sweden)
Raed Alzghool
2018-03-01
Full Text Available In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with quasi-likelihood (QL) and asymptotic quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi-likelihood (QL), Asymptotic quasi-likelihood (AQL), Martingale difference, Kernel estimator
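The paper's QL/AQL estimators are not reproduced here; as a hedged sketch, one can simulate an ARCH(1) process and recover its parameters with a crude least-squares regression of squared returns on their lags (a moment-based stand-in, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulate ARCH(1): r_t = sigma_t * e_t, with sigma_t^2 = w + alpha * r_{t-1}^2.
w_true, alpha_true = 0.2, 0.4
T = 20_000
r = np.zeros(T)
for t in range(1, T):
    sigma2 = w_true + alpha_true * r[t - 1] ** 2
    r[t] = np.sqrt(sigma2) * rng.standard_normal()

# Crude estimation: E[r_t^2 | r_{t-1}^2] = w + alpha * r_{t-1}^2 is linear,
# so OLS on squared returns is consistent (though inefficient).
y, x = r[1:] ** 2, r[:-1] ** 2
alpha_hat, w_hat = np.polyfit(x, y, 1)
print(round(w_hat, 2), round(alpha_hat, 2))
```

The QL/AQL approach improves on this by weighting observations through the (possibly unknown) conditional variance, which is what makes it semi-parametric.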