2012-04-26
...: Representative Average Unit Costs of Energy", dated March 10, 2011, 76 FR 13168. May 29, 2012, the cost figures...: Representative Average Unit Costs of Energy AGENCY: Office of Energy Efficiency and Renewable Energy, Department... forecasting the representative average unit costs of five residential energy sources for the year...
16 CFR Appendix K to Part 305 - Representative Average Unit Energy Costs
2010-01-01
... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Representative Average Unit Energy Costs K... CONGRESS RULE CONCERNING DISCLOSURES REGARDING ENERGY CONSUMPTION AND WATER USE OF CERTAIN HOME APPLIANCES AND OTHER PRODUCTS REQUIRED UNDER THE ENERGY POLICY AND CONSERVATION ACT ("APPLIANCE LABELING...
VT Biodiversity Project - Representative Landscapes boundary lines
Vermont Center for Geographic Information — (Link to Metadata) This coverage represents the results of an analysis of landscape diversity in Vermont. Polygons in the dataset represent as much as possible (in a...
Representing Participation in ICT4D Projects
Singh, J. P.; Flyverbom, Mikkel
2016-01-01
identify two dimensions to participation and ICT4D: whether participation 1) is hierarchical/top-down or agent-driven/bottom-up, and 2) involves conflict or cooperation. Based on these dimensions we articulate four ideal types of discourse that permeate ICT and development efforts: stakeholder......, depending on the context of their implementation, are permeated by multiple discourses about participation. Our four ideal types of participation discourses are, therefore, useful starting points to discuss the intricate dynamics of participation in ICT4D projects....
Averaging methods for extracting representative waveforms from motor unit action potential trains.
Malanda, Armando; Navallas, Javier; Rodriguez-Falces, Javier; Rodriguez-Carreño, Ignacio; Gila, Luis
2015-08-01
In the context of quantitative electromyography (EMG), it is of major interest to obtain a waveform that faithfully represents the set of potentials that constitute a motor unit action potential (MUAP) train. From this waveform, various parameters can be determined in order to characterize the MUAP for diagnostic analysis. The aim of this work was to conduct a thorough, in-depth review, evaluation and comparison of state-of-the-art methods for composing waveforms representative of MUAP trains. We evaluated nine averaging methods: Ensemble (EA), Median (MA), Weighted (WA), Five-closest (FCA), MultiMUP (MMA), Split-sweep median (SSMA), Sorted (SA), Trimmed (TA) and Robust (RA) in terms of three general-purpose signal processing figures of merit (SPMF) and seven clinically-used MUAP waveform parameters (MWP). The convergence rate of the methods was assessed as the number of potentials per MUAP train (NPM) required to reach a level of performance that was not significantly improved by increasing this number. Test material comprised 78 MUAP trains obtained from the tibialis anterior of seven healthy subjects. Error measurements related to all SPMF and MWP parameters except MUAP amplitude descended asymptotically with increasing NPM for all methods. MUAP amplitude showed a consistent bias (around 4% for EA and SA and 1-2% for the rest). MA, TA and SSMA had the lowest SPMF and MWP error figures. Therefore, these methods most accurately preserve and represent MUAP physiological information of utility in clinical medical practice. The other methods, particularly WA, performed noticeably worse. Convergence rate was similar for all methods, with NPM values (averaged over the nine methods) ranging from 10 to 40, depending on the waveform parameter evaluated.
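The robust estimators compared in this abstract can be sketched in a few lines. This is our own illustration on synthetic data, not the authors' implementation; the function names and the toy "potentials" are invented for the example.

```python
import numpy as np

def ensemble_average(epochs):
    """Point-wise mean of aligned potentials (sensitive to outliers)."""
    return np.mean(epochs, axis=0)

def median_average(epochs):
    """Point-wise median; robust to occasional superimposed activity."""
    return np.median(epochs, axis=0)

def trimmed_average(epochs, trim=0.2):
    """At each time point, drop the top and bottom `trim` fraction of
    samples across epochs, then average the remainder."""
    epochs = np.sort(np.asarray(epochs, dtype=float), axis=0)
    k = int(trim * epochs.shape[0])
    return epochs[k:epochs.shape[0] - k].mean(axis=0)

# Ten aligned "potentials": nine clean copies of a template plus one
# epoch contaminated by a large offset (e.g. a superimposed MUAP).
template = np.sin(np.linspace(0.0, np.pi, 50))
epochs = np.stack([template] * 9 + [template + 10.0])
```

With this data, the plain ensemble average is pulled up by the single contaminated epoch, while the median and trimmed averages recover the template exactly, which mirrors the abstract's finding that MA and TA preserve the waveform better than EA.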
VT Biodiversity Project - Representative Landscapes in Vermont polygons
Vermont Center for Geographic Information — (Link to Metadata) This coverage represents the results of an analysis of landscape diversity in Vermont. Polygons in the dataset represent as much as possible (in a...
Evaluation of Representative Smart Grid Investment Project Technologies: Demand Response
Fuller, Jason C.; Prakash Kumar, Nirupama; Bonebrake, Christopher A.
2012-02-14
This document is one of a series of reports estimating the benefits of deploying technologies similar to those implemented on the Smart Grid Investment Grant (SGIG) projects. Four technical reports cover the various types of technologies deployed in the SGIG projects, distribution automation, demand response, energy storage, and renewables integration. A fifth report in the series examines the benefits of deploying these technologies on a national level. This technical report examines the impacts of a limited number of demand response technologies and implementations deployed in the SGIG projects.
Basem Azab
Full Text Available Several studies have reported the negative impact of an elevated neutrophil/lymphocyte ratio (NLR) on outcomes in many surgical and medical conditions. Previous studies used arbitrary NLR cut-off points according to the average of the populations under study. There are no data on the average NLR in the general population. The aim of this study is to explore the average values of NLR, overall and according to race, in adult non-institutionalized United States individuals by using national data. The National Health and Nutrition Examination Survey (NHANES) aggregated cross-sectional data collected from 2007 to 2010 were analyzed; data extracted included markers of systemic inflammation (neutrophil count, lymphocyte count, and NLR), demographic variables and other comorbidities. Subjects who were prescribed steroids, chemotherapy, immunomodulators or antibiotics were excluded. Adjusted linear regression models were used to examine the association between demographic and clinical characteristics and neutrophil counts, lymphocyte counts, and NLR. Overall, 9427 subjects are included in this study. The average neutrophil count is 4.3 k cells/mL and the average lymphocyte count is 2.1 k cells/mL; the average NLR is 2.15. Non-Hispanic Black and Hispanic participants have significantly lower mean NLR values (1.76, 95% CI 1.71-1.81 and 2.08, 95% CI 2.04-2.12, respectively) when compared to non-Hispanic Whites (2.24, 95% CI 2.19-2.28; p<0.0001). Subjects who reported diabetes, cardiovascular disease, and smoking had significantly higher NLR than subjects who did not. Racial differences regarding the association of smoking and BMI with NLR were observed. This study provides preliminary data on racial disparities in a marker of inflammation, NLR, that has been associated with several chronic disease outcomes, suggesting that different cut-off points should be set according to race. It also suggests that racial differences exist in the inflammatory response to environmental and behavioral risk factors.
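The ratio itself is a one-line computation; the sketch below shows how group means of NLR might be tabulated. The subjects, group labels, and counts are invented for illustration and are not NHANES records.

```python
import statistics

def nlr(neutrophils, lymphocytes):
    # Neutrophil/lymphocyte ratio; both counts in the same units
    # (e.g. 10^3 cells/uL), so the units cancel.
    return neutrophils / lymphocytes

# Invented example subjects (not NHANES data).
subjects = [
    {"group": "A", "neut": 4.4, "lymph": 2.0},
    {"group": "A", "neut": 4.0, "lymph": 2.0},
    {"group": "B", "neut": 3.6, "lymph": 2.0},
    {"group": "B", "neut": 3.4, "lymph": 2.0},
]

# Collect per-subject ratios by group, then average within each group.
by_group = {}
for s in subjects:
    by_group.setdefault(s["group"], []).append(nlr(s["neut"], s["lymph"]))

mean_nlr = {g: statistics.mean(r) for g, r in by_group.items()}
# Group A: (2.2 + 2.0) / 2 = 2.1; group B: (1.8 + 1.7) / 2 = 1.75
```

The study's point about cut-offs follows directly: if mean NLR differs systematically between groups, a single threshold misclassifies one group more than the other.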
Lei Jiang
2013-01-01
Full Text Available The temporal scaling properties of the daily 0 cm average ground surface temperature (AGST) records obtained from four selected sites over China are investigated using the multifractal detrended fluctuation analysis (MF-DFA) method. Results show that the AGST records at all four locations exhibit strong persistence and different scaling behaviors. The generalized Hurst exponents differ considerably among the AGST series of the four sites, reflecting the different scaling behaviors of the fluctuations. Furthermore, the strengths of the multifractal spectra differ among weather stations, indicating that the multifractal behavior varies from station to station over China.
Average projection type weighted Cramér-von Mises statistics for testing some distributions
Cui, Hengjian (崔恒建)
2002-01-01
This paper addresses the problem of testing goodness-of-fit for several important multivariate distributions: (I) uniform distribution on the p-dimensional unit sphere; (II) multivariate standard normal distribution; and (III) multivariate normal distribution with unknown mean vector and covariance matrix. The average projection type weighted Cramér-von Mises test statistic as well as estimated and weighted Cramér-von Mises statistics for testing distributions (I), (II) and (III) are constructed via integrating projection direction on the unit sphere, and the asymptotic distributions and the expansions of those test statistics under the null hypothesis are also obtained. Furthermore, the approach of this paper can be applied to testing goodness-of-fit for elliptically contoured distributions.
Dongxu Ren
2016-04-01
Full Text Available A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity of different locations of pitches in the mask to make a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method’s theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the repeat exposure number of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the effectiveness of the multi-repeated photolithography method is confirmed to easily realize a pitch accuracy of 43 nm in any 10 locations of 1 m, and the whole length accuracy of the linear scale is less than 1 µm/m.
Ren, Dongxu; Zhao, Huiying; Zhang, Chupeng; Yuan, Daocheng; Xi, Jianpu; Zhu, Xueliang; Ban, Xinxing; Dong, Longchao; Gu, Yawen; Jiang, Chunye
2016-04-14
A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity of different locations of pitches in the mask to make a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method's theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the repeat exposure number of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the effectiveness of the multi-repeated photolithography method is confirmed to easily realize a pitch accuracy of 43 nm in any 10 locations of 1 m, and the whole length accuracy of the linear scale is less than 1 µm/m.
The Mercury Project: A High Average Power, Gas-Cooled Laser For Inertial Fusion Energy Development
Bayramian, A; Armstrong, P; Ault, E; Beach, R; Bibeau, C; Caird, J; Campbell, R; Chai, B; Dawson, J; Ebbers, C; Erlandson, A; Fei, Y; Freitas, B; Kent, R; Liao, Z; Ladran, T; Menapace, J; Molander, B; Payne, S; Peterson, N; Randles, M; Schaffers, K; Sutton, S; Tassano, J; Telford, S; Utterback, E
2006-11-03
Hundred-joule, kilowatt-class lasers based on diode-pumped solid-state technologies are being developed worldwide for laser-plasma interactions and as prototypes for fusion energy drivers. The goal of the Mercury Laser Project is to develop key technologies within an architectural framework that demonstrates basic building blocks for scaling to larger multi-kilojoule systems for inertial fusion energy (IFE) applications. Mercury has requirements that include: scalability to IFE beamlines, 10 Hz repetition rate, high efficiency, and 10^9 shot reliability. The Mercury laser has operated continuously for several hours at 55 J and 10 Hz with fourteen 4 × 6 cm² ytterbium-doped strontium fluoroapatite (Yb:S-FAP) amplifier slabs pumped by eight 100 kW diode arrays. The 1047 nm fundamental wavelength was converted to 523 nm at 160 W average power with 73% conversion efficiency using yttrium calcium oxy-borate (YCOB).
Status of HiLASE project: High average power pulsed DPSSL systems for research and industry
Mocek T.
2013-11-01
Full Text Available We introduce the Czech national R&D project HiLASE, which focuses on strategic development of advanced high-repetition rate, diode pumped solid state laser (DPSSL) systems that may find use in research, high-tech industry and in the future European large-scale facilities such as HiPER and ELI. Within HiLASE we explore two major concepts: thin-disk and cryogenically cooled multislab amplifiers capable of delivering average output powers above the 1 kW level in the picosecond-to-nanosecond pulsed regime. In particular, we have started a programme of technology development to demonstrate the scalability of the multislab concept up to the kJ level at a repetition rate of 1–10 Hz.
Status of HiLASE project: High average power pulsed DPSSL systems for research and industry
Mocek, T.; Divoky, M.; Smrz, M.; Sawicka, M.; Chyla, M.; Sikocinski, P.; Vohnikova, H.; Severova, P.; Lucianetti, A.; Novak, J.; Rus, B.
2013-11-01
We introduce the Czech national R&D project HiLASE, which focuses on strategic development of advanced high-repetition rate, diode pumped solid state laser (DPSSL) systems that may find use in research, high-tech industry and in the future European large-scale facilities such as HiPER and ELI. Within HiLASE we explore two major concepts: thin-disk and cryogenically cooled multislab amplifiers capable of delivering average output powers above the 1 kW level in the picosecond-to-nanosecond pulsed regime. In particular, we have started a programme of technology development to demonstrate the scalability of the multislab concept up to the kJ level at a repetition rate of 1-10 Hz.
Averaged 30 year climate change projections mask opportunities for species establishment
Serra-Diaz, Josep M.; Franklin, Janet; Sweet, Lynn C.; McCullough, Ian M.; Syphard, Alexandra D.; Regan, Helen M.; Flint, Lorraine E.; Flint, Alan L.; Dingman, John; Moritz, Max A.; Redmond, Kelly T.; Hannah, Lee; Davis, Frank W.
2016-01-01
Survival of early life stages is key for population expansion into new locations and for persistence of current populations (Grubb 1977, Harper 1977). Relative to adults, these early life stages are very sensitive to climate fluctuations (Ropert-Coudert et al. 2015), which often drive episodic or ‘event-limited’ regeneration (e.g. pulses) in long-lived plant species (Jackson et al. 2009). Thus, it is difficult to mechanistically associate 30-yr climate norms to dynamic processes involved in species range shifts (e.g. seedling survival). What are the consequences of temporal aggregation for estimating areas of potential establishment? We modeled seedling survival for three widespread tree species in California, USA (Quercus douglasii, Q. kelloggii, Pinus sabiniana) by coupling a large-scale, multi-year common garden experiment to high-resolution downscaled grids of climatic water deficit and air temperature (Flint and Flint 2012, Supplementary material Appendix 1). We projected seedling survival for nine climate change projections in two mountain landscapes spanning wide elevation and moisture gradients. We compared areas with windows of opportunity for seedling survival – defined as three consecutive years of seedling survival in our species, a period selected based on studies of tree niche ontogeny (Supplementary material Appendix 1) – to areas of 30-yr averaged estimates of seedling survival. We found that temporal aggregation greatly underestimated the potential for species establishment (e.g. seedling survival) under climate change scenarios.
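The contrast the authors draw, a long-term mean versus short runs of consecutive good years, can be sketched with hypothetical survival data. The series and the three-year window below are illustrative assumptions; the actual analysis used modeled seedling survival on downscaled climate grids.

```python
def establishment_windows(survival, run=3):
    """Return indices of years that begin `run` consecutive years of
    seedling survival (the 'window of opportunity' criterion)."""
    return [i for i in range(len(survival) - run + 1)
            if all(survival[i:i + run])]

# Hypothetical annual survival (1 = seedlings survive) at one grid
# cell over 15 years: mostly unsuitable, with one favorable pulse.
survival = [0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]

long_term_mean = sum(survival) / len(survival)   # 4/15, about 0.27
windows = establishment_windows(survival)        # one window, at year 2
```

A climate-normal-style average (about 0.27 here) suggests the site is unsuitable, yet the run-based criterion still finds one three-year window of opportunity, which is the masking effect the title describes.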
Overview of the HiLASE project: high average power pulsed DPSSL systems for research and industry
Divoky, M.; Smrz, M.; Chyla, M.; Sikocinski, P.; Severova, P.; Novak, O.; Huynh, J.; Nagisetty, S. S.; Miura, T.; Pila, J.; Slezak, O.; Sawicka, M.; Jambunathan, V.; Vanda, J.; Endo, A.; Lucianetti, A.; Rostohar, D.; Mason, P. D.; Phillips, P. J.; Ertel, K.; Banerjee, S.; Hernandez-Gomez, C.; Collier, J. L.; Mocek, T.
2014-01-01
An overview of the Czech national R&D project HiLASE (High average power pulsed laser) is presented. The project focuses on the development of advanced high repetition rate, diode pumped solid state laser (DPSSL) systems with energies in the range from mJ to 100 J and repetition rates in the range from 10 Hz to 100 kHz. Some applications of these lasers in research and hi-tech industry are also presented.
Grade Point Average: Report of the GPA Pilot Project 2013-14
Higher Education Academy, 2015
2015-01-01
This report is published as the result of a range of investigations and debates involving many universities and colleges and a series of meetings, presentations, discussions and consultations. Interest in a grade point average (GPA) system was originally initiated by a group of interested universities, progressing to the systematic investigation…
Future Projection of Droughts over South Korea Using Representative Concentration Pathways (RCPs)
Byung Sik Kim
2014-01-01
Full Text Available The Standardized Precipitation Index (SPI), a method widely used to analyze droughts related to climate change, does not consider variables related to temperature and is limited because it cannot capture changes in the hydrological balance, such as evapotranspiration, under climate change. If we considered only the future increase in precipitation from climate change, droughts might appear to decrease; however, because usable water can diminish as evapotranspiration increases, it is important to project and evaluate droughts with evapotranspiration taken into account when assessing the impact of climate change. As such, this study evaluated the occurrence of droughts using the Standardized Precipitation Evapotranspiration Index (SPEI), a newly conceptualized drought index that is similar to SPI but includes temperature variability. We extracted simulated future precipitation and temperature data (2011-2099) from the Representative Concentration Pathway (RCP) climate change scenarios of IPCC AR5 to evaluate the impact of future climate change on the occurrence of droughts in South Korea, and analyzed the ratio of evapotranspiration to precipitation at meteorological observatories nationwide. In addition, we calculated the SPEI to evaluate the future occurrence of droughts in South Korea; to confirm the validity of the SPEI results, extreme indices were analyzed. The results indicate that precipitation increases further into the future, but because evapotranspiration also increases with rising temperature and continued dryness, the severity of droughts is projected to worsen.
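The key difference from SPI is the quantity being standardized: the climatic water balance D = P - PET rather than precipitation alone. The sketch below is a crude z-score stand-in with invented monthly values; the operational SPEI instead fits a log-logistic distribution to D before standardizing.

```python
import statistics

def simple_water_balance_index(precip, pet):
    """Standardize the climatic water balance D = P - PET as a plain
    z-score. Negative values indicate drier-than-average conditions."""
    d = [p - e for p, e in zip(precip, pet)]
    mu = statistics.mean(d)
    sd = statistics.pstdev(d)
    return [(x - mu) / sd for x in d]

# Toy monthly series: precipitation rises over time, but potential
# evapotranspiration rises faster, so the balance (and the index)
# still trends downward - the situation the abstract describes.
precip = [100, 105, 110, 115, 120, 125]
pet    = [ 60,  75,  90, 105, 120, 135]
index = simple_water_balance_index(precip, pet)
```

An SPI-like index built from `precip` alone would trend upward here, while the water-balance index trends downward, illustrating why the two can disagree about future drought.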
Myers, S C; Rodgers, A J; Schultz, C A; Walter, W R
1998-06-18
Short-period regional P/S amplitude ratios hold much promise for discriminating low magnitude explosions from earthquakes in a Comprehensive Test Ban Treaty monitoring context. However, propagation effects lead to variability in regional phase amplitudes that, if not accounted for, can reduce or eliminate the ability of P/S ratios to discriminate the seismic source. In this study, several representations of short-period regional P/S amplitude ratios are compared in order to determine which methodology best accounts for the effect of heterogeneous structure on P/S amplitudes. These methodologies are: 1) distance corrections, including azimuthal subdivision of the data; 2) path-specific crustal waveguide parameter regressions; 3) cap-averaging (running mean smoothing); and 4) kriging. The "predictability" of each method is established by cross-validation (leave-one-out) analysis. We apply these techniques to represent Pn/Lg, Pg/Lg and Pn/Sn observations in three frequency bands (0.75-6.0 Hz) at station ABKT (Alibek, Turkmenistan), site of a primary seismic station of the International Monitoring System (IMS). Paths to ABKT sample diverse crustal structures (e.g. various topographic, sedimentary and geologic structures), leading to great variability in the observed P/S amplitude ratios. Subdivision of the data by back-azimuth leads to stronger distance trends than that for the entire data set. This observation alone indicates that path propagation effects due to laterally varying structure are important for the P/S ratios recorded at ABKT. For these data to be useful for isolating source characteristics, the scatter needs to be reduced by accounting for the path effects, and the resulting P/S ratio distribution needs to be Gaussian for spatial interpolation and discrimination strategies to be most effective. Each method reduces the scatter of the P/S ratios with varying degrees of success; however, kriging has the distinct advantages of providing the greatest variance
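The "predictability" criterion in this abstract is ordinary leave-one-out evaluation: predict each observation from all the others and compare the residuals across correction methods. A minimal sketch using the simplest possible predictor (a plain mean over invented ratios; the actual methods are distance regressions, cap-averaging, and kriging):

```python
def loo_residuals(values, predict):
    """Leave-one-out residuals: predict each observation from the rest."""
    residuals = []
    for i in range(len(values)):
        rest = values[:i] + values[i + 1:]
        residuals.append(values[i] - predict(rest))
    return residuals

def mean_predictor(rest):
    # Stand-in for a real correction model (regression, kriging, ...).
    return sum(rest) / len(rest)

# Toy log(P/S) ratios along several paths (invented numbers).
ratios = [0.2, 0.4, 0.6]
res = loo_residuals(ratios, mean_predictor)
# res is approximately [-0.3, 0.0, 0.3]: the mean of the two remaining
# observations over- or under-predicts each held-out one by 0.3.
```

Swapping `mean_predictor` for each candidate correction and comparing the spread of the residuals is exactly how one method is judged to "account for path effects" better than another.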
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Arnold, Jeffrey; Clark, Martyn; Gutmann, Ethan; Wood, Andy; Nijssen, Bart; Rasmussen, Roy
2016-04-01
The United States Army Corps of Engineers (USACE) has had primary responsibility for multi-purpose water resource operations on most of the major river systems in the U.S. for more than 200 years. In that time, the USACE projects and programs making up those operations have proved mostly robust against the range of natural climate variability encountered over their operating life spans. However, in some watersheds and for some variables, climate change now is known to be shifting the hydroclimatic baseline around which that natural variability occurs and changing the range of that variability as well. This makes historical stationarity an inappropriate basis for assessing continued project operations under climate-changed futures. That means new hydroclimatic projections are required at multiple scales to inform decisions about specific threats and impacts, and for possible adaptation responses to limit water-resource vulnerabilities and enhance operational resilience. However, projections of possible future hydroclimatologies have myriad complex uncertainties that require explicit guidance for interpreting and using them to inform those decisions about climate vulnerabilities and resilience. Moreover, many of these uncertainties overlap and interact. Recent work, for example, has shown the importance of assessing the uncertainties from multiple sources including: global model structure [Meehl et al., 2005; Knutti and Sedlacek, 2013]; internal climate variability [Deser et al., 2012; Kay et al., 2014]; climate downscaling methods [Gutmann et al., 2012; Mearns et al., 2013]; and hydrologic models [Addor et al., 2014; Vano et al., 2014; Mendoza et al., 2015]. Revealing, reducing, and representing these uncertainties is essential for defining the plausible quantitative climate change narratives required to inform water-resource decision-making. And to be useful, such quantitative narratives, or storylines, of climate change threats and hydrologic impacts must sample
Malloch, Douglas C.; Michael, William B.
1981-01-01
This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectance theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…
Ronda, Elena; López-Jacob, M José; Paredes-Carbonell, Joan J; López, Pilar; Boix, Pere; García, Ana M
2014-01-01
This article describes the experience of knowledge translation between researchers of the ITSAL (immigration, work and health) project and representatives of organizations working with immigrants to discuss the results obtained in the project and future research lines. A meeting was held, attended by three researchers and 18 representatives from 11 institutions. Following a presentation of the methodology and results of the project, the participants discussed the results presented and research areas of interest, thus confirming matches between the two sides and obtaining proposals of interest for the ITSAL project. We understand the process described as an approach to social validation of some of the main results of this project. This experience has allowed us to open a channel of communication with the target population of the study, in line with the necessary two-way interaction between researchers and users. Copyright © 2013 SESPAS. Published by Elsevier España. All rights reserved.
Basch, Charles E; Basch, Corey H; Ruggles, Kelly V; Rajan, Sonali
2014-12-11
Consistency, quality, and duration of sleep are important determinants of health. We describe sleep patterns among demographically defined subgroups from the Youth Risk Behavior Surveillance System reported in 4 successive biennial representative samples of American high school students (2007 to 2013). Across the 4 waves of data collection, 6.2% to 7.7% of females and 8.0% to 9.4% of males reported obtaining 9 or more hours of sleep. Insufficient duration of sleep is pervasive among American high school students. Despite substantive public health implications, intervention research on this topic has received little attention.
Tuffner, Francis K.; Bonebrake, Christopher A.
2012-02-14
This document is one of a series of reports estimating the benefits of deploying technologies similar to those implemented on the Smart Grid Investment Grant (SGIG) projects. Four technical reports cover the various types of technologies deployed in the SGIG projects, distribution automation, demand response, energy storage, and renewables integration. A fifth report in the series examines the benefits of deploying these technologies on a national level. This technical report examines the impacts of energy storage technologies deployed in the SGIG projects.
Sea-level projections representing deeply uncertain ice-sheet contributions
Bakker, Alexander M R; Ruckert, Kelsey L; Keller, Klaus
2016-01-01
Future sea-level rise poses nontrivial risks for many coastal communities. Managing these risks often relies on consensus projections like those provided by the IPCC. Yet, there is a growing awareness that the surrounding uncertainties may be much larger than typically perceived. Recently published sea-level projections appear widely divergent and highly sensitive to non-trivial model choices, and the West Antarctic Ice Sheet (WAIS) may be much less stable than previously believed, enabling a rapid disintegration. In response, some agencies have already announced that they will update their projections accordingly. Here, we present a set of probabilistic sea-level projections that approximate deeply uncertain WAIS contributions. The projections aim to inform robust decisions by clarifying the sensitivity to non-trivial or controversial assumptions. We show that the deeply uncertain WAIS contribution can dominate other uncertainties within decades. These deep uncertainties call for the development of robust adaptive strate...
Singh, Ruchi; Vyakaranam, Bharat GNVSR
2012-02-14
This document is one of a series of reports estimating the benefits of deploying technologies similar to those implemented on the Smart Grid Investment Grant (SGIG) projects. Four technical reports cover the various types of technologies deployed in the SGIG projects, distribution automation, demand response, energy storage, and renewables integration. A fifth report in the series examines the benefits of deploying these technologies on a national level. This technical report examines the impacts of addition of renewable resources- solar and wind in the distribution system as deployed in the SGIG projects.
Smart grid and households: How are household consumers represented in experimental projects?
Hansen, Meiken; Borup, Mads
2017-01-01
This study contributes a comparative analysis of 11 Danish smart grid experimental projects with household involvement. The analysis describes the scripts for the future smart grid interaction investigated in the examined projects, the approaches to user representation, and the project findings...... concerning consumers and smart grids. Three main dimensions of the scripts are identified and discussed: economic incentives, automation, and information/visualisation. The methods employed for the development of user representations are primarily technical and techno-economic. While our analysis confirms...... previous findings that economic rationales and automation are central elements of smart grid scripts, the analysis also shows that there is considerable variation in the details of the scripts investigated. Our findings suggest that it may be useful for future smart grid projects to be more systematic...
Smart grids and households: how are household consumers represented in experimental projects?
Hansen, Meiken; Borup, Mads
2017-01-01
This study contributes a comparative analysis of 11 Danish smart grid experimental projects with household involvement. The analysis describes the scripts for the future smart grid interaction investigated in the examined projects, the approaches to user representation, and the project findings...... concerning consumers and smart grids. Three main dimensions of the scripts are identified and discussed: economic incentives, automation, and information/visualisation. The methods employed for the development of user representations are primarily technical and techno-economic. While our analysis confirms...... previous findings that economic rationales and automation are central elements of smart grid scripts, the analysis also shows that there is considerable variation in the details of the scripts investigated. Our findings suggest that it may be useful for future smart grid projects to be more systematic...
Runesson, Björn; Gasparini, Alessandro; Qureshi, Abdul Rashid; Norin, Olof; Evans, Marie; Barany, Peter; Wettermark, Björn; Elinder, Carl Gustaf; Carrero, Juan Jesús
2015-01-01
Background: We here describe the construction of the Stockholm CREAtinine Measurement (SCREAM) cohort and assess its coverage/representativeness of Stockholm County in Sweden. SCREAM has the principal aims of estimating the burden and consequences of chronic kidney disease (CKD) and identifying inappropriate drug use (prescription of nephrotoxic, contraindicated or ill-dosed drugs). Methods: SCREAM is a repository of laboratory data of individuals residing or accessing healthcare in the regi...
Smart grid and households: How are household consumers represented in experimental projects?
Hansen, Meiken; Borup, Mads
This paper investigates how smart grid experimental projects in Denmark envision the future role of private consumers in the energy system. Smart grid development in Denmark can be characterised as compound, with several diverse actors trying to shape the future of the energy system...... (Nyborg & Røpke, 2011). There are many visions of the content of a future smart grid. An active role for the users of electricity is a central difference between the current electricity system and the future smart grid. Analyses have shown that users are currently receiving increasing attention in smart grid...
Constructing and Representing: a New Project for 3d Surveying of Yazilikaya - HATTUŠA
Repola, L.; Marazzi, M.; Tilia, S.
2017-05-01
Within the cooperation project between the University Suor Orsola Benincasa of Naples and the archaeological mission in Hattuša of the German Archaeological Institute of Istanbul, directed by Andreas Schachner, in agreement with the Turkish Ministry of Culture and Tourism, the workgroup of the University of Naples carried out, in September 2015, a first survey campaign of the whole rocky site of Yazılıkaya. The experimentation was aimed at constructing a global 3D territorial and monumental model of the site, capable, through the application of different scanning procedures suited to the different components (topography, the rocky complex, the cultural spaces therein, the complex of sculptural reliefs, and the inscriptions accompanying the divine representations), of virtually reproducing in detail, for safeguarding, exhibition and study purposes (in particular from an epigraphic and historic-artistic point of view), all the aspects characterizing the artefact that are not completely visible to the naked eye today.
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Najam UL MABOOD
2017-06-01
Full Text Available Advancement in technology has reshaped businesses across the globe, forcing companies to perform tasks and activities in the form of projects. Stakeholder behaviour, stakeholder management, strategic fit, and role and task clarity are some of the factors that shape project success. The current study examines the impact of strategic fit and role clarity on average project success, and further examines the moderating role of market turbulence on the relationship between these independent and dependent variables. The population of the study comprises the telecom sector of Pakistan. Data were collected from 201 project team members working on diverse projects in telecom companies of Rawalpindi and Islamabad, through questionnaires measured on a Likert scale adopted from the study of Beringer, Jonas & Kock (2013). Each questionnaire comprises 3 items to measure each variable. SPSS version 20.0 was used to analyse the data by applying Pearson correlation and multiple regression analysis. Findings show that role clarity and strategic fit contributed significantly to enhancing project success. Results further show that market turbulence negatively moderated the relationship between the independent variables and average project success. The study concludes with recommendations for future researchers.
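The moderation test described in this abstract (market turbulence moderating the effect of strategic fit and role clarity on project success) is conventionally run as a regression with product terms. Below is a minimal numpy sketch of that idea, not the authors' SPSS analysis; the variable names and the mean-centering step are assumptions.

```python
import numpy as np

def moderated_regression(X1, X2, M, y):
    """Fit y ~ b0 + b1*X1 + b2*X2 + b3*M + b4*(X1*M) + b5*(X2*M).

    X1, X2: independent variables (e.g. strategic fit, role clarity).
    M: moderator (e.g. market turbulence); the interaction coefficients
    b4 and b5 carry the moderation effect.
    Returns the coefficient vector [b0, b1, b2, b3, b4, b5].
    """
    X1, X2, M, y = (np.asarray(v, dtype=float) for v in (X1, X2, M, y))
    # Mean-center predictors before forming products to reduce collinearity
    X1c, X2c, Mc = (v - v.mean() for v in (X1, X2, M))
    design = np.column_stack([
        np.ones_like(y), X1c, X2c, Mc, X1c * Mc, X2c * Mc
    ])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef
```

A negative sign on the interaction coefficients would correspond to the "negatively moderated" finding reported above.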
Cover, Keith S
2008-01-01
While the multiexponential nature of T2 decays measured in vivo is well known, characterizing T2 decays by a single time constant is still very useful when differentiating among structures and pathologies in MRI images. A novel, robust, fast and very simple method is presented for both estimating and displaying the average time constant for the T2 decay of each pixel from a multiecho MRI sequence. The average time constant is calculated from the average of the values measured from the T2 decay over many echoes. For a monoexponential decay, the normalized decay average varies monotonically with the time constant. Therefore, it is simple to map any normalized decay average to an average time constant. This method takes advantage of the robustness of the normalized decay average to both artifacts and multiexponential decays. Color intensity projections (CIPs) were used to display 32 echoes acquired at a 10ms spacing as a single color image. The brightness of each pixel in each color image was determined by the i...
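The mapping described above, from a normalized decay average to an average T2 time constant, can be sketched as follows. This is a hedged reconstruction of the general idea (forward-model a monoexponential decay over a grid of T2 values, then invert by interpolation), not the paper's exact implementation; the grid range and the first-echo normalization are assumptions.

```python
import numpy as np

def normalized_decay_average(signal):
    """Average of the echo amplitudes after normalising by the first echo."""
    signal = np.asarray(signal, dtype=float)
    return float(np.mean(signal / signal[0]))

def average_time_constant(signal, echo_times, t2_grid=None):
    """Map a normalized decay average back to an average T2 time constant.

    For a monoexponential decay, mean_i exp(-(t_i - t_1)/T2) increases
    monotonically with T2, so a forward-modelled lookup table can be
    inverted by linear interpolation.  The 1-500 ms grid is an assumed
    search window, not a value from the paper.
    """
    echo_times = np.asarray(echo_times, dtype=float)
    if t2_grid is None:
        t2_grid = np.linspace(1.0, 500.0, 2000)  # candidate T2 values (ms)
    decays = np.exp(-echo_times[:, None] / t2_grid[None, :])
    nda_grid = np.mean(decays / decays[0], axis=0)  # monotone in T2
    return float(np.interp(normalized_decay_average(signal), nda_grid, t2_grid))
```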
Letourneau, A.; Chabod, S.; Marie, F.; Ridikas, D.; Toussaint, J.C.; Veyssiere, C. [CEA/DSM/DAPNIA Saclay, Gif-sur-Yvette (France); Blandin, C. [CEA/DEN/DER/SPEX Cadarache - Saint-Paul-lez-Durances (France); Mutti, P. [Inst. Laue-Langevin, Grenoble (France)
2003-07-01
In the framework of nuclear waste transmutation studies, the Mini-INCA project has been initiated at CEA/DSM with the objective of determining optimal conditions for the transmutation and incineration of minor actinides (MA) in high-intensity neutron fluxes. Our experimental tools, based on alpha- and gamma-spectroscopy of the samples and the development of micro fission chambers, can gather either microscopic information on nuclear reactions (total or partial cross sections for neutron capture and/or fission reactions) or macroscopic information on transmutation and incineration potentials. Neutron capture cross sections of selected actinides ({sup 241}Am, {sup 242}Am, {sup 242}Pu, {sup 237}Np) have already been measured at ILL, showing some discrepancies when compared to evaluated data libraries but overall good agreement with recent data. The studies and possibilities offered by the MEGAPIE project to assess the neutronic performance of a 1 MW spallation target and the incineration of MA in a representative neutron flux of a spallation source are also discussed. (orig.)
Roebben, Gert; Kestens, Vikram; Varga, Zoltan; Charoud-Got, Jean; Ramaye, Yannic; Gollwitzer, Christian; Bartczak, Dorota; Geißler, Daniel; Noble, James; Mazoua, Stéphane; Meeus, Nele; Corbisier, Philippe; Palmai, Marcell; Mihály, Judith; Krumrey, Michael; Davies, Julie; Resch-Genger, Ute; Kumarswami, Neelam; Minelli, Caterina; Sikora, Aneta; Goenaga-Infante, Heidi
2015-10-01
This paper describes the production and characteristics of the nanoparticle test materials prepared for common use in the collaborative research project NanoChOp (Chemical and optical characterisation of nanomaterials in biological systems), in casu suspensions of silica nanoparticles and CdSe/CdS/ZnS quantum dots. This paper is the first to illustrate how to assess whether nanoparticle test materials meet the requirements of a 'reference material' (ISO Guide 30:2015) or rather those of the recently defined category of 'representative test material' (ISO TS 16195:2013). The NanoChOp test materials were investigated with small-angle X-ray scattering (SAXS), dynamic light scattering (DLS) and centrifugal liquid sedimentation (CLS) to establish whether they complied with the required monomodal particle size distribution. The presence of impurities, aggregates, agglomerates and viable microorganisms in the suspensions was investigated with DLS, CLS, optical and electron microscopy and via plating on nutrient agar. Suitability of surface functionalization was investigated with attenuated total reflection Fourier transform infrared spectrometry (ATR-FTIR) and via the capacity of the nanoparticles to be fluorescently labeled or to bind antibodies. Between-unit homogeneity and stability were investigated in terms of particle size and zeta potential. This paper shows that only on the basis of a detailed characterization process can the status of a test material be raised to representative test material or reference material, and how this status depends on its intended use.
Stack, Sue; Watson, Jane; Hindley, Sue; Samson, Pauline; Devlin, Robyn
2010-01-01
This paper reports on the experiences of a group of teachers engaged in an action research project to develop critical numeracy classrooms. The teachers initially explored how contexts in the media could be used as bases for activities to encourage student discernment and critical thinking about the appropriate use of the underlying mathematical…
Westervelt, D. M.; Mauzerall, D. L.; Horowitz, L. W.; Naik, V.
2014-12-01
It is widely expected that global emissions of atmospheric aerosols and their precursors will decrease strongly throughout the remainder of the 21st century, due to emission reduction policies enacted based on human health concerns. However, the resulting decrease in atmospheric aerosol burden will have unintended climate consequences. Since aerosols generally exert a net cooling influence on the climate, their removal will lead to an unmasking of global warming as well as other changes to the climate system. Aerosol and precursor global emissions decrease by as much as 80% by the year 2100, according to projections in four Representative Concentration Pathway (RCP) scenarios. We use the Geophysical Fluid Dynamics Laboratory Climate Model version 3 (GFDL CM3) to simulate future climate over the 21st century with and without aerosol emission changes projected by the RCPs in order to isolate the radiative forcing and climate response due to the aerosol reductions. We find that up to 1 W m-2 of radiative forcing may be unmasked globally by 2100 due to reductions in aerosol and precursor emissions, leading to average global temperature increases up to 1 K and global precipitation rate increases up to 0.09 mm d-1 (3%). Regionally and locally, climate impacts are much larger, as RCP8.5 projects a 2.1 K warming over China, Japan, and Korea due to reduced aerosol emissions. Our results highlight the importance of crafting emissions control policies with both climate and air pollution benefits in mind. The expected unmasking of additional global warming from aerosol reductions highlights the importance of robust greenhouse gas mitigation policies and may require more aggressive policies than anticipated.
任留成; 吕泗洲
2013-01-01
A new kind of map projection, called the multi-level combined projection, is designed in this paper. It is suitable for the geographic grid system of China, a hierarchical grid system partitioned by latitude at 1°, 10° and other intervals. The basic idea is to divide the ellipsoid evenly into a number of latitude bands according to the theory of differential geometry, and then to establish a projection model for each band, yielding a new kind of map projection. This projection can be subdivided according to grid scale, and can be developed into a dynamic map projection appropriate for multi-resolution grid models. Distortion computations show that the projection is conformal and that its area and length distortions are small; in high-latitude areas in particular, the distortions are markedly smaller than those of the Mercator projection.
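The claim that a band-wise projection reduces high-latitude distortion relative to Mercator can be illustrated with point scale factors on a sphere. This sketch shows the principle only, not the paper's ellipsoidal model; the band width and the choice of a band-centre standard parallel are assumptions.

```python
import math

def mercator_scale(lat_deg):
    """Point scale factor of spherical Mercator: k = 1 / cos(lat)."""
    return 1.0 / math.cos(math.radians(lat_deg))

def banded_scale(lat_deg, band_width_deg=10.0):
    """Scale factor when each latitude band is projected with its own
    standard parallel at the band centre (a sketch of the multi-level
    combined idea, on a sphere rather than the ellipsoid)."""
    band_centre = (math.floor(lat_deg / band_width_deg) + 0.5) * band_width_deg
    return math.cos(math.radians(band_centre)) / math.cos(math.radians(lat_deg))
```

At 69°N, plain Mercator stretches lengths by a factor of about 2.8, while the 10°-band version stays within about 18% of true scale, matching the abstract's observation about high latitudes.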
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
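The two families of estimators compared in this abstract can be sketched directly: the renormalised barycenter of unit quaternions versus a Riemannian (Karcher) mean computed with log/exp maps on the quaternion sphere. This is a generic sketch of both estimators, not the article's exact formulation; the sign-alignment convention and the iteration count are assumptions.

```python
import numpy as np

def _align(qs, ref):
    """q and -q encode the same rotation; pick the sign closest to a
    reference quaternion so that averaging is well defined."""
    qs = np.asarray(qs, dtype=float)
    signs = np.where(qs @ ref < 0, -1.0, 1.0)
    return qs * signs[:, None]

def barycenter_mean(qs):
    """Naive mean: renormalised Euclidean barycenter of the quaternions."""
    qs = np.asarray(qs, dtype=float)
    qs = _align(qs, qs[0])
    m = qs.mean(axis=0)
    return m / np.linalg.norm(m)

def riemannian_mean(qs, iters=20):
    """Karcher mean on the unit-quaternion sphere via log/exp maps."""
    qs = np.asarray(qs, dtype=float)
    mu = qs[0].copy()
    for _ in range(iters):
        qa = _align(qs, mu)
        # log map: project each quaternion into the tangent space at mu
        dots = np.clip(qa @ mu, -1.0, 1.0)
        perp = qa - dots[:, None] * mu
        norms = np.linalg.norm(perp, axis=1)
        thetas = np.arccos(dots)
        tang = np.zeros_like(qa)
        nz = norms > 1e-12
        tang[nz] = (thetas[nz] / norms[nz])[:, None] * perp[nz]
        v = tang.mean(axis=0)  # average in the tangent space
        a = np.linalg.norm(v)
        if a < 1e-12:
            break
        mu = np.cos(a) * mu + np.sin(a) * v / a  # exp map back to the sphere
    return mu
```

For tightly clustered rotations the two estimates nearly coincide, which is consistent with the abstract's point that the barycentric approach is a first-order approximation to the Riemannian mean.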
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average winter total precipitation and projected change in precipitation for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average summer temperature and projected change in temperature for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average winter total precipitation and projected change in precipitation for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average winter total precipitation and projected change in precipitation for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average winter total precipitation and projected change in precipitation for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average summer temperature and projected change in temperature for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average summer temperature and projected change in temperature for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average winter temperature and projected change in temperature for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average summer total precipitation and projected change in precipitation for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average annual total precipitation and projected change in precipitation for the northern portion of Alaska. The Alaska portion of the Arctic...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average winter temperature and projected change in temperature for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average summer total precipitation and projected change in precipitation for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average winter temperature and projected change in temperature for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average summer temperature and projected change in temperature for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average summer total precipitation and projected change in precipitation for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average annual total precipitation and projected change in precipitation for the northern portion of Alaska. The Alaska portion of the Arctic...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average annual temperature and projected change in temperature for the northern portion of Alaska. The Alaska portion of the Arctic LCC's...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average winter temperature and projected change in temperature for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average annual total precipitation and projected change in precipitation for the northern portion of Alaska. The Alaska portion of the Arctic...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average annual temperature and projected change in temperature for the northern portion of Alaska. The Alaska portion of the Arctic LCC's...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average annual temperature and projected change in temperature for the northern portion of Alaska. The Alaska portion of the Arctic LCC's...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average summer total precipitation and projected change in precipitation for the northern portion of Alaska. For the purposes of these maps,...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average annual temperature and projected change in temperature for the northern portion of Alaska. The Alaska portion of the Arctic LCC's...
Sherba, J.; Sleeter, B. M.
2015-12-01
The Intergovernmental Panel on Climate Change (IPCC) Representative Concentration Pathways (RCPs) include global land-use change projections for four global emissions scenarios. These projections are potentially useful for driving regional-scale models needed for informing land-use and management interactions. Here, we applied global gridded RCP land-use projections within a regional-scale state-and-transition simulation model (STSM) projecting land-use change in the conterminous United States. First, we cross-walked RCP land-use transition classes to land-use classes more relevant for modeling at the regional scale. Coarse grid RCP land-use transition values were then downscaled to EPA Level III ecoregion boundaries using historical land-use transition data from the USGS Land Cover Trends (LCT) dataset. Downscaled transitions were aggregated to the ecoregion level. Ecoregions were chosen because they represent areas with consistent land-use patterns that have proven useful for studying land-use and management interactions. Ecoregion-level RCP projections were applied in a state-and-transition simulation model (STSM) projecting land-use change between 2005 and 2100 at the 1-km scale. Resulting RCP-based STSM projections were compared to STSM projections created using scenario projections from the Special Report on Emissions Scenarios (SRES) and the USGS LCT dataset. While most land-use trajectories appear plausible, some transitions such as forest harvest are unreasonable in the context of historical land-use patterns and the socio-economic drivers of change outlined for each scenario. This effort provides a method for using the RCP land-use projections in a wide range of regional scale models. However, further investigation is needed into the performance of RCP land-use projections at the regional scale.
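The downscaling step described above (allocating coarse RCP grid-cell transitions to ecoregions using historical Land Cover Trends data) can be sketched as a proportional allocation. This is a schematic of the general approach, not the authors' exact procedure; the even-split fallback for regions with no historical signal is an assumption.

```python
def downscale_transitions(coarse_amount, historical_by_region):
    """Allocate a coarse-grid transition amount (e.g. area of forest-to-
    cropland change from one RCP cell) to intersecting ecoregions in
    proportion to their historical shares of that same transition.

    historical_by_region: dict region -> historical transition amount
    (e.g. derived from the USGS Land Cover Trends dataset).
    Returns dict region -> downscaled amount; amounts sum to coarse_amount.
    """
    total = sum(historical_by_region.values())
    if total == 0:
        # No historical signal for this transition: fall back to an even split
        n = len(historical_by_region)
        return {r: coarse_amount / n for r in historical_by_region}
    return {r: coarse_amount * h / total
            for r, h in historical_by_region.items()}
```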
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...
The Oryza Map Alignment Project (OMAP) provides the first comprehensive experimental system for understanding the evolution, physiology and biochemistry of a full genus in plants or animals. We have constructed twelve deep-coverage BAC libraries that are representative of both diploid and tetraploid...
Ohira, Shingo; Ueda, Yoshihiro; Hashimoto, Misaki; Miyazaki, Masayoshi; Isono, Masaru; Kamikaseda, Hiroshi; Masaoka, Akira; Takashina, Masaaki; Koizumi, Masahiko; Teshima, Teruki
2016-01-01
The aim of this study was to validate the use of an average intensity projection (AIP) for volumetric-modulated arc therapy for stereotactic body radiation therapy (VMAT-SBRT) planning for a moving lung tumor located near the diaphragm. VMAT-SBRT plans were created using AIPs reconstructed from 10 phases of 4DCT images that were acquired with a target phantom moving with amplitudes of 5, 10, 20 and 30 mm. To generate a 4D dose distribution, the static dose for each phase was recalculated and the doses were accumulated by using the phantom position known for each phase. For 10 patients with lung tumors, a deformable registration was used to generate 4D dose distributions. Doses to the target volume obtained from the AIP plan and the 4D plan were compared, as were the doses obtained from each plan to the organs at risk (OARs). In both the phantom and the clinical study, dose discrepancies for all parameters of the dose volume (D(min), D(99), D(max), D(1) and D(mean)) to the target were small, supporting use of the AIP as the planning CT image for predicting 4D dose; however, doses to the OARs with large respiratory motion were underestimated with the AIP approach.
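An average intensity projection is a voxel-wise mean over the respiratory phases of the 4DCT series; a minimal sketch, assuming the phases are stacked along the first axis:

```python
import numpy as np

def average_intensity_projection(phases):
    """Collapse a 4DCT series of shape (n_phases, z, y, x) into a single
    AIP volume by averaging each voxel over the respiratory phases."""
    phases = np.asarray(phases, dtype=float)
    return phases.mean(axis=0)
```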
U.S. Geological Survey, Department of the Interior — Projected Hazard: Geographic extent of projected coastal flooding, low-lying vulnerable areas, and maxium/minimum flood potential associated with the sea-level rise...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Geographic extent of projected coastal flooding, low-lying vulnerable areas, and maxium/minimum flood potential (flood uncertainty) associated with...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Geographic extent of projected coastal flooding, low-lying vulnerable areas, and maxium/minimum flood potential associated with the sea-level rise...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Geographic extent of projected coastal flooding, low-lying vulnerable areas, and maxium/minimum flood potential (flood uncertainty) associated with...
Huetos, O; Bartolomé, M; Aragonés, N; Cervantes-Amat, M; Esteban, M; Ruiz-Moraga, M; Pérez-Gómez, B; Calvo, E; Vila, M; Castaño, A
2014-09-15
This manuscript presents the levels of six indicator polychlorinated biphenyl (PCB) congeners (IUPAC nos. 28, 52, 101, 138, 153 and 180) in the serum of 1880 individuals from a representative sample of the Spanish working population recruited between March 2009 and July 2010. Three out of the six PCBs studied (180, 153 and 138) were quantified in more than 99% of participants. PCB 180 was the highest contributor, followed by PCBs 153 and 138, with relative abundances of 42.6%, 33.2% and 24.2%, respectively. In contrast, PCBs 28 and 52 were detected in only 1% of samples, whereas PCB 101 was detectable in 6% of samples. The geometric mean (GM) for ΣPCBs138/153/180 was 135.4 ng/g lipid (95% CI: 121.3-151.2 ng/g lipid) and the 95th percentile was 482.2 ng/g lipid. Men had higher PCB blood concentrations than women (GMs 138.9 and 129.9 ng/g lipid respectively). As expected, serum PCB levels increased with age and frequency of fish consumption, particularly in those participants younger than 30 years of age. The highest levels we found were for participants from the Basque Country, whereas the lowest concentrations were found for those from the Canary Islands. The Spanish population studied herein had similar levels to those found previously in Greece and southern Italy, lower levels than those in France and central Europe, and higher PCB levels than those in the USA, Canada and New Zealand. This paper provides the first baseline information regarding PCB exposure in the Spanish adult population on a national scale. The results will allow us to establish reference levels, follow temporal trends and identify high-exposure groups, as well as monitor implementation of the Stockholm Convention in Spain.
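The summary statistics quoted above (geometric mean and 95th percentile of lipid-adjusted serum levels) can be reproduced as follows. This is a generic sketch; the linear-interpolation percentile definition is an assumption rather than the paper's stated method.

```python
import numpy as np

def summarize_levels(levels_ng_per_g):
    """Geometric mean (GM) and 95th percentile of serum PCB levels,
    the summaries reported for ΣPCBs in the abstract.
    GM = exp(mean(log x)); levels must be strictly positive."""
    x = np.asarray(levels_ng_per_g, dtype=float)
    gm = float(np.exp(np.mean(np.log(x))))
    p95 = float(np.percentile(x, 95))
    return gm, p95
```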
R. M. Ortega
2012-06-01
Full Text Available Introduction: The adequacy of calcium intake in the Spanish child population has been the subject of debate and controversy: some studies indicate that intake may be inadequate in a variable percentage of schoolchildren, while other documents insist on the danger of an excessive intake in a large percentage of the school population. Objectives: To assess calcium intake and the food sources of this nutrient in a representative sample of Spanish children, also analysing the adequacy of the intake with respect to the recommended intakes. Methods: 903 schoolchildren (aged 7 to 11) from ten Spanish provinces (Tarragona, Cáceres, Burgos, Guadalajara, Valencia, Salamanca, Córdoba, Vizcaya, Lugo and Madrid) were studied, constituting a representative sample of the Spanish population of that age. Energy and nutrient intake was determined using a 3-day food record, including a Sunday. Calcium intake was compared with the Recommended Intakes (RI) established for this mineral. The anthropometric parameters studied were weight and height, from which the body mass index (BMI) was calculated. Results: In the group studied (55.3% girls and 44.7% boys), 30.7% presented excess weight (overweight 23.3%, obesity 7.4%). The calcium intake of the children studied (859.9 ± 249.2 mg/day) represented 79.5% of the recommended intake; 76.7% of the children had intakes below the recommended level and 40.1% had intakes...
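The adequacy figures in this abstract rest on two one-line calculations: BMI from weight and height, and intake expressed as a percentage of the recommended intake (RI). The sketch below is illustrative only; it does not use the study's actual age-specific RI values, which must be supplied by the caller.

```python
def bmi(weight_kg, height_m):
    """Body mass index from the weight and height measured in the study."""
    return weight_kg / height_m ** 2

def calcium_adequacy(intake_mg, recommended_mg):
    """Express a calcium intake as a percentage of the recommended intake
    (RI) and flag whether it falls below the RI."""
    pct = 100.0 * intake_mg / recommended_mg
    return pct, intake_mg < recommended_mg
```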
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived water levels (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm Modeling...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived ocean current velocities (in meters per second) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived ocean current velocities (in meters per second) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The...
U.S. Geological Survey, Department of the Interior — Geographic extent of projected coastal flooding, low-lying vulnerable areas, and maxium/minimum flood potential (flood uncertainty) associated with the sea-level...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived total water levels (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived water levels (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm Modeling...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived ocean current velocities (in meters per second) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived significant wave height (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived total water levels (in meters) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The Coastal Storm...
U.S. Geological Survey, Department of the Interior — Projected Hazard: Model-derived ocean current velocities (in meters per second) for the given storm condition and sea-level rise (SLR) scenario. Model Summary: The...
U.S. Geological Survey, Department of the Interior — Geographic extent of projected coastal flooding, low-lying vulnerable areas, and maxium/minimum flood potential (flood uncertainty) associated with the sea-level...
U.S. Geological Survey, Department of the Interior — Geographic extent of projected coastal flooding, low-lying vulnerable areas, and maxium/minimum flood potential (flood uncertainty) associated with the sea-level...
U.S. Geological Survey, Department of the Interior — Geographic extent of projected coastal flooding, low-lying vulnerable areas, and maxium/minimum flood potential (flood uncertainty) associated with the sea-level...
A Comparative Study of Representative MOOCs Projects in China and Overseas
李艳; 张慕华
2014-01-01
In recent years, massive open online courses (MOOCs) have attracted extensive worldwide attention in the field of higher education. Various types of media have produced a large number of reports on MOOCs, and many companies have shown interest in developing and implementing them. The United States took the lead in diffusing MOOCs practice, putting forward famous projects such as Udemy, Udacity, Coursera and edX. During the same period, some European companies, together with local higher education institutions, launched their own MOOCs projects. The Ministry of Education in China and leading Chinese universities are also actively exploring MOOCs with their own features. This study aims to provide useful experiences and lessons for developing MOOCs in China by reviewing the origin and connotation of MOOCs and comparing 13 representative MOOCs projects around the world. The comparative analysis is conducted along three dimensions: contents, tools, and practice. The 13 selected representative MOOCs projects are Udemy, Coursera, Udacity, edX, Canvas Network, FutureLearn, OpenupEd, iversity, ALISON, OpenLearning, Open2Study, ewant and Xuetang online. Among these projects, five are from America, four from Europe, two from Australia, and two from China. In the dimension of content, the study compared the 13 projects according to three sub-dimensions: the number and types of courses provided by each project, the manner of course organization, and the main features of course design. In the dimension of tools, the study compared the projects according to three sub-dimensions: learning management system, tool development, and usage of social software. In the dimension of practice, the study compared the projects according to four sub-dimensions: the operators and co-operators of each project, the operating modes of each project, the types of copyright protection adopted in the projects, and the availability and the types of
Arctic Landscape Conservation Cooperative — This raster, created in 2010, is output from the Geophysical Institute Permafrost Lab (GIPL) model and represents simulated mean annual ground temperature (MAGT) in...
Arctic Landscape Conservation Cooperative — This raster, created in 2010, is output from the Geophysical Institute Permafrost Lab (GIPL) model and represents simulated active layer thickness (ALT) in meters...
Maximilien Brice
2010-01-01
28 May 2010 - Representatives of the Netherlands School of Public Administration guided in the ATLAS visitor centre by ATLAS Collaboration Member and NIKHEF G. Bobbink and ATLAS Magnet Project Leader H. ten Kate.
Röhl Johannes
2011-08-01
Full Text Available Abstract Dispositions and tendencies feature significantly in the biomedical domain and therefore in representations of knowledge of that domain. They are not only important for specific applications like an infectious disease ontology, but also as part of a general strategy for modelling knowledge about molecular interactions. But the task of representing dispositions in some formal ontological systems is fraught with several problems, which are partly due to the fact that Description Logics can only deal well with binary relations. The paper will discuss some of the results of the philosophical debate about dispositions, in order to see whether the formal relations needed to represent dispositions can be broken down to binary relations. Finally, we will discuss problems arising from the possibility of the absence of realizations, of multi-track or multi-trigger dispositions and offer suggestions on how to deal with them.
Representing Development presents the different social representations that have formed the idea of development in Western thinking over the past three centuries. Offering an acute perspective on the current state of developmental science and providing constructive insights into future pathways...... and development, addressing their contemporary enactments and reflecting on future theoretical and empirical directions. The first section of the book provides an historical account of early representations of development that, having come from life science, has shaped the way in which developmental science has...... approached development. Section two focuses upon the contemporary issues of developmental psychology, neuroscience and developmental science at large. The final section offers a series of commentaries pointing to the questions opened by the previous chapters, looking to outline the future lines...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average annual temperature, projected air temperature, and projected change in air temperature for the northern portion of Alaska. The...
Arctic Landscape Conservation Cooperative — Baseline (1961-1990) average annual total precipitation, projected total precipitation, and relative change in total precipitation for the northern portion of...
VanderHorst, Veronique G.J.M.; Holstege, Gert
1995-01-01
The nucleus retroambiguus (NRA) projects to distinct brainstem and cervical and thoracic cord motoneuronal cell groups. The present paper describes NRA projections to distinct motoneuronal cell groups in the lumbar enlargement. Lumbosacral injections of wheat germ agglutinin-horseradish peroxidase
The modulated average structure of mullite.
Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X
2015-06-01
Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedra triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3 a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real
Siegel, Irving H.
The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses the need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)
U.S. Geological Survey, Department of the Interior — Models project the Arctic Ocean will become undersaturated with respect to carbonate minerals in the next decade. Recent field results indicate parts may already be...
王俊
2014-01-01
With the continuous development of China's water conservancy and hydropower industry, a large number of enterprises provide construction assistance abroad or participate in the construction of water conservancy and hydropower projects in foreign countries. Accordingly, many construction-site design representatives (design representatives) also go abroad to work on these projects. As design engineers, they should take the characteristics of foreign water conservancy and hydropower projects into account to improve their professional skills and personal qualities, thereby meeting the requirements that foreign project construction places on design representatives. Drawing on the conditions and features of design representatives' participation in foreign water conservancy and hydropower projects, this paper describes that experience and offers suggestions, as a reference for design engineers and especially design representatives.
Young, Vershawn Ashanti
2004-01-01
"Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
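The bias-correction structure behind all-mode averaging can be illustrated with a generic toy, entirely outside lattice QCD: many cheap evaluations of a biased approximation over a symmetry orbit, plus one exact evaluation per sample to cancel the bias. The functions `exact`, `approx` and the shift-based "symmetry" below are illustrative assumptions, not the paper's observables.

```python
import numpy as np

rng = np.random.default_rng(42)

def exact(x):
    """Stand-in for an expensive, exact observable."""
    return np.sin(x)

def approx(x):
    """Cheap approximation with a small deterministic bias."""
    return np.sin(x) + 0.05 * np.cos(3.0 * x)

def ama_estimate(samples, n_cheap=32):
    """AMA-style estimator: average the cheap approximation over many
    symmetry-translated points, then add (exact - approx) at the original
    point so the estimator stays unbiased."""
    values = []
    for x in samples:
        shifts = rng.uniform(0.0, 2.0 * np.pi, n_cheap)  # stand-in symmetry orbit
        cheap_avg = np.mean(approx(x + shifts))
        values.append(cheap_avg + exact(x) - approx(x))  # bias correction
    return float(np.mean(values))

samples = rng.uniform(0.0, 2.0 * np.pi, 2000)
print(ama_estimate(samples))  # close to the true mean of sin over a period, i.e. 0
```

Because the shifts leave the sampling distribution invariant, the expectation of the corrected estimator equals that of the exact observable, while most of the work goes into the cheap evaluations.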
Average excitation potentials of air and aluminium
Bogaardt, M.; Koudijs, B.
1951-01-01
By means of a graphical method the average excitation potential I may be derived from experimental data. Average values for I_air and I_Al have been obtained. It is shown that in representing range/energy relations by means of Bethe's well known formula, I has to be taken as a continuously changing function
Negative Average Preference Utilitarianism
Roger Chao
2012-03-01
For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive amounts of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the "harmful" event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current "positive" forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).
Averaged Lemaître-Tolman-Bondi dynamics
Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried
2016-01-01
We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor which represents a derivation of the simplest phenomenological solution of Buchert's equations in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model but it does not adequately describe our "real" Universe.
Perturbation resilience and superiorization methodology of averaged mappings
He, Hongjin; Xu, Hong-Kun
2017-04-01
We first prove the bounded perturbation resilience for the successive fixed point algorithm of averaged mappings, which extends the string-averaging projection and block-iterative projection methods. We then apply the superiorization methodology to a constrained convex minimization problem where the constraint set is the intersection of fixed point sets of a finite family of averaged mappings.
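The fixed-point iteration of an averaged mapping built from projections can be sketched with a small numerical example: a Krasnoselskii-Mann-style iteration of T = (1-α)I + α(P2∘P1) for two half-spaces with nonempty intersection. The half-spaces, α, and iteration count are illustrative assumptions, and no perturbation or superiorization step is included.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Euclidean projection of x onto the half-space {y : a·y >= b}."""
    viol = b - a @ x
    if viol <= 0:
        return x
    return x + viol * a / (a @ a)

def averaged_iteration(x0, projections, alpha=0.5, iters=500):
    """Fixed-point iteration of the averaged map T = (1-alpha)I + alpha*(Pn∘...∘P1)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = x
        for P in projections:
            y = P(y)
        x = (1 - alpha) * x + alpha * y
    return x

a1, b1 = np.array([1.0, 0.0]), 1.0   # constraint x1 >= 1
a2, b2 = np.array([0.0, 1.0]), 2.0   # constraint x2 >= 2
P1 = lambda x: proj_halfspace(x, a1, b1)
P2 = lambda x: proj_halfspace(x, a2, b2)

x_star = averaged_iteration([-3.0, -5.0], [P1, P2])
print(x_star)  # converges to [1, 2] here, a point in both half-spaces
```

When the constraint sets intersect, the fixed points of the composed projections are exactly the intersection, so the averaged iteration lands in the feasible set.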
Heimbach, Fred; Russ, Anja; Schimmer, Maren; Born, Katrin
2016-11-01
Monitoring studies at the landscape level are complex, expensive and difficult to conduct. Many aspects have to be considered to avoid confounding effects which is probably the reason why they are not regularly performed in the context of risk assessments of plant protection products to pollinating insects. However, if conducted appropriately their contribution is most valuable. In this paper we identify the requirements of a large-scale monitoring study for the assessment of side-effects of clothianidin seed-treated winter oilseed rape on three species of pollinating insects (Apis mellifera, Bombus terrestris and Osmia bicornis) and present how these requirements were implemented. Two circular study sites were delineated next to each other in northeast Germany and comprised almost 65 km(2) each. At the reference site, study fields were drilled with clothianidin-free OSR seeds while at the test site the oilseed rape seeds contained a coating with 10 g clothianidin and 2 g beta-cyfluthrin per kg seeds (Elado®). The comparison of environmental conditions at the study sites indicated that they are as similar as possible in terms of climate, soil, land use, history and current practice of agriculture as well as in availability of oilseed rape and non-crop bee forage. Accordingly, local environmental conditions were considered not to have had any confounding effect on the results of the monitoring of the bee species. Furthermore, the study area was found to be representative for other oilseed rape cultivation regions in Europe.
Zvolanek, Kristina; Ma, Rongtao; Zhou, Christina; Liang, Xiaoying; Wang, Shuo; Verma, Vivek; Zhu, Xiaofeng; Zhang, Qinghui; Driewer, Joseph; Lin, Chi; Zhen, Weining; Wahl, Andrew; Zhou, Su-Min; Zheng, Dandan
2017-05-01
Inhomogeneity dose modeling and respiratory motion description are two critical technical challenges for lung stereotactic body radiotherapy, an important treatment modality for small size primary and secondary lung tumors. Recent studies revealed lung density-dependent target dose differences between Monte Carlo (Type-C) algorithm and earlier algorithms. Therefore, this study aimed to investigate the equivalence of the two most popular CT datasets for treatment planning, free breathing (FB) and average intensity projection (AIP) CTs, using Type-C algorithms, and comparing with two older generation algorithms (Type-A and Type-B). Twenty patients (twenty-one lesions) were planned using a Type-A algorithm on the FB CT. Lung was contoured separately on FB and AIP CTs and compared. Dose comparison was obtained between the two CTs using four commercial dose algorithms including one Type-A (Pencil Beam Convolution - PBC), one Type-B (Analytical Anisotropic Algorithm - AAA), and two Type-C algorithms (Voxel Monte Carlo - VMC and Acuros External Beam - AXB). For each algorithm, the dosimetric parameters of the target (PTV, Dmin , Dmax , Dmean , D95, and D90) and lung (V5, V10, V20, V30, V35, and V40) were compared between the two CTs using the Wilcoxon signed rank test. Correlation between dosimetric differences and density differences for each algorithm were studied using linear regression and Spearman correlation, in which both global and local density differences were evaluated. Although the lung density differences on FB and AIP CTs were statistically significant (P = 0.003), the magnitude was small at 1.21 ± 1.45%. Correspondingly, for the two Type-C algorithms, target and lung dosimetric differences were small in magnitude and statistically insignificant (P > 0.05) for all but one instance, similar to the findings for the older generation algorithms. Nevertheless, a significant correlation was shown between the dosimetric and density differences for Type-C and Type
U.S. Geological Survey, Department of the Interior — Projected Hazard: Maximum depth of flooding surface (in cm) in the region landward of the present day shoreline that is inundated for the storm condition and...
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
Average Convexity in Communication Situations
Slikker, M.
1998-01-01
In this paper we study inheritance properties of average convexity in communication situations. We show that the underlying graph ensures that the graphrestricted game originating from an average convex game is average convex if and only if every subgraph associated with a component of the underlyin
Sampling Based Average Classifier Fusion
Jian Hou
2014-01-01
Many fusion algorithms have been proposed in the literature, yet average fusion is almost always selected as the baseline for comparison, and little has been done to explore the potential of average fusion or to propose a better baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. We find that, by proper sampling of soft labels and classifiers, average fusion performance can be evidently improved. This result presents sampling-based average fusion as a better baseline; that is, a newly proposed classifier fusion algorithm should at least perform better than this baseline in order to demonstrate its effectiveness.
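The idea of improving average fusion by sampling can be sketched as follows; this is a minimal stand-in, not the paper's algorithm: plain average fusion of soft labels versus an average taken over random subsets of classifiers (the subset fraction and round count are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def average_fusion(soft_labels):
    """Plain average fusion: mean of the per-classifier class-probability vectors."""
    return np.mean(soft_labels, axis=0)

def sampled_average_fusion(soft_labels, n_rounds=200, frac=0.6):
    """Average fusion over random classifier subsets, then average the
    per-round predictions (a simple stand-in for the paper's sampling idea)."""
    soft_labels = np.asarray(soft_labels, dtype=float)
    m = soft_labels.shape[0]
    k = max(1, int(frac * m))
    preds = []
    for _ in range(n_rounds):
        idx = rng.choice(m, size=k, replace=False)
        preds.append(average_fusion(soft_labels[idx]))
    return np.mean(preds, axis=0)

# five classifiers' soft labels for one sample over three classes
scores = [[0.60, 0.30, 0.10],
          [0.50, 0.40, 0.10],
          [0.20, 0.70, 0.10],
          [0.55, 0.35, 0.10],
          [0.50, 0.30, 0.20]]
print(average_fusion(scores), sampled_average_fusion(scores))
```

Both variants return a probability vector over classes; the sampled version simply averages over many randomly chosen classifier subsets instead of using all classifiers once.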
Average Light Intensity Inside a Photobioreactor
Herby Jean
2011-01-01
For energy production, microalgae are one of the few alternatives with high potential. Similar to plants, algae require energy acquired from light sources to grow. This project uses calculus to determine the light intensity inside a photobioreactor filled with algae. Under preset conditions, along with estimated values, we applied the Lambert-Beer law to formulate an equation to calculate how much light intensity escapes the photobioreactor and to determine the average light intensity present inside the reactor.
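The depth-averaged intensity of a Lambert-Beer profile has a simple closed form: averaging I(x) = I0·exp(-k·x) over a light path of length L gives I0·(1 - exp(-kL))/(kL). The sketch below (the numeric values for I0, k and L are arbitrary assumptions) checks the closed form against a midpoint Riemann sum.

```python
import math

def avg_intensity(I0, k, L, n=100_000):
    """Depth-average of I(x) = I0*exp(-k*x) over [0, L]:
    closed form I0*(1 - exp(-k*L))/(k*L), checked against a midpoint sum."""
    closed = I0 * (1.0 - math.exp(-k * L)) / (k * L)
    dx = L / n
    riemann = sum(I0 * math.exp(-k * (i + 0.5) * dx) for i in range(n)) * dx / L
    return closed, riemann

# hypothetical values: incident intensity, attenuation coefficient, reactor depth
closed, numeric = avg_intensity(I0=2000.0, k=12.0, L=0.1)
print(closed, numeric)
```

The average always lies between the exit intensity I0·exp(-kL) and the incident intensity I0, as expected for a monotone decay profile.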
Physical Theories with Average Symmetry
Alamino, Roberto C
2013-01-01
This Letter probes the existence of physical laws invariant only in average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise and average symmetry is introduced by considering functions which are invariant only in average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's Theorem with dissipative currents. The relation of this with possible violations of physical symmetries, as for instance Lorentz invariance in some quantum gravity theories, is briefly commented.
Quantized average consensus with delay
Jafarian, Matin; De Persis, Claudio
2012-01-01
The average consensus problem is a special case of cooperative control in which the agents of the network asymptotically converge to the average state (i.e., position) of the network by transferring information via a communication topology. One of the issues in large-scale networks is the cost of co
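The basic (unquantized, delay-free) average-consensus iteration described above can be sketched in a few lines; the graph, step size, and iteration count are illustrative assumptions, and the quantization and delay studied in the paper are omitted.

```python
import numpy as np

def average_consensus(x0, neighbors, eps=0.2, steps=200):
    """Linear average-consensus iteration x_i <- x_i + eps * sum_j (x_j - x_i)
    over an undirected graph; the sum of states (hence the average) is preserved."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + eps * np.array([sum(x[j] - x[i] for j in neighbors[i])
                                for i in range(len(x))])
    return x

# path graph on 4 agents: 0 - 1 - 2 - 3
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = average_consensus([4.0, 0.0, 2.0, 6.0], nbrs)
print(x)  # all entries near the initial average, 3.0
```

Because each pairwise exchange is antisymmetric, every step conserves the state sum exactly, and for a small enough step size the states contract toward that common average.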
Gaussian moving averages and semimartingales
Basse-O'Connor, Andreas
2008-01-01
In the present paper we study moving averages (also known as stochastic convolutions) driven by a Wiener process and with a deterministic kernel. Necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration. Our results...... are constructive - meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient...
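The moving averages studied above have a standard integral form; as a sketch (the notation is assumed, not taken from the paper):

```latex
X_t \;=\; \int_{-\infty}^{t} \varphi(t-s)\, \mathrm{d}W_s , \qquad t \in \mathbb{R},
```

where $W$ is a (two-sided) Wiener process and $\varphi$ a deterministic kernel; the question addressed is for which kernels $\varphi$ the process $(X_t)$ is a semimartingale in its natural filtration.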
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners-a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1]-with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4]-e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.
Dovekie - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Razorbill - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Mining Representative Subset Based on Fuzzy Clustering
ZHOU Hongfang; FENG Boqin; L(U) Lintao
2007-01-01
Two new concepts, fuzzy mutuality and average fuzzy entropy, are presented. Based on these concepts, a new algorithm, RSMA (representative subset mining algorithm), is proposed, which can abstract a representative subset from massive data. To accelerate the production of the representative subset, an improved algorithm, ARSMA (accelerated representative subset mining algorithm), is advanced, which combines forward and backward selection strategies, improving the performance of the algorithm. Finally, we run experiments on real datasets and evaluate the representative subset. The experiments show that the ARSMA algorithm outperforms the RandomPick algorithm in both effectiveness and efficiency.
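The general idea of mining a representative subset can be sketched with a much simpler stand-in than RSMA/ARSMA: pick, for each cluster of the data, the data point nearest the cluster centroid. The sketch uses hard k-means (the paper's method uses fuzzy memberships), and all names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def representative_subset(data, k, iters=50):
    """Pick k representatives as the data points nearest to k-means centroids
    (hard k-means stand-in for a fuzzy-clustering-based method)."""
    data = np.asarray(data, dtype=float)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            pts = data[labels == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    # index of the data point closest to each final centroid
    reps = [int(np.argmin(((data - c) ** 2).sum(-1))) for c in centers]
    return sorted(set(reps))

# two well-separated blobs: expect one representative per blob
blob1 = rng.normal(0.0, 0.2, size=(50, 2))
blob2 = rng.normal(5.0, 0.2, size=(50, 2))
data = np.vstack([blob1, blob2])
print(representative_subset(data, k=2))
```

Returning actual data points (rather than centroids) is what makes the result a representative *subset* of the original data.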
李祚泳; 张正健; 余春雪
2012-01-01
Traditional projection pursuit regression (PPR), represented in matrix form and applied to multi-index water quality evaluation, suffers from low learning efficiency when optimizing the parameter matrix elements and from degraded optimization results. In this work, suitable reference values and transformation forms were set for each index, so that the differences between same-grade standard values of different indexes are weakened after the normalizing transformation; the normalized values of different indexes then become equivalent to a single normalized index. It is therefore only necessary to construct NV-PPR(2) and NV-PPR(3) models, suited to 2 and 3 indexes respectively, for the normalized index values, with the parameter matrix elements optimized iteratively by a monkey-king genetic algorithm. A multi-index NV-PPR model can then be represented as a combination of several NV-PPR(2) and/or NV-PPR(3) models. The practicality of the models was verified empirically. The results show that the projection pursuit regression model for water quality evaluation based on the normalized index transformation is simple in form, convenient to compute, and universally applicable.
A, Ramachandran; Praveen, Dhanya; R, Jaganathan; D, RajaLakshmi; K, Palanivelu
2017-01-01
India's dependence on a climate sensitive sector like agriculture makes it highly vulnerable to its impacts. However, agriculture is highly heterogeneous across the country owing to regional disparities in exposure, sensitivity, and adaptive capacity. It is essential to know and quantify the possible impacts of changes in climate on crop yield for successful agricultural management and planning at a local scale. The Hadley Centre Global Environment Model version 2-Earth System (HadGEM2-ES) was employed to generate regional climate projections for the study area using the Regional Climate Model (RCM) RegCM4.4. The dynamics in potential impacts at the sub-district level were evaluated using Representative Concentration Pathway 4.5 (RCP 4.5). The aim of this study was to simulate the crop yield under a plausible change in climate for the coastal areas of South India through the end of this century. The crop simulation model, the Decision Support System for Agrotechnology Transfer (DSSAT) 4.5, was used to understand the plausible impacts on the major crop yields of rice, groundnuts, and sugarcane under the RCP 4.5 trajectory. The findings reveal that under the RCP 4.5 scenario there will be decreases in the major C3 and C4 crop yields in the study area. This would affect not only the local food security, but the livelihood security as well. This necessitates timely planning to achieve sustainable crop productivity and livelihood security. On the other hand, this situation warrants appropriate adaptations and policy intervention at the sub-district level for achieving sustainable crop productivity in the future.
List of Accredited Representatives
Department of Veterans Affairs — VA accreditation is for the sole purpose of providing representation services to claimants before VA and does not imply that a representative is qualified to provide...
Harrison, J M
1994-01-01
The anthropocentric approach to the study of animal behavior uses representative nonhuman animals to understand human behavior. This approach raises problems concerning the comparison of the behavior of two different species. The datum of behavior analysis is the behavior of humans and representative animal phenotypes. The behavioral phenotype is the product of the ontogeny and phylogeny of each species, and this requires that contributions of genotype as well as behavioral history to experim...
[Advance directives. Representatives' opinions].
Busquets I Font, J M; Hernando Robles, P; Font I Canals, R; Diestre Ortin, G; Quintana, S
The use and usefulness of Advance Directives has led to a lot of controversy about their validity and effectiveness. Those areas are unexplored in our country from the perspective of representatives. To determine the opinion of the representatives appointed in a registered Statement of Advance Directives (SAD) on the use of this document. Telephone survey of the representatives of 146 deceased people who, since February 2012, had registered a SAD document. More than two-thirds (98) of respondents recalled that the SAD was consulted, with 86 (58.9%) saying that their opinion as representative was consulted, and 120 (82.1%) believing that the patient's will was respected. Of those interviewed, 102 (69.9%) believe that patients who had previously planned their care using a SAD had a good death, with 33 (22.4%) saying it could have been better, and 10 (6.9%) believing they suffered greatly. The SADs were mostly respected and consulted, and possibly this is related to the fact that most of the representatives declare that the death of those they represented was perceived as comfortable. It would be desirable to conduct further studies aimed at health personnel in order to know their perceptions regarding the use of Advance Directives in the process of dying.
Averaged Electroencephalic Audiometry in Infants
Lentz, William E.; McCandless, Geary A.
1971-01-01
Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)
Ergodic averages via dominating processes
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary ...
Representing properties locally.
Solomon, K O; Barsalou, L W
2001-09-01
Theories of knowledge such as feature lists, semantic networks, and localist neural nets typically use a single global symbol to represent a property that occurs in multiple concepts. Thus, a global symbol represents mane across HORSE, PONY, and LION. Alternatively, perceptual theories of knowledge, as well as distributed representational systems, assume that properties take different local forms in different concepts. Thus, different local forms of mane exist for HORSE, PONY, and LION, each capturing the specific form that mane takes in its respective concept. Three experiments used the property verification task to assess whether properties are represented globally or locally (e.g., Does a PONY have mane?). If a single global form represents a property, then verifying it in any concept should increase its accessibility and speed its verification later in any other concept. Verifying mane for PONY should benefit as much from having verified mane for LION earlier as from verifying mane for HORSE. If properties are represented locally, however, verifying a property should only benefit from verifying a similar form earlier. Verifying mane for PONY should only benefit from verifying mane for HORSE, not from verifying mane for LION. Findings from three experiments strongly supported local property representation and ruled out the interpretation that object similarity was responsible (e.g., the greater overall similarity between HORSE and PONY than between LION and PONY). The findings further suggest that property representation and verification are complicated phenomena, grounded in sensory-motor simulations.
2011-03-10
... of Energy Efficiency and Renewable Energy Energy Conservation Program for Consumer Products... pursuant to the Energy Policy and Conservation Act. The five sources are electricity, natural gas, No. 2... of the Energy Policy and Conservation Act (Act) requires that DOE prescribe test procedures for...
Representing and Performing Businesses
Boll, Karen
2014-01-01
This article investigates a segmentation model used by the Danish Tax and Customs Administration to classify businesses’ motivational postures. The article uses two different conceptualisations of performativity to analyse what the model’s segmentations do: Hacking’s notion of making up people and MacKenzie’s idea of performativity. Based on these two approaches, the article demonstrates that the segmentation model represents and performs the businesses as it makes up certain new ways to be a business and as the businesses can be seen as moving targets. Inspired by MacKenzie, the argument is that the segmentation model embodies cleverness in that it simultaneously alters what it represents and then represents this altered reality to confirm the accuracy of its own model of the businesses’ postures. Despite the cleverness of the model, it also has a blind spot. The model assumes a world wherein everything...
High average power supercontinuum sources
J C Travers
2010-11-01
The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems with over 10 kW peak pump power. These systems can produce broadband supercontinua with over 50 mW/nm and over 1 mW/nm average spectral power, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.
Dependability in Aggregation by Averaging
Jesus, Paulo; Almeida, Paulo Sérgio
2010-01-01
Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of existing aggregation algorithms exhibit relevant dependability issues when their use in real application environments is considered. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent of the routing topology used and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised when the assumptions about their working environment are changed to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a funda...
Measuring Complexity through Average Symmetry
Alamino, Roberto C.
2015-01-01
This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle - measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse graining paradigm and can be straightforwardly generalised, including to continuous cases an...
Mirror averaging with sparsity priors
Dalalyan, Arnak
2010-01-01
We consider the problem of aggregating the elements of a (possibly infinite) dictionary for building a decision procedure, that aims at minimizing a given criterion. Along with the dictionary, an independent identically distributed training sample is available, on which the performance of a given procedure can be tested. In a fairly general set-up, we establish an oracle inequality for the Mirror Averaging aggregate based on any prior distribution. This oracle inequality is applied in the context of sparse coding for different problems of statistics and machine learning such as regression, density estimation and binary classification.
Geomagnetic effects on the average surface temperature
Ballatore, P.
Several results have previously shown that solar activity can be related to cloudiness and surface solar radiation intensity (Svensmark and Friis-Christensen, J. Atmos. Sol. Terr. Phys., 59, 1225, 1997; Veretenenko and Pudovkin, J. Atmos. Sol. Terr. Phys., 61, 521, 1999). Here, the possible relationships between the averaged surface temperature and solar wind parameters or geomagnetic activity indices are investigated. The temperature data used are the monthly SST maps (generated at RAL and available from the related ESRIN/ESA database), which represent the averaged surface temperature with a spatial resolution of 0.5°x0.5° and cover the entire globe. The interplanetary data and the geomagnetic data are from the USA National Space Science Data Center. The time interval considered is 1995-2000. Specifically, possible associations and/or correlations of the average temperature with the interplanetary magnetic field Bz component and with the Kp index are considered and differentiated taking into account separate geographic and geomagnetic planetary regions.
Unscrambling The "Average User" Of Habbo Hotel
Mikael Johnson
2007-01-01
The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding of categorization practices in design through a case study of the virtual community Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer in disregarding marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.
Spatial averaging infiltration model for layered soil
HU HePing; YANG ZhiYong; TIAN FuQiang
2009-01-01
To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.
Pritychenko, B.
2010-07-19
The present contribution represents a significant improvement of our previous calculation of Maxwellian-averaged cross sections and astrophysical reaction rates. The addition of newly-evaluated neutron reaction libraries, such as ROSFOND and the Low-Fidelity Covariance Project, and improvements in data processing techniques allowed us to extend it to the entire range of s-process nuclei, calculate Maxwellian-averaged cross section uncertainties for the first time, and provide additional insights on all currently available neutron-induced reaction data. Nuclear reaction calculations using ENDF libraries and current Java technologies will be discussed and new results will be presented.
Dowell, David H. (SLAC); Power, John G. (Argonne)
2012-09-05
There has been significant progress in the development of high-power facilities in recent years yet major challenges remain. The task of WG4 was to identify which facilities were capable of addressing the outstanding R&D issues presently preventing high-power operation. To this end, information from each of the facilities represented at the workshop was tabulated and the results are presented herein. A brief description of the major challenges is given, but the detailed elaboration can be found in the other three working group summaries.
Representing distance, consuming distance
Larsen, Gunvor Riber
... are present in theoretical and empirical elaborations on mobility, but these remain largely implicit and unchallenged (Bauman 1998). This talk will endeavour to unmask distance as a theoretical entity by exploring ways in which distance can be understood and by discussing distance through its representations ... to mobility and its social context. Such an understanding can be approached through representations, as distance is being represented in various ways, most noticeably in maps and through the notions of space and Otherness. The question this talk subsequently asks is whether these representations of distance ... are being consumed in the contemporary society, in the same way as places, media, cultures and status are being consumed (Urry 1995, Featherstone 2007). An exploration of distance and its representations through contemporary consumption theory could expose what role distance plays in forming...
Representativity of TMA studies.
Sauter, Guido
2010-01-01
The smaller the portion of a tumor sample that is analyzed becomes, the higher is the risk of missing important histological or molecular features that might be present only in a subset of tumor cells. Many researchers have, therefore, suggested using larger tissue cores or multiple cores from the same donor tissue to enhance the representativity of TMA studies. However, numerous studies comparing the results of TMA studies with the findings from conventional large sections have shown that all well-established associations between molecular markers and tumor phenotype or patient prognosis can be reproduced with TMAs even if only one single 0.6 mm tissue spot is analyzed. Moreover, the TMA technology has proven to be superior to large section analysis in finding new clinically relevant associations. The high number of samples that are typically included in TMA studies, and the unprecedented degree of standardization during TMA experiments and analysis often give TMA studies an edge over traditional large-section studies.
Separability criteria with angular and Hilbert space averages
Fujikawa, Kazuo; Oh, C. H.; Umetsu, Koichiro; Yu, Sixia
2016-05-01
The practically useful criteria of separable states ρ = ∑_k w_k ρ_k in d = 2 × 2 are discussed. The equality G(a, b) = 4[⟨P(a)P(b)⟩ − ⟨P(a)⟩⟨P(b)⟩] = 0 for any two projection operators P(a) and P(b) provides a necessary and sufficient separability criterion in the case of a separable pure state ρ = |ψ⟩⟨ψ|. When the criterion is applied to the Werner state in two-photon systems, it is shown that the Hilbert space average can judge its inseparability but not the geometrical angular average.
A procedure to average 3D anatomical structures.
Subramanya, K; Dean, D
2000-12-01
Creating a feature-preserving average of three dimensional anatomical surfaces extracted from volume image data is a complex task. Unlike individual images, averages present right-left symmetry and smooth surfaces which give insight into typical proportions. Averaging multiple biological surface images requires careful superimposition and sampling of homologous regions. Our approach to biological surface image averaging grows out of a wireframe surface tessellation approach by Cutting et al. (1993). The surface delineating wires represent high curvature crestlines. By adding tile boundaries in flatter areas the 3D image surface is parametrized into anatomically labeled (homology mapped) grids. We extend the Cutting et al. wireframe approach by encoding the entire surface as a series of B-spline space curves. The crestline averaging algorithm developed by Cutting et al. may then be used for the entire surface. Shape preserving averaging of multiple surfaces requires careful positioning of homologous surface regions such as these B-spline space curves. We test the precision of this new procedure and its ability to appropriately position groups of surfaces in order to produce a shape-preserving average. Our result provides an average that well represents the source images and may be useful clinically as a deformable model or for animation.
Do Diurnal Aerosol Changes Affect Daily Average Radiative Forcing?
Kassianov, Evgueni I.; Barnard, James C.; Pekour, Mikhail S.; Berg, Larry K.; Michalsky, Joseph J.; Lantz, K.; Hodges, G. B.
2013-06-17
Strong diurnal variability of aerosol has been observed frequently for many urban/industrial regions. How this variability may alter the direct aerosol radiative forcing (DARF), however, is largely unknown. To quantify changes in the time-averaged DARF, we perform an assessment of 29 days of high temporal resolution ground-based data collected during the Two-Column Aerosol Project (TCAP) on Cape Cod, which is downwind of metropolitan areas. We demonstrate that strong diurnal changes of aerosol loading (about 20% on average) have a negligible impact on the 24-h average DARF, when daily averaged optical properties are used to find this quantity. However, when there is a sparse temporal sampling of aerosol properties, which may preclude the calculation of daily averaged optical properties, large errors (up to 100%) in the computed DARF may occur. We describe a simple way of reducing these errors, which suggests the minimal temporal sampling needed to accurately find the forcing.
Unbiased Cultural Transmission in Time-Averaged Archaeological Assemblages
Madsen, Mark E
2012-01-01
Unbiased models are foundational in the archaeological study of cultural transmission. Applications have assumed that archaeological data represent synchronic samples, despite the accretional nature of the archaeological record. I document the circumstances under which time-averaging alters the distribution of model predictions. Richness is inflated in long-duration assemblages, and evenness is "flattened" compared to unaveraged samples. Tests of neutrality, employed to differentiate biased and unbiased models, suffer serious problems with Type I error under time-averaging. Finally, the time-scale over which time-averaging alters predictions is determined by the mean trait lifetime, providing a way to evaluate the impact of these effects upon archaeological samples.
Characteristics of phase-averaged equations for modulated wave groups
Klopman, G.; Petit, H.A.H.; Battjes, J.A.
2000-01-01
The project concerns the influence of long waves on coastal morphology. The modelling of the combined motion of the long waves and short waves in the horizontal plane is done by phase-averaging over the short wave motion and using intra-wave modelling for the long waves, see e.g. Roelvink (1993).
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS....12 On average. On average means a rolling average of production or imports during the last two...
Using Group Theory to Obtain Eigenvalues of Nonsymmetric Systems by Symmetry Averaging
Marion L. Ellzey
2009-08-01
If the Hamiltonian in the time independent Schrödinger equation, HΨ = EΨ, is invariant under a group of symmetry transformations, the theory of group representations can help obtain the eigenvalues and eigenvectors of H. A finite group that is not a symmetry group of H is nevertheless a symmetry group of an operator Hsym projected from H by the process of symmetry averaging. In this case H = Hsym + HR, where HR is the nonsymmetric remainder. Depending on the nature of the remainder, the solutions for the full operator may be obtained by perturbation theory. It is shown here that when H is represented as a matrix [H] over a basis symmetry adapted to the group, the reduced matrix elements of [Hsym] are simple averages of certain elements of [H], providing a substantial enhancement in computational efficiency. A series of examples is given for the smallest molecular graphs. The first is a two-vertex graph corresponding to a heteronuclear diatomic molecule. The symmetrized component then corresponds to a homonuclear system. A three-vertex system is symmetry averaged in the first case to Cs and in the second case to the nonabelian C3v. These examples illustrate key aspects of the symmetry-averaging process.
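The two-vertex (heteronuclear diatomic) example can be sketched numerically. The matrix below is an illustrative placeholder, not taken from the paper; the averaging step Hsym = (1/|G|) Σ_g R(g) H R(g)⁻¹ over the site-swap group {E, P} is the general idea the abstract describes:

```python
import numpy as np

# Hypothetical 2-site Hamiltonian for a heteronuclear diatomic:
# different diagonal (site) energies, one off-diagonal coupling.
H = np.array([[-1.0, 0.5],
              [ 0.5, -2.0]])

# Swapping the two sites generates a two-element symmetry group {E, P}.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Symmetry averaging: Hsym = (1/|G|) * sum_g R(g) H R(g)^{-1}.
Hsym = 0.5 * (H + P @ H @ P)
HR = H - Hsym  # nonsymmetric remainder, H = Hsym + HR

# Hsym is the homonuclear analogue: its diagonal entries are the
# average of the original site energies, (-1.0 + -2.0)/2 = -1.5.
print(Hsym)
```

Since `Hsym` commutes with `P`, its eigenvectors are the symmetric and antisymmetric combinations of the two sites, and `HR` can then be treated as a perturbation.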
Simple Moving Average: A Method of Reporting Evolving Complication Rates.
Harmsen, Samuel M; Chang, Yu-Hui H; Hattrup, Steven J
2016-09-01
Surgeons often cite published complication rates when discussing surgery with patients. However, these rates may not truly represent current results or an individual surgeon's experience with a given procedure. This study proposes a novel method to more accurately report current complication trends that may better represent the patient's potential experience: the simple moving average. Reverse shoulder arthroplasty (RSA) is an increasingly popular and rapidly evolving procedure with highly variable reported complication rates. The authors used an RSA model to test and evaluate the usefulness of the simple moving average. This study reviewed 297 consecutive RSA procedures performed by a single surgeon and noted complications in 50 patients (16.8%). The simple moving average for total complications as well as minor, major, acute, and chronic complications was then calculated using various lag intervals. These findings showed trends toward fewer total, major, and chronic complications over time, and these trends were represented best with a lag of 75 patients. Average follow-up within this lag was 26.2 months. Rates for total complications decreased from 17.3% to 8% at the most recent simple moving average. The authors' traditional complication rate with RSA (16.8%) is consistent with reported rates. However, the use of the simple moving average shows that this complication rate decreased over time, with current trends (8%) markedly lower, giving the senior author a more accurate picture of his evolving complication trends with RSA. Compared with traditional methods, the simple moving average can be used to better reflect current trends in complication rates associated with a surgical procedure and may better represent the patient's potential experience. [Orthopedics. 2016; 39(5):e869-e876.]
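The method reduces to a rolling mean over a binary complication indicator. A minimal sketch, using made-up outcome data (not the study's 297 cases) and a smaller lag than the study's 75 for brevity:

```python
# Hypothetical outcomes for consecutive procedures:
# 1 = complication, 0 = none. Illustrative data only.
outcomes = [1] * 15 + [0] * 35 + [1] * 5 + [0] * 45  # 100 cases, rate falls over time

lag = 25  # window of most recent cases (the cited study settled on 75)

def simple_moving_average(xs, lag):
    """Rolling complication rate over the most recent `lag` cases."""
    return [sum(xs[i - lag:i]) / lag for i in range(lag, len(xs) + 1)]

sma = simple_moving_average(outcomes, lag)

overall = sum(outcomes) / len(outcomes)
print(f"overall rate: {overall:.2f}")  # 0.20
print(f"latest SMA:   {sma[-1]:.2f}")  # 0.00 over the final 25 cases
```

The overall rate (20%) obscures the fact that no complications occurred in the most recent window, which is exactly the gap between a traditional cumulative rate and the moving-average view.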
Level sets of multiple ergodic averages
Ai-Hua, Fan; Ma, Ji-Hua
2011-01-01
We propose to study multiple ergodic averages from a multifractal analysis point of view. In some special cases in symbolic dynamics, the Hausdorff dimensions of the level sets of the multiple ergodic average limit are determined by using Riesz products.
Accurate Switched-Voltage voltage averaging circuit
金光, 一幸; 松本, 寛樹
2006-01-01
This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for the NMOS mismatch error in the MOS differential type voltage averaging circuit. The proposed circuit consists of a voltage averaging and an SV sample/hold (S/H) circuit. It can operate using nonoverlapping three-phase clocks. The performance of this circuit is verified by PSpice simulations.
Spectral averaging techniques for Jacobi matrices
del Rio, Rafael; Schulz-Baldes, Hermann
2008-01-01
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
Average-Time Games on Timed Automata
Jurdzinski, Marcin; Trivedi, Ashutosh
2009-01-01
An average-time game is played on the infinite graph of configurations of a finite timed automaton. The two players, Min and Max, construct an infinite run of the automaton by taking turns to perform a timed transition. Player Min wants to minimise the average time per transition and player Max wants to maximise it. A solution of average-time games is presented using a reduction to average-price games on a finite graph. A direct consequence is an elementary proof of determinacy for average-time games.
WIDTHS AND AVERAGE WIDTHS OF SOBOLEV CLASSES
刘永平; 许贵桥
2003-01-01
This paper concerns the problem of the Kolmogorov n-width, the linear n-width, the Gel'fand n-width and the Bernstein n-width of Sobolev classes of periodic multivariate functions in the space Lp(Td), and the average Bernstein σ-width, average Kolmogorov σ-widths, and average linear σ-widths of Sobolev classes of the multivariate functions.
Stochastic averaging of quasi-Hamiltonian systems
朱位秋
1996-01-01
A stochastic averaging method is proposed for quasi-Hamiltonian systems (Hamiltonian systems with light dampings subject to weakly stochastic excitations). Various versions of the method, depending on whether the associated Hamiltonian systems are integrable or nonintegrable, resonant or nonresonant, are discussed. It is pointed out that the standard stochastic averaging method and the stochastic averaging method of energy envelope are special cases of the stochastic averaging method of quasi-Hamiltonian systems and that the results obtained by this method for several examples prove its effectiveness.
NOAA Average Annual Salinity (3-Zone)
California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
Do regional climate models represent regional climate?
Maraun, Douglas; Widmann, Martin
2014-05-01
When using climate change scenarios - either from global climate models or further downscaled - to assess localised real world impacts, one has to ensure that the local simulation indeed correctly represents the real world local climate. Representativeness has so far mainly been discussed as a scale issue: simulated meteorological variables in general represent grid box averages, whereas real weather is often expressed by means of point values. As a result, in particular simulated extreme values are not directly comparable with observed local extreme values. Here we argue that the issue of representativeness is more general. To illustrate this point, assume the following situations: first, the (GCM or RCM) simulated large scale weather, e.g., the mid-latitude storm track, might be systematically distorted compared to observed weather. If such a distortion at the synoptic scale is strong, the simulated local climate might be completely different from the observed. Second, the orography of even high resolution RCMs is only a coarse model of the true orography. In particular in mountain ranges the simulated mesoscale flow might therefore considerably deviate from the observed flow, leading to systematically displaced local weather. In both cases, the simulated local climate does not represent observed local climate. Thus, representativeness also encompasses representing a particular location. We propose to measure this aspect of representativeness for RCMs driven with perfect boundary conditions as the correlation between observations and simulations at the inter-annual scale. In doing so, random variability generated by the RCMs is largely averaged out. As an example, we assess how well KNMI's RACMO2 RCM at 25 km horizontal resolution represents winter precipitation in the gridded E-OBS data set over the European domain. At a chosen grid box, RCM precipitation might not be representative of observed precipitation, in particular in the rain shadow of major mountain ranges.
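The proposed representativeness measure is just the correlation of inter-annual series at a grid box. A minimal sketch with synthetic data (the gamma parameters and noise levels below are invented for illustration, not derived from RACMO2 or E-OBS):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical winter-mean precipitation at one grid box, 30 years (mm).
obs = rng.gamma(shape=4.0, scale=25.0, size=30)

# A representative simulation tracks the observed inter-annual
# variations (up to noise); a displaced one varies independently.
sim_good = 0.8 * obs + rng.normal(0.0, 10.0, size=30)
sim_bad = rng.gamma(shape=4.0, scale=25.0, size=30)

def representativeness(obs, sim):
    """Inter-annual correlation between observed and simulated series,
    as proposed for RCMs driven with perfect boundary conditions."""
    return np.corrcoef(obs, sim)[0, 1]

print(representativeness(obs, sim_good))  # high, close to 1
print(representativeness(obs, sim_bad))   # close to 0
```

Working with yearly (winter-mean) values rather than daily weather is what averages out the RCM's internally generated variability, so the correlation isolates whether the simulation responds to the right large-scale forcing at that location.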
Average Transmission Probability of a Random Stack
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
Average sampling theorems for shift invariant subspaces
无
2000-01-01
The sampling theorem is one of the most powerful results in signal analysis. In this paper, we study the average sampling on shift invariant subspaces, e.g. wavelet subspaces. We show that if a subspace satisfies certain conditions, then every function in the subspace is uniquely determined and can be reconstructed by its local averages near certain sampling points. Examples are given.
Testing linearity against nonlinear moving average models
de Gooijer, J.G.; Brännäs, K.; Teräsvirta, T.
1998-01-01
Lagrange multiplier (LM) test statistics are derived for testing a linear moving average model against an additive smooth transition moving average model. The latter model is introduced in the paper. The small sample performance of the proposed tests are evaluated in a Monte Carlo study and compared
Averaging Einstein's equations : The linearized case
Stoeger, William R.; Helmi, Amina; Torres, Diego F.
2007-01-01
We introduce a simple and straightforward averaging procedure, which is a generalization of one which is commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW situations.
Yucca Mountain Climate Technical Support Representative
Sharpe, Saxon E
2007-10-23
The primary objective of Project Activity ORD-FY04-012, “Yucca Mountain Climate Technical Support Representative,” was to provide the Office of Civilian Radioactive Waste Management (OCRWM) with expertise on past, present, and future climate scenarios and to support the technical elements of the Yucca Mountain Project (YMP) climate program. The Climate Technical Support Representative was to explain, defend, and interpret the YMP climate program to the various audiences during Site Recommendation and License Application. This technical support representative was to support DOE management in the preparation and review of documents, and to participate in comment response for the Final Environmental Impact Statement, the Site Recommendation Hearings, the NRC Sufficiency Comments, and other forums as designated by DOE management. Because the activity was terminated 12 months early and experienced a 27% reduction in budget, it was not possible to complete all components of the tasks as originally envisioned. Activities not completed include the qualification of climate datasets and the production of a qualified technical report. The following final report is an unqualified summary of the activities that were completed given the reduced time and funding.
New results on averaging theory and applications
Cândido, Murilo R.; Llibre, Jaume
2016-08-01
The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function at it is zero, the classical averaging theory provides no information about the periodic solution associated with that zero. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros for studying their associated periodic solutions. Additionally, we present two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
Analogue Divider by Averaging a Triangular Wave
Selvam, Krishnagiri Chinnathambi
2017-08-01
A new analogue divider circuit, based on averaging a triangular wave with operational amplifiers, is explained in this paper. The reference triangular waveform is shifted from the zero-voltage level up towards the positive supply voltage; its positive portion is extracted by a positive rectifier and its average value is obtained by a low-pass filter. The same triangular waveform is shifted from the zero-voltage level down towards the negative supply voltage; its negative portion is extracted by a negative rectifier and its average value is obtained by another low-pass filter. Both averaged voltages are combined in a summing amplifier, and the summed voltage is applied to the negative input of an op-amp configured in a closed negative-feedback loop. The output of this op-amp is the divider output.
Representing Boolean Functions by Decision Trees
Chikalov, Igor
2011-01-01
A Boolean or discrete function can be represented by a decision tree. A compact form of decision tree, named binary decision diagram or branching program, is widely known in logic design [2, 40]. This representation is equivalent to other forms, and in some cases it is more compact than a table of values or even a formula [44]. Representing a function in the form of a decision tree allows graph algorithms to be applied for various transformations [10]. Decision trees and branching programs are used for effective hardware [15] and software [5] implementation of functions. For the implementation to be effective, the function representation should have minimal time and space complexity. The average depth of a decision tree characterizes the expected computing time, and the number of nodes in a branching program characterizes the number of functional elements required for implementation. Often these two criteria are incompatible, i.e. there is no solution that is optimal in both time and space complexity. © Springer-Verlag Berlin Heidelberg 2011.
Average-passage flow model development
Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark
1989-01-01
A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average-passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel in a multistage configuration. To date, a number of inviscid simulations have been executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average-passage model were incorporated into the inviscid computer code along with an algebraic turbulence model, and a simulation of a stage-and-one-half, low-speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.
FREQUENTIST MODEL AVERAGING ESTIMATION: A REVIEW
Haiying WANG; Xinyu ZHANG; Guohua ZOU
2009-01-01
In applications, the traditional estimation procedure generally begins with model selection. Once a specific model is selected, subsequent estimation is conducted under the selected model without accounting for the uncertainty introduced by the selection process. This often leads to underreporting of variability and overly optimistic confidence sets. Model averaging estimation is an alternative that incorporates model uncertainty into the estimation process. In recent years, there has been rising interest in model averaging from the frequentist perspective, and important progress has been made. In this paper, the theory and methods of frequentist model averaging estimation are surveyed. Some future research topics are also discussed.
Averaging of Backscatter Intensities in Compounds
Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.
2002-01-01
Low-uncertainty measurements on pure-element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative to mass-fraction averaging, based on the number of electrons or protons and termed the "electron fraction," which predicts backscatter yield better than mass-fraction averaging. PMID:27446752
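The electron-fraction idea can be sketched directly: each element's share of the compound's electrons is its mass fraction times Z/A, renormalised. In the sketch below the Z and A values are standard periodic-table data, but the per-element backscatter coefficients `eta` are hypothetical placeholders, not measured values from the paper.

```python
def electron_fractions(comp):
    """comp: {element: (mass_fraction, Z, A)} -> {element: electron fraction}.

    The electron fraction of element i is (c_i * Z_i / A_i) normalised
    over all elements, i.e. the share of electrons that element contributes.
    """
    raw = {el: c * Z / A for el, (c, Z, A) in comp.items()}
    total = sum(raw.values())
    return {el: x / total for el, x in raw.items()}

def average_eta(comp, eta, fractions):
    """Average backscatter yield weighted by electron fraction."""
    return sum(fractions[el] * eta[el] for el in comp)

# PbS (galena): mass fractions from molar masses, Z and A from the
# periodic table.  The eta values below are hypothetical placeholders.
comp = {"Pb": (0.866, 82, 207.2), "S": (0.134, 16, 32.07)}
eta = {"Pb": 0.50, "S": 0.15}

ef = electron_fractions(comp)
mass_avg = sum(c * eta[el] for el, (c, _, _) in comp.items())
elec_avg = average_eta(comp, eta, ef)
```

Because heavy elements carry fewer electrons per unit mass (Z/A < 0.5), the electron-fraction weight of Pb comes out slightly below its mass fraction, which is the direction of correction the paper argues for.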
Experimental Demonstration of Squeezed State Quantum Averaging
Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
The Average Lower Connectivity of Graphs
Ersin Aslan
2014-01-01
For a vertex v of a graph G, the lower connectivity, denoted by sv(G), is the smallest number of vertices in a set that contains v and whose deletion from G produces a disconnected or a trivial graph. The average lower connectivity, denoted by κav(G), is the value (∑v∈V(G) sv(G))/|V(G)|. It is shown that this parameter can be used to measure the vulnerability of networks. This paper contains results on bounds for the average lower connectivity and obtains the average lower connectivity of some graphs.
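The definition is directly computable for small graphs. The brute-force sketch below (my own illustration, not the paper's method) searches, for each vertex v, the smallest set containing v whose removal disconnects or trivialises the graph, then averages over all vertices.

```python
from itertools import combinations

def _connected(adj, verts):
    """Connectivity check restricted to the vertex set `verts` (DFS)."""
    verts = set(verts)
    if not verts:
        return True
    seen, stack = set(), [next(iter(verts))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(w for w in adj[u] if w in verts)
    return seen == verts

def lower_connectivity(adj, v):
    """Smallest |S| with v in S such that G - S is disconnected or trivial."""
    others = [u for u in adj if u != v]
    for k in range(1, len(adj) + 1):
        for extra in combinations(others, k - 1):
            rest = set(adj) - set(extra) - {v}
            if len(rest) <= 1 or not _connected(adj, rest):
                return k
    return len(adj)

def average_lower_connectivity(adj):
    return sum(lower_connectivity(adj, v) for v in adj) / len(adj)

# Path a-b-c: deleting b disconnects, so s_b = 1; a and c each need a
# second vertex, so s_a = s_c = 2, giving kappa_av = 5/3.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
```

The exponential search over subsets is fine for toy graphs but makes clear why the paper's interest is in bounds and closed forms for specific graph families.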
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.
Changing mortality and average cohort life expectancy
Schoen, Robert; Canudas-Romo, Vladimir
2005-01-01
of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL) has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure......, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate...
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Appeals Council Requests - Average Processing Time
Social Security Administration — This dataset provides annual data from 1989 through 2015 for the average processing time (elapsed time in days) for dispositions by the Appeals Council (AC) (both...
Average Vegetation Growth 1990 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1997 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1992 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2001 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1995 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1995 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2000 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2000 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1998 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1994 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Average Vegetation Growth 1996 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1996 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2005 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2005 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1993 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Spacetime Average Density (SAD) Cosmological Measures
Page, Don N
2014-01-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann Brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for getting a finite number of observation occurrences by using properties of the Spacetime Average Density (SAD) of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmolo...
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
Rotational averaging of multiphoton absorption cross sections
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
Monthly snow/ice averages (ISCCP)
National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...
Average Annual Precipitation (PRISM model) 1961 - 1990
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...
Symmetric Euler orientation representations for orientational averaging.
Mayerhöfer, Thomas G
2005-09-01
A new kind of orientation representation, called the symmetric Euler orientation representation (SEOR), is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of SEORs with respect to orientational averaging are explored and compared to those of averaging schemes based on conventional Euler orientation representations. To that end, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that averaging schemes based on conventional Euler orientation representations make the result depend on the specific representation used and on the initial position of the crystal. The latter problem can be partly overcome by introducing a weighting factor, but only for two-axes-type Euler orientation representations, and even then a residual difference remains in a numerical evaluation of the average. In contrast, this problem does not arise in principle if a symmetric Euler orientation representation is used, and the results for both types of orientation representation converge as the number of orientations considered in the numerical evaluation increases. Additionally, neither a weighting factor nor non-equally spaced steps are needed in the numerical evaluation of the average. The symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures.
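Quaternion-based orientation sampling sidesteps the representation dependence described above because uniformly random unit quaternions sample rotations isotropically. The sketch below is not the paper's SEOR scheme; it only illustrates quaternion-based orientational averaging using Shoemake's uniform-quaternion construction, averaging cos²θ between a body-fixed axis and the lab z axis, which must converge to 1/3 for an isotropic distribution.

```python
import math
import random

random.seed(42)

def random_quaternion():
    """Uniform random rotation quaternion (x, y, z, w), Shoemake's method."""
    u1, u2, u3 = random.random(), random.random(), random.random()
    return (math.sqrt(1 - u1) * math.sin(2 * math.pi * u2),
            math.sqrt(1 - u1) * math.cos(2 * math.pi * u2),
            math.sqrt(u1) * math.sin(2 * math.pi * u3),
            math.sqrt(u1) * math.cos(2 * math.pi * u3))

def r33(q):
    """(3,3) element of the rotation matrix: z-component of R @ e_z."""
    x, y, _, _ = q
    return 1.0 - 2.0 * (x * x + y * y)

# Orientational average of cos^2(theta) between a molecule-fixed axis and
# the lab z axis; for an isotropic distribution this converges to 1/3.
n = 50_000
avg = sum(r33(random_quaternion()) ** 2 for _ in range(n)) / n
```

No weighting factor is needed here, which is the practical advantage quaternion-type representations offer over naive Euler-angle grids, where equally spaced angles oversample the poles.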
Cosmic Inhomogeneities and the Average Cosmological Dynamics
Paranjape, Aseem; Singh, T. P.
2008-01-01
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a `dark energy'. However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic ini...
Average Bandwidth Allocation Model of WFQ
Tomáš Balogh
2012-01-01
We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We validate the model with examples and with simulation results obtained using the NS2 simulator.
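The abstract's exact iteration is not reproduced here, but the inputs it names (link speed, weights, input rates) are those of the standard weighted max-min fair share that WFQ approximates: flows whose input rate is below their weighted share keep their rate, and the freed capacity is redistributed among the remaining flows by weight. A minimal sketch under that assumption:

```python
def weighted_max_min(capacity, weights, demands):
    """Iterative weighted max-min fair allocation of `capacity`.

    weights[i] is flow i's WFQ weight; demands[i] its input rate.
    Returns {flow index: allocated bandwidth}.
    """
    alloc = {}
    active = set(range(len(weights)))
    cap = capacity
    while active:
        tot_w = sum(weights[i] for i in active)
        # Flows whose demand fits within their current weighted share.
        sat = [i for i in active if demands[i] <= cap * weights[i] / tot_w]
        if not sat:
            # Everyone is bottlenecked: split remaining capacity by weight.
            for i in active:
                alloc[i] = cap * weights[i] / tot_w
            return alloc
        for i in sat:
            alloc[i] = demands[i]
            cap -= demands[i]
            active.remove(i)
    return alloc

# 10 Mb/s link, weights 1:1:2; flow 0 only needs 1 Mb/s, so its unused
# share is redistributed to flows 1 and 2 in proportion to their weights.
alloc = weighted_max_min(10.0, [1, 1, 2], [1.0, 10.0, 10.0])
```

Each pass either terminates or satisfies at least one flow, so the loop runs at most once per flow.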
Monthly Near-Surface Air Temperature Averages
National Aeronautics and Space Administration — Global surface temperatures in 2010 tied 2005 as the warmest on record. The International Satellite Cloud Climatology Project (ISCCP) was established in 1982 as part...
Averaged null energy condition and quantum inequalities in curved spacetime
Kontou, Eleni-Alexandra
2015-01-01
The Averaged Null Energy Condition (ANEC) states that the integral along a complete null geodesic of the projection of the stress-energy tensor onto the tangent vector to the geodesic cannot be negative. ANEC can be used to rule out spacetimes with exotic phenomena, such as closed timelike curves, superluminal travel and wormholes. We prove that ANEC is obeyed by a minimally-coupled, free quantum scalar field on any achronal null geodesic (no two points can be connected by a timelike curve) surrounded by a tubular neighborhood whose curvature is produced by a classical source. To prove ANEC we use a null-projected quantum inequality, which provides constraints on how negative the weighted average of the renormalized stress-energy tensor of a quantum field can be. Starting with a general result of Fewster and Smith, we first derive a timelike projected quantum inequality for a minimally-coupled scalar field on flat spacetime with a background potential. Using that result we proceed to find the bound of a qu...
Averaged controllability of parameter dependent conservative semigroups
Lohéac, Jérôme; Zuazua, Enrique
2017-02-01
We consider the problem of averaged controllability for parameter depending (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.
Marc Treib: Representing Landscape Architecture
Braae, Ellen Marie
2008-01-01
The editor of Representing Landscape Architecture, Marc Treib, argues that there is good reason to evaluate the standard practices of representation that landscape architects have been using for so long. In the rush to the promised land of computer design these practices are now in danger of being...... left by the wayside. The 14 often both fitting and well crafted contributions of this publication offer an approach to how landscape architecture has been and is currently represented; in the design study, in presentation, in criticism, and in the creation of landscape architecture....
Representative process sampling - in practice
Esbensen, Kim; Friis-Pedersen, Hans Henrik; Julius, Lars Petersen
2007-01-01
Didactic data sets representing a range of real-world processes are used to illustrate "how to do" representative process sampling and process characterisation. The selected process data lead to diverse variogram expressions with different systematics (no range vs. important ranges; trends and/or periodicity; different nugget effects and process variations ranging from less than one lag to full variogram lag). Variogram data analysis leads to a fundamental decomposition into 0-D sampling vs. 1-D process variances, based on the three principal variogram parameters: range, sill and nugget effect...
Average Temperatures in the Southwestern United States, 2000-2015 Versus Long-Term Average
U.S. Environmental Protection Agency — This indicator shows how the average air temperature from 2000 to 2015 has differed from the long-term average (1895–2015). To provide more detailed information,...
Leach's Storm Petrel - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Wilson's Storm Petrel - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Surf Scoter - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Cory's Shearwater - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Great Shearwater - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Roseate Tern - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Red Phalarope - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Herring Gull - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Common Eider - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Common Loon - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Northern Gannet - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Common Tern - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Pomarine Jaeger - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Sooty Shearwater - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Northern Fulmar - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Black Scoter - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Bonaparte's Gull - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Laughing Gull - Avian Average Annual Abundance
National Oceanic and Atmospheric Administration, Department of Commerce — The data represent predicted number of individuals of each listed seabird species per standardized survey segment (15 minute travel time at 10 knots = approx. 2.5...
Cosmic structure, averaging and dark energy
Wiltshire, David L
2013-01-01
These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other settings. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and preclude analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time, using stochastic perturbations to estimate their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of the inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German and obtained very similar correlations with the German economic misery index. The results suggest that the millions of books published every year average the authors' shared economic experiences over the past decade.
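The construction is simple to reproduce. A minimal sketch, using illustrative annual rates rather than the study's actual data, computes the economic misery index and its trailing moving average:

```python
def misery_index(inflation, unemployment):
    """Annual economic misery index: inflation rate plus unemployment rate."""
    return [i + u for i, u in zip(inflation, unemployment)]

def trailing_average(series, window):
    """Moving average over the previous `window` values (shorter at the start)."""
    out = []
    for t in range(len(series)):
        chunk = series[max(0, t - window + 1): t + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Illustrative annual rates (percent), not the study's data.
inflation    = [3.0, 4.5, 6.1, 11.0, 9.1, 5.8, 6.5, 7.6, 11.3, 13.5]
unemployment = [4.9, 5.9, 5.6, 4.9, 5.6, 8.5, 7.7, 7.1, 6.1, 5.8]

misery = misery_index(inflation, unemployment)
smoothed = trailing_average(misery, window=11)  # the paper's best-fit window
```

The literary series is then compared against this smoothed index, with the best fit reported at an 11-year window.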
High Average Power Yb:YAG Laser
Zapata, L E; Beach, R J; Payne, S A
2001-05-23
We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.
A singularity theorem based on spatial averages
J M M Senovilla
2007-07-01
Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and of the other physical variables). This is very satisfactory and provides a clear, decisive difference between singular and non-singular cosmologies.
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS
K. L. Goluoglu
2000-06-09
The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.
An approximate analytical approach to resampling averages
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach...
Grassmann Averages for Scalable Robust PCA
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
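The element-wise trimmed average that TGA builds on can be illustrated on its own. The following sketch shows only that ingredient, not the Grassmann-manifold machinery of the full algorithm, with data and names of my choosing:

```python
import numpy as np

def trimmed_average(X, trim=0.2):
    """Element-wise trimmed mean over observations (rows of X): for each
    coordinate ("pixel"), drop the smallest and largest `trim` fraction
    of values, then average the rest."""
    X = np.sort(np.asarray(X, dtype=float), axis=0)
    k = int(trim * X.shape[0])
    return X[k:X.shape[0] - k].mean(axis=0)

# A gross outlier in the second "pixel" barely moves the trimmed average.
data = np.array([[1.0,   2.0],
                 [1.1,   2.1],
                 [0.9,   1.9],
                 [1.0,   2.0],
                 [1.0, 500.0]])   # last observation: corrupted pixel
robust = trimmed_average(data, trim=0.2)
```

Because the extreme values are discarded per coordinate, a single corrupted pixel cannot skew the average, which is the robustness property exploited for computer vision.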
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t
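For readers unfamiliar with the Akaike weights discussed above, a minimal sketch (with hypothetical AIC values) shows how they are computed; note the paper's point that these weights rank models, not individual predictors:

```python
import math

def aic_weights(aics):
    """Akaike weights: w_i proportional to exp(-Delta_i / 2), where
    Delta_i = AIC_i - min(AIC), normalized to sum to one."""
    deltas = [a - min(aics) for a in aics]
    raw = [math.exp(-0.5 * d) for d in deltas]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical AIC values for three candidate regression models.
w = aic_weights([100.0, 102.0, 110.0])
```

Summing such weights over all models containing a given predictor is the practice the paper criticizes as a measure of the predictor's importance.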
Fixed Points of Averages of Resolvents: Geometry and Algorithms
Bauschke, Heinz H; Wylie, Calvin J S
2011-01-01
To provide generalized solutions if a given problem admits no actual solution is an important task in mathematics and the natural sciences. It has a rich history dating back to the early 19th century when Carl Friedrich Gauss developed the method of least squares of a system of linear equations - its solutions can be viewed as fixed points of averaged projections onto hyperplanes. A powerful generalization of this problem is to find fixed points of averaged resolvents (i.e., firmly nonexpansive mappings). This paper concerns the relationship between the set of fixed points of averaged resolvents and certain fixed point sets of compositions of resolvents. It partially extends recent work for two mappings on a question of C. Byrne. The analysis suggests a reformulation in a product space. Furthermore, two new algorithms are presented. A complete convergence proof that is based on averaged mappings is provided for the first algorithm. The second algorithm, which currently has no convergence proof, iterates a map...
Diversity and representativeness: two key factors
Staff Association
2013-01-01
In the past few weeks many of you have filled out the questionnaire for preparing the upcoming Five-yearly review. Similarly, Staff Association members have elected their delegates to the Staff Council for the coming two years. Once again we would like to thank all those who have taken the time and effort to make their voice heard on these two occasions. Elections to the Staff Council Below we publish the new Staff Council with its forty delegates who will represent in 2014 and 2015 all CERN staff in the discussions with Management and Member States in the various bodies and committees. Therefore it is important that the Staff Council represents as far as possible the diversity of the CERN population. By construction, the election process with its electoral colleges and two-step voting procedure guarantees that all Departments, even the small ones, and the various staff categories are correctly represented. Figure 1 shows the participation rate in the elections. The average rate is just above 52 %, with ...
Parameterized Traveling Salesman Problem: Beating the Average
Gutin, G.; Patel, V.
2016-01-01
In the traveling salesman problem (TSP), we are given a complete graph Kn together with an integer weighting w on the edges of Kn, and we are asked to find a Hamilton cycle of Kn of minimum weight. Let h(w) denote the average weight of a Hamilton cycle of Kn for the weighting w. Vizing in 1973 asked
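The quantity h(w) can be checked by brute force for small n: by symmetry every edge of Kn lies in the same fraction of Hamilton cycles, so h(w) = 2W/(n - 1), where W is the total edge weight. A small sketch with an assumed weighting w(i, j) = i + j verifies this:

```python
import itertools

def average_hamilton_weight(w, n):
    """Brute-force h(w): average weight over all Hamilton cycles of K_n.
    `w` maps frozenset({i, j}) to the weight of edge ij."""
    total = count = 0
    for perm in itertools.permutations(range(1, n)):   # fix vertex 0
        cycle = (0,) + perm
        total += sum(w[frozenset((cycle[i], cycle[(i + 1) % n]))]
                     for i in range(n))
        count += 1          # each undirected cycle is counted twice;
    return total / count    # the average is unaffected

n = 5
w = {frozenset((i, j)): i + j for i in range(n) for j in range(i + 1, n)}
h = average_hamilton_weight(w, n)

# Each edge lies in the same fraction of Hamilton cycles, so
# h(w) = n * (mean edge weight) = 2 * W / (n - 1), with W the total weight.
closed_form = 2 * sum(w.values()) / (n - 1)
```

Vizing's question then concerns finding, in polynomial time, a Hamilton cycle beating this easily computed average.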
On averaging methods for partial differential equations
Verhulst, F.
2001-01-01
The analysis of weakly nonlinear partial differential equations, both qualitative and quantitative, is emerging as an exciting field of investigation. In this report we consider specific results related to averaging, but we do not aim at completeness. The sections ... contain important material which
Discontinuities and hysteresis in quantized average consensus
Ceragioli, Francesca; Persis, Claudio De; Frasca, Paolo
2011-01-01
We consider continuous-time average consensus dynamics in which the agents’ states are communicated through uniform quantizers. Solutions to the resulting system are defined in the Krasowskii sense and are proven to converge to conditions of ‘‘practical consensus’’. To cope with undesired chattering
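A discretized sketch of such dynamics (an Euler simulation on a complete graph with a uniform quantizer; all parameters are illustrative, and this is not the paper's Krasowskii-solution analysis) shows the two properties at play: the average of the states is preserved exactly, while the states only reach "practical consensus", clustering without necessarily agreeing exactly:

```python
def quantize(x, step=1.0):
    """Uniform quantizer with resolution `step`."""
    return step * round(x / step)

def consensus_step(x, dt=0.05, step=1.0):
    """One Euler step of x_i' = sum_j (q(x_j) - q(x_i)) on a complete graph:
    each agent only sees quantized versions of the others' states."""
    q = [quantize(v, step) for v in x]
    total = sum(q)
    n = len(x)
    return [v + dt * (total - n * qi) for v, qi in zip(x, q)]

x = [0.3, 4.7, 9.4, -2.6]          # initial states, average 2.95
for _ in range(400):
    x = consensus_step(x)
# The sum (hence the average) is invariant: the quantized increments cancel.
```

The residual disagreement of order of the quantization step, and the possibility of chattering across a quantization boundary, are exactly the phenomena the paper's hysteresis mechanism is designed to handle.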
Bayesian Averaging is Well-Temperated
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic just like predictions of any other inference scheme that generalize from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution the situation...
A Functional Measurement Study on Averaging Numerosity
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Generalized Jackknife Estimators of Weighted Average Derivatives
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic li...
Bootstrapping Density-Weighted Average Derivatives
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...
Quantum Averaging of Squeezed States of Light
Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type
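A minimal sketch of the kind of technical trading rule studied in such models — a moving-average crossover, with made-up window lengths and prices:

```python
def moving_average(prices, window):
    """Simple trailing moving average; defined from index window-1 onward."""
    return [sum(prices[t - window + 1: t + 1]) / window
            for t in range(window - 1, len(prices))]

def ma_signal(prices, short=2, long=4):
    """Crossover rule: +1 (long) when the short MA exceeds the long MA,
    -1 (short) otherwise; aligned on the dates where both are defined."""
    s = moving_average(prices, short)[long - short:]
    l = moving_average(prices, long)
    return [1 if a > b else -1 for a, b in zip(s, l)]

prices = [10, 11, 12, 13, 12, 11, 10, 9]   # made-up price series
signals = ma_signal(prices)
```

In heterogeneous-agent models, demand from traders following such rules feeds back into the price, which is what makes their dynamic effect non-trivial.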
Average utility maximization: A preference foundation
A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)
2014-01-01
This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the rich structure naturally provided by the variable length of the sequences.
High average-power induction linacs
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.
1989-03-15
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of /approximately/ 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.
High Average Power Optical FEL Amplifiers
Ben-Zvi, I; Litvinenko, V
2005-01-01
Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce unprecedented average power FELs with some advantages. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Li...
Independence, Odd Girth, and Average Degree
Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter;
2011-01-01
We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum...
Full averaging of fuzzy impulsive differential inclusions
Natalia V. Skripnik
2010-09-01
In this paper the substantiation of the method of full averaging for fuzzy impulsive differential inclusions is studied. We extend the similar results for impulsive differential inclusions with Hukuhara derivative (Skripnik, 2007), for fuzzy impulsive differential equations (Plotnikov and Skripnik, 2009), and for fuzzy differential inclusions (Skripnik, 2009).
Materials for high average power lasers
Marion, J.E.; Pertica, A.J.
1989-01-01
Unique materials properties requirements for solid state high average power (HAP) lasers dictate a materials development research program. A review of the desirable laser, optical and thermo-mechanical properties for HAP lasers precedes an assessment of the development status for crystalline and glass hosts optimized for HAP lasers. 24 refs., 7 figs., 1 tab.
Discrete Averaging Relations for Micro to Macro Transition
Liu, Chenchen; Reina, Celia
2016-05-01
The well-known Hill's averaging theorems for stresses and strains as well as the so-called Hill-Mandel principle of macrohomogeneity are essential ingredients for the coupling and the consistency between the micro and macro scales in multiscale finite element procedures (FE$^2$). We show in this paper that these averaging relations hold exactly under standard finite element discretizations, even if the stress field is discontinuous across elements and the standard proofs based on the divergence theorem are no longer suitable. The discrete averaging results are derived for the three classical types of boundary conditions (affine displacement, periodic and uniform traction boundary conditions) using the properties of the shape functions and the weak form of the microscopic equilibrium equations. The analytical proofs are further verified numerically through a simple finite element simulation of an irregular representative volume element undergoing large deformations. Furthermore, the proofs are extended to include the effects of body forces and inertia, and the results are consistent with those in the smooth continuum setting. This work provides a solid foundation to apply Hill's averaging relations in multiscale finite element methods without introducing an additional error in the scale transition due to the discretization.
Representative process sampling - in practice
Esbensen, Kim; Friis-Pedersen, Hans Henrik; Julius, Lars Petersen
2007-01-01
Didactic data sets representing a range of real-world processes are used to illustrate "how to do" representative process sampling and process characterisation. The selected process data lead to diverse variogram expressions with different systematics (no range vs. important ranges; trends and/or periodicity; different nugget effects; and process variations ranging from less than one lag to full variogram lag). Variogram data analysis leads to a fundamental decomposition into 0-D sampling vs. 1-D process variances, based on the three principal variogram parameters: range, sill and nugget effect. The presented cases of variography either solved the initial problems or served to understand the reasons and causes behind the specific process structures revealed in the variograms. Process Analytical Technologies (PAT) are not complete without process TOS.
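The variogram computation underlying this kind of analysis is straightforward for a 1-D process. A minimal sketch of the empirical semivariogram, illustrated on an artificial alternating series that produces the periodic "hole effect" rather than a realistic range/sill/nugget shape:

```python
def semivariogram(z, max_lag):
    """Empirical 1-D process variogram: gamma(h) = 0.5 * mean of
    (z[t+h] - z[t])^2 over all pairs separated by lag h."""
    return [0.5 * sum((z[t + h] - z[t]) ** 2 for t in range(len(z) - h))
            / (len(z) - h)
            for h in range(1, max_lag + 1)]

# A strictly periodic series: the variogram oscillates (a "hole effect")
# instead of rising to a sill, exposing the periodicity in the process.
z = [0.0, 1.0] * 8
g = semivariogram(z, max_lag=4)
```

In practice the range, sill and nugget effect are read off (or fitted from) such a curve, yielding the 0-D sampling vs. 1-D process variance decomposition described above.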
Judgments of and by Representativeness
1981-05-15
probably agree that John Updike is a more representative American writer than Norman Mailer. Clearly, such a judgment does not have a frequentistic... For example, in an early study we presented people with the following description: "John is 27 years old, with an outgoing personality. At college he was an outstanding athlete but did not show much ability or interest in intellectual matters." We found that John was judged to be more likely to be "a gym
10 CFR 170.20 - Average cost per professional staff-hour.
2010-01-01
... Provisions § 170.20 Average cost per professional staff-hour. Fees for permits, licenses, amendments, renewals, special projects, 10 CFR part 55 re-qualification and replacement examinations and tests, other...
ANTINOMY OF THE MODERN AVERAGE PROFESSIONAL EDUCATION
A. A. Listvin
2017-01-01
of ways to resolve them and options for a genuine upgrade of the SPE system that answers the requirements of the economy. The inefficiency of the concept of single-level SPE, and its lack of competitiveness against the background of the development of applied bachelor's degrees in higher education, is shown. It is proposed to differentiate basic-level programs for training skilled workers from advanced-level programs, built on the basic level, for training mid-level specialists (technicians, technologists), so as to form a single system of continuous professional training and effective functioning of regional systems of professional education. Such a system would help eliminate disproportions in the triad "worker – technician – engineer" and would raise the quality of professional education. Furthermore, the need for polyprofessional education is indicated, wherein integrated educational structures are required, differing in the degree to which multi-level educational institutions are formed on the basis of network interaction, convergence and integration. According to the author, two types of SPE organizations need to be developed in the regions: territorial multi-profile colleges with flexible variable programs, and organizations realizing educational programs of applied qualifications in specific industries (metallurgical, chemical, construction, etc.) according to the specifics of the economy of the territorial subjects. Practical significance: the results of the research can be useful to education management specialists, heads and pedagogical staff of SPE institutions, and also to representatives of regional administrations and employers in organizing a multilevel network system for training skilled workers and mid-level specialists.
Data mining for average images in a digital hand atlas
Zhang, Aifeng; Cao, Fei; Pietka, Ewa; Liu, Brent J.; Huang, H. K.
2004-04-01
Bone age assessment is a procedure performed in pediatric patients to quickly evaluate parameters of maturation and growth from a left hand and wrist radiograph. Pietka and Cao have developed a computer-aided diagnosis (CAD) method of bone age assessment based on a digital hand atlas. The aim of this paper is to extend their work by automatically selecting the best representative image from a group of normal children, based on specific bony features that reflect skeletal maturity. The group can be of any ethnic origin and gender, from one to 18 years old, in the digital atlas. This best representative image is defined as the "average" image of the group, and can be added to Pietka and Cao's method to facilitate the bone age assessment process.
Averaged Extended Tree Augmented Naive Classifier
Aaron Meehan
2015-07-01
This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN), which is based on combining the advantageous characteristics of the Extended Tree Augmented Naive Bayes (ETAN) and Averaged One-Dependence Estimator (AODE) classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400--407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305--320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE al...
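The trajectory averaging idea can be illustrated on a toy Robbins-Monro iteration. This is only a sketch of the classical scheme, not of the SAMCMC algorithm itself; the target and step sizes are made up:

```python
import random

random.seed(0)

def sa_with_averaging(grad_noisy, theta0, steps):
    """Robbins-Monro iteration theta <- theta - (1/(k+1)) * g(theta),
    returning the last iterate and the trajectory (running) average."""
    theta, avg = theta0, 0.0
    for k in range(steps):
        theta -= grad_noisy(theta) / (k + 1)
        avg += (theta - avg) / (k + 1)   # running mean of theta_1..theta_k
    return theta, avg

# Toy root-finding target: E[g(theta)] = theta - 3, observed with N(0,1) noise.
noisy_grad = lambda th: th - 3.0 + random.gauss(0.0, 1.0)
last, averaged = sa_with_averaging(noisy_grad, theta0=0.0, steps=5000)
```

The point of the paper is that, under mild conditions, the averaged iterate is asymptotically efficient, which the raw iterate generally is not.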
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Carmen BOGHEAN
2013-12-01
Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity across the factors affecting it is conducted by means of the u-substitution method.
Time-average dynamic speckle interferometry
Vladimirov, A. P.
2014-05-01
For the study of microscopic processes occurring at the structural level in solids and in thin biological objects, the method of dynamic speckle interferometry has been successfully applied. However, the method has disadvantages. The purpose of this report is to acquaint colleagues with the method of time averaging in dynamic speckle interferometry of microscopic processes, which eliminates these shortcomings. The main idea of the method is to choose an averaging time that exceeds the characteristic correlation (relaxation) time of the most rapid process. The theory of the method for a thin phase object and for a reflecting object is given. Results are presented from an experiment on the high-cycle fatigue of steel and from an experiment estimating the biological activity of a monolayer of cells cultivated on a transparent substrate. It is shown that the method allows one to visualize, in real time, the accumulation of fatigue damage and to reliably estimate the activity of cells with and without viruses.
Recent activities of the Seismology Division Early Career Representative(s)
Agius, Matthew; Van Noten, Koen; Ermert, Laura; Mai, P. Martin; Krawczyk, CharLotte
2016-04-01
The European Geosciences Union is a bottom-up-organisation, in which its members are represented by their respective scientific divisions, committees and council. In recent years, EGU has embarked on a mission to reach out for its numerous 'younger' members by giving awards to outstanding young scientists and the setting up of Early Career Scientists (ECS) representatives. The division representative's role is to engage in discussions that concern students and early career scientists. Several meetings between all the division representatives are held throughout the year to discuss ideas and Union-wide issues. One important impact ECS representatives have had on EGU is the increased number of short courses and workshops run by ECS during the annual General Assembly. Another important contribution of ECS representatives was redefining 'Young Scientist' to 'Early Career Scientist', which avoids discrimination due to age. Since 2014, the Seismology Division has its own ECS representative. In an effort to more effectively reach out for young seismologists, a blog and a social media page dedicated to seismology have been set up online. With this dedicated blog, we'd like to give more depth to the average browsing experience by enabling young researchers to explore various seismology topics in one place while making the field more exciting and accessible to the broader community. These pages are used to promote the latest research especially of young seismologists and to share interesting seismo-news. Over the months the pages proved to be popular, with hundreds of views every week and an increased number of followers. An online survey was conducted to learn more about the activities and needs of early career seismologists. We present the results from this survey, and the work that has been carried out over the last two years, including detail of what has been achieved so far, and what we would like the ECS representation for Seismology to achieve. Young seismologists are
Average Annual Rainfall over the Globe
Agrawal, D. C.
2013-01-01
The atmospheric recycling of water is a very important phenomenon on the globe because it not only refreshes the water but it also redistributes it over land and oceans/rivers/lakes throughout the globe. This is made possible by the solar energy intercepted by the Earth. The half of the globe facing the Sun, on the average, intercepts 1.74 ×…
Endogenous average cost based access pricing
Fjell, Kenneth; Foros, Øystein; Pal, Debashis
2006-01-01
We consider an industry where a downstream competitor requires access to an upstream facility controlled by a vertically integrated and regulated incumbent. The literature on access pricing assumes the access price to be exogenously fixed ex-ante. We analyze an endogenous average cost based access pricing rule, where both firms realize the interdependence among their quantities and the regulated access price. Endogenous access pricing neutralizes the artificial cost advantag...
The Ghirlanda-Guerra identities without averaging
Chatterjee, Sourav
2009-01-01
The Ghirlanda-Guerra identities are one of the most mysterious features of spin glasses. We prove the GG identities in a large class of models that includes the Edwards-Anderson model, the random field Ising model, and the Sherrington-Kirkpatrick model in the presence of a random external field. Previously, the GG identities were rigorously proved only `on average' over a range of temperatures or under small perturbations.
U. S. Department of Energy project book
1980-01-01
This book covers representative projects in each program within the Department of Energy. The projects included were selected to provide an insight into the wide spectrum of projects authorized and under way in the Department. The projects described do not cover all projects authorized - they are merely representative. Descriptions, goals, and status are given for 29 energy projects, 4 scientific projects, and 5 defense projects. (RWR)
On Backus average for generally anisotropic layers
Bos, Len; Slawinski, Michael A; Stanoev, Theodore
2016-01-01
In this paper, following the Backus (1962) approach, we examine expressions for elasticity parameters of a homogeneous generally anisotropic medium that is long-wave-equivalent to a stack of thin generally anisotropic layers. These expressions reduce to the results of Backus (1962) for the case of isotropic and transversely isotropic layers. In the over half a century since the publication of Backus (1962) there have been numerous publications applying and extending that formulation. However, neither George Backus nor the authors of the present paper are aware of further examinations of the mathematical underpinnings of the original formulation; hence, this paper. We prove that---within the long-wave approximation---if the thin layers obey stability conditions then so does the equivalent medium. We examine---within the Backus-average context---the approximation of the average of a product as the product of averages, and express it as a proposition in terms of an upper bound. In the presented examination we use the e...
A simple algorithm for averaging spike trains.
Julienne, Hannah; Houghton, Conor
2013-02-25
Although spike trains are the principal channel of communication between neurons, a single stimulus will elicit different spike trains from trial to trial. This variability, in both spike timings and spike number, can obscure the temporal structure of spike trains and often means that computations need to be run on numerous spike trains in order to extract features common across all the responses to a particular stimulus. This can increase the computational burden and obscure analytical results. As a consequence, it is useful to consider how to calculate a central spike train that summarizes a set of trials. Indeed, averaging responses over trials is routine for other signal types. Here, a simple method for finding a central spike train is described. The spike trains are first mapped to functions, these functions are averaged, and a greedy algorithm is then used to map the average function back to a spike train. The central spike trains are tested for a large data set. Their performance on a classification-based test is considerably better than the performance of the medoid spike trains.
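The map-average-map-back pipeline described in this abstract can be sketched minimally (the exponential kernel, its time constant, the candidate grid, and the fixed spike count are assumptions for illustration, not details taken from the paper):

```python
import numpy as np

def to_function(spikes, t, tau=0.05):
    """Map a spike train to a real function: a sum of causal
    exponential kernels, one per spike (kernel choice is an assumption)."""
    f = np.zeros_like(t)
    for s in spikes:
        f += np.where(t >= s, np.exp(-(t - s) / tau), 0.0)
    return f

def central_spike_train(trains, t, n_spikes, tau=0.05):
    """Average the kernel functions over trials, then greedily place
    spikes one at a time, each minimizing the L2 distance to the average."""
    target = np.mean([to_function(tr, t, tau) for tr in trains], axis=0)
    centre = []
    for _ in range(n_spikes):
        errors = [np.sum((to_function(centre + [cand], t, tau) - target) ** 2)
                  for cand in t]
        centre.append(t[int(np.argmin(errors))])
    return np.sort(np.array(centre))
```

With a couple of jittered two-spike trials, the greedy centre places its spikes near the underlying spike times.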
Changing mortality and average cohort life expectancy
Robert Schoen
2005-10-01
Period life expectancy varies with changes in mortality and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality, seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.
Disk-averaged synthetic spectra of Mars
Tinetti, Giovanna; Meadows, Victoria S.; Crisp, David; Fong, William; Velusamy, Thangasamy; Snively, Heather
2005-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and European Space Agency's Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of a Mars-like planet to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra. We explore the detectability as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model. This model uses observational data as input to generate a database of spatially resolved synthetic spectra for a range of illumination conditions and viewing geometries. The model was validated against spectra recorded by the Mars Global Surveyor-Thermal Emission Spectrometer and the Mariner 9-Infrared Interferometer Spectrometer. Results presented here include disk-averaged synthetic spectra, light curves, and the spectral variability at visible and mid-infrared wavelengths for Mars as a function of viewing angle, illumination, and season. We also considered the differences in the spectral appearance of an increasingly ice-covered Mars, as a function of spectral resolution, signal-to-noise and integration time for both TPF-C and TPF-I/Darwin.
Disk-averaged synthetic spectra of Mars
Tinetti, G; Fong, W; Meadows, V S; Snively, H; Velusamy, T; Crisp, David; Fong, William; Meadows, Victoria S.; Snively, Heather; Tinetti, Giovanna; Velusamy, Thangasamy
2004-01-01
The principal goal of the NASA Terrestrial Planet Finder (TPF) and ESA Darwin mission concepts is to directly detect and characterize extrasolar terrestrial (Earth-sized) planets. This first generation of instruments is expected to provide disk-averaged spectra with modest spectral resolution and signal-to-noise. Here we use a spatially and spectrally resolved model of the planet Mars to study the detectability of a planet's surface and atmospheric properties from disk-averaged spectra as a function of spectral resolution and wavelength range, for both the proposed visible coronagraph (TPF-C) and mid-infrared interferometer (TPF-I/Darwin) architectures. At the core of our model is a spectrum-resolving (line-by-line) atmospheric/surface radiative transfer model which uses observational data as input to generate a database of spatially-resolved synthetic spectra for a range of illumination conditions (phase angles) and viewing geometries. Results presented here include disk-averaged synthetic spectra, light-cur...
Representing culture in interstellar messages
Vakoch, Douglas A.
2008-09-01
As scholars involved with the Search for Extraterrestrial Intelligence (SETI) have contemplated how we might portray humankind in any messages sent to civilizations beyond Earth, one of the challenges they face is adequately representing the diversity of human cultures. For example, in a 2003 workshop in Paris sponsored by the SETI Institute, the International Academy of Astronautics (IAA) SETI Permanent Study Group, the International Society for the Arts, Sciences and Technology (ISAST), and the John Templeton Foundation, a varied group of artists, scientists, and scholars from the humanities considered how to encode notions of altruism in interstellar messages. Though the group represented 10 countries, most were from Europe and North America, leading to the group's recommendation that subsequent discussions on the topic should include more globally representative perspectives. As a result, the IAA Study Group on Interstellar Message Construction and the SETI Institute sponsored a follow-up workshop in Santa Fe, New Mexico, USA in February 2005. The Santa Fe workshop brought together scholars from a range of disciplines including anthropology, archaeology, chemistry, communication science, philosophy, and psychology. Participants included scholars familiar with interstellar message design as well as specialists in cross-cultural research who had participated in the Symposium on Altruism in Cross-cultural Perspective, held just prior to the workshop during the annual conference of the Society for Cross-cultural Research. The workshop included discussion of how cultural understandings of altruism can complement and critique the more biologically based models of altruism proposed for interstellar messages at the 2003 Paris workshop. This paper, written by the chair of both the Paris and Santa Fe workshops, will explore the challenges of communicating concepts of altruism that draw on both biological and cultural models.
Semantic Representatives of the Concept
Elena N. Tsay
2013-01-01
In the article, concept, one of the principal notions of cognitive linguistics, is investigated. Considering concept as a cultural phenomenon with language realization and ethnocultural peculiarities, a description of the concept "happiness" is presented. The lexical and semantic paradigm of the concept of happiness correlates with a great number of lexical and semantic variants. In the work, semantic representatives of the concept of happiness covering supreme spiritual values are revealed, and a semantic interpretation of their functioning in the Biblical discourse is given.
Representative mass reduction in sampling
Petersen, Lars; Esbensen, Harry Kim; Dahl, Casper Kierulf
2004-01-01
We here present a comprehensive survey of current mass reduction principles and hardware available in the current market. We conduct a rigorous comparison study of the performance of 17 field and/or laboratory instruments or methods which are quantitatively characterized (and ranked) for accuracy: … dividers, the Boerner Divider, the "spoon method", alternate/fractional shoveling and grab sampling. Only devices based on riffle splitting principles (static or rotational) pass the ultimate representativity test (with minor, but significant, relative differences). Grab sampling, the overwhelmingly… Mass reduction must always be representative in the full Theory of Sampling (TOS) sense. This survey also allows empirical verification of the merits of the famous "Gy's formula" for order-of-magnitude estimation of the Fundamental Sampling Error (FSE).
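Gy's formula mentioned above, used in the Theory of Sampling for order-of-magnitude FSE estimates, can be sketched as follows (the default shape and granulometric factors are conventional textbook values, and all numbers in the example are purely illustrative):

```python
def gy_fse_relative_variance(d_cm, m_sample_g, m_lot_g, c, f=0.5, g=0.25, l=1.0):
    """Relative variance of the Fundamental Sampling Error (FSE) from
    Gy's formula:  sigma^2 = f * g * c * l * d^3 * (1/Ms - 1/ML),
    with top particle size d in cm and masses in grams.  f (shape) and
    g (granulometric) default to the conventional 0.5 and 0.25; the
    mineralogical factor c (g/cm^3) and liberation factor l are
    material-specific and must be supplied by the user."""
    return f * g * c * l * d_cm ** 3 * (1.0 / m_sample_g - 1.0 / m_lot_g)

# Illustrative only: a 100 g sample from a 1 t lot, 1 mm top size, c = 10 g/cm^3.
sigma2 = gy_fse_relative_variance(d_cm=0.1, m_sample_g=100.0, m_lot_g=1e6, c=10.0)
relative_std_percent = 100.0 * sigma2 ** 0.5
```

The square root of the relative variance gives the relative standard deviation of the FSE, which is the quantity usually compared against a sampling-quality target.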
De Luca, G.; Magnus, J.R.
2011-01-01
This article is concerned with the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian Model Averaging (BMA) estimator and the Weighted Average Least Squares (WALS) estimator.
A sixth order averaged vector field method
Li, Haochen; Wang, Yushun; Qin, Mengzhao
2014-01-01
In this paper, based on the theory of rooted trees and B-series, we propose the concrete formulas of the substitution law for the trees of order =5. With the help of the new substitution law, we derive a B-series integrator extending the averaged vector field (AVF) method to high order. The new integrator turns out to be of order six and exactly preserves energy for Hamiltonian systems. Numerical experiments are presented to demonstrate the accuracy and the energy-preserving property of the s...
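A minimal sketch of the second-order AVF integrator that this work extends to order six (the quadrature order, the fixed-point solver, and the harmonic-oscillator test problem are assumptions for illustration, not details from the paper):

```python
import numpy as np

def avf_step(f, y, h, iters=50):
    """One step of the second-order averaged vector field method,
        y1 = y0 + h * \int_0^1 f((1 - xi) * y0 + xi * y1) d(xi),
    with the integral evaluated by 4-point Gauss-Legendre quadrature and
    the implicit equation solved by fixed-point iteration."""
    nodes, weights = np.polynomial.legendre.leggauss(4)
    xi, w = 0.5 * (nodes + 1.0), 0.5 * weights      # map [-1, 1] -> [0, 1]
    y1 = y + h * f(y)                               # explicit Euler predictor
    for _ in range(iters):
        y1 = y + h * sum(wk * f((1.0 - xk) * y + xk * y1)
                         for wk, xk in zip(w, xi))
    return y1

# Harmonic oscillator H(q, p) = (q^2 + p^2) / 2, i.e. f(q, p) = (p, -q);
# the AVF method preserves this quadratic energy exactly.
f = lambda y: np.array([y[1], -y[0]])
y = np.array([1.0, 0.0])
for _ in range(100):
    y = avf_step(f, y, h=0.1)
H = 0.5 * (y[0] ** 2 + y[1] ** 2)   # stays at 0.5 up to solver tolerance
```

For linear vector fields the averaged integral collapses to the implicit midpoint rule, which is why the quadratic energy here is conserved to solver tolerance over many steps.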
Phase-averaged transport for quasiperiodic Hamiltonians
Bellissard, J; Schulz-Baldes, H
2002-01-01
For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.
Sparsity averaging for radio-interferometric imaging
Carrillo, Rafael E; Wiaux, Yves
2014-01-01
We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.
Fluctuations of wavefunctions about their classical average
Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H (Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca, Mexico)
2003-02-07
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.
The average free volume model for liquids
Yu, Yang
2014-01-01
In this work, the molar volume thermal expansion coefficient of 59 room-temperature ionic liquids is compared with their van der Waals volume Vw. A regular correlation can be discerned between the two quantities. An average free volume model, which treats the particles as hard cores with attractive forces, is proposed to explain the correlation. A combination of free volume and the Lennard-Jones potential is applied to explain the physical phenomena of liquids. Some typical simple liquids (inorganic, organic, metallic and salt) are introduced to verify this hypothesis. Good agreement between the theoretical predictions and experimental data is obtained.
Grassmann Averages for Scalable Robust PCA
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small-to-medium sized datasets. To address this, we introduce the Grassmann Average (GA), whic...
Representing Conversations for Scalable Overhearing
Gutnik, G.; doi:10.1613/jair.1829
2011-01-01
Open distributed multi-agent systems are gaining interest in the academic community and in industry. In such open settings, agents are often coordinated using standardized agent conversation protocols. The representation of such protocols (for analysis, validation, monitoring, etc) is an important aspect of multi-agent applications. Recently, Petri nets have been shown to be an interesting approach to such representation, and radically different approaches using Petri nets have been proposed. However, their relative strengths and weaknesses have not been examined. Moreover, their scalability and suitability for different tasks have not been addressed. This paper addresses both these challenges. First, we analyze existing Petri net representations in terms of their scalability and appropriateness for overhearing, an important task in monitoring open multi-agent systems. Then, building on the insights gained, we introduce a novel representation using Colored Petri nets that explicitly represent legal joint conv...
Project 2010 Project Management
Happy, Robert
2010-01-01
The ideal on-the-job reference guide for project managers who use Microsoft Project 2010. This must-have guide to using Microsoft Project 2010 is written from a real project manager's perspective and is packed with information you can use on the job. The book explores using Project 2010 during phases of project management, reveals best practices, and walks you through project flow from planning through tracking to closure. This valuable book follows the processes defined in the PMBOK Guide, Fourth Edition, and also provides exam prep for Microsoft's MCTS: Project 2010 certification.
Calculating ensemble averaged descriptions of protein rigidity without sampling.
Luis C González
Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible numbers of distance constraints (or bars) that can form between a pair of rigid bodies are replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble-averaged properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.
Estimating a weighted average of stratum-specific parameters.
Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul
2008-10-30
This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
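The contrast between the known-weight estimator and the precision-weighted (homogeneity) estimator described above can be sketched as follows (the convex blend with tuning parameter `lam` is a simplified stand-in for the paper's random-effects-based adaptive estimators, not the authors' exact method):

```python
import numpy as np

def stratified_weighted_average(means, weights):
    """Weighted sum of sample stratum means using known population weights
    (the estimator appropriate under heterogeneous stratum parameters)."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), means))

def adaptive_estimator(means, weights, variances, lam):
    """Blend of the known-weight estimator and the precision-weighted
    ('homogeneity') estimator via a tuning parameter lam in [0, 1];
    lam = 0 keeps the known weights, lam = 1 ignores them entirely."""
    prec = 1.0 / np.asarray(variances, dtype=float)
    homogeneous = float(np.dot(prec / prec.sum(), means))
    heterogeneous = stratified_weighted_average(means, weights)
    return (1.0 - lam) * heterogeneous + lam * homogeneous
```

In the paper's design-based spirit, `lam` would be chosen to minimize an estimate of mean-squared error rather than fixed a priori.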
Detrending moving average algorithm for multifractals
Gu, Gao-Feng; Zhou, Wei-Xing
2010-07-01
The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces; it contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, as a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. The backward MFDMA algorithm is also found to outperform multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
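A minimal sketch of the one-dimensional DMA fluctuation function with the window-position parameter θ (the profile construction and window alignment follow the usual DMA convention; implementation details here are assumptions, not taken from the paper):

```python
import numpy as np

def dma_fluctuation(x, n, theta=0.0):
    """DMA fluctuation F(n) for one window size n.  theta = 0 gives the
    backward, 0.5 the centered, and 1 the forward moving average."""
    y = np.cumsum(x - np.mean(x))                  # profile of the series
    back = int(np.floor((n - 1) * (1.0 - theta)))  # points behind position t
    fwd = n - 1 - back                             # points ahead of position t
    residuals = [y[t] - y[t - back : t + fwd + 1].mean()
                 for t in range(back, len(y) - fwd)]
    return float(np.sqrt(np.mean(np.square(residuals))))
```

Fitting log F(n) against log n over a range of window sizes estimates the scaling exponent; for uncorrelated noise the backward DMA slope comes out close to 0.5.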
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-10-01
The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
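The idea of trajectory averaging for stochastic approximation can be illustrated on a toy Robbins-Monro root-finding problem (the step-size schedule, burn-in fraction, and target are illustrative assumptions, not the SAMCMC setting of the paper):

```python
import numpy as np

def rm_with_trajectory_averaging(noisy_grad, theta0, n_iter,
                                 a=1.0, gamma=0.6, burn=0.5, seed=42):
    """Robbins-Monro iteration theta_{k+1} = theta_k - a_k * Y_k with a
    slowly decaying step size a_k = a / k**gamma; the trajectory-averaging
    estimator is the mean of the iterates after a burn-in fraction."""
    rng = np.random.default_rng(seed)
    theta, trail = theta0, []
    for k in range(1, n_iter + 1):
        theta = theta - (a / k ** gamma) * noisy_grad(theta, rng)
        trail.append(theta)
    return float(np.mean(trail[int(burn * n_iter):]))

# Toy problem: find the root of E[g(theta)] = theta - 2 from noisy draws.
g = lambda th, rng: (th - 2.0) + rng.standard_normal()
est = rm_with_trajectory_averaging(g, theta0=0.0, n_iter=20000)
```

The averaged iterate is typically far less noisy than the final iterate, which is the efficiency gain the paper establishes rigorously for SAMCMC.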
Averaged null energy condition from causality
Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein
2017-07-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_{uu}, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_{uuu···u} ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.
MACHINE PROTECTION FOR HIGH AVERAGE CURRENT LINACS
Jordan, Kevin; Allison, Trent; Evans, Richard; Coleman, James; Grippo, Albert
2003-05-01
A fully integrated Machine Protection System (MPS) is critical to efficient commissioning and safe operation of all high current accelerators. The Jefferson Lab FEL [1,2] has multiple electron beam paths and many different types of diagnostic insertion devices. The MPS [3] needs to monitor both the status of these devices and the magnet settings which define the beam path. The matrix of these devices and beam paths is programmed into gate arrays; the output of the matrix is an allowable maximum average power limit. This power limit is enforced by the drive laser for the photocathode gun. The Beam Loss Monitors (BLMs), RF status, and laser safety system status are also inputs to the control matrix. There are 8 Machine Modes (electron path) and 8 Beam Modes (average power limits) that define the safe operating limits for the FEL. Combinations outside of this matrix are unsafe and the beam is inhibited. The power limits range from no beam to 2 megawatts of electron beam power.
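The mode-matrix logic described above can be sketched as a simple lookup table (the matrix dimensions are reduced and the mode labels and power numbers below are hypothetical, not the actual FEL settings):

```python
# Hypothetical, illustrative numbers only -- the real FEL matrix has
# 8 Machine Modes x 8 Beam Modes with site-specific limits.
# Rows: machine modes (electron beam path).  Columns: beam modes
# (average power class).  None marks an unsafe combination.
LIMITS = [
    [0.0, 1.0, 100.0, None],
    [0.0, 1.0, None,  None],
    [0.0, None, None, None],
]

def allowed_power(machine_mode, beam_mode):
    """Maximum allowable average beam power (W) for a mode pair;
    0.0 means the beam is inhibited.  Combinations outside the
    programmed matrix (None) are treated as unsafe."""
    limit = LIMITS[machine_mode][beam_mode]
    return 0.0 if limit is None else limit
```

In hardware this lookup would live in the gate arrays, with the returned limit enforced on the photocathode drive laser.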
Intensity contrast of the average supergranule
Langfellner, J; Gizon, L
2016-01-01
While the velocity fluctuations of supergranulation dominate the spectrum of solar convection at the solar surface, very little is known about the fluctuations in other physical quantities, like temperature or density, at supergranulation scale. Using SDO/HMI observations, we characterize the intensity contrast of solar supergranulation at the solar surface. We identify the positions of ~10^4 outflow and inflow regions at supergranulation scales, from which we construct average flow maps and co-aligned intensity and magnetic field maps. In the average outflow center, the maximum intensity contrast is (7.8 ± 0.6) × 10^-4 (there is no corresponding feature in the line-of-sight magnetic field). This corresponds to a temperature perturbation of about 1.1 ± 0.1 K, in agreement with previous studies. We discover an east-west anisotropy, with a slightly deeper intensity minimum east of the outflow center. The evolution is asymmetric in time: the intensity excess is larger 8 hours before the reference t...
Local average height distribution of fluctuating interfaces
Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel V.
2017-01-01
Height fluctuations of growing surfaces can be characterized by the probability distribution of height in a spatial point at a finite time. Recently there has been spectacular progress in the studies of this quantity for the Kardar-Parisi-Zhang (KPZ) equation in 1 +1 dimensions. Here we notice that, at or above a critical dimension, the finite-time one-point height distribution is ill defined in a broad class of linear surface growth models unless the model is regularized at small scales. The regularization via a system-dependent small-scale cutoff leads to a partial loss of universality. As a possible alternative, we introduce a local average height. For the linear models, the probability density of this quantity is well defined in any dimension. The weak-noise theory for these models yields the "optimal path" of the interface conditioned on a nonequilibrium fluctuation of the local average height. As an illustration, we consider the conserved Edwards-Wilkinson (EW) equation, where, without regularization, the finite-time one-point height distribution is ill defined in all physical dimensions. We also determine the optimal path of the interface in a closely related problem of the finite-time height-difference distribution for the nonconserved EW equation in 1 +1 dimension. Finally, we discuss a UV catastrophe in the finite-time one-point distribution of height in the (nonregularized) KPZ equation in 2 +1 dimensions.
Asymptotic Time Averages and Frequency Distributions
Muhammad El-Taha
2016-01-01
Consider an arbitrary nonnegative deterministic process (in a stochastic setting, {X(t), t≥0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes, and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.
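The central identity of the abstract (time average equals expectation under the long-run frequency distribution) is easy to illustrate numerically for a discrete-time process on a finite state space; the sketch below uses a synthetic sample path and is not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# A sample path of a discrete-time process on a finite state space
path = rng.integers(0, 5, size=100_000)
f = lambda x: x ** 2                     # any measurable function

# Long-run time average of f along the path
time_avg = f(path).mean()

# Expectation of f under the empirical (long-run frequency) distribution
states, counts = np.unique(path, return_counts=True)
freq = counts / counts.sum()
freq_expect = (f(states) * freq).sum()

print(time_avg, freq_expect)             # the two agree
```

For a finite discrete-time window the two quantities agree exactly by construction; the paper's contribution concerns the conditions under which this equality survives in the long-run, continuous-time limit.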
PSCAD Modules Representing PV Generator
Muljadi, E.; Singh, M.; Gevorgian, V.
2013-08-01
Photovoltaic power plants (PVPs) have been growing in size, and their installation time is very short. With the cost of photovoltaic (PV) panels dropping in recent years, it can be predicted that in the next 10 years the contribution of PVPs to the total number of renewable energy power plants will grow significantly. In this project, the National Renewable Energy Laboratory (NREL) developed dynamic models of modules to be used as building blocks for simulation models of a single PV array, which can be expanded to include a Maximum Power Point Tracker (MPPT), expanded to include a PV inverter, or expanded to cover an entire PVP. The focus of the investigation and the complexity of the simulation determine the components that must be included in the simulation. The development of the PV inverter was covered in detail, including the control diagrams. Both the current-regulated voltage source inverter and the current-regulated current source inverter were developed in PSCAD. Various operations of the PV inverters were simulated under normal and abnormal conditions. Symmetrical and unsymmetrical faults were simulated, presented, and discussed. Both three-phase analysis and symmetrical component analysis were included to clarify the understanding of unsymmetrical faults. The dynamic model validation was based on testing data provided by SCE. Testing was conducted at SCE with a focus on the grid interface behavior of the PV inverter under different faults and disturbances. The dynamic model validation covers both symmetrical and unsymmetrical faults.
Asymmetric network connectivity using weighted harmonic averages
Morrison, Greg; Mahadevan, L.
2011-02-01
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph, using a simple weighted harmonic average of connectivity; that is, a real-valued Generalized Erdös Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdös numbers in mathematical coauthorship. We also show the utility of our approach in devising a ratings scheme that we apply to the data from the Netflix prize, and find a significant improvement using our method over a baseline.
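The weighted harmonic average underlying the GEN can be sketched in a few lines; note that this toy computes only a single harmonic-mean step with made-up values and weights, not the recursive GEN construction of the paper.

```python
import numpy as np

def weighted_harmonic_mean(values, weights):
    """Weighted harmonic mean: sum(w) / sum(w / v)."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return weights.sum() / (weights / values).sum()

# Toy example: hypothetical "closeness" values of three neighbors of a
# node, weighted by hypothetical edge weights (collaboration strengths).
neighbor_closeness = [1.0, 2.0, 4.0]
edge_weights = [1.0, 1.0, 2.0]

hm = weighted_harmonic_mean(neighbor_closeness, edge_weights)
print(hm)   # 2.0
```

The harmonic mean is dominated by the smallest values, which is why a single strong (short) connection pulls the measure down far more than several weak ones: the property that lets the GEN distinguish topologies that shortest-path distance treats as identical.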
Averaged Null Energy Condition from Causality
Hartman, Thomas; Tajdini, Amirhossein
2016-01-01
Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, $\int du\, T_{uu}$, must be positive. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to $n$-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form $\int du\, X_{uuu\cdots u} \geq 0$. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment ...
Average Gait Differential Image Based Human Recognition
Jinyan Chen
2014-01-01
The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method lies in the fact that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method for gait-based recognition and consumes less memory.
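The AGDI construction (accumulate absolute silhouette differences between adjacent frames) can be sketched as follows; the tiny binary frames are invented for illustration and the normalization choice is an assumption.

```python
import numpy as np

def average_gait_differential_image(silhouettes):
    """AGDI sketch: mean absolute difference between adjacent binary
    silhouettes.  silhouettes has shape (n_frames, H, W), values in {0, 1}."""
    frames = np.asarray(silhouettes, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))   # adjacent-frame differences
    return diffs.mean(axis=0)                 # accumulate and normalize

# Tiny example: 3 frames of a 2x2 "silhouette"
frames = np.array([
    [[0, 1], [0, 0]],
    [[0, 1], [1, 0]],
    [[0, 0], [1, 0]],
])
agdi = average_gait_differential_image(frames)
print(agdi)
```

Pixels that change between frames (the moving limbs) accumulate high values, while pixels that stay constant contribute zero, which is how the AGDI captures the kinetic component that a plain average (GEI-style) blurs together with the static body shape.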
Geographic Gossip: Efficient Averaging for Sensor Networks
Dimakis, Alexandros G; Wainwright, Martin J
2007-01-01
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste of energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log ...
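For contrast with the geographic scheme, standard pairwise gossip on a ring, the slow baseline the paper improves upon, can be sketched as follows (the ring size and iteration count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
values = rng.random(n)
true_avg = values.mean()

# Standard pairwise gossip on a ring: each round, a random node averages
# its value with one of its two ring neighbors.  The global average is
# conserved at every step, and all values slowly contract toward it.
x = values.copy()
for _ in range(50_000):
    i = rng.integers(n)
    j = (i + rng.choice([-1, 1])) % n
    x[i] = x[j] = (x[i] + x[j]) / 2

print(abs(x - true_avg).max())   # small residual disagreement
```

On the ring this converges only after order $n^2$ rounds per unit of accuracy, which is exactly the mixing-time bottleneck that motivates routing information geographically instead of only to immediate neighbors.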
Bivariate phase-rectified signal averaging
Schumann, Aicko Y; Bauer, Axel; Schmidt, Georg
2008-01-01
Phase-Rectified Signal Averaging (PRSA) was shown to be a powerful tool for the study of quasi-periodic oscillations and nonlinear effects in non-stationary signals. Here we present a bivariate PRSA technique for the study of the inter-relationship between two simultaneous data recordings. Its performance is compared with traditional cross-correlation analysis, which, however, does not work well for non-stationary data and cannot distinguish the coupling directions in complex nonlinear situations. We show that bivariate PRSA allows the analysis of events in one signal at times when the other signal is in a certain phase or state; it is stable in the presence of noise and insensitive to non-stationarities.
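A bivariate PRSA step can be sketched as anchor selection in one signal plus window averaging in the other; the anchor rule (increase events) and window length below are illustrative choices, not the authors' exact settings, and the data are synthetic.

```python
import numpy as np

def prsa(anchor_signal, target_signal, half_window):
    """Bivariate PRSA sketch: anchor on increase events in one signal,
    then average aligned windows of the other signal."""
    x = np.asarray(anchor_signal, dtype=float)
    y = np.asarray(target_signal, dtype=float)
    anchors = [i for i in range(half_window, len(x) - half_window)
               if x[i] > x[i - 1]]                 # phase rectification
    windows = np.array([y[i - half_window:i + half_window] for i in anchors])
    return windows.mean(axis=0)

# Two noisy recordings sharing a common quasi-periodic oscillation
t = np.arange(2000)
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * t / 50) + rng.normal(0, 1, t.size)
y = np.sin(2 * np.pi * t / 50) + rng.normal(0, 1, t.size)

avg = prsa(x, y, half_window=25)
print(avg.shape)   # one averaged window of length 2 * half_window
```

Because the anchors are phase-locked to events in the first signal, uncorrelated noise in the second signal averages away, while any oscillation coupled to those events survives in the averaged window.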
Voice of the People: Representative Government in the United States.
Bliss, Pam, Ed.; Heinz, Ann, Ed.; Kaplan, Howard, Ed.; Landman, James, Ed.
2001-01-01
This magazine aims to help high school teachers of civics, government, history, law, and law-related education program developers educate students about legal issues. This issue focuses on voting. It contains 11 articles: (1) "The Project of Democracy" (A. Keyssar) demonstrates how the story of the right to vote represents a slow and fitful…
A Predictive Likelihood Approach to Bayesian Averaging
Tomáš Jeřábek
2015-01-01
Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is assessed using historical data covering the domestic economy and the foreign economy, which is represented by the countries of the Eurozone. Because the forecast accuracies of the models differ, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. An equal-weight scheme is used as a simple combination scheme. The results show that optimally combined densities are comparable to the best individual models.
Industrial Applications of High Average Power FELS
Shinn, Michelle D
2005-01-01
The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m2) processing of materials requires the economical production of laser powers in the tens of kilowatts; such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-CW (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of application tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...
A new approach for Bayesian model averaging
TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun
2012-01-01
Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that the BMA weights must add to one, and then use a limited-memory quasi-Newton algorithm to solve the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
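A minimal likelihood-based weight-training sketch for combining forecast densities is shown below. It deliberately differs from the paper's method: it keeps a softmax parametrization (so the weights do sum to one) and uses SciPy's L-BFGS-B, and all data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
# Observations and two competing models' predictive means; model A
# (mean 1.0) matches the data-generating process, model B does not.
obs = rng.normal(1.0, 1.0, size=500)
pred = np.stack([np.full(500, 1.0), np.full(500, 3.0)])

def neg_log_lik(theta):
    w = np.exp(theta) / np.exp(theta).sum()    # softmax keeps weights valid
    dens = norm.pdf(obs, loc=pred, scale=1.0)  # per-model predictive densities
    return -np.log(w @ dens).sum()

res = minimize(neg_log_lik, x0=np.zeros(2), method="L-BFGS-B")
w = np.exp(res.x) / np.exp(res.x).sum()
print(w)   # nearly all weight on the better model
```

The quasi-Newton optimizer needs only the (here finite-differenced) gradient of the mixture log-likelihood, which is the core computational appeal of the BFGS route over MCMC sampling.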
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
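The unconstrained route described above, integrating the average force along the selected coordinate to obtain the free energy, can be checked on a toy potential with a known answer; the potential and discretization below are invented for illustration.

```python
import numpy as np

# Toy potential U(x) = x^2, so the exact free-energy profile is known.
# The mean force along x is f(x) = -dU/dx = -2x, and the profile is
# recovered by integrating the negative of the average force.
xs = np.linspace(-2.0, 2.0, 401)
mean_force = -2.0 * xs

dx = xs[1] - xs[0]
# Trapezoid-rule cumulative integral of -<f> along x
increments = -(mean_force[1:] + mean_force[:-1]) / 2.0 * dx
F = np.concatenate([[0.0], np.cumsum(increments)])
F -= F[xs.size // 2]          # fix the arbitrary constant so F(0) = 0

print(np.abs(F - xs**2).max())   # agrees with U(x) = x^2
```

In a real simulation the `mean_force` array would come from averaging the instantaneous force in bins along the coordinate; the integration step is unchanged.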
A high-average-power FEL for industrial applications
Dylla, H.F.; Benson, S.; Bisognano, J.
1995-12-31
CEBAF has developed a comprehensive conceptual design of an industrial user facility based on a kilowatt UV (150-1000 nm) and IR (2-25 micron) FEL driven by a recirculating, energy-recovering 200 MeV superconducting radio-frequency (SRF) accelerator. FEL users (CEBAF's partners in the Laser Processing Consortium, including AT&T, DuPont, IBM, Northrop-Grumman, 3M, and Xerox) plan to develop applications such as polymer surface processing, metals and ceramics micromachining, and metal surface processing, with the overall effort leading to later scale-up to industrial systems at 50-100 kW. Representative applications are described. The proposed high-average-power FEL overcomes limitations of conventional laser sources in available power, cost-effectiveness, tunability and pulse structure. 4 refs., 3 figs., 2 tabs.
Average gluon and quark jet multiplicities at higher orders
Bolzoni, Paolo; Kniehl, Bernd A. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Kotikov, Anatoly V. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Joint Institute of Nuclear Research, Moscow (Russian Federation). Bogoliubov Lab. of Theoretical Physics
2013-05-15
We develop a new formalism for computing and including both the perturbative and nonperturbative QCD contributions to the scale evolution of average gluon and quark jet multiplicities. The new method is motivated by recent progress in timelike small-$x$ resummation obtained in the $\overline{\mathrm{MS}}$ factorization scheme. We obtain next-to-next-to-leading-logarithmic (NNLL) resummed expressions, which represent generalizations of previous analytic results. Our expressions depend on two nonperturbative parameters with clear and simple physical interpretations. A global fit of these two quantities to all available experimental data sets that are compatible with regard to the jet algorithms demonstrates by its goodness how our results solve a longstanding problem of QCD. We show that the statistical and theoretical uncertainties both do not exceed 5% for scales above 10 GeV. We finally propose to use the jet multiplicity data as a new way to extract the strong-coupling constant. Including all the available theoretical input within our approach, we obtain $\alpha_s^{(5)}(M_Z)=0.1199\pm0.0026$ in the $\overline{\mathrm{MS}}$ scheme in an approximation equivalent to next-to-next-to-leading order enhanced by the resummations of $\ln x$ terms through the NNLL level and of $\ln Q^2$ terms by the renormalization group, in excellent agreement with the present world average.
Interpreting Sky-Averaged 21-cm Measurements
Mirocha, Jordan
2015-01-01
Within the first ~billion years after the Big Bang, the intergalactic medium (IGM) underwent a remarkable transformation, from a uniform sea of cold neutral hydrogen gas to a fully ionized, metal-enriched plasma. Three milestones during this epoch of reionization -- the emergence of the first stars, black holes (BHs), and full-fledged galaxies -- are expected to manifest themselves as extrema in sky-averaged ("global") measurements of the redshifted 21-cm background. However, interpreting these measurements will be complicated by the presence of strong foregrounds and non-trivialities in the radiative transfer (RT) modeling required to make robust predictions. I have developed numerical models that efficiently solve the frequency-dependent radiative transfer equation, which has led to two advances in studies of the global 21-cm signal. First, frequency-dependent solutions facilitate studies of how the global 21-cm signal may be used to constrain the detailed spectral properties of the first stars, BHs, and galaxies, rather than just the timing of their formation. Second, the speed of these calculations allows one to search vast expanses of a currently unconstrained parameter space, while simultaneously characterizing the degeneracies between parameters of interest. I find principally that (1) physical properties of the IGM, such as its temperature and ionization state, can be constrained robustly from observations of the global 21-cm signal without invoking models for the astrophysical sources themselves, and (2) translating IGM properties to galaxy properties is challenging, in large part due to frequency-dependent effects. For instance, evolution in the characteristic spectrum of accreting BHs can modify the 21-cm absorption signal at levels accessible to first-generation instruments, but could easily be confused with evolution in the X-ray luminosity star-formation rate relation. Finally, (3) the independent constraints most likely to aid in the interpretation
Study about chest width average performances in Romanian Hucul horse breed – Goral bloodline
Marius Maftei
2010-05-01
Study of the average performances in a population has huge importance because, in a population, the average phenotypic value is equal to the average genotypic value. Thus, studies of the average values of characters give us an idea of the population's genetic level. The biological material is represented by 87 Hucul horses from the Goral bloodline, divided into 5 stallion families and analyzed at 18, 30 and 42 months of age, owned by the Lucina Hucul stud farm. The average performances for chest width are presented in the paper. We observe good growth from one age to the next and small differences between the sexes. The average performances for this character lie within the characteristic limits of the breed.
10 CFR 63.332 - Representative volume.
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Representative volume. 63.332 Section 63.332 Energy... Protection Standards § 63.332 Representative volume. (a) The representative volume is the volume of ground... radionuclides released from the Yucca Mountain disposal system that will be in the representative volume....
The Dopaminergic Midbrain Mediates an Effect of Average Reward on Pavlovian Vigor.
Rigoli, Francesco; Chew, Benjamin; Dayan, Peter; Dolan, Raymond J
2016-09-01
Dopamine plays a key role in motivation. Phasic dopamine response reflects a reinforcement prediction error (RPE), whereas tonic dopamine activity is postulated to represent an average reward that mediates motivational vigor. However, it has been hard to find evidence concerning the neural encoding of average reward that is uncorrupted by influences of RPEs. We circumvented this difficulty in a novel visual search task where we measured participants' button pressing vigor in a context where information (underlying an RPE) about future average reward was provided well before the average reward itself. Despite no instrumental consequence, participants' pressing force increased for greater current average reward, consistent with a form of Pavlovian effect on motivational vigor. We recorded participants' brain activity during task performance with fMRI. Greater average reward was associated with enhanced activity in dopaminergic midbrain to a degree that correlated with the relationship between average reward and pressing vigor. Interestingly, an opposite pattern was observed in subgenual cingulate cortex, a region implicated in negative mood and motivational inhibition. These findings highlight a crucial role for dopaminergic midbrain in representing aspects of average reward and motivational vigor.
Hearing Office Average Processing Time Ranking Report, February 2016
Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...
Characterizing individual painDETECT symptoms by average pain severity
Sadosky A
2016-07-01
Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY, 2Eliassen Group, New London, CT, 3Outcomes Research Consultant, New York, NY, 4Pfizer Inc, Groton, CT, USA. Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness) for mild vs moderate pain and the highest probability was 76.4% (on cold/heat) for mild vs severe pain. The pain radiation item was significant (P<0.05) and consistent with the pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain...
Studies into the averaging problem: Macroscopic gravity and precision cosmology
Wijenayake, Tharake S.
2016-08-01
With the tremendous improvement in the precision of available astrophysical data in the recent past, it becomes increasingly important to examine some of the underlying assumptions behind the standard model of cosmology and take into consideration nonlinear and relativistic corrections which may affect it at percent precision level. Due to its mathematical rigor and fully covariant and exact nature, Zalaletdinov's macroscopic gravity (MG) is arguably one of the most promising frameworks to explore nonlinearities due to inhomogeneities in the real Universe. We study the application of MG to precision cosmology, focusing on developing a self-consistent cosmology model built on the averaging framework that adequately describes the large-scale Universe and can be used to study real data sets. We first implement an algorithmic procedure using computer algebra systems to explore new exact solutions to the MG field equations. After validating the process with an existing isotropic solution, we derive a new homogeneous, anisotropic and exact solution. Next, we use the simplest (and currently only) solvable homogeneous and isotropic model of MG and obtain an observable function for cosmological expansion using some reasonable assumptions on light propagation. We find that the principal modification to the angular diameter distance is through the change in the expansion history. We then linearize the MG field equations and derive a framework that contains large-scale structure, but the small scale inhomogeneities have been smoothed out and encapsulated into an additional cosmological parameter representing the averaging effect. We derive an expression for the evolution of the density contrast and peculiar velocities and integrate them to study the growth rate of large-scale structure. We find that increasing the magnitude of the averaging term leads to enhanced growth at late times. Thus, for the same matter content, the growth rate of large scale structure in the MG model
2010-07-01
... average and corporate pool average sulfur level determined? 80.205 Section 80.205 Protection of... ADDITIVES Gasoline Sulfur Gasoline Sulfur Standards § 80.205 How is the annual refinery or importer average and corporate pool average sulfur level determined? (a) The annual refinery or importer average...
The monthly-averaged and yearly-averaged cosine effect factor of a heliostat field
Al-Rabghi, O.M.; Elsayed, M.M. (King Abdulaziz Univ., Jeddah (Saudi Arabia). Dept. of Thermal Engineering)
1992-01-01
Calculations are carried out to determine the dependence of the monthly-averaged and the yearly-averaged daily cosine effect factor on the pertinent parameters. The results are plotted on charts for each month and for the full year. These results cover latitude angles between 0 and 45°N, for fields with radii up to 50 tower heights. In addition, the results are expressed in mathematical correlations to facilitate their use in computer applications. A procedure is outlined for using the present results to produce a preliminary layout of the heliostat field and to predict the rated MW_th reflected by the heliostat field during a period of a month, several months, or a year. (author)
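The instantaneous cosine effect factor of a single heliostat (the quantity being averaged over months or a year above) can be sketched from the reflection geometry; the convention below, incidence angle equal to half the angle between the sun vector and the heliostat-to-receiver vector, is a standard assumption, not the paper's exact formulation.

```python
import numpy as np

def cosine_effect(sun_dir, helio_to_receiver):
    """Cosine factor of a heliostat: cosine of the incidence angle, i.e.
    half the angle between the (unit) sun vector and the (unit) vector
    from the heliostat to the receiver."""
    s = np.asarray(sun_dir, dtype=float)
    r = np.asarray(helio_to_receiver, dtype=float)
    s /= np.linalg.norm(s)
    r /= np.linalg.norm(r)
    angle = np.arccos(np.clip(s @ r, -1.0, 1.0))
    return np.cos(angle / 2.0)

# Sun overhead, receiver straight up from the heliostat: zero incidence
print(cosine_effect([0, 0, 1], [0, 0, 1]))   # 1.0
# Sun overhead, receiver toward the horizon: 45 degree incidence
print(cosine_effect([0, 0, 1], [1, 0, 0]))
```

The monthly or yearly factors of the paper follow by averaging this quantity over the sun positions of each day and over the field geometry.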
Lagrangian averages, averaged Lagrangians, and the mean effects of fluctuations in fluid dynamics.
Holm, Darryl D.
2002-06-01
We begin by placing the generalized Lagrangian mean (GLM) equations for a compressible adiabatic fluid into the Euler-Poincare (EP) variational framework of fluid dynamics, for an averaged Lagrangian. This is the Lagrangian averaged Euler-Poincare (LAEP) theorem. Next, we derive a set of approximate small amplitude GLM equations (glm equations) at second order in the fluctuating displacement of a Lagrangian trajectory from its mean position. These equations express the linear and nonlinear back-reaction effects on the Eulerian mean fluid quantities by the fluctuating displacements of the Lagrangian trajectories in terms of their Eulerian second moments. The derivation of the glm equations uses the linearized relations between Eulerian and Lagrangian fluctuations, in the tradition of Lagrangian stability analysis for fluids. The glm derivation also uses the method of averaged Lagrangians, in the tradition of wave-mean flow interaction. Next, the new glm EP motion equations for incompressible ideal fluids are compared with the Euler-alpha turbulence closure equations. An alpha model is a GLM (or glm) fluid theory with a Taylor hypothesis closure. Such closures are based on the linearized fluctuation relations that determine the dynamics of the Lagrangian statistical quantities in the Euler-alpha equations. Thus, by using the LAEP theorem, we bridge between the GLM equations and the Euler-alpha closure equations, through the small-amplitude glm approximation in the EP variational framework. We conclude by highlighting a new application of the GLM, glm, and alpha-model results for Lagrangian averaged ideal magnetohydrodynamics. (c) 2002 American Institute of Physics.
Milford Visual Communications Project.
Milford Exempted Village Schools, OH.
This study discusses a visual communications project designed to develop activities to promote visual literacy at the elementary and secondary school levels. The project has four phases: (1) perception of basic forms in the environment, what these forms represent, and how they inter-relate; (2) discovery and communication of more complex…
Zulauf, W.E. [Sao Paulo Environmental Secretariat, Sao Paulo (Brazil); Goelho, A.S.R. [Riocell, S.A. (Brazil); Saber, A. [IEA-Instituto de Estudos Avancados (Brazil)] [and others
1995-12-31
The project FLORAM was formulated at the 'Institute for Advanced Studies' of the University of Sao Paulo. It aims at decreasing the level of carbon dioxide in the atmosphere, and thus curbing the greenhouse effect, by way of a huge effort of forestation and reforestation. The resulting forests, when the trees mature, will be responsible for the absorption of about 6 billion tons of excess carbon. This represents 5% of the total amount of CO2 in excess in the earth's atmosphere, and likewise 5% of the available continental surfaces that can be forested. Therefore, if similar projects are implemented throughout the world, in theory all the excess CO2 responsible for the greenhouse effect (27%, or 115 billion tons of carbon) would be absorbed. This would entail a 400 million hectare increase in growing forests. FLORAM in Brazil aims to plant 20,000,000 ha in 2 years at a cost of 20 billion dollars. If it reaches its goals, Brazil will have reforested an area almost half as big as France. (author)
Estimation of the average visibility in central Europe
Horvath, Helmuth
Visibility has been obtained from spectral extinction coefficients measured with the University of Vienna Telephotometer or from size distributions determined with an Aerosol Spectrometer. By measuring the extinction coefficient in different directions, possible influences of local sources could easily be determined. A region undisturbed by local sources usually had a variation of the extinction coefficient of less than 10% across directions. Generally good visibility outside population centers in Europe is considered to be 40-50 km. These values were found to be independent of location in central Europe, so this represents the average European "clean" air. On rare occasions (normally after a rapid change of air mass) the visibility can be 100-150 km. In towns, the visibility is a factor of approximately 2 lower. By comparison, the visibility in remote regions of North and South America is larger by a factor of 2-4. Evidently the lower visibility in Europe is caused by its higher population density. Since the majority of visibility-reducing particulate emissions come from small sources such as cars or heating, the emissions per unit area can be considered proportional to the population density. Using a simple box model and the visibility measured in central Europe and in Vienna, the difference in visibility inside and outside the town can be explained quantitatively. It is thus confirmed that the generally low visibility in central Europe is a consequence of emissions connected with human activities, and that the low visibility (compared, e.g., to North or South America) in remote locations such as the Alps is caused by the average European pollution.
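The quantitative link between extinction (and hence emissions per unit area in a box model) and visibility used in such arguments is Koschmieder's relation; a minimal sketch, assuming the standard 2% contrast threshold and illustrative numbers:

```python
# Koschmieder's relation links visibility to the extinction coefficient:
# V = 3.912 / b_ext (for a 2 % contrast threshold).  A minimal sketch of
# the "factor of ~2 lower in towns" observation: doubling the aerosol
# extinction (i.e. the emissions per unit area feeding the box) halves
# the visibility.
def visibility_km(b_ext_per_km):
    return 3.912 / b_ext_per_km

b_rural = 3.912 / 45.0           # extinction matching ~45 km "clean" air
v_rural = visibility_km(b_rural)
v_town = visibility_km(2.0 * b_rural)
print(v_rural, v_town)           # 45.0 22.5
```

Because visibility is inversely proportional to extinction, the observed town/countryside visibility ratio translates directly into a ratio of aerosol loadings, which is what the box-model comparison in the abstract exploits.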
40 CFR 1033.710 - Averaging emission credits.
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1033.710... Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. You may average emission credits only as allowed by § 1033.740. (b) You may certify one or more engine...
7 CFR 51.577 - Average midrib length.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...
7 CFR 760.640 - National average market price.
2010-01-01
... 7 Agriculture 7 2010-01-01 2010-01-01 false National average market price. 760.640 Section 760.640....640 National average market price. (a) The Deputy Administrator will establish the National Average... average quality loss factors that are reflected in the market by county or part of a county. (c)...
Representing Autonomous Systems Self-Confidence through Competency Boundaries
2015-01-01
Representing Autonomous Systems’ Self-Confidence through Competency Boundaries. Andrew R. Hutchins, M. L. Cummings, Humans and Autonomy...characteristics of such interactions. This is particularly problematic when task demands approach, or exceed, the competency boundaries of assigned...
Representing Uncertainty by Probability and Possibility
Uncertain parameters in modeling are usually represented by probability distributions reflecting either the objective uncertainty of the parameters or the subjective belief held by the model builder. This approach is particularly suited for representing the statistical nature or variance of uncer...
Kinetic energy equations for the average-passage equation system
Johnson, Richard W.; Adamczyk, John J.
1989-01-01
Important kinetic energy equations derived from the average-passage equation sets are documented, with a view to their interrelationships. These kinetic equations may be used for closing the average-passage equations. The turbulent kinetic energy transport equation used is formed by subtracting the mean kinetic energy equation from the averaged total instantaneous kinetic energy equation. The aperiodic kinetic energy equation, averaged steady kinetic energy equation, averaged unsteady kinetic energy equation, and periodic kinetic energy equation, are also treated.
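The starting point for such decompositions can be sketched as follows (a generic Reynolds-type split, not the specific average-passage operators of the paper): writing $u_i = \bar{u}_i + u_i'$, averaging the instantaneous kinetic energy gives

\[
\overline{\tfrac{1}{2}u_i u_i} \;=\; \tfrac{1}{2}\bar{u}_i\bar{u}_i \;+\; \tfrac{1}{2}\overline{u_i' u_i'},
\]

so subtracting the transport equation for the mean kinetic energy $\tfrac{1}{2}\bar{u}_i\bar{u}_i$ from the equation for the averaged total kinetic energy leaves an equation for the fluctuation (turbulent) energy $k = \tfrac{1}{2}\overline{u_i' u_i'}$, exactly the construction described above.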
7 CFR 1280.611 - Representative period.
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Representative period. 1280.611 Section 1280.611... INFORMATION ORDER Procedures To Request a Referendum Definitions § 1280.611 Representative period. Representative period means the period designated by the Secretary pursuant to § 518 of the Act. ...
29 CFR 548.405 - Representative period.
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Representative period. 548.405 Section 548.405 Labor... Application § 548.405 Representative period. (a) The application must set forth the facts relied upon to show... employee exclusive of overtime premiums over a representative period of time. 21 The basic rate will be...
7 CFR 1230.618 - Representative period.
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Representative period. 1230.618 Section 1230.618... CONSUMER INFORMATION Procedures for the Conduct of Referendum Definitions § 1230.618 Representative period. The term Representative period means the 12-consecutive months prior to the first day of absentee and...
7 CFR 1220.612 - Representative period.
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Representative period. 1220.612 Section 1220.612... CONSUMER INFORMATION Procedures To Request a Referendum Definitions § 1220.612 Representative period. Representative period means the period designated by the Secretary pursuant to section 1970 of the Act. ...
U.S. Geological Survey, Department of the Interior — This tabular data set represents the catchment-average for the 30-year (1971-2000) average daily minimum temperature in Celsius multiplied by 100 compiled for every...
Spatio-temporal representativeness of aerosol remote sensing observations
Schutgens, Nick; Gryspeerdt, Edward; Tsyro, Svetlana; Goto, Daisuke; Watson-Parris, Duncan; Weigum, Natalie; Schulz, Michael; Stier, Philip
2016-04-01
One characteristic of remote sensing observations is the strong intermittency with which they observe the same scene. Due to unfavourable conditions (e.g. low visible light, cloudiness or high surface albedo), sampling constraints (e.g. polar orbits), or instrument malfunction or maintenance, gaps of hours to months exist in the observing record. At the same time, satellite L3 products are often spatial aggregates over considerable distances (e.g. 1 by 1 degree). We study the impact of spatio-temporal sampling of observations on their representativeness: i.e. how well satellite products can represent the large-scale (~ 100 by 100 km) aerosol field over periods of days, months, or years. The study was conducted by using diverse global and regional aerosol models as the truth and sub-sampling them according to actual observations. In this way, we have been able to study the representativeness of different observing systems such as MODIS, CALIOP and AERONET. Monthly and yearly averages can suffer serious sampling errors, which may still be present in multi-year climatologies due to recurring observing patterns. Even daily averages are affected, as diurnal cycles can often not be observed. We discuss the implications these representativeness errors have for e.g. model evaluation or the construction of climatologies. We also assess similar representativeness issues in ground-site in-situ observations from e.g. EMEP or IMPROVE, and show that satellite datasets have distinct advantages due to their better spatial coverage, provided temporal sampling is dealt with properly (i.e. through collocation of datasets). Finally, we briefly introduce a software tool (the Community Intercomparison Suite or CIS) that is designed to improve the representativeness of datasets in intercomparison studies through aggregation and collocation of data.
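The core of such a sub-sampling experiment can be sketched in a few lines (a toy illustration with invented numbers, not the models or satellite masks used in the study):

```python
def sampling_error(truth, observed):
    """Relative error of an average computed only at 'observed' time steps
    versus the average over the full (model 'truth') record."""
    full = sum(truth) / len(truth)
    seen = [v for v, ok in zip(truth, observed) if ok]
    return (sum(seen) / len(seen) - full) / full

# A field with a diurnal cycle, observed only once per day (e.g. at overpass
# time, when values happen to be high): the 'monthly mean' is biased high.
truth    = [1.0, 3.0] * 15     # 30 alternating low/high values
observed = [False, True] * 15  # only the high phase is ever sampled
print(sampling_error(truth, observed))  # 0.5, i.e. a 50 % overestimate
```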
Spatially-Averaged Diffusivities for Pollutant Transport in Vegetated Flows
Huang, Jun; Zhang, Xiaofeng; Chua, Vivien P.
2016-06-01
Vegetation in wetlands can create complicated flow patterns and may provide many environmental benefits including water purification, flood protection and shoreline stabilization. The interaction between vegetation and flow has significant impacts on the transport of pollutants, nutrients and sediments. In this paper, we investigate pollutant transport in vegetated flows using the Delft3D-FLOW hydrodynamic software. The model simulates the transport of pollutants with the continuous release of a passive tracer at mid-depth and mid-width in the region where the flow is fully developed. The theoretical Gaussian plume profile is fitted to experimental data, and the lateral and vertical diffusivities are computed using the least squares method. In previous tracer studies conducted in the laboratory, the measurements were obtained at a single cross-section as experimental data is typically collected at one location. These diffusivities are then used to represent spatially-averaged values. With the numerical model, sensitivity analysis of lateral and vertical diffusivities along the longitudinal direction was performed at 8 cross-sections. Our results show that the lateral and vertical diffusivities increase with longitudinal distance from the injection point, due to the larger size of the dye cloud further downstream. A new method is proposed to compute diffusivities using a global minimum least squares method, which provides a more reliable estimate than the values obtained using the conventional method.
Relationships between average depth and number of misclassifications for decision trees
Chikalov, Igor
2014-02-14
This paper presents a new tool for the study of relationships between the total path length, or the average depth, and the number of misclassifications for decision trees. In addition to the algorithm, the paper also presents the results of experiments with datasets from the UCI ML Repository [9] and datasets representing Boolean functions with 10 variables.
Another Failure to Replicate Lynn's Estimate of the Average IQ of Sub-Saharan Africans
Wicherts, Jelte M.; Dolan, Conor V.; Carlson, Jerry S.; van der Maas, Han L. J.
2010-01-01
In his comment on our literature review of data on the performance of sub-Saharan Africans on Raven's Progressive Matrices, Lynn (this issue) criticized our selection of samples of primary and secondary school students. On the basis of the samples he deemed representative, Lynn concluded that the average IQ of sub-Saharan Africans stands at 67…
A Boy with a Mild Case of Cornelia de Lange Syndrome with Above Average Intelligence.
Lacassie, Yves; Bobadilla, Olga; Cambias, Ron D., Jr.
1997-01-01
Describes the characteristics of an 11-year-old boy who represents the only documented case of an individual with Cornelia de Lange syndrome who also has above average cognitive functioning. Major diagnostic criteria for de Lange syndrome and comparisons with other severe and mild cases are discussed. (Author/CR)
Compositional dependences of average positron lifetime in binary As-S/Se glasses
Ingram, A. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Golovchak, R., E-mail: roman_ya@yahoo.com [Department of Materials Science and Engineering, Lehigh University, 5 East Packer Avenue, Bethlehem, PA 18015-3195 (United States); Kostrzewa, M.; Wacke, S. [Department of Physics of Opole University of Technology, 75 Ozimska str., Opole, PL-45370 (Poland); Shpotyuk, M. [Lviv Polytechnic National University, 12, Bandery str., Lviv, UA-79013 (Ukraine); Shpotyuk, O. [Institute of Physics of Jan Dlugosz University, 13/15al. Armii Krajowej, Czestochowa, PL-42201 (Poland)
2012-02-15
Compositional dependence of average positron lifetime is studied systematically in typical representatives of binary As-S and As-Se glasses. This dependence is shown to be opposite to the molar volume evolution. The origin of this anomaly is discussed in terms of the bond free solid angle concept applied to different types of structurally-intrinsic nanovoids in a glass.
U.S. Geological Survey, Department of the Interior — This data set represents the average monthly maximum temperature in Celsius multiplied by 100 for 2002 compiled for every catchment of NHDPlus for the conterminous...
U.S. Geological Survey, Department of the Interior — This data set represents the average monthly minimum temperature in Celsius multiplied by 100 for 2002 compiled for every catchment of NHDPlus for the conterminous...
U.S. Geological Survey, Department of the Interior — This data set represents the average monthly precipitation in millimeters multiplied by 100 for 2002 compiled for every catchment of NHDPlus for the conterminous...
U.S. Geological Survey, Department of the Interior — This data set represents the average value of saturation overland flow, in percent of total streamflow, compiled for every catchment of NHDPlus for the conterminous...
School Science Review, 1978
1978-01-01
Presents sixteen project notes developed by pupils of Chipping Norton School and Bristol Grammar School in the United Kingdom. These projects include eight biology A-level projects and eight chemistry A-level projects. (HM)
Representing Identity and Equivalence for Scientific Data
Wickett, K. M.; Sacchi, S.; Dubin, D.; Renear, A. H.
2012-12-01
Matters of equivalence and identity are central to the stewardship of scientific data. In order to properly prepare for and manage the curation, preservation and sharing of digitally-encoded data, data stewards must be able to characterize and assess the relationships holding between data-carrying digital resources. However, identity-related questions about resources and their information content may not be straightforward to answer: for example, what exactly does it mean to say that two files contain the same data, but in different formats? Information content is frequently distinguished from particular representations, but there is no adequately developed shared understanding of what this really means and how the relationship between content and its representations holds. The Data Concepts group at the Center for Informatics Research in Science and Scholarship (CIRSS), University of Illinois at Urbana-Champaign, is developing a logic-based framework of fundamental concepts related to scientific data to support curation and integration. One project goal is to develop precise accounts of information resources carrying the same data. We present two complementary conceptual models for information representation: the Basic Representation Model (BRM) and the Systematic Assertion Model (SAM). We show how these models provide an analytical account of digitally-encoded scientific data and a precise understanding of identity and equivalence. The Basic Representation Model identifies the core entities and relationships involved in representing information carried by digital objects. In BRM, digital objects are symbol structures that express propositional content and stand in layered encoding relationships. For example, an RDF description may be serialized as either XML or N3, and those expressions in turn may be encoded as either UTF-8 or UTF-16 sequences. Defining this encoding stack reveals distinctions necessary for a precise account of identity and equivalence
G. H. de Rooij
2009-07-01
Current theories for water flow in porous media are valid for scales much smaller than those at which problems of public interest manifest themselves. This provides a drive for upscaled flow equations with their associated upscaled parameters. Upscaling is often achieved through volume averaging, but the solution to the resulting closure problem imposes severe restrictions on the flow conditions that limit the practical applicability. Here, the derivation of a closed expression for the effective hydraulic conductivity is forfeited to circumvent the closure problem. Thus, more limited but practical results can be derived. At the Representative Elementary Volume scale and larger scales, the gravitational potential and fluid pressure are treated as additive potentials. The necessary requirement that the superposition be maintained across scales is combined with conservation of energy during volume integration to establish consistent upscaling equations for the various heads. The power of these upscaling equations is demonstrated by the derivation of upscaled water content-matric head relationships and the resolution of an apparent paradox reported in the literature that is shown to have arisen from a violation of the superposition principle. Applying the upscaling procedure to Darcy's Law leads to the general definition of an upscaled hydraulic conductivity. By examining this definition in detail for porous media with different degrees of heterogeneity, a series of criteria is derived that must be satisfied for Darcy's Law to remain valid at a larger scale.
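The last step can be made concrete with a generic sketch (standard notation, not necessarily the paper's symbols): at the REV scale Darcy's Law reads

\[
\mathbf{q} = -K\,\nabla H, \qquad H = h + z,
\]

with $h$ the matric head and $z$ the gravitational head. After volume-averaging flux and head consistently, the upscaled conductivity is defined by requiring the same form at the larger scale,

\[
\langle\mathbf{q}\rangle = -K_{\mathrm{up}}\,\nabla\langle H\rangle ,
\]

and the criteria mentioned above amount to conditions under which $K_{\mathrm{up}}$ is a well-defined property of the medium rather than of the particular flow.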
Representing Practice: Practice Models, Patterns, Bundles
Falconer, Isobel; Finlay, Janet; Fincher, Sally
2011-01-01
This article critiques learning design as a representation for sharing and developing practice, based on synthesis of three projects. Starting with the findings of the Mod4L Models of Practice project, it argues that the technical origins of learning design, and the consequent focus on structure and sequence, limit its usefulness for sharing…
Average glandular dose in digital mammography and breast tomosynthesis
Olgar, T. [Ankara Univ. (Turkey). Dept. of Engineering Physics; Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Kahn, T.; Gosch, D. [Universitaetsklinikum Leipzig AoeR (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie
2012-10-15
Purpose: To determine the average glandular dose (AGD) in digital full-field mammography (2 D imaging mode) and in breast tomosynthesis (3 D imaging mode). Materials and Methods: Using the method described by Boone, the AGD was calculated from the exposure parameters of 2247 conventional 2 D mammograms and 984 mammograms in 3 D imaging mode of 641 patients examined with the digital mammographic system Hologic Selenia Dimensions. The breast glandular tissue content was estimated by the Hologic R2 Quantra automated volumetric breast density measurement tool for each patient from right craniocaudal (RCC) and left craniocaudal (LCC) images in 2 D imaging mode. Results: The mean compressed breast thickness (CBT) was 52.7 mm for craniocaudal (CC) and 56.0 mm for mediolateral oblique (MLO) views. The mean percentage of breast glandular tissue content was 18.0 % and 17.4 % for RCC and LCC projections, respectively. The mean AGD values in 2 D imaging mode per exposure for the standard breast were 1.57 mGy and 1.66 mGy, while the mean AGD values after correction for real breast composition were 1.82 mGy and 1.94 mGy for CC and MLO views, respectively. The mean AGD values in 3 D imaging mode per exposure for the standard breast were 2.19 mGy and 2.29 mGy, while the mean AGD values after correction for the real breast composition were 2.53 mGy and 2.63 mGy for CC and MLO views, respectively. No significant relationship was found between the AGD and CBT in 2 D imaging mode, while a good correlation (coefficient 0.98) was found in 3 D imaging mode. Conclusion: In this study the mean calculated AGD per exposure in 3 D imaging mode was on average 34 % higher than in 2 D imaging mode for patients examined with the same CBT.
Representing Others in a Public Good Game
Karen Evelyn Hauge
2015-09-01
In many important public good situations the decision-making power and authority is delegated to representatives who make binding decisions on behalf of a larger group. The purpose of this study is to compare contribution decisions made by individuals with contribution decisions made by group representatives. We present the results from a laboratory experiment that compares decisions made by individuals in inter-individual public good games with decisions made by representatives on behalf of their group in inter-group public good games. Our main finding is that contribution behavior differs between individuals and group representatives, but only for women. While men's choices are equally self-interested whether made as individuals or as group representatives, women make less self-interested choices as group representatives.
Represented Speech in Qualitative Health Research
Musaeus, Peter
2017-01-01
Represented speech refers to speech in which we reference somebody. Represented speech is an important phenomenon in everyday conversation, health care communication, and qualitative research. This case draws first from a case study on physicians' workplace learning and second from a case study on nurses' apprenticeship learning. The aim of the case is to guide the qualitative researcher to use own and others' voices in the interview and to be sensitive to represented speech in everyday conversation. Moreover, reported speech matters to health professionals who aim to represent the voice of their patients. Qualitative researchers and students might learn to encourage interviewees to elaborate different voices or perspectives. Qualitative researchers working with natural speech might pay attention to how people talk and use represented speech. Finally, represented speech might be relevant...
Average annual runoff in the United States, 1951-80
U.S. Geological Survey, Department of the Interior — This is a line coverage of average annual runoff in the conterminous United States, 1951-1980.
Seasonal Sea Surface Temperature Averages, 1985-2001 - Direct Download
U.S. Geological Survey, Department of the Interior — This data set consists of four images showing seasonal sea surface temperature (SST) averages for the entire earth. Data for the years 1985-2001 are averaged to...
Average American 15 Pounds Heavier Than 20 Years Ago
https://medlineplus.gov/news/fullstory_160233.html — Average American 15 Pounds Heavier Than 20 Years Ago. Since the late 1980s and early 1990s, the average American has put on 15 or more additional ...
Representing Quadric Surfaces Using NURBS Surfaces
秦开怀
1997-01-01
A method for representing quadric surfaces using NURBS is presented. By means of the necessary and sufficient conditions for NURBS curves to precisely represent circular arcs and other conics, quadric surfaces can be represented by NURBS surfaces with fewer control vertices. The method can be used not only for NURBS surface representation of quadric surfaces, but also for rounding polyhedrons. Many examples are given in the paper.
Trait valence and the better-than-average effect.
Gold, Ron S; Brown, Mark G
2011-12-01
People tend to regard themselves as having superior personality traits compared to their average peer. To test whether this "better-than-average effect" varies with trait valence, participants (N = 154 students) rated both themselves and the average student on traits constituting either positive or negative poles of five trait dimensions. In each case, the better-than-average effect was found, but trait valence had no effect. Results were discussed in terms of Kahneman and Tversky's prospect theory.
Investigating Averaging Effect by Using Three Dimension Spectrum
Anonymous
2005-01-01
The eddy current displacement sensor's averaging effect has been investigated in this paper, and the frequency spectrum property of the averaging effect is also deduced. It indicates that the averaging effect has no influence on measuring a rotor's rotating error, but it has a visible influence on measuring the rotor's profile error. According to the frequency spectrum of the averaging effect, the actual sampling data can be adjusted reasonably, thus improving measuring precision.
Average of Distribution and Remarks on Box-Splines
LI Yue-sheng
2001-01-01
A class of generalized moving average operators is introduced, and the integral representations of an average function are provided. It has been shown that the average of Dirac δ-distribution is just the well known box-spline. Some remarks on box-splines, such as their smoothness and the corresponding partition of unity, are made. The factorization of average operators is derived. Then, the subdivision algorithm for efficient computing of box-splines and their linear combinations follows.
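The central claim can be illustrated in the simplest univariate case (unit-width averaging, a standard construction rather than the paper's general operator class): define

\[
(Af)(x) = \int_0^1 f(x-t)\,dt .
\]

Then $A\delta = \chi_{[0,1)}$, the characteristic function of the unit interval, and iterating the operator convolves these factors,

\[
(A^{n}\delta)(x) = \bigl(\underbrace{\chi_{[0,1)} * \cdots * \chi_{[0,1)}}_{n\ \text{factors}}\bigr)(x) = B_n(x),
\]

the cardinal B-spline of order $n$ (a univariate box-spline): it is $C^{n-2}$, supported on $[0,n]$, and its integer translates form a partition of unity.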
Scalable Robust Principal Component Analysis Using Grassmann Averages
Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi
2016-01-01
...provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust...
Averaging and Globalising Quotients of Informetric and Scientometric Data.
Egghe, Leo; Rousseau, Ronald
1996-01-01
Discussion of impact factors for "Journal Citation Reports" subject categories focuses on the difference between an average of quotients and a global average, obtained as a quotient of averages. Applications in the context of informetrics and scientometrics are given, including journal prices and subject discipline influence scores.…
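The distinction between the two averages is easy to see with hypothetical numbers (invented here for illustration, not taken from the article):

```python
# Impact-factor-style data: citations and publications for three journals.
citations    = [100, 10, 1]
publications = [50, 10, 2]

# Average of quotients: mean of the per-journal impact factors.
avg_of_quotients = sum(c / p for c, p in zip(citations, publications)) / len(citations)

# Global average: quotient of the summed numerators and denominators.
global_average = sum(citations) / sum(publications)

print(avg_of_quotients)  # 1.1666...  (each journal weighted equally)
print(global_average)    # 1.7903...  (large journals dominate)
```

The two figures answer different questions: the first weights every journal equally, the second weights by publication volume.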
Spectral averaging techniques for Jacobi matrices with matrix entries
Sadel, Christian
2009-01-01
A Jacobi matrix with matrix entries is a self-adjoint block tridiagonal matrix with invertible blocks on the off-diagonals. Averaging over boundary conditions leads to explicit formulas for the averaged spectral measure which can potentially be useful for spectral analysis. Furthermore another variant of spectral averaging over coupling constants for these operators is presented.
76 FR 6161 - Annual Determination of Average Cost of Incarceration
2011-02-03
... No: 2011-2363] DEPARTMENT OF JUSTICE Bureau of Prisons Annual Determination of Average Cost of Incarceration AGENCY: Bureau of Prisons, Justice. ACTION: Notice. SUMMARY: The fee to cover the average cost of incarceration for Federal inmates in Fiscal Year 2009 was $25,251. The average annual cost to confine an...
20 CFR 226.62 - Computing average monthly compensation.
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is computed by first determining the employee's highest 60 months of railroad compensation...
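Stripped of the regulation's further conditions, the computation described here is an average over the best 60 months; a minimal sketch (a hypothetical helper, not an official implementation of the rule):

```python
def average_monthly_compensation(monthly_compensation):
    """Mean of the employee's highest 60 months of compensation."""
    top_60 = sorted(monthly_compensation, reverse=True)[:60]
    return sum(top_60) / len(top_60)

# 120 months of service: 60 months at 5000 and 60 months at 4000 --
# only the highest 60 enter the average.
history = [5000.0] * 60 + [4000.0] * 60
print(average_monthly_compensation(history))  # 5000.0
```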
40 CFR 1042.710 - Averaging emission credits.
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Averaging emission credits. 1042.710..., Banking, and Trading for Certification § 1042.710 Averaging emission credits. (a) Averaging is the exchange of emission credits among your engine families. (b) You may certify one or more engine families to...
27 CFR 19.37 - Average effective tax rate.
2010-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Average effective tax rate..., DEPARTMENT OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Taxes Effective Tax Rates § 19.37 Average effective tax rate. (a) The proprietor may establish an average effective tax rate for any...
7 CFR 51.2561 - Average moisture content.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except...
20 CFR 404.220 - Average-monthly-wage method.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Average-monthly-wage method. 404.220 Section... INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.220 Average-monthly-wage method. (a) Who is eligible for this method. You...
7 CFR 1410.44 - Average adjusted gross income.
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Average adjusted gross income. 1410.44 Section 1410... Average adjusted gross income. (a) Benefits under this part will not be available to persons or legal entities whose average adjusted gross income exceeds $1,000,000 or as further specified in part...
18 CFR 301.7 - Average System Cost methodology functionalization.
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Average System Cost... REGULATORY COMMISSION, DEPARTMENT OF ENERGY REGULATIONS FOR FEDERAL POWER MARKETING ADMINISTRATIONS AVERAGE... ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each...
47 CFR 80.759 - Average terrain elevation.
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Average terrain elevation. 80.759 Section 80... Average terrain elevation. (a)(1) Draw radials from the antenna site for each 45 degrees of azimuth.... (d) Average the values by adding them and dividing by the number of readings along each radial....
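The procedure sketched in paragraphs (a) and (d) amounts to averaging the elevation readings along eight radials, one per 45 degrees of azimuth; a minimal sketch (a hypothetical data layout, not the rule's full procedure, which also prescribes where along each radial to take readings):

```python
def average_terrain_elevation(radial_readings):
    """Average the elevation readings along each radial:
    add them and divide by the number of readings on that radial."""
    return {azimuth: sum(vals) / len(vals)
            for azimuth, vals in radial_readings.items()}

# One list of readings (metres) per 45-degree radial.
readings = {0: [120.0, 140.0], 45: [100.0, 110.0, 120.0]}
print(average_terrain_elevation(readings))  # {0: 130.0, 45: 110.0}
```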
34 CFR 668.196 - Average rates appeals.
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.196 Section 668.196....196 Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under... calculated as an average rate under § 668.183(d)(2). (2) You may appeal a notice of a loss of...
20 CFR 404.221 - Computing your average monthly wage.
2010-04-01
... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing Primary Insurance Amounts § 404.221 Computing your average monthly wage. (a) General. Under the...
34 CFR 668.215 - Average rates appeals.
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Average rates appeals. 668.215 Section 668.215... Average rates appeals. (a) Eligibility. (1) You may appeal a notice of a loss of eligibility under § 668... as an average rate under § 668.202(d)(2). (2) You may appeal a notice of a loss of eligibility...
7 CFR 51.2548 - Average moisture content determination.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548..., AND STANDARDS) United States Standards for Grades of Pistachio Nuts in the Shell § 51.2548 Average moisture content determination. (a) Determining average moisture content of the lot is not a requirement...
Relationships between feeding behavior and average daily gain in cattle
Bruno Fagundes Cunha Lage
2013-12-01
Several studies have reported a relationship between eating behavior and performance in feedlot cattle. The evaluation of behavior traits demands a high degree of work and trained manpower; therefore, in recent years an automated feed intake measurement system (GrowSafe System®) that identifies and records individual feeding patterns has been used. The aim of this study was to evaluate the relationship between feeding behavior traits and average daily gain in Nellore calves undergoing a feed efficiency test. Data from 85 Nellore males were recorded during the feed efficiency test performed in 2012 at Centro APTA Bovinos de Corte, Instituto de Zootecnia, São Paulo State. The behavioral traits analyzed were: time at feeder (TF), head down duration (HD), representing the time when the animal is actually eating, frequency of visits (FV), and feed rate (FR), calculated as the amount of dry matter (DM) consumed per unit time at feeder (g.min-1). The ADG was calculated by linear regression of individual weights on days in test. ADG classes were obtained considering the average ADG and standard deviation (SD), being: high ADG (> mean + 1.0 SD), medium ADG (± 1.0 SD from the mean) and low ADG (
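The ADG computation described here, the slope of an ordinary least-squares fit of weight on days in test, can be sketched as follows (hypothetical weighings, not data from the study):

```python
def average_daily_gain(days, weights):
    """ADG as the slope of the ordinary least-squares line of weight on day."""
    n = len(days)
    mean_d = sum(days) / n
    mean_w = sum(weights) / n
    num = sum((d - mean_d) * (w - mean_w) for d, w in zip(days, weights))
    den = sum((d - mean_d) ** 2 for d in days)
    return num / den  # weight units per day

# Four test-period weighings of an animal gaining 1.2 kg/day.
days    = [0, 28, 56, 84]
weights = [300.0, 333.6, 367.2, 400.8]
print(round(average_daily_gain(days, weights), 2))  # 1.2
```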
A fast algorithm for the estimation of statistical error in DNS (or experimental) time averages
Russo, Serena; Luchini, Paolo
2017-10-01
Time- and space-averaging of the instantaneous results of DNS (or experimental measurements) represent a standard final step, necessary for the estimation of their means or correlations or other statistical properties. These averages are necessarily performed over a finite time and space window, and are therefore more correctly just estimates of the 'true' statistical averages. The choice of the appropriate window size is most often subjectively based on individual experience, but as subtler statistics enter the focus of investigation, an objective criterion becomes desirable. Here a modification of the classical estimator of averaging error of finite time series, i.e. 'batch means' algorithm, will be presented, which retains its speed while removing its biasing error. As a side benefit, an automatic determination of batch size is also included. Examples will be given involving both an artificial time series of known statistics and an actual DNS of turbulence.
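The abstract above builds on the classical batch-means estimator. As an illustrative sketch only (the paper's bias-removing modification is not reproduced here, and the function name is ours), the classical version splits the series into contiguous batches and uses the spread of batch means to estimate the error of the overall time average:

```python
import numpy as np

def batch_means_stderr(x, n_batches=32):
    """Classical batch-means estimate of the standard error of the mean of a
    correlated time series: split the series into contiguous batches, average
    each batch, and treat the batch means as (nearly) independent samples."""
    x = np.asarray(x, dtype=float)
    m = len(x) // n_batches            # batch size
    trimmed = x[:m * n_batches]        # drop the remainder
    means = trimmed.reshape(n_batches, m).mean(axis=1)
    # Standard error of the overall mean, from the spread of batch means
    return means.std(ddof=1) / np.sqrt(n_batches)
```

For weakly correlated data, the batch size must exceed the correlation time for the near-independence assumption to hold; the paper's contribution is removing the bias this choice introduces and automating the batch-size selection.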
Data mining and visualization of average images in a digital hand atlas
Zhang, Aifeng; Gertych, Arkadiusz; Liu, Brent J.; Huang, H. K.
2005-04-01
We have collected a digital hand atlas containing digitized left-hand radiographs of normally developed children, grouped by age, sex, and race. A set of features reflecting each patient's stage of skeletal development has been calculated by automatic image processing procedures and stored in a database. This paper addresses a new concept, the "average" image in the digital hand atlas. The "average" reference image is selected for each group of normally developed children as the best representative of skeletal maturity based on bony features. A data mining procedure was designed and applied to find the average image through average feature vector matching. It also provides a temporary solution to the missing-feature problem through polynomial regression. As more cases are added to the digital hand atlas, it can grow to provide clinicians accurate reference images to aid the bone age assessment process.
Representing Object Colour in Language Comprehension
Connell, Louise
2007-01-01
Embodied theories of cognition hold that mentally representing something "red" engages the neural subsystems that respond to environmental perception of that colour. This paper examines whether implicit perceptual information on object colour is represented during sentence comprehension even though doing so does not necessarily facilitate task…
A PROJECT WITHIN MICROSOFT PROJECT 2007
Emil COSMA
2009-10-01
The main purpose of this article is to emphasize the many advantages that the Microsoft Project 2007 environment offers a project manager. Project management is a function recognized in the majority of domains, and a project is defined as “a temporary effort made for creating a product or a unique service”. A project-administration program within an information system (such as Microsoft Project or Primavera Planner) represents a “database that is in concordance with time”. It should support the required operations and, at the same time, look and behave the same way as other frequently used productivity programs. It keeps track of all information regarding job requests, schedules and the resources the project needs; visualizes the project plan in standard, well-defined formats; organizes activities and resources consistently and efficiently; shares project information with everyone involved via an intranet or the Internet; and communicates efficiently with the resources and other people involved, while leaving the final control and decisions to the project manager as his/her responsibility.
Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients
Deming Yuan
2014-01-01
This paper considers the problem of solving the saddle-point problem over a network, which consists of multiple interacting agents. The global objective function of the problem is a combination of local convex-concave functions, each of which is only available to one agent. Our main focus is on the case where the projection steps are calculated approximately and the subgradients are corrupted by some stochastic noises. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at some appropriate rate and the noises are zero-mean and have bounded variance.
Stakeholders management in project management: contributions of literature
Cacilda Mendes dos Santos Amaral
2017-06-01
The aim of this study was to obtain an overview of the scientific literature on stakeholder management in project management, identifying the most representative studies, their approaches and authors, and the eminent journals. The methodological approach was a systematic literature review, combining bibliometric analysis techniques with content analysis. Six hundred and fourteen papers were identified for the bibliometric analysis; the 28 most relevant papers, selected by the proxy of highest average number of citations per year, were then analyzed in depth using content analysis techniques. The results indicated growing production in the area, mainly in the construction sector, while the analysis of the most relevant works showed that an instrumental approach and the case study method still feature in the majority of the studies. Finally, this study provided foundations for further research by classifying the most cited authors in the relevant papers into three groups: classic project management, applied project management, and leadership and behavior.
Averaging and sampling for magnetic-observatory hourly data
J. J. Love
2010-11-01
A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
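The two standard hourly value types compared in the abstract above can be sketched in a few lines. This is an illustrative construction from a 1-min series, with names of our choosing, not code from the study:

```python
import numpy as np

def hourly_values(minute_data):
    """Construct the two standard hourly value types from 1-min data:
    instantaneous 'spot' samples (one reading on the hour) and simple
    1-h 'boxcar' averages (the mean of the 60 minutes in each hour)."""
    x = np.asarray(minute_data, dtype=float)
    n_hours = len(x) // 60
    x = x[:n_hours * 60]
    spot = x[::60]                          # reading at the top of each hour
    boxcar = x.reshape(n_hours, 60).mean(axis=1)
    return spot, boxcar
```

The spot series preserves the amplitude range but aliases sub-hourly variation; the boxcar average suppresses aliasing at the cost of amplitude distortion, which is the trade-off the study quantifies.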
Downscaled projections of Caribbean coral bleaching that can inform conservation planning.
van Hooidonk, Ruben; Maynard, Jeffrey Allen; Liu, Yanyun; Lee, Sang-Ki
2015-09-01
Projections of climate change impacts on coral reefs produced at the coarse resolution (~1°) of Global Climate Models (GCMs) have informed debate but have not helped target local management actions. Here, projections of the onset of annual coral bleaching conditions in the Caribbean under Representative Concentration Pathway (RCP) 8.5 are produced using an ensemble of 33 Coupled Model Intercomparison Project phase-5 models and via dynamical and statistical downscaling. A high-resolution (~11 km) regional ocean model (MOM4.1) is used for the dynamical downscaling. For statistical downscaling, sea surface temperature (SST) means and annual cycles in all the GCMs are replaced with observed data from the ~4-km NOAA Pathfinder SST dataset. Spatial patterns in all three projections are broadly similar; the average year for the onset of annual severe bleaching is 2040-2043 for all projections. However, downscaled projections show many locations where the onset of annual severe bleaching (ASB) varies 10 or more years within a single GCM grid cell. Managers in locations where this applies (e.g., Florida, Turks and Caicos, Puerto Rico, and the Dominican Republic, among others) can identify locations that represent relative albeit temporary refugia. Both downscaled projections are different for the Bahamas compared to the GCM projections. The dynamically downscaled projections suggest an earlier onset of ASB linked to projected changes in regional currents, a feature not resolved in GCMs. This result demonstrates the value of dynamical downscaling for this application and means statistically downscaled projections have to be interpreted with caution. However, aside from west of Andros Island, the projections for the two types of downscaling are mostly aligned; projected onset of ASB is within ±10 years for 72% of the reef locations. © 2015 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
Equation of State, Occupation Probabilities and Conductivities in the Average Atom Purgatorio Code
Sterne, P
2006-12-22
We report on recent developments with the Purgatorio code, a new implementation of Liberman's Inferno model. This fully relativistic average atom code uses phase shift tracking and an efficient refinement scheme to provide an accurate description of continuum states. The resulting equations of state accurately represent the atomic shell-related features which are absent in Thomas-Fermi-based approaches. We discuss various representations of the exchange potential and some of the ambiguities in the choice of the effective charge Z* in average atom models, both of which affect predictions of electrical conductivities and radiative properties.
Averaging VMAT treatment plans for multi-criteria navigation
Craft, David; Unkelbach, Jan
2013-01-01
The main approach to smooth Pareto surface navigation for radiation therapy multi-criteria treatment planning involves taking real-time averages of pre-computed treatment plans. In fluence-based treatment planning, fluence maps themselves can be averaged, which leads to the dose distributions being averaged due to the linear relationship between fluence and dose. This works for fluence-based photon plans and proton spot scanning plans. In this technical note, we show that two or more sliding window volumetric modulated arc therapy (VMAT) plans can be combined by averaging leaf positions in a certain way, and we demonstrate that the resulting dose distribution for the averaged plan is approximately the average of the dose distributions of the original plans. This leads to the ability to do Pareto surface navigation, i.e. interactive multi-criteria exploration of VMAT plan dosimetric tradeoffs.
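The linearity argument in the abstract above (averaging fluence maps averages the doses) can be checked numerically. This sketch uses a made-up dose-influence matrix `D`, not data from the paper:

```python
import numpy as np

# Toy dose-influence matrix: voxel dose = D @ fluence (assumed, for illustration)
rng = np.random.default_rng(1)
D = rng.random((6, 4))
f1, f2 = rng.random(4), rng.random(4)   # two pre-computed fluence maps

dose_of_avg_fluence = D @ ((f1 + f2) / 2)
avg_of_doses = (D @ f1 + D @ f2) / 2
# Because dose is linear in fluence, the two quantities are identical,
# which is what makes real-time plan averaging valid for fluence-based plans.
```

The technical note's contribution is showing that a comparable averaging property holds approximately for sliding-window VMAT plans when leaf positions are averaged in a suitable way, even though leaf positions do not enter the dose linearly.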
Averaging and exact perturbations in LTB dust models
Sussman, Roberto A
2012-01-01
We introduce a scalar weighted average ("q-average") acting on concentric comoving domains in spherically symmetric Lemaitre-Tolman-Bondi (LTB) dust models. The resulting averaging formalism allows for an elegant coordinate-independent dynamical study of the models, while providing valuable theoretical insight into the properties of scalar averaging in inhomogeneous spacetimes. The q-averages of those covariant scalars common to FLRW models (the "q-scalars") identically satisfy FLRW evolution laws and determine for every domain a unique FLRW background state. All curvature and kinematic proper tensors and their invariant contractions are expressible in terms of the q-scalars and their linear and quadratic local fluctuations, which convey the effects of inhomogeneity through the ratio of Weyl to Ricci curvature invariants and the magnitude of radial gradients. We also define non-local fluctuations associated with the intuitive notion of a "contrast" with respect to FLRW reference averaged values assigned to a...
7 CFR 1205.20 - Representative period.
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Representative period. 1205.20 Section 1205.20 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING... period means the 2006 calendar year....
Enhancing policy innovation by redesigning representative democracy
Sørensen, Eva
2016-01-01
. Two Danish case studies indicate that collaboration between politicians and relevant and affected stakeholders can promote policy innovation, but also that a redesign of representative democracy is needed in order to establish a productive combination of political leadership, competition...
REFractions: The Representing Equivalent Fractions Game
Tucker, Stephen I.
2014-01-01
Stephen Tucker presents a fractions game that addresses a range of fraction concepts including equivalence and computation. The REFractions game also improves students' fluency with representing, comparing and adding fractions.
Caspar Wessel on representing complex numbers (1799)
Branner, Bodil
1999-01-01
In celebration of the bicentenary of the publication of Wessel's paper on the geometric interpretation of complex numbers, it is described how Wessel used complex numbers to represent directions in surveying, at least as early as 1787.
Distributed Weighted Parameter Averaging for SVM Training on Big Data
Das, Ayan; Bhattacharya, Sourangshu
2015-01-01
Two popular approaches for distributed training of SVMs on big data are parameter averaging and ADMM. Parameter averaging is efficient but suffers from loss of accuracy with an increase in the number of partitions, while ADMM in the feature space is accurate but suffers from slow convergence. In this paper, we report a hybrid approach called weighted parameter averaging (WPA), which optimizes the regularized hinge loss with respect to weights on parameters. The problem is shown to be the same as solving...
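The baseline that WPA generalizes, plain parameter averaging, is easy to sketch: train one linear SVM per data partition and take the mean of the weight vectors. This is an illustrative sketch with our own minimal subgradient solver standing in for any per-partition SVM trainer; it is not the paper's WPA algorithm, which would instead learn the mixing weights by minimizing the regularized hinge loss over them:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Subgradient descent on the regularized hinge loss (a stand-in for
    any per-partition linear SVM solver); labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        viol = y * (X @ w) < 1                      # margin violators
        grad = lam * w - (X[viol] * y[viol, None]).sum(axis=0) / len(X)
        w -= lr * grad
    return w

def averaged_svm(X, y, n_parts=4):
    """Simple parameter averaging: the uniform-weight special case of WPA."""
    parts = np.array_split(np.arange(len(X)), n_parts)
    ws = [train_linear_svm(X[idx], y[idx]) for idx in parts]
    return np.mean(ws, axis=0)
```

The accuracy loss the abstract mentions comes from this uniform weighting as the partition count grows; WPA's learned weights are meant to recover it.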
On the average crosscap number Ⅱ: Bounds for a graph
Yi-chao CHEN; Yan-pei LIU
2007-01-01
The bounds are obtained for the average crosscap number. Let G be a graph which is not a tree. It is shown that the average crosscap number of G is not less than (2^(β(G)-1)/(2^(β(G))-1))·β(G) and not larger than β(G). Furthermore, we also describe the structure of the graphs which attain the bounds of the average crosscap number.
João M. Pinto
2017-05-01
Project finance is the process of financing a specific economic unit that the sponsors create, in which creditors share much of the venture’s business risk and funding is obtained strictly for the project itself. Project finance creates value by reducing the costs of funding, maintaining the sponsors’ financial flexibility, increasing the leverage ratios, avoiding contamination risk, reducing corporate taxes, improving risk management, and reducing the costs associated with market imperfections. However, project finance transactions are complex undertakings: they have higher costs of borrowing when compared to conventional financing, and the negotiation of the financing and operating agreements is time-consuming. In addition to describing the economic motivation for the use of project finance, this paper provides details on project finance characteristics and players, presents the recent trends of the project finance market and provides some statistics on project finance lending activity between 2000 and 2014. Statistical analysis shows that project finance loans arranged for U.S. borrowers have higher credit spreads and upfront fees, and higher loan-size-to-deal-size ratios, when compared with loans arranged for borrowers located in Western Europe. On the contrary, loans closed in the U.S. have a much shorter average maturity and are much less likely to be subject to currency risk and to be closed as term loans.
Quadratic forms representing all odd positive integers
Rouse, Jeremy
2011-01-01
We consider the problem of classifying all positive-definite integer-valued quadratic forms that represent all positive odd integers. Kaplansky considered this problem for ternary forms, giving a list of 23 candidates, and proving that 19 of those represent all positive odds. (Jagy later dealt with a 20th candidate.) Assuming that the remaining three forms represent all positive odds, we prove that an arbitrary, positive-definite quadratic form represents all positive odds if and only if it represents the odd numbers from 1 up to 451. This result is analogous to Bhargava and Hanke's celebrated 290-theorem. In addition, we prove that these three remaining ternaries represent all positive odd integers, assuming the generalized Riemann hypothesis. This result is made possible by a new analytic method for bounding the cusp constants of integer-valued quaternary quadratic forms $Q$ with fundamental discriminant. This method is based on the analytic properties of Rankin-Selberg $L$-functions, and we use it to prove...
Decision trees with minimum average depth for sorting eight elements
AbouEisha, Hassan
2015-11-19
We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
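The headline figure 620160/8! is easy to evaluate and to compare with the information-theoretic lower bound log2(8!) on the average number of comparisons. A quick check (the comparison with the entropy bound is ours, not from the abstract):

```python
from math import factorial, log2

# Minimum average depth of a decision tree sorting 8 distinct elements,
# as stated in the abstract above.
min_avg_depth = 620160 / factorial(8)       # 620160 / 40320

# Information-theoretic lower bound: any comparison sort needs at least
# log2(8!) comparisons on average.
info_bound = log2(factorial(8))

print(min_avg_depth, info_bound)
```

The optimal average depth is about 15.38, only slightly above the entropy bound of about 15.30, so sorting 8 elements can be done nearly information-optimally on average.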
Practical definition of averages of tensors in general relativity
Boero, Ezequiel F
2016-01-01
We present a definition of tensor fields which are average of tensors over a manifold, with a straightforward and natural definition of derivative for the averaged fields; which in turn makes a suitable and practical construction for the study of averages of tensor fields that satisfy differential equations. Although we have in mind applications to general relativity, our presentation is applicable to a general n-dimensional manifold. The definition is based on the integration of scalars constructed from a physically motivated basis, making use of the least amount of geometrical structure. We also present definitions of covariant derivative of the averaged tensors and Lie derivative.
2011-01-01
Project name: 90,000 t/a BR device and auxiliary projects. Construction unit: Sinopec Beijing Yanshan Petrochemical Company. Total investment: 2.257 billion yuan. Project description: It will cover an area of 14.1 ha.
Koenig, Bruce E; Lacey, Douglas S
2014-07-01
In this research project, nine small digital audio recorders were tested using five sets of 30-min recordings at all available recording modes, with consistent audio material, identical source and microphone locations, and identical acoustic environments. The averaged direct current (DC) offset values and standard deviations were measured for 30-sec and 1-, 2-, 3-, 6-, 10-, 15-, and 30-min segments. The research found an inverse association between segment lengths and the standard deviation values and that lengths beyond 30 min may not meaningfully reduce the standard deviation values. This research supports previous studies indicating that measured averaged DC offsets should only be used for exclusionary purposes in authenticity analyses and exhibit consistent values when the general acoustic environment and microphone/recorder configurations were held constant. Measured average DC offset values from exemplar recorders may not be directly comparable to those of submitted digital audio recordings without exactly duplicating the acoustic environment and microphone/recorder configurations.
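The measurement the study repeats at each segment length is the per-segment mean sample value (the DC offset) and its across-segment standard deviation. A minimal sketch of that computation, with names of our choosing:

```python
import numpy as np

def dc_offset_stats(samples, rate, segment_seconds):
    """Average DC offset and its across-segment standard deviation:
    the mean of each fixed-length segment is one DC-offset measurement,
    and the spread of those measurements is what the study tabulates."""
    n = int(rate * segment_seconds)          # samples per segment
    n_seg = len(samples) // n
    segs = np.asarray(samples[:n_seg * n], dtype=float).reshape(n_seg, n)
    offsets = segs.mean(axis=1)
    return offsets.mean(), offsets.std(ddof=1)
```

Longer segments average away more of the zero-mean audio content, which is why the standard deviation falls as segment length grows, up to the ~30-min plateau the study reports.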
Robinson, Andrew
2013-01-01
Learn to build software and hardware projects featuring the Raspberry Pi! Raspberry Pi represents a new generation of computers that encourages the user to play and to learn and this unique book is aimed at the beginner Raspberry Pi user who is eager to get started creating real-world projects. Taking you on a journey of creating 15 practical projects, this fun and informative resource introduces you to the skills you need to have in order to make the most of the Pi. The book begins with a quick look at how to get the Pi up and running and then encourages you to dive into the array of exciti
Armin Raabe
2001-03-01
Acoustic travel-time tomography is presented as a possibility for remote monitoring of near-surface air temperature and wind fields. This technique provides line-averaged effective sound speeds that change with temporally and spatially variable air temperature and wind vector. The effective sound speed is derived from the travel times of sound signals which propagate along defined paths between different acoustic sources and receivers. Starting from the travel-time data, a tomographic algorithm (Simultaneous Iterative Reconstruction Technique, SIRT) is used to calculate area-averaged air temperature and wind speed. The accuracy of the experimental method and the tomographic inversion algorithm is demonstrated for one day without remarkable differences in the horizontal temperature field, determined by independent in situ measurements at different points within the measuring field. The differences between the conventionally determined air temperature (point measurement) and the air temperature determined by tomography (area-averaged measurement representative of the 200 m x 260 m measuring field) were below 0.5 K for an average of 10 minutes. The differences obtained between the wind speed measured at a meteorological mast and that calculated from acoustic measurements are not higher than 0.5 m s-1 for the same averaging time. The tomographically determined area-averaged distribution of air temperature (resolution 50 m x 50 m) can be used to estimate the horizontal gradient of air temperature as a precondition for detecting horizontal turbulent fluxes of sensible heat.
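The SIRT inversion named above solves a linear travel-time system. As a generic sketch (our own minimal formulation, not the study's implementation): each ray's travel time is the path-length-weighted sum of the cell slownesses it crosses, and each iteration back-projects the residuals of all rays simultaneously:

```python
import numpy as np

def sirt(A, b, n_iter=200):
    """Simultaneous Iterative Reconstruction Technique for A s = b, where
    A[i, j] is the length of ray i inside grid cell j, b[i] the measured
    travel time, and s[j] the cell slowness (reciprocal sound speed).
    Each iteration back-projects the current residual along all rays at once."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    s = np.zeros(A.shape[1])
    row = A.sum(axis=1); row[row == 0] = 1.0     # total length of each ray
    col = A.sum(axis=0); col[col == 0] = 1.0     # total coverage of each cell
    for _ in range(n_iter):
        resid = (b - A @ s) / row                # normalized travel-time misfit
        s += (A.T @ resid) / col                 # distribute misfit over cells
    return s
```

From the reconstructed slowness field, cell temperatures follow from the temperature dependence of sound speed, which is how the study converts travel times into area-averaged temperature maps.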
Managing projects a team-based approach
Brown, Karen A
2010-01-01
Students today are likely to be assigned to project teams or to be project managers almost immediately in their first job. Managing Projects: A Team-Based Approach was written for a wide range of stakeholders, including project managers, project team members, support personnel, functional managers who provide resources for projects, project customers (and customer representatives), project sponsors, project subcontractors, and anyone who plays a role in the project delivery process. The need for project management is on the rise as product life cycles compress, demand for IT systems increases, and business takes on an increasingly global character. This book adds to the project management knowledge base in a way that fills an unmet need—it shows how teams can apply many of the standard project management tools, as well as several tools that are relatively new to the field. Managing Projects: A Team-Based Approach offers the academic rigor found in most textbooks along with the practical attributes often foun...
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which were calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods which are the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variant A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
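Two of the averaging methods compared above are simple to sketch: the simple arithmetic mean (SAM) and a Granger-Ramanathan-style least-squares weighting fitted on the calibration period. This is an illustrative sketch of the unconstrained least-squares variant only; the exact constraints distinguishing the paper's variants A, B and C are not reproduced here, and the function names are ours:

```python
import numpy as np

def granger_ramanathan_weights(simulations, observed):
    """Least-squares combination weights in the spirit of the
    Granger-Ramanathan averages: regress observed flow on the member
    simulations (no intercept, weights unconstrained).
    `simulations` has shape (n_times, n_models)."""
    X = np.asarray(simulations, dtype=float)
    y = np.asarray(observed, dtype=float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def simple_arithmetic_mean(simulations):
    """SAM: the plain average of the member hydrographs."""
    return np.asarray(simulations, dtype=float).mean(axis=1)
```

Weights fitted in calibration are then applied unchanged to the validation hydrographs, which is the procedure whose skill the study measures with the Nash-Sutcliffe Efficiency.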
The average visual response in patients with cerebrovascular disease
Oostehuis, H.J.G.H.; Ponsen, E.J.; Jonkman, E.J.; Magnus, O.
1969-01-01
The average visual response (AVR) was recorded in thirty patients after a cerebrovascular accident and in fourteen control subjects from the same age group. The AVR was obtained with the aid of a 16-channel EEG machine, a Computer of Average Transients and a tape recorder with 13 FM channels. This
Charging for computer usage with average cost pricing
Landau, K
1973-01-01
This preliminary report, which is mainly directed to commercial computer centres, gives an introduction to the application of average cost pricing when charging for using computer resources. A description of the cost structure of a computer installation shows advantages and disadvantages of average cost pricing. This is completed by a discussion of the different charging-rates which are possible. (10 refs).
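Average cost pricing as described above reduces to a single flat rate that recovers the centre's full cost. A minimal sketch under that reading (function names and figures are ours, for illustration only):

```python
def average_cost_rate(total_cost, total_usage):
    """Average cost pricing: one flat rate chosen so the computer centre
    recovers its full cost, i.e. price per unit = total cost / total usage."""
    return total_cost / total_usage

def charge(usage, rate):
    """A user's bill is simply metered usage times the flat rate."""
    return usage * rate
```

The disadvantages the report discusses follow directly from this form: the rate depends on forecast total usage, so under- or over-utilization shifts costs between periods and users.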
On the Average-Case Complexity of Shellsort
Vitányi, P.M.B.
2015-01-01
We prove a lower bound expressed in the increment sequence on the average-case complexity (number of inversions, which is proportional to the running time) of Shellsort. This lower bound is sharp in every case where it could be checked. We obtain new results, e.g. determining the average-case complexity...
Interpreting Bivariate Regression Coefficients: Going beyond the Average
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
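The note's regression framework can be sketched concretely: each classic average is the coefficient of an intercept-only OLS fit on a suitably transformed series (identity for the arithmetic mean, log for the geometric, reciprocal for the harmonic). This is our illustrative rendering of that idea, not the note's own code:

```python
import numpy as np

def means_via_regression(y):
    """Recover classic averages from intercept-only OLS regressions:
    identity transform -> arithmetic mean, log -> geometric mean,
    reciprocal -> harmonic mean."""
    y = np.asarray(y, dtype=float)
    ones = np.ones((len(y), 1))
    fit = lambda z: np.linalg.lstsq(ones, z, rcond=None)[0][0]
    arithmetic = fit(y)
    geometric = np.exp(fit(np.log(y)))
    harmonic = 1.0 / fit(1.0 / y)
    return arithmetic, geometric, harmonic
```

Seeing the averages this way makes weighted variants immediate: weighted least squares on the same regressions yields the corresponding weighted means.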
Analytic computation of average energy of neutrons inducing fission
Clark, Alexander Rich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-12
The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
Safety Impact of Average Speed Control in the UK
Lahrmann, Harry Spaabæk; Brassøe, Bo; Johansen, Jonas Wibert
2016-01-01
in the UK. The study demonstrates that the introduction of average speed control results in statistically significant and substantial reductions both in speed and in number of accidents. The evaluation indicates that average speed control has a higher safety effect than point-based automatic speed control....
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
A Simple Geometrical Derivation of the Spatial Averaging Theorem.
Whitaker, Stephen
1985-01-01
The connection between single phase transport phenomena and multiphase transport phenomena is easily accomplished by means of the spatial averaging theorem. Although different routes to the theorem have been used, this paper provides a route to the averaging theorem that can be used in undergraduate classes. (JN)
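For reference, the standard statement of the spatial averaging theorem takes the following form; the notation below is the common convention in the porous-media literature, not quoted from the paper itself:

```latex
% Spatial averaging theorem for a quantity \psi_\beta defined in phase \beta:
% the average of a gradient equals the gradient of the average plus an
% interfacial correction term.
\langle \nabla \psi_\beta \rangle
  = \nabla \langle \psi_\beta \rangle
  + \frac{1}{V} \int_{A_{\beta\sigma}} \mathbf{n}_{\beta\sigma}\, \psi_\beta \, dA
```

Here ⟨·⟩ denotes the superficial average over the averaging volume V, A_βσ is the β-σ interfacial area contained in V, and n_βσ is the unit normal pointing from phase β into phase σ. The interfacial integral is what carries single-phase information into the multiphase (averaged) transport equations.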
Averaged EMG profiles in jogging and running at different speeds
Gazendam, Marnix G. J.; Hof, At L.
2007-01-01
EMGs were collected from 14 muscles with surface electrodes in 10 subjects walking 1.25-2.25 m s(-1) and running 1.25-4.5 m s(-1). The EMGs were rectified, interpolated in 100% of the stride, and averaged over all subjects to give an average profile. In running, these profiles could be decomposed in
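The profile construction described above (rectify, time-normalize to the stride, average) can be sketched generically. This is an illustrative rendering with our own function name, not the study's processing pipeline:

```python
import numpy as np

def average_emg_profile(strides, n_points=100):
    """Build an averaged EMG profile: full-wave rectify each stride's EMG,
    resample it onto a fixed percentage-of-stride axis, and average
    across strides (and, in the study, across subjects)."""
    profiles = []
    for s in strides:
        s = np.abs(np.asarray(s, dtype=float))          # full-wave rectification
        x_old = np.linspace(0.0, 100.0, len(s))         # original stride axis
        x_new = np.linspace(0.0, 100.0, n_points)       # common %-of-stride axis
        profiles.append(np.interp(x_new, x_old, s))
    return np.mean(profiles, axis=0)
```

Resampling onto a common percentage-of-stride axis is what lets strides of different durations, and different subjects, be averaged point by point.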
Average widths of anisotropic Besov-Wiener classes
蒋艳杰
2000-01-01
This paper concerns the problem of the average σ-K width and average σ-L width of some anisotropic Besov-Wiener classes S^r_{pqθ}b(R^d) and S^r_{pqθ}B(R^d) in L_q(R^d) (1 ≤ q ≤ p < ∞). The weak asymptotic behavior is established for the corresponding quantities.
7 CFR 701.17 - Average adjusted gross income limitation.
2010-01-01
... 9003), each applicant must meet the provisions of the Adjusted Gross Income Limitations at 7 CFR part... 7 Agriculture 7 2010-01-01 2010-01-01 false Average adjusted gross income limitation. 701.17... RELATED PROGRAMS PREVIOUSLY ADMINISTERED UNDER THIS PART § 701.17 Average adjusted gross income...
A note on moving average models for Gaussian random fields
Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.
The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...
(Average-) convexity of common pool and oligopoly TU-games
Driessen, T.S.H.; Meinhardt, H.
2000-01-01
The paper studies both the convexity and average-convexity properties for a particular class of cooperative TU-games called common pool games. The common pool situation involves a cost function as well as a (weakly decreasing) average joint production function. Firstly, it is shown that, if the rele
Average widths of anisotropic Besov-Wiener classes
蒋艳杰
2000-01-01
This paper concerns the problem of the average σ-K width and average σ-L width of some anisotropic Besov-Wiener classes S^r_{pqθ}b(R^d) and S^r_{pqθ}B(R^d) in L_q(R^d) (1 ≤ q ≤ p < ∞). The weak asymptotic behavior is established for the corresponding quantities.
Remarks on the Lower Bounds for the Average Genus
Yi-chao Chen
2011-01-01
Let G be a graph of maximum degree at most four. By using the overlap matrix method introduced by B. Mohar, we show that the average genus of G is not less than 1/3 of its maximum genus, and the bound is best possible. Also, a new lower bound on the average genus in terms of girth is derived.
Delineating the Average Rate of Change in Longitudinal Models
Kelley, Ken; Maxwell, Scott E.
2008-01-01
The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…
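The distinction drawn above can be made concrete with a toy computation (a sketch; the function names and data are illustrative): the average rate of change over an interval is the endpoint difference divided by elapsed time, and it coincides with the straight-line (OLS) slope only under particular change patterns.

```python
import numpy as np

def average_rate_of_change(t, y):
    """Endpoint-based average rate of change over the observed interval."""
    return (y[-1] - y[0]) / (t[-1] - t[0])

def ols_slope(t, y):
    """Slope of the straight-line (OLS) fit, often conflated with the ARC."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.polyfit(t, y, 1)[0]
```

For exponential growth observed at t = 0..4 (y = 1, 2, 4, 8, 16), the average rate of change is (16 - 1)/4 = 3.75 while the OLS slope is 3.6, so the two quantities need not agree when change is not straight-line.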
Side chain conformational averaging in human dihydrofolate reductase.
Tuttle, Lisa M; Dyson, H Jane; Wright, Peter E
2014-02-25
The three-dimensional structures of the dihydrofolate reductase enzymes from Escherichia coli (ecDHFR or ecE) and Homo sapiens (hDHFR or hE) are very similar, despite a rather low level of sequence identity. Whereas the active site loops of ecDHFR undergo major conformational rearrangements during progression through the reaction cycle, hDHFR remains fixed in a closed loop conformation in all of its catalytic intermediates. To elucidate the structural and dynamic differences between the human and E. coli enzymes, we conducted a comprehensive analysis of side chain flexibility and dynamics in complexes of hDHFR that represent intermediates in the major catalytic cycle. Nuclear magnetic resonance relaxation dispersion experiments show that, in marked contrast to the functionally important motions that feature prominently in the catalytic intermediates of ecDHFR, millisecond time scale fluctuations cannot be detected for hDHFR side chains. Ligand flux in hDHFR is thought to be mediated by conformational changes between a hinge-open state when the substrate/product-binding pocket is vacant and a hinge-closed state when this pocket is occupied. Comparison of X-ray structures of hinge-open and hinge-closed states shows that helix αF changes position by sliding between the two states. Analysis of χ1 rotamer populations derived from measurements of (3)JCγCO and (3)JCγN couplings indicates that many of the side chains that contact helix αF exhibit rotamer averaging that may facilitate the conformational change. The χ1 rotamer adopted by the Phe31 side chain depends upon whether the active site contains the substrate or product. In the holoenzyme (the binary complex of hDHFR with reduced nicotinamide adenine dinucleotide phosphate), a combination of hinge opening and a change in the Phe31 χ1 rotamer opens the active site to facilitate entry of the substrate. Overall, the data suggest that, unlike ecDHFR, hDHFR requires minimal backbone conformational rearrangement as
Average cross-responses in correlated financial markets
Wang, Shanshan; Schäfer, Rudi; Guhr, Thomas
2016-09-01
There are non-vanishing price responses across different stocks in correlated financial markets, reflecting non-Markovian features. We further study this issue by performing different averages, which identify active and passive cross-responses. The two average cross-responses show different characteristic dependences on the time lag. The passive cross-response exhibits a shorter response period with sizeable volatilities, while the corresponding period for the active cross-response is longer. The average cross-responses for a given stock are evaluated either with respect to the whole market or to different sectors. Using the response strength, the influences of individual stocks are identified and discussed. Moreover, the various cross-responses as well as the average cross-responses are compared with the self-responses. In contrast to the short-memory trade sign cross-correlations for each pair of stocks, the sign cross-correlations averaged over different pairs of stocks show long memory.
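One common formalization of such a cross-response (a sketch using the standard response-function definition from this literature; variable names are illustrative, not the authors' code) averages the price change of stock i conditioned on the trade sign of stock j:

```python
import numpy as np

def cross_response(prices_i, signs_j, tau):
    """Average cross-response R_ij(tau) = < sign_j(t) * (p_i(t+tau) - p_i(t)) >,
    averaged over time t (illustrative sketch)."""
    p = np.asarray(prices_i, dtype=float)
    s = np.asarray(signs_j, dtype=float)
    dp = p[tau:] - p[:-tau]          # price change of stock i over lag tau
    return np.mean(s[:-tau] * dp)    # correlate with earlier signs of stock j
```

Evaluating this for a range of lags tau traces out the characteristic response curves the abstract refers to; averaging further over stocks j in a sector or the whole market gives the sector-wise or market-wide average cross-response.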
The Optimal Selection for Restricted Linear Models with Average Estimator
Qichang Xie
2014-01-01
The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has comparable efficiency to some alternative model selection techniques.
Representative Democracy in Australian Local Government
Colin Hearfield
2009-01-01
In an assessment of representative democracy in Australian local government, this paper considers long-run changes in forms of political representation, methods of vote counting, franchise arrangements, numbers of local government bodies and elected representatives, as well as the thorny question of constitutional recognition. This discussion is set against the background of ongoing tensions between the drive for economic efficiency and the maintenance of political legitimacy, along with more deep-seated divisions emerging from the legal relationship between local and state governments and the resultant problems inherent in local government autonomy versus state intervention.
Representative Sampling for reliable data analysis
Petersen, Lars; Esbensen, Kim Harry
2005-01-01
The Theory of Sampling (TOS) provides a description of all errors involved in sampling of heterogeneous materials as well as all necessary tools for their evaluation, elimination and/or minimization. This tutorial elaborates on—and illustrates—selected central aspects of TOS. The theoretical...... regime in order to secure the necessary reliability of: samples (which must be representative, from the primary sampling onwards), analysis (which will not mean anything outside the miniscule analytical volume without representativity ruling all mass reductions involved, also in the laboratory) and data...
Representing Context in Hypermedia Data Models
Hansen, Frank Allan
2005-01-01
As computers and software systems move beyond the desktop and into the physical environments we live and work in, the systems are required to adapt to these environments and the activities taking place within them. Making applications context-aware and representing context information alongside...... application data can be a challenging task. This paper describes how digital context traditionally has been represented in hypermedia data models and how this representation can scale to also represent physical context. The HyCon framework and data model, designed for the development of mobile context...
Representing uncertainty on model analysis plots
Smith, Trevor I.
2016-12-01
Model analysis provides a mechanism for representing student learning as measured by standard multiple-choice surveys. The model plot contains information regarding both how likely students in a particular class are to choose the correct answer and how likely they are to choose an answer consistent with a well-documented conceptual model. Unfortunately, Bao's original presentation of the model plot did not include a way to represent uncertainty in these measurements. I present details of a method to add error bars to model plots by expanding the work of Sommer and Lindell. I also provide a template for generating model plots with error bars.
Representative Sampling for reliable data analysis
Petersen, Lars; Esbensen, Kim Harry
2005-01-01
regime in order to secure the necessary reliability of: samples (which must be representative, from the primary sampling onwards), analysis (which will not mean anything outside the miniscule analytical volume without representativity ruling all mass reductions involved, also in the laboratory) and data...... analysis (“data” do not exist in isolation of their provenance). The Total Sampling Error (TSE) is by far the dominating contribution to all analytical endeavours, often 100+ times larger than the Total Analytical Error (TAE). We present a summarizing set of only seven Sampling Unit Operations (SUOs...
Chen, Guan-Yu; Wu, Cheng-Chi; Shao, Hao-Chiang; Chang, Hsiu-Ming; Chiang, Ann-Shyn; Chen, Yung-Chang
2012-12-01
Model averaging is a widely used technique in biomedical applications. Two established model averaging methods, the iterative shape averaging (ISA) method and the virtual insect brain (VIB) method, have been applied to several organisms to generate average representations of their brain surfaces. However, without sufficient samples, some features of the average Drosophila brain surface obtained using the above methods may disappear or become distorted. To overcome this problem, we propose a Bézier-tube-based surface model averaging strategy. The proposed method first compensates for disparities in position, orientation, and dimension of the input surfaces, and then evaluates the average surface by performing shape-based interpolation. Structural features with larger individual disparities are simplified with half-ellipse-shaped Bézier tubes, and are unified according to these tubes to avoid distortion during the averaging process. Experimental results show that the average model yielded by our method can preserve fine features and avoid structural distortions even if only a limited number of input samples is used. Finally, we quantitatively compare our results with those obtained by the ISA and VIB methods by measuring the surface-to-surface distances between the input surfaces and the averaged ones. The comparisons show that the proposed method generates a more representative average surface than both the ISA and VIB methods.
Subklew, Günter; Ulrich, Julia; Fürst, Leander; Höltkemeier, Agnes
2010-05-01
As an important element in Chinese politics for the development of the Western parts of the country, a large hydraulic engineering project - the Three Gorges Dam - has been set up in order to dam the Yangtze River for a length of over 600 km with an average width of about 1,100 m. It is expected that this will result in ecological, technical and social problems of a magnitude hardly dealt with before. With this gigantic project, the national executive is pursuing the aims of preventing flooding, safeguarding the water supply, encouraging navigation, and generating electric energy. In future, fluctuations of the water level of up to 30 metres will be deliberately applied in the dammed-up section of the river while retaining the flow character of the seasonal variation. The pollution of the Yangtze with a wide range of problem substances is frequently underestimated, since in many cases attention is paid only to the low measured concentrations. However, the large volumes of water lead to appreciable loads and thus to the danger of an accumulation of pollutants even reaching the human food chain. It should also not be forgotten that the Yangtze represents the major, and in some cases indeed the only, source of drinking and domestic water for the population. A consideration of the water level in the impoundment that will in future arise from management of the reservoir reveals the dramatic change in contrast to the natural inundation regime. In the past, the flood events on the banks of the Yangtze and its tributaries occurred in the summer months. The plants in the riparian zone (water fluctuation zone = WFZ) were previously inundated during the warmer time of year (28 °C, July/August), and the terrestrial phase of the WFZ was characterized by cool temperatures (3-5 °C, January) that permitted little plant activity. In future, the highest water levels will occur in winter above the dam on the Yangtze and also on the tributaries flowing into it. The plants in the WFZ will
Hybrid Reynolds-Averaged/Large-Eddy Simulations of a Coaxial Supersonic Free-Jet Experiment
Baurle, Robert A.; Edwards, Jack R.
2010-01-01
Reynolds-averaged and hybrid Reynolds-averaged/large-eddy simulations have been applied to a supersonic coaxial jet flow experiment. The experiment was designed to study compressible mixing flow phenomenon under conditions that are representative of those encountered in scramjet combustors. The experiment utilized either helium or argon as the inner jet nozzle fluid, and the outer jet nozzle fluid consisted of laboratory air. The inner and outer nozzles were designed and operated to produce nearly pressure-matched Mach 1.8 flow conditions at the jet exit. The purpose of the computational effort was to assess the state-of-the-art for each modeling approach, and to use the hybrid Reynolds-averaged/large-eddy simulations to gather insight into the deficiencies of the Reynolds-averaged closure models. The Reynolds-averaged simulations displayed a strong sensitivity to choice of turbulent Schmidt number. The initial value chosen for this parameter resulted in an over-prediction of the mixing layer spreading rate for the helium case, but the opposite trend was observed when argon was used as the injectant. A larger turbulent Schmidt number greatly improved the comparison of the results with measurements for the helium simulations, but variations in the Schmidt number did not improve the argon comparisons. The hybrid Reynolds-averaged/large-eddy simulations also over-predicted the mixing layer spreading rate for the helium case, while under-predicting the rate of mixing when argon was used as the injectant. The primary reason conjectured for the discrepancy between the hybrid simulation results and the measurements centered around issues related to the transition from a Reynolds-averaged state to one with resolved turbulent content. Improvements to the inflow conditions were suggested as a remedy to this dilemma. Second-order turbulence statistics were also compared to their modeled Reynolds-averaged counterparts to evaluate the effectiveness of common turbulence closure
Thermal motion in proteins: Large effects on the time-averaged interaction energies
Goethe, Martin, E-mail: martingoethe@ub.edu; Rubi, J. Miguel [Departament de Física Fonamental, Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Fita, Ignacio [Institut de Biologia Molecular de Barcelona, Baldiri Reixac 10, 08028 Barcelona (Spain)
2016-03-15
As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are also not constant in time but exhibit pronounced fluctuations. Because of these fluctuations, time-averaged interaction energies do not generally coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies typically behave more smoothly as functions of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by several tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can be increased considerably by introducing environment-specific Lennard-Jones parameters that account for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
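The smoothing effect can be reproduced qualitatively with a short numerical sketch: average the Lennard-Jones pair-potential over Gaussian fluctuations of the inter-atomic distance and compare with the potential evaluated at the mean distance. The parameter values (epsilon, sigma, fluctuation width) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lennard_jones(r, epsilon=0.2, sigma=3.4):
    """LJ pair-potential; epsilon in kcal/mol, sigma in Angstrom (illustrative)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def time_averaged_lj(r_mean, r_std, n_samples=200_000, seed=0):
    """Monte Carlo estimate of <V(r)> under Gaussian distance fluctuations,
    i.e. the 'time-averaged' interaction energy of the thermal smoothing effect."""
    rng = np.random.default_rng(seed)
    r = rng.normal(r_mean, r_std, n_samples)
    r = r[r > 0.8 * r_mean]  # discard rare unphysically small distances
    return np.mean(lennard_jones(r))
```

Near the potential minimum (r_mean = 2^(1/6) sigma) with 0.2 Å fluctuations, the time-averaged energy lies a few tens of cal/mol above V(r_mean), the same order of magnitude as the effect reported in the abstract.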
Model parameters for representative wetland plant functional groups
Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.
2017-01-01
Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in
Self-averaging and weak ergodicity breaking of diffusion in heterogeneous media
Russian, Anna; Dentz, Marco; Gouze, Philippe
2017-08-01
Diffusion in natural and engineered media is quantified in terms of stochastic models for the heterogeneity-induced fluctuations of particle motion. However, fundamental properties such as ergodicity and self-averaging and their dependence on the disorder distribution are often not known. Here, we investigate these questions for diffusion in quenched disordered media characterized by spatially varying retardation properties, which account for particle retention due to physical or chemical interactions with the medium. We link self-averaging and ergodicity to the disorder sampling efficiency Rn, which quantifies the number of disorder realizations a noise ensemble may sample in a single disorder realization. Diffusion for disorder scenarios characterized by a finite mean transition time is ergodic and self-averaging for any dimension. The strength of the sample to sample fluctuations decreases with increasing spatial dimension. For an infinite mean transition time, particle motion is weakly ergodicity breaking in any dimension because single particles cannot sample the heterogeneity spectrum in finite time. However, even though the noise ensemble is not representative of the single-particle time statistics, subdiffusive motion in q ≥2 dimensions is self-averaging, which means that the noise ensemble in a single realization samples a representative part of the heterogeneity spectrum.
Nuclear Cryogenic Propulsion Stage Project
National Aeronautics and Space Administration — Key NCPS project objectives are to conduct preliminary design, fabrication, and test of representative fuel samples and partial length fuel elements for the two...
Impact of connected vehicle guidance information on network-wide average travel time
Jiangfeng Wang
2016-12-01
With the emergence of connected vehicle technologies, the potential positive impact of connected vehicle guidance on mobility has become a research hotspot, enabled by data exchange among vehicles, infrastructure, and mobile devices. This study focuses on micro-modeling and quantitatively evaluating the impact of connected vehicle guidance on network-wide travel time by introducing various affecting factors. To evaluate the benefits of connected vehicle guidance, a simulation architecture based on one engine is proposed to represent the connected vehicle-enabled virtual world, and a connected vehicle route guidance scenario is established through the development of a communication agent and intelligent transportation systems agents using the connected vehicle application programming interface, considering communication properties such as path loss and transmission power. The impact of connected vehicle guidance on network-wide travel time is analyzed by comparison with non-connected vehicle guidance in response to different market penetration rates, following rates, and congestion levels. The simulation results show that average network-wide travel time under connected vehicle guidance is significantly reduced, by 42.23%, relative to that under non-connected vehicle guidance, and that average travel time variability (represented by the coefficient of variation) increases as travel time increases. Other vital findings include that higher penetration rates and following rates generate larger savings in average network-wide travel time. The savings in average network-wide travel time increase from 17% to 38% across congestion levels, and savings in average travel time under more serious congestion show a more obvious improvement for the same penetration rate or following rate.
Project Management Theory Meets Practice contains the proceedings from the 1st Danish Project Management Research Conference (DAPMARC 2015), held in Copenhagen, Denmark, on May 21st, 2015.
Munk-Madsen, Andreas
2005-01-01
"Project" is a key concept in IS management. The word is frequently used in textbooks and standards, yet we seldom find a precise definition of the concept. This paper discusses how to define the concept of a project. The proposed definition covers both heavily formalized projects and informally...... organized, agile projects. Based on the proposed definition, popular existing definitions are discussed....
Exploring the Best Classification from Average Feature Combination
Jian Hou
2014-01-01
Feature combination is a powerful approach to improving object classification performance. While various combination algorithms have been proposed, average combination is almost always selected as the baseline algorithm to be compared with. In previous work we found that it is better to use only a sample of the most powerful features in average combination than to use all of them. In this paper, we continue this work and further show that the behaviors of features in average combination can be integrated into the k-Nearest-Neighbor (kNN) framework. Based on the kNN framework, we then propose a selection-based average combination algorithm to obtain the best classification performance from average combination. Our experiments on four diverse datasets indicate that this selection-based average combination performs markedly better than ordinary average combination, and thus serves as a better baseline. Comparing with this new and better baseline makes the claimed superiority of newly proposed combination algorithms more convincing. Furthermore, the kNN framework is helpful in understanding the underlying mechanism of feature combination and in motivating novel feature combination algorithms.
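As a minimal sketch of the baseline idea, average combination feeding a kNN decision (function names and data are illustrative, not the authors' algorithm): per-feature distances from a query to the training samples are averaged, and the k nearest neighbors under the averaged distance vote on the label.

```python
import numpy as np
from collections import Counter

def knn_on_average_distance(dist_per_feature, labels, k=3):
    """Average the per-feature distance vectors from one query to the
    training samples, then classify by majority vote among the k nearest.

    dist_per_feature: list of 1-D arrays, one distance vector per feature
    labels: training labels aligned with the distance vectors
    """
    avg = np.mean(dist_per_feature, axis=0)   # average combination
    nearest = np.argsort(avg)[:k]             # k nearest under averaged distance
    vote = Counter(labels[i] for i in nearest)
    return vote.most_common(1)[0][0]
```

A selection-based variant in the spirit of the abstract would average only a chosen subset of the feature distance vectors instead of all of them.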
Bounds on Average Time Complexity of Decision Trees
Chikalov, Igor
2011-01-01
In this chapter, bounds on the average depth and the average weighted depth of decision trees are considered. Similar problems are studied in search theory [1], coding theory [77], and the design and analysis of algorithms (e.g., sorting) [38]. For any diagnostic problem, the minimum average depth of a decision tree is bounded from below by the entropy of the probability distribution (with a multiplier 1/log2 k for a problem over a k-valued information system). Among diagnostic problems, the problems with a complete set of attributes have the lowest minimum average depth of decision trees (e.g., the problem of building an optimal prefix code [1] and a blood test study under the assumption that exactly one patient is ill [23]). For such problems, the minimum average depth of a decision tree exceeds the lower bound by at most one. The minimum average depth reaches its maximum on the problems in which each attribute is "indispensable" [44] (e.g., a diagnostic problem with n attributes and k^n pairwise different rows in the decision table, and the problem of implementing the modulo 2 summation function). These problems have a minimum average depth of decision tree equal to the number of attributes in the problem description. © Springer-Verlag Berlin Heidelberg 2011.
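The entropy bound and the "exceeds the lower bound by at most one" property can be checked on the prefix-code example in the binary case (k = 2). This is a minimal sketch assuming Huffman coding as the optimal prefix-code construction; it uses the fact that the average codeword length equals the sum of the merged node weights produced while building the Huffman tree.

```python
import heapq
from math import log2

def huffman_average_depth(probs):
    """Average codeword length (= average depth of the binary decision tree)
    of an optimal (Huffman) prefix code for the given distribution."""
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b            # each merge adds its weight to the avg depth
        heapq.heappush(heap, a + b)
    return total

def entropy(probs):
    """Shannon entropy in bits: the lower bound on average depth for k = 2."""
    return -sum(p * log2(p) for p in probs if p > 0)
```

For the dyadic distribution (0.5, 0.25, 0.125, 0.125) the entropy is 1.75 bits and the Huffman average depth is also 1.75, meeting the bound exactly; in general H(p) <= average depth < H(p) + 1.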
Pilkington, Alan; Chai, Kah-Hin; Le, Yang
2015-01-01
This paper identifies the true coverage of PM theory through a bibliometric analysis of the International Journal of Project Management from 1996-2012. We identify six persistent research themes: project time management, project risk management, programme management, large-scale project management......, project success/failure and practitioner development. These differ from those presented in review and editorial articles in the literature. In addition, topics missing from the PM BOK (knowledge management, project-based organization and project portfolio management) have become more popular topics...
Transfer metrics analytics project
Matonis, Zygimantas
2016-01-01
This report represents work done towards predicting transfer rates/latencies on Worldwide LHC Computing Grid (WLCG) sites using Machine Learning techniques. Topics covered include the technologies used for the project, data preparation into an ML-suitable format, and attribute selection, as well as a comparison of different ML algorithms.
Tennant-Gadd, Laurie; Sansone, Kristina Lamour
2008-01-01
Identity is the focus of the middle-school visual arts program at Cambridge Friends School (CFS) in Cambridge, Massachusetts. Sixth graders enter the middle school and design a personal logo as their first major project in the art studio. The logo becomes a way for students to introduce themselves to their teachers and to represent who they are…
Attributes Heeded When Representing an Osmosis Problem.
Zuckerman, June Trop
Eighteen high school science students were involved in a study to determine what attributes in the problem statement they need when representing a typical osmosis problem. In order to realize this goal, students were asked to solve problems aloud and to explain their answers. Included as a part of the results are the attributes that the students…
A Framework for Representing Moving Objects
Becker, Ludger; Blunck, Henrik; Hinrichs, Klaus
2004-01-01
We present a framework for representing the trajectories of moving objects and the time-varying results of operations on moving objects. This framework supports the realization of discrete data models of moving objects databases, which incorporate representations of moving objects based on non-li...
26 CFR 601.502 - Recognized representative.
2010-04-01
.... 230; (4) Enrolled actuary. Any individual who is enrolled as an actuary by and is in active status with the Joint Board for the Enrollment of Actuaries pursuant to 29 U.S.C. 1242. (5) Other individuals... actuaries, and others); (3) I am authorized to represent the taxpayer(s) identified in the power of...
Developing Creativity and Abstraction in Representing Data
South, Andy
2012-01-01
Creating charts and graphs is all about visual abstraction: the process of representing aspects of data with imagery that can be interpreted by the reader. Children may need help making the link between the "real" and the image. This abstraction can be achieved using symbols, size, colour and position. Where the representation is close to what…
Average-Case Analysis of Algorithms Using Kolmogorov Complexity
姜涛; 李明
2000-01-01
Analyzing the average-case complexity of algorithms is a very practical but very difficult problem in computer science. In the past few years, we have demonstrated that Kolmogorov complexity is an important tool for analyzing the average-case complexity of algorithms. We have developed the incompressibility method. In this paper, several simple examples are used to further demonstrate the power and simplicity of this method. We prove bounds on the average-case number of stacks (queues) required for sorting sequential or parallel Queuesort or Stacksort.
Sample Selected Averaging Method for Analyzing the Event Related Potential
Taguchi, Akira; Ono, Youhei; Kimura, Tomoaki
The event related potential (ERP) is often measured through the oddball task. In the oddball task, subjects are given a “rare stimulus” and a “frequent stimulus”. Measured ERPs are analyzed by the averaging technique. In the results, the amplitude of the ERP P300 becomes large when the “rare stimulus” is given. However, some measured ERPs include samples that lack the original features of the ERP. Thus, it is necessary to reject unsuitable measured ERPs when using the averaging technique. In this paper, we propose a rejection method for unsuitable measured ERPs for the averaging technique. Moreover, we combine the proposed method with Woody's adaptive filter method.
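The idea of rejecting unsuitable trials before averaging can be sketched with a correlation-based rule — a hypothetical stand-in for the paper's actual selection criterion:

```python
import numpy as np

def selective_average(trials, threshold=0.3):
    """Average ERP trials after rejecting those poorly correlated with
    the plain ensemble average. The correlation threshold is an
    illustrative rejection criterion, not the paper's exact rule."""
    trials = np.asarray(trials, dtype=float)
    template = trials.mean(axis=0)          # first-pass ensemble average
    keep = [t for t in trials
            if np.corrcoef(t, template)[0, 1] >= threshold]
    return np.mean(keep, axis=0)            # average of accepted trials only
```

On two identical trials and one inverted outlier, the outlier is rejected and the clean waveform is recovered.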
Taylor, Patrick C.; Baker, Noel C.
2015-01-01
Earth's climate is changing and will continue to change into the foreseeable future. Expected changes in the climatological distribution of precipitation, surface temperature, and surface solar radiation will significantly impact agriculture. Adaptation strategies are, therefore, required to reduce the agricultural impacts of climate change. Climate change projections of precipitation, surface temperature, and surface solar radiation distributions are necessary input for adaptation planning studies. These projections are conventionally constructed from an ensemble of climate model simulations (e.g., the Coupled Model Intercomparison Project 5 (CMIP5)) as an equal weighted average, one model one vote. Each climate model, however, represents the array of climate-relevant physical processes with varying degrees of fidelity influencing the projection of individual climate variables differently. Presented here is a new approach, termed the "Intelligent Ensemble", that constructs climate variable projections by weighting each model according to its ability to represent key physical processes, e.g., precipitation probability distribution. This approach provides added value over the equal weighted average method. Physical process metrics applied in the "Intelligent Ensemble" method are created using a combination of NASA and NOAA satellite and surface-based cloud, radiation, temperature, and precipitation data sets. The "Intelligent Ensemble" method is applied to the RCP4.5 and RCP8.5 anthropogenic climate forcing simulations within the CMIP5 archive to develop a set of climate change scenarios for precipitation, temperature, and surface solar radiation in each USDA Farm Resource Region for use in climate change adaptation studies.
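The skill-weighting idea can be sketched in a few lines (the function name and the scalar skill scores are illustrative; the authors derive their weights from satellite-based physical process metrics):

```python
import numpy as np

def weighted_ensemble(projections, skill_scores):
    """Weight each model's projection by a normalized skill score
    (e.g., agreement of its simulated precipitation distribution
    with observations), instead of one-model-one-vote averaging."""
    w = np.asarray(skill_scores, dtype=float)
    w = w / w.sum()                       # normalize weights to sum to 1
    return np.average(projections, axis=0, weights=w)

# Equal skill reduces to the conventional equal-weighted mean
print(weighted_ensemble([1.0, 2.0, 3.0], [1, 1, 1]))  # 2.0
```

Unequal scores tilt the projection toward the more skillful models while preserving the ensemble-mean structure.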
刘朝晖
2014-01-01
Robert Creeley, a representative of Projective Verse, believes that emotion is of critical significance to poetry. In the history of English poetry, the Romantic poets represented by Wordsworth and the Confessional poets represented by Lowell both stress the importance of emotion in poetry. A comparison of Creeley's mode of turning out emotion with Wordsworth's and Lowell's shows that unlike Wordsworth, who expresses emotion subjectively after tranquil recollection, Creeley betrays symptoms of emotion spontaneously, and that unlike Lowell, who expresses emotion generally and straightforwardly in rhetorical language, Creeley presents exact and minute feelings by such means as the rhythm, sound, and word-formation of language itself. The difference in the mode of turning out emotion between Creeley, Wordsworth and Lowell reflects different approaches to emotion among the Projective, the Romantic and the Confessional poets.
Grade-Average Method: A Statistical Approach for Estimating ...
Grade-Average Method: A Statistical Approach for Estimating Missing Value for Continuous Assessment Marks. Journal of the Nigerian Association of Mathematical Physics.
United States Average Annual Precipitation, 2000-2004 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2000-2004. Parameter-elevation...
On the average sensitivity of laced Boolean functions
Li, Jiyou
2011-01-01
In this paper we obtain the average sensitivity of the laced Boolean functions. This confirms a conjecture of Shparlinski. We also compute the weights of the laced Boolean functions and show that they are almost balanced.
Distribution of population-averaged observables in stochastic gene expression
Bhattacharyya, Bhaswati; Kalay, Ziya
2014-01-01
Observation of phenotypic diversity in a population of genetically identical cells is often linked to the stochastic nature of chemical reactions involved in gene regulatory networks. We investigate the distribution of population-averaged gene expression levels as a function of population, or sample, size for several stochastic gene expression models to find out to what extent population-averaged quantities reflect the underlying mechanism of gene expression. We consider three basic gene regulation networks corresponding to transcription with and without gene state switching and translation. Using analytical expressions for the probability generating function of observables and large deviation theory, we calculate the distribution and first two moments of the population-averaged mRNA and protein levels as a function of model parameters, population size, and number of measurements contained in a data set. We validate our results using stochastic simulations and also report exact results on the asymptotic properties of population averages which show qualitative differences among different models.
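The narrowing of the population-averaged distribution with sample size can be illustrated with a toy Poisson copy-number model (a deliberate simplification of the gene-state-switching models analyzed in the paper; names are illustrative):

```python
import numpy as np

def population_average_dist(rng, n_cells, n_samples, mean_level=5.0):
    """Empirical distribution of the population-averaged mRNA level
    when each cell's copy number is (illustratively) Poisson.
    The spread of the average narrows roughly as 1/sqrt(n_cells)."""
    counts = rng.poisson(mean_level, size=(n_samples, n_cells))
    return counts.mean(axis=1)   # one population average per measurement

rng = np.random.default_rng(1)
small = population_average_dist(rng, n_cells=10, n_samples=2000)
large = population_average_dist(rng, n_cells=1000, n_samples=2000)
```

Averages over 1000 cells cluster far more tightly around the mean level than averages over 10 cells, which is why large-population averages can mask mechanistic differences between expression models.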
On the performance of Autoregressive Moving Average Polynomial ...
Timothy Ademakinwa
Moving Average Polynomial Distributed Lag (ARMAPDL) model. The parameters of these models were estimated using least squares and Newton-Raphson iterative methods. ..... Global Journal of Mathematics and Statistics. Vol. 1. No.
Medicare Part B Drug Average Sales Pricing Files
U.S. Department of Health & Human Services — Manufacturer reporting of Average Sales Price (ASP) data - A manufacturers ASP must be calculated by the manufacturer every calendar quarter and submitted to CMS...
The Partial Averaging of Fuzzy Differential Inclusions on Finite Interval
Andrej V. Plotnikov
2014-01-01
The possibility of applying the partial averaging method on a finite interval to differential inclusions with a fuzzy right-hand side and a small parameter is substantiated.
United States Average Annual Precipitation, 2005-2009 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 2005-2009. Parameter-elevation...
SAM: A Simple Averaging Model of Impression Formation
Lewis, Robert A.
1976-01-01
Describes the Simple Averaging Model (SAM) which was developed to demonstrate impression-formation computer modeling with less complex and less expensive procedures than are required by most established programs. (RC)
Average monthly and annual climate maps for Bolivia
Vicente-Serrano, Sergio M.
2015-02-24
This study presents monthly and annual climate maps for relevant hydroclimatic variables in Bolivia. We used the most complete network of precipitation and temperature stations available in Bolivia, which passed a careful quality control and temporal homogenization procedure. Monthly average maps at the spatial resolution of 1 km were modeled by means of a regression-based approach using topographic and geographic variables as predictors. The monthly average maximum and minimum temperatures, precipitation and potential exoatmospheric solar radiation under clear sky conditions are used to estimate the monthly average atmospheric evaporative demand by means of the Hargreaves model. Finally, the average water balance is estimated on a monthly and annual scale for each 1 km cell by means of the difference between precipitation and atmospheric evaporative demand. The digital layers used to create the maps are available in the digital repository of the Spanish National Research Council.
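The Hargreaves step of the workflow above can be sketched as follows (this is the textbook form of the Hargreaves equation with its standard 0.0023 coefficient and mm/day units; the function names and the water-balance helper are illustrative):

```python
import math

def hargreaves_et0(t_max, t_min, ra):
    """Reference evapotranspiration (mm/day) via the Hargreaves model:
    ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
    with Ra the extraterrestrial radiation in mm/day equivalent."""
    t_mean = (t_max + t_min) / 2.0
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

def monthly_water_balance(precip_mm, et0_per_day, days=30):
    """Climatic water balance for one cell and month:
    precipitation minus atmospheric evaporative demand."""
    return precip_mm - et0_per_day * days
```

Applying these two functions per 1 km cell and month reproduces the structure of the mapped water-balance layers described in the abstract.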
United States Average Annual Precipitation, 1961-1990 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...
The average-shadowing property and topological ergodicity for flows
Gu Rongbao [School of Finance, Nanjing University of Finance and Economics, Nanjing 210046 (China)]. E-mail: rbgu@njue.edu.cn; Guo Wenjing [School of Finance, Nanjing University of Finance and Economics, Nanjing 210046 (China)
2005-07-01
In this paper, the transitive property for a flow without sensitive dependence on initial conditions is studied and it is shown that a Lyapunov stable flow with the average-shadowing property on a compact metric space is topologically ergodic.
Time averaging, ageing and delay analysis of financial time series
Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf
2017-06-01
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
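The time-averaged MSD at the core of these strategies has a compact definition; a minimal sketch for a single trajectory (function name illustrative):

```python
import numpy as np

def time_averaged_msd(x, lag):
    """Time-averaged mean squared displacement of a single series x
    at a given lag: the mean over t of (x[t + lag] - x[t])**2."""
    x = np.asarray(x, dtype=float)
    disp = x[lag:] - x[:-lag]     # displacements over the lag window
    return np.mean(disp ** 2)
```

Sweeping the lag (and the fraction of the series used, for the ageing and delay variants) yields the curves whose universal behavior the paper reports.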
Ensemble vs. time averages in financial time series analysis
Seemann, Lars; Hua, Jia-Chen; McCauley, Joseph L.; Gunaratne, Gemunu H.
2012-12-01
Empirical analysis of financial time series suggests that the underlying stochastic dynamics are not only non-stationary, but also exhibit non-stationary increments. However, financial time series are commonly analyzed using the sliding interval technique that assumes stationary increments. We propose an alternative approach that is based on an ensemble over trading days. To determine the effects of time averaging techniques on analysis outcomes, we create an intraday activity model that exhibits periodic variable diffusion dynamics and we assess the model data using both ensemble and time averaging techniques. We find that ensemble averaging techniques detect the underlying dynamics correctly, whereas sliding-interval approaches fail. As many traded assets exhibit characteristic intraday volatility patterns, our work implies that ensemble-averaging approaches will yield new insight into the study of financial market dynamics.
On the average exponent of elliptic curves modulo $p$
Freiberg, Tristan
2012-01-01
Given an elliptic curve $E$ defined over $\mathbb{Q}$ and a prime $p$ of good reduction, let $\tilde{E}(\mathbb{F}_p)$ denote the group of $\mathbb{F}_p$-points of the reduction of $E$ modulo $p$, and let $e_p$ denote the exponent of said group. Assuming a certain form of the Generalized Riemann Hypothesis (GRH), we study the average of $e_p$ as $p \le X$ ranges over primes of good reduction, and find that the average exponent essentially equals $p\cdot c_{E}$, where the constant $c_{E} > 0$ depends on $E$. For $E$ without complex multiplication (CM), $c_{E}$ can be written as a rational number (depending on $E$) times a universal constant. Without assuming GRH, we can determine the average exponent when $E$ has CM, as well as give an upper bound on the average in the non-CM case.
Model Averaging Software for Dichotomous Dose Response Risk Estimation
Matthew W. Wheeler
2008-02-01
Model averaging has been shown to be a useful method for incorporating model uncertainty in quantitative risk estimation. In certain circumstances this technique is computationally complex, requiring sophisticated software to carry out the computation. We introduce software that implements model averaging for risk assessment based upon dichotomous dose-response data. This software, which we call Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD), fits the quantal response models which are also used in the US Environmental Protection Agency benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower bound estimates. The software fulfills a need for risk assessors, allowing them to go beyond one single model in their risk assessments based on quantal data by focusing on a set of models that describes the experimental data.
United States Average Annual Precipitation, 1995-1999 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1995-1999. Parameter-elevation...
United States Average Annual Precipitation, 1990-1994 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-1994. Parameter-elevation...
United States Average Annual Precipitation, 1990-2009 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1990-2009. Parameter-elevation...
Does subduction zone magmatism produce average continental crust?
Ellam, R. M.; Hawkesworth, C. J.
1988-01-01
The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition.
Historical Data for Average Processing Time Until Hearing Held
Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...
Bivariate copulas on the exponentially weighted moving average control chart
Sasigarn Kuvattana
2016-10-01
This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used and measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
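The EWMA recursion underlying the chart, together with the run length that the ARL averages over, can be sketched as follows (the control limit here is illustrative; in practice it is set from the in-control exponential distribution and the copula-modeled dependence):

```python
def ewma_run_length(obs, lam=0.2, target=1.0, ucl=1.5):
    """Run length of a one-sided EWMA chart: the index of the first
    EWMA statistic exceeding the upper control limit, or None if the
    series ends without a signal. Limits are illustrative only."""
    z = target                            # start the statistic at target
    for i, x in enumerate(obs, start=1):
        z = lam * x + (1 - lam) * z       # EWMA recursion
        if z > ucl:
            return i                      # out-of-control signal
    return None

print(ewma_run_length([3.0, 3.0, 3.0]))   # signals at observation 2
```

Averaging this run length over many simulated series (with dependence injected through a fitted copula) gives the ARL values the paper compares.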
Time averages, recurrence and transience in the stochastic replicator dynamics
Hofbauer, Josef; 10.1214/08-AAP577
2009-01-01
We investigate the long-run behavior of a stochastic replicator process, which describes game dynamics for a symmetric two-player game under aggregate shocks. We establish an averaging principle that relates time averages of the process and Nash equilibria of a suitably modified game. Furthermore, a sufficient condition for transience is given in terms of mixed equilibria and definiteness of the payoff matrix. We also present necessary and sufficient conditions for stochastic stability of pure equilibria.