Rice, Pamela J; Horgan, Brian P; Hamlin, Jennifer L
2017-04-01
The detection of pesticides associated with turfgrass management in storm runoff and surface waters of urban watersheds has raised concerns regarding their source, their potential environmental effects, and the need for strategies to reduce their inputs. In previous research we discovered that hollow tine core cultivation (HTCC) was more effective than other management practices for reducing the off-site transport of pesticides with runoff from creeping bentgrass turf managed as a golf course fairway, primarily as a result of enhanced infiltration and reduced runoff volumes associated with turf managed with hollow tines. In this study we evaluated the addition of verticutting (VC) to HTCC (HTCC+VC) in an attempt to further enhance infiltration and mitigate the off-site transport of pesticides with runoff from managed turf. Overall, greater or equal quantities of pesticides were transported with runoff from plots managed with HTCC+VC compared with HTCC or VC alone. For the pesticides evaluated, HTCC transported less of the high-mobility pesticides, while HTCC = VC = HTCC+VC for the low-mobility pesticides. It is likely that the addition of VC following HTCC further increased compaction and reduced the availability of recently exposed soil sorptive sites produced by the HTCC. The results of this research provide guidance to golf course managers on the selection of management practices that assure quality turf while minimizing off-site transport of pesticides, improving pesticide efficacy and the environmental stewardship of managed biological systems.
Individual Module Maximum Power Point Tracking for Thermoelectric Generator Systems
Vadstrup, Casper; Schaltz, Erik; Chen, Min
2013-07-01
In a thermoelectric generator (TEG) system the DC/DC converter is under the control of a maximum power point tracker which ensures that the TEG system outputs the maximum possible power to the load. However, if the conditions, e.g., temperature, health, etc., of the TEG modules are different, each TEG module will not produce its maximum power. If each TEG module is controlled individually, each TEG module can be operated at its maximum power point and the TEG system output power will therefore be higher. In this work a power converter based on noninverting buck-boost converters capable of handling four TEG modules is presented. It is shown that, when each module in the TEG system is operated under individual maximum power point tracking, the system output power for this specific application can be increased by up to 8.4% relative to the situation when the modules are connected in series and 16.7% relative to the situation when the modules are connected in parallel.
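The abstract does not specify the tracking algorithm used. As one illustration of what per-module maximum power point tracking can look like, here is a minimal perturb-and-observe sketch, using a linear TEG model (open-circuit voltage V_oc behind an internal resistance R_int, so power peaks at V_oc/2). All parameter values and function names are invented for the example, not taken from the paper.

```python
# Hedged sketch: perturb-and-observe (P&O) MPPT applied per TEG module.
# A module is modelled as an ideal source V_oc with internal resistance
# R_int; output power P(V) = V * (V_oc - V) / R_int peaks at V = V_oc / 2.
# All numbers here are illustrative, not from the paper.

def teg_power(v_load, v_oc, r_int):
    """Power delivered at load voltage v_load for a linear TEG model."""
    i = (v_oc - v_load) / r_int
    return v_load * i

def perturb_and_observe(v_oc, r_int, v0=1.0, step=0.05, iters=200):
    """Track one module's maximum power point by P&O."""
    v, p_prev, direction = v0, 0.0, 1
    for _ in range(iters):
        p = teg_power(v, v_oc, r_int)
        if p < p_prev:           # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
        v += direction * step
    return v

# Two modules at different temperatures (hence different V_oc):
# individual tracking lets each settle at its own MPP instead of the
# compromise a single system-level tracker would impose.
mpps = [perturb_and_observe(v_oc=8.0, r_int=2.0),
        perturb_and_observe(v_oc=5.0, r_int=2.0)]
```

With individual control each module converges near its own V_oc/2, which is the mechanism behind the output-power gains the abstract reports.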
Sørensen, Jette Led; van der Vleuten, Cees; Rosthøj, Susanne; Østergaard, Doris; LeBlanc, Vicki; Johansen, Marianne; Ekelund, Kim; Starkopf, Liis; Lindschou, Jane; Gluud, Christian; Weikop, Pia; Ottesen, Bent
2015-01-01
Objective To investigate the effect of in situ simulation (ISS) versus off-site simulation (OSS) on knowledge, patient safety attitude, stress, motivation, perceptions of simulation, team performance and organisational impact. Design Investigator-initiated single-centre randomised superiority educational trial. Setting Obstetrics and anaesthesiology departments, Rigshospitalet, University of Copenhagen, Denmark. Participants 100 participants in teams of 10, comprising midwives, specialised midwives, auxiliary nurses, nurse anaesthetists, operating theatre nurses, and consultant doctors and trainees in obstetrics and anaesthesiology. Interventions Two multiprofessional simulations (clinical management of an emergency caesarean section and a postpartum haemorrhage scenario) were conducted in teams of 10 in the ISS versus the OSS setting. Primary outcome Knowledge assessed by a multiple choice question test. Exploratory outcomes Individual outcomes: scores on the Safety Attitudes Questionnaire, stress measurements (State-Trait Anxiety Inventory, cognitive appraisal and salivary cortisol), Intrinsic Motivation Inventory and perceptions of simulations. Team outcome: video assessment of team performance. Organisational impact: suggestions for organisational changes. Results The trial was conducted from April to June 2013. No differences between the two groups were found for the multiple choice question test, patient safety attitude, stress measurements, motivation or the evaluation of the simulations. The participants in the ISS group scored the authenticity of the simulation significantly higher than did the participants in the OSS group. Expert video assessment of team performance showed no differences between the ISS versus the OSS group. The ISS group provided more ideas and suggestions for changes at the organisational level. Conclusions In this randomised trial, no significant differences were found regarding knowledge, patient safety attitude, motivation or stress
Vickers, Linda D
2010-05-01
This paper describes a method using Microsoft Excel (Microsoft Corporation, One Microsoft Way, Redmond, WA 98052-6399) to compute the 5% overall site X/Q value and the 95th percentile of the distribution of doses to the nearest maximally exposed offsite individual (MEOI) in accordance with guidance from DOE-STD-3009-1994 and U.S. NRC Regulatory Guide 1.145-1982. The accurate determination of the 5% overall site X/Q value is the most important factor in the computation of the 95th percentile of the distribution of doses to the nearest MEOI. This method should be used to validate software codes that compute the X/Q. The 95th percentile of the distribution of doses to the nearest MEOI must be compared to the U.S. DOE Evaluation Guide of 25 rem to determine the relative severity of hazard to the public from a postulated, unmitigated design basis accident that involves an offsite release of radioactive material.
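The percentile step described above can be sketched outside Excel as well. The fragment below, with illustrative inputs and a simple nearest-rank percentile rule, shows the idea of a "5% overall site X/Q" (the dispersion value exceeded no more than 5% of the time) feeding a dose estimate for the MEOI. The cited standards define the authoritative procedure; this is only a sketch, and all names and values are assumptions.

```python
# Hedged sketch: the "5% overall site X/Q" is the hourly X/Q value
# (s/m^3) exceeded no more than 5% of the time, i.e. the 95th
# percentile of the distribution, here by a nearest-rank rule.

def overall_site_xoq(hourly_xoq, exceedance=0.05):
    """Return the X/Q value exceeded no more than `exceedance` of hours."""
    ranked = sorted(hourly_xoq)
    k = int((1.0 - exceedance) * len(ranked) + 0.5) - 1  # nearest rank
    return ranked[max(0, min(k, len(ranked) - 1))]

def dose_95th(xoq_95, release_bq, dose_factor):
    """Dose to the MEOI: source term x dispersion x dose conversion."""
    return release_bq * xoq_95 * dose_factor

# Illustrative stand-in for a year of hourly meteorological data.
sample = [1.0e-6 * (1 + 0.01 * h) for h in range(8760)]
xoq95 = overall_site_xoq(sample)
```

A real application would read a full year of site meteorology and apply the sector-dependent procedures of Regulatory Guide 1.145 rather than this single nearest-rank percentile.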
An investigation of rugby scrimmaging posture and individual maximum pushing force.
Wu, Wen-Lan; Chang, Jyh-Jong; Wu, Jia-Hroung; Guo, Lan-Yuen
2007-02-01
Although rugby is a popular contact sport and isokinetic muscle torque assessment has recently found widespread application in sports medicine, little research has examined the factors associated with the performance of game-specific skills directly by using an isokinetic-type rugby scrimmaging machine. This study was designed to (a) measure and observe the differences in the maximum individual forward pushing force produced by scrimmaging in different body postures (3 body heights x 2 foot positions) with a self-developed rugby scrimmaging machine and (b) observe the variations in hip, knee, and ankle angles in different body postures and explore the relationship between these angles and the individual maximum pushing force. Ten national rugby players participated in the examination. The experimental equipment included a self-developed rugby scrimmaging machine and a 3-dimensional motion analysis system. Our results showed that foot position (parallel and nonparallel) did not affect the maximum pushing force; however, the maximum pushing force was significantly lower in posture I (36% body height) than in posture II (38%) and posture III (40%). The maximum forward force in posture III (40% body height) was also slightly greater than for the scrum in posture II (38% body height). In addition, hip, knee, and ankle angles under parallel foot positioning were closely negatively correlated with maximum pushing force in scrimmaging. In cross-foot postures, there was a positive correlation between individual forward force and the hip angle of the rear leg. From our results, we conclude that standing in an appropriate starting position at the early stage of scrimmaging benefits forward force production.
Offsite demonstrations for MWLID technologies
Williams, C. [Sandia National Labs., Albuquerque, NM (United States); Gruebel, R. [Tech. Reps., Inc., Albuquerque, NM (United States)
1995-04-01
The goal of the Offsite Demonstration Project for Mixed Waste Landfill Integrated Demonstration (MWLID)-developed environmental site characterization and remediation technologies is to facilitate the transfer, use, and commercialization of these technologies to the public and private sector. To meet this goal, the project identified the environmental restoration needs of mixed waste and/or hazardous waste landfill owners (Native American, municipal, DOE, and DoD); documented potential demonstration sites and the contaminants present at each site; assessed the environmental regulations that would affect demonstration activities; and evaluated site suitability for demonstrating MWLID technologies at the tribal and municipal sites identified. Eighteen landfill sites within a 40.2-km radius of Sandia National Laboratories are listed on the CERCLIS Site/Event Listing for the state of New Mexico. Seventeen are not located within DOE or DoD facilities and are potential offsite MWLID technology demonstration sites. Two of the seventeen CERCLIS sites, one on Native American land and one on municipal land, were evaluated and identified as potential candidates for off-site demonstrations of MWLID-developed technologies. Contaminants potentially present on site include chromium waste, household/commercial hazardous waste, volatile organic compounds, and petroleum products. MWLID characterization technologies applicable to these sites include Magnetometer Towed Array, Cross-borehole Electromagnetic Imaging, SitePlanner™/PLUME, Hybrid Directional Drilling, Seamist™/Vadose Zone Monitoring, Stripping Analyses, and X-ray Fluorescence Spectroscopy for Heavy Metals.
Allen, P M
1983-09-01
Analysis of the Savannah River Plant RBOF and RRF included an evaluation of the reliability of process equipment and controls, administrative controls, and engineered safety features. The evaluation also identified potential accident scenarios and radiological consequences. Risks were calculated in terms of 50-year population dose commitment per year (man-rem/year) to the onsite and offsite population within an 80 km radius of RBOF and RRF, and to an individual at the plant boundary. The total 50-year onsite and offsite population radiological risks of operating the RBOF and RRF were estimated to be 1.0 man-rem/year. These risks are significantly less than the population dose of 54,000 man-rem/yr from natural background radiation within a 50-mile radius. The 50-year maximum offsite individual risk from operating the facility was estimated to be 2.1 × 10⁻⁵ rem/yr. This risk is significantly lower than the 93 mrem/yr an individual is expected to receive from natural background radiation in this area. The analysis shows that the RBOF and RRF can be operated without undue risk to onsite personnel or to the general public.
A preliminary study to find out maximum occlusal bite force in Indian individuals
Jain, Veena; Mathur, Vijay Prakash; Pillai, Rajath;
2014-01-01
PURPOSE: This preliminary hospital-based study was designed to measure the mean maximum bite force (MMBF) in healthy Indian individuals. An attempt was made to correlate MMBF with body mass index (BMI) and some anthropometric features. METHODOLOGY: A total of 358 healthy subjects in the age... in subjects having a concave facial profile when compared with convex (P = 0.045) and straight (P = 0.039) facial profiles. BMI and arch form showed no significant relationship with MMBF. CONCLUSION: MMBF was found to be affected by gender and by some anthropometric features such as facial form and palatal...
Disproportionate Allocation of Indirect Costs at Individual-Farm Level Using Maximum Entropy
Markus Lips
2017-08-01
This paper addresses the allocation of indirect or joint costs among farm enterprises, and elaborates two maximum entropy models, the basic CoreModel and the InequalityModel, which additionally includes inequality restrictions in order to incorporate knowledge from production technology. Representing the indirect costing approach, both models address the individual-farm level and use standard costs from the farm-management literature as allocation bases. They provide a disproportionate allocation, with the distinctive feature that enterprises with large allocation bases face stronger adjustments than enterprises with small ones, bringing indirect costing closer to reality. Based on crop-farm observations from the Swiss Farm Accountancy Data Network (FADN), including up to 36 observations per enterprise, both models are compared with a proportional allocation as the reference base. The mean differences of the enterprises' allocated labour inputs and machinery costs are in a range of up to ±35% and ±20% for the CoreModel and InequalityModel, respectively. We conclude that the choice of allocation method has a strong influence on the resulting indirect costs. Furthermore, the application of inequality restrictions is a precondition for making the merits of the maximum entropy principle accessible for the allocation of indirect costs.
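The CoreModel and InequalityModel themselves are not reproduced in the abstract. As a generic illustration of the maximum entropy principle behind them, the sketch below computes maximum-entropy shares over a set of allocation bases subject to a single moment constraint, using exponential tilting with the multiplier found by bisection. The single-constraint setup and all names are assumptions for the example, not the paper's models.

```python
import math

def maxent_shares(bases, target_mean, lo=-20.0, hi=20.0):
    """Maximum-entropy shares p_i over allocation bases x_i subject to
    sum(p_i) = 1 and sum(p_i * x_i) = target_mean. The solution has the
    exponential-family form p_i ∝ exp(lam * x_i); lam is found by
    bisection on the (monotone) tilted mean. Illustrative only."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in bases]
        z = sum(w)
        return sum(wi * x for wi, x in zip(w, bases)) / z
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid          # tilted mean too low: increase lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in bases]
    z = sum(w)
    return [wi / z for wi in w]

# Hypothetical allocation bases for three enterprises; pushing the
# constrained mean above the plain average tilts shares toward the
# larger bases, i.e. a disproportionate allocation.
shares = maxent_shares([1.0, 2.0, 3.0], 2.4)
```

When the target mean equals the plain average of the bases, the multiplier is zero and the shares are uniform; moving the target away from that average is what produces the disproportionate adjustments the abstract describes.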
Persson, Lennart; Elliott, J Malcolm
2013-05-01
The theory of cannibal dynamics predicts a link between population dynamics and individual life history. In particular, increased individual growth has, in both modeling and empirical studies, been shown to result from a destabilization of population dynamics. We used data from a long-term study of the dynamics of two leech (Erpobdella octoculata) populations to test the hypothesis that maximum size should be higher in a cycling population; one of the study populations exhibited a delayed feedback cycle while the other population showed no sign of cyclicity. A hump-shaped relationship between individual mass of 1-year-old leeches and offspring density the previous year was present in both populations. As predicted from the theory, the maximum mass of individuals was much larger in the fluctuating population. In contrast to predictions, the higher growth rate was not related to energy extraction from cannibalism. Instead, the higher individual mass is suggested to be due to increased availability of resources due to a niche widening with increased individual body mass. The larger individual mass in the fluctuating population was related to a stronger correlation between the densities of 1-year-old individuals and 2-year-old individuals the following year in this population. Although cannibalism was the major mechanism regulating population dynamics, its importance was negligible in terms of providing cannibalizing individuals with energy subsequently increasing their fecundity. Instead, the study identifies a need for theoretical and empirical studies on the largely unstudied interplay between ontogenetic niche shifts and cannibalistic population dynamics.
Klein, Daniel; Zezula, Ivan
The extended growth curve model is discussed in this paper. Two versions of the model are studied in the literature, which differ in how the column spaces of the design matrices are nested. The nesting is applied either to the between-individual or to the within-individual design matrices.
Pennekamp, Werner; Roggenland, Daniela; Lemburg, Stefan; Peters, Soeren; Sterl, Sabrina; Nicolas, Volkmar [University Clinics Bergmannsheil, Department of Radiology and Nuclear Medicine, Bochum (Germany); Hering, Steffen [University Clinics Bergmannsheil, Department of Internal Medicine, Bochum (Germany); Schwenke, Carsten [SCO:SSiS-Statistical Consulting, Berlin (Germany)
2011-05-15
To prove that 1.0 M gadobutrol provides superior contrast enhancement in suspicion of osteomyelitis of the feet compared with 0.5 M gadoterate. MRI of feet was performed on 2 separate occasions. Independent injections of 1.0 M gadobutrol and 0.5 M gadoterate at doses of 0.1 mmol Gd/kg body weight were administered per patient. The interval between the two MR examinations was between 24 h and 7 days. Evaluation was performed in an off-site blinded read. 41 patients were eligible for efficacy analysis. Results of secondary efficacy variables did not show statistically significant differences. For the primary efficacy variable, a trend in favour of gadobutrol was seen in the full analysis set (ITT) population resulting in at least non-inferiority. In the per protocol (PP) analysis set gadobutrol had better contrast than gadoterate (Wilcoxon signed rank test, p = 0.0466). Imaging of the distal lower limb in this special patient population requires a large number of patients to obtain enough comparative images where non-contrast-agent-dependent factors do not disturb contrast agent efficacy. The ITT analysis showed at least non-inferiority of gadobutrol in comparison to gadoterate. The avoidance of imaging artefacts demonstrates a better outcome for gadobutrol. (orig.)
LHCb Off-site HLT Farm Demonstration
Neufeld, Niko
2012-01-01
The LHCb High Level Trigger (HLT) farm consists of about 1300 nodes housed in the underground server room. Due to the constraints of the power supply and cooling system, it is difficult to install more servers in this room in the future. An off-site farm is a solution to enlarge the computing capacity. In this paper, we demonstrate the concept of an LHCb off-site HLT farm extension into the CERN computing center. Furthermore, the performance of the key technologies has been tested in the lab.
Ruslana Sushko
2015-08-01
Purpose: to identify the factors of efficiency of competitive activity of highly skilled basketball players at the stage of maximum realization of individual potential. Material and Methods: in order to identify the factors that have supported the performance of Ukraine's male national team in the European Championship, analysis and generalization of scientific and technical literature and online data, analysis of official protocols of competitive activities, analysis and generalization of best pedagogical practices, pedagogical supervision, and methods of mathematical statistics were used. Results: the efficiency of competitive activity of basketball players was analyzed using indicators such as team roles, won and lost matches, scored and missed points, and technical, tactical, and age indicators. Conclusions: the factors of efficiency of competitive activity of highly skilled basketball players at the stage of maximum realization of individual potential were identified with regard to age indicators.
Abu Alhaija, Elham S J; Al Zo'ubi, Ibraheem A; Al Rousan, Mohammed E; Hammad, Mohammad M
2010-02-01
This study was carried out to record maximum occlusal bite force (MBF) in Jordanian students with three different facial types: short, average, and long, and to determine the effect of gender, type of functional occlusion, and the presence of premature contacts and parafunctional habits on MBF. Sixty dental students (30 males and 30 females) were divided into three equal groups based on the maxillomandibular planes angle (Max/Mand) and degree of anterior overlap: short-faced students with a deep anterior overbite, average-faced students, and long-faced students (Max/Mand > or = 32 degrees). Their age ranged between 20 and 23 years. MBF was measured using a hydraulic occlusal force gauge. Occlusal factors, including the type of functional occlusion, the presence of premature contacts, and parafunctional habits, were recorded. Differences between groups were assessed using a t-test and analysis of variance. The average MBF in Jordanian adults was 573.42 +/- 140.18 N. Those with a short face had the highest MBF (679.60 +/- 117.46 N) while the long-face types had the lowest MBF (453.57 +/- 98.30 N; P < 0.001). The average MBF was 599.02 +/- 145.91 N in males and 546.97 +/- 131.18 N in females (P = 0.149); no gender differences were observed. The average MBF was higher in subjects with premature contacts than in those without, while it did not differ between subjects with different types of functional occlusion or with parafunctional habits.
Denise Heagerty
2002-01-01
To reduce the number of regular break-ins on CERN machines due to passwords exposed on the network in clear text, OFF-SITE TELNET ACCESS TO CERN WILL BE BLOCKED in the CERN firewall from Tuesday 28 January 2003 If you use telnet to access CERN computers from outside CERN then please see the link below for alternative access means and further advice http://cern.ch/security/telnet Denise Heagerty, CERN Computer Security officer, Computer.Security@cern.ch
2004-01-01
To reduce the number of regular break-ins on CERN machines due to passwords exposed on the network in clear text, OFF-SITE FTP ACCESS TO CERN WILL BE BLOCKED in the CERN firewall from: Tuesday 20th January 2004 If you use ftp to access CERN computers from outside CERN then please see the link below for alternative access means and further advice: http://cern.ch/security/ftp Denise Heagerty, CERN Computer Security officer, Computer.Security@cern.ch
Off-Site Source Recovery Project Overview.
Coel-Roback, Rebecca J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-09-29
This report introduces the Off-Site Source Recovery project and gives a summary of domestic and international work. The mission of OSRP is to eliminate excess, unwanted, abandoned, or orphan radioactive sealed sources that pose a potential risk to health, safety, and national security. OSRP identifies and tracks disused sealed sources potentially requiring recovery, and performs special form encapsulation for sealed sources to simplify transportation.
The Off-Site Rule. CERCLA Information Brief
Whitehead, B.
1994-03-01
Under Section 121(d)(3) of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), as amended by the Superfund Amendments and Reauthorization Act (SARA) of 1986, wastes generated as a result of CERCLA remediation activities and transferred off-site must be managed at a facility operating in compliance with federal laws. EPA issued its Off-Site Policy (OSWER Directive No. 9834.11), which gave guidance on complying with this particular requirement. Specifically, EPA requires off-site waste management facilities to fulfill EPA's definition of acceptability and has established detailed procedures for issuing and reviewing unacceptability determinations. EPA proposed amending the National Contingency Plan (NCP) (40 CFR part 300) to include the requirements contained in the Off-Site Policy (53 FR 48218). On September 22, 1993, EPA published the Off-Site Rule [58 FR 49200], which became effective on October 22, 1993. The primary purpose of the Off-Site Rule is to clarify and codify CERCLA's requirement to prevent wastes generated from remediation activities conducted under CERCLA from contributing to present or future environmental problems at the off-site waste management facilities that receive them. Thus, the Off-Site Rule requires that CERCLA wastes be sent only to off-site facilities that meet EPA's acceptability criteria. The final Off-Site Rule makes two major changes to the proposed Off-Site Rule: (1) only EPA, not an authorized State, can make determinations of the acceptability of off-site facilities that manage CERCLA wastes, and (2) the Off-Site Rule eliminates the distinction between CERCLA wastes governed under pre-SARA and post-SARA agreements. The purpose of this Information Brief is to highlight and clarify EPA's final Off-Site Rule and its implications for DOE remedial actions under CERCLA.
State of offsite construction in India: Drivers and barriers
Arif, M.; Bendi, D.; Sawhney, A.; Iyer, K. C.
2012-05-01
The rapid growth of the construction industry in India has influenced key players in the industry to adopt alternative technologies addressing time, cost, and quality. The rising demand for housing, infrastructure, and other facilities has further highlighted the need for the construction industry to look at adopting alternative building technologies. Offsite construction has evolved as a panacea for dealing with the under-supply and poor quality in the present-day construction industry. Several offsite techniques have been adopted by the construction sector. Although different forms of offsite techniques have been around for a while, their uptake has been low in the Indian context. This paper presents perceptions about offsite construction in India and highlights some of the barriers and drivers facing the Indian construction industry. The data were gathered through a survey of 17 high-level managers from some of the largest stakeholder organizations of the construction sector in India. The influence of time and cost has been highlighted as a major factor fuelling the adoption of offsite construction. However, the influence of current planning systems and the need for a paradigm shift are some of the prominent barriers to the adoption of offsite techniques.
Yu, C.; Gnanapragasam, E.; Cheng, J.-J.; Biwer, B.
2006-05-22
The main purpose of this report is to document the benchmarking results and verification of the RESRAD-OFFSITE code as part of the quality assurance requirements of the RESRAD development program. This documentation will enable the U.S. Department of Energy (DOE) and its contractors, and the U.S. Nuclear Regulatory Commission (NRC) and its licensees and other stakeholders to use the quality-assured version of the code to perform dose analysis in a risk-informed and technically defensible manner to demonstrate compliance with the NRC's License Termination Rule, Title 10, Part 20, Subpart E, of the Code of Federal Regulations (10 CFR Part 20, Subpart E); DOE's 10 CFR Part 834, Order 5400.5, ''Radiation Protection of the Public and the Environment''; and other Federal and State regulatory requirements as appropriate. The other purpose of this report is to document the differences and similarities between the RESRAD (onsite) and RESRAD-OFFSITE codes so that users (dose analysts and risk assessors) can make a smooth transition from use of the RESRAD (onsite) code to use of the RESRAD-OFFSITE code for performing both onsite and offsite dose analyses. The evolution of the RESRAD-OFFSITE code from the RESRAD (onsite) code is described in Chapter 1 to help the dose analyst and risk assessor make a smooth conceptual transition from the use of one code to that of the other. Chapter 2 provides a comparison of the predictions of RESRAD (onsite) and RESRAD-OFFSITE for an onsite exposure scenario. Chapter 3 documents the results of benchmarking RESRAD-OFFSITE's atmospheric transport and dispersion submodel against the U.S. Environmental Protection Agency's (EPA's) CAP88-PC (Clean Air Act Assessment Package-1988) and ISCLT3 (Industrial Source Complex-Long Term) models. Chapter 4 documents the comparison results of the predictions of the RESRAD-OFFSITE code and its submodels with the predictions of peer models. This report was prepared
Jamil, T.; Braak, ter C.J.F.
2012-01-01
Maximum likelihood (ML) in the linear model overfits when the number of predictors (M) exceeds the number of objects (N). One possible solution is the relevance vector machine (RVM), which is a form of automatic relevance determination and has gained popularity in the pattern recognition and machine learning literature...
Puder, M. G.; Veil, J. A.
2006-09-05
A survey conducted in 1995 by the American Petroleum Institute (API) found that the U.S. exploration and production (E&P) segment of the oil and gas industry generated more than 149 million bbl of drilling wastes, almost 18 billion bbl of produced water, and 21 million bbl of associated wastes. The results of that survey, published in 2000, suggested that 3% of drilling wastes, less than 0.5% of produced water, and 15% of associated wastes are sent to offsite commercial facilities for disposal. Argonne National Laboratory (Argonne) collected information on commercial E&P waste disposal companies in different states in 1997. While the information is nearly a decade old, the report has proved useful. In 2005, Argonne began collecting current information to update and expand the data. This report describes the new 2005-2006 database and focuses on the availability of offsite commercial disposal companies, the prevailing disposal methods, and estimated disposal costs. The data were collected in two phases. In the first phase, state oil and gas regulatory officials in 31 states were contacted to determine whether their agency maintained a list of permitted commercial disposal companies dedicated to oil. In the second phase, individual commercial disposal companies were interviewed to determine disposal methods and costs. The availability of offsite commercial disposal companies and facilities falls into three categories. The states with high oil and gas production typically have a dedicated network of offsite commercial disposal companies and facilities in place. In other states, such an infrastructure does not exist, and very often commercial disposal companies focus on produced water services. About half of the states do not have any industry-specific offsite commercial disposal infrastructure. In those states, operators take their wastes to local municipal landfills if permitted or haul the wastes to other states. This report provides state-by-state summaries of the
Peixoto, L A; Bhering, L L; Cruz, C D
2016-11-21
Genomic selection is a useful technique to assist breeders in selecting the best genotypes accurately. Phenotypic selection in the F2 generation has low accuracy because each genotype is represented by a single individual; thus, genomic selection can increase selection accuracy at this stage of the breeding program. This study aimed to establish the optimal number of individuals required to compose the training population and the number of markers necessary to obtain maximum accuracy with genomic selection methods in F2 populations. F2 populations with 1000 individuals were simulated, and six traits were simulated with different heritability values (5, 20, 40, 60, 80 and 99%). Ridge regression best linear unbiased prediction (RR-BLUP) was used in all analyses. Genomic selection models were set by varying the number of individuals in the training population (2 to 1000 individuals) and the number of markers (2 to 3060 markers). Phenotypic accuracy, genotypic accuracy, genetic variance, residual variance, and heritability were evaluated. The greater the number of individuals in the training population, the higher the accuracy and the closer the genotypic variance, residual variance, and heritability estimates were to their true values. The higher the heritability of the trait, the more markers were necessary to obtain maximum accuracy, ranging from 200 for the trait with 5% heritability to 900 for the trait with 99% heritability. Therefore, genomic selection models for prediction in F2 populations must consist of 200 to 900 markers of major effect on the trait and more than 600 individuals in the training population.
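As a rough illustration of the ridge regression BLUP machinery such a study relies on, the sketch below simulates a small F2-like marker data set and predicts genetic values for untrained individuals. The population size, marker count, heritability, and shrinkage parameter here are illustrative assumptions, not the paper's actual simulation settings.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical F2 population: 1000 individuals, 500 biallelic markers
# coded -1/0/1 (aa/Aa/AA) with F2 expected frequencies 1:2:1.
n_ind, n_mark, n_qtl, h2 = 1000, 500, 50, 0.4
X = rng.choice([-1, 0, 1], size=(n_ind, n_mark), p=[0.25, 0.5, 0.25]).astype(float)

# Trait controlled by 50 of the markers, heritability ~0.4 (assumed values)
beta = np.zeros(n_mark)
qtl = rng.choice(n_mark, n_qtl, replace=False)
beta[qtl] = rng.normal(0.0, 1.0, n_qtl)
g = X @ beta                                        # true genetic values
e = rng.normal(0.0, np.sqrt(np.var(g) * (1 - h2) / h2), n_ind)
y = g + e                                           # phenotypes

# Ridge-regression BLUP: beta_hat = (X'X + lam*I)^{-1} X'y, with the usual
# assumption that each marker carries an equal share of the genetic variance.
train = rng.choice(n_ind, 600, replace=False)       # 600-individual training set
valid = np.setdiff1d(np.arange(n_ind), train)
lam = np.var(e) / (np.var(g) / n_mark)
Xt, yt = X[train], y[train] - y[train].mean()
beta_hat = np.linalg.solve(Xt.T @ Xt + lam * np.eye(n_mark), Xt.T @ yt)

# Genotypic accuracy: correlation of predicted and true genetic values
acc = np.corrcoef(X[valid] @ beta_hat, g[valid])[0, 1]
print(f"genotypic accuracy: {acc:.2f}")
```

With a fixed seed the prediction accuracy on the held-out individuals is well above zero, illustrating why a larger training set raises accuracy in such simulations.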
Soldat, J.K.; Price, K.R.; McCormack, W.D.
1986-02-01
Since 1957, evaluations of offsite impacts from each year of operation have been summarized in publicly available, annual environmental reports. These evaluations included estimates of potential radiation exposure to members of the public, either in terms of percentages of the then permissible limits or in terms of radiation dose. The estimated potential radiation doses to maximally exposed individuals from each year of Hanford operations are summarized in a series of tables and figures. The applicable standard for radiation dose to an individual for whom the maximum exposure was estimated is also shown. Although the estimates address potential radiation doses to the public from each year of operations at Hanford between 1957 and 1984, their sum will not produce an accurate estimate of doses accumulated over this time period. The estimates were the best evaluations available at the time to assess potential dose from the current year of operation as well as from any radionuclides still present in the environment from previous years of operation. There was a constant striving for improved evaluation of the potential radiation doses received by members of the public, and as a result the methods and assumptions used to estimate doses were periodically modified to add new pathways of exposure and to increase the accuracy of the dose calculations. Three conclusions were reached from this review: radiation doses reported for the years 1957 through 1984 for the maximum individual did not exceed the applicable dose standards; radiation doses reported over the past 27 years are not additive because of the changing and inconsistent methods used; and results from environmental monitoring and the associated dose calculations reported over the 27 years from 1957 through 1984 do not suggest a significant dose contribution from the buildup in the environment of radioactive materials associated with Hanford operations.
Pathways for Off-site Corporate PV Procurement
Heeter, Jenny S [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-09-06
Through July 2017, corporate customers contracted for more than 2,300 MW of utility-scale solar. This paper examines the benefits, challenges, and outlooks for large-scale off-site solar purchasing through four pathways: power purchase agreements, retail choice, utility partnerships (green tariffs and bilateral contracts with utilities), and by becoming a licensed wholesale seller of electricity. Each pathway differs based on where in the United States it is available, the value provided to a corporate off-taker, and the ease of implementation. The paper concludes with a discussion of future pathway comparison, noting that to deploy more corporate off-site solar, new procurement pathways are needed.
George C. Efthimiou
2015-06-01
The capability to predict short-term maximum individual exposure is very important for several applications including, for example, deliberate or accidental releases of hazardous substances, odour fluctuations or exceedance of material flammability levels. Recently, the authors proposed a simple approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. In the first part of this study (Part I), the methodology was validated against field measurements, which are governed by the natural variability of atmospheric boundary conditions. In Part II of this study, an in-depth validation of the approach is performed using reference data recorded under truly stationary and well-documented flow conditions. For this reason, a boundary-layer wind-tunnel experiment was used. The experimental dataset includes 196 time-resolved concentration measurements which capture the dispersion from a continuous point source within an urban model of semi-idealized complexity. The data analysis allowed the improvement of an important model parameter. The model performed very well in predicting the maximum individual exposure, with 95% of its predictions falling within a factor of two of the observations (FAC2 = 0.95). For large time intervals, an exponential correction term was introduced in the model based on the experimental observations. The corrected model is capable of predicting all time intervals, giving an overall FAC2 of 100%.
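The "factor of two of observations" figure quoted above is the standard FAC2 validation metric for dispersion models. A minimal implementation (with made-up example values, not the wind-tunnel data):

```python
import numpy as np

def fac2(predicted, observed):
    """Fraction of predictions within a factor of two of the paired observations."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    ratio = predicted / observed
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

# Illustrative exposure values only
obs = np.array([1.0, 2.0, 4.0, 8.0])
pred = np.array([1.5, 1.2, 9.0, 7.0])
print(fac2(pred, obs))  # 3 of 4 predictions within a factor of two -> 0.75
```

A FAC2 of 0.95, as reported for the model, means 95% of predicted maximum exposures fell within half to double the measured values.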
Analysis of offsite Emergency Planning Zones (EPZ) for the Rocky Flats Plant
Hodgin, C.R.; Armstrong, C.; Daugherty, N.M.; Foppe, T.L.; Petrocchi, A.J.; Southward, B.
1990-05-01
This project plan for Phase II summarizes the design of a project to complete analysis of offsite Emergency Planning Zones (EPZ) for the Rocky Flats Plant. Federal, state, and local governments develop emergency plans for facilities that may affect the public in the event of an accidental release of nuclear or hazardous materials. One of the purposes of these plans is to identify EPZs where actions might be necessary to protect public health. Public protective actions include sheltering, evacuation, and relocation. Agencies use EPZs to develop response plans and to determine needed resources. The State of Colorado, with support from the US Department of Energy (DOE) and Rocky Flats contractors, has developed emergency plans and EPZs for the Rocky Flats Plant periodically beginning in 1980. In Phase II, "Interim Emergency Planning Zones Analysis, Maximum Credible Accident," we will utilize the current Rocky Flats maximum credible accident (MCA), existing dispersion methodologies, and upgraded dosimetry methodologies to update the radiological EPZs. Additionally, we will develop recommendations for EPZs for nonradiological hazardous materials releases and evaluate potential surface water releases from the facility. This project will allow EG&G Rocky Flats to meet current commitments to the State of Colorado and make steady, tangible improvements in our understanding of risk to offsite populations during potential emergencies at the Rocky Flats Plant. 8 refs., 5 figs., 4 tabs.
40 CFR 68.30 - Defining offsite impacts-population.
2010-07-01
... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Defining offsite impacts-population... impacts—population. (a) The owner or operator shall estimate in the RMP the population within a circle... defined in § 68.22(a). (b) Population to be defined. Population shall include residential population. The...
Pesticide use and off-site risk assessment
Yang, X.
2016-01-01
Pesticide use and off-site risk assessment: a case study of glyphosate fate in Chinese Loess soil Xiaomei Yang Abstract: Repeated applications of pesticide may contaminate the soil and water, threatening their quality within
40 CFR 68.22 - Offsite consequence analysis parameters.
2010-07-01
... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Offsite consequence analysis parameters. 68.22 Section 68.22 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... topography, as appropriate. Urban means that there are many obstacles in the immediate area; obstacles...
40 CFR 68.165 - Offsite consequence analysis.
2010-07-01
... (toxics only); (10) Topography (toxics only); (11) Distance to endpoint; (12) Public and environmental... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Offsite consequence analysis. 68.165 Section 68.165 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS...
40 CFR 68.33 - Defining offsite impacts-environment.
2010-07-01
... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Defining offsite impacts-environment. 68.33 Section 68.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... impacts—environment. (a) The owner or operator shall list in the RMP environmental receptors within...
LHCb: The LHCb off-Site HLT Farm Demonstration
Liu, Guoming
2012-01-01
The LHCb High Level Trigger (HLT) farm consists of about 1300 nodes, which are housed in the underground server room at the experiment point. Due to the constraints of the power supply and cooling system, it is difficult to install more servers in this room in the future. An off-site computing farm is a solution to enlarge the computing capacity. In this paper, we demonstrate the LHCb off-site HLT farm, which is located in the CERN computing center. Since we use private IP addresses for the HLT farm, a virtual private network (VPN) is needed to bridge the two sites. There are two kinds of traffic in the event builder: control traffic for the control and monitoring of the farm, and the Data Acquisition (DAQ) traffic. We adopt an IP tunnel for the control traffic and Network Address Translation (NAT) for the DAQ traffic. The performance of the off-site farm has been tested and compared with the on-site farm. The effect of the network latency has been studied. To employ a large off-site farm, one of the potential bottle...
User's Guide for RESRAD-OFFSITE
Gnanapragasam, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Yu, C. [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-04-01
The RESRAD-OFFSITE code can be used to model the radiological dose or risk to an offsite receptor. This User’s Guide for RESRAD-OFFSITE Version 3.1 is an update of the User’s Guide for RESRAD-OFFSITE Version 2 contained in Appendix A of the User’s Manual for RESRAD-OFFSITE Version 2 (ANL/EVS/TM/07-1, DOE/HS-0005, NUREG/CR-6937). This user’s guide presents the basic information necessary to use Version 3.1 of the code. It also points to the help file and other documents that provide more detailed information about the inputs, the input forms, and the features/tools in the code; two of the features (overriding the source term and computing area factors) are discussed in the appendices to this guide. Section 2 describes how to download, install, and verify the installation of the code. Section 3 shows ways to navigate through the input screens to simulate various exposure scenarios and to view the results in graphics and text reports. Section 4 has screen shots of each input form in the code and provides basic information about each parameter to increase the user’s understanding of the code. Section 5 outlines the contents of all the text reports and the graphical output. It also describes the commands in the two output viewers. Section 6 deals with the probabilistic and sensitivity analysis tools available in the code. Section 7 details the various ways of obtaining help in the code.
Lawrence Livermore National Laboratory offsite hazardous waste shipment data validation report
NONE
1995-09-01
The U.S. Department of Energy Headquarters requested this report to verify that Lawrence Livermore National Laboratory (LLNL) properly categorized hazardous waste shipped offsite from 1984 to 1991. LLNL categorized the waste shipments by the new guidelines provided on the definition of radioactive waste. For this validation, waste that has had no radioactivity added by DOE operations is nonradioactive. Waste to which DOE operations has added or concentrated any radioactivity is radioactive. This report documents findings from the review of available LLNL hazardous waste shipment information and summarizes the data validation strategy. The report discusses administrative and radiological control procedures in place at LLNL during the data validation period. It also describes sampling and analysis and surface survey procedures used in determining radionuclide concentrations for offsite release of hazardous waste shipments. The evaluation team reviewed individual items on offsite hazardous waste shipments and classified them, using the DOE-HQ waste category definitions. LLNL relied primarily on generator knowledge to classify wastes. Very little radioanalytical information exists on hazardous wastes shipped from LLNL. Slightly greater than one-half of those hazardous waste items for which the documentation included radioanalytical data showed concentrations of radioactivity higher than the LLNL release criteria used from 1989 to 1991. Based on this small amount of available radioanalytical data, very little (less than one percent) of the hazardous waste generated at the LLNL main site can be shown to contain DOE added radioactivity. LLNL based the criteria on the limit of analytical sensitivity for gross alpha and gross beta measurements and the background levels of tritium. Findings in this report are based on information and documentation on the waste handling procedures in place before the start of the hazardous waste shipping moratorium in May 1991.
Offsite Radiological Consequence Analysis for the Bounding Flammable Gas Accident
Carro, C A
2003-01-01
This document quantifies the offsite radiological consequences of the bounding flammable gas accident for comparison with the 25 rem Evaluation Guideline established in DOE-STD-3009, Appendix A. The bounding flammable gas accident is a detonation in a single-shell tank. The calculation applies reasonably conservative input parameters in accordance with DOE-STD-3009, Appendix A, guidance. Revision 1 incorporates comments received from the Office of River Protection.
Sørensen, Jette Led; Østergaard, Doris; LeBlanc, Vicki
2017-01-01
BACKGROUND: Simulation-based medical education (SBME) has traditionally been conducted as off-site simulation in simulation centres. Some hospital departments also provide off-site simulation using in-house training room(s) set up for simulation away from the clinical setting, and these activities are called in-house training. In-house training facilities can be part of hospital departments and resemble to some extent simulation centres but often have less technical equipment. In situ simulation, introduced over the past decade, mainly comprises of team-based activities and occurs in patient care ... that choice of setting for simulations does not seem to influence individual and team learning. Department-based local simulation, such as simulation in-house and especially in situ simulation, leads to gains in organisational learning. The overall objectives of simulation-based education and factors ...
Sørensen, Jette Led; Østergaard, Doris; LeBlanc, Vicki
2017-01-01
BACKGROUND: Simulation-based medical education (SBME) has traditionally been conducted as off-site simulation in simulation centres. Some hospital departments also provide off-site simulation using in-house training room(s) set up for simulation away from the clinical setting, and these activities ... DISCUSSION: Non-randomised studies argue that in situ simulation is more effective for educational purposes than other types of simulation settings. Conversely, the few comparison studies that exist, either randomised or retrospective, show that choice of setting does not seem to influence individual and team learning. Department-based local simulation, such as simulation in-house and especially in situ simulation, leads to gains in organisational learning. The overall objectives of simulation-based education and factors ...
13 CFR 120.1025 - Off-site reviews and monitoring.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Off-site reviews and monitoring. 120.1025 Section 120.1025 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Risk-Based Lender Oversight Supervision § 120.1025 Off-site reviews and monitoring. SBA may conduct...
14 CFR 151.95 - Fences; distance markers; navigational and landing aids; and offsite work.
2010-01-01
... landing aids; and offsite work. 151.95 Section 151.95 Aeronautics and Space FEDERAL AVIATION... Standards § 151.95 Fences; distance markers; navigational and landing aids; and offsite work. (a) Boundary... navigational aids is eligible for inclusion in a proj- ect whenever necessitated by development on the...
40 CFR 1400.5 - Internet access to certain off-site consequence analysis data elements.
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Internet access to certain off-site... DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Public Access § 1400.5 Internet access to certain off... elements in the risk management plan database available on the Internet: (a) The concentration of...
2010-01-01
... or substantial radiation levels offsite. 840.4 Section 840.4 Energy DEPARTMENT OF ENERGY... substantial radiation levels offsite. DOE will determine that there has been a substantial discharge or dispersal of radioactive material offsite, or that there have been substantial levels of radiation...
2010-01-01
... or substantial radiation levels offsite. 140.84 Section 140.84 Energy NUCLEAR REGULATORY COMMISSION... § 140.84 Criterion I—Substantial discharge of radioactive material or substantial radiation levels... radioactive material offsite, or that there have been substantial levels of radiation offsite, when, as...
40 CFR 1400.8 - Access to off-site consequence analysis information by Federal government officials.
2010-07-01
... INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.8 Access to off-site consequence analysis information by Federal... analysis information by Federal government officials. 1400.8 Section 1400.8 Protection of...
2010-07-01
... CONSEQUENCE ANALYSIS INFORMATION DISTRIBUTION OF OFF-SITE CONSEQUENCE ANALYSIS INFORMATION Access to Off-Site Consequence Analysis Information by Government Officials. § 1400.9 Access to off-site consequence analysis... analysis information by State and local government officials. 1400.9 Section 1400.9 Protection...
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and on critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Zhao, W.; Cella, M.; Pasqua, O. Della; Burger, D.M.; Jacqz-Aigrain, E.
2012-01-01
WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT: Abacavir is used to treat HIV infection in both adults and children. The recommended paediatric dose is 8 mg kg(-1) twice daily up to a maximum of 300 mg twice daily. Weight was identified as the central covariate influencing pharmacokinetics of abacavir in
Statistical Analysis of Loss of Offsite Power Events
Andrija Volkanovski
2016-01-01
This paper presents the results of a statistical analysis of the loss of offsite power (LOOP) events registered in four reviewed databases. The reviewed databases include the IRSN (Institut de Radioprotection et de Sûreté Nucléaire) SAPIDE database and the GRS (Gesellschaft für Anlagen- und Reaktorsicherheit mbH) VERA database, reviewed over the period from 1992 to 2011. The US NRC (Nuclear Regulatory Commission) Licensee Event Reports (LER) database and the IAEA International Reporting System (IRS) database were screened for relevant events registered over the period from 1990 to 2013. The number of LOOP events in each year of the analysed period and the mode of operation were assessed during the screening. The LOOP frequencies obtained for the French and German nuclear power plants (NPPs) during critical operation are of the same order of magnitude, with plant-related events as the dominant contributor. A frequency of one LOOP event per shutdown year is obtained for German NPPs in the shutdown mode of operation. For the US NPPs, the obtained LOOP frequency for critical and shutdown modes is comparable to that assessed in NUREG/CR-6890. A decreasing trend is obtained for the LOOP events registered in three of the databases (IRSN, GRS, and NRC).
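The two headline quantities of such an analysis, an event frequency normalized by operating experience and a trend over time, can be sketched as follows. The annual counts and fleet size below are invented placeholders, not values from the reviewed databases.

```python
import numpy as np

# Hypothetical annual LOOP event counts over a 20-year window
years = np.arange(1992, 2012)
counts = np.array([6, 5, 7, 4, 5, 4, 4, 3, 4, 3, 3, 2, 3, 2, 2, 2, 1, 2, 1, 1])
reactor_years_per_yr = 58.0   # assumed constant fleet size, for simplicity

# Frequency: total events per reactor-year of operating experience
total_ry = reactor_years_per_yr * len(years)
frequency = counts.sum() / total_ry

# Trend: simple least-squares slope of annual counts vs. year
slope, intercept = np.polyfit(years, counts, 1)

print(f"mean LOOP frequency: {frequency:.4f} per reactor-year")
print(f"trend: {slope:.3f} events/year")   # negative slope -> decreasing trend
```

A negative fitted slope corresponds to the decreasing trend reported for three of the databases; a real analysis would also attach Poisson uncertainty bounds to the frequency.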
Qingtongxia Aluminum Carrying Out Off-site Renovation in Ningdong Energy & Chemical Base
2008-01-01
Recently, the off-site renovation project of Qingtongxia Aluminum commenced construction in Linhe General Industrial Park of Ningdong Energy & Chemical Base, symbolizing a concrete step of Qingtongxia Aluminum
AMCO Off-Site Air Monitoring Polygons, Oakland CA, 2017, US EPA Region 9
U.S. Environmental Protection Agency — This feature class was developed to support the AMCO Chemical Superfund Site air monitoring process and depicts a single polygon layer, Off-Site Air Monitors,...
77 FR 59001 - Fee for Services To Support FEMA's Offsite Radiological Emergency Preparedness Program
2012-09-25
... SECURITY Federal Emergency Management Agency Fee for Services To Support FEMA's Offsite Radiological Emergency Preparedness Program AGENCY: Federal Emergency Management Agency, DHS. ACTION: Notice. SUMMARY... services provided by FEMA personnel for FEMA's Radiological Emergency Preparedness (REP) Program....
75 FR 19985 - Fee for Services To Support FEMA's Offsite Radiological Emergency Preparedness Program
2010-04-16
... SECURITY Federal Emergency Management Agency Fee for Services To Support FEMA's Offsite Radiological Emergency Preparedness Program AGENCY: Federal Emergency Management Agency, DHS. ACTION: Notice. SUMMARY... services provided by FEMA personnel for FEMA's Radiological Emergency Preparedness (REP) Program....
de Oliveira, Liliam Fernandes; Menegaldo, Luciano Luporini
2010-10-19
EMG-driven models can be used to estimate muscle force in biomechanical systems. Collected and processed EMG readings are used as the input of a dynamic system, which is integrated numerically. This approach requires the definition of a reasonably large set of parameters. Some of these vary widely among subjects, and slight inaccuracies in such parameters can lead to large model output errors. One of these parameters is the maximum voluntary contraction force (F(om)). This paper proposes an approach to find F(om) by estimating muscle physiological cross-sectional area (PCSA) using ultrasound (US), which is multiplied by a realistic value of maximum muscle specific tension. Ultrasound is used to measure muscle thickness, which allows for the determination of muscle volume through regression equations. Soleus, gastrocnemius medialis and gastrocnemius lateralis PCSAs are estimated using published volume proportions among leg muscles, which also requires measurements of muscle fiber length and pennation angle by US. F(om) obtained by this approach and from data widely cited in the literature was used to comparatively test a Hill-type EMG-driven model of the ankle joint. The model uses 3 EMGs (Soleus, gastrocnemius medialis and gastrocnemius lateralis) as inputs with joint torque as the output. The EMG signals were obtained in a series of experiments carried out with 8 adult male subjects, who performed an isometric contraction protocol consisting of 10s step contractions at 20% and 60% of the maximum voluntary contraction level. Isometric torque was simultaneously collected using a dynamometer. A statistically significant reduction in the root mean square error was observed when US-obtained F(om) was used, as compared to F(om) from the literature.
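The maximum-force estimate described above reduces to F(om) = PCSA × specific tension, with PCSA derived from muscle volume, fiber length and pennation angle. The sketch below illustrates that arithmetic; all numbers, including the specific-tension value, are illustrative assumptions, and the paper's ultrasound regression equations and volume proportions are not reproduced.

```python
import math

def pcsa_cm2(volume_cm3, fiber_length_cm, pennation_deg):
    """Physiological cross-sectional area (cm^2) from volume, fiber length
    and pennation angle (one common PCSA definition; others omit cos)."""
    return volume_cm3 * math.cos(math.radians(pennation_deg)) / fiber_length_cm

SPECIFIC_TENSION = 30.0  # N/cm^2 -- assumed value within the commonly cited range

# Illustrative soleus-like inputs (volume would come from the US thickness
# regressions; fiber length and pennation from the US images)
vol = 450.0    # cm^3
lf = 4.0       # cm
theta = 25.0   # degrees

F_om = pcsa_cm2(vol, lf, theta) * SPECIFIC_TENSION
print(f"F_om ≈ {F_om:.0f} N")
```

Because PCSA scales F(om) linearly, subject-specific volume estimates from ultrasound directly shift the force-scaling of the Hill-type model, which is why the US-based F(om) reduced the torque RMSE in the study.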
Ron Warren
2006-12-01
hypothetical maximally exposed individual at the closest NTS boundary to the proposed Divine Strake experiment, as estimated by the CAP88-PC model, was 0.005 mrem with wind blowing directly towards that location. Boundary dose, as modeled by NARAC, ranged from about 0.006 to 0.007 mrem. Potential doses to actual offsite populated locations were generally two to five times lower still, or about 40 to 100 times lower than the 0.1 mrem level at which EPA approval is required pursuant to Section 61.96.
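The quoted margins are internally consistent, as a quick check shows: populated-location doses two to five times below the 0.005 mrem boundary estimate sit roughly 40 to 100 times under the 0.1 mrem level.

```python
# Consistency check of the dose figures quoted above (values from the text)
boundary_dose = 0.005          # mrem, CAP88-PC boundary estimate
epa_level = 0.1                # mrem, level at which EPA approval is required

low = boundary_dose / 5.0      # populated-location dose, lower bound
high = boundary_dose / 2.0     # populated-location dose, upper bound

margin_min = epa_level / high  # ≈ 40
margin_max = epa_level / low   # ≈ 100
print(margin_min, margin_max)
```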
Reconstructing ancient sustainability: a comparison of onsite and offsite data
Lubos, Carolin; Dreibrodt, Stefan; Horejs, Barbara
2013-04-01
With the onset of sedentism, humans started to convert their surroundings. Whereas reconstructions of geochemical traces of settlement activity (e.g. Arrhenius, 1931) or of man's pressure on the soils of landscapes (e.g. van Andel et al., 1990; Bork, 1998) were carried out at many sites, holistic approaches questioning the sustainability of ancient societies have so far been missing. A new approach, applied to the multi-layered settlement mound "Cukurici Höyük" (western Anatolia, Turkey), aims at comparing land use intensity and settlement intensity. Land use intensity of the former settlers will be described by determining slope instability phases and quantifying slope deposits at hills adjacent to the settlement. Geochemical and physical properties as well as bio remains will be analysed in the dated debris layers onsite and quantified as matter fluxes. Matter accumulation onsite, an indicator of settlement intensity, is compared to slope instability phases offsite, describing the impact of former settlers on their environment. The approach aims at quantifying historical settlement pressure over several settlement phases and might shed light on different phases of sustainability in ancient times. The planned project is embedded within the archaeological project (ERC Project / Austrian Archaeological Institute) which investigates alternating societal systems in a changing environment between 7000 and 3000 BC. Focus is laid on architectural research, archaeobotany, archaeozoology, lithics, metallurgy, and ore deposits. In a first geoarchaeological field campaign, differentiable slope deposits could be proved. These contained datable organic material as well as pottery sherds dating to different historical phases. A well-established archaeological chronosequence of settlement layers will provide the onsite framework for this new project. The paper presents preliminary results of the outlined approach. Additionally, several geochemical methodologies applied to the debris
User's Manual for RESRAD-OFFSITE Version 2.
Yu, C.; Gnanapragasam, E.; Biwer, B. M.; Kamboj, S.; Cheng, J. -J.; Klett, T.; LePoire, D.; Zielen, A. J.; Chen, S. Y.; Williams, W. A.; Wallo, A.; Domotor, S.; Mo, T.; Schwartzman, A.; Environmental Science Division; DOE; NRC
2007-09-05
The RESRAD-OFFSITE code is an extension of the RESRAD (onsite) code, which has been widely used for calculating doses and risks from exposure to radioactively contaminated soils. The development of RESRAD-OFFSITE started more than 10 years ago, but new models and methodologies have been developed, tested, and incorporated since then. Some of the new models have been benchmarked against other independently developed (international) models. The databases used have also expanded to include all the radionuclides (more than 830) contained in the International Commission on Radiological Protection (ICRP) 38 database. This manual provides detailed information on the design and application of the RESRAD-OFFSITE code. It describes in detail the new models used in the code, such as the three-dimensional dispersion groundwater flow and radionuclide transport model, the Gaussian plume model for atmospheric dispersion, and the deposition model used to estimate the accumulation of radionuclides in offsite locations and in foods. Potential exposure pathways and exposure scenarios that can be modeled by the RESRAD-OFFSITE code are also discussed. A user's guide is included in Appendix A of this manual. The default parameter values and parameter distributions are presented in Appendix B, along with a discussion on the statistical distributions for probabilistic analysis. A detailed discussion on how to reduce run time, especially when conducting probabilistic (uncertainty) analysis, is presented in Appendix C of this manual.
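The manual names a Gaussian plume model for atmospheric dispersion. For orientation, the textbook ground-level form of that model is sketched below; RESRAD-OFFSITE's actual implementation and parameter handling are considerably more elaborate, and the numbers used here are arbitrary.

```python
import math

def plume_conc(Q, u, y, H, sigma_y, sigma_z):
    """Ground-level (z = 0) Gaussian plume concentration downwind of a point source.

    Q: release rate, u: wind speed, y: crosswind offset from the centerline,
    H: effective release height, sigma_y/sigma_z: dispersion coefficients
    evaluated at the downwind distance of interest.
    """
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    # At z = 0 the usual ground-reflection term collapses to 2*exp(-H^2/2*sigma_z^2)
    vertical = 2.0 * math.exp(-H**2 / (2.0 * sigma_z**2))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Arbitrary illustrative values
c_centerline = plume_conc(Q=1.0, u=3.0, y=0.0, H=10.0, sigma_y=20.0, sigma_z=10.0)
c_offaxis = plume_conc(Q=1.0, u=3.0, y=30.0, H=10.0, sigma_y=20.0, sigma_z=10.0)
print(c_centerline, c_offaxis)  # concentration falls off the plume centerline
```

The deposition model mentioned in the manual then converts such air concentrations into radionuclide accumulation at offsite locations and in foods.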
Using RFID to enhance security in off-site data storage.
Lopez-Carmona, Miguel A; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R
2010-01-01
Off-site data storage is one of the most widely used strategies in enterprises of all sizes to improve business continuity. In medium-to-large size enterprises, the off-site data storage processes are usually outsourced to specialized providers. However, outsourcing the storage of critical business information assets raises serious security considerations, some of which are usually either disregarded or incorrectly addressed by service providers. This article reviews these security considerations and presents a radio frequency identification (RFID)-based, off-site, data storage management system specifically designed to address security issues. The system relies on a set of security mechanisms or controls that are arranged in security layers or tiers to balance security requirements with usability and costs. The system has been successfully implemented, deployed and put into production. In addition, an experimental comparison with classical bar-code-based systems is provided, demonstrating the system's benefits in terms of efficiency and failure prevention.
On-Site or Off-Site Renewable Energy Supply Options?
Marszal, Anna Joanna; Heiselberg, Per; Jensen, Rasmus Lund;
2012-01-01
The concept of a Net Zero Energy Building (Net ZEB) encompasses two options of supplying renewable energy which can offset the energy use of a building: on-site or off-site renewable energy supply. Currently, the on-site options are much more popular than the off-site; however, taking into consideration the limited area of roof and/or façade, primarily in dense city areas, the Danish weather conditions, and the growing interest in and number of wind turbine co-ops, the off-site renewable energy supply options could become a meaningful solution for reaching the 'zero' energy goal in the Danish context. Therefore, this paper deploys life cycle cost analysis and takes the private economy perspective to investigate the life cycle cost of different renewable energy supply options, and to identify the cost-optimal combination of energy efficiency and renewable energy generation. The analysis includes...
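A life cycle cost comparison of supply options, in the spirit of the analysis above, boils down to discounting each option's cash flows to present value. The investments, annual costs, discount rate, and lifetimes below are invented placeholders, not figures from the paper.

```python
def lcc(investment, annual_cost, rate, years):
    """Net-present-value life cycle cost over a fixed analysis horizon."""
    return investment + sum(annual_cost / (1.0 + rate) ** t
                            for t in range(1, years + 1))

# Hypothetical options: on-site rooftop PV vs. a share in an off-site wind co-op
onsite_pv = lcc(investment=20_000, annual_cost=150, rate=0.03, years=30)
offsite_wind_share = lcc(investment=12_000, annual_cost=450, rate=0.03, years=30)

cheaper = "off-site" if offsite_wind_share < onsite_pv else "on-site"
print(f"{onsite_pv:.0f} vs {offsite_wind_share:.0f} -> {cheaper} is cost-optimal")
```

The cost-optimal choice flips with the assumed discount rate and annual costs, which is precisely why the paper frames the question as a private-economy life cycle cost analysis rather than a comparison of investment costs alone.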
Wireless remote control clinical image workflow: utilizing a PDA for offsite distribution
Liu, Brent J.; Documet, Luis; Documet, Jorge; Huang, H. K.; Muldoon, Jean
2004-04-01
Last year at RSNA we presented an application to perform wireless remote control of PACS image distribution utilizing a handheld device such as a Personal Digital Assistant (PDA). This paper describes the clinical experiences, including workflow scenarios, of implementing the PDA application to route exams from the clinical PACS archive server to various locations for offsite distribution of clinical PACS exams. By utilizing this remote control application, radiologists can manage image workflow distribution with a single wireless handheld device without impacting their clinical workflow on diagnostic PACS workstations. A PDA application was designed and developed to perform DICOM Query and C-Move requests by a physician from a clinical PACS archive to a CD-burning device for automatic burning of PACS data for offsite distribution. In addition, it was also used for convenient routing of historical PACS exams to the local web server, local workstations, and teleradiology systems. The application was evaluated by radiologists as well as other clinical staff who need to distribute PACS exams to offsite referring physicians' offices and offsite radiologists. An application for image workflow management utilizing wireless technology was implemented in a clinical environment and evaluated. A PDA application was successfully utilized to perform DICOM Query and C-Move requests from the clinical PACS archive to various offsite exam distribution devices. Clinical staff can utilize the PDA to manage image workflow and PACS exam distribution conveniently for offsite consultations by referring physicians and radiologists. This solution allows radiologists to expand their effectiveness in health care delivery both within the radiology department and offsite by improving their clinical workflow.
Off-site training of laparoscopic skills, a scoping review using a thematic analysis
Thinggaard, Ebbe; Kleif, Jakob; Bjerrum, Flemming
2016-01-01
on learning theories including proficiency-based learning, deliberate practice, and self-regulated learning. CONCLUSIONS: Methods of instructional design vary widely in off-site training of laparoscopic skills. Implementation can be facilitated by organizing courses and training curricula following sound education theories such as proficiency-based learning and deliberate practice. Directed self-regulated learning has the potential to improve off-site laparoscopic skills training; however, further studies are needed to demonstrate the effect of this type of instructional design.
Idaho Habitat Evaluation for Off-Site Mitigation Record : Annual Report 1987.
Petrosky, Charles E.; Holubetz, Terry B. (Idaho Dept. of Fish and Game, Boise, ID (USA))
1988-04-01
The Idaho Department of Fish and Game has been monitoring and evaluating existing and proposed habitat improvement projects for steelhead (Salmo gairdneri) and chinook salmon (Oncorhynchus tshawytscha) in the Clearwater and Salmon River drainages over the last four years. Projects included in the evaluation are funded by, or proposed for funding by, the Bonneville Power Administration (BPA) under the Northwest Power Planning Act as off-site mitigation for downstream hydropower development on the Snake and Columbia rivers. A mitigation record is being developed to use increased smolt production at full seeding as the best measure of benefit from a habitat enhancement project. Determination of full benefit from a project depends on the presence of adequate numbers of fish to document actual increases in fish production. The depressed nature of upriver anadromous stocks has precluded attainment of full benefit of any habitat project in Idaho. Partial benefit will be credited to the mitigation record in the interim period of run restoration. According to the BPA Work Plan, project implementors have the primary responsibility for measuring physical habitat and estimating habitat change. To date, Idaho habitat projects have been implemented primarily by the US Forest Service (USFS). The Shoshone-Bannock Tribes (SBT) have sponsored three projects (Bear Valley Mine, Yankee Fork, and the proposed East Fork Salmon River projects). IDFG implemented two barrier-removal projects (Johnson Creek and Boulder Creek) that the USFS was unable to sponsor at that time. The role of IDFG in physical habitat monitoring is primarily to link habitat quality and habitat change to changes in actual, or potential, fish production. Individual papers were processed separately for the data base.
Black, S. C.; Grossman, R. F.; Mullen, A. A.; Potter, G. D.; Smith, D. D. [comps.]
1983-07-01
A principal activity of the Offsite Radiological Safety Program is routine environmental monitoring for radioactive materials in various media and for radiation in areas which may be affected by nuclear tests. It is conducted to document compliance with standards, to identify trends, and to provide information to the public. This report summarizes these activities for CY 1982.
40 CFR 300.440 - Procedures for planning and implementing off-site response actions.
2010-07-01
... applies to any remedial or removal action involving the off-site transfer of any hazardous substance... emergency removal actions under CERCLA, emergency actions taken during remedial actions, or response actions... administrative or judicial challenge to the finding of noncompliance or uncontrolled releases upon which...
Harris, J.D.; Harvego, L.A.; Jacobs, A.M. [Lockheed Martin Idaho Technologies Co., Idaho Falls, ID (United States); Willcox, M.V. [Dept. of Energy Idaho Operations Office, Idaho Falls, ID (United States)
1998-01-01
The Waste Experimental Reduction Facility (WERF) incinerator at the Idaho National Engineering and Environmental Laboratory (INEEL) is one of three incinerators in the US Department of Energy (DOE) Complex capable of incinerating mixed low-level waste (MLLW). WERF has received MLLW from offsite generators and is scheduled to receive more. The State of Idaho supports receipt of offsite MLLW at the WERF incinerator within the requirements established in the INEEL Site Treatment Plan (STP). The incinerator is operating as a Resource Conservation and Recovery Act (RCRA) Interim Status Facility, with a RCRA Part B permit application currently being reviewed by the State of Idaho. Offsite MLLW received from other DOE facilities is currently being incinerated at WERF at no charge to the generator. Residues associated with the incineration of offsite MLLW that meet the Envirocare of Utah waste acceptance criteria are sent to that facility for treatment and/or disposal. WERF is contributing to the treatment and reduction of MLLW in the DOE Complex.
Environmental Assessment Offsite Thermal Treatment of Low-Level Mixed Waste
N/A
1999-05-06
The U.S. Department of Energy (DOE), Richland Operations Office (RL) needs to demonstrate the economics and feasibility of offsite commercial treatment of contact-handled low-level mixed waste (LLMW), containing polychlorinated biphenyls (PCBs) and other organics, to meet existing regulatory standards for eventual disposal.
Off-Site Supervision in Social Work Education: What Makes It Work?
Maynard, Sarah P.; Mertz, Linda K. P.; Fortune, Anne E.
2015-01-01
The field practicum is the signature pedagogy of the social work profession, yet field directors struggle to find adequate field placements--both in quantity and quality. To accommodate more students with a dwindling pool of practicum sites, creative models of field supervision have emerged. This article considers off-site supervision and its…
Analysis of offsite dose calculation methodology for a nuclear power reactor
Moser, Donna Smith [Univ. of North Carolina, Chapel Hill, NC (United States)
1995-01-01
This technical study reviews the methodology for calculating offsite dose estimates as described in the offsite dose calculation manual (ODCM) for the Pennsylvania Power and Light Susquehanna Steam Electric Station (SSES). An evaluation of the SSES ODCM dose assessment methodology indicates that it conforms with methodology accepted by the US Nuclear Regulatory Commission (NRC). Using 1993 SSES effluent data, dose estimates are calculated according to SSES ODCM methodology and compared to the reported 1993 dose estimates produced by the SSES ODCM computer model. The 1993 SSES dose estimates are based on the axioms of Publication 2 of the International Commission on Radiological Protection (ICRP). SSES dose estimates based on the axioms of ICRP Publications 26 and 30 reveal the total body estimates to be the most affected.
Shedrow, C.B.
1999-11-29
The Safety Analysis Report documents the safety authorization basis for the Receiving Basin for Offsite Fuels (RBOF) and the Resin Regeneration Facility (RRF) at the Savannah River Site (SRS). The present mission of the RBOF and RRF is to continue in providing a facility for the safe receipt, storage, handling, and shipping of spent nuclear fuel assemblies from power and research reactors in the United States, fuel from SRS and other Department of Energy (DOE) reactors, and foreign research reactors fuel, in support of the nonproliferation policy. The RBOF and RRF provide the capability to handle, separate, and transfer wastes generated from nuclear fuel element storage. The DOE and Westinghouse Savannah River Company, the prime operating contractor, are committed to managing these activities in such a manner that the health and safety of the offsite general public, the site worker, the facility worker, and the environment are protected.
Minutes of the workshop on off-site release criteria for contaminated materials
Singh, S.P.N.
1989-11-01
A one and one-half-day workshop was held May 2-3, 1989, at the Pollard Auditorium in Oak Ridge, Tennessee, with the objective of formulating a strategy for developing reasonable and uniform criteria for releasing radioactively contaminated materials from the US Department of Energy (DOE) sites. This report contains the minutes of the workshop. At the conclusion of the workshop, a plan was formulated to facilitate the development of the above-mentioned off-site release criteria.
Davis, M.G.; Flotard, R.D.; Fontana, C.A.; Huff, P.A.; Maunu, H.K.; Mouck, T.L.; Mullen, A.A.; Sells, M.D.
1997-08-01
This report describes the Offsite Radiation Safety Program. This laboratory operated an environmental radiation monitoring program in the region surrounding the Nevada Test Site (NTS) and at former test sites in Alaska, Colorado, Mississippi, Nevada, and New Mexico. The surveillance program is designed to measure levels and trends of radioactivity, if present, in the environment surrounding testing areas to ascertain whether current radiation levels and associated doses to the general public are in compliance with existing radiation protection standards. The surveillance program additionally has the responsibility to take action to protect the health and well being of the public in the event of any accidental release of radioactive contaminants. Offsite levels of radiation and radioactivity are assessed by sampling milk, water, and air; by deploying thermoluminescent dosimeters (TLDs); and using pressurized ionization chambers (PICs). No nuclear weapons testing was conducted in 1996 due to the continuing nuclear test moratorium. During this period, R and IE personnel maintained readiness capability to provide direct monitoring support if testing were to be resumed and ascertained compliance with applicable EPA, DOE, state, and federal regulations and guidelines. Comparison of the measurements and sample analysis results with background levels and with appropriate standards and regulations indicated that there was no airborne radioactivity from diffusion or resuspension detected by the various EPA monitoring networks surrounding the NTS. There was no indication of potential migration of radioactivity to the offsite area through groundwater and no radiation exposure above natural background was received by the offsite population. All evaluated data were consistent with previous data history.
Off-site interaction effect in the Extended Hubbard Model with SCRPA method
Harir, S.; Bennai, M.; Boughaleb, Y.
2009-01-01
The Self Consistent Random Phase Approximation (SCRPA) and a Direct Analytical (DA) method are proposed to solve the Extended Hubbard Model in 1D. We have considered an Extended Hubbard Model (EHM) including on-site and off-site interactions for closed chains in one dimension with periodic boundary conditions. The comparison of the SCRPA results with those obtained by a Direct Analytical approach shows that the SCRPA treats the problem of these closed chains in a rigorous manner. The analysis...
Landman, Claudia [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany); Pro-Science GmbH, Ettlingen (Germany); Raskob, Wolfgang; Trybushnyi, Dmytro [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany)
2016-07-01
JRodos is a non-commercial computer-based decision support system for nuclear accidents. The simulation models for assessing radiological and other consequences, together with the system features and components, allow real-time operation for off-site emergency management as well as use as a tool for preparing exercises and pre-planning of countermeasures. There is an active user community that influences further developments.
Sørensen, Jette Led; Østergaard, Doris; LeBlanc, Vicki; Ottesen, Bent; Konge, Lars; Dieckmann, Peter; Van der Vleuten, Cees
2017-01-21
Simulation-based medical education (SBME) has traditionally been conducted as off-site simulation in simulation centres. Some hospital departments also provide off-site simulation using in-house training room(s) set up for simulation away from the clinical setting, and these activities are called in-house training. In-house training facilities can be part of hospital departments and resemble to some extent simulation centres but often have less technical equipment. In situ simulation, introduced over the past decade, mainly comprises of team-based activities and occurs in patient care units with healthcare professionals in their own working environment. Thus, this intentional blend of simulation and real working environments means that in situ simulation brings simulation to the real working environment and provides training where people work. In situ simulation can be either announced or unannounced, the latter also known as a drill. This article presents and discusses the design of SBME and the advantage and disadvantage of the different simulation settings, such as training in simulation-centres, in-house simulations in hospital departments, announced or unannounced in situ simulations. Non-randomised studies argue that in situ simulation is more effective for educational purposes than other types of simulation settings. Conversely, the few comparison studies that exist, either randomised or retrospective, show that choice of setting does not seem to influence individual or team learning. However, hospital department-based simulations, such as in-house simulation and in situ simulation, lead to a gain in organisational learning. To our knowledge no studies have compared announced and unannounced in situ simulation. The literature suggests some improved organisational learning from unannounced in situ simulation; however, unannounced in situ simulation was also found to be challenging to plan and conduct, and more stressful among participants. The importance of
ALTERNATIVES OF MACCS2 IN LANL DISPERSION ANALYSIS FOR ONSITE AND OFFSITE DOSES
Wang, John HC [Los Alamos National Laboratory
2012-05-01
In modeling atmospheric dispersion to determine accidental releases of radiological material, one of the common statistical analysis tools used at Los Alamos National Laboratory (LANL) is the MELCOR Accident Consequence Code System, Version 2 (MACCS2). MACCS2, however, has some limitations and shortfalls for both onsite and offsite applications. Alternative computer codes, which could provide more realistic calculations, are being investigated for use at LANL. In the Yucca Mountain Project (YMP), the suitability of MACCS2 for the calculation of onsite worker doses was a concern; therefore, ARCON96 was chosen to replace MACCS2. YMP's use of ARCON96 provided results which clearly demonstrated the program's merit for onsite worker safety analyses in a wide range of complex configurations and scenarios. For offsite public exposures, the conservatism of MACCS2's treatment of turbulence phenomena at LANL is examined in this paper. The results show conservatism of at least a factor of two in calculated public doses. The new EPA air quality model, AERMOD, which implements advanced meteorological turbulence calculations, is a good candidate for LANL applications to provide more confidence in the accuracy of offsite public dose projections.
Chaloud, D.J.; Dicey, B.B.; Mullen, A.A.; Neale, A.C.; Sparks, A.R.; Fontana, C.A.; Carroll, L.D.; Phillips, W.G.; Smith, D.D.; Thome, D.J.
1992-01-01
This report describes the Offsite Radiation Safety Program conducted during 1991 by the Environmental Protection Agency's (EPA's) Environmental Monitoring Systems Laboratory-Las Vegas. This laboratory operates an environmental radiation monitoring program in the region surrounding the Nevada Test Site (NTS) and at former test sites in Alaska, Colorado, Mississippi, Nevada, and New Mexico. The surveillance program is designed to measure levels and trends of radioactivity, if present, in the environment surrounding testing areas to ascertain whether current radiation levels and associated doses to the general public are in compliance with existing radiation protection standards. The surveillance program additionally has the responsibility to take action to protect the health and well being of the public in the event of any accidental release of radioactive contaminants. Offsite levels of radiation and radioactivity are assessed by sampling milk, water, and air; by deploying thermoluminescent dosimeters (TLDs) and using pressurized ion chambers (PICs); and by biological monitoring of animals, food crops, and humans. Personnel with mobile monitoring equipment are placed in areas downwind from the test site prior to each nuclear weapons test to implement protective actions, provide immediate radiation monitoring, and obtain environmental samples rapidly after any occurrence of radioactivity release. Comparison of the measurements and sample analysis results with background levels and with appropriate standards and regulations indicated that there was no radioactivity detected offsite by the various EPA monitoring networks and no exposure above natural background to the population living in the vicinity of the NTS that could be attributed to current NTS activities. Annual and long-term trends were evaluated in the Noble Gas, Tritium, Milk Surveillance, Biomonitoring, TLD, PIC networks, and the Long-Term Hydrological Monitoring Program.
Offsite Source Recovery Program (OSRP) Workshop Module: Tianjin, China, July 16-July 17, 2012
Houlton, Robert J. [Los Alamos National Laboratory
2012-07-11
Recovery and disposal of radioactive sources that are no longer in service in their intended capacity is an area of high concern globally. A joint effort to recover and dispose of such sources was formed between the US Department of Energy and the Chinese Ministry of Environmental Protection (MEP) in preparation for the 2008 Beijing Olympics. LANL involvement in this agreement continues today under the DOE Global Threat Reduction Initiative (GTRI) program. LANL will present overview information on its Offsite Source Recovery Program (OSRP) and source disposal programs in a workshop for the Ministry of Environmental Protection (MEP) at Tianjin, China, on July 16 and 17, 2012.
Assessment of uncertainties in early off-site consequences from nuclear reactor accidents
Madni, I.K.; Cazzoli, E.G. (Brookhaven National Lab., Dept. of Nuclear Energy, Upton, NY (US)); Khatib-Rahbar, M. (Energy Research, Inc., Rockville, MD (US))
1990-04-01
A simplified approach has been developed to calculate uncertainties in early off-site consequences from nuclear reactor accidents. The consequence model (SMART) is based on a solution procedure that uses simplified meteorology and involves direct analytic integration of air concentration equations over time and position. This is different from the discretization approach currently used in the CRAC2 and MACCS codes. The SMART code is fast running, thereby providing a valuable tool for sensitivity and uncertainty studies. The code was benchmarked against both MACCS version 1.4 and CRAC2. Results of benchmarking and detailed sensitivity and uncertainty analyses using SMART are presented.
Off-Site Prefabrication: What Does it Require from the Trade Contractor?
Bekdik, Baris; Hall, Daniel; Aslesen, Sigmund
2016-01-01
understand and improve existing construction processes, relatively few contributions have focused on the opportunities for industrialization from the trade contractor’s perspective. This paper uses an in-depth case study to address the deployment strategy for off-site fabrication techniques and processes... at only one case study, the conclusions are limited in generalizability to other prefabrication operations. However, it represents an important in-depth case from the trade contractors’ perspective and will contribute to the growing body of research focused on industrialization and prefabrication in lean construction.
Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L
2011-12-15
Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture.
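The two quantitative ideas in this abstract are a time-zero extrapolation and a two-point linear calibration against internal references. A minimal sketch of both, with made-up function names and illustrative numbers (not the authors' FMLR implementation), might look like:

```python
# Hypothetical sketch of HSQC(0)-style quantification, assuming the
# signal volume decays geometrically with repetition number n,
# V_n = V0 * f**n, and that volume is proportional to concentration.

def extrapolate_v0(v1: float, v2: float) -> float:
    """Time-zero volume from the first two repetitions: V0 = v1**2 / v2."""
    return v1 * v1 / v2

def volumes_to_concentrations(volumes, v_dss, c_dss, v_mes, c_mes):
    """Two-point calibration: map peak volumes to concentrations using
    the DSS (low) and MES (high) reference volumes and concentrations."""
    slope = (c_mes - c_dss) / (v_mes - v_dss)
    intercept = c_dss - slope * v_dss
    return [slope * v + intercept for v in volumes]

# Illustrative numbers only.
v0 = extrapolate_v0(8.0, 4.0)                              # -> 16.0
concs = volumes_to_concentrations([16.0], 2.0, 0.1, 20.0, 1.0)
```

The two-reference line is what makes the mapping robust at both ends of the concentration range; with one reference the intercept would have to be assumed zero.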
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Analysis of Loss-of-Offsite-Power Events 1997-2015
Johnson, Nancy Ellen [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schroeder, John Alton [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-07-01
Loss of offsite power (LOOP) can have a major negative impact on a power plant’s ability to achieve and maintain safe shutdown conditions. LOOP event frequencies and times required for subsequent restoration of offsite power are important inputs to plant probabilistic risk assessments. This report presents a statistical and engineering analysis of LOOP frequencies and durations at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience during calendar years 1997 through 2015. LOOP events during critical operation that do not result in a reactor trip are not included. Frequencies and durations were determined for four event categories: plant-centered, switchyard-centered, grid-related, and weather-related. Emergency diesel generator reliability is also considered (failure to start, failure to load and run, and failure to run more than 1 hour). There is an adverse trend in LOOP durations. The previously reported adverse trend in LOOP frequency was not statistically significant for 2006-2015. Grid-related LOOPs happen predominantly in the summer. Switchyard-centered LOOPs happen predominantly in winter and spring. Plant-centered and weather-related LOOPs do not show statistically significant seasonality. The engineering analysis of LOOP data shows that human errors have been much less frequent since 1997 than in the 1986-1996 time period.
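At their simplest, per-category LOOP frequencies like those described here are Poisson rates, i.e., observed event counts divided by observation time in reactor-critical-years. A hedged sketch (the counts below are made up for illustration, not the report's data):

```python
# Hypothetical sketch: a LOOP frequency as a Poisson rate estimate,
# events per reactor-critical-year, computed per event category.
# All numbers are illustrative; they are NOT the report's data.

def loop_frequency(events: int, reactor_years: float) -> float:
    """Maximum-likelihood Poisson rate: count / exposure time."""
    return events / reactor_years

observed = {
    "plant-centered": (4, 800.0),
    "switchyard-centered": (10, 800.0),
    "grid-related": (12, 800.0),
    "weather-related": (6, 800.0),
}
rates = {cat: loop_frequency(n, t) for cat, (n, t) in observed.items()}
# e.g. rates["grid-related"] -> 0.015 events per reactor-critical-year
```

The report's actual estimates additionally involve trend tests and uncertainty intervals, which this point estimate does not capture.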
Chaloud, D.J.; Daigler, D.M.; Davis, M.G. [and others]
1996-06-01
This report describes the Offsite Radiation Safety Program conducted during 1993 by the Environmental Protection Agency's (EPA's) Environmental Monitoring Systems Laboratory - Las Vegas (EMSL-LV). This laboratory operates an environmental radiation monitoring program in the region surrounding the Nevada Test Site (NTS) and at former test sites in Alaska, Colorado, Mississippi, Nevada, and New Mexico. The surveillance program is designed to measure levels and trends of radioactivity, if present, in the environment surrounding testing areas to ascertain whether current radiation levels and associated doses to the general public are in compliance with existing radiation protection standards. The surveillance program additionally has the responsibility to take action to protect the health and well being of the public in the event of any accidental release of radioactive contaminants. Offsite levels of radiation and radioactivity are assessed by sampling milk, water, and air; by deploying thermoluminescent dosimeters (TLDs) and using pressurized ionization chambers (PICs); by biological monitoring of foodstuffs including animal tissues and food crops; and by measurement of radioactive material deposited in humans.
McCormick, J. L.; Whitney, D.; Schill, D. J.; Quist, Michael
2015-01-01
Accuracy of angler-reported data on steelhead, Oncorhynchus mykiss (Walbaum), harvest in Idaho, USA, was quantified by comparing data recorded on angler harvest permits to the numbers that the same group of anglers reported in an off-site survey. Anglers could respond to the off-site survey using mail or Internet; if they did not respond using these methods, they were called on the telephone. A majority of anglers responded through the mail, and the probability of responding by Internet decreased with increasing age of the respondent. The actual number of steelhead harvested did not appear to influence the response type. Anglers in the autumn 2012 survey overreported harvest by 24%, whereas anglers in the spring 2013 survey under-reported steelhead harvest by 16%. The direction of reporting bias may have been a function of actual harvest, where anglers harvested on average 2.6 times more fish during the spring fishery than the autumn. Reporting bias that is a function of actual harvest can have substantial management and conservation implications because the fishery will be perceived to be performing better at lower harvest rates and worse when harvest rates are higher. Thus, these findings warrant consideration when designing surveys and evaluating management actions.
Analysis of Loss-of-Offsite-Power Events 1998–2013
Schroeder, John Alton [Idaho National Lab. (INL), Idaho Falls, ID (United States). Risk Assessment and Management Services Dept.
2015-02-01
Loss of offsite power (LOOP) can have a major negative impact on a power plant’s ability to achieve and maintain safe shutdown conditions. Risk analyses suggest that loss of all alternating current power contributes over 70% of the overall risk at some U.S. nuclear plants. LOOP event frequencies and times required for subsequent restoration of offsite power are important inputs to plant probabilistic risk assessments. This report presents a statistical and engineering analysis of LOOP frequencies and durations at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience during calendar years 1997 through 2013. Frequencies and durations were determined for four event categories: plant-centered, switchyard-centered, grid-related, and weather-related. The emergency diesel generator failure modes considered are failure to start, failure to load and run, and failure to run more than 1 hour. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. No statistically significant trends in LOOP frequencies over the 1997-2013 period are identified. There is a possibility that a significant trend in grid-related LOOP frequency exists that is not easily detected by a simple analysis. Statistically significant increases in recovery times after grid- and switchyard-related LOOPs are identified.
Off-site toxic consequence assessment: a simplified modeling procedure and case study.
Guarnaccia, Joe; Hoppe, Tom
2008-11-15
An assessment of off-site exposure from spills/releases of toxic chemicals can be conducted by compiling site-specific operational, geographic, demographic, and meteorological data and by using screening-level public-domain modeling tools (e.g., RMP Comp, ALOHA, and DEGADIS). In general, the analysis is confined to the following: event-based simulations (which allow for the use of known, constant atmospheric conditions), known receptor distances (on the order of miles or less), short time scales for the distances considered (on the order of tens of minutes or less), gently sloping rough terrain, dense and neutrally buoyant gas dispersion, known chemical inventory and infrastructure (used to define the source term), and a known toxic endpoint (which defines significance). While screening-level models are relatively simple to use, care must be taken to ensure that the results are meaningful. This approach allows one to assess risk from catastrophic releases (e.g., via terrorism) or plausible release scenarios (related to standard operating procedures and industry standards). In addition, given receptor distance and toxic endpoint, the model can be used to predict the critical spill volume at which significant off-site risk is realized. This information can then be used to assess site storage and operation parameters and to determine the most economical and effective risk reduction measures to be applied.
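The core of such a screening analysis is a steady-state dispersion estimate compared against a toxic endpoint at a receptor distance. A minimal sketch of the neutrally buoyant case, using a Gaussian plume with power-law dispersion coefficients (the coefficient values and function names are illustrative assumptions, not the formulations used by ALOHA or DEGADIS):

```python
import math

# Hypothetical screening-level sketch: ground-level centerline
# concentration from a continuous point source under constant
# atmospheric conditions, using a Gaussian plume. The sigma
# power-law coefficients below are illustrative placeholders.

def centerline_concentration(q_g_per_s, u_m_per_s, x_m,
                             release_height_m=0.0,
                             ay=0.08, by=0.9, az=0.06, bz=0.9):
    sigma_y = ay * x_m ** by          # crosswind spread (m)
    sigma_z = az * x_m ** bz          # vertical spread (m)
    return (q_g_per_s / (math.pi * u_m_per_s * sigma_y * sigma_z)
            * math.exp(-release_height_m ** 2 / (2.0 * sigma_z ** 2)))

# Because the centerline concentration falls off monotonically with
# downwind distance, a simple scan finds where it first drops below
# the toxic endpoint.
def distance_to_endpoint(q, u, endpoint_g_m3, step_m=100.0, max_m=20000.0):
    x = step_m
    while x <= max_m:
        if centerline_concentration(q, u, x) < endpoint_g_m3:
            return x
        x += step_m
    return None
```

Run in reverse (fixing the receptor distance and endpoint, solving for the release rate), the same relation yields the critical spill size mentioned in the abstract.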
Study on the code system for the off-site consequences assessment of severe nuclear accident
Kim, Sora; Min, Byung Il; Park, Ki Hyun; Yang, Byung Mo; Suh, Kyung Suk [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]
2016-12-15
The importance of severe nuclear accidents and probabilistic safety assessment (PSA) was brought to international attention by the severe nuclear accident caused by the extreme natural disaster at the Fukushima Daiichi nuclear power plant in Japan. In Korea, studies on level 3 PSA had made little progress until recently. The level 3 PSA code systems MACCS2 (MELCOR Accident Consequence Code System 2, US), COSYMA (COde SYstem from MAria, EU) and OSCAAR (Off-Site Consequence Analysis code for Atmospheric Releases in reactor accidents, Japan) were reviewed in this study, and the disadvantages and limitations of MACCS2 were also analyzed. Experts from Korea and abroad pointed out that the limitations of MACCS2 include the following: MACCS2 cannot simulate multi-unit accidents or releases from spent fuel pools, and its atmospheric dispersion is based on a simple Gaussian plume model. Some of these limitations have been improved in the updated versions of MACCS2. The absence of a marine and aquatic dispersion model and the limited scope of the food-chain and economic models are also important aspects that need to be improved. This paper is expected to serve as basic research material for developing a Korean code system for assessing the off-site consequences of severe nuclear accidents.
Analysis of Loss-of-Offsite-Power Events 1998–2012
T. E. Wierman
2013-10-01
Loss of offsite power (LOOP) can have a major negative impact on a power plant’s ability to achieve and maintain safe shutdown conditions. Risk analyses suggest that loss of all alternating current power contributes over 70% of the overall risk at some U.S. nuclear plants. LOOP events and the subsequent restoration of offsite power are therefore important inputs to plant probabilistic risk assessments. This report presents a statistical and engineering analysis of LOOP frequencies and durations at U.S. commercial nuclear power plants. The data used in this study are based on operating experience from fiscal year 1998 through 2012. Frequencies and durations were determined for four event categories: plant-centered, switchyard-centered, grid-related, and weather-related. The emergency diesel generator (EDG) failure modes considered are failure to start, failure to load and run, and failure to run for more than 1 hour. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. A statistically significant increase in industry performance was identified for plant-centered and switchyard-centered LOOP frequencies. There is no statistically significant trend in LOOP durations.
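The frequency estimates that such reports tabulate are, at their simplest, event counts divided by reactor exposure time. A minimal sketch of that calculation with a rough large-sample 95% interval; the event count and reactor-years below are hypothetical, not values from the report:

```python
import math

def loop_frequency(n_events, reactor_years):
    """Maximum-likelihood LOOP frequency (events per reactor-year)
    with a crude normal-approximation 95% interval. Real analyses
    use Bayesian (e.g., Jeffreys) intervals and trend tests."""
    lam = n_events / reactor_years
    half = 1.96 * math.sqrt(n_events) / reactor_years
    return lam, max(lam - half, 0.0), lam + half

# Hypothetical: 40 grid-related events observed over 1600 reactor-years
lam, lo, hi = loop_frequency(40, 1600.0)  # point estimate 0.025 per reactor-year
```

Per-category frequencies (plant-centered, switchyard-centered, grid-related, weather-related) come from running the same estimate on each category's event count.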
Davis, M.G.; Flotard, R.D.; Fontana, C.A.; Hennessey, P.A.; Maunu, H.K.; Mouck, T.L.; Mullen, A.A.; Sells, M.D.
1999-01-01
This report describes the Offsite Radiological Environmental Monitoring Program (OREMP) conducted during 1997 by the US Environmental Protection Agency's (EPA's) Radiation and Indoor Environments National Laboratory, Las Vegas, Nevada. This laboratory operated an environmental radiation monitoring program in the region surrounding the Nevada Test Site (NTS) and at former test sites in Alaska, Colorado, Mississippi, Nevada, and New Mexico. The surveillance program is designed to measure levels and trends of radioactivity, if present, in the environment surrounding testing areas to ascertain whether current radiation levels and associated doses to the general public are in compliance with existing radiation protection standards. The surveillance program additionally has the responsibility to take action to protect the health and well-being of the public in the event of any accidental release of radioactive contaminants. Offsite levels of radiation and radioactivity are assessed by sampling and analyzing milk, water, and air; by deploying and reading thermoluminescent dosimeters (TLDs); and by using pressurized ionization chambers (PICs) to measure ambient gamma exposure rates with a sensitivity capable of detecting low-level exposures not detected by other monitoring methods.
Thuesen, Christian; Hvam, Lars
2013-01-01
This paper presents a set of insights to be used in the development of business models for off-site system deliveries, contributing to the development of Off-Site Manufacturing (OSM) practices. The theoretical basis for discussing the development of business models is the blue ocean strategy… in the constant pursuit of value creation and cost reduction. On this basis, system deliveries represent a promising strategy in the future development and application of off-site manufacturing practices. The application of system deliveries is, however, demanding, as it represents a fundamental shift… in the existing design and production practices. More specifically, the development of system deliveries requires: (1) an explicit market focus, enabling the achievement of economies of scale, (2) a coordinated and coherent development around the system delivery, focusing on its internal and external modularity…
Strasser, Barbara; Schwarz, Joachim; Haber, Paul; Schobersberger, Wolfgang
2011-12-01
The aim of this study was to establish reliable guide values for heart rate (HR) and blood pressure (BP) at defined submaximal exertion levels, considering age, gender and body mass. One hundred and eighteen healthy but untrained subjects (38 women, 80 men) were included in the study. Data from 28 women and 59 men were ultimately used for the analysis. We found gender differences for HR and BP. Further, we noted significant correlations between HR and age as well as between BP and body mass at all exercise levels. We established formulas for gender-specific calculation of reliable guide values for HR and BP at submaximal exercise levels.
Aldrich, D.C.; McGrath, P.E.; Rasmussen, N.C.
1978-06-01
Evacuation, sheltering followed by population relocation, and iodine prophylaxis are evaluated as offsite public protective measures in response to nuclear reactor accidents involving core-melt. Evaluations were conducted using a modified version of the Reactor Safety Study consequence model. Models representing each measure were developed and are discussed. Potential PWR core-melt radioactive material releases are separated into two categories, "melt-through" and "atmospheric," based upon the mode of containment failure. Protective measures are examined and compared for each category in terms of projected doses to the whole body and thyroid. Measures for "atmospheric" accidents are also examined in terms of their influence on the occurrence of public health effects.
Off-site interaction effect in the Extended Hubbard Model with the SCRPA method
Harir, S [Laboratoire de Physique de la Matiere Condensee, Faculte des Sciences Ben M' Sik, Universite Hassan II-Mohammedia Casablanca (Morocco); Bennai, M [Laboratoire de Physique de la Matiere Condensee, Faculte des Sciences Ben M' Sik, Universite Hassan II-Mohammedia Casablanca (Morocco); Boughaleb, Y [Laboratoire de Physique de la Matiere Condensee, Faculte des Sciences Ben M' Sik, Universite Hassan II-Mohammedia Casablanca (Morocco)
2007-10-15
The self-consistent random phase approximation (SCRPA) and a direct analytical (DA) method are proposed to solve the Extended Hubbard Model (EHM) in one dimension (1D). We have considered an EHM including on-site and off-site interactions for closed chains in 1D with periodic boundary conditions. The comparison of the SCRPA results with those obtained by the DA approach shows that the SCRPA treats the problem of these closed chains in a rigorous manner. The analysis of the nearest-neighbour repulsion effect on the dynamics of our closed chains shows that this repulsive interaction between the electrons of neighbouring atoms induces supplementary conductivity: the SCRPA energy gap vanishes when these closed chains are governed by a strong repulsive on-site interaction and an intermediate nearest-neighbour repulsion.
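For reference, the one-dimensional extended Hubbard Hamiltonian with on-site repulsion U and nearest-neighbour (off-site) repulsion V on a periodic chain is conventionally written as follows; the notation here is the standard one and is assumed rather than taken from the paper:

```latex
H = -t\sum_{i,\sigma}\left(c^{\dagger}_{i\sigma}c_{i+1\sigma} + \mathrm{h.c.}\right)
    + U\sum_{i} n_{i\uparrow}\,n_{i\downarrow}
    + V\sum_{i} n_{i}\,n_{i+1}
```

with hopping amplitude $t$, number operators $n_{i\sigma}=c^{\dagger}_{i\sigma}c_{i\sigma}$ and $n_i=n_{i\uparrow}+n_{i\downarrow}$, and site $L+1$ identified with site $1$ (periodic boundary conditions). The regime discussed in the abstract corresponds to large $U/t$ with intermediate $V/t$.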
New Source Term Model for the RESRAD-OFFSITE Code Version 3
Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)
2013-06-01
This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
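The three options differ only in how the released amount depends on the inventory and the user-supplied parameters. A minimal single-cell sketch of the arithmetic behind each option, with made-up parameter names; the actual RESRAD-OFFSITE implementation couples these releases to transport and is considerably more detailed:

```python
import math

def first_order_release(inventory, leach_rate, dt):
    """Option 1: release proportional to current inventory; the
    user-specified leach rate is the proportionality constant.
    Returns (amount released over dt, remaining inventory)."""
    released = inventory * (1.0 - math.exp(-leach_rate * dt))
    return released, inventory - released

def desorption_aqueous_conc(total_conc, kd, bulk_density, theta):
    """Option 2: equilibrium partitioning via the distribution
    coefficient Kd. With total = theta*Cw + rho_b*Cs and Cs = Kd*Cw,
    the pore-water concentration is Cw = total / (theta + rho_b*Kd)."""
    return total_conc / (theta + bulk_density * kd)

def uniform_release(initial_inventory, duration, dt):
    """Option 3: a constant fraction of the initially contaminated
    material is released during each time interval of length dt."""
    return initial_inventory * min(dt / duration, 1.0)
```

For example, a leach rate of 0.1/yr releases about 9.5% of the inventory in the first year under option 1, while option 3 with a 10-year release duration releases exactly 10% of the initial inventory per year.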
Homma, Toshimitsu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]; Takahashi, Tomoyuki [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.]; Yonehara, Hidenori [National Inst. of Radiological Sciences, Chiba (Japan)] (eds.)
2000-12-01
This report is a revision of JAERI-M 91-005, 'Health Effects Models for Off-Site Radiological Consequence Analysis of Nuclear Reactor Accidents'. This revision provides a review of two revisions of the NUREG/CR-4214 reports by the U.S. Nuclear Regulatory Commission, which are the basis of the JAERI health effects models, and of several other recent reports by international organizations that may impact the health effects models. The major changes to the first version of the JAERI health effects models and the recommended parameters in this report concern late somatic effects. These changes reflect recent changes in cancer risk factors that have come from longer follow-up and revised dosimetry in major studies of the Japanese A-bomb survivors. This report also provides suggestions about future revisions of computational aspects of the health effects models. (author)
An off-site screening process for the public in radiation emergencies and disasters
Yoon, Seok Won; Ho, Ha Wi; Jin, Young Woo [National Radiation Emergency Medical Center, Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of)
2016-09-15
A contamination screening process for the local population in radiation emergencies is discussed. We present an overview of the relevant Korean governmental regulations that underpin the development of an effective response system. Moreover, case studies of foreign countries responding to mass casualties are presented, and indicate that responses should be able to handle a large demand for contamination screening of the local public as well as screening of the immediate victims of the incident. We propose operating procedures for an off-site contamination screening post operated by the local government for members of the public who have not been directly harmed in the accident. In order to devise screening categories, sorting strategies assessing contamination and exposure are discussed, as well as a psychological response system. This study will lead to the effective operation of contamination screening clinics if an accident occurs. Furthermore, the role of contamination screening clinics in the overall context of the radiation emergency treatment system should be clearly established.
RHF RELAP5 model and preliminary loss-of-offsite-power simulation results for LEU conversion
Licht, J. R. [Argonne National Laboratory (ANL), Argonne, IL (United States). Nuclear Engineering Div.; Bergeron, A. [Argonne National Laboratory (ANL), Argonne, IL (United States). Nuclear Engineering Div.; Dionne, B. [Argonne National Laboratory (ANL), Argonne, IL (United States). Nuclear Engineering Div.; Thomas, F. [Institut Laue-Langevin (ILL), Grenoble (France). RHF Reactor Dept.
2014-08-01
The purpose of this document is to describe the current state of the RELAP5 model for the Institut Laue-Langevin High Flux Reactor (RHF) located in Grenoble, France, and provide an update to the key information required to complete, for example, simulations for a loss of offsite power (LOOP) accident. A previous status report identified a list of 22 items to be resolved in order to complete the RELAP5 model. Most of these items have been resolved by ANL and the RHF team. Enough information was available to perform preliminary safety analyses and define the key items that are still required. Section 2 of this document describes the RELAP5 model of RHF. The final part of this section briefly summarizes previous model issues and resolutions. Section 3 of this document describes preliminary LOOP simulations for both HEU and LEU fuel at beginning of cycle conditions.
Chang, Ni-Bin; Ning, Shu-Kuang; Chen, Jen-Chang
2006-08-01
Due to increasing environmental consciousness in most countries, every utility that owns a commercial nuclear power plant has been required to have both an on-site and an off-site emergency response plan since the 1980s. A radiation monitoring network, viewed as part of the emergency response plan, can provide information regarding the radiation dosage emitted from a nuclear power plant during regular operation and/or abnormal measurements in an emergency event. Such monitoring information can help field operators and decision-makers respond accurately or make decisions to protect public health and safety. This study conducts an integrated simulation and optimization analysis to identify a relocation strategy for the long-term, regular off-site monitoring network at a nuclear power plant. The planning goal is to downsize the current monitoring network while maintaining its monitoring capacity as much as possible. The monitoring sensors considered in this study include thermoluminescence dosimetry (TLD) and an air sampling system (AP) simultaneously. The network is designed to detect the cumulative radionuclide concentration, the frequency of violation, and the population possibly affected by a long-term impact in the surrounding area on a regular basis, while it can also be used in an accidental release event. With the aid of the calibrated Industrial Source Complex-Plume Rise Model Enhancements (ISC-PRIME) simulation model to track the possible radionuclide diffusion, dispersion, transport, and transformation processes in the atmospheric environment, a multiobjective evaluation process can be applied to screen monitoring stations for the nuclear power plant located on the Hengchun Peninsula, South Taiwan. To account for multiple objectives, this study calculated preference weights to linearly combine the objective functions, leading to decision-making with exposure assessment in an optimization context. Final suggestions should be useful for
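The preference-weighted linear combination of objectives described above can be sketched as a toy subset-selection problem: each candidate station gets a score per objective, the weights collapse the objectives into a scalar, and the best k-station subset is kept. All station names, scores, and weights below are hypothetical, purely to illustrate the weighted-sum approach:

```python
from itertools import combinations

# Hypothetical per-station scores: (detection coverage, population protected)
stations = {
    "TLD-1": (0.9, 0.4), "TLD-2": (0.6, 0.8), "AP-1": (0.7, 0.7),
    "AP-2": (0.5, 0.9), "TLD-3": (0.8, 0.3),
}

def best_subset(stations, k, weights=(0.6, 0.4)):
    """Pick the k stations maximizing a preference-weighted sum of
    objectives -- the linear combination strategy described above.
    Brute force is fine for toy sizes; real networks need heuristics."""
    def score(names):
        return sum(weights[0] * stations[n][0] + weights[1] * stations[n][1]
                   for n in names)
    return max(combinations(stations, k), key=score)

keep = best_subset(stations, 2)  # downsized two-station network
```

Changing the preference weights shifts which stations survive the downsizing, which is exactly the trade-off a multiobjective evaluation makes explicit.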
Extensive management of field margins enhances their potential for off-site soil erosion mitigation.
Ali, Hamada E; Reineking, Björn
2016-03-15
Soil erosion is a widespread problem in agricultural landscapes, particularly in regions with strong rainfall events. Vegetated field margins can mitigate negative impacts of soil erosion off-site by trapping eroded material. Here we analyse how local management affects the trapping capacity of field margins in a monsoon region of South Korea, contrasting intensively and extensively managed field margins on both steep and shallow slopes. Prior to the beginning of monsoon season, we equipped a total of 12 sites representing three replicates for each of four different types of field margins ("intensive managed flat", "intensive managed steep", "extensive managed flat" and "extensive managed steep") with Astroturf mats. The mats (n = 15/site) were placed before, within and after the field margin. Sediment was collected after each rain event until the end of the monsoon season. The effect of management and slope on sediment trapping was analysed using linear mixed effects models, using as response variable either the sediment collected within the field margin or the difference in sediment collected after and before the field margin. There was no difference in the amount of sediment reaching the different field margin types. In contrast, extensively managed field margins showed a large reduction in collected sediment before and after the field margins. This effect was pronounced in steep field margins, and increased with the size of rainfall events. We conclude that a field margin management promoting a dense vegetation cover is a key to mitigating negative off-site effects of soil erosion in monsoon regions, particularly in field margins with steep slopes.
Priddle, Charlotte; McCann, Laura
2015-01-01
Special collections libraries collect and preserve materials of intellectual and cultural heritage, providing access to unique research resources. As their holdings continue to expand, special collections in research libraries confront increased space pressures. Off-site storage facilities, used frequently by research libraries for general…
Sørensen, Jette Led; Navne, Laura Emdal; Martin, Helle Max;
2015-01-01
OBJECTIVE: To examine how the setting in in situ simulation (ISS) and off-site simulation (OSS) in simulation-based medical education affects the perceptions and learning experience of healthcare professionals. DESIGN: Qualitative study using focus groups and content analysis. PARTICIPANTS: Twenty...
Melissa Goertzen
2016-03-01
Objective – To measure the use of off-site storage for special collections materials and to examine how this use impacts core special collections activities. Design – Survey questionnaire containing both structured and open-ended questions. Follow-up interviews were also conducted. Setting – Association of Research Libraries (ARL) member institutions in the United States of America. Subjects – 108 directors of special collections. Methods – Participants were recruited via email; contact information was compiled through professional directories, web searches, and referrals from professionals at ARL member libraries. The survey was sent out on October 31, 2013, and two reminder emails were distributed before it closed three weeks later. The survey was created and distributed using Qualtrics, a research software platform that supports online data collection and analysis. All results were analyzed using Microsoft Excel and Qualtrics. Main Results – The final response rate was 58% (63 out of 108). The majority (51 participants, or 81%) reported use of off-site storage for library collections. Of this group, 91% (47 out of 51) house a variety of special collections in off-site storage. The criteria most frequently used to designate these materials for off-site storage are use (87%), size (66%), format (60%), and value (57%). The authors found that special collections directors are most likely to send materials to off-site storage facilities that are established and in use by other departments at their home institution; access to established workflows, especially those linked to transit and delivery, and space for expanding collections are benefits. In regard to core special collections activities, results indicated that public service was most impacted by off-site storage. The authors discussed challenges related to patron use and satisfaction. In regard to management and processing, directors faced challenges using the same level of staff to maintain
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each quantity (voltage of maximum power, current of maximum power, and maximum power) is plotted as a function of the time of day.
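The differentiation step can be illustrated with a toy panel model; the quadratic current-voltage relation and the short-circuit current and open-circuit voltage values below are assumed for illustration, not taken from the article. Setting dP/dV = 0 gives the maximum-power voltage in closed form:

```python
import math

# Toy panel model (assumed): I(V) = Isc * (1 - (V/Voc)**2)
ISC, VOC = 5.0, 20.0  # hypothetical short-circuit current (A), open-circuit voltage (V)

def power(v):
    """P(V) = V * I(V) for the toy current-voltage relation."""
    return v * ISC * (1.0 - (v / VOC) ** 2)

# Differentiate: dP/dV = Isc * (1 - 3*(V/Voc)**2) = 0  ->  V_mp = Voc/sqrt(3)
v_mp = VOC / math.sqrt(3.0)
p_mp = power(v_mp)
```

For this model the optimum always falls at Voc/√3, about 58% of the open-circuit voltage; a real panel follows a diode-law I-V curve, and the same dP/dV = 0 condition is then solved numerically.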
Idaho Habitat Evaluation for Off-Site Mitigation Record : Annual Report 1988.
Idaho. Dept. of Fish and Game.
1990-03-01
The Idaho Department of Fish and Game (IDFG) has been monitoring and evaluating existing and proposed habitat improvement projects for steelhead and chinook in the Clearwater and Salmon subbasins since 1984. Projects included in the monitoring are funded by, or proposed for funding by, the Bonneville Power Administration (BPA) under the Northwest Power Planning Act as off-site mitigation for downstream hydropower development on the Snake and Columbia Rivers. This monitoring project is also funded under the same authority. A mitigation record is being developed to use actual and potential increases in smolt production as the best measures of benefit from a habitat improvement project. This project is divided into two subprojects: general and intensive monitoring. Primary objectives of the general monitoring subproject are to determine natural production increases due to habitat improvement projects in terms of parr production and to determine natural production status and trends in Idaho. The second objective is accomplished by combining parr density from monitoring and evaluation of BPA habitat projects and from other IDFG management and research activities. The primary objective of the intensive monitoring subproject is to determine the relationships between spawning escapement, parr production, and smolt production in two Idaho streams; the upper Salmon River and Crooked River. Results of the intensive monitoring will be used to estimate mitigation benefits in terms of smolt production and to interpret natural production monitoring in Idaho. 30 refs., 19 figs., 34 tabs.
Current State of Off-Site Manufacturing in Australian and Chinese Residential Construction
Malik M. A. Khalfan
2014-01-01
Many techniques have been implemented to make the construction industry more productive. The key focus is on reduction of total duration, reduction in construction cost, improvements in quality, more sustainable development, and safer construction sites. One technique that has emerged over the last two decades is the use of off-site manufacturing (OSM) within the construction industry. Several research projects and industry initiatives have reported the benefits and challenges of implementing OSM. The focus of this paper is the Australian and Chinese residential construction industry and the uptake of OSM concepts. The paper presents a brief review of the current state of OSM over the last five to seven years in the context of these two countries. The paper concludes that the construction industry, both in Australia and China, needs to start walking the talk with regard to OSM adoption. The paper also highlights some of the research gaps in the OSM area, especially within the housing and residential sector.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
San Jiang
2017-03-01
Regular inspection of transmission lines is essential work, which has been implemented by either labor-intensive or very expensive approaches. 3D reconstruction could be an alternative solution to satisfy the need for accurate and low-cost inspection. This paper exploits the use of an unmanned aerial vehicle (UAV) for outdoor data acquisition and conducts accuracy assessment tests to explore potential usage for offsite inspection of transmission lines. Firstly, an oblique photogrammetric system, integrating a cheap double-camera imaging system, an onboard dual-frequency GNSS (Global Navigation Satellite System) receiver and a ground master GNSS station in a fixed position, is designed to acquire images with ground resolutions better than 3 cm. Secondly, an image orientation method, considering the oblique imaging geometry of the dual-camera system, is applied to detect enough tie-points to construct stable image connections in both along-track and across-track directions. To achieve the best geo-referencing accuracy and evaluate model measurement precision, signalized ground control points (GCPs) and model key points have been surveyed. Finally, accuracy assessment tests, including absolute orientation precision and relative model precision, have been conducted with different GCP configurations. Experiments show that images captured by the designed photogrammetric system contain enough information about power pylons from different viewpoints. Quantitative assessment demonstrates that, with fewer GCPs for image orientation, the absolute and relative accuracies of image orientation and model measurement are better than 0.3 and 0.2 m, respectively. For regular inspection of transmission lines, the proposed solution can to some extent be an alternative method with competitive accuracy, lower operational complexity and considerable gains in economic cost.
10 years and 20,000 sources: the offsite source recovery project
Whitworth, Julia R [Los Alamos National Laboratory; Abeyta, Cristy L [Los Alamos National Laboratory; Pearson, Michael W [Los Alamos National Laboratory
2009-01-01
The Global Threat Reduction Initiative's (GTRI) Offsite Source Recovery Project (OSRP) has been recovering excess and unwanted sealed sources for ten years. In January 2009, GTRI announced that the project had recovered 20,000 sealed radioactive sources. This project grew out of early efforts at Los Alamos National Laboratory (LANL) to recover and disposition excess Plutonium-239 (Pu-239) sealed sources that were distributed in the 1960s and 1970s under the Atoms for Peace Program. Sealed source recovery was initially considered a waste management activity, as evidenced by its initial organization under the Department of Energy's (DOE's) Environmental Management (EM) program. After the terrorist attacks of 2001, however, the interagency community began to recognize the threat posed by excess and unwanted radiological material, particularly those that could not be disposed at the end of their useful life. After being transferred to the National Nuclear Security Administration (NNSA) to be part of GTRI, OSRP's mission was expanded to include not only material that would be classified as Greater-than-Class-C (GTCC) when it became waste, but also any other materials that might be a 'national security consideration.' This paper discusses OSRP's history, recovery operations, expansion to accept high-activity beta-gamma-emitting sealed sources and devices and foreign-possessed sources, and more recent efforts such as cooperative projects with the Council on Radiation Control Program Directors (CRCPD) and involvement in GTRI's Search and Secure project. Current challenges and future work will also be discussed.
Idaho Habitat Evaluation for Off-Site Mitigation Record : Annual Report 1985.
Petrosky, Charles E.; Holubetz, Terry B.
1986-04-01
Evaluation approaches to document a record of credit for mitigation were developed in 1984-1985 for most of the habitat projects. Restoration of upriver anadromous fish runs through increased passage survival at main stem Columbia and Snake River dams is essential to the establishment of an off-site mitigation record, as well as to the success of the entire Fish and Wildlife program. The mitigation record is being developed to use increased smolt production (i.e., yield) at full-seeding as the basic measure of benefit from a habitat project. The IDFG evaluation approach consists of three basic, integrated levels: general monitoring, standing crop evaluations, and intensive studies. Annual general monitoring of anadromous fish densities in a small number of sections for each project will be used to follow population trends and define full-seeding levels. For most projects, smolt production will be estimated indirectly from standing crop estimates by factoring appropriate survival rates from parr to smolt stages. Intensive studies in a few key production streams will be initiated to determine these appropriate survival rates and provide other basic biological information that is needed for evaluation of the Fish and Wildlife program. A common physical habitat and fish population data base is being developed for every BPA habitat project in Idaho to be integrated at each level of evaluation. Compatibility of data is also needed between Idaho and other agencies and tribes in the Columbia River basin. No final determination of mitigation credit for any Idaho habitat enhancement project has been attainable to date.
Lessons from a BACE1 inhibitor trial: off-site but not off base.
Lahiri, Debomoy K; Maloney, Bryan; Long, Justin M; Greig, Nigel H
2014-10-01
Alzheimer's disease (AD) is characterized by formation of neuritic plaque primarily composed of a small filamentous protein called amyloid-β peptide (Aβ). The rate-limiting step in the production of Aβ is the processing of the Aβ precursor protein (APP) by the β-site APP-cleaving enzyme (BACE1). Hence, BACE1 activity plausibly plays a rate-limiting role in the generation of potentially toxic Aβ within the brain and the development of AD, thereby making it an interesting drug target. A phase II trial of the promising LY2886721 inhibitor of BACE1 was suspended in June 2013 by Eli Lilly and Co., due to possible liver toxicity. This outcome was apparently a surprise to the study's team, particularly since BACE1 knockout mice and mice treated with the drug did not show such liver toxicity. Lilly proposed that the problem was not due to LY2886721 anti-BACE1 activity. We offer an alternative hypothesis, whereby anti-BACE1 activity may induce apparent hepatotoxicity through inhibiting BACE1's processing of β-galactoside α-2,6-sialyltransferase I (ST6Gal I). In knockout mice, paralogues, such as BACE2 or cathepsin D, could partially compensate. Furthermore, the short duration of animal studies and the short lifespan of study animals could mask effects that would require several decades to accumulate in humans. Inhibition of hepatic BACE1 activity in middle-aged humans would produce effects not detectable in mice. We present a testable model to explain the off-target effects of LY2886721 and highlight more broadly that so-called off-target drug effects might actually represent off-site effects that are not necessarily off-target. Consideration of this concept in forthcoming drug design, screening, and testing programs may prevent such failures in the future.
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
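The robustness mechanism can be sketched in a few lines, assuming a linear predictor and plain gradient ascent rather than the authors' alternating optimization; `mcc_fit`, its parameters and the synthetic data are all invented for illustration:

```python
import numpy as np

# Sketch of regularized MCC learning: maximize the correntropy (a Gaussian
# kernel on residuals) between predictions and labels, plus an L2 penalty.
# The kernel downweights samples with large residuals, so gross label
# outliers contribute almost nothing to the gradient.

def mcc_fit(X, y, sigma=2.0, lam=0.005, lr=0.1, iters=500):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        r = y - X @ w                                      # residuals
        k = np.exp(-r**2 / (2 * sigma**2))                 # kernel weights
        # gradient of mean correntropy minus lam * ||w||^2
        grad = (k * r) @ X / (len(y) * sigma**2) - 2 * lam * w
        w += lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
y[:10] += 20.0                                             # gross label outliers
w = mcc_fit(X, y)
print(np.round(w, 2))                                      # close to w_true
```

A squared loss would be dragged far off by the ten corrupted labels; the kernel weights for those samples are essentially zero, so the fit stays near the true parameters.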
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate the intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer alone but worse than that of the near maximum likelihood detector.
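A toy numerical illustration of why maximum likelihood sequence detection beats symbol-by-symbol decisions under intersymbol interference; the 2-tap channel, noise level and brute-force search below are assumptions for the sketch, not the paper's setup:

```python
import numpy as np
from itertools import product

# BPSK symbols through a 2-tap ISI channel, detected two ways:
# (1) exhaustive maximum likelihood search over all candidate sequences,
# (2) naive symbol-by-symbol thresholding that ignores the ISI.
# ML picks the sequence whose channel output is closest to the observation.

h = np.array([1.0, 0.5])                         # ISI channel taps
def channel(s):
    return np.convolve(s, h)[:len(s)]

rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=8)              # transmitted BPSK block
r = channel(s) + 0.3 * rng.normal(size=8)        # noisy received block

# (1) brute-force ML detection over all 2^8 candidate sequences
cands = np.array(list(product([-1.0, 1.0], repeat=8)))
errs = ((np.array([channel(c) for c in cands]) - r) ** 2).sum(axis=1)
s_ml = cands[np.argmin(errs)]

# (2) naive thresholding on the raw received samples
s_thr = np.sign(r)
print((s_ml == s).mean(), (s_thr == s).mean())   # per-symbol accuracies
```

Brute force is exponential in the block length, which is exactly why practical receivers resort to near maximum likelihood detectors or, cheaper still, equalizers.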
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
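The contrast drawn above can be sketched numerically; the die example, the bisection solver and the Gaussian sampling are illustrative assumptions, not the paper's calculation:

```python
import numpy as np

# Classic MaxEnt treats a mean constraint as exact and returns one
# Boltzmann-form distribution; treating the constraint value as uncertain
# (here Gaussian) induces a distribution over MaxEnt solutions instead.

x = np.arange(1, 7)                       # die faces

def maxent_mean(mu, lo=-20.0, hi=20.0):
    """MaxEnt distribution on x with mean mu: p_i proportional to exp(-lam * x_i)."""
    for _ in range(200):                  # bisection on the Lagrange multiplier
        lam = 0.5 * (lo + hi)
        p = np.exp(-lam * x); p /= p.sum()
        if (p * x).sum() > mu:
            lo = lam                      # mean too high -> increase lam
        else:
            hi = lam
    return p

# Jaynes' Brandeis die: mean 4.5 gives p ~ (.054, .079, .114, .165, .240, .347)
p_classic = maxent_mean(4.5)
print(np.round(p_classic, 3))

# uncertain constraint: mu ~ N(4.5, 0.1^2); each draw yields its own MaxEnt
rng = np.random.default_rng(0)
p_draws = np.stack([maxent_mean(m) for m in rng.normal(4.5, 0.1, 200)])
print(np.round(p_draws.mean(axis=0), 3))  # mean over the induced density
```

The spread of `p_draws` around `p_classic` is exactly the uncertainty that the classic approach discards.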
None
1984-05-25
As part of the overall Solvent Refined Coal (SRC-1) project baseline being prepared by International Coal Refining Company (ICRC), the RUST Engineering Company is providing necessary input for the Outside Battery Limits (OSBL) Facilities. The project baseline is comprised of: design baseline - technical definition of work; schedule baseline - detailed and management level 1 schedules; and cost baseline - estimates and cost/manpower plan. The design baseline (technical definition) for the OSBL Facilities has been completed and is presented in Volumes I, II, III, IV, V and VI. The OSBL technical definition is based on, and compatible with, the ICRC defined statement of work, design basis memorandum, master project procedures, process and mechanical design criteria, and baseline guidance documents. The design basis memorandum is included in Paragraph 1.3 of Volume I. The baseline design data is presented in 6 volumes. Volume I contains the introduction section and utility systems data through steam and feedwater. Volume II continues with utility systems data through fuel system, and contains the interconnecting systems and utility system integration information. Volume III contains the offsites data through water and waste treatment. Volume IV continues with offsites data, including site development and buildings, and contains raw materials and product handling and storage information. Volume V contains wastewater treatment and solid wastes landfill systems developed by Catalytic, Inc. to supplement the information contained in Volume III. Volume VI contains proprietary information of Resources Conservation Company related to the evaporator/crystallizer system of the wastewater treatment area.
Han, Seok-Jung; KEUM, Dong-Kwon; Jang, Seung-Cheol [KAERI, Daejeon (Korea, Republic of)
2015-05-15
The FCM (food chain model) includes complex transport phenomena of radioactive materials in the biokinetic system of contaminated environments. Estimation of chronic health effects is a key part of level 3 PSA (Probabilistic Safety Assessment), and it depends on the FCM estimate of contaminated food ingestion. The ingestion habits and agricultural products of a local region differ from general worldwide features and from case to case. This is a reason to develop domestic FCM data for level 3 PSA. However, generation of specific FCM data is a complex process subject to a large degree of uncertainty due to the inherent biokinetic models. As a preliminary study, the present study focuses on developing the infrastructure to generate specific FCM data. During this process, the features of the FCM data needed to generate domestic data were investigated. Based on the insights obtained from this process, specific domestic FCM data were developed to estimate the chronic health effects in off-site consequence analysis. From this study, the insight was obtained that the domestic FCM data are roughly 20 times higher than the MACCS2 default data. Based on this observation, it is clear that the specific chronic health effects of a domestic plant site should be considered in off-site consequence analysis.
Individualizing Services, Individualizing Responsibility
Garsten, Christina; Hollertz, Katarina; Jacobsson, Kerstin
and responsibilising the unemployed individual? The paper finds that the individualisation that is taking place occurs as an individualisation of responsibility, more than as an individualisation of interventions. A related finding is that the social rights perspective is becoming performance-oriented, and the normative demands placed on individuals appear increasingly totalizing, concerning the whole individual rather than the job-related aspects only. The paper is based on 23 in-depth interviews with individual clients as well as individual caseworkers and other professionals engaged in client-related work...
Wang, X Y; Qu, J Y; Shi, Z Q; Ling, Y S
2003-01-01
GNARD (Guangdong Nuclear Accident Real-time Decision support system) is a decision support system for off-site emergency management in the event of an accidental release from the nuclear power plants located in Guangdong province, China. The system is capable of calculating wind field, concentrations of radionuclide in environmental media and radiation doses. It can also estimate the size of the area where protective actions should be taken and provide other information about population distribution and emergency facilities available in the area. Furthermore, the system can simulate and evaluate the effectiveness of countermeasures assumed and calculate averted doses by protective actions. All of the results can be shown and analysed on the platform of a geographical information system (GIS).
Maskill, Mark (US Fish and Wildlife Service, Creston National Fish Hatchery, Kalispell, MT)
2003-03-01
Mitigation Objective 1: Produce Native Westslope Cutthroat Trout at Creston NFH--Task: Acquire eggs and rear up to 100,000 Westslope Cutthroat trout annually for offsite mitigation stocking. Accomplishments: A total of 150,000 westslope cutthroat eggs (M012 strain) were acquired from the State of Montana Washoe Park State Fish Hatchery in July 2001 for this objective. Another 120,000 westslope cutthroat eggs were taken from feral fish at Rogers Lake in May of 2001 by the Creston Hatchery crew. The fish were reared using approved fish culture techniques as defined in the U.S. Department of the Interior Fish Hatchery Management guidelines. Post-release survival and angler success are monitored annually by Montana Fish Wildlife and Parks (MFWP) and the Confederated Salish and Kootenai Tribes (CSKT). Stocking numbers and locations may vary yearly based on results of biological monitoring. Mitigation Objective 2: Produce Rainbow Trout at Creston NFH--Task: Acquire and rear up to 100,000 Rainbow trout annually for offsite mitigation in closed basin waters. Accomplishments: A total of 50,500 rainbow trout eggs (Arlee strain) were acquired from the State of Montana Arlee State Fish Hatchery in December 2001 for this objective. The fish were reared using approved fish culture techniques as defined in the U.S. Department of the Interior Fish Hatchery Management guidelines. Arlee rainbow trout are being used for this objective because the stocking locations are terminal basin reservoirs and habitat conditions and returns to creel are unsuitable for native cutthroat. Post-release survival and angler success are monitored annually by the Confederated Salish and Kootenai Tribes (CSKT). Stocking numbers and locations may vary yearly based on results of biological monitoring.
Sorensen, J.L.; Navne, L.E.; Martin, H.M.; Ottesen, B.; Albrecthsen, C.K.; Pedersen, B.W.; Kjaergaard, H.; Vleuten, C. van der
2015-01-01
OBJECTIVE: To examine how the setting in in situ simulation (ISS) and off-site simulation (OSS) in simulation-based medical education affects the perceptions and learning experience of healthcare professionals. DESIGN: Qualitative study using focus groups and content analysis. PARTICIPANTS: Twenty-f
Cocina, Frank G [Los Alamos National Laboratory; Stewart, William C [Los Alamos National Laboratory; Wald - Hopkins, Mark [Los Alamos National Laboratory; Hageman, John P [SWRI
2009-01-01
The Off-Site Source Recovery Project has been operating at Los Alamos National Laboratory since 1998 to address the U.S. Department of Energy responsibility for collection and management of orphaned or disused radioactive sealed sources which may represent a risk to public health and national security if not properly managed.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Schindewolf, Marcus; Arevalo, Annika; Saathoff, Ulfert; Käpermann, Philipp; Schmidt, Jürgen
2013-04-01
Since soil erosion is one of the most important issues in global soil degradation, great effort has been put into the application of erosion models for the assessment and prevention of on-site damage. Besides the primary impact of soil loss in decreasing soil fertility, erosion can cause significant impacts if transported sediments enter downslope ecosystems, settlements, infrastructure or traffic routes. These off-site damages can be very costly, affect many people and contaminate water resources. The analysis of these problems is intensified by the requirements of new legislation, such as the EU Water Framework Directive (WFD), providing new challenges for planning authorities in order to combat off-site damage. Hence there is strong public and scientific interest in understanding the processes of sediment as well as particle-attached nutrient and pollutant transport. Predicting the frequency, magnitude and extent of off-site impacts of water erosion is a necessary precondition for adequate risk assessments and mitigation measures. Process-based models are increasingly used for the simulation of soil erosion. Regarding the requirements of the WFD, these models need to deliver comparable estimates from the regional scale down to the level of mitigation measures. This study aims at applying the process-based model EROSION 3D for off-site risk assessment on different scales for the German federal state of Saxony using available geodata, database applications and GIS routines. The following issues were investigated: - Where are the expected sediment deposition areas? - Which settlements, infrastructures and traffic routes are affected by sediment fluxes? - Which river sections are affected by sediment inputs? - Which river sections are affected by nutrient and heavy metal inputs? The model results identify the Saxon loess belt as highly endangered by off-site damages, although hotspots can be found in the northern flatlands and the southern mountain range as
US Fish and Wildlife Service Staff, (US Fish and Wildlife Service, Creston National Fish Hatchery, Kalispell, MT)
2004-02-01
Mitigation Objective 1: Produce Native Westslope Cutthroat Trout at Creston NFH--Task: Acquire eggs and rear up to 100,000 Westslope Cutthroat trout annually for offsite mitigation stocking. Accomplishments: A total of 141,000 westslope cutthroat eggs (M012 strain) was acquired from the State of Montana Washoe Park State Fish Hatchery in May 2002 for this objective. We also received an additional 22,000 westslope cutthroat eggs, MO12 strain naturalized, from feral fish at Rogers Lake, Flathead County, Montana. The fish were reared using approved fish culture techniques as defined in the U.S. Fish and Wildlife Service, Fish Hatchery Management guidelines. Survival from the swim up fry stage to stocking was 95.6%. We achieved a 0.80 feed conversion this year on a new diet, Skretting ''Nutra Plus''. Post release survival and angler success is monitored annually by Montana Fish Wildlife and Parks (MFWP) and the Confederated Salish and Kootenai Tribe (CSKT). Stocking numbers and locations vary yearly based on results of biological monitoring and adaptive management. Mitigation Objective 2: Produce Rainbow Trout at Creston NFH--Task: Acquire and rear up to 100,000 Rainbow trout annually for offsite mitigation in closed basin waters. Accomplishments: A total of 54,000 rainbow trout eggs (Arlee strain) was acquired from the Ennis National Fish Hatchery in December 2002 for this objective. The fish were reared using approved fish culture techniques as defined in the U.S. Fish and Wildlife Service, Fish Hatchery Management guidelines. Survival from the swim up fry stage to stocking was 99.9%. We achieved a 0.79 feed conversion this year on a new diet, Skretting ''Nutra Plus''. Arlee rainbow trout are being used for this objective because the stocking locations are terminal basin reservoirs and habitat conditions and returns to the creel are unsuitable for native cutthroat. Post release survival and angler success is monitored annually
van Rooyen, Elise
2008-07-01
Background: Scaling up the implementation of new health care interventions can be challenging and demand intensive training or retraining of health workers. This paper reports on the results of testing the effectiveness of two different kinds of face-to-face facilitation used in conjunction with a well-designed educational package in the scaling up of kangaroo mother care. Methods: Thirty-six hospitals in the Provinces of Gauteng and Mpumalanga in South Africa were targeted to implement kangaroo mother care and participated in the trial. The hospitals were paired with respect to their geographical location and annual number of births. One hospital in each pair was randomly allocated to receive either 'on-site' facilitation (Group A) or 'off-site' facilitation (Group B). Hospitals in Group A received two on-site visits, whereas delegates from hospitals in Group B attended one off-site, 'hands-on' workshop at a training hospital. All hospitals were evaluated during a site visit six to eight months after attending an introductory workshop and were scored by means of an existing progress-monitoring tool with a scoring scale of 0–30. Successful implementation was regarded as demonstrating evidence of practice (score >10) during the site visit. Results: There was no significant difference between the scores of Groups A and B (p = 0.633). Fifteen hospitals in Group A and 16 in Group B demonstrated evidence of practice. The median score for Group A was 16.52 (range 0.00–23.79) and that for Group B 14.76 (range 7.50–23.29). Conclusion: A previous trial illustrated that the implementation of a new health care intervention could be scaled up by using a carefully designed educational package, combined with face-to-face facilitation by respected resource persons. This study demonstrated that the site of facilitation, either on site or at a centre of excellence, did not influence the ability of a hospital to implement KMC. The choice of outreach
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
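The margin being maximized can be made concrete for a toy naive Bayes model; the notation, parameter values and helper names below are invented for illustration, not taken from the paper:

```python
import numpy as np

# For a generative classifier, the margin of a sample is the log-probability
# gap between the true class and the best competing class; a positive margin
# means the sample is classified correctly, and margin-based training pushes
# these gaps up rather than maximizing raw likelihood.

def log_joint(theta_y, theta_xy, X):
    """Naive Bayes log p(y, x) for binary features X (n x d)."""
    return (np.log(theta_y)
            + X @ np.log(theta_xy).T
            + (1 - X) @ np.log(1 - theta_xy).T)

def margins(theta_y, theta_xy, X, y):
    L = log_joint(theta_y, theta_xy, X)           # n x classes
    true = L[np.arange(len(y)), y]
    L[np.arange(len(y)), y] = -np.inf             # mask the true class
    return true - L.max(axis=1)                   # gap to best competitor

theta_y = np.array([0.5, 0.5])                    # class priors
theta_xy = np.array([[0.8, 0.2],                  # p(x_j = 1 | y = 0)
                     [0.2, 0.8]])                 # p(x_j = 1 | y = 1)
X = np.array([[1, 0], [0, 1], [1, 1]])
y = np.array([0, 1, 0])
print(np.round(margins(theta_y, theta_xy, X, y), 2))  # -> [2.77 2.77 0.  ]
```

The third sample is exactly ambiguous under these parameters (margin 0); margin-based parameter learning would adjust `theta_xy` to separate such samples while keeping the rows valid probability distributions.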
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Veil, J.A.
1997-09-01
According to an American Petroleum Institute production waste survey reported on by P.G. Wakim in 1987 and 1988, the exploration and production segment of the US oil and gas industry generated more than 360 million barrels (bbl) of drilling wastes, more than 20 billion bbl of produced water, and nearly 12 million bbl of associated wastes in 1985. Current exploration and production activities are believed to be generating comparable quantities of these oil field wastes. Wakim estimates that 28% of drilling wastes, less than 2% of produced water, and 52% of associated wastes are disposed of in off-site commercial facilities. In recent years, interest in disposing of oil field wastes in solution-mined salt caverns has been growing. This report provides information on the availability of commercial disposal companies in oil- and gas-producing states, the treatment and disposal methods they employ, and the amounts they charge. It also compares cavern disposal costs with the costs of other forms of waste disposal.
Hosokawa, Naoto
2011-10-01
In recent years, budget restrictions have prompted hospital managers to consider outsourcing microbiology services, but onsite microbiology services have several advantages: 1) a higher recovery rate of microorganisms; 2) shorter turnaround time; 3) easier communication between physicians and laboratory technicians; 4) more effective utilization of blood cultures; 5) earlier information about the microorganism; 6) the ability to compile an antibiogram (a microbiological local factor); and 7) information for infection control. The disadvantages are operating and labor costs. The key to maximal utilization of an onsite microbiology service is close communication between physicians and the microbiology laboratory, which makes it possible to provide prompt and efficient reports to physicians through discussion of Gram stain findings, agar plate findings and epidemiological information. The rapid and accurate identification of pathogens affords directed therapy, thereby decreasing the use of broad-spectrum antibiotics and shortening the length of hospital stay and unnecessary ancillary procedures. When physicians use an outsourced microbiology service, they should discuss the services provided with the offsite laboratory. Infection control personnel have to compile susceptibility data for every isolate and monitor multi-drug-resistant organisms. For outsourced as well as onsite microbiology services, communication between the bedside and the laboratory is the most important point of effective utilization.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form, along with guidelines and recent data collected by the author, provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
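As a quick numerical check (illustrative only, not part of the paper), the relativistic maximum tension F_max = c⁴/4G evaluates to roughly 3 × 10⁴³ N:

```python
# Evaluate the conjectured maximum tension F_max = c^4 / (4G).
# CODATA constants; purely an illustrative back-of-envelope check.
c = 2.99792458e8   # speed of light, m/s
G = 6.67430e-11    # Newton's constant, m^3 kg^-1 s^-2

F_max = c**4 / (4 * G)
print(f"F_max = {F_max:.3e} N")  # on the order of 3e43 N
```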
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G, represented by the entropic force, can be abolished. Among them are the varying constants theories, some generalized entropy models applied both for cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Evaluation and measurements of radioactive air emission and off-site doses at SLAC.
Chan, Ivy; Liu, James; Tran, Henry
2013-08-01
SLAC, a high-energy (GeV) electron accelerator facility, performs experimental and theoretical research using high-energy electron and/or positron beams that can produce secondary neutron and gamma radiation when beam losses occur. Radioactive gas production (mainly ¹¹C, ¹³N, ¹⁵O and ⁴¹Ar) and release is one of the environmental protection program issues. U.S. DOE Order 458.1 requires that the NESHAP requirements of 40 CFR 61 Subpart H be followed. These regulations prescribe a total dose limit of 0.1 mSv/y to the Maximally Exposed Individual (MEI) of the general public, a requirement for a continuous air monitoring system if a release point within a facility can cause > 1 × 10⁻³ mSv/y to the MEI, and a requirement for periodic confirmatory measurements for minor sources whose releases contribute ≤ 1 × 10⁻³ mSv/y to the MEI. At SLAC, all air release points for current operations are evaluated to be minor sources. This paper describes SLAC's evaluation following the NESHAP requirements; measurements using the Air Monitoring Station (AMS) as periodic confirmatory measurements are also discussed.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of plier grip spans on total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas muscle activities were higher at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated: ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50 mm, 65 mm and 80 mm, respectively. Thus, a 50-mm grip span for pliers might be recommended, providing maximum exertion in gripping tasks as well as lower cutting-force-to-maximum ratios in cutting tasks.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented too. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M_max at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M_max estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M_max estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M_max estimates only under certain optimal conditions that are rarely observed in practice. In the future, we suggest that PSHA modelers either be brutally honest about the uncertainty of M_max estimates or find a way to decrease its influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps: the first step solves our optimization problem without considering the equal margin posteriors from the two views, and the second step then imposes the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Bruce, G.M.; Buddenbaum, J.E.; Lamb, J.K.; Widner, T.E.
1993-09-01
The Phase I feasibility study has focused on determining the availability of information for estimating exposures of the public to chemicals and radionuclides released as a result of historical operation of the facilities at the Oak Ridge Reservation (ORR). The estimation of such past exposures is frequently called dose reconstruction. The initial project tasks, Tasks 1 and 2, were designed to identify and collect information that documents the history of activities at the ORR that resulted in the release of contamination, and to characterize the availability of data that could be used to estimate the magnitude of the contaminant releases or public exposures. A history of operations that are likely to have generated off-site releases has been documented as a result of Task 1 activities. This task involved extensive review of historical operation records and interviews with present and past employees as well as other knowledgeable individuals. The investigation process is documented in this report. The Task 1 investigations have led to the documentation of an overview of the activities that have taken place at each of the major complexes, including routine operations, waste management practices, special projects, and accidents and incidents. Historical activities that appear to warrant the highest priority in any further investigations were identified based on their likely association with off-site emissions of hazardous materials, as indicated by the documentation reviewed or information obtained in interviews.
Helena Cotler A.
2005-05-01
One of the primary global concerns during the new millennium is the assessment of the impact of accelerated soil erosion on the economy and the environment (Pimentel et al., 1995; Lal, 1995). Erosion damages the site on which it occurs and also has undesirable effects off-site in the larger environment. Erosion moves sediments and nutrients off the land, creating the two most widespread water pollution problems in rivers, lakes and dams. The nutrients impact water quality largely through the process of eutrophication caused by an excessive content of nitrogen and phosphorus. In addition to nutrients, sediment and runoff may also carry toxic metals and organic compounds such as pesticides (Brady and Weil, 1999; Lal, 1994; de Graaf, 2000; Renschler and Harbor, 2002). The sediment itself is a major pollutant, causing a wide range of environmental damages. The sedimentation of dams and canals reduces their lifetime and efficiency, imposing high restoration costs on downstream users and affecting the national budget. In this sense, knowledge of sedimentation is an important tool for guiding spatial planning efficiently. Despite more than six decades of research, sedimentation is still probably the most serious technical problem faced by the dam industry (McCully, 2001). Many studies estimate present-day fluvial sediment and solute loads including both natural and accelerated soil erosion (Douglas, 1990). However, as Douglas (op. cit.) mentioned, many do not include all the erosion caused by human activity, because the eroded sediment is redeposited after a short movement downslope. Many soil particles are detached and carried downslope only to be held and trapped by a plant, tree or other obstacle a little further downslope. The sediment reaching the valley floor may not be completely removed by the river, but may be redistributed as alluvial floodplain deposits. The sediment transported downstream may be redeposited
A study on the effect of containment filtered venting system to off-site under severe accident
Jeon, Ju Young; Kwon, Tae Eun; Lee, Jai Ki [Hanyang University, Seoul (Korea, Republic of)
2015-12-15
The containment filtered venting system (CFVS) reduces the extent of the contaminated area around a nuclear power plant by strengthening the integrity of the containment building. In this study, the probabilistic assessment code MACCS2 was used to assess the off-site effect of the CFVS. The accident source term was selected from a Probabilistic Safety Analysis report of the SHINKORI 1 and 2 Nuclear Power Plants. Three of the 19 source term categories (STC) were chosen to evaluate the effective dose and thyroid dose to residents around the plant, and the doses with and without the CFVS were compared. The dose was calculated as a function of distance from the plant, and the damage scales based on the distance at which the IAEA criteria for effective dose (100 mSv per 7 days) and thyroid dose (50 mSv per 7 days) are exceeded were compared. The effective dose reduction rates of STC-3, STC-4 and STC-6 were about 95-99% over the whole range (0⁓35 km), and 96-98% for the thyroid dose; the results for effective dose and thyroid dose were similar. After applying the CFVS, the damage scale exceeding the effective dose criterion was about 1 km (mean). In particular, the STC-4 damage scale decreased significantly, from 26 km (mean) to 1.2 km (mean). The damage scale exceeding the thyroid dose criterion decreased to 2⁓3 km (mean); the STC-4 damage scale also decreased significantly compared to STC-3 and STC-6 in terms of effective dose.
Vickers, Linda
2015-02-01
Protection of the public from hazardous airborne releases at U.S. Department of Energy non-reactor nuclear facilities is the highest priority in all operations. As a result, the safety basis calculations that derive the protection of the public in the form of safety-class structures, systems, and components (SC SSCs) must accurately model the accident event and airborne plume in order to characterize the relative severity of the consequences. The simple Gaussian plume model is the most accurate and most commonly used model to describe the pollutant plume relative concentration (X/Q) downwind at a receptor. As a result, the X/Q value can easily be calculated in an electronic spreadsheet, as shown in this paper.
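A minimal sketch of the simple Gaussian plume X/Q calculation described above (the dispersion parameters, wind speed and release height below are illustrative placeholders, not values from the paper or any specific site):

```python
import math

def chi_over_q(y, z, u, sigma_y, sigma_z, H):
    """Gaussian plume relative concentration X/Q (s/m^3) at crosswind
    offset y and height z, for wind speed u, dispersion parameters
    sigma_y/sigma_z, and effective release height H (all SI units).
    Ground reflection is handled with an image source."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return lateral * vertical / (2 * math.pi * sigma_y * sigma_z * u)

# Illustrative numbers only: plume centerline, ground level, 5 m/s wind.
print(chi_over_q(0.0, 0.0, 5.0, 50.0, 25.0, 10.0))
```

Each factor maps to one cell formula in a spreadsheet implementation, which is why the calculation is so easy to set up there.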
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
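The Kirchhoff index can be computed directly from the Laplacian spectrum, since for a connected graph Kf(G) equals n times the sum of reciprocals of the nonzero Laplacian eigenvalues — a minimal sketch (assuming NumPy; the function name is illustrative):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) for a connected graph, via n * sum of reciprocals of the
    nonzero Laplacian eigenvalues (equivalent to summing resistance
    distances over all vertex pairs)."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
    eig = np.linalg.eigvalsh(L)
    return n * np.sum(1.0 / eig[eig > 1e-9])

# Path on 3 vertices: resistance distances 1, 1 and 2, so Kf = 4.
print(kirchhoff_index([[0, 1, 0], [1, 0, 1], [0, 1, 0]]))  # 4.0
```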
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurements outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
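As a small sketch of the quantity being maximized, here is a plug-in estimate of the mutual information between discrete classification responses and labels (the function and setup are illustrative only; the paper models MI via entropy estimation inside the learning objective rather than as a post-hoc score):

```python
import math
from collections import Counter

def mutual_information(responses, labels):
    """Plug-in empirical mutual information (in nats) between two
    discrete sequences, computed from their contingency counts."""
    n = len(responses)
    joint = Counter(zip(responses, labels))
    pr, pl = Counter(responses), Counter(labels)
    # sum over joint cells: p(r,l) * log( p(r,l) / (p(r) p(l)) )
    return sum((c / n) * math.log(c * n / (pr[r] * pl[l]))
               for (r, l), c in joint.items())

# Perfectly informative responses: MI equals the label entropy, log 2.
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))
```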
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10³⁰ kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10³⁰ kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
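For the bipartite case, the augmenting-path idea underlying all of these algorithms can be sketched with Kuhn's simple O(V·E) algorithm (a baseline only; the O(m√n) bounds cited in the abstract come from the more involved Hopcroft-Karp and Micali-Vazirani algorithms, and the names below are illustrative):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Maximum matching in a bipartite graph given as adjacency lists
    from left vertices to right vertices, via repeated augmenting paths."""
    match_r = [-1] * n_right  # right vertex -> matched left vertex

    def augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its current partner can be re-matched.
                if match_r[v] == -1 or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(augment(u, [False] * n_right) for u in range(n_left))

# Three left vertices, each adjacent to two of three right vertices:
print(max_bipartite_matching([[0, 1], [1, 2], [0, 2]], 3, 3))  # 3
```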
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
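The background-only versus background-plus-source comparison can be illustrated with a toy Poisson log-likelihood ratio (the numbers and per-pixel setup are invented for illustration; the actual MLE tool fits PSF-convolved models with Sherpa):

```python
import math

def poisson_loglike(counts, model):
    """Poisson log-likelihood of observed counts under model expectations
    (the constant log(k!) terms cancel in a likelihood ratio)."""
    return sum(k * math.log(m) - m for k, m in zip(counts, model))

# Toy 3-pixel region: flat background of 2 counts/pixel, with a candidate
# source adding 6 expected counts in the central pixel.
counts = [2, 9, 2]
ll_bkg = poisson_loglike(counts, [2.0, 2.0, 2.0])
ll_src = poisson_loglike(counts, [2.0, 8.0, 2.0])
delta = 2 * (ll_src - ll_bkg)
print(delta)  # positive: background-plus-source is favored here
```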
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
How long do centenarians survive? Life expectancy and maximum lifespan.
Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A
2017-08-01
The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-predicting filter, from which the receiver function is estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
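The Toeplitz/Levinson step mentioned above can be sketched with a generic Levinson-Durbin recursion (variable names and sign convention are illustrative; the paper's actual implementation may differ). Note how each reflection coefficient must satisfy |k| < 1, which is the stability condition the abstract refers to:

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for the prediction-error
    filter a (with a[0] = 1) from autocorrelation lags r[0..order].
    Returns the filter, the reflection coefficients and the final
    prediction-error power."""
    a = [1.0] + [0.0] * order
    error = r[0]
    ks = []
    for m in range(1, order + 1):
        acc = sum(a[i] * r[m - i] for i in range(m))
        k = -acc / error          # reflection coefficient, |k| < 1
        ks.append(k)
        prev = a[:]
        for i in range(1, m):
            a[i] = prev[i] + k * prev[m - i]
        a[m] = k
        error *= (1.0 - k * k)
    return a, ks, error

# AR(1)-like autocorrelation r[k] = 0.5**k: one nonzero filter coefficient.
print(levinson_durbin([1.0, 0.5, 0.25], 2))
```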
Pre-Study of Off-site Consequence Analysis in Level 3 PSA of Wolsong Unit 1
Kim, Won-Jik; Yang, Ho-Chang; Choi, Seong-Soo [ACT, Daejeon (Korea, Republic of)
2015-10-15
In order to perform Level 3 PSA, MACCS II (MELCOR Accident Consequence Code System 2) is needed. MACCS II is used in PSA to evaluate the population dose, i.e., the effects on health and the environment caused by radioisotopes released after an accident. In this study, a Steam Generator Tube Rupture (SGTR) event in a CANDU-6 plant was evaluated using the Level 1 PSA result, the Level 2 PSA result (ISSAC) and MACCS II. The following conclusions were obtained. - The maximum early fatalities are 5.35E+02, equal for the early and latent cases (99.5%). - The early and latent maximum cancer fatalities are 2.33E+03 and 1.11E+04, respectively (99.5%). - The early and latent maximum population doses are 1.25 and 5.00 person-rem/yr, respectively (99.5%). Another study performed a MACCS II evaluation for the Wolsong NPP, selecting a Small Break Loss of Coolant Accident (SBLOCA) event. The early and cancer fatalities obtained under similar assumptions were 3.02E+00 and 1.89E+03, respectively. This study's results are higher than the other study's because the underlying input data differ between the studies and the event frequencies differ (this study: 2.10E-07; other study: 4.93E-09).
Corsini, Raymond
1981-01-01
Paper presented at the 66th Convention of the International Association of Pupil Personnel Workers, October 20, 1980, Baltimore, Maryland, describes individual education based on the principles of Alfred Adler. Defines six advantages of individual education, emphasizing student responsibility, mutual respect, and allowing students to progress at…
The Maximum Coping Time Analysis of the ELAP for the OPR1400
Shin, Sung Hyun; Hah, Chang Joo [KINGS, Ulsan (Korea, Republic of); Jung, Si Chae; Lee, Chang Gyun [KEPCO E and C, Daejeon (Korea, Republic of)
2014-05-15
There have been many evaluations of and recommendations for the extended Station Blackout (SBO) condition of nuclear power plants; for example, SECY-11-0093/0137 is an NRC recommendation and WCAP-17601-P is a PWROG evaluation. An extended loss of AC power (ELAP) can be defined as an extended (or prolonged) SBO: a Loss of Offsite Power (LOOP) with loss of all Emergency Diesel Generators (EDG) and the Alternative Alternating Current (AAC) source, but with the Direct Current (DC) source available. This evaluation provides the NSSS response to an ELAP for the OPR1000 unit, presenting the phenomena that occur during the ELAP and the maximum coping time until core uncovery. It is assumed for this case that sufficient SG secondary makeup inventory exists or can be attained, so that the duration of the ELAP prior to core damage depends solely on the loss of inventory from the RCS. Even with a limited RCS cooldown and depressurization, and with conservatively high assumed RCP seal leakage, the plant can be sustained for over 65 hours prior to core uncovery.
An Taicheng, E-mail: antc99@gig.ac.cn [State Key Laboratory of Organic Geochemistry, Guangdong Key Laboratory of Environmental Resources Utilization and Protection, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou 510640 (China); Zhang Delin [State Key Laboratory of Organic Geochemistry, Guangdong Key Laboratory of Environmental Resources Utilization and Protection, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou 510640 (China); Graduate School of Chinese Academy of Sciences, Beijing 100049 (China); Li Guiying; Mai Bixian; Fu Jiamo [State Key Laboratory of Organic Geochemistry, Guangdong Key Laboratory of Environmental Resources Utilization and Protection, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou 510640 (China)
2011-12-15
Gas and total suspended particle (TSP) samples collected during work and off-work hours were investigated at on-site and off-site electronic-waste dismantling workshops (I- and O-EWDW) and compared with a plastic recycling workshop (PRW) and a waste incineration plant (WIP). TSP and total PBDE concentrations were 0.36-2.21 mg/m³ and 27-2975 ng/m³ at the different workshops, respectively. BDE-47, -99, and -209 were the major ΣPBDE congeners at the I-EWDW and WIP, while BDE-209 was the only dominant congener at the PRW and control sites during work hours and at all sites during off-work hours. Gas-particle partitioning correlated well with the subcooled liquid vapor pressure for all samples, except for those from the WIP and I-EWDW, from the park during work hours, and from the residential area during off-work hours. The predicted urban curve fitted the measured φ values well at the O-EWDW during work hours, whereas it was slightly over- or underestimated for the others. Exposure assessment revealed that the I-EWDW was the highest-exposure site. - Highlights: • On- and off-site atmospheric PBDEs were monitored at e-waste dismantling workshops in south China. • Gas-particle partitioning correlated well with the subcooled liquid vapor pressure for some samples. • Exposure assessment revealed that workers at the I-EWDW were the most highly exposed population. - The findings of this study may serve as a valuable reference for future risk assessment and environmental management in Guiyu, South China.
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how should the capacity vector of a dynamic network be changed as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow? After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. An efficient algorithm which uses two maximum dynamic flow algorithms is then proposed to solve the problem.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on these sample results, the total length of CC used in the design of an SFCL can be determined.
Lucas R. Nascimento
2012-04-01
OBJECTIVE: To evaluate the effects of different instructions on the assessment of maximum walking speed during the ten-meter walking test with chronic stroke subjects. METHODS: Participants were instructed to walk under four experimental conditions: (1) comfortable speed, (2) maximum speed (simple verbal command), (3) maximum speed (modified verbal command: "catch a bus"), and (4) maximum speed (verbal command plus demonstration). Participants walked three times in each condition, and the mean time to cover the intermediate 10 meters of a 14-meter corridor was registered to calculate gait speed (m/s). Repeated-measures ANOVAs, followed by planned contrasts, were employed to investigate differences between the conditions (α=5%). Means, standard deviations and 95% confidence intervals (CI) were calculated. RESULTS: The mean speeds for the four conditions were (1) 0.74 m/s, (2) 0.85 m/s, (3) 0.93 m/s and (4) 0.92 m/s, respectively, with significant differences between the conditions (F=40.9; p…).
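The gait-speed computation described in the methods (distance divided by the mean time over three trials across the intermediate 10 m) is simple arithmetic; a minimal sketch with hypothetical trial times:

```python
def gait_speed(times_s, distance_m=10.0):
    """Gait speed (m/s) from repeated timed walks over a fixed distance:
    the mean of the trial times is taken first, as in the protocol above.
    The trial times used below are hypothetical, not from the study."""
    mean_t = sum(times_s) / len(times_s)
    return distance_m / mean_t
```

With hypothetical trial times of 12.5, 13.0 and 13.5 s, the mean time is 13.0 s and the computed speed is about 0.77 m/s.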
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. A general relation has therefore been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour, which is consistent with results from population synthesis models and whose absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class associates with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for arbitrarily giving a statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension for the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
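The cross-minus-diagonal decomposition described above can be written out explicitly for the Boltzmann-Gibbs-Shannon case, which is one instance of the generator-function class (the generalized form in the paper replaces the logarithm with a generator-dependent function):

```latex
D(p \,\|\, q) \;=\; C(p,q) - H(p), \qquad
C(p,q) = -\sum_i p_i \log q_i, \qquad
H(p) = -\sum_i p_i \log p_i,
```

so that \(D(p\|q) = \sum_i p_i \log (p_i/q_i) \ge 0\), with equality if and only if \(p = q\): minimizing the divergence over \(q\) within a model is the dual of maximizing the entropy subject to matching constraints.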
Chollet, D J
1999-05-01
Despite the enactment of significant changes to the Medicare program in 1997, Medicare's Hospital Insurance trust fund is projected to be exhausted just as the baby boom enters retirement. To address Medicare's financial difficulties, a number of reform proposals have been offered, including several to individualize Medicare financing and benefits. These proposals would attempt to increase Medicare revenues and reduce Medicare expenditures by having individuals bear risk--investment market risk before retirement and insurance market risk after retirement. Many fundamental aspects of these proposals have yet to be worked out, including how to guarantee a baseline level of saving for health insurance after retirement, how retirees might finance unanticipated health insurance price increases after retirement, the potential implications for Medicaid of inadequate individual saving, and whether the administrative cost of making the system fair and adequate ultimately would eliminate any rate-of-return advantages from allowing workers to invest their Medicare contributions in corporate stocks and bonds.
Baarts, Charlotte
2009-01-01
Safety knowledge appears to be ‘a doing’. In construction work, safety is practised in the complex interrelationship between the individual, the pair and the gang. Thus the aim is to explore the nature and scope of individualist and collectivist preferences pertaining to the practice of safety at a construction…
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the requirement of maximum entropy, the characteristics of the system, and the connection conditions. This allows MENT to be applied to the statistical description of closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
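As an illustration of the technique, the maximum-entropy distribution over a finite set subject to a single moment (mean) constraint takes the exponential-family form p_i ∝ exp(-λx_i). A minimal sketch that finds λ by bisection (a toy example under that single-constraint assumption, not the paper's implementation):

```python
import math

def maxent_mean_constrained(values, target_mean, iters=200):
    """Maximum-entropy distribution over a finite set with a fixed mean.
    The solution has exponential-family form p_i ∝ exp(-lam * x_i);
    lam is found by bisection, since the mean is decreasing in lam."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z
    lo, hi = -50.0, 50.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid  # mean too high: increase lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]
```

Over the support {0, …, 5} with target mean 2, this returns a geometric-like distribution whose mean matches the constraint; with the constraint removed, it reduces to the uniform distribution, the unconstrained entropy maximizer.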
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. The mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P<0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length greater than 476.70 mm were definitely male and those less than 379.99 mm definitely female, while left femora with maximum length greater than 484.49 mm were definitely male and those less than 385.73 mm definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
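The demarking-point rule reported above amounts to a pair of thresholds per side, with the overlap zone left unassigned. A small sketch encoding exactly the thresholds from the abstract (the function name is illustrative):

```python
def classify_femur(max_length_mm, side):
    """Demarking-point sex classification from maximum femoral length,
    using the thresholds reported in the abstract (lengths in mm).
    Lengths between the two demarking points are left unassigned."""
    thresholds = {"right": (476.70, 379.99), "left": (484.49, 385.73)}
    male_dp, female_dp = thresholds[side]
    if max_length_mm > male_dp:
        return "male"
    if max_length_mm < female_dp:
        return "female"
    return "indeterminate"  # D.P. analysis cannot assign the overlap zone
```

This conservative rule explains the low identification rates quoted (4-13%): most bones fall in the indeterminate overlap zone.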
A strong test of the maximum entropy theory of ecology.
Xiao, Xiao; McGlinn, Daniel J; White, Ethan P
2015-03-01
The maximum entropy theory of ecology (METE) is a unified theory of biodiversity that predicts a large number of macroecological patterns using information on only species richness, total abundance, and total metabolic rate of the community. We evaluated four major predictions of METE simultaneously at an unprecedented scale using data from 60 globally distributed forest communities including more than 300,000 individuals and nearly 2,000 species. METE successfully captured 96% and 89% of the variation in the rank distribution of species abundance and individual size but performed poorly when characterizing the size-density relationship and intraspecific distribution of individual size. Specifically, METE predicted a negative correlation between size and species abundance, which is weak in natural communities. By evaluating multiple predictions with large quantities of data, our study not only identifies a mismatch between abundance and body size in METE but also demonstrates the importance of conducting strong tests of ecological theories.
Ignacio Jesús Chirosa Ríos
2011-05-01
This study proposes a logarithmic equation for the indirect calculation of maximum heart rate (HRmax) in team-sport players in integrated game situations. The experimental sample consisted of thirteen players (24±3 years) from a Division of Honour B handball team. HRmax was initially measured by means of the Course Navette test. Subsequently, twenty-one training sessions were conducted in which heart rate was recorded continuously and the rating of perceived exertion (RPE) was registered for each task. A linear regression analysis yielded an equation predicting HRmax from the maximum heart rates of the three highest-intensity sessions. The values predicted by this equation correlate significantly with those obtained in the Course Navette test and have a smaller typical error of measurement than other calculation methods. The main conclusion is that this equation provides a useful and convenient way of calculating HRmax in real game situations, avoiding non-specific analytical tests and thereby reducing the lack of ecological validity in functional assessment.
Keywords: training monitoring, functional assessment, predictive equation
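The regression step described above can be sketched as an ordinary least-squares fit of a logarithmic model y = a + b·ln(x). The data below are hypothetical; the study's actual variables and coefficients are not given in the abstract:

```python
import math

def fit_log_model(x, y):
    """Ordinary least-squares fit of y = a + b * ln(x), for x > 0.
    Illustrative only: the study's actual predictive equation and
    coefficients are not reproduced here."""
    t = [math.log(v) for v in x]
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    b = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
         / sum((ti - tbar) ** 2 for ti in t))
    a = ybar - b * tbar
    return a, b
```

On noiseless synthetic data generated from y = 150 + 10·ln(x), the fit recovers a = 150 and b = 10 exactly, which is a quick sanity check of the normal-equation algebra.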
Lucero, V.; Meale, B.M.; Reny, D.A.; Brown, A.N.
1990-09-01
In compliance with Department of Energy (DOE) Order 6430.1A, a seismic analysis was performed on DOE's Process Experimental Pilot Plant (PREPP), a facility for processing low-level and transuranic (TRU) waste. Because no hazard curves were available for the Idaho National Engineering Laboratory (INEL), DOE guidelines were used to estimate the frequency for the specified design-basis earthquake (DBE). A dynamic structural analysis of the building was performed, using the DBE parameters, followed by a probabilistic risk assessment (PRA). For the PRA, a functional organization of the facility equipment was effected so that top events for a representative event tree model could be determined. Building response spectra (calculated from the structural analysis), in conjunction with generic fragility data, were used to generate fragility curves for the PREPP equipment. Using these curves, failure probabilities for each top event were calculated. These probabilities were integrated into the event tree model, and accident sequences and respective probabilities were calculated through quantification. By combining the sequences failure probabilities with a transport analysis of the estimated airborne source term from a DBE, onsite and offsite consequences were calculated. The results of the comprehensive analysis substantiated the ability of the PREPP facility to withstand a DBE with negligible consequence (i.e., estimated release was within personnel and environmental dose guidelines). 57 refs., 19 figs., 20 tabs.
Individual based population inference using tagging data
Pedersen, Martin Wæver; Thygesen, Uffe Høgsbro; Baktoft, Henrik
A hierarchical framework for simultaneous analysis of multiple related individual datasets is presented. The approach is very similar to mixed effects modelling as known from statistical theory. The model used at the individual level is, in principle, irrelevant as long as a maximum likelihood es...... telemetry data from pike illustrates how the framework can identify individuals that deviate from the remaining population....
Ian Walkinshaw
2015-09-01
Responding to calls for research into measurable English language outcomes from individual language support consultations at universities, this study investigated the effect of individual consultations (ICs) on the academic writing skills and lexico-grammatical competence of students who speak English as an additional language (EAL). Attendance by 31 EAL students at ICs was recorded, and samples of their academic writing before and after a 9-month interval were compared. Participants' academic writing skills were rated, and lexico-grammatical irregularities were quantified. No statistically significant positive shifts manifested, owing to the relatively short research period and limited participant uptake, but there were encouraging predictors of future shifts given continued use of the service. First, although a Wilcoxon signed-rank test showed no association between attendance at ICs and shifts in academic writing ability, a Spearman's rho calculation suggested a tentative relationship to positive pre-post shifts in three academic writing sub-skills: Task Fulfillment, Grammar, and Vocabulary. Second, instances of four common lexico-grammatical irregularities (subject/verb, wrong word, plural/singular, and punctuation) declined at post-testing. Although only regular, sustained attendance would produce statistically significant shifts, there is a potential association between participants' use of ICs and improved academic writing skills and lexico-grammatical competence.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D, each distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
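The stated bound, maximum seismic moment equal to the injected volume times the modulus of rigidity, converts to a moment magnitude via the standard Hanks-Kanamori relation. A minimal sketch (the rigidity value 3×10¹⁰ Pa is a typical crustal assumption, not a number from the abstract):

```python
import math

def max_induced_moment_and_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper bound on induced seismic moment, M0_max = G * dV (N*m),
    converted to moment magnitude via the Hanks-Kanamori relation
    Mw = (2/3) * (log10(M0) - 9.1). The default rigidity G = 3e10 Pa
    is a typical crustal value assumed here for illustration."""
    m0 = rigidity_pa * injected_volume_m3
    mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
    return m0, mw
```

For example, injecting 10⁴ m³ gives a moment bound of 3×10¹⁴ N·m, i.e. Mw ≈ 3.6; reaching the magnitude-5 range observed for wastewater disposal requires correspondingly larger injected volumes under this bound.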
Soldat, J.K.
1989-10-01
This report compares the results of the calculation of potential radiation doses to the public by two different environmental dosimetric systems for the years 1983 through 1987. Both systems project the environmental movement of radionuclides released with effluents from Hanford operations; their concentrations in air, water, and foods; the intake of radionuclides by ingestion and inhalation; and, finally, the potential radiation doses from radionuclides deposited in the body and from external sources. The first system, in use for the past decade at Hanford, calculates radiation doses in terms of 50-year cumulative dose equivalents to body organs and to the whole body, based on the methodology defined in ICRP Publication 2. This system uses a suite of three computer codes: PABLM, DACRIN, and KRONIC. In the new system, 50-year committed doses are calculated in accordance with the recommendations of the ICRP Publications 26 and 30, which were adopted by the US Department of Energy (DOE) in 1985. This new system calculates dose equivalent (DE) to individual organs and effective dose equivalent (EDE). The EDE is a risk-weighted DE that is designed to be an indicator of the potential health effects arising from the radiation dose. 16 refs., 1 fig., 38 tabs.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over the throughput of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial approximation algorithm is proposed.
Research on the Off-site Regulatory Model of Commercial Banks' Treasury Operations
李升高
2014-01-01
With the rapid development of commercial banks' treasury operations, the central bank's off-site supervision model, which relies on manual reporting and over-the-counter monitoring, has lagged behind the informatization of treasury operations in commercial banks. This paper analyzes the status quo and shortcomings of the central bank's off-site supervision of commercial banks' treasury operations, points out the significance of building an off-site supervision system for treasury operations, and offers suggestions for advancing its construction in terms of building objectives, functional requirements, and implementation paths.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states.In this paper,we show that if incorrect copies are allowed to be produced,linearly dependent quantum states may also be cloned by the PQC.By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states,we derive the upper bound of the maximum confidence measure of a set.An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced…
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms, and study the general versions of the problems as well as the parameterized versions where there is an upper bound on the item sizes, for some integer k.
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess to a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions…
Use of Maximum Entropy Modeling in Wildlife Research
Roger A. Baldwin
2009-11-01
Maximum entropy (Maxent) modeling has great potential for identifying distributions and habitat selection of wildlife given its reliance on only presence locations. Recent studies indicate Maxent is relatively insensitive to spatial errors associated with location data, requires few locations to construct useful models, and performs better than other presence-only modeling approaches. Further advances are needed to better define model thresholds, to test model significance, and to address model selection. Additionally, development of modeling approaches is needed when using repeated sampling of known individuals to assess habitat selection. These advancements would strengthen the utility of Maxent for wildlife research and management.
Time-Reversal Acoustics and Maximum-Entropy Imaging
Berryman, J G
2001-08-22
Target location is a common problem in acoustical imaging using either passive or active data inversion. Time-reversal methods in acoustics have the important characteristic that they provide a means of determining the eigenfunctions and eigenvalues of the scattering operator for either of these problems. Each eigenfunction may often be approximately associated with an individual scatterer. The resulting decoupling of the scattered field from a collection of targets is a very useful aid to localizing the targets, and suggests a number of imaging and localization algorithms. Two of these are linear subspace methods and maximum-entropy imaging.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given, and selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of the various converter systems is verified through simulations. The proposed theory and analysis are validated through experimental investigations.
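The converter analysis above presupposes a tracking loop that locates the MPP. A minimal perturb-and-observe (hill-climbing) tracker can be sketched as follows; the parabolic power curve and the step sizes are purely illustrative assumptions, not the paper's model:

```python
def pv_power(v: float) -> float:
    """Toy power-vs-voltage curve with a single interior maximum
    (illustrative stand-in for a real PV i-v characteristic)."""
    v_oc, i_sc = 20.0, 5.0
    return max(0.0, v * i_sc * (1.0 - v / v_oc))  # peaks at v = v_oc / 2

def perturb_and_observe(steps: int = 200, v0: float = 2.0, dv: float = 0.1) -> float:
    """Hill-climbing MPPT: keep perturbing the operating voltage in the
    direction that last increased power; reverse when power drops."""
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse perturbation direction
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()   # settles near v = 10 V, oscillating by +/- dv
```

In a real converter the "voltage perturbation" is realized by adjusting the duty ratio, which is exactly where the load-range conditions derived in the paper matter.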
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. …
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p…
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which the Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage S_rad becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2/(M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which the Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i…
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson's equation.
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts were responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged interclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
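The trial- and day-aggregation figures quoted above are consistent with the Spearman-Brown prophecy formula for the reliability of averaged measurements. The sketch below applies it; the use of this formula is our inference, not stated in the abstract:

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Reliability of the mean of k parallel measurements,
    given the reliability r_single of a single measurement."""
    return k * r_single / (1.0 + (k - 1) * r_single)

# One trial per day has reliability 0.939; averaging five trials:
r5 = spearman_brown(0.939, 5)   # about 0.987, matching the reported value
# A single day's measurement has reliability 0.836; averaging two days:
r2 = spearman_brown(0.836, 2)   # about 0.911, matching the reported value
```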
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment…
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
…EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works…
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
Taisia Vitkovski
2015-01-01
Background: Increasingly, as in our institution, operating rooms are located in hospitals while the pathology suite is located at a distant location because of off-site consolidation of pathology services. Telepathology is a technology which bridges the gap between pathologists and offers a means to obtain a consultation remotely. We aimed to evaluate the utility of telepathology as a means to assist the pathologist at the time of intraoperative consultation of lung nodules when a subspecialty pathologist is not available to directly review the slide. Methods: Cases of lung nodules suspicious for a neoplasm were included. Frozen sections were prepared in the usual manner. The pathologists on the intraoperative consultation service at two of our system hospitals notified the thoracic pathologist of each case after rendering a preliminary diagnosis. The consultation was performed utilizing a Nikon™ Digital Sight camera and web-based Remote Medical Technologies™ software with live video streaming directed by the host pathologist. The thoracic pathologist rendered a diagnosis without knowledge of the preliminary interpretation, then discussed the interpretation with the frozen section pathologist. The interpretations were compared with the final diagnosis rendered after sign-out. Results: One hundred and three consecutive cases were included. The frozen section pathologist and the thoracic pathologist had concordant diagnoses in 93 cases (90.2%), discordant diagnoses in nine cases (8.7%), and one case in which both deferred. There was agreement between the thoracic pathologist's diagnosis and the final diagnosis in 98% of total cases, including 8/9 (88.9%) of the discordant cases. In two cases, if the thoracic pathologist had not been consulted, the patient would have been undertreated. Conclusions: We have shown that telepathology is an excellent consultation tool in the frozen section diagnosis of lung nodules.
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
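Of the processes named above, the Ornstein-Uhlenbeck model (range-resident movement around a home-range centre) is easy to simulate directly. The Euler-Maruyama sketch below uses arbitrary illustrative parameters, not values from the paper:

```python
import random

def simulate_ou(n: int = 10_000, dt: float = 0.01, tau: float = 1.0,
                sigma: float = 1.0, x0: float = 0.0, seed: int = 42) -> list:
    """Euler-Maruyama integration of dx = -(x / tau) dt + sigma dW,
    a one-dimensional Ornstein-Uhlenbeck position process."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n):
        x += -(x / tau) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = simulate_ou()
# The stationary variance is sigma**2 * tau / 2 (here 0.5), which the
# long-run sample variance approaches.
```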
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
…item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We further…
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
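The single-constraint construction described above is short enough to state in full: maximizing the Shannon entropy subject to normalization and a fixed average logarithm forces a power law. A sketch of the standard variational step:

```latex
\max_{p}\; S[p] = -\int p(x)\,\ln p(x)\,dx
\quad \text{subject to} \quad
\int p(x)\,dx = 1, \qquad \int p(x)\,\ln x \,dx = \chi .
```

Setting the functional derivative of the Lagrangian to zero,

```latex
\frac{\delta}{\delta p}\!\left[\, S[p]
  + \lambda\Big(\textstyle\int p\,dx - 1\Big)
  + \mu\Big(\textstyle\int p \ln x\,dx - \chi\Big)\right]
= -\ln p(x) - 1 + \lambda + \mu \ln x = 0
\;\;\Longrightarrow\;\;
p(x) \propto x^{\mu} = x^{-\alpha},
```

with the exponent $\alpha = -\mu$ fixed by the constraint value $\chi$.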
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$ and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of the adjacency matrix of $G$. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
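The Estrada index defined above is directly computable from the adjacency spectrum. A minimal sketch (the triangle graph is chosen here only as an illustration; its eigenvalues 2, -1, -1 make the expected value easy to check by hand):

```python
import numpy as np

def estrada_index(adj):
    """Estrada index EE(G) = sum_i exp(lambda_i) over adjacency eigenvalues."""
    eigvals = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    return float(np.sum(np.exp(eigvals)))

# Triangle C3: adjacency eigenvalues are 2, -1, -1, so EE = e^2 + 2/e
triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]
print(estrada_index(triangle))  # e**2 + 2/e ≈ 8.1248
```

Because the adjacency matrix is symmetric, `eigvalsh` is the appropriate (real-spectrum) eigensolver.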
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Metabolic networks evolve towards states of maximum entropy production.
Unrean, Pornkamol; Srienc, Friedrich
2011-11-01
A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such a state we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such a reduced metabolic network metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations the specific growth rate of the strain continuously increased together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted from the state when the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
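The fitted prediction equation above is a one-line computation. A minimal sketch, with illustrative (hypothetical) values for the inoculum biomass, yield, and MIC, since the abstract does not report strain-specific numbers:

```python
def predicted_max_biomass(x0, y_xp, mic_lactate, k=0.59):
    """Predict maximum biomass X_max from the fitted relation
    X_max - X_0 = k * Y_X/P * C, where Y_X/P is the biomass yield per unit
    of lactate produced and C is the MIC of lactate (k = 0.59 +/- 0.02)."""
    return x0 + k * y_xp * mic_lactate

# Hypothetical inputs: X0 = 0.1 g/L, Y_X/P = 0.15 g/g, MIC = 90 g/L
print(predicted_max_biomass(0.1, 0.15, 90.0))  # 0.1 + 0.59*0.15*90 = 8.065
```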
Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling
Barnhart, Paul R.; Gillam, Erin H.
2016-01-01
Individuals along the periphery of a species' distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species' occurrence. In the last decade, habitat suitability modeling has become widely used as a substitute for simplistic distribution mapping, which allows regional managers the ability to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and to determine the vegetative and climatic characteristics driving these models. Mist-netting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, are in need of update. Maximum-entropy modeling showed that temperature, not precipitation, was the variable most important for model production. This fine-scale result highlights the importance of habitat suitability modeling, as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research about how and why individuals residing in the peripheral margins of a species' distribution may show marked differences in habitat use as a result of urban expansion, habitat loss, and climate change compared to more centralized populations. PMID:27935936
Uncertainties in offsite consequence analysis
Young, M.L.; Harper, F.T.; Lui, C.H.
1996-03-01
The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the U.S. Nuclear Regulatory Commission and the European Commission began co-sponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables using a formal expert judgment elicitation and evaluation process. This paper focuses on the methods used in and results of this on-going joint effort.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
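A stripped-down sketch of the decision rule underlying such a decoder: with equiprobable hypotheses and white Gaussian noise, the MAP decision reduces to picking the hypothesized signal with the largest correlation magnitude. The phase codes below are invented for illustration (the patent's MAP phase estimation for random phase perturbations is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
# Hypothesized phase-coded signals (assumed for illustration): orthogonal
# complex exponentials with distinct per-sample phase patterns.
codes = [np.exp(1j * 2 * np.pi * k * np.arange(n) / n) for k in (3, 7, 11)]

true_idx = 1
received = codes[true_idx] + 0.5 * (rng.standard_normal(n)
                                    + 1j * rng.standard_normal(n))

# Equal priors + white Gaussian noise: the MAP statistic for each hypothesis
# is (monotone in) the magnitude of its correlation with the received data.
stats = [abs(np.vdot(c, received)) for c in codes]
decoded = int(np.argmax(stats))
print(decoded)  # index of the most likely transmitted signal
```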
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
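A toy version of the core idea, approximating the concave entropy terms with piecewise-linear segments so that a maximum-entropy problem becomes a linear program. This is a sketch, not the paper's restoration code: it maximizes the entropy of a 4-bin distribution under a normalization constraint, using the standard lambda (breakpoint-weight) formulation, which is exact for concave maximization without extra ordering constraints:

```python
import numpy as np
from scipy.optimize import linprog

# Maximize H = sum_i -p_i ln p_i subject to sum_i p_i = 1, with -p ln p
# replaced by its piecewise-linear interpolation over fixed breakpoints.
n = 4
breaks = np.linspace(0.0, 1.0, 21)             # breakpoints for each p_i
with np.errstate(divide="ignore", invalid="ignore"):
    f = np.where(breaks > 0, -breaks * np.log(breaks), 0.0)

K = len(breaks)
num_vars = n * K                                # lambda_{i,k} weights
c = -np.tile(f, n)                              # linprog minimizes, so negate

# Each p_i's lambdas sum to 1; the interpolated p_i values sum to 1.
A_eq = np.zeros((n + 1, num_vars))
b_eq = np.ones(n + 1)
for i in range(n):
    A_eq[i, i * K:(i + 1) * K] = 1.0
A_eq[n] = np.tile(breaks, n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
entropy = -res.fun
print(entropy)  # close to ln(4) ≈ 1.386, the uniform-distribution maximum
```

Since 0.25 is itself a breakpoint, the piecewise-linear optimum coincides with the true maximum-entropy solution here.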
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of ship's draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature. Among those most commonly used are those of Barrass, Millward, Eryuzlu or ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
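A simplified sketch of this kind of estimator: two channels share one fundamental but have different harmonic amplitudes, phases, and noise levels, and the fundamental is found by grid search over candidates, maximizing the summed per-channel projection energy onto the harmonic subspace. All signal parameters below are invented for illustration, and the per-channel noise weighting of a full ML treatment is omitted (an unweighted sum is used):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n, f0, harmonics = 8000.0, 800, 220.0, 3
t = np.arange(n) / fs

# Two channels sharing f0 = 220 Hz, with channel-specific amplitudes,
# phases, and noise levels (all values are illustrative).
channels = []
for amps, phases, sigma in [((1.0, 0.6, 0.3), (0.1, 1.2, 2.0), 0.3),
                            ((0.4, 0.9, 0.2), (0.7, 0.3, 1.5), 0.5)]:
    x = sum(a * np.cos(2 * np.pi * (l + 1) * f0 * t + p)
            for l, (a, p) in enumerate(zip(amps, phases)))
    channels.append(x + sigma * rng.standard_normal(n))

def harmonic_energy(x, f):
    """Energy of x projected onto the harmonic subspace of candidate f."""
    Z = np.column_stack([fn(2 * np.pi * (l + 1) * f * t)
                         for l in range(harmonics) for fn in (np.cos, np.sin)])
    coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
    return float(np.sum((Z @ coef) ** 2))

# Grid search: the joint estimate maximizes the summed projection energy.
grid = np.arange(100.0, 400.0, 0.5)
scores = [sum(harmonic_energy(x, f) for x in channels) for f in grid]
f0_hat = float(grid[int(np.argmax(scores))])
print(f0_hat)  # close to the true fundamental of 220 Hz
```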
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
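The Poissonian reasoning described above can be sketched in a few lines: model the binned counts as background plus a scaled line profile and maximize the Poisson log-likelihood over the line amplitude. The bin grid, profile, and count levels below are invented for illustration, and the background is held fixed (CORA itself derives a fixed-point equation for the fluxes rather than using a generic optimizer):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Binned spectrum: flat background b plus a Gaussian emission line with
# total line counts a, centred at bin 50 (illustrative model).
bins = np.arange(100)
profile = np.exp(-0.5 * ((bins - 50) / 3.0) ** 2)
profile /= profile.sum()
b_true, a_true = 2.0, 500.0
counts = rng.poisson(b_true + a_true * profile)

def neg_log_like(a):
    """Negative Poisson log-likelihood for line amplitude a (constant terms
    in the data dropped; background held fixed at its true value)."""
    mu = b_true + a * profile
    return float(np.sum(mu - counts * np.log(mu)))

res = minimize_scalar(neg_log_like, bounds=(0.0, 5000.0), method="bounded")
a_hat = res.x
print(a_hat)  # close to the true line flux of 500 counts
```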
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
On-line Auditing: Reflections on Off-site Auditing
林忠华
2016-01-01
With the development of the information-based national economy, it has become an urgent task to develop information-based auditing. As a product of the high stage of computer auditing, on-line auditing is an approach that adapts to the audit environment and is regarded as one of the major objectives of information-based audit work. As an off-site auditing approach, on-line auditing is characterized by real-time and remote audit as well as highly efficient data acquisition and analysis. The key technologies of on-line auditing involve networking technology and data acquisition and processing technology, while its mandatory function and advantage is the audit warning mechanism. The specialized auditing procedures include information system testing and auditing, daily on-line query, and data acquisition and analysis. At present there are still problems to be solved in the field of on-line auditing, which requires reference to pioneering successful examples and effective measures to consistently improve the on-line auditing system.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
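The extraction of dominant correlation modes can be illustrated with ordinary PCA on a synthetic ensemble. This sketch uses a plain sample covariance rather than the paper's maximum likelihood estimator (which additionally accounts for coordinate uncertainty), and the ensemble below is invented: two "atoms" share a collective mode while the rest fluctuate independently:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ensemble of 200 "structures", each with 5 atoms in 1-D (flattened coords).
# Atoms 0 and 1 share a collective mode; the rest fluctuate independently.
n_models, n_atoms = 200, 5
collective = rng.standard_normal(n_models)
coords = 0.1 * rng.standard_normal((n_models, n_atoms))
coords[:, 0] += collective
coords[:, 1] += collective

# Ordinary PCA of the mean-centred coordinates (the ML weighting of the
# paper's method is omitted here).
centred = coords - coords.mean(axis=0)
cov = centred.T @ centred / (n_models - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
top_mode = eigvecs[:, -1]          # eigenvector of the largest eigenvalue

# The dominant mode should load almost entirely on the two correlated atoms.
loadings = np.abs(top_mode)
print(np.argsort(loadings)[-2:])   # indices of the two largest loadings
```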
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T^{-1}(I - A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined, with the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Individual Genetic Susceptibility
Eric J. Hall
2008-12-08
Risk estimates derived from epidemiological studies of exposed populations, as well as the maximum permissible doses allowed for occupational exposure and exposure of the public to ionizing radiation, are all based on the assumption that the human population is uniform in its radiosensitivity, except for a small number of individuals, such as ATM homozygotes, who are easily identified by their clinical symptoms. The hypothesis upon which this proposal is based is that the human population is not homogeneous in radiosensitivity, but that radiosensitive sub-groups exist which are not easy to identify. These individuals would suffer an increased incidence of detrimental radiation effects, and distort the shape of the dose response relationship. The radiosensitivity of these groups depends on the expression levels of specific proteins. The plan was to investigate the effect of 3 relatively rare, high-penetrance genes available in mice, namely Atm, mRad9 and Brca1. The purpose of radiation protection is to prevent deterministic effects of clinical significance and limit stochastic effects to acceptable levels. We plan, therefore, to compare with wild type animals the radiosensitivity of mice heterozygous for each of the genes mentioned above, as well as double heterozygotes for pairs of genes, using two biological endpoints: a) ocular cataracts as an important and relevant deterministic effect, and b) oncogenic transformation in cultured embryo fibroblasts, as a surrogate for carcinogenesis, the most relevant stochastic effect.
Brazilian Cardiorespiratory Fitness Classification Based on Maximum Oxygen Consumption
Herdy, Artur Haddad; Caixeta, Ananda
2016-01-01
Background: Cardiopulmonary exercise test (CPET) is the most complete tool available to assess functional aerobic capacity (FAC). Maximum oxygen consumption (VO2 max), an important biomarker, reflects the real FAC. Objective: To develop a cardiorespiratory fitness (CRF) classification based on VO2 max in a Brazilian sample of healthy and physically active individuals of both sexes. Methods: We selected 2837 CPET from 2837 individuals aged 15 to 74 years, distributed as follows: G1 (15 to 24); G2 (25 to 34); G3 (35 to 44); G4 (45 to 54); G5 (55 to 64) and G6 (65 to 74). Good CRF was the mean VO2 max obtained for each group, generating subclassifications ranging from Very Low (VL) up to values above 105% of the group mean. Results: Mean VO2 max for men: G1 53.13, G2 49.77, G3 47.67, G4 42.52, G5 37.06, G6 31.50; for women: G1 40.85, G2 40.01, G3 34.09, G4 32.66, G5 30.04, G6 26.36. Conclusions: This chart stratifies VO2 max measured on a treadmill in a robust Brazilian sample and can be used as an alternative for the real functional evaluation of physically active and healthy individuals stratified by age and sex. PMID:27305285
Maximum likelihood pedigree reconstruction using integer linear programming.
Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A
2013-01-01
Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and maximum width are calculated from the lake polygon...
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
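The decision model described above can be sketched as follows; the scale values, the Gaussian noise level sigma, and the function names are illustrative assumptions for this sketch, not the MLDS package's actual internals:

```python
import math

def phi(z):
    """Standard normal CDF via math.erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def trial_loglik(psi, quad, resp, sigma=0.3):
    """Log-likelihood of one difference-scaling trial.
    psi  : perceptual scale values, one per stimulus level
    quad : (a, b, c, d) stimulus indices; the observer compares the
           interval |psi[b] - psi[a]| with |psi[d] - psi[c]|
    resp : 1 if the observer judged the second interval larger, else 0
    The decision variable is corrupted by Gaussian noise of sd sigma."""
    a, b, c, d = quad
    delta = abs(psi[d] - psi[c]) - abs(psi[b] - psi[a])
    p_second = phi(delta / sigma)
    return math.log(p_second if resp == 1 else 1.0 - p_second)

def total_loglik(psi, trials, sigma=0.3):
    """Sum of per-trial log-likelihoods over (quad, resp) pairs."""
    return sum(trial_loglik(psi, q, r, sigma) for q, r in trials)
```

Maximum likelihood estimation then searches for the psi values (with the endpoints fixed, e.g. psi[0] = 0 and psi[-1] = 1) that maximize total_loglik over the observed trials.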
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of two subsystems; while the different ways of transfer affect the model with respect to the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
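The linear-time solution the abstract alludes to is the classic single-pass scan (Kadane's scheme); a minimal Python sketch, independent of the paper's monadic presentation:

```python
def max_segment_sum(xs):
    """Linear-time maximum segment sum: the best segment ending at the
    current element is either empty or an extension of the best segment
    ending at the previous element (the empty segment, sum 0, is allowed,
    as in the classic specification)."""
    best = 0
    ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)
        best = max(best, ending_here)
    return best
```

For Bentley's standard example list the answer is 187; the cubic-time specification (maximize the sum over all contiguous segments) gives the same result.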
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ0, μ1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ0, ℓ1)}. The optimal control switches between the extreme values μ0 and μ1 according to whether X_t lies below or above g∗(S_t), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations) in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_{μν} and a curvature tensor P^λ_{μνρ}. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches only focus on discriminating moving objects by background subtraction, even though the objects of interest may be moving or stationary. In this paper, we propose layers segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.
Accurate Maximum Power Tracking in Photovoltaic Systems Affected by Partial Shading
Pierluigi Guerriero
2015-01-01
A maximum power tracking algorithm exploiting operating point information gained on individual solar panels is presented. The proposed algorithm recognizes the presence of multiple local maxima in the power-voltage curve of a shaded solar field and evaluates the coordinates of the absolute maximum. The effectiveness of the proposed approach is evidenced by means of circuit-level simulation and experimental results. Experiments evidenced that, in comparison with a standard perturb and observe algorithm, we achieve faster convergence in normal operating conditions (when the solar field is uniformly illuminated) and we accurately locate the absolute maximum power point in partial shading conditions, thus avoiding convergence on local maxima.
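The two-stage idea described in the abstract (bracket the absolute maximum among several local maxima, then refine it locally) can be sketched as follows; the toy P-V curve, step sizes, and function names are hypothetical assumptions for this sketch, not the authors' implementation:

```python
def pv_power(v):
    """Toy P-V curve of a partially shaded series string: the shaded and
    unshaded sections produce two separate local maxima (hypothetical
    shape, not a physical panel model)."""
    hump1 = 40.0 * v * max(0.0, 1.0 - (v / 12.0) ** 4)   # shaded section
    hump2 = 25.0 * v * max(0.0, 1.0 - (v / 30.0) ** 8)   # unshaded section
    return hump1 + hump2

def global_mpp(curve, v_max, coarse_steps=60, fine_step=0.01):
    """Coarse voltage sweep to bracket the absolute maximum among the
    local maxima, then a perturb-and-observe style hill climb to refine."""
    best_v = max((i * v_max / coarse_steps for i in range(coarse_steps + 1)),
                 key=curve)
    v = best_v
    while True:
        here = curve(v)
        if curve(v + fine_step) > here:
            v += fine_step
        elif curve(v - fine_step) > here:
            v -= fine_step
        else:
            return v, here
```

A plain perturb-and-observe tracker started at low voltage would lock onto the first (lower) hump of this curve; the coarse sweep is what lets the tracker find the absolute maximum instead.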
Neuromuscular determinants of maximum walking speed in well-functioning older adults.
Clark, David J; Manini, Todd M; Fielding, Roger A; Patten, Carolynn
2013-03-01
Maximum walking speed may offer an advantage over usual walking speed for clinical assessment of age-related declines in mobility function that are due to neuromuscular impairment. The objective of this study was to determine the extent to which maximum walking speed is affected by neuromuscular function of the lower extremities in older adults. We recruited two groups of healthy, well-functioning older adults who differed primarily on maximum walking speed. We hypothesized that individuals with slower maximum walking speed would exhibit reduced lower extremity muscle size and impaired plantarflexion force production and neuromuscular activation during a rapid contraction of the triceps surae muscle group (soleus (SO) and gastrocnemius (MG)). All participants were required to have a usual 10-meter walking speed of >1.0 m/s. Individuals for whom the difference between usual and maximum 10 m walking speed exceeded 0.6 m/s were assigned to the "Faster" group (n=12). Peak rate of force development (RFD) and rate of neuromuscular activation (rate of EMG rise) of the triceps surae muscle group were assessed during a rapid plantarflexion movement. Muscle cross sectional area of the right triceps surae, quadriceps and hamstrings muscle groups was determined by magnetic resonance imaging. Across participants, the difference between usual and maximal walking speed was predominantly dictated by maximum walking speed (r=.85). We therefore report maximum walking speed (1.76 and 2.17 m/s in the Slower and Faster groups, respectively). Muscle cross sectional area did not differ between groups for the triceps surae (p=.44), quadriceps (p=.76) and hamstrings (p=.98). MG rate of EMG rise was positively associated with RFD and maximum 10 m walking speed, but not the usual 10 m walking speed. These findings support the conclusion that maximum walking speed is limited by impaired neuromuscular force and activation of the triceps surae muscle group. Future research should further evaluate the utility of maximum walking speed for use in clinical assessment to detect and monitor age-related mobility decline.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Marginal Maximum Likelihood Estimation of Item Response Models in R
Matthew S. Johnson
2007-02-01
Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
A maximum entropy model for opinions in social groups
Davis, Sergio; Navarrete, Yasmín; Gutiérrez, Gonzalo
2014-04-01
We study how the opinions of a group of individuals determine their spatial distribution and connectivity, through an agent-based model. The interaction between agents is described by a Hamiltonian in which agents are allowed to move freely without an underlying lattice (the average network topology connecting them is determined from the parameters). This kind of model was derived using maximum entropy statistical inference under fixed expectation values of certain probabilities that (we propose) are relevant to social organization. Control parameters emerge as Lagrange multipliers of the maximum entropy problem, and they can be associated with the level of consequence between the personal beliefs and external opinions, and the tendency to socialize with peers of similar or opposing views. These parameters define a phase diagram for the social system, which we studied using Monte Carlo Metropolis simulations. Our model presents both first and second-order phase transitions, depending on the ratio between the internal consequence and the interaction with others. We have found a critical value for the level of internal consequence, below which the personal beliefs of the agents seem to be irrelevant.
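The Monte Carlo Metropolis simulation of such a pairwise opinion Hamiltonian can be illustrated with a much simpler stand-in: a ring of binary opinions with a nearest-neighbour coupling and an external "media" field. The on-lattice geometry and all parameter values here are arbitrary assumptions for the sketch; the authors' actual model lets agents move freely without a lattice:

```python
import math
import random

def metropolis_opinions(n=50, coupling=1.0, field=0.2, beta=2.0,
                        steps=20000, seed=1):
    """Sample binary opinions s_i = +/-1 on a ring with Hamiltonian
    H = -coupling * sum_i s_i s_{i+1} - field * sum_i s_i
    using the Metropolis acceptance rule, and return the mean opinion."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        # Energy change from flipping s_i (ring neighbours i-1 and i+1).
        neighbours = s[i - 1] + s[(i + 1) % n]
        dE = 2.0 * s[i] * (coupling * neighbours + field)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i] = -s[i]
    return sum(s) / n   # mean opinion ("magnetization")
```

Sweeping beta (the inverse of a social "temperature") or the coupling in such a sketch is the standard way to map out the kind of phase diagram the abstract describes.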
Tests of maximum oxygen intake. A critical review.
Shephard, R J
1984-01-01
The determinants of endurance effort vary, depending upon the extent of the muscle mass that is activated. Large muscle work, such as treadmill running, is halted by impending circulatory failure; lack of venous return may compound the basic problem of an excessive cardiac work-load. If the task calls for use of a smaller muscle mass, there is ultimately difficulty in perfusing the active muscles, and glycolysis is halted by an accumulation of acid metabolites. Simple field tests of endurance, such as Cooper's 12-minute run and the Canadian Home Fitness Test, have some value in the rapid screening of large populations, but like other submaximal tests of human performance they lack the precision needed to advise the individual. The directly measured maximum oxygen intake (VO2 max) varies with the type of exercise. The highest values are obtained during uphill treadmill running, but well trained athletes often approach these values during performance of sport-specific tasks. Limitations of methodology and wide interindividual variations of constitutional potential limit the interpretation of maximum oxygen intake data in terms of personal fitness, exercise prescription and the monitoring of training responses. The main practical value of VO2 max measurement is in the functional assessment of patients with cardiorespiratory disease, since changes are then large relative to the precision of the test.
IDENTIFICATION OF IDEOTYPES BY CANONICAL ANALYSIS IN Panicum maximum
Janaina Azevedo Martuscello
2015-04-01
Grouping of genotypes by canonical variable analysis is an important tool in breeding. It allows the grouping of individuals with similar characteristics that are associated with superior agronomic performance and may indicate the ideal profile of a plant for the region. The objective of the present study was to define, by canonical analysis, the agronomic profile of Panicum maximum plants adapted to the Agreste region. The experiment was conducted in a completely randomized design with 28 treatments, 22 genotypes of Panicum maximum, and cultivars Mombasa, Tanzania, Massai, Milenio, BRS Zuri, and BRS Tamani, in triplicate in 4-m² plots. Plots were harvested five times and the following traits were evaluated: plant height; total, leaf, stem, and dead dry matter yields; leaf:stem ratio; leaf percentage; and volumetric density of forage. The analysis of canonical variables was performed based on the phenotypic means of the evaluated traits and on the residual variance and covariance matrix. Genotype PM34 showed higher mean leaf dry matter yield under the conditions of the Agreste of Alagoas (on average 53% higher than cultivars Mombasa, Tanzania, Milenio and Massai). It was possible to summarize the variation observed in eight agronomic characteristics in only two canonical variables accounting for 81.44% of the data variation. The ideotype plant adapted to the conditions of the Agreste should be tall and present high leaf yield, leaf percentage, and leaf:stem ratio, and intermediate values of volumetric density of forage.
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
20 CFR, Employees' Benefits, Creditable Railroad Compensation, § 211.14 Maximum creditable compensation. ... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
49 CFR, Transportation, Allowable Stress, § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu and power producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following range: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, which are important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
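The underlying idea (train a sequence model on typical histories, then flag sequences the model assigns low likelihood) can be illustrated with a much simpler stand-in for MALCOM: a first-order Markov model with add-one smoothing. The procedure codes below are hypothetical, and this is not the continuity-map construction itself:

```python
import math
from collections import defaultdict

class MarkovScorer:
    """Score categorical sequences by their average transition
    log-likelihood under a first-order Markov model with add-one
    (Laplace) smoothing, so unseen transitions get small but
    nonzero probability."""
    def __init__(self, sequences):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()
        for seq in sequences:
            self.vocab.update(seq)
            for prev, cur in zip(seq, seq[1:]):
                self.counts[prev][cur] += 1

    def log_prob(self, prev, cur):
        row = self.counts.get(prev, {})
        total = sum(row.values()) + len(self.vocab)
        return math.log((row.get(cur, 0) + 1) / total)

    def score(self, seq):
        # Per-transition average, so sequences of different
        # lengths are comparable.
        trans = list(zip(seq, seq[1:]))
        return sum(self.log_prob(p, c) for p, c in trans) / len(trans)
```

Histories scoring far below the training population's typical score would then be the ones forwarded for human review, mirroring the experiment described above.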
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability that a model of the emission line spectrum represents the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory, choosing for analysis the Ne IX triplet around 13.5 Å.
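CORA's source is not reproduced here; the following is a minimal sketch of the underlying idea only: maximize a Poisson log-likelihood (the Cash statistic, up to a model-independent constant) for a Gaussian line on a flat background. The synthetic spectrum, bin grid, and coarse amplitude scan are illustrative choices, not CORA's actual fixed-point solver.

```python
import math

def poisson_loglike(counts, model):
    """Poisson log-likelihood up to a model-independent constant
    (the Cash statistic): sum_i (n_i ln m_i - m_i)."""
    return sum(n * math.log(m) - m for n, m in zip(counts, model))

def line_model(bins, background, amplitude, center, sigma):
    """Flat background plus one Gaussian emission line, in counts per bin."""
    return [background + amplitude * math.exp(-0.5 * ((b - center) / sigma) ** 2)
            for b in bins]

# Synthetic low-count "spectrum" around 13.5 (wavelength units arbitrary):
# observed counts are the rounded model expectation, so the scan should
# recover the simulated line amplitude.
bins = [13.0 + 0.02 * i for i in range(50)]
counts = [round(m) for m in line_model(bins, 2.0, 8.0, 13.5, 0.05)]

# Maximum likelihood over the line amplitude via a coarse 1-D scan
amplitudes = [0.5 * a for a in range(41)]  # 0.0 .. 20.0
best_a = max(amplitudes, key=lambda a: poisson_loglike(
    counts, line_model(bins, 2.0, a, 13.5, 0.05)))
assert abs(best_a - 8.0) <= 0.5  # recovered amplitude sits at the true value
```

CORA replaces the scan with a fixed-point iteration for the line flux, which is far more efficient when several lines are fitted jointly.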
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound search method. We derive two types of constraints to prune the search space. The first is an upper bound on the sum of squares of the AC coefficients of a block, used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, buffer space of between 346 bits (minimum) and 433 bits (maximum) is sufficient to hold the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a recently introduced group-theoretic approach to modelling inversions. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While generally a frame of reference is locked and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Local image statistics: maximum-entropy constructions and perceptual salience.
Victor, Jonathan D; Conte, Mary M
2012-07-01
The space of visual signals is high-dimensional and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics, but also, how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for construction of stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics--including luminance distributions, pair-wise correlations, and higher-order correlations--are explicitly specified and all other statistics are determined implicitly by maximum-entropy. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions.
Maximum likelihood based classification of electron tomographic data.
Stölken, Michael; Beck, Florian; Haller, Thomas; Hegerl, Reiner; Gutsche, Irina; Carazo, Jose-Maria; Baumeister, Wolfgang; Scheres, Sjors H W; Nickell, Stephan
2011-01-01
Classification and averaging of sub-tomograms can improve the fidelity and resolution of structures obtained by electron tomography. Here we present a three-dimensional (3D) maximum likelihood algorithm--MLTOMO--which is characterized by integrating 3D alignment and classification into a single, unified processing step. The novelty of our approach lies in the way we calculate the probability of observing an individual sub-tomogram for a given reference structure. We assume that the reference structure is affected by a 'compound wedge', resulting from the summation of many individual missing wedges in distinct orientations. The distance metric underlying our probability calculations effectively down-weights Fourier components that are observed less frequently. Simulations demonstrate that MLTOMO clearly outperforms the 'constrained correlation' approach and has advantages over existing approaches in cases where the sub-tomograms adopt preferred orientations. Application of our approach to cryo-electron tomographic data of ice-embedded thermosomes revealed distinct conformations that are in good agreement with results obtained by previous single particle studies.
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM), are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.).
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
Anonymous
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Thinking the individual as form of individuation
Samuel Mateus
2011-12-01
In this paper we ponder the problem of individualism through individuation, pointing out the implications for the idea of the "individual". We attempt to find a theoretical approach that allows a broader understanding of its role in human societies. It will be suggested that the emphasis placed by modernity on the individual can be evaluated not as a solipsist individualism, but as a figurational form specific to social contexts characterized by a wide objectivation of the social tissue. This means that besides individualism we can think individualizations through the seminal setting of individuation. This hypothesis is already insinuated in German sociological thought, in particular in Georg Simmel's sociology of social forms and in Norbert Elias's process sociology.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Application of Real-Time Offsite Measurements in Improved Short-Term Wind Ramp Prediction Skill
Martin Wilde, Principal Investigator
2012-12-31
Improved forecasting performance immediately preceding wind ramp events is of preeminent concern to most wind energy companies, system operators, and balancing authorities. The value of near real-time hub-height wind data and more general meteorological measurements to short-term wind power forecasting is well understood. For some sites, access to onsite measured wind data, even historical, can reduce forecast error in the short-range to medium-range horizons by as much as 50%. Unfortunately, valuable free-stream wind measurements at tall towers are not typically available at most wind plants, thereby forcing wind forecasters to rely upon wind measurements below hub height and/or turbine nacelle anemometry. Free-stream measurements can be appropriately scaled to hub-height levels using existing empirically derived relationships that account for surface roughness and turbulence, but there is large uncertainty in these relationships for a given time of day and state of the boundary layer. Alternatively, forecasts can rely entirely on turbine anemometry measurements, though such measurements are themselves subject to wake effects that are not stationary. The void in free-stream hub-height measurements of wind can be filled by remote sensing (e.g., sodar, lidar, and radar); however, the expense of such equipment may not be sustainable. There is a growing market for traditional anemometry on tall tower networks, maintained by third parties to the forecasting process (i.e., independent of forecasters and the forecast users). This study examines the value of offsite tall-tower data from the WINDataNOW Technology network for short-horizon wind power predictions at a wind farm in northern Montana. The presentation describes successful physical and statistical techniques for applying these data and the practicality of doing so in an operational
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
USER
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
A Rational Procedure for Determination of Directional Individual Design Wave Heights
Sterndorff, M.; Sørensen, John Dalsgaard
2001-01-01
For code-based LRFD and for reliability-based assessment of offshore structures such as steel platforms it is essential that consistent directional and omnidirectional probability distributions for the maximum significant wave height, the maximum individual wave height, and the maximum individual crest elevation are available. In Sørensen & Sterndorff (2000) stochastic models for the annual maximum values of the omnidirectional and directional significant wave heights, individual wave heights, and individual crest heights were presented. The models include dependencies between the maximum wave…
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov-Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov-Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10-100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux and the optimal number of degrees of freedom (resolution) to describe the system.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
20 CFR (Employees' Benefits), Adjustment Assistance for Workers under the Trade Act of 1974, Trade Readjustment Allowances (TRA), § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
40 CFR (Protection of Environment), § 94.107 Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test... ...specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
14 CFR (Aeronautics and Space), Operating Limitations, § 25.1505 Maximum operating limit speed. The maximum operating limit speed (VMO/MMO, airspeed or Mach number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective measurement of maximum physical capacity (maximum oxygen uptake (VO2max) and one-repetition maximum (1RM)) to determine...
Maximum-likelihood estimation of haplotype frequencies in nuclear families.
Becker, Tim; Knapp, Michael
2004-07-01
The importance of haplotype analysis in the context of association fine mapping of disease genes has grown steadily over the last years. Since experimental methods to determine haplotypes on a large scale are not available, phase has to be inferred statistically. For individual genotype data, several reconstruction techniques and many implementations of the expectation-maximization (EM) algorithm for haplotype frequency estimation exist. Recent research work has shown that incorporating available genotype information of related individuals largely increases the precision of haplotype frequency estimates. We, therefore, implemented a highly flexible program written in C, called FAMHAP, which calculates maximum likelihood estimates (MLEs) of haplotype frequencies from general nuclear families with an arbitrary number of children via the EM-algorithm for up to 20 SNPs. For more loci, we have implemented a locus-iterative mode of the EM-algorithm, which gives reliable approximations of the MLEs for up to 63 SNP loci, or less when multi-allelic markers are incorporated into the analysis. Missing genotypes can be handled as well. The program is able to distinguish cases (haplotypes transmitted to the first affected child of a family) from pseudo-controls (non-transmitted haplotypes with respect to the child). We tested the performance of FAMHAP and the accuracy of the obtained haplotype frequencies on a variety of simulated data sets. The implementation proved to work well when many markers were considered and no significant differences between the estimates obtained with the usual EM-algorithm and those obtained in its locus-iterative mode were observed. We conclude from the simulations that the accuracy of haplotype frequency estimation and reconstruction in nuclear families is very reliable in general and robust against missing genotypes.
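FAMHAP itself handles up to 20 SNPs and nuclear-family data; the E-step it iterates can be illustrated on the smallest interesting case, two biallelic loci in unrelated individuals, where only the double heterozygote has ambiguous phase. The sketch below is a generic EM for that case, not FAMHAP's code, and the toy genotype data are invented.

```python
def em_haplotypes(genotypes, n_iter=100):
    """EM estimation of haplotype frequencies at two biallelic loci.
    genotypes: list of (g1, g2) minor-allele counts, each in {0, 1, 2}.
    Only the double heterozygote (1, 1) has ambiguous phase."""
    A = {0: (0, 0), 1: (0, 1), 2: (1, 1)}           # genotype -> allele pair
    freqs = {h: 0.25 for h in [(0, 0), (0, 1), (1, 0), (1, 1)]}
    n = 2 * len(genotypes)                          # haplotypes in the sample
    for _ in range(n_iter):
        counts = {h: 0.0 for h in freqs}
        for g1, g2 in genotypes:
            if (g1, g2) == (1, 1):
                # E-step: split the two possible phases by current frequencies
                cis = freqs[(0, 1)] * freqs[(1, 0)]
                trans = freqs[(0, 0)] * freqs[(1, 1)]
                w = cis / (cis + trans)
                for h in [(0, 1), (1, 0)]:
                    counts[h] += w
                for h in [(0, 0), (1, 1)]:
                    counts[h] += 1.0 - w
            else:
                # Phase is determined; zip pairs alleles into two haplotypes
                for h in zip(A[g1], A[g2]):
                    counts[h] += 1.0
        freqs = {h: c / n for h, c in counts.items()}   # M-step
    return freqs

# 5 individuals homozygous for both minor alleles, 5 for both major alleles,
# and 10 double heterozygotes: EM resolves the ambiguous pairs as 00/11.
freqs = em_haplotypes([(2, 2)] * 5 + [(0, 0)] * 5 + [(1, 1)] * 10)
assert freqs[(0, 0)] > 0.45 and freqs[(1, 1)] > 0.45
assert freqs[(0, 1)] < 0.05
```

With family data, as in FAMHAP, the E-step additionally conditions on Mendelian transmission from parents to children, which sharply restricts the compatible phases.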
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on a same set of taxa, the maximum agreement subtree problem (MAST), respectively the maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics, such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees of Life. We provide two linear time algorithms to check the isomorphism, respectively compatibility, of a set of trees, or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
Maximum Power Game as a Physical and Social Extension of Classical Games
Kim, Pilwon
2017-01-01
We consider an electric circuit in which the players participate as resistors and adjust their resistance in pursuit of individual maximum power. The maximum power game (MPG) becomes very complicated in a circuit which is indecomposable into serial/parallel components, yielding a nontrivial power distribution at equilibrium. Depending on the circuit topology, MPG covers a wide range of phenomena: from a social dilemma in which the whole group loses to a well-coordinated situation in which the individual pursuit of power promotes the collective outcomes. We also investigate a situation where each player in the circuit has an intrinsic heat waste. Interestingly, it is this individual inefficiency which can keep them from collective failure in power generation. When coping with an efficient opponent with small intrinsic resistance, a rather inefficient player gets more power than the efficient one. A circuit with multiple voltage inputs forms the network-based maximum power game. One of our major interests is to figure out in what kind of networks the pursuit of private power leads to greater total power. It turns out that circuits with scale-free structure are good candidates, generating total power close to the possible maximum. PMID:28272544
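The paper's circuits are general; as a toy illustration of how individual power maximization can hurt the group, consider the simplest decomposable case: two players in series with a source of internal resistance r. Maximizing P_i = V^2 R_i / (R_i + R_j + r)^2 over R_i gives the best response R_i = R_j + r, and iterating best responses escalates the resistances while the power delivered to the players decays, a social dilemma in miniature. All numbers below are illustrative, not from the paper.

```python
def best_response(r_other, r_src):
    """Player i maximizes P_i = V^2 R_i / (R_i + r_other + r_src)^2;
    setting dP_i/dR_i = 0 gives the best response R_i = r_other + r_src."""
    return r_other + r_src

V, r_src = 10.0, 1.0        # source voltage and internal resistance
R = [1.0, 1.0]              # the two players' adjustable resistances
history = []                # total power delivered to the players over time
for step in range(30):
    i = step % 2                          # players alternate unilateral moves
    R[i] = best_response(R[1 - i], r_src)
    total = R[0] + R[1] + r_src
    history.append(V ** 2 * (R[0] + R[1]) / total ** 2)

# Each move raises the mover's own share, yet the escalating resistances
# drive the group's total delivered power toward zero.
assert history[-1] < history[0]
```

In indecomposable circuits the best response has no closed form like this, which is what makes the equilibrium power distribution nontrivial.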
Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.
Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N
2014-01-01
Tiger sharks (Galecerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 & 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.
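The growth analysis above fits a model to mark-recapture increments; a standard way to do this (not necessarily the authors' exact method) is the Fabens form of the von Bertalanffy model, where the expected increment over time at liberty dt is (Linf - L1)(1 - e^(-k dt)). The records and coarse grid search below are invented placeholders for the real data and optimizer.

```python
import math

def fabens_increment(L1, dt, Linf, k):
    """Expected von Bertalanffy growth increment for a shark tagged at
    length L1 and at liberty for dt years (Fabens parameterization)."""
    return (Linf - L1) * (1.0 - math.exp(-k * dt))

# Invented tag-recapture records: (length at tagging cm, years at liberty,
# length at recapture cm); the actual study had 50 recaptures.
records = [(150, 2.0, 290), (200, 1.0, 260), (300, 3.0, 390), (350, 1.5, 380)]

def sse(Linf, k):
    """Sum of squared errors between observed and predicted increments."""
    return sum((L2 - L1 - fabens_increment(L1, dt, Linf, k)) ** 2
               for L1, dt, L2 in records)

# Coarse grid search for the least-squares estimates; a real analysis would
# use a proper optimizer plus a measurement-error model.
grid = [(Linf, i / 100) for Linf in range(380, 521, 5) for i in range(5, 61, 5)]
Linf_hat, k_hat = min(grid, key=lambda p: sse(*p))
assert sse(Linf_hat, k_hat) <= sse(450, 0.20)  # no worse than any grid point
```

Linf here plays the role of the asymptotic maximum length, which is how a growth model yields a maximum-size estimate from recapture increments.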
Determination of Maximum Follow-up Speed of Electrode System of Resistance Projection Welders
Wu, Pei; Zhang, Wenqi; Bay, Niels
2004-01-01
The maximum follow-up speed of the electrode system represents the dynamic mechanical response capacity of resistance projection welding machines, which is important for distinguishing one machine from another and for considering the individual behavior of machines when designing or optimizing the weld process settings for stable production and high product quality. In this paper, the maximum follow-up speed of the electrode system was tested using a specially designed device which can be mounted on all types of machines and easily applied in industry; the corresponding mathematical...
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two cases, in which the subset D is, respectively, an independent set and a circuit of the matroid. It was proved that under these circumstances the collections of k-limited bases satisfy the base axioms. A new matroid was thereby determined, and the problem of the k-limited maximum base was transformed into the problem of finding a maximum base of this new matroid. For each of the two special cases, a greedy algorithm based on the original matroid was presented. Both algorithms were proved correct and shown to be more efficient, in terms of algorithmic complexity, than the algorithm presented by Ma Zhongfan.
An Interval Maximum Entropy Method for Quadratic Programming Problem
RUI Wen-juan; CAO De-xin; SONG Xie-wu
2005-01-01
Using the ideas of the maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide region deletion test rules and design an interval maximum entropy algorithm for the quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
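The maximum entropy function mentioned in the abstract above can be illustrated with a short sketch. For values g_i, the smoothed maximum is F_p = (1/p) ln Σ_i exp(p·g_i), which converges to max_i g_i as the parameter p grows; the toy data and the choice of p below are illustrative assumptions, not taken from the paper.

```python
import math

def maxent_approx(values, p):
    """Smooth approximation of max(values) via the maximum entropy
    (log-sum-exp) function F_p = (1/p) * ln(sum(exp(p * g_i)))."""
    m = max(values)  # subtract the max first for numerical stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p
```

The approximation is always a slight overestimate and tightens as p increases: for `[1.0, 2.0, 3.0]`, p = 5 gives a value just above 3.0, while p = 100 is essentially 3.0.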
Hutchinson, Thomas H. [Plymouth Marine Laboratory, Prospect Place, The Hoe, Plymouth PL1 3DH (United Kingdom)], E-mail: thom1@pml.ac.uk; Boegi, Christian [BASF SE, Product Safety, GUP/PA, Z470, 67056 Ludwigshafen (Germany); Winter, Matthew J. [AstraZeneca Safety, Health and Environment, Brixham Environmental Laboratory, Devon TQ5 8BA (United Kingdom); Owens, J. Willie [The Procter and Gamble Company, Central Product Safety, 11810 East Miami River Road, Cincinnati, OH 45252 (United States)
2009-02-19
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic
Integer Programming Model for Maximum Clique in Graph
YUAN Xi-bo; YANG You; ZENG Xin-hai
2005-01-01
The maximum clique or maximum independent set of graph is a classical problem in graph theory. Combined with Boolean algebra and integer programming, two integer programming models for maximum clique problem,which improve the old results were designed in this paper. Then, the programming model for maximum independent set is a corollary of the main results. These two models can be easily applied to computer algorithm and software, and suitable for graphs of any scale. Finally the models are presented as Lingo algorithms, verified and compared by several examples.
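As context for the abstract above, the maximum clique problem can be stated concretely. The following brute-force reference implementation (my own illustration, not the paper's integer programming model, and practical only for small graphs) enumerates vertex subsets in decreasing size and returns the first one whose vertices are pairwise adjacent:

```python
from itertools import combinations

def max_clique(n, edges):
    """Return a maximum clique of a graph on vertices 0..n-1 by brute force."""
    adj = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):  # try subset sizes from largest to smallest
        for sub in combinations(range(n), k):
            if all(frozenset((u, v)) in adj for u, v in combinations(sub, 2)):
                return set(sub)
    return set()

# Triangle 0-1-2 plus a pendant vertex 3: the maximum clique is {0, 1, 2}.
clique = max_clique(4, [(0, 1), (1, 2), (0, 2), (2, 3)])
```

The ILP formulations discussed in the abstract avoid this exponential enumeration by encoding the constraint "no two non-adjacent vertices selected" as linear inequalities over binary variables.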
Counterexamples to convergence theorem of maximum-entropy clustering algorithm
于剑; 石洪波; 黄厚宽; 孙喜晨; 程乾生
2003-01-01
In this paper, we surveyed the development of maximum-entropy clustering algorithm, pointed out that the maximum-entropy clustering algorithm is not new in essence, and constructed two examples to show that the iterative sequence given by the maximum-entropy clustering algorithm may not converge to a local minimum of its objective function, but a saddle point. Based on these results, our paper shows that the convergence theorem of maximum-entropy clustering algorithm put forward by Kenneth Rose et al. does not hold in general cases.
Mericle, Amy A; Casaletto, Kathryn; Knoblach, Dan; Brooks, Adam C; Carise, Deni
2010-10-01
Problem-to-services matching is critical to patient-centered care. Further, the extent to which substance abuse treatment is individualized to meet specific client needs is a key predictor of success and represents "best practice" in substance abuse treatment. The CASPAR Resource Guide, an electronic database of local free and low-cost services, is an evidence-based tool designed to help counselors easily and quickly provide offsite referrals to services not available in most community treatment programs to increase problem-to-service matching. This paper examines system-level barriers to using the CASPAR Resource Guide among 30 counselors and 21 site directors across 16 sites in two different studies. Results from qualitative implementation analyses found that key program components needed to support the implementation of this evidence-based practice (e.g., individualized treatment planning, individual treatment sessions, and individual counselor supervision) were lacking, which jeopardized successful adoption of the CASPAR research interventions and prompted a redesign of the studies in order to enhance each program's ability to support individualized care.
Maximum host survival at intermediate parasite infection intensities.
Martin Stjernman
BACKGROUND: Although parasitism has been acknowledged as an important selective force in the evolution of host life histories, studies of fitness effects of parasites in wild populations have yielded mixed results. One reason for this may be that most studies only test for a linear relationship between infection intensity and host fitness. If resistance to parasites is costly, however, fitness may be reduced both for hosts with low infection intensities (cost of resistance) and high infection intensities (cost of parasitism), such that individuals with intermediate infection intensities have the highest fitness. Under this scenario one would expect a non-linear relationship between infection intensity and fitness. METHODOLOGY/PRINCIPAL FINDINGS: Using data from blue tits (Cyanistes caeruleus) in southern Sweden, we investigated the relationship between the intensity of infection by its blood parasite (Haemoproteus majoris) and host survival to the following winter. Presence and intensity of parasite infections were determined by microscopy and confirmed using PCR of a 480 bp section of the cytochrome-b gene. While a linear model suggested no relationship between parasite intensity and survival (F = 0.01, p = 0.94), a non-linear model showed a significant negative quadratic effect (quadratic parasite intensity: F = 4.65, p = 0.032; linear parasite intensity: F = 4.47, p = 0.035). Visualization using the cubic spline technique showed maximum survival at intermediate parasite intensities. CONCLUSIONS/SIGNIFICANCE: Our results indicate that failing to recognize the potential for a non-linear relationship between parasite infection intensity and host fitness may lead to the potentially erroneous conclusion that the parasite is harmless to its host. Here we show that high parasite intensities indeed reduced survival, but this effect was masked by reduced survival for birds heavily suppressing their parasite intensities. Reduced survival among hosts with low
Encounter Probability of Individual Wave Height
Liu, Z.; Burcharth, H. F.
1998-01-01
wave height corresponding to a certain exceedence probability within a structure lifetime (encounter probability), based on the statistical analysis of long-term extreme significant wave height. Then the design individual wave height is calculated as the expected maximum individual wave height...... associated with the design significant wave height, with the assumption that the individual wave heights follow the Rayleigh distribution. However, the exceedence probability of such a design individual wave height within the structure lifetime is unknown. The paper presents a method for the determination...... of the design individual wave height corresponding to an exceedence probability within the structure lifetime, given the long-term extreme significant wave height. The method can also be applied for estimation of the number of relatively large waves for fatigue analysis of constructions....
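Under the Rayleigh assumption in the abstract above, the exceedance probability of an individual wave height h in a sea state with significant wave height Hs is P(H > h) = exp(−2(h/Hs)²), and the most probable maximum of N waves is approximately Hs·sqrt(ln N / 2). A minimal sketch of these standard relations (the numeric values of Hs and N are illustrative, not from the paper):

```python
import math

def exceedance(h, hs):
    """P(H > h) for a single Rayleigh-distributed wave height, given Hs."""
    return math.exp(-2.0 * (h / hs) ** 2)

def modal_max_height(hs, n):
    """Most probable maximum individual wave height among n waves."""
    return hs * math.sqrt(math.log(n) / 2.0)

def max_exceedance(h, hs, n):
    """Probability that the largest of n independent waves exceeds h."""
    return 1.0 - (1.0 - exceedance(h, hs)) ** n

hs, n = 5.0, 1000
hmax = modal_max_height(hs, n)   # about 9.3 m for Hs = 5 m
p = max_exceedance(hmax, hs, n)  # about 0.63 = 1 - (1 - 1/n)^n
```

Note that the modal maximum is exceeded with probability near 1 − 1/e ≈ 0.63 within the n waves, which is exactly the kind of unknown encounter probability the paper addresses for the structure lifetime.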
Resende Rosangela Maria Simeão; Jank Liana; Valle Cacilda Borges do; Bonato Ana Lídia Variani
2004-01-01
The objectives of this work were to estimate the genetic and phenotypic parameters and to predict the genetic and genotypic values of the selection candidates obtained from intraspecific crosses in Panicum maximum as well as the performance of the hybrid progeny of the existing and projected crosses. Seventy-nine intraspecific hybrids obtained from artificial crosses among five apomictic and three sexual autotetraploid individuals were evaluated in a clonal test with two replications and ten ...
Sex-Specific Equations to Estimate Maximum Oxygen Uptake in Cycle Ergometry
Christina G. de Souza e Silva; Araújo,Claudio Gil S.
2015-01-01
Abstract Background: Aerobic fitness, assessed by measuring VO2max in maximum cardiopulmonary exercise testing (CPX) or by estimating VO2max through the use of equations in exercise testing, is a predictor of mortality. However, the error resulting from this estimate in a given individual can be high, affecting clinical decisions. Objective: To determine the error of estimate of VO2max in cycle ergometry in a population attending clinical exercise testing laboratories, and to propose sex-spec...
Rethinking evolutionary individuality.
Ereshefsky, Marc; Pedroso, Makmiller
2015-08-18
This paper considers whether multispecies biofilms are evolutionary individuals. Numerous multispecies biofilms have characteristics associated with individuality, such as internal integrity, division of labor, coordination among parts, and heritable adaptive traits. However, such multispecies biofilms often fail standard reproductive criteria for individuality: they lack reproductive bottlenecks, are comprised of multiple species, do not form unified reproductive lineages, and fail to have a significant division of reproductive labor among their parts. If such biofilms are good candidates for evolutionary individuals, then evolutionary individuality is achieved through other means than frequently cited reproductive processes. The case of multispecies biofilms suggests that standard reproductive requirements placed on individuality should be reconsidered. More generally, the case of multispecies biofilms indicates that accounts of individuality that focus on single-species eukaryotes are too restrictive and that a pluralistic and open-ended account of evolutionary individuality is needed.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...
49 CFR 174.86 - Maximum allowable operating speed.
2010-10-01
... 49 Transportation 2 2010-10-01 2010-10-01 false Maximum allowable operating speed. 174.86 Section... operating speed. (a) For molten metals and molten glass shipped in packagings other than those prescribed in § 173.247 of this subchapter, the maximum allowable operating speed may not exceed 24 km/hour (15...
Parametric optimization of thermoelectric elements footprint for maximum power generation
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
The development studies in thermoelectric generator (TEG) systems are mostly disconnected to parametric optimization of the module components. In this study, optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...
30 CFR 56.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 56.19066 Section 56.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 56.19066 Maximum riders in a conveyance. In shafts inclined over 45...
30 CFR 57.19066 - Maximum riders in a conveyance.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum riders in a conveyance. 57.19066 Section 57.19066 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Hoisting Hoisting Procedures § 57.19066 Maximum riders in a conveyance. In shafts inclined over 45...
Maximum Atmospheric Entry Angle for Specified Retrofire Impulse
T. N. Srivastava
1969-07-01
Maximum atmospheric entry angles for vehicles initially moving in elliptic orbits are investigated, and it is shown that a tangential retrofire impulse at the apogee results in the maximum entry angle. The equivalence of maximizing the entry angle and minimizing the retrofire impulse is also established.
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the...
46 CFR 151.45-6 - Maximum amount of cargo.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Maximum amount of cargo. 151.45-6 Section 151.45-6 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES BARGES CARRYING BULK LIQUID HAZARDOUS MATERIAL CARGOES Operations § 151.45-6 Maximum amount of cargo. (a)...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
... rate effective on the date the supplemental annuity begins, before any reduction for a private pension... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52...
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for surge pressures and other variations from normal operations, no operator may operate a pipeline at a...
Maximum-entropy clustering algorithm and its global convergence analysis
无
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
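The iteration behind maximum-entropy clustering can be sketched as follows: memberships follow a Gibbs distribution over squared distances at a "temperature" T, and centers are updated as membership-weighted means. The 1-D data, temperature, and iteration count below are illustrative assumptions, not from either paper.

```python
import math

def maxent_cluster(xs, centers, temp, iters=50):
    """Maximum-entropy (soft) clustering of 1-D data.

    Memberships are Gibbs weights exp(-d^2 / temp); centers are then
    updated as membership-weighted means of the data.
    """
    for _ in range(iters):
        weights = []
        for x in xs:
            e = [math.exp(-((x - c) ** 2) / temp) for c in centers]
            s = sum(e)
            weights.append([ei / s for ei in e])
        centers = [
            sum(weights[i][j] * xs[i] for i in range(len(xs)))
            / sum(weights[i][j] for i in range(len(xs)))
            for j in range(len(centers))
        ]
    return centers

# Two well-separated groups: the centers converge to the group means.
centers = maxent_cluster([0.0, 0.1, 0.2, 5.0, 5.1, 5.2], [1.0, 4.0], temp=0.5)
```

As temp → 0 the memberships become hard assignments and the scheme reduces to C-means, which is the "soft generalization" the abstract refers to; the counterexamples in the paper above concern whether such fixed points are true local minima.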
Distribution of maximum loss of fractional Brownian motion with drift
Çağlar, Mine; Vardar-Acar, Ceren
2013-01-01
In this paper, we find bounds on the distribution of the maximum loss of fractional Brownian motion with H >= 1/2 and derive estimates on its tail probability. Asymptotically, the tail of the distribution of maximum loss over [0, t] behaves like the tail of the marginal distribution at time t.
48 CFR 436.575 - Maximum workweek-construction schedule.
2010-10-01
...-construction schedule. 436.575 Section 436.575 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE... Maximum workweek-construction schedule. The contracting officer shall insert the clause at 452.236-75, Maximum Workweek-Construction Schedule, if the clause at FAR 52.236-15 is used and the contractor's...
30 CFR 57.5039 - Maximum permissible concentration.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Maximum permissible concentration. 57.5039... Maximum permissible concentration. Except as provided by standard § 57.5005, persons shall not be exposed to air containing concentrations of radon daughters exceeding 1.0 WL in active workings. ...
5 CFR 550.105 - Biweekly maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Biweekly maximum earnings limitation. 550.105 Section 550.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.105 Biweekly...
5 CFR 550.106 - Annual maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Annual maximum earnings limitation. 550.106 Section 550.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.106 Annual...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide”...
The neurobiology of individuality
de Bivort, Benjamin
2015-03-01
Individuals often display conspicuously different patterns of behavior, even when they are very closely related genetically. These differences give rise to our sense of individuality, but what is their molecular and neurobiological basis? Individuals that are nominally genetically identical differ at various molecular and neurobiological levels: cell-to-cell variation in somatic genomes, cell-to-cell variation in expression patterns, individual-to-individual variation in neuronal morphology and physiology, and individual-to-individual variation in patterns of brain activity. It is unknown which of these levels is fundamentally causal of behavioral differences. To investigate this problem, we use the fruit fly Drosophila melanogaster, whose genetic toolkit allows the manipulation of each of these mechanistic levels, and whose rapid lifecycle and small size allows for high-throughput automation of behavioral assays. This latter point is crucial; identifying inter-individual behavioral differences requires high sample sizes both within and across individual animals. Automated behavioral characterization is at the heart of our research strategy. In every behavior examined, individual flies have individual behavioral preferences, and we have begun to identify both neural genes and circuits that control the degree of behavioral variability between individuals.
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Petr Stehlík
2015-01-01
We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: u_x′ (or Δ_t u_x) = k(u_{x−1} − 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of the maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth lies close to and slightly above the estimated maximum possible PPV. The comparison shows that the PPVs predicted by the proposed model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed model for estimating PPV in a rock mass with a set of joints subjected to a two-dimensional compressional wave at the boundary of a tunnel or borehole.
BS Denadai
2007-06-01
OBJECTIVE: The objective of this study was to analyze the effects of prolonged continuous running, performed at the intensity corresponding to the onset of blood lactate accumulation (OBLA), on the peak torque of the knee extensors, analyzed for different types of contraction and movement velocities in active individuals. METHOD: Eight men (23.4 ± 2.1 years; 75.8 ± 8.7 kg; 171.1 ± 4.5 cm) participated in this study. First, the subjects performed an incremental test until volitional exhaustion to determine the velocity corresponding to OBLA. Then, the subjects returned to the laboratory on two occasions, separated by at least seven days, to perform five maximal isokinetic contractions of the knee extensors at two angular velocities (60 and 180º.s-1) under eccentric and concentric conditions. Eccentric peak torque (EPT) and concentric peak torque (CPT) were measured at each velocity. One session was performed after a standardized warm-up period (5 min at 50% VO2max). The other session was performed after continuous running at OBLA until volitional exhaustion. These sessions were conducted in random order. RESULTS: There was a significant reduction in CPT only at 60º.s-1 (259.0 ± 46.4 vs. 244.0 ± 41.4 N.m). However, the reduction in EPT was significant at 60º.s-1 (337.3 ± 43.2 vs. 321.7 ± 60.0 N.m) and 180º.s-1 (346.1 ± 38.0 vs. 319.7 ± 43.6 N.m). The relative strength losses after the running exercise differed significantly between contraction types only at 180º.s-1. CONCLUSION: We conclude that, in active individuals, the reduction in peak torque after prolonged continuous running at OBLA may depend on the type of contraction and angular velocity.
Dong Min Kim
2008-03-01
In Jungian theory, heavily influenced by Zen Buddhism, the developmental stages of human life are symbolized as a circle that represents wholeness, and the open-ended process towards wholeness is called individuation. Within the circle there are two stages, the Morning and the Afternoon of Life; the latter begins at the age of 35, the age at which individuation begins and one that I have reached and passed. Thus, it seemed to be a perfect time for me to begin my own journey towards individuation, especially musical individuation, since music had always been such a central part of my life. The first step of individuation is to become aware of one's individual, social and cultural unconscious forces that affect conscious thoughts and behavior. Thus, my musical individuation began with my attempts to become aware of the unconscious forces beneath my conscious thoughts and behaviors.
2010-07-01
... as specified in 40 CFR 1065.610. This is the maximum in-use engine speed used for calculating the NOX... procedures of 40 CFR part 1065, based on the manufacturer's design and production specifications for the..., power density, and maximum in-use engine speed. 1042.140 Section 1042.140 Protection of...
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize
Maximum Likelihood Estimation of the Identification Parameters and Its Correction
无
2002-01-01
By taking a subsequence out of the input-output sequence of a system polluted by white noise, an independent observation sequence and its probability density are obtained, and a maximum likelihood estimate of the identification parameters is then given. In order to decrease the asymptotic error, a corrector of maximum likelihood (CML) estimation with its recursive algorithm is given. It has been proved that the corrector has a smaller asymptotic error than the least squares methods. A simulation example shows that the corrector of maximum likelihood estimation approximates the true parameters with higher precision than the least squares methods.
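As a generic illustration of correcting a maximum likelihood estimate to reduce its error (this is the textbook Gaussian variance case, not the paper's CML corrector), the ML estimator of a normal variance divides by n and is biased; multiplying by n/(n−1) removes the bias:

```python
def mle_variance(xs):
    """Maximum likelihood estimate of a normal variance (divides by n; biased)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

def corrected_variance(xs):
    """Bias-corrected estimate: multiply the ML estimate by n / (n - 1)."""
    n = len(xs)
    return mle_variance(xs) * n / (n - 1)

data = [1.0, 2.0, 3.0, 4.0]
v_ml = mle_variance(data)          # 1.25
v_corr = corrected_variance(data)  # 5/3, the unbiased sample variance
```

The paper's corrector plays an analogous role for identification parameters, trading a small modification of the raw ML estimate for a smaller asymptotic error.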
Maximum frequency of the decametric radiation from Jupiter
Barrow, C. H.; Alexander, J. K.
1980-01-01
The upper frequency limits of Jupiter's decametric radio emission are found to be essentially the same when observed from the earth or, with considerably higher sensitivity, from the Voyager spacecraft close to Jupiter. This suggests that the maximum frequency is a real cut-off corresponding to a maximum gyrofrequency of about 38-40 MHz at Jupiter. It no longer appears to be necessary to specify different cut-off frequencies for the Io and non-Io emission as the maximum frequencies are roughly the same in each case.
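The stated cut-off can be converted to a field strength via the electron cyclotron (gyro) frequency relation f_ce = eB/(2π m_e), roughly 2.8 MHz per gauss, so a 40 MHz cut-off implies a field of about 14 gauss at the emission site. A quick check using only physical constants (the interpretation follows the abstract; the code itself is my illustration):

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def field_from_gyrofrequency(f_hz):
    """Magnetic field (tesla) whose electron gyrofrequency equals f_hz."""
    return 2.0 * math.pi * M_ELECTRON * f_hz / E_CHARGE

b_tesla = field_from_gyrofrequency(40e6)
b_gauss = b_tesla * 1e4  # 1 T = 10^4 G; about 14.3 G for a 40 MHz cut-off
```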
Feldman, Alexander [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-04-24
This document describes the development and approach for the radiological characterization of Cf-252 sealed sources for shipment to the Waste Isolation Pilot Plant. The report combines information on the nuclear material content of each individual source (mass or activity and date of manufacture) with information and data on the radionuclide distributions within the originating nuclear material. This approach allows for complete and accurate characterization of the waste container without the need to take additional measurements. The radionuclide uncertainties, developed from acceptable knowledge (AK) information regarding the source material, are applied to the summed activities in the drum. The AK information used in the characterization of Cf-252 sealed sources has been qualified by the peer review process, which has been reviewed and accepted by the Environmental Protection Agency.
Beck Colleen M.,Edwards Susan R.,King Maureen L.
2011-09-01
This document presents the results of nearly six years (2002-2008) of historical research and field studies concerned with evaluating potential environmental liabilities associated with U.S. Atomic Energy Commission projects from the Plowshare and Vela Uniform Programs. The Plowshare Program's primary purpose was to develop peaceful uses for nuclear explosives. The Vela Uniform Program focused on improving the capability of detecting, monitoring and identifying underground nuclear detonations. As a result of the Project Chariot site restoration efforts in the early 1990s, there were concerns that there might be other project locations with potential environmental liabilities. The Desert Research Institute conducted archival research to identify projects, an analysis of project field activities, and completed field studies at locations where substantial fieldwork had been undertaken for the projects. Although the Plowshare and Vela Uniform nuclear projects are well known, the projects that are included in this research are relatively unknown. They are proposed nuclear projects that were not executed, proposed and executed high explosive experiments, and proposed and executed high explosive construction activities off the Nevada Test Site. The research identified 170 Plowshare and Vela Uniform off-site projects and many of these had little or no field activity associated with them. However, there were 27 projects that merited further investigation and field studies were conducted at 15 locations.
Site Specific Probable Maximum Precipitation Estimates and Professional Judgement
Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.
2015-12-01
State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, lack of consistent methods, and general age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement via a generic audit and one in-depth site-specific review as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially
The Application of Maximum Principle in Supply Chain Cost Optimization
Zhou Ling; Wang Jun
2013-01-01
In this paper, using the maximum principle for analyzing dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products...
Maximum Principle for Nonlinear Cooperative Elliptic Systems on IR^N
LEADI Liamidi; MARCOS Aboubacar
2011-01-01
We investigate in this work necessary and sufficient conditions for having a Maximum Principle for a cooperative elliptic system on the whole space IR^N. Moreover, we prove the existence of solutions by an approximation method for the considered system.
Maximum Likelihood Factor Structure of the Family Environment Scale.
Fowler, Patrick C.
1981-01-01
Presents the maximum likelihood factor structure of the Family Environment Scale. The first bipolar dimension, "cohesion vs. conflict," measures relationship-centered concerns, while the second unipolar dimension is an index of "organizational and control" activities. (Author)
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
Changes in context and perception of maximum reaching height.
Wagman, Jeffrey B; Day, Brian M
2014-01-01
Successfully performing a given behavior requires flexibility in both perception and behavior. In particular, doing so requires perceiving whether that behavior is possible across the variety of contexts in which it might be performed. Three experiments investigated how (changes in) context (i.e., point of observation and intended reaching task) influenced perception of maximum reaching height. The results of experiment 1 showed that perceived maximum reaching height more closely reflected actual reaching ability when perceivers occupied a point of observation that was compatible with that required for the reaching task. The results of experiments 2 and 3 showed that practice perceiving maximum reaching height from a given point of observation improved perception of maximum reaching height from a different point of observation, regardless of whether such practice occurred at a compatible or incompatible point of observation. In general, such findings show bounded flexibility in perception of affordances and are thus consistent with a description of perceptual systems as smart perceptual devices.
Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)
U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The maximum entropy procedure is first described in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the procedure has recently provided new insight. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
On the sufficiency of the linear maximum principle
Vidal, Rene Victor Valqui
1987-01-01
Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results...
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
16 CFR 1505.8 - Maximum acceptable material temperatures.
2010-01-01
... Association, 155 East 44th Street, New York, NY 10017. [Table: Material / Degrees C / Degrees F, listing Capacitors (1) (1); Class ...] ... capacitor has no marked temperature limit, the maximum acceptable temperature will be assumed to be 65...
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC GIS Inventory (aka Ramona) — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
PREDICTION OF MAXIMUM DRY DENSITY OF LOCAL GRANULAR ...
methods. A test on a soil of relatively high solid density revealed that the developed relation loses ... where Pd max is the laboratory maximum dry ... Addis-Jinima Road Rehabilitation. ... data sets that differ considerably in the magnitude.
Solar Panel Maximum Power Point Tracker for Power Utilities
Sandeep Banik
2014-01-01
"Solar Panel Maximum Power Point Tracker for Power Utilities": as the name implies, this is a photovoltaic system that uses the photovoltaic array as a source of electrical power supply. Since every photovoltaic (PV) array has an optimum operating point, called the maximum power point, which varies depending on the insolation level and array voltage, a maximum power point tracker (MPPT) is needed to operate the PV array at its maximum power point. The objective of this thesis project is to build a PV array of 121.6 V DC (6 cells, each 20 V, 100 W) and convert the DC voltage to single-phase 120 V, 50 Hz AC voltage by switch-mode power converters and inverters.
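A tracker of this kind is commonly implemented with the perturb-and-observe algorithm. The abstract does not name its tracking method, so the sketch below is a generic illustration on a made-up power-voltage curve, not the 121.6 V system described above.

```python
# Perturb-and-observe MPPT on a toy PV power-voltage curve.
# The curve and all numbers are illustrative assumptions.
def pv_power(v):
    # Simple concave curve with its maximum power point near v = 90 V.
    return max(0.0, 400.0 - 0.05 * (v - 90.0) ** 2)

v, step = 60.0, 1.0          # initial operating voltage and perturbation size
p_prev = pv_power(v)
for _ in range(200):
    v += step                # perturb the operating voltage
    p = pv_power(v)
    if p < p_prev:           # power fell: reverse the perturbation direction
        step = -step
    p_prev = p

print(round(v), round(p_prev))  # settles near the 90 V maximum power point
```

The tracker climbs the power curve and then oscillates within one perturbation step of the maximum power point, which is the characteristic steady-state behavior of perturb-and-observe.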
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao
2014-01-01
This paper is devoted to the study and analysis of maximum signal-to-noise ratio (SNR) filters for noise reduction, both in the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR, but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR ... This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model of finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
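A two-component normal mixture of the kind described above is typically fitted by the EM algorithm, the standard route to the maximum likelihood estimate for mixtures. The sketch below uses synthetic data; the paper's stock and rubber price series are not reproduced here.

```python
import numpy as np

# EM for a two-component normal mixture on synthetic data.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

w = np.array([0.5, 0.5])          # mixing weights
mu = np.array([-1.0, 1.0])        # component means
sigma = np.array([1.0, 1.0])      # component standard deviations
for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    dens = w * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2) \
             / (sigma * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means and standard deviations
    nk = resp.sum(axis=0)
    w = nk / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)

print(sorted(mu.round(1)))  # component means recovered near -2 and 3
```

Each EM iteration cannot decrease the likelihood, and for well-separated components the estimates converge quickly to the maximum likelihood solution.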
On the maximum sufficient range of interstellar vessels
Cartin, Daniel
2011-01-01
This paper considers the likely maximum range of space vessels providing the basis of a mature interstellar transportation network. Using the principle of sufficiency, it is argued that this range will be less than three parsecs for the average interstellar vessel. This maximum range provides access from the Solar System to a large majority of nearby stellar systems, with total travel distances within the network not excessively greater than actual physical distance.
Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system with respect to the single-motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Gzyl, Henryk
2007-01-01
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by a large level of noise. The method is developed in the context of a concrete example: estimation of the parameter of an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.
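As a point of reference for the comparison mentioned above, here is a minimal sketch of the maximum likelihood baseline for the exponential parameter; the MEM construction itself is not reproduced, and the noise level and sample size are illustrative assumptions.

```python
import numpy as np

# Maximum likelihood estimation of an exponential rate, with and without
# additive measurement noise (the comparison setting of the note).
rng = np.random.default_rng(2)
lam = 2.0
x = rng.exponential(1 / lam, 10000)        # clean exponential sample
noise = rng.normal(0, 0.5, x.size)         # zero-mean additive noise

# For clean exponential data the ML estimator is 1 / sample mean.
lam_clean = 1 / x.mean()
# The same plug-in estimate on noisy data stays consistent (the noise has
# zero mean), but it is no longer the ML estimator under the noisy model,
# which is where methods such as MEM come in.
lam_noisy = 1 / (x + noise).mean()

print(round(lam_clean, 2), round(lam_noisy, 2))
```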
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Monitoring Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamical factors. The O3 maximum is dominated by O3 production driven by lightning nitrogen oxides (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. The O3 production in the lightning outflow from Central Africa and that from South America both peak in May and are directly responsible for the O3 maximum over the western ESIO. The lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones and then transported northward to the ESIO.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
Hybrid TOA/AOA Approximate Maximum Likelihood Mobile Localization
Mohamed Zhaounia; Mohamed Adnan Landolsi; Ridha Bouallegue
2010-01-01
This letter deals with a hybrid time-of-arrival/angle-of-arrival (TOA/AOA) approximate maximum likelihood (AML) wireless location algorithm. Thanks to the use of both TOA/AOA measurements, the proposed technique can rely on two base stations (BS) only and achieves better performance compared to the original approximate maximum likelihood (AML) method. The use of two BSs is an important advantage in wireless cellular communication systems because it avoids hearability problems and reduces netw...
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed on the basis of the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximizing solution of the model was exactly the frequency distribution at which a population reaches Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg genetic equilibrium when the genotype entropy of the population reaches its maximal possible value, and that the maximum entropy frequency distribution is equivalent to the distribution of the Hardy-Weinberg equilibrium law at one locus. They further assumed that the maximum entropy frequency distribution was equivalent to all genetic equilibrium distributions. This is incorrect, however: the maximum entropy frequency distribution is equivalent to the Hardy-Weinberg equilibrium distribution only with respect to one locus or several limited loci. The case of limited loci is proved in this paper. Finally, we discuss an example in which the maximum entropy principle is not equivalent to other genetic equilibria.
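One way to illustrate the one-locus equivalence numerically (a sketch, not the model of WANG Xiao-Long et al.): among ordered-genotype (allele-pair) distributions with both allele marginals fixed, entropy is maximized by the independent distribution, whose unordered genotype frequencies are exactly the Hardy-Weinberg proportions p^2, 2pq, q^2.

```python
import numpy as np

# For allele frequency p = 0.3, the independent (Hardy-Weinberg) ordered-
# genotype distribution maximizes entropy among all distributions with the
# same allele marginals; check by perturbing within the constraint set.
p, q = 0.3, 0.7
hw = np.outer([p, q], [p, q])          # ordered genotypes AA, Aa, aA, aa

def entropy(f):
    f = f[f > 0]
    return -np.sum(f * np.log(f))

base = entropy(hw)
# Perturbations that keep both allele marginals fixed: add eps on the
# diagonal, subtract eps off the diagonal.
for eps in (0.01, -0.01, 0.05):
    f = hw + eps * np.array([[1, -1], [-1, 1]])
    assert entropy(f) < base           # every perturbation lowers entropy

# Unordered genotype frequencies: p^2, 2pq, q^2
print(round(hw[0, 0], 2), round(2 * hw[0, 1], 2), round(hw[1, 1], 2))
```

This is the standard information-theoretic fact that, with both marginals fixed, the product distribution uniquely maximizes joint entropy, and its unordered genotype frequencies are the Hardy-Weinberg proportions.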
Werner, Jan; Griebeler, Eva Maria
2014-01-01
We tested if growth rates of recent taxa are unequivocally separated between endotherms and ectotherms, and compared these to dinosaurian growth rates. We therefore performed linear regression analyses on the log-transformed maximum growth rate against log-transformed body mass at maximum growth for extant altricial birds, precocial birds, eutherians, marsupials, reptiles, fishes and dinosaurs. Regression models of precocial birds (and fishes) strongly differed from Case's study (1978), which is often used to compare dinosaurian growth rates to those of extant vertebrates. For all taxonomic groups, the slope of 0.75 expected from the Metabolic Theory of Ecology was statistically supported. To compare growth rates between taxonomic groups we therefore used regressions with this fixed slope and group-specific intercepts. On average, maximum growth rates of ectotherms were about 10 (reptiles) to 20 (fishes) times (in comparison to mammals) or even 45 (reptiles) to 100 (fishes) times (in comparison to birds) lower than in endotherms. While on average all taxa were clearly separated from each other, individual growth rates overlapped between several taxa and even between endotherms and ectotherms. Dinosaurs had growth rates intermediate between similar sized/scaled-up reptiles and mammals, but a much lower rate than scaled-up birds. All dinosaurian growth rates were within the range of extant reptiles and mammals, and were lower than those of birds. Under the assumption that growth rate and metabolic rate are indeed linked, our results suggest two alternative interpretations. Compared to other sauropsids, the growth rates of studied dinosaurs clearly indicate that they had an ectothermic rather than an endothermic metabolic rate. Compared to other vertebrate growth rates, the overall high variability in growth rates of extant groups and the high overlap between individual growth rates of endothermic and ectothermic extant species make it impossible to rule out either of
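The fixed-slope regressions described above (slope held at the theoretical 0.75, one intercept per taxonomic group) can be sketched as follows. The trait values are synthetic; only the mechanics of estimating group intercepts and turning intercept differences into growth-rate ratios are illustrated.

```python
import numpy as np

# Fit log10(growth rate) = intercept_group + 0.75 * log10(body mass)
# with the slope fixed at 0.75 and a separate intercept per group.
# The "true" intercepts below are made-up illustrative values.
rng = np.random.default_rng(1)
groups = {"mammals": 0.5, "reptiles": -0.6}   # assumed true intercepts
intercepts = {}
for name, true_b in groups.items():
    log_mass = rng.uniform(0, 5, 50)
    log_rate = true_b + 0.75 * log_mass + rng.normal(0, 0.1, 50)
    # With a fixed slope, the least-squares intercept is the mean residual.
    intercepts[name] = np.mean(log_rate - 0.75 * log_mass)

# Growth-rate ratio at equal body mass = 10 ** (intercept difference)
ratio = 10 ** (intercepts["mammals"] - intercepts["reptiles"])
print(round(ratio, 1))  # roughly an order of magnitude, like the gap above
```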
Resende, Rosangela Maria Simeão
2004-01-01
The objectives of this work were to estimate the genetic and phenotypic parameters and to predict the genetic and genotypic values of selection candidates obtained from intraspecific crosses in Panicum maximum, as well as the performance of the hybrid progeny of existing and projected crosses. Seventy-nine intraspecific hybrids obtained from artificial crosses among five apomictic and three sexual autotetraploid individuals were evaluated in a clonal test with two replications and ten plants per plot. Green matter yield, total and leaf dry matter yields and leaf percentage were evaluated in five cuts per year during three years. Genetic parameters were estimated and breeding and genotypic values were predicted using the restricted maximum likelihood/best linear unbiased prediction procedure (REML/BLUP). The dominance genetic variance was estimated by adjusting for the effect of full-sib families. Low-magnitude individual narrow-sense heritabilities (0.02-0.05), individual broad-sense heritabilities (0.14-0.20) and repeatabilities measured on an individual basis (0.15-0.21) were obtained. Dominance effects for all evaluated characteristics indicated that breeding strategies that exploit heterosis must be adopted. Less than a 5% increase in the repeatability parameter was obtained for a three-year evaluation period, which may serve as a criterion to determine the maximum number of years of evaluation to be adopted without compromising gain per cycle of selection. The identification of hybrid candidates for future cultivars, and of those that can be incorporated into the breeding program, was based on the genotypic and breeding values, respectively. The prediction of hybrid progeny performance, based on the breeding values of the progenitors, permitted the identification of the best crosses and indicated the best parents to use in crosses.
Shneiderman, Ben
1989-01-01
This reprint from "Designing the User Interface: Strategies for Effective Human-Computer Interaction" (Shneiderman) discusses the impact of computers on individuals and society. Highlights include individual opportunities for learning, entertainment, and cooperation through networking; problems with the use of computer systems; and the…
Transcending Cognitive Individualism
Zerubavel, Eviatar; Smith, Eliot R.
2010-01-01
Advancing knowledge in many areas of psychology and neuroscience, underlined by dazzling images of brain scans, appears to many professionals and to the public to show that people are on the way to explaining cognition purely in terms of processes within the individual's head. Yet while such cognitive individualism still dominates the popular…
Individual Attitudes Towards Trade
Jäkel, Ina Charlotte; Smolka, Marcel
2013-01-01
Using the 2007 wave of the Pew Global Attitudes Project, this paper finds statistically significant and economically large Stolper-Samuelson effects in individuals’ preference formation towards trade policy. High-skilled individuals are substantially more pro-trade than low-skilled individuals...
江洲
2015-01-01
The means by which grassroots branches of the People's Bank of China (PBC) supervise banking financial institutions are limited, and the daily dynamic off-site assessment and management of these institutions has become a pressing problem. Starting from the actual state of off-site supervision, this paper comprehensively analyzes the current regulatory policies for the credit reference industry, draws on foreign experience and, grounded in working practice, designs an off-site supervision indicator system and the related supporting institutions.
Evaluating the time limit at maximum aerobic speed in elite swimmers. Training implications.
Renoux, J C
2001-12-01
The aim of the present study was to use the concepts of maximum aerobic speed (MAS) and time limit (tlim) to determine the relationship between these two elements, in an attempt to significantly improve both speed and swimming performance during a training season. To this end, an intermittent training model was used, adapted to the value obtained for the time limit at maximum aerobic speed. During a 12-week training period, the maximum aerobic speed of a group of 9 top-ranking varsity swimmers was measured on two occasions, as was the tlim. The values generated indicated that: 1) there was an inverse relationship between MAS and the time this speed could be maintained, thus confirming the studies by Billat et al. (1994b); 2) a significant increase in MAS occurred over the 12-week period, although no such evolution was seen for the tlim; 3) there was an improvement in results; 4) the time limit could be used in designing a training program based on intermittent exercises. In addition, the results of the present study should allow swimming coaches to draw up individualized training programs for a given swimmer by taking into consideration maximum aerobic speed, time limit and propelling efficiency.
Kuracina Richard
2015-06-01
The article deals with the measurement of the maximum explosion pressure and the maximum rate of explosion pressure rise of a wood dust cloud. The measurements were carried out according to STN EN 14034-1+A1:2011 Determination of explosion characteristics of dust clouds, Part 1: Determination of the maximum explosion pressure pmax of dust clouds, and according to STN EN 14034-2+A1:2012 Determination of explosion characteristics of dust clouds, Part 2: Determination of the maximum rate of explosion pressure rise (dp/dt)max of dust clouds. The wood dust cloud in the chamber is generated mechanically. The explosion tests on wood dust clouds showed that the maximum pressure was reached at a concentration of 450 g/m3, with a value of 7.95 bar. The fastest rise of pressure was also observed at a concentration of 450 g/m3, with a value of 68 bar/s.
Size dependence of efficiency at maximum power of heat engine
Izumida, Y.
2013-10-01
We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive one. We also study the size dependence of the efficiency at maximum power. Interestingly, we find that the efficiency at maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at maximum power under an extreme condition may in principle reach the Carnot efficiency.
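For context, the "universal upper bound" referred to here is the one derived in the cited low-dissipation Carnot analysis: writing eta_C for the Carnot efficiency, the efficiency at maximum power eta* is bracketed by

```latex
\frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2-\eta_C},
```

with the upper limit reached in the completely asymmetric dissipation limit; the symmetric case reproduces the Curzon-Ahlborn value \eta_{CA} = 1 - \sqrt{1-\eta_C}.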
Predicting species' maximum dispersal distances from simple plant traits.
Tamme, Riin; Götzenberger, Lars; Zobel, Martin; Bullock, James M; Hooftman, Danny A P; Kaasik, Ants; Pärtel, Meelis
2014-02-01
Many studies have shown plant species' dispersal distances to be strongly related to life-history traits, but how well different traits can predict dispersal distances is not yet known. We used cross-validation techniques and a global data set (576 plant species) to measure the predictive power of simple plant traits to estimate species' maximum dispersal distances. Including dispersal syndrome (wind, animal, ant, ballistic, and no special syndrome), growth form (tree, shrub, herb), seed mass, seed release height, and terminal velocity in different combinations as explanatory variables, we constructed models to explain variation in measured maximum dispersal distances and evaluated their power to predict maximum dispersal distances. Predictions are more accurate, but also limited to a particular set of species, if data on more specific traits, such as terminal velocity, are available. The best model (R2 = 0.60) included dispersal syndrome, growth form, and terminal velocity as fixed effects. Reasonable predictions of maximum dispersal distance (R2 = 0.53) are also possible when using only the simplest and most commonly measured traits: dispersal syndrome and growth form, together with species taxonomy data. We provide a function (dispeRsal) to be run in the software package R. This enables researchers to estimate maximum dispersal distances with confidence intervals for plant species using measured traits as predictors. Easily obtainable trait data, such as dispersal syndrome (inferred from seed morphology) and growth form, enable predictions to be made for a large number of species.
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r2=0.88). Despite exceeding unity in those directions, the normalized moments were consistent enough across subjects to allow prediction of maximum 3D neck strength from principal axes neck strength.
The NBA’s Maximum Player Salary and the Distribution of Player Rents
Kelly M. Hastings
2015-03-01
Full Text Available The NBA’s 1999 Collective Bargaining Agreement (CBA) included provisions capping individual player pay in addition to team payrolls. This study examines the effect of the NBA’s maximum player salary on player rents by comparing player pay from the 1997–1998 and 2003–2004 seasons while controlling for player productivity and other factors related to player pay. The results indicate a large increase in the pay received by teams’ second highest and, to a lesser extent, third highest paid players. We interpret this result as evidence that the adoption of the maximum player salary shifted rents from stars to complementary players. We also show that the 1999 CBA’s rookie contract provisions reduced the salaries of early career players.
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States). Biological, Environmental, and Climate Sciences Dept.
2014-12-02
ZionSolutions is in the process of decommissioning the Zion Nuclear Power Plant in order to establish a new water treatment plant. There are residual radioactive particles from the plant which need to be brought down to levels such that an individual who receives water from the new treatment plant does not receive a radioactive dose in excess of 25 mrem/y. The objectives of this report are: (a) to present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (b) to provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; (c) to estimate the maximum concentration in a well located outside of the fill material; and (d) to perform a sensitivity analysis of key parameters.
Stovel, R T; Sweet, R G
1979-01-01
Current cell sorting machines do not preserve the individual identity of processed cells; after analysis, the cells are assigned to a subpopulation where they are pooled with other similar cells. This paper reports progress on a system that sorts cells individually to precise locations on a microscope slide and preserves them for further observation with a light microscope while recording flow measurement data for each cell. Various electronic and mechanical modifications to an existing sorting machine are described that increase drop placement accuracy and permit individual cell sorting.
High-frequency maximum observable shaking map of Italy from fault sources
Zonno, Gaetano
2012-03-17
We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i.e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of Mw 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.
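The merging step at the heart of a MOS map (keeping, for every grid cell, the largest shaking value over all scenario simulations) reduces to a cell-wise maximum. A minimal sketch with synthetic PGA grids follows; the values and grid size are illustrative, not the Italian dataset:

```python
# Sketch of the per-cell "maximum observable shaking" merge: given many
# scenario grids of peak ground acceleration (PGA), keep the cell-wise
# maximum across all scenarios. Grids and values here are synthetic.

def merge_mos(scenario_grids):
    """Cell-wise maximum over a list of equally sized 2-D grids."""
    rows, cols = len(scenario_grids[0]), len(scenario_grids[0][0])
    mos = [[0.0] * cols for _ in range(rows)]
    for grid in scenario_grids:
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] > mos[r][c]:
                    mos[r][c] = grid[r][c]
    return mos

scenarios = [
    [[0.10, 0.30], [0.20, 0.05]],   # scenario 1 PGA (g)
    [[0.25, 0.10], [0.15, 0.40]],   # scenario 2 PGA (g)
]
print(merge_mos(scenarios))  # [[0.25, 0.3], [0.2, 0.4]]
```

In the paper's setting the same reduction simply runs over thousands of scenario grids instead of two.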
Multi-digit maximum voluntary torque production on a circular object
SHIM, JAE KUN; HUANG, JUNFENG; HOOKE, ALEXANDER W.; LATSH, MARK L.; ZATSIORSKY, VLADIMIR M.
2010-01-01
Individual digit-tip forces and moments during torque production on a mechanically fixed circular object were studied. During the experiments, subjects positioned each digit on a 6-dimensional force/moment sensor attached to a circular handle and produced a maximum voluntary torque on the handle. The torque direction and the orientation of the torque axis were varied. From this study, it is concluded that: (1) the maximum torque in the closing (clockwise) direction was larger than in the opening (counterclockwise) direction; (2) the thumb and little finger had the largest and the smallest share of both total normal force and total moment, respectively; (3) the sharing of total moment between individual digits was not affected by the orientation of the torque axis or by the torque direction, while the sharing of total normal force between the individual digits varied with torque direction; (4) the normal force safety margins were largest and smallest in the thumb and little finger, respectively. PMID:17454086
Predicting Maximum Sunspot Number in Solar Cycle 24
Nipa J Bhatt; Rajmal Jain; Malini Aggarwal
2009-03-01
A few prediction methods have been developed based on the precursor technique which is found to be successful for forecasting the solar activity. Considering the geomagnetic activity aa indices during the descending phase of the preceding solar cycle as the precursor, we predict the maximum amplitude of annual mean sunspot number in cycle 24 to be 111 ± 21. This suggests that the maximum amplitude of the upcoming cycle 24 will be less than cycles 21–22. Further, we have estimated the annual mean geomagnetic activity aa index for the solar maximum year in cycle 24 to be 20.6 ± 4.7 and the average of the annual mean sunspot number during the descending phase of cycle 24 is estimated to be 48 ± 16.8.
Construction and enumeration of Boolean functions with maximum algebraic immunity
ZHANG WenYing; WU ChuanKun; LIU XiangZhong
2009-01-01
Algebraic immunity is a new cryptographic criterion proposed against algebraic attacks. In order to resist algebraic attacks, Boolean functions used in many stream ciphers should possess high algebraic immunity. This paper presents two main results for finding balanced Boolean functions with maximum algebraic immunity. Through swapping the values of two bits, and then generalizing the result to swap some pairs of bits of the symmetric Boolean function constructed by Dalai, a new class of Boolean functions with maximum algebraic immunity is constructed. Enumeration of such functions is also given. For a given function p(x) with deg(p(x)) < [n/2], we give a method to construct functions of the form p(x)+q(x) which achieve the maximum algebraic immunity, where every term with nonzero coefficient in the ANF of q(x) has degree no less than [n/2].
Propane spectral resolution enhancement by the maximum entropy method
Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.
1990-01-01
The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10 samples. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
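The Burg recursion behind the MEM estimate is compact enough to sketch. The pure-Python version below is illustrative, not the authors' interferometer pipeline; the AR(1) test signal and model order are assumptions for the demo:

```python
import math, random

def burg(x, order):
    """Burg's method: returns AR coefficients a (a[0] = 1) and error power e."""
    n = len(x)
    f, b = list(x), list(x)          # forward / backward prediction errors
    a = [1.0]
    e = sum(v * v for v in x) / n
    for m in range(order):
        num = -2.0 * sum(f[i] * b[i - 1] for i in range(m + 1, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m + 1, n))
        k = num / den                                 # reflection coefficient
        a = a + [0.0]
        a = [a[i] + k * a[m + 1 - i] for i in range(m + 2)]   # Levinson step
        for i in range(n - 1, m, -1):                 # update errors in place
            fo = f[i]
            f[i] = fo + k * b[i - 1]
            b[i] = b[i - 1] + k * fo
        e *= 1.0 - k * k
    return a, e

def ar_psd(a, e, freq):
    """Maximum entropy PSD at a normalized frequency (cycles/sample)."""
    re = sum(c * math.cos(2 * math.pi * freq * k) for k, c in enumerate(a))
    im = sum(c * math.sin(2 * math.pi * freq * k) for k, c in enumerate(a))
    return e / (re * re + im * im)

# Sanity check on a synthetic AR(1) process x[t] = 0.9 x[t-1] + noise.
random.seed(1)
x = [0.0]
for _ in range(4000):
    x.append(0.9 * x[-1] + random.gauss(0.0, 1.0))
a, e = burg(x[1:], order=1)
assert abs(a[1] + 0.9) < 0.05           # recovers the AR coefficient
assert ar_psd(a, e, 0.0) > ar_psd(a, e, 0.4)   # low-pass spectrum, as expected
```

The MEM resolution advantage the abstract reports comes from fitting such an AR model rather than windowing the raw interferogram.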
Mass mortality of the vermetid gastropod Ceraesignum maximum
Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.
2016-09-01
Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., that filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data is processed with the optimal polarimetric matched filter.
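The eigenvalue formulation described above can be sketched directly. The covariance matrices below are synthetic stand-ins for the two scattering classes; the key step, taking the dominant eigenvector of B⁻¹A, is standard linear algebra:

```python
import numpy as np

# Contrast between two classes with covariances A and B is the Rayleigh
# ratio w^H A w / w^H B w; it is maximized by the eigenvector of B^{-1} A
# with the largest eigenvalue. A and B here are synthetic, not SAR data.
rng = np.random.default_rng(0)

def random_covariance(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return m @ m.conj().T + n * np.eye(n)      # Hermitian positive definite

A, B = random_covariance(3), random_covariance(3)   # two scattering classes

eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
w_opt = eigvecs[:, np.argmax(eigvals.real)]         # optimal matched filter

def contrast(w):
    return (w.conj() @ A @ w).real / (w.conj() @ B @ w).real

# No other filter achieves a higher contrast ratio.
trials = rng.normal(size=(100, 3)) + 1j * rng.normal(size=(100, 3))
assert all(contrast(w_opt) >= contrast(w) - 1e-9 for w in trials)
```

Transforming `w_opt` into transmit/receive polarization states is the paper's additional, physics-specific step.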
Penalized maximum likelihood estimation and variable selection in geostatistics
Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919
2012-01-01
We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\\mathrm{T}}$ and OSE$_{\\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...
Influence of maximum decking charge on intensity of blasting vibration
Anonymous
2006-01-01
Based on the character of short-time non-stationary random signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. Firstly, the characteristics of the wavelet transform and wavelet packet analysis were described. Secondly, the blasting vibration signals were analyzed by wavelet packet using MATLAB, and the change of the energy distribution curves at different frequency bands was obtained. Finally, how the energy distribution of blasting vibration signals changes with the maximum decking charge was analyzed. The results show that with the increase of decking charge, the ratio of the energy of high frequency to total energy decreases, the dominant frequency bands of blasting vibration signals tend towards low frequency, and blasting vibration does not depend on the maximum decking charge.
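The band-energy bookkeeping the abstract describes can be illustrated with a toy wavelet packet transform. This sketch uses orthonormal Haar filters and a synthetic signal rather than MATLAB or measured blasting records:

```python
import math

# A 2-level Haar wavelet packet transform splits a signal into 4 frequency
# bands; band energies show where the signal energy concentrates.
SQRT2 = math.sqrt(2.0)

def haar_split(x):
    """One filter-bank stage: (approximation, detail) at half length."""
    a = [(x[i] + x[i + 1]) / SQRT2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / SQRT2 for i in range(0, len(x), 2)]
    return a, d

def packet_bands(x, level):
    """Full wavelet-packet tree: recursively split every branch."""
    bands = [x]
    for _ in range(level):
        bands = [part for band in bands for part in haar_split(band)]
    return bands

signal = [math.sin(2 * math.pi * 0.4 * t) for t in range(64)]  # high-freq tone
bands = packet_bands(signal, 2)                 # order: [aa, ad, da, dd]
energies = [sum(v * v for v in band) for band in bands]
total = sum(v * v for v in signal)

# Orthonormal Haar filters conserve energy across the bands.
assert abs(sum(energies) - total) < 1e-9
# The high-frequency tone puts its energy in the detail branch (bands 2, 3).
assert energies[2] + energies[3] > energies[0] + energies[1]
```

The paper's analysis does the same accounting with smoother wavelets, then tracks how the band ratios shift with decking charge.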
The subsequence weight distribution of summed maximum length digital sequences
Weathers, G. D.; Graf, E. R.; Wallace, G. R.
1974-01-01
An attempt is made to develop mathematical formulas to provide the basis for the design of pseudorandom signals intended for applications requiring accurate knowledge of the statistics of the signals. The analysis approach involves calculating the first five central moments of the weight distribution of subsequences of hybrid-sum sequences. The hybrid-sum sequence is formed from the modulo-two sum of k maximum length sequences and is an extension of the sum sequences formed from two maximum length sequences that Gilson (1966) evaluated. The weight distribution of the subsequences serves as an approximation to the filtering process. The basic reason for the analysis of hybrid-sum sequences is to establish a large group of sequences with good statistical properties. It is shown that this can be accomplished much more efficiently using the hybrid-sum approach rather than forming the group strictly from maximum length sequences.
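The objects under study (maximum length sequences, their modulo-two "hybrid sum", and subsequence weights) are easy to reproduce in miniature. The tap sets and window length below are illustrative choices, not the paper's parameters:

```python
def lfsr(taps, seed, n):
    """Fibonacci LFSR: taps are 1-indexed stage numbers feeding the XOR."""
    state = list(seed)
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

# Two degree-4 m-sequences (period 15) from primitive polynomials.
m1 = lfsr([4, 1], [1, 0, 0, 0], 60)
m2 = lfsr([4, 3], [1, 0, 0, 0], 60)
hybrid = [x ^ y for x, y in zip(m1, m2)]   # modulo-2 "hybrid sum" sequence

def subsequence_weights(seq, w):
    """Weight (number of ones) of every length-w window of the sequence."""
    return [sum(seq[i:i + w]) for i in range(len(seq) - w + 1)]

# First two moments of the subsequence weight distribution (window of 8).
weights = subsequence_weights(hybrid, 8)
mean = sum(weights) / len(weights)
var = sum((w - mean) ** 2 for w in weights) / len(weights)
```

The paper's analysis extends exactly this weight distribution to its first five central moments and to sums of k component sequences.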
Maximum power point tracking for optimizing energy harvesting process
Akbari, S.; Thang, P. C.; Veselov, D. S.
2016-10-01
There has been a growing interest in using energy harvesting techniques for powering wireless sensor networks. The reason for utilizing this technology can be explained by the sensors' limited operation time, which results from the finite capacity of batteries, and by the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multisource energy harvesting platforms to increase the amount of harvested energy and to mitigate the intermittent nature of ambient sources. In the context of solar energy harvesting, it is possible to develop algorithms for finding the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
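The best-known MPPT technique, perturb and observe, can be sketched in a few lines. The PV curve and step size below are assumptions for illustration, not from the article:

```python
def pv_power(v, isc=5.0, voc=21.0):
    """Toy photovoltaic power curve (synthetic, not a real panel model)."""
    if v <= 0 or v >= voc:
        return 0.0
    return v * isc * (1.0 - (v / voc) ** 8)

def perturb_and_observe(v0, step, iterations):
    """Classic P&O hill climbing: keep stepping while power increases."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction   # overshot the peak: reverse
        v, p = v_new, p_new
    return v, p

# Track the maximum power point of the toy curve (true MPP is near 16.0 V).
v_mpp, p_mpp = perturb_and_observe(v0=12.0, step=0.1, iterations=300)
```

After convergence the operating voltage oscillates within one step of the true maximum, which is the characteristic (and well-documented) limitation of P&O.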
Proscriptive Bayesian Programming and Maximum Entropy: a Preliminary Study
Koike, Carla Cavalcante
2008-11-01
Some problems found in robotics systems, such as avoiding obstacles, can be better described using proscriptive commands, where only prohibited actions are indicated, in contrast to prescriptive situations, which demand that a specific command be specified. An interesting question arises regarding the possibility of learning automatically whether proscriptive commands are suitable and which parametric function could be better applied. Lately, a great variety of problems in the robotics domain have been the object of research using probabilistic methods, including the use of maximum entropy in automatic learning for robot control systems. This work presents a preliminary study on automatic learning of proscriptive robot control using maximum entropy and Bayesian Programming. It is verified whether maximum entropy and related methods can favour proscriptive commands in an obstacle avoidance task executed by a mobile robot.
Multitime maximum principle approach of minimal submanifolds and harmonic maps
Udriste, Constantin
2011-01-01
Some optimization problems coming from differential geometry, for example the minimal submanifolds problem and the harmonic maps problem, are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 underlines some science domains where multitime optimal control problems appear. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach of multitime variational calculus. Section 5 (Section 6) proves that the minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that, of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
Full Text Available A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.
Approximate maximum-entropy moment closures for gas dynamics
McDonald, James G.
2016-11-01
Accurate prediction of flows that exist between the traditional continuum regime and the free-molecular regime have proven difficult to obtain. Current methods are either inaccurate in this regime or prohibitively expensive for practical problems. Moment closures have long held the promise of providing new, affordable, accurate methods in this regime. The maximum-entropy hierarchy of closures seems to offer particularly attractive physical and mathematical properties. Unfortunately, several difficulties render the practical implementation of maximum-entropy closures very difficult. This work examines the use of simple approximations to these maximum-entropy closures and shows that physical accuracy that is vastly improved over continuum methods can be obtained without a significant increase in computational cost. Initially the technique is demonstrated for a simple one-dimensional gas. It is then extended to the full three-dimensional setting. The resulting moment equations are used for the numerical solution of shock-wave profiles with promising results.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Kenneth W. K. Lui
2009-01-01
Full Text Available We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Remarks on the strong maximum principle for nonlocal operators
Jerome Coville
2008-05-01
Full Text Available In this note, we study the existence of a strong maximum principle for the nonlocal operator $$\mathcal{M}[u](x) := \int_{G} J(g)\,u(x * g^{-1})\,d\mu(g) - u(x),$$ where $G$ is a topological group acting continuously on a Hausdorff space $X$ and $u \in C(X)$. First we investigate the general situation and derive a pre-maximum principle. Then we restrict our analysis to the case of homogeneous spaces (i.e., $X = G/H$). For such Hausdorff spaces, depending on the topology, we give a condition on $J$ such that a strong maximum principle holds for $\mathcal{M}$. We also revisit the classical case of the convolution operator (i.e., $G = (\mathbb{R}^n, +)$, $X = \mathbb{R}^n$, $d\mu = dy$).
Resource-constrained maximum network throughput on space networks
Yanling Xing; Ning Ge; Youzheng Wang
2015-01-01
This paper investigates the maximum network throughput for resource-constrained space networks based on the delay- and disruption-tolerant networking (DTN) architecture. Specifically, this paper proposes a methodology for calculating the maximum network throughput of multiple transmission tasks under storage and delay constraints over a space network. A mixed-integer linear program (MILP) is formulated to solve this problem. Simulation results show that the proposed methodology can successfully calculate the optimal throughput of a space network under storage and delay constraints, as well as a clear, monotonic relationship between end-to-end delay and the maximum network throughput under storage constraints. At the same time, the optimization results shine light on routing and transport protocol design in space communication, which can be used to obtain the optimal network throughput.
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
Lui, Kenneth W. K.; So, H. C.
2009-12-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations using semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semi-definite relaxation methods with the iterative quadratic maximum likelihood technique as well as Cramér-Rao lower bound.
Quality, precision and accuracy of the maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
The evolution of maximum body size of terrestrial mammals.
Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D
2010-11-26
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
The maximum force in a column under constant speed compression
Kuzkin, Vitaly A
2015-01-01
Dynamic buckling of an elastic column under compression at constant speed is investigated assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters where the maximum load supported by a column exceeds the Euler static force is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Considering the results, we answer the following question: "How slowly should the column be compressed in order to measure the static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from the behavior of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotonic...
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
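The contrast between zero-temperature (maximum likelihood) and finite-temperature (maximum entropy) decoding can be illustrated exactly on a tiny Ising model. The couplings and fields below are synthetic, not the hardware experiment; in this small noise-free toy both decoders agree, whereas the paper's point concerns noisy decoding:

```python
import math, itertools

# 3-spin Ising model in a field: E(s) = sum J_ij s_i s_j + sum h_i s_i.
J = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.3}   # synthetic couplings
h = [0.2, -0.4, 0.1]                            # synthetic local fields

def energy(s):
    e = sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e + sum(hi * si for hi, si in zip(h, s))

states = list(itertools.product([-1, 1], repeat=3))

# Zero temperature: the maximum-likelihood answer is the ground state.
ground = min(states, key=energy)

# Finite temperature: Boltzmann weights over ALL states, then the sign of
# each spin's marginal magnetization (maximum-entropy decoding).
beta = 1.0
weights = [math.exp(-beta * energy(s)) for s in states]
Z = sum(weights)
magnetization = [sum(w * s[i] for w, s in zip(weights, states)) / Z
                 for i in range(3)]
decoded = tuple(1 if m >= 0 else -1 for m in magnetization)
```

On hardware, the annealer samples these Boltzmann weights physically instead of enumerating states; the abstract's result is that the marginal-based decoder can beat the ground-state decoder once noise is present.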
Treating Children as Individuals
... responsibilities, rewards, and punishment—parents must individualize their parenting while trying to remain fair to all. This ... esteem and behavioral style to life goals and career choices.
李谷雨
2016-01-01
Among American symbols such as multiculturalism, high technology, and the country's powerful status in the world, one important representative is its individualism. This paper briefly discusses American individualism through everyday examples.
Estimating the maximum potential revenue for grid connected electricity storage :
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
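For the arbitrage-only piece, the revenue-maximization idea can be sketched without an LP solver by dynamic programming over an integer state of charge. This is a toy stand-in for the paper's linear program; prices, capacity, and power limit are illustrative and round-trip losses are ignored:

```python
def max_arbitrage_revenue(prices, capacity, power):
    """Best arbitrage revenue over a price sequence.

    DP over integer state-of-charge levels; for this lossless, integer-sized
    toy it returns the same optimum a linear program would.
    """
    best = {0: 0.0}                       # state of charge -> best revenue
    for price in prices:
        nxt = {}
        for soc, revenue in best.items():
            for a in range(-power, power + 1):   # a>0 buy/charge, a<0 sell
                soc2 = soc + a
                if 0 <= soc2 <= capacity:
                    r2 = revenue - price * a
                    if r2 > nxt.get(soc2, float("-inf")):
                        nxt[soc2] = r2
        best = nxt
    return max(best.values())

# Buy low, sell high twice: (50 - 10) + (60 - 20) = 80
print(max_arbitrage_revenue([10, 50, 20, 60], capacity=1, power=1))  # 80.0
```

The report's actual formulation adds efficiency losses, continuous power levels, and the regulation-market co-optimization on top of exactly this state-of-charge accounting.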
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd-hydrogen cause the subsequent decrease in odd-oxygen losses – models had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, obtaining spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which vary from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by theory), TOM can also be observed at very high latitudes, not only at the beginning and end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HO_{x} concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HO_{x} enhancement from the increased ionization.
Biological Individuality of Man
1974-12-01
Technical report on the biological determinants of individuality. Contents include: Biological Variability – A. Background; B. Statistical Approaches to Biological Variability; C. Genetic Aspects of Biological Variability. Only recently have genetic influences been investigated and the potentialities for future control of bio
Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules
Gao, Junling; Chen, Min
2013-01-01
Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated by the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations on the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...
Microcanonical origin of the maximum entropy principle for open systems.
Lee, Julian; Pressé, Steve
2012-10-01
There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.
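The partial maximization described above reduces, for the system degrees of freedom alone, to the familiar constrained entropy maximization; a standard sketch of that step is:

```latex
% Maximize S subject to normalization and a fixed mean energy
\mathcal{L} = -\sum_i p_i \ln p_i
  - \lambda\Big(\sum_i p_i - 1\Big)
  - \beta\Big(\sum_i p_i E_i - \langle E \rangle\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p_i} = 0
\;\Longrightarrow\;
p_i = \frac{e^{-\beta E_i}}{Z},
\quad Z = \sum_i e^{-\beta E_i}.
```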
Information Entropy Production of Spatio-Temporal Maximum Entropy Distributions
Cofre, Rodrigo
2015-01-01
Spiking activity from populations of neurons displays causal interactions and memory effects. Therefore, it is expected to show some degree of irreversibility in time. Motivated by spike train statistics, in this paper we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. Our approach is based on the transfer matrix technique, which enables us to find a homogeneous irreducible Markov chain that shares the same maximum entropy measure. We provide relevant examples in the context of spike train statistics.
Semiparametric maximum likelihood for nonlinear regression with measurement errors.
Suh, Eun-Young; Schafer, Daniel W
2002-06-01
This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The illustration of the example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.
Maximum length scale in density based topology optimization
Lazarov, Boyan Stefanov; Wang, Fengwen
2017-01-01
The focus of this work is on two new techniques for imposing maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low pass filter applied to the design parametrization. The main idea...
On the Effect of Mortgages of Maximum Amount
YangZongping
2005-01-01
Since the enactment of the PRC Guarantee Law, mortgages of maximum amount have won wide application in a variety of business occupations and particularly in banking. Compared with the rich content of the 21-clause statute on mortgages of maximum amount in Japan's Civil Law, the Chinese law has only four principled clauses. Its lack of operability, plus its legislative gaps and defects, has a severe impact on the positive effectiveness of the law. The core issue is the question of effectiveness. Because the principles stipulated in the Law run counter to the diversity of its actual practices,
A Maximum Entropy Method for a Robust Portfolio Problem
Yingying Xu
2014-06-01
We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of the asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
On the maximum grain size entrained by photoevaporative winds
Hutchison, Mark A; Maddison, Sarah T
2016-01-01
We model the behaviour of dust grains entrained by photoevaporation-driven winds from protoplanetary discs assuming a non-rotating, plane-parallel disc. We obtain an analytic expression for the maximum entrainable grain size in extreme-UV radiation-driven winds, which we demonstrate to be proportional to the mass loss rate of the disc. When compared with our hydrodynamic simulations, the model reproduces almost all of the wind properties for the gas and dust. In typical turbulent discs, the entrained grain sizes in the wind are smaller than the theoretical maximum everywhere but the inner disc due to dust settling.
Modified maximum likelihood registration based on information fusion
Yongqing Qi; Zhongliang Jing; Shiqiang Hu
2007-01-01
The bias estimation of passive sensors is considered based on information fusion in a multi-platform multi-sensor tracking system. The unobservable problem of bearing-only tracking in the blind spot is analyzed. A modified maximum likelihood method, which uses the redundant information of the multi-sensor system to calculate the target position, is investigated to estimate the biases. Monte Carlo simulation results show that the modified method eliminates the effect of the unobservable problem in the blind spot and can estimate the biases more rapidly and accurately than the maximum likelihood method. It is statistically efficient since the standard deviation of the bias estimation errors meets the theoretical lower bounds.
Maximum-entropy distributions of correlated variables with prespecified marginals.
Larralde, Hernán
2012-12-01
The problem of determining the joint probability distributions for correlated random variables with prespecified marginals is considered. When the joint distribution satisfying all the required conditions is not unique, the "most unbiased" choice corresponds to the distribution of maximum entropy. The calculation of the maximum-entropy distribution requires the solution of rather complicated nonlinear coupled integral equations, exact solutions to which are obtained for the case of Gaussian marginals; otherwise, the solution can be expressed as a perturbation around the product of the marginals if the marginal moments exist.
A discussion on maximum entropy production and information theory
Bruers, Stijn [Instituut voor Theoretische Fysica, Celestijnenlaan 200D, Katholieke Universiteit Leuven, B-3001 Leuven (Belgium)
2007-07-06
We will discuss the maximum entropy production (MaxEP) principle based on Jaynes' information theoretical arguments, as was done by Dewar (2003 J. Phys. A: Math. Gen. 36 631-41, 2005 J. Phys. A: Math. Gen. 38 371-81). With the help of a simple mathematical model of a non-equilibrium system, we will show how to derive minimum and maximum entropy production. Furthermore, the model will help us to clarify some confusing points and to see differences between some MaxEP studies in the literature.
Generalized Relativistic Wave Equations with Intrinsic Maximum Momentum
Ching, Chee Leong
2013-01-01
We examine the nonperturbative effect of maximum momentum on the relativistic wave equations. In momentum representation, we obtain the exact eigen-energies and wavefunctions of the one-dimensional Klein-Gordon and Dirac equations with linear confining potentials, and the Dirac oscillator. Bound state solutions are only possible when the strength of the scalar potential is stronger than the vector potential. The energy spectrum of the systems studied is bounded from above, whereby classical characteristics are observed in the uncertainties of position and momentum operators. Also, there is a truncation in the maximum number of bound states that is allowed. Some of these quantum-gravitational features may have future applications.
Generalized relativistic wave equations with intrinsic maximum momentum
Ching, Chee Leong; Ng, Wei Khim
2014-05-01
We examine the nonperturbative effect of maximum momentum on the relativistic wave equations. In momentum representation, we obtain the exact eigen-energies and wave functions of one-dimensional Klein-Gordon and Dirac equation with linear confining potentials, and the Dirac oscillator. Bound state solutions are only possible when the strength of scalar potential is stronger than vector potential. The energy spectrum of the systems studied is bounded from above, whereby classical characteristics are observed in the uncertainties of position and momentum operators. Also, there is a truncation in the maximum number of bound states that is allowed. Some of these quantum-gravitational features may have future applications.
Parameter estimation in X-ray astronomy using maximum likelihood
Wachter, K.; Leach, R.; Kellogg, E.
1979-01-01
Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
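A minimal sketch of Poisson maximum likelihood fitting of a spectral model, in the spirit of the approach above. The power-law spectrum, channel grid, synthetic counts and Nelder-Mead choice are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)

# Hypothetical setup: Poisson counts in 30 energy channels from a power-law spectrum
energies = np.linspace(1.0, 10.0, 30)   # channel centres (keV), illustrative

def model(amp, idx):
    return amp * energies ** (-idx)

counts = rng.poisson(model(50.0, 1.5))  # synthetic data: true amplitude 50, index 1.5

def neg_log_like(params):
    """Poisson negative log-likelihood of the model (no chi-squared approximation)."""
    amp, idx = params
    if amp <= 0:
        return np.inf  # amplitude must stay positive for a valid Poisson mean
    mu = model(amp, idx)
    return float(np.sum(mu - counts * np.log(mu) + gammaln(counts + 1)))

fit = minimize(neg_log_like, x0=[10.0, 1.0], method="Nelder-Mead")
amp_hat, idx_hat = fit.x
```

Confidence regions would then follow from the curvature of the log-likelihood around the best fit, which is the part of the method the abstract argues remains valid for poor-quality data.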
A maximum in the strength of nanocrystalline copper
Schiøtz, Jakob; Jacobsen, Karsten Wedel
2003-01-01
We used molecular dynamics simulations with system sizes up to 100 million atoms to simulate plastic deformation of nanocrystalline copper. By varying the grain size between 5 and 50 nanometers, we show that the flow stress, and thus the strength, exhibits a maximum at a grain size of 10 to 15 nanometers. This maximum is because of a shift in the microscopic deformation mechanism from dislocation-mediated plasticity in the coarse-grained material to grain boundary sliding in the nanocrystalline region. The simulations allow us to observe the mechanisms behind the grain-size dependence...
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
The SIS and SIR stochastic epidemic models: a maximum entropy approach.
Artalejo, J R; Lopez-Herrero, M J
2011-12-01
We analyze the dynamics of infectious disease spread by formulating the maximum entropy (ME) solutions of the susceptible-infected-susceptible (SIS) and the susceptible-infected-removed (SIR) stochastic models. Several scenarios providing helpful insight into the use of the ME formalism for epidemic modeling are identified. The ME results are illustrated with respect to several descriptors, including the number of recovered individuals and the time to extinction. An application to infectious data from outbreaks of extended spectrum beta lactamase (ESBL) in a hospital is also considered.
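For context, the stochastic SIS dynamics whose descriptors (such as time to extinction) the ME solutions summarize can be simulated directly. This is a generic Gillespie sketch with illustrative rates, not the paper's maximum entropy formalism.

```python
import numpy as np

rng = np.random.default_rng(2)

def sis_gillespie(N=100, I0=5, beta=0.2, gamma=0.1, t_max=200.0):
    """Exact stochastic simulation (Gillespie) of the SIS model; rates illustrative."""
    t, I = 0.0, I0
    while t < t_max and I > 0:
        rate_inf = beta * I * (N - I) / N   # infection: S -> I
        rate_rec = gamma * I                # recovery:  I -> S
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)   # waiting time to the next event
        I += 1 if rng.random() < rate_inf / total else -1
    extinction_time = t if I == 0 else None  # a descriptor the ME solution targets
    return I, extinction_time

I_end, t_ext = sis_gillespie()
```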
S.B. Ross; R.E. Best; S.J. Maheras; T.I. McSweeney
2001-08-17
Accidents could occur during the transportation of spent nuclear fuel and high-level radioactive waste. This paper describes the risks and consequences to the public from accidents that are highly unlikely but that could have severe consequences. The impact of these accidents would include those to a collective population and to hypothetical maximally exposed individuals (MEIs). This document discusses accidents with conditions that have a chance of occurring more often than 1 in 10 million times in a year, called "maximum reasonably foreseeable accidents". Accidents and conditions less likely than this are not considered to be reasonably foreseeable.
A Brooks type theorem for the maximum local edge connectivity
Stiebitz, Michael; Toft, Bjarne
2017-01-01
For a graph $G$, let $\chi(G)$ and $\lambda(G)$ denote the chromatic number of $G$ and the maximum local edge connectivity of $G$, respectively. A result of Dirac [Dirac53] implies that every graph $G$ satisfies $\chi(G) \leq \lambda(G)+1$. In this paper we characterize the graphs $G$ for which $\chi(G)...
Prediction of Maximum Oxygen Consumption from Walking, Jogging, or Running.
Larsen, Gary E.; George, James D.; Alexander, Jeffrey L.; Fellingham, Gilbert W.; Aldana, Steve G.; Parcell, Allen C.
2002-01-01
Developed a cardiorespiratory endurance test that retained the inherent advantages of submaximal testing while eliminating reliance on heart rate measurement in predicting maximum oxygen uptake (VO2max). College students completed three exercise tests. The 1.5-mile endurance test predicted VO2max from submaximal exercise without requiring heart…
On the maximum backscattering cross section of passive linear arrays
Solymar, L.; Appel-Hansen, Jørgen
1974-01-01
The maximum backscattering cross section of an equispaced linear array connected to a reactive network and consisting of isotropic radiators is calculated for n = 2, 3, and 4 elements as a function of the incident angle and of the distance between the elements. On the basis of the results obtained...
Scientific substantination of maximum allowable concentration of fluopicolide in water
Pelo I.М.
2014-03-01
In order to substantiate the fluopicolide maximum allowable concentration in the water of water reservoirs, research was carried out. Methods of study: laboratory hygienic experiment using organoleptic, sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The results of fluopicolide influence on the organoleptic properties of water and the sanitary regimen of reservoirs for household purposes were given, and its subthreshold concentration in water by the sanitary-toxicological hazard index was calculated. The threshold concentration of the substance by the main hazard criteria was established, and the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: the fluopicolide threshold concentration in water by the organoleptic hazard index (limiting criterion – the smell) is 0.15 mg/dm3; by the general sanitary hazard index (limiting criteria – impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification) – 0.015 mg/dm3; the maximum non-effective concentration – 0.14 mg/dm3; the maximum allowable concentration – 0.015 mg/dm3.
Effects of bruxism on the maximum bite force
Todić Jelena T.
2017-01-01
Background/Aim. Bruxism is a parafunctional activity of the masticatory system, which is characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups underwent the procedure of measuring the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying the maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism influences the increase in the maximum bite force as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
A Unified Maximum Likelihood Approach to Document Retrieval.
Bodoff, David; Enache, Daniel; Kambil, Ajit; Simon, Gary; Yukhimets, Alex
2001-01-01
Addresses the query- versus document-oriented dichotomy in information retrieval. Introduces a maximum likelihood approach to utilizing feedback data that can be used to construct a concrete object function that estimates both document and query parameters in accordance with all available feedback data. (AEF)
Sequential and Parallel Algorithms for Finding a Maximum Convex Polygon
Fischer, Paul
1997-01-01
such a polygon which is maximal with respect to area can be found in time O(n³ log n). With the same running time one can also find such a polygon which contains a maximum number of positive points. If, in addition, the number of vertices of the polygon is restricted to be at most M, then the running time...
34 CFR 682.204 - Maximum loan amounts.
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Maximum loan amounts. 682.204 Section 682.204 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF POSTSECONDARY EDUCATION, DEPARTMENT OF EDUCATION FEDERAL FAMILY EDUCATION LOAN (FFEL) PROGRAM General Provisions § 682.204...
Triadic conceptual structure of the maximum entropy approach to evolution.
Herrmann-Pillath, Carsten; Salthe, Stanley N
2011-03-01
Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality, efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
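For the unconstrained continuous case, the straight-line relationship mentioned above can be written out explicitly. With $\nu^p = E|X|^p$ (so $\nu$ is the $L_p$ norm), the standard maximum entropy argument gives a generalized Gaussian density and an entropy linear in $\ln\nu$:

```latex
f^*(x) = \frac{\exp\!\big(-|x|^p/(p\,\nu^p)\big)}{2\,(p\,\nu^p)^{1/p}\,\Gamma(1+1/p)},
\qquad
h_{\max} = \ln \nu + \ln\!\big[\,2\,(pe)^{1/p}\,\Gamma(1+1/p)\,\big].
```

For $p=2$ this reduces to the Gaussian value $h_{\max} = \tfrac12\ln(2\pi e\,\nu^2)$, as expected.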
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
The constraint rule of the maximum entropy principle
Uffink, J.
2001-01-01
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions.
On the maximum entropy principle in non-extensive thermostatistics
Naudts, Jan
2004-01-01
It is possible to derive the maximum entropy principle from thermodynamic stability requirements. Using as a starting point the equilibrium probability distribution, currently used in non-extensive thermostatistics, it turns out that the relevant entropy function is Renyi's alpha-entropy, and not Tsallis' entropy.
MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR
SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM
1994-01-01
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the estimate.
Maximum-Entropy Inference with a Programmable Annealer.
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2016-03-03
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
40 CFR 35.145 - Maximum federal share.
2010-07-01
... STATE AND LOCAL ASSISTANCE Environmental Program Grants Air Pollution Control (section 105) § 35.145 Maximum federal share. (a) The Regional Administrator may provide air pollution control agencies, as... programs for the prevention and control of air pollution or implementing national primary and...
Maximum Safety Regenerative Power Tracking for DC Traction Power Systems
Guifu Du
2017-02-01
Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a potential voltage known as “rail potential” is generated between the rails and ground. Currently, abnormal rises of rail potential exist in many railway lines during the operation of railway systems. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed for the control of maximum absolute rail potential and energy consumption during the operation of DC traction power systems. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ are optimized based on an improved particle swarm optimization (PSO algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can reduce the maximum absolute rail potential and energy consumption effectively and guarantee the safety in energy saving of DC traction power systems.
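The paper's improved PSO variant is not specified in the abstract; the following is a plain, generic PSO sketch on a toy objective, standing in for the dwell-time/trigger-voltage optimization. All parameters and the test function are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimizer (generic sketch, not the paper's improved PSO)."""
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest = x.copy()                                       # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)               # personal best values
    gbest = pbest[np.argmin(pbest_val)].copy()             # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

# Toy stand-in for the rail-potential/energy objective: a shifted sphere function
best_x, best_val = pso(lambda z: float(np.sum((z - 0.3) ** 2)), bounds=[(-1.0, 1.0)] * 2)
```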
A MAXIMUM ENTROPY METHOD FOR CONSTRAINED SEMI-INFINITE PROGRAMMING PROBLEMS
ZHOU Guanglu; WANG Changyu; SHI Zhenjun; SUN Qingying
1999-01-01
This paper presents a new method, called the maximum entropy method, for solving semi-infinite programming problems, in which the semi-infinite programming problem is approximated by one with a single constraint. The convergence properties of this method are discussed. Numerical examples are given to show the high efficiency of the algorithm.
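The single aggregated constraint in methods of this family is typically a log-sum-exp (maximum entropy) smoothing of the discretized constraint family; a sketch under that assumption, with an illustrative constraint family:

```python
import numpy as np

def aggregate(constraints, x, p=100.0):
    """Max-entropy (log-sum-exp) aggregation of many constraints g_i(x) <= 0 into one
    smooth surrogate; tends to max_i g_i(x) from above as p -> infinity."""
    vals = np.array([g(x) for g in constraints])
    m = vals.max()  # shift for numerical stability
    return m + np.log(np.sum(np.exp(p * (vals - m)))) / p

# Discretized semi-infinite constraint family g(x, t) = t*x - 1 <= 0, t in [0, 1]
ts = np.linspace(0.0, 1.0, 50)
gs = [lambda x, t=t: t * x - 1.0 for t in ts]

smooth_val = aggregate(gs, 0.5)       # close to the binding constraint value
worst_val = max(g(0.5) for g in gs)   # exact maximum over the family
```

The smooth surrogate can then be handed to a standard nonlinear programming solver, with p increased across iterations to tighten the approximation.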
Closed form maximum likelihood estimator of conditional random fields
Zhu, Zhemin; Hiemstra, Djoerd; Apers, Peter M.G.; Wombacher, Andreas
2013-01-01
Training Conditional Random Fields (CRFs) can be very slow for big data. In this paper, we present a new training method for CRFs called Empirical Training, which is motivated by the concept of co-occurrence rate. We show that the standard training (unregularized) can have many maximum likelihood...
Heteroscedastic one-factor models and marginal maximum likelihood estimation
Hessen, D.J.; Dolan, C.V.
2009-01-01
In the present paper, a general class of heteroscedastic one-factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one-dimensional factor score. A marginal maximum likelihood procedure for parameter estimation...
Siliceous structures in the grass Panicum maximum
Pedro Fontana Junior
1957-05-01
The silica structures of the grass Panicum maximum were studied by electron and phase contrast microscopy. Several interesting kinds of silica formations (spikelets) were found. The most interesting structures resemble the "Schaumstruktur" found by HELMCKE in diatoms. Another interesting structure was described in the "silica cells", and a detailed study of the morphology of some different kinds of spikelets was made.
Bias Correction for Alternating Iterative Maximum Likelihood Estimators
Gang YU; Wei GAO; Ningzhong SHI
2013-01-01
In this paper, we give a definition of the alternating iterative maximum likelihood estimator (AIMLE), which is a biased estimator. Furthermore, we adjust the AIMLE to obtain asymptotically unbiased and consistent estimators by using a bootstrap iterative bias correction method as in Kuk (1995). Two examples and reported simulation results illustrate the performance of the bias correction for the AIMLE.
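Kuk's (1995) method iterates the bootstrap bias correction; a single step of that correction, which already conveys the idea, can be sketched as follows. The data and the deliberately biased estimator in the usage note are illustrative, and the AIMLE itself is not reproduced.

```python
import random
import statistics

def bootstrap_bias_correct(data, estimator, n_boot=2000, rng=random.Random(0)):
    """One step of bootstrap bias correction.

    Estimates bias as mean(theta*_b) - theta_hat over bootstrap resamples and
    subtracts it, i.e. returns 2*theta_hat - mean(theta*_b).
    """
    theta_hat = estimator(data)
    boots = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]  # resample with replacement
        boots.append(estimator(sample))
    bias = statistics.mean(boots) - theta_hat
    return theta_hat - bias
```

For example, the divide-by-n variance estimator is biased downward; the corrected value moves toward the unbiased divide-by-(n-1) answer. Iterating the step, as in Kuk (1995), refines the correction further.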
21 CFR 801.415 - Maximum acceptable level of ozone.
2010-04-01
(a) Ozone is a toxic gas with no known useful medical application in specific, adjunctive, or preventive therapy. In order for ozone to be effective as a germicide, it must be present in…
The maximum number of minimal codewords in long codes
Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.;
2013-01-01
Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981...
A Monte Carlo Evaluation of Maximum Likelihood Multidimensional Scaling Methods
Bijmolt, T.H.A.; Wedel, M.
1996-01-01
We compare three alternative Maximum Likelihood Multidimensional Scaling (MLMDS) methods for pairwise dissimilarity ratings, namely MULTISCALE, MAXSCAL, and PROSCAL, in a Monte Carlo study. The three MLMDS methods recover the true configurations very well. The recovery of the true dimensionality depends on the…
Quantum-dot Carnot engine at maximum power.
Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; Van den Broeck, Christian
2010-04-01
We evaluate the efficiency at maximum power of a quantum-dot Carnot heat engine. The universal values of the coefficients at the linear and quadratic order in the temperature gradient are reproduced. Curzon-Ahlborn efficiency is recovered in the limit of weak dissipation.
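The quantities named in the abstract can be checked numerically: the Carnot efficiency eta_C = 1 - Tc/Th, the Curzon-Ahlborn efficiency at maximum power eta_CA = 1 - sqrt(Tc/Th), and the small-gradient expansion eta_CA = eta_C/2 + eta_C**2/8 + O(eta_C**3), whose first two coefficients are the universal values the paper reproduces. The temperatures below are illustrative.

```python
import math

def carnot_efficiency(t_hot, t_cold):
    """Ideal (reversible) Carnot efficiency."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_hot, t_cold):
    """Efficiency at maximum power of an endoreversible heat engine."""
    return 1.0 - math.sqrt(t_cold / t_hot)

# For a small temperature gradient,
#   eta_CA = eta_C/2 + eta_C**2/8 + O(eta_C**3),
# i.e. the universal linear (1/2) and quadratic (1/8) coefficients.
```

With Th = 300 K and Tc = 297 K (eta_C = 0.01), the two-term expansion matches eta_CA to better than one part in ten thousand.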
Maximum likelihood estimation of phase-type distributions
Esparza, Luz Judith R
This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions ...
Mechanical Sun-Tracking Technique Implemented for Maximum ...
The solar panel is allowed to move from east to west and back with a maximum allowable angle of 180°. Its movement is in only one axis. The prototype built carries the panel from eastward to westward, tracking the sun's movement from…
24 CFR 941.306 - Maximum project cost.
2010-04-01
…(1) project costs that are subject to the TDC limit (i.e., Housing Construction Costs and Community Renewal Costs); and (2) project costs that are not subject to the TDC limit (i.e., Additional Project Costs)… expended for the project, and this becomes the maximum project cost for purposes of the ACC. (b) TDC…
Adaptive Statistical Language Modeling; A Maximum Entropy Approach
1994-04-19
…recognition systems were built that could recognize vowels or digits, but they could not be successfully extended to handle more realistic language… maximum likelihood of generating the training data. The identity of the ML and ME solutions, apart from being aesthetically pleasing, is extremely…
33 CFR 401.3 - Maximum vessel dimensions.
2010-07-01
…and having dimensions that do not exceed the limits set out in the block diagram in appendix I of this… (Navigation and Navigable Waters; Saint Lawrence Seaway Development Corporation, Department of Transportation.)
A relationship between maximum packing of particles and particle size
Fedors, R. F.
1979-01-01
Experimental data indicate that the volume fraction of particles in a packed bed (i.e. maximum packing) depends on particle size. One explanation for this is based on the idea that particle adhesion is the primary factor. In this paper, however, it is shown that entrainment and immobilization of liquid by the particles can also account for the facts.
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
On the query complexity of finding a local maximum point
Rastsvelaev, A.L.; Beklemishev, L.D.
2008-01-01
We calculate the minimal number of queries sufficient to find a local maximum point of a function on a discrete interval for a model with M parallel queries, M ≥ 1. Matching upper and lower bounds are obtained. The bounds are formulated in terms of certain Fibonacci-type sequences of numbers.
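For the sequential case M = 1, the flavor of the problem can be seen in the standard bisection-style peak-finding procedure, which finds a local maximum of an arbitrary function on a discrete interval in O(log n) queries. The paper's Fibonacci-type bounds are sharper and cover M parallel queries; the code below is a generic textbook sketch, not the authors' construction.

```python
def local_max_index(f, lo, hi):
    """Return an index i in [lo, hi] that is a local maximum of f.

    Invariant: the current interval [lo, hi] always contains a local maximum.
    If f(mid) < f(mid + 1), values ascend to the right, so some local maximum
    lies in [mid + 1, hi]; otherwise one lies in [lo, mid]. Two queries per
    halving step give O(log(hi - lo)) queries in total.
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if f(mid) < f(mid + 1):
            lo = mid + 1
        else:
            hi = mid
    return lo
```

On a unimodal function this returns the global peak; on an arbitrary function it still returns some index no smaller than its neighbors, which is all "local maximum" requires.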
A Remark on the Omori-Yau Maximum Principle
Borbely, Albert
2012-01-01
A Riemannian manifold $M$ is said to satisfy the Omori-Yau maximum principle if for any $C^2$ bounded function $g:M\to \Bbb R$ there is a sequence $x_n\in M$ such that $\lim_{n\to \infty}g(x_n)=\sup_M g$, $\lim_{n\to \infty}|\nabla g(x_n)|=0$, and $\limsup_{n\to \infty}\Delta g(x_n)\le 0$.
Maximum likelihood estimation of the attenuated ultrasound pulse
Rasmussen, Klaus Bolding
1994-01-01
The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...
AN INVERSE MAXIMUM CAPACITY PATH PROBLEM WITH LOWER BOUND CONSTRAINTS
杨超; 陈学旗
2002-01-01
The computational complexity of the inverse maximum capacity path problem with a lower bound on the capacity of the maximum capacity path is examined, and it is proved that this problem is NP-complete. A strongly polynomial algorithm for a local optimal solution is provided.
Maximum super angle optimization method for array antenna pattern synthesis
Wu, Ji; Roederer, A. G
1991-01-01
Different optimization criteria related to antenna pattern synthesis are discussed. Based on the maximum criterion and a vector space representation, a simple and efficient optimization method is presented for array and array-fed reflector power pattern synthesis. A sector pattern synthesized by a 20…