WorldWideScience

Sample records for maximum extent practicable

  1. The Extent of Educational Technology's Influence on Contemporary Educational Practices

    OpenAIRE

    Kim, Bradford-Watts

    2005-01-01

    This paper investigates how advances in educational technologies have influenced contemporary educational practices. It discusses the nature of educational technology, the limitations imposed by the digital divide and other factors affecting uptake, and the factors leading to successful implementation of educational technologies. The extent of influence is then discussed, together with the probable implications for educational sites in the future.

  2. The timing of the maximum extent of the Rhone Glacier at Wangen a.d. Aare

    Energy Technology Data Exchange (ETDEWEB)

    Ivy-Ochs, S.; Schluechter, C. [Bern Univ. (Switzerland); Kubik, P.W. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Beer, J. [EAWAG, Duebendorf (Switzerland)

    1997-09-01

    Erratic blocks found in the region of Wangen a.d. Aare delineate the maximum position of the Solothurn lobe of the Rhone Glacier. 10Be and 26Al exposure ages of three of these blocks show that the glacier withdrew from its maximum position at or slightly before 20,000±1,800 years ago. (author) 1 fig., 5 refs.

  3. Training Research: Practical Recommendations for Maximum Impact

    Science.gov (United States)

    Beidas, Rinad S.; Koerner, Kelly; Weingardt, Kenneth R.; Kendall, Philip C.

    2011-01-01

    This review offers practical recommendations regarding research on training in evidence-based practices for mental health and substance abuse treatment. When designing training research, we recommend: (a) aligning with the larger dissemination and implementation literature to consider contextual variables and clearly defining terminology, (b) critically examining the implicit assumptions underlying the stage model of psychotherapy development, (c) incorporating research methods from other disciplines that embrace the principles of formative evaluation and iterative review, and (d) thinking about how technology can be used to take training to scale throughout all stages of a training research project. An example demonstrates the implementation of these recommendations. PMID:21380792

  4. The extent of permafrost in China during the local Last Glacial Maximum (LLGM)

    NARCIS (Netherlands)

    Zhao, L.; Jin, H.; Li, C.; Cui, Z.; Chang, X.; Marchenko, S.S.; Vandenberghe, J.; Zhang, T.; Luo, D.; Liu, G.; Yi, C.

    2014-01-01

    Recent investigations into relict periglacial phenomena in northern and western China and on the Qinghai-Tibet Plateau provide information for delineating the extent of permafrost in China during the Late Pleistocene. Polygonal and wedge-shaped structures indicate that, during the local Last Glacial

  5. GIS-based maps and area estimates of Northern Hemisphere permafrost extent during the Last Glacial Maximum

    NARCIS (Netherlands)

    Lindgren, A.; Hugelius, G.; Kuhry, P.; Christensen, T.R.; Vandenberghe, J.F.

    2016-01-01

    This study presents GIS-based estimates of permafrost extent in the northern circumpolar region during the Last Glacial Maximum (LGM), based on a review of previously published maps and compilations of field evidence in the form of ice-wedge pseudomorphs and relict sand wedges. We focus on field

  6. Preconditioning of Antarctic maximum sea-ice extent by upper-ocean stratification on a seasonal timescale

    OpenAIRE

    Su, Zhan

    2017-01-01

    This study uses an observationally constrained and dynamically consistent ocean and sea ice state estimate. The author presents a remarkable agreement between the location of the edge of Antarctic maximum sea ice extent, reached in September, and the narrow transition band for the upper ocean (0–100 m depths) stratification, as early as April to June. To the south of this edge, the upper ocean has high stratification, which prevents convective fluxes from crossing through; consequently, the ocean h...

  7. Mapping Daily and Maximum Flood Extents at 90-m Resolution During Hurricanes Harvey and Irma Using Passive Microwave Remote Sensing

    Science.gov (United States)

    Galantowicz, J. F.; Picton, J.; Root, B.

    2017-12-01

    Passive microwave remote sensing can provide a distinct perspective on flood events by virtue of wide sensor fields of view, frequent observations from multiple satellites, and sensitivity through clouds and vegetation. During Hurricanes Harvey and Irma, we used AMSR2 (Advanced Microwave Scanning Radiometer 2, JAXA) data to map flood extents starting from the first post-storm rain-free sensor passes. Our standard flood mapping algorithm (FloodScan) derives flooded fraction from 22-km microwave data (AMSR2 or NASA's GMI) in near real time and downscales it to 90-m resolution using a database built from topography, hydrology, and Global Surface Water Explorer data and normalized to microwave data footprint shapes. During Harvey and Irma we tested experimental versions of the algorithm designed to map the maximum post-storm flood extent rapidly and made a variety of map products available immediately for use in storm monitoring and response. The maps have several unique features including spanning the entire storm-affected area and providing multiple post-storm updates as flood water shifted and receded. From the daily maps we derived secondary products such as flood duration, maximum flood extent (Figure 1), and flood depth. In this presentation, we describe flood extent evolution, maximum extent, and local details as detected by the FloodScan algorithm in the wake of Harvey and Irma. We compare FloodScan results to other available flood mapping resources, note observed shortcomings, and describe improvements made in response. We also discuss how best-estimate maps could be updated in near real time by merging FloodScan products and data from other remote sensing systems and hydrological models.

  8. Timing of maximum glacial extent and deglaciation from HualcaHualca volcano (southern Peru), obtained with cosmogenic 36Cl.

    Science.gov (United States)

    Alcalá, Jesus; Palacios, David; Vazquez, Lorenzo; Juan Zamorano, Jose

    2015-04-01

    Andean glacial deposits are key records of climate fluctuations in the southern hemisphere. During the last decades, in situ cosmogenic nuclides have provided fresh and significant dates to determine past glacier behavior in this region. But there are still many important discrepancies, such as the impact of the Last Glacial Maximum or the influence of Late Glacial climatic events on glacial mass balances. Furthermore, glacial chronologies from many sites are still missing, such as HualcaHualca (15° 43' S; 71° 52' W; 6,025 masl), a high volcano of the Peruvian Andes located 70 km northwest of Arequipa. The goal of this study is to establish the age of the Maximum Glacier Extent (MGE) and deglaciation at HualcaHualca volcano. To achieve this objective, we focused on four valleys (Huayuray, Pujro Huayjo, Mollebaya and Mucurca) characterized by a well-preserved sequence of moraines and roches moutonnées. The method is based on geomorphological analysis supported by cosmogenic 36Cl surface exposure dating. 36Cl ages have been estimated with the CHLOE calculator and were compared with other central Andean glacial chronologies as well as paleoclimatological proxies. In Huayuray valley, exposure ages indicate that the MGE occurred ~ 18 - 16 ka. Later, the ice mass gradually retreated, but this process was interrupted by at least two readvances; the last one has been dated at ~ 12 ka. On the other hand, the 36Cl result reflects an MGE age of ~ 13 ka in Mollebaya valley. Also, two samples obtained in the Pujro-Huayjo and Mucurca valleys associated with the MGE have an exposure age of 10-9 ka, but these are likely moraine boulders affected by exhumation or erosion processes. Deglaciation of HualcaHualca volcano began abruptly ~ 11.5 ka ago according to a 36Cl age from polished and striated bedrock in Pujro Huayjo valley, presumably as a result of reduced precipitation as well as a global increase of temperatures. The glacier evolution at HualcaHualca volcano presents a high correlation with

  9. Multi-Temporal Independent Component Analysis and Landsat 8 for Delineating Maximum Extent of the 2013 Colorado Front Range Flood

    Directory of Open Access Journals (Sweden)

    Stephen M. Chignell

    2015-07-01

    Maximum flood extent—a key data need for disaster response and mitigation—is rarely quantified due to storm-related cloud cover and the low temporal resolution of optical sensors. While change detection approaches can circumvent these issues through the identification of inundated land and soil from post-flood imagery, their accuracy can suffer in the narrow and complex channels of increasingly developed and heterogeneous floodplains. This study explored the utility of the Operational Land Imager (OLI) and Independent Component Analysis (ICA) for addressing these challenges in the unprecedented 2013 flood along the Colorado Front Range, USA. Pre- and post-flood images were composited and transformed with an ICA to identify change classes. Flooded pixels were extracted using image segmentation, and the resulting flood layer was refined with cloud and irrigated-agriculture masks derived from the ICA. Visual assessment against aerial orthophotography showed close agreement with high water marks and scoured riverbanks, and a pixel-to-pixel validation with WorldView-2 imagery captured near peak flow yielded an overall accuracy of 87% and a Kappa of 0.73. Additional tests showed a twofold increase in flood class accuracy over the commonly used modified normalized water index. The approach was able to simultaneously distinguish flood-related water and soil moisture from pre-existing water bodies and other spectrally similar classes within the narrow and braided channels of the study site. This was accomplished without the use of post-processing smoothing operations, enabling the important preservation of nuanced inundation patterns. Although flooding beneath moderate and sparse riparian vegetation canopy was captured, dense vegetation cover and paved regions of the floodplain were the main sources of omission error, and commission errors occurred primarily in pixels of mixed land use and along the flood edge. Nevertheless, the unsupervised nature of ICA
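
    As a rough illustration of the ICA change-detection step described above (not the authors' implementation), the following sketch unmixes a stacked pre-/post-flood Landsat 8 OLI composite with scikit-learn's FastICA; the array shapes, band count, and number of components are illustrative assumptions.

    ```python
    # Minimal sketch (not the study's code): derive independent components from a
    # stacked pre-/post-flood image; components dominated by flood-related change
    # can then be thresholded or segmented into a flood mask.
    import numpy as np
    from sklearn.decomposition import FastICA

    def ica_change_components(pre, post, n_components=6, random_state=0):
        """pre, post: (bands, rows, cols) reflectance arrays of the same scene."""
        bands, rows, cols = pre.shape
        stacked = np.concatenate([pre, post], axis=0)   # (2*bands, rows, cols)
        X = stacked.reshape(2 * bands, -1).T            # pixels x features
        ica = FastICA(n_components=n_components, random_state=random_state)
        S = ica.fit_transform(X)                        # pixels x components
        return S.T.reshape(n_components, rows, cols)
    ```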

  10. Total Quality Management in Secondary Schools in Kenya: Extent of Practice

    Science.gov (United States)

    Ngware, Moses Waithanji; Wamukuru, David Kuria; Odebero, Stephen Onyango

    2006-01-01

    Purpose: To investigate the extent to which secondary schools practiced aspects of total quality management (TQM). Design/methodology/approach: A cross-sectional research design was used in this study. A sample of 300 teachers in a residential session during a school holiday provided their perceptions on the practice of TQM in their schools. Data…

  11. ORIGINAL ARTICLES Surgical practice in a maximum security prison

    African Journals Online (AJOL)

    Prison Clinic, Mangaung Maximum Security Prison, Bloemfontein. F Kleinhans, BA (Cur). ... HIV positivity rate and the use of the rectum to store foreign objects. ... fruit in sunlight. Other positive health-promoting factors may also play a role. ...

  12. Surgical practice in a maximum security prison – unique and ...

    African Journals Online (AJOL)

    The practice of general surgery in a prison population differs considerably from that in a general surgical practice. We audited surgical consultations at the Mangaung Correctional Centre from December 2003 to April 2009. We found a high incidence of foreign object ingestion and anal pathology. Understanding the medical ...

  13. Optical ages indicate the southwestern margin of the Green Bay Lobe in Wisconsin, USA, was at its maximum extent until about 18,500 years ago

    Science.gov (United States)

    Attig, J.W.; Hanson, P.R.; Rawling, J.E.; Young, A.R.; Carson, E.C.

    2011-01-01

    Samples for optical dating were collected to estimate the time of sediment deposition in small ice-marginal lakes in the Baraboo Hills of Wisconsin. These lakes formed high in the Baraboo Hills when drainage was blocked by the Green Bay Lobe when it was at or very near its maximum extent. Therefore, these optical ages provide control for the timing of the thinning and recession of the Green Bay Lobe from its maximum position. Sediment that accumulated in four small ice-marginal lakes was sampled and dated. Difficulties with field sampling and estimating dose rates made the interpretation of optical ages derived from samples from two of the lake basins problematic. Samples from the other two lake basins (South Bluff and Feltz basins) responded well during laboratory analysis and showed reasonably good agreement between the multiple ages produced at each site. These ages averaged 18.2 ka (n = 6) and 18.6 ka (n = 6), respectively. The optical ages from these two lake basins, where we could carefully select sediment samples, provide firm evidence that the Green Bay Lobe stood at or very near its maximum extent until about 18.5 ka. The persistence of ice-marginal lakes in these basins high in the Baraboo Hills indicates that the ice of the Green Bay Lobe had not experienced significant thinning near its margin prior to about 18.5 ka. These ages are the first to directly constrain the timing of the maximum extent of the Green Bay Lobe and the onset of deglaciation in the area for which the Wisconsin Glaciation was named. © 2011 Elsevier B.V.

  14. Are inundation limit and maximum extent of sand useful for differentiating tsunamis and storms? An example from sediment transport simulations on the Sendai Plain, Japan

    Science.gov (United States)

    Watanabe, Masashi; Goto, Kazuhisa; Bricker, Jeremy D.; Imamura, Fumihiko

    2018-02-01

    We examined the quantitative difference in the distribution of tsunami and storm deposits based on numerical simulations of inundation and sediment transport due to tsunami and storm events on the Sendai Plain, Japan. The calculated distance from the shoreline inundated by the 2011 Tohoku-oki tsunami was smaller than that inundated by storm surges from hypothetical typhoon events. Previous studies have assumed that deposits observed farther inland than the possible inundation limit of storm waves and storm surge were tsunami deposits. However, confirming only the extent of inundation is insufficient to distinguish tsunami and storm deposits, because the inundation limit of storm surges may be farther inland than that of tsunamis in the case of gently sloping coastal topography such as on the Sendai Plain. In other locations, where coastal topography is steep, the maximum inland inundation extent of storm surges may be only several hundred meters, so marine-sourced deposits that are distributed several km inland can be identified as tsunami deposits by default. Over both gentle and steep slopes, another difference between tsunami and storm deposits is the total volume deposited, as flow speed over land during a tsunami is faster than during a storm surge. Therefore, the total deposit volume could also be a useful proxy to differentiate tsunami and storm deposits.

  15. Oblique map showing maximum extent of 20,000-year-old (Tioga) glaciers, Yosemite National Park, central Sierra Nevada, California

    Science.gov (United States)

    Alpha, T.R.; Wahrhaftig, Clyde; Huber, N.K.

    1987-01-01

    This map shows the alpine ice field and associated valley glaciers at their maximum extent during the Tioga glaciation. The Tioga glaciation, which peaked about 15,000-20,000 years ago, was the last major glaciation in the Sierra Nevada. The Tuolumne ice field fed not only the trunk glacier that moved down the Tuolumne River canyon through the present-day Hetch Hetchy Reservoir, but it also overflowed major ridge crests into many adjoining drainage systems. Some of the ice flowed over low passes to augment the flows moving from the Merced basin down through Little Yosemite Valley. Tuolumne ice flowed southwest down the Tuolumne River into the Tenaya Lake basin and then down Tenaya Canyon to join the Merced glacier in Yosemite Valley. During the Tioga glaciation, the glacier in Yosemite Valley reached only as far as Bridalveil Meadow, although during a much earlier glaciation, a glacier extended about 10 miles farther down the Merced River to the vicinity of El Portal. Ice of the Tioga glaciation also flowed eastward from the summit region to cascade down the canyons that cut into the eastern escarpment of the Sierra Nevada. Southeast of the present-day Yosemite Park, glaciers formed in the Mount Lyell region flowed east onto the Mono lowland and southeast and south down the Middle and North Forks of the San Joaquin River. In the southern part of the park, glaciers nearly reached to the present-day site of Wawona along the South Fork of the Merced River. At the time of the maximum extent of the Tioga glaciation, Lake Russell (Pleistocene Mono Lake) had a surface elevation of 6,800 feet, 425 feet higher than the 1980 elevation and 400 feet lower than its maximum level at the end of the Tioga glaciation. Only a few volcanic domes of the Mono Craters existed at the time of the Tioga glaciation. The distribution of vegetation, as suggested by the green overprint, is based on our interpretation. Forests were restricted to lower elevations than present

  16. The Last Permafrost Maximum (LPM) map of the northern hemisphere: permafrost extent and mean annual air temperatures, 25-17 ka BP

    NARCIS (Netherlands)

    Vandenberghe, J.; French, H.M.; Gorbunov, A.; Velichko, A.A.; Jin, H.; Cui, Z.; Zhang, T.; Wan, X.

    2014-01-01

    This paper accompanies a map that shows the extent of permafrost in the Northern Hemisphere between 25 and 17 thousand years ago. The map is based upon existing archival data, common throughout the Northern Hemisphere, that include ice-wedge pseudomorphs, sand wedges and large cryoturbations. Where

  17. Quality and extent of locum tenens coverage in pediatric surgical practices.

    Science.gov (United States)

    Nolan, Tracy L; Kandel, Jessica J; Nakayama, Don K

    2015-04-01

    The prevalence and quality of locum tenens coverage in pediatric surgery have not been determined. An Internet-based survey of American Pediatric Surgical Association members was conducted, covering: 1) practice description; 2) use and frequency of locum tenens coverage; 4) whether the surgeon provided such coverage; and 5) Likert scale responses (strongly disagree, disagree, neutral, agree, strongly agree) to statements addressing its acceptability and quality (two × five contingency table and χ² analyses, significance at P ...). ... view it as a stopgap solution to the surgical workforce shortage.

  18. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.

  19. The extent to which Latina/o preservice teachers demonstrate culturally responsive teaching practices during science and mathematics instruction

    Science.gov (United States)

    Hernandez, Cecilia M.

    2011-12-01

    Complex social, racial, economic, and political issues involved in the practice of teaching today require beginning teachers to be informed, skilled, and culturally responsive when entering the classroom. Teacher educators must educate future teachers in ways that will help them teach all children regardless of language, cultural background, or prior knowledge. The purpose of this study was to explore the extent to which culturally and linguistically diverse (CLD) novice teachers described and demonstrated culturally responsive teaching strategies using their students' cultural and academic profiles to inform practice in science and mathematics instruction. This qualitative exploratory case study considered the culturally responsive teaching practices of 12, non-traditional, Latina/o students as they progressed through a distance-based collaborative teacher education program. Qualitative techniques used throughout this exploratory case study investigated cultural responsiveness of these student teachers as they demonstrated their abilities to: a) integrate content and facilitate knowledge construction; b) illustrate social justice and prejudice reduction; and c) develop students academically. In conclusion, student teachers participating in this study demonstrated their ability to integrate content by: (1) including content from other cultures, (2) building positive teacher-student relationships, and (3) holding high expectations for all students. They also demonstrated their ability to facilitate knowledge construction by building on what students knew. Since there is not sufficient data to support the student teachers' abilities to assist students in learning to be critical, independent thinkers who are open to other ways of knowing, no conclusions regarding this subcategory could be drawn. Student teachers in this study illustrated prejudice reduction by: (1) using native language support to assist students in learning and understanding science and math content

  20. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history

    OpenAIRE

    Cherry, Joshua L.

    2017-01-01

    Background: Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Results: Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data...

  1. Multidetector computed tomography of the head in acute stroke: predictive value of different patterns of the dense artery sign revealed by maximum intensity projection reformations for location and extent of the infarcted area

    Energy Technology Data Exchange (ETDEWEB)

    Gadda, Davide; Vannucchi, Letizia; Niccolai, Franco; Neri, Anna T.; Carmignani, Luca; Pacini, Patrizio [Ospedale del Ceppo, U.O. Radiodiagnostica, Pistoia (Italy)

    2005-12-01

    Maximum intensity projection reconstructions from 2.5 mm unenhanced multidetector computed tomography axial slices were obtained from 49 patients within the first 6 h of anterior-circulation cerebral strokes to identify different patterns of the dense artery sign and their prognostic implications for location and extent of the infarcted areas. The dense artery sign was found in 67.3% of cases. Increased density of the whole M1 segment with extension to M2 of the middle cerebral artery was associated with a wider extension of cerebral infarcts in comparison to the M1 segment alone or distal M1 and M2. A dense sylvian branch of the middle cerebral artery pattern was associated with a more restricted extension of infarct territory. We found 62.5% of patients without a demonstrable dense artery to have a limited peripheral cortical or capsulonuclear lesion. In patients with 7-10 points on the Alberta Stroke Programme Early Computed Tomography Score and a dense proximal MCA in the first hours of ictus, the mean decrease in the score between baseline and follow-up was 5.09±1.92 points. In conclusion, maximum intensity projections from thin-slice images can be quickly obtained from standard computed tomography datasets using a multidetector scanner and are useful in identifying and correctly localizing the dense artery sign, with prognostic implications for the extent of cerebral damage. (orig.)

  2. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history.

    Science.gov (United States)

    Cherry, Joshua L

    2017-02-23

    Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data. The algorithm is applied to bacterial data sets containing up to nearly 2000 genomes with several thousand variable nucleotide sites. Run times are several seconds or less. Computational experiments show that maximum compatibility is less sensitive than maximum parsimony to the inclusion of nucleotide data that, though derived from actual sequence reads, has been identified as likely to be misleading. Maximum compatibility is a useful tool for certain phylogenetic problems, such as inferring the relationships among closely-related bacteria from whole-genome sequence data. The algorithm presented here rapidly solves fairly large problems of this type, and provides robustness against misleading characters that can pollute large-scale sequencing data.
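
    The link to the maximum clique problem mentioned above can be made concrete with a small sketch (my own illustration, not the published algorithm, which additionally handles ambiguous states): pairwise compatibility of binary characters is decided with the four-gamete test, and for binary characters a largest mutually compatible set of sites corresponds to a maximum clique in the resulting graph.

    ```python
    # Toy reduction of maximum compatibility to maximum clique (binary characters
    # only; real data with ambiguous states needs the more careful handling the
    # paper describes).
    from itertools import combinations
    import networkx as nx

    def compatible(col_a, col_b):
        """Two binary characters are compatible iff they do not exhibit all four
        state combinations across the taxa (the 'four-gamete' test)."""
        return len(set(zip(col_a, col_b))) < 4

    def max_compatible_characters(columns):
        """columns: list of equal-length tuples of 0/1 states, one per site.
        Returns indices of a largest mutually compatible set of characters."""
        g = nx.Graph()
        g.add_nodes_from(range(len(columns)))
        for i, j in combinations(range(len(columns)), 2):
            if compatible(columns[i], columns[j]):
                g.add_edge(i, j)
        clique, _ = nx.max_weight_clique(g, weight=None)  # each node weighs 1
        return sorted(clique)
    ```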

  3. The Extent of Practicing the Skills of Team Work Leadership among Heads of Departments in Directorate of Education in Methnb, Saudi Arabia: A Field Study

    Science.gov (United States)

    Alotaibi, Norah Muhayya; Tayeb, Aziza

    2016-01-01

    Sound leadership has an important role in achieving the success of any institution; so the leader must possess some work team leadership skills such as decision-taking, communication, motivation, conflict management and meeting management. The current study is an attempt to identify the extent of practicing team work leadership skills among the…

  4. A practical method for estimating maximum shear modulus of cemented sands using unconfined compressive strength

    Science.gov (United States)

    Choo, Hyunwook; Nam, Hongyeop; Lee, Woojin

    2017-12-01

    The composition of naturally cemented deposits is very complicated; thus, estimating the maximum shear modulus (Gmax, or shear modulus at very small strains) of cemented sands using the previous empirical formulas is very difficult. The purpose of this experimental investigation is to evaluate the effects of particle size and cement type on the Gmax and unconfined compressive strength (qucs) of cemented sands, with the ultimate goal of estimating Gmax of cemented sands using qucs. Two sands were artificially cemented using Portland cement or gypsum under varying cement contents (2%-9%) and relative densities (30%-80%). Unconfined compression tests and bender element tests were performed, and the results from previous studies of two cemented sands were incorporated in this study. The results of this study demonstrate that the effect of particle size on the qucs and Gmax of four cemented sands is insignificant, and the variation of qucs and Gmax can be captured by the ratio between volume of void and volume of cement. qucs and Gmax of sand cemented with Portland cement are greater than those of sand cemented with gypsum. However, the relationship between qucs and Gmax of the cemented sand is not affected by the void ratio, cement type and cement content, revealing that Gmax of the complex naturally cemented soils with unknown in-situ void ratio, cement type and cement content can be estimated using qucs.

  5. Examining the Extent to Which Select Teacher Preparation Experiences Inform Technology and Engineering Educators’ Teaching of Science Content and Practices

    OpenAIRE

    Love, Tyler Scott

    2015-01-01

    With the recent release of the Next Generation Science Standards (NGSS) (NGSS Lead States, 2014b) science educators were expected to teach engineering content and practices within their curricula. However, technology and engineering (T&E) educators have been expected to teach content and practices from engineering and other disciplines since the release of the Standards for Technological Literacy (ITEA/ITEEA, 2000/2002/2007). Requisite to the preparation of globally competitive...

  6. Cavern disposal concepts for HLW/SF: assuring operational practicality and safety with maximum programme flexibility

    International Nuclear Information System (INIS)

    McKinley, Ian G.; Apted, Mick; Umeki, Hiroyuki; Kawamura, Hideki

    2008-01-01

    Most conventional engineered barrier system (EBS) designs for HLW/SF repositories are based on concepts developed in the 1970s and 1980s that assured feasibility with high margins of safety, in order to convince national decision makers to proceed with geological disposal despite technological uncertainties. In the interval since the advent of such 'feasibility designs', significant progress has been made in reducing technological uncertainties, which has led to a growing awareness of other, equally important uncertainties in operational implementation and challenges regarding social acceptance in many new, emerging national repository programs. As indicated by the NUMO repository concept catalogue study (NUMO, 2004), there are advantages in reassessing how previous designs can be modified and optimised in the light of improved system understanding, allowing a robust EBS to be flexibly implemented to meet nation-specific and site-specific conditions. Full-scale emplacement demonstrations, particularly those carried out underground, have highlighted many of the practical issues to be addressed; e.g., handling of compacted bentonite in humid conditions, use of concrete for support infrastructure, remote handling of heavy radioactive packages in confined conditions, quality inspection, monitoring / ease of retrieval of emplaced packages and institutional control. The CAvern REtrievable (CARE) concept reduces or avoids such issues by emplacement of HLW or SF within multi-purpose transportation / storage / disposal casks in large ventilated caverns at a depth of several hundred metres. The facility allows the caverns to serve as inspectable stores for an extended period of time (up to a few hundred years) until a decision is made to close them. At this point the caverns are backfilled and sealed as a final repository, effectively with the same safety case components as conventional 'feasibility designs'. In terms of operational practicality and safety, the CARE

  7. Innovative Work Behavior: To What Extent and How Can HRM Practices Contribute to Higher Levels of Innovation Within SMEs?

    NARCIS (Netherlands)

    Bücker, J.J.L.E.; Horst, E. van der; Mura, L.

    2017-01-01

    In this chapter, the influence of HR practices, and more specifically of Ned Herrmann's development tool, the HBDI, on the development of innovative work behavior (IWB) is described. Innovative work behavior today is important for organizations to stay in a competitive position. Also for small and

  8. Eastside forest management practices: historical overview, extent of their application, and their effects on sustainability of ecosystems.

    Science.gov (United States)

    Chadwick D. Oliver; Larry L. Irwin; Walter H. Knapp

    1994-01-01

    Forest management of eastern Oregon and Washington began in the late 1800s as extensive utilization of forests for grazing, timber, and irrigation water. With time, protection of these values developed into active management for these and other values such as recreation. Silvicultural and administrative practices, developed to solve problems at a particular time have...

  9. ASSESSMENT OF PRACTICE AT RETAIL PHARMACIES IN PAKISTAN: EXTENT OF COMPLIANCE WITH THE PREVAILING DRUG LAW OF PAKISTAN.

    Science.gov (United States)

    Ullah, Hanif; Zada, Wahid; Khan, Muhammad Sona; Iqbal, Muhammad; Chohan, Osaam; Raza, Naeem; Khawaja, Naeem Raza; Abid, Syed Mobasher Ali; Murtazai, Ghulam

    2016-01-01

    The main objective of this study was to assess the practice at retail pharmacies in Pakistan and to compare the same in rural and urban areas. The maintenance of the pharmacy and drug inspectors' visits were also assessed. This cross-sectional study was conducted in Abbottabad, Pakistan during October-November, 2012. A sample of 215 drug sellers or drug stores was selected by employing a convenience sampling method. With a response rate of 91.6%, 197 drug sellers participated in this study. All the drug sellers were male. Overall, 35% (n = 197) of the drug sellers did not have any professional qualification. A majority of the drug sellers were involved in various malpractices like selling of medicines without prescription (80.7%), prescribing practice (60.9%), prescription intervention (62.4%) and selling of controlled substances (66%) without a license for selling them. These malpractices were significantly higher in rural areas than in urban areas.

  10. Survey of Extent of Translation of Oral Healthcare Guidelines for ICU Patients into Clinical Practice by Nursing Staff

    Directory of Open Access Journals (Sweden)

    Vivek Agarwal

    2017-01-01

    Nosocomial infections in critically ill/ventilated patients result from bacterial load in oropharyngeal regions. Oral decontamination serves as the easiest effective means of controlling infections. Knowledge, attitude, and practices followed by healthcare personnel in intensive care settings need to be assessed to implement concrete measures in health-care. A survey questionnaire was constructed and, following its validation, implemented on seventy nursing and paramedical staff working in government and private intensive care units throughout Lucknow city. The 21-item questionnaire consisted of three parts of seven questions each. 78% of respondents had knowledge regarding oral care and its importance in critical settings, but 44% of respondents considered it to be an unpleasant task. 36% of respondents claimed to have provided oral care to all patients in the ICU. Uniform guidelines for translation of oral healthcare in ICU settings are not being implemented. Previous studies in the literature from geographically diverse regions also point to similar lacunae. Based on the present survey, most respondents were aware of the importance of oral care, with protocols covered in the academic curriculum. Attitude towards oral care is positive, but respondents feel a need for specialised training. Practice of oral care is not sufficient and needs improvement and proper implementation.

  11. Australian employer usage of the practice of offering reduced working hours to workers close to retirement: Extent and determinants.

    Science.gov (United States)

    Taylor, Philip; Earl, Catherine; McLoughlin, Christopher

    2016-06-01

    This study aimed to determine factors associated with the implementation by employers of the practice of offering reduced working hours for workers nearing retirement. Data came from a survey of 2000 employers of more than 50 employees each (30% response rate). A minority (33%) of employers offered reduced working hours to older workers nearing retirement. Factors associated with offering reduced working hours were: expecting workforce ageing to cause a loss of staff to retirement; being a large employer; being a public/not-for-profit sector employer; not experiencing difficulties recruiting labourers; having a larger proportion of workers aged over 50; experiencing national competition for labour; not experiencing difficulties recruiting machinery operators/drivers; not expecting workforce ageing to increase workplace injuries; and experiencing difficulties with the quality of candidates. A minority of employers were found to offer reduced working hours to those nearing retirement. Factors associated with their propensity to do so included industry sector, size of employer, concerns about labour supply and the effects of workforce ageing. © 2016 AJA Inc.

  12. Educating Farmers' Market Consumers on Best Practices for Retaining Maximum Nutrient and Phytonutrient Levels in Local Produce

    Science.gov (United States)

    Ralston, Robin A.; Orr, Morgan; Goard, Linnette M.; Taylor, Christopher A.; Remley, Dan

    2016-01-01

    Few farmers' market consumers are aware of how to retain optimal nutritional quality of produce following purchase. Our objective was to develop and evaluate educational materials intended to inform market consumers about best practices for storing, preserving, and consuming local produce to maximize nutrients and phytonutrients. Printed…

  13. Practical aspects of a maximum likelihood estimation method to extract stability and control derivatives from flight data

    Science.gov (United States)

    Iliff, K. W.; Maine, R. E.

    1976-01-01

    A maximum likelihood estimation method was applied to flight data, and procedures to facilitate the routine analysis of a large amount of flight data are described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple-maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.

  14. Estimation of the players maximum heart rate in real game situations in team sports: a practical propose

    Directory of Open Access Journals (Sweden)

    Jorge Cuadrado Reyes

    2011-05-01

    This research developed an algorithm for calculating the maximum heart rate (max HR) of players in team sports in game situations. The sample was made up of thirteen players (aged 24 ± 3) from a Division Two handball team. HR was initially measured by the Course Navette test. Later, twenty-one training sessions were conducted in which HR and Rate of Perceived Exertion (RPE) were continuously monitored in each task. A linear regression analysis was done to help find a max HR prediction equation from the max HR of the three highest-intensity sessions. Results from this equation correlate significantly with data obtained in the Course Navette test and with those obtained by other indirect methods. The conclusion of this research is that this equation provides a very useful and easy way to measure the max HR in real game situations, avoiding non-specific analytical tests and, therefore, laboratory testing. Key words: workout control, functional evaluation, prediction equation.
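
    A minimal sketch of the kind of regression fit described above (illustrative numbers only, not the study's data): the max HR measured with the Course Navette test is regressed on the peak HR recorded in each player's three highest-intensity training sessions.

    ```python
    # Illustrative fit: regress test-derived max HR on the mean peak HR from each
    # player's three hardest sessions. All values below are made-up placeholders.
    import numpy as np

    # One entry per player: mean of the peak HRs over the 3 most intense sessions (bpm)
    session_peak_hr = np.array([186.0, 191.0, 178.0, 195.0, 183.0])
    # Max HR measured for the same players with the Course Navette test (bpm)
    navette_max_hr = np.array([189.0, 194.0, 182.0, 198.0, 186.0])

    # Least-squares line: predicted_max_hr = a * session_peak_hr + b
    a, b = np.polyfit(session_peak_hr, navette_max_hr, deg=1)
    print(f"predicted max HR = {a:.2f} * session peak HR + {b:.2f}")
    ```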

  15. ROC [Receiver Operating Characteristics] study of maximum likelihood estimator human brain image reconstructions in PET [Positron Emission Tomography] clinical practice

    International Nuclear Information System (INIS)

    Llacer, J.; Veklerov, E.; Nolan, D.; Grafton, S.T.; Mazziotta, J.C.; Hawkins, R.A.; Hoh, C.K.; Hoffman, E.J.

    1990-10-01

    This paper will report on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of 18F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and we propose a variation based on reading three consecutive slices at a time, rating only the center slice. 9 refs., 2 figs., 1 tab

  16. Reconciling nature conservation and traditional farming practices: a spatially explicit framework to assess the extent of High Nature Value farmlands in the European countryside

    NARCIS (Netherlands)

    Araujo Rodrigues Lomba, de Angela; Alves, Paulo; Jongman, R.H.G.; McCracken, D.

    2015-01-01

    Agriculture constitutes a dominant land cover worldwide, and rural landscapes under extensive farming practices are acknowledged for their high biodiversity levels. The High Nature Value farmland (HNVf) concept has been highlighted in the EU environmental and rural policies due to their

  17. To What Extent Do Improved Practices Increase Productivity of Small-Scale Rice Cultivation in A Rain-fed Area? : Evidence from Tanzania

    OpenAIRE

    Yuko Nakano; Yuki Tanaka; Keijiro Otsuka

    2014-01-01

    This paper investigates the impact of training provided by a large-scale private farm on the performance of surrounding small-scale rice farmers in a rain-fed area in Tanzania. We found that the training effectively enhances the adoption of improved rice cultivation practices, paddy yield, and profit of rice cultivation by small-holder farmers. In fact, the trainees achieve paddy yield of 5 tons per hectare on average, which is remarkably high for rain-fed rice cultivation. Our results sugges...

  18. Work site health promotion research: to what extent can we generalize the results and what is needed to translate research to practice?

    Science.gov (United States)

    Bull, Sheana Salyers; Gillette, Cynthia; Glasgow, Russell E; Estabrooks, Paul

    2003-10-01

    Information on external validity of work site health promotion research is essential to translate research findings to practice. The authors provide a literature review of work site health behavior interventions. Using the RE-AIM framework, they summarize characteristics and results of these studies to document reporting of intervention reach, adoption, implementation, and maintenance. The authors reviewed a total of 24 publications from 11 leading health behavior journals. They found that participation rates among eligible employees were reported in 87.5% of studies; only 25% of studies reported on intervention adoption. Data on characteristics of participants versus nonparticipants were reported in fewer than 10% of studies. Implementation data were reported in 12.5% of the studies. Only 8% of studies reported any type of maintenance data. Stronger emphasis is needed on representativeness of employees, work site settings studied, and longer term results. Examples of how this can be done are provided.

  19. To What Extent do Clinical Practice Guidelines Respond to the Needs and Preferences of Patients Diagnosed with Obsessive-Compulsive Disorder?

    Science.gov (United States)

    Villena-Jimena, Amelia; Gómez-Ocaña, Clara; Amor-Mercado, Gisela; Núñez-Vega, Amanda; Morales-Asencio, José Miguel; Hurtado, María Magdalena

    The number of Clinical Practice Guidelines (CPG) to help in making clinical decisions is increasing. However, there is currently a lack of CPGs for Obsessive-Compulsive Disorder (OCD) that take into account the requirements and expectations of patients. The aim of the present study was to determine whether the recommendations of the NICE guideline, "Obsessive-compulsive disorder: core interventions in the treatment of obsessive-compulsive disorder and body dysmorphic disorder", agree with the needs and preferences of patients diagnosed with OCD in the mental health service. Two focus groups were formed with a total of 12 participants. They were asked about the impact of the disorder on their lives, their experiences with the mental health services, their satisfaction with treatments, and their psychological resources. Preferences and needs were compared with the recommendations of the guideline and, to facilitate their analysis, were classified into four topics: information, accessibility, treatments, and therapeutic relationship. The results showed high agreement between recommendations and patients' preferences, particularly as regards high-intensity psychological interventions. Some discrepancies included the lack of prior low-intensity psychological interventions in the mental health service, and the difficulty of rapid access to professionals. There is significant concordance between recommendations and patients' preferences and demands, which are only partially responded to by the health services. Copyright © 2017 Asociación Colombiana de Psiquiatría. Published by Elsevier España. All rights reserved.

  20. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.

  1. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a highly efficient power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages at larger temperature variations and larger power rated systems are much higher. Other advantages include optimal sizing and system monitoring and control.
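
    A minimal perturb-and-observe (hill-climbing) sketch of the current-maximisation loop the abstract describes for a battery charging regulator; the class and its interface are illustrative assumptions standing in for the microprocessor firmware, not the paper's implementation.

    ```python
    # Perturb-and-observe MPPT sketch: for a fixed battery bus voltage, maximising
    # the charging current maximises power, so the duty cycle is nudged in one
    # direction and the direction is reversed whenever the current drops.
    class HillClimbMPPT:
        def __init__(self, step=0.01, duty=0.5):
            self.step, self.duty = step, duty
            self.direction = +1
            self.last_current = 0.0

        def update(self, battery_current):
            """Call once per control cycle with the measured charging current;
            returns the new converter duty cycle to apply."""
            if battery_current < self.last_current:  # last step made things worse
                self.direction = -self.direction
            self.last_current = battery_current
            self.duty = min(max(self.duty + self.direction * self.step, 0.0), 1.0)
            return self.duty
    ```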

  2. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  3. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  4. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  5. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  6. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, and costs less, avoiding the recording and analysis of data in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  7. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  8. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  9. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene always generate the "true tree" under all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species in comparison.

  10. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  11. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections α'X and β'Y can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
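
    Written out (a hedged reconstruction of the definition summarised above, with R the bivariate association measure and α, β the projection directions):

    $$\rho_R(X, Y) \;=\; \max_{\alpha,\,\beta}\; R\!\left(\alpha^{\top} X,\; \beta^{\top} Y\right)$$

    With R taken to be the Pearson correlation, this maximum is the first canonical correlation between X and Y.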

  12. Best Practice Life Expectancy

    DEFF Research Database (Denmark)

    Medford, Anthony

    2017-01-01

    Background: Whereas the rise in human life expectancy has been extensively studied, the evolution of maximum life expectancies, i.e., the rise in best-practice life expectancy in a group of populations, has not been examined to the same extent. The linear rise in best-practice life expectancy has been reported previously by various authors. Though remarkable, this is simply an empirical observation. Objective: We examine best-practice life expectancy more formally by using extreme value theory. Methods: Extreme value distributions are fit to the time series (1900 to 2012) of maximum life expectancies at birth and age 65, for both sexes, using data from the Human Mortality Database and the United Nations. Conclusions: Generalized extreme value distributions offer a theoretically justified way to model best-practice life expectancies. Using this framework one can straightforwardly obtain...

  13. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  14. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow-up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
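
    The Mean Energy Model mentioned above fixes the expected value of an "energy" function; the entropy-maximizing distribution then has the Gibbs form p_i proportional to exp(-beta * E_i), with beta chosen so that the moment constraint holds. The sketch below is not from the paper; the energy values and the target mean are invented for illustration, and beta is found numerically.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical "energy" values on a finite alphabet and a target mean energy
# (both invented for illustration).
E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
target_mean = 1.2

def mean_energy(beta):
    """Mean energy under the Gibbs distribution p_i ~ exp(-beta * E_i)."""
    w = np.exp(-beta * (E - E.min()))      # shift the exponent for stability
    p = w / w.sum()
    return float(np.dot(p, E))

# Choose beta so the moment constraint <E> = target_mean is satisfied.
beta = brentq(lambda b: mean_energy(b) - target_mean, -10.0, 10.0)
p = np.exp(-beta * (E - E.min()))
p /= p.sum()

print("beta =", round(beta, 4))
print("maximum-entropy distribution:", p.round(4))
print("entropy =", round(float(-np.sum(p * np.log(p))), 4))
```

    Among all distributions with the same mean energy, this Gibbs distribution is the one with maximal entropy.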

  15. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility

  16. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  17. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of Ramsay (1997) to functional maximum autocorrelation factors (MAF; Switzer 1985, Larsen 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low dimensional models of temporally or spatially...
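
    As a concrete, finite-dimensional illustration of the MAF idea described above (omitting the smoothing-spline basis of the paper): the linear combination w'x with maximal lag-one autocorrelation solves a generalized eigenproblem between the covariance of the differenced data and the covariance of the data itself. The synthetic data below are invented for illustration.

```python
import numpy as np
from scipy.linalg import eigh

# Synthetic multivariate series: a smooth common signal plus independent noise.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0 * np.pi, 500)
X = np.column_stack([np.sin(t) + 0.3 * rng.normal(size=t.size) for _ in range(6)])
X -= X.mean(axis=0)

S = np.cov(X, rowvar=False)                     # covariance of the data
Sd = np.cov(np.diff(X, axis=0), rowvar=False)   # covariance of lag-1 differences

# Generalized eigenproblem Sd w = mu S w; a small mu means a high lag-1
# autocorrelation, since autocorr(w'x) = 1 - mu / 2.
mu, W = eigh(Sd, S)
maf1 = X @ W[:, 0]                              # first maximum autocorrelation factor

lag1 = np.corrcoef(maf1[:-1], maf1[1:])[0, 1]
print("smallest mu:", mu[0], " lag-1 autocorrelation of MAF1:", lag1)
```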

  18. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  19. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  20. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  1. Veterinary drug prescriptions: to what extent do pet owners comply ...

    African Journals Online (AJOL)

    Separate questionnaires were designed for pet owners (clients) and veterinarians to ascertain the existence and extent of noncompliance in veterinary practice in lbadan and to elucidate the influence of such factors as logistics, education, economy, attitudes and veterinarian/client relationship on non-compliance. Analyses ...

  2. Extents of sharp practices in credit allocation and utilization among ...

    African Journals Online (AJOL)

    One of the strategies employed in the implementation of Agricultural Transformation Agenda (ATA) is to harness the roles of major stakeholders along the nodes of agricultural value chain. Pivotal among these are the financial institutions, one of which is the Bank of Agriculture (BOA). However, financial institutions are not ...

  3. Maximum Power Training and Plyometrics for Cross-Country Running.

    Science.gov (United States)

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  4. 27 CFR 4.2 - Territorial extent.

    Science.gov (United States)

    2010-04-01

    27 CFR § 4.2 (Title 27, Alcohol, Tobacco Products and Firearms; Alcohol and Tobacco Tax and Trade Bureau, Department of the Treasury; Liquors; Labeling and Advertising of Wine; Scope) - Territorial extent. This part...

  5. Best Practice Life Expectancy:An Extreme value Approach

    OpenAIRE

    Medford, Anthony

    2017-01-01

    Background: Whereas the rise in human life expectancy has been extensively studied, the evolution of maximum life expectancies, i.e., the rise in best-practice life expectancy in a group of populations, has not been examined to the same extent. The linear rise in best-practice life expectancy has been reported previously by various authors. Though remarkable, this is simply an empirical observation. Objective: We examine best-practice life expectancy more formally by using extreme value th...

  6. Less ice on the Baltic reduces the extent of hypoxic bottom waters and sedimentary phosphorus release

    NARCIS (Netherlands)

    Vermaat, J.E.; Bouwer, L.M.

    2009-01-01

    A significant relation was established between the maximum extent of sea ice covering the Baltic Sea and the hypoxic area in the deeper parts of the Baltic Proper, with a lag of 2 years: for the period 1970-2000, less ice was correlated with a smaller anoxic area. At the same time, maximum ice

  7. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  8. Maximum entropy principle for transportation

    International Nuclear Information System (INIS)

    Bilich, F.; Da Silva, R.

    2008-01-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
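
    The dependence-coefficient formulation described in the abstract is not spelled out here, but the standard entropy-maximizing (doubly constrained) trip-distribution model that such work builds on can be sketched directly: T_ij proportional to exp(-beta * c_ij), scaled by balancing factors so that row and column totals match the observed origin and destination totals. All numbers below are invented for illustration; this is not the authors' dependence-based model.

```python
import numpy as np

# Invented origin totals, destination totals and generalized travel costs.
O = np.array([100.0, 200.0, 150.0])    # trips produced at each origin
D = np.array([180.0, 120.0, 150.0])    # trips attracted to each destination
c = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 2.0],
              [3.0, 2.0, 1.0]])
beta = 0.8                             # cost-sensitivity parameter

# Entropy-maximizing, doubly constrained trip matrix via iterative
# proportional fitting of the row/column balancing factors.
T = np.exp(-beta * c)
for _ in range(200):
    T *= (O / T.sum(axis=1))[:, None]  # enforce origin totals
    T *= (D / T.sum(axis=0))[None, :]  # enforce destination totals

print(np.round(T, 1))
print("row sums:", np.round(T.sum(axis=1), 1), "col sums:", np.round(T.sum(axis=0), 1))
```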

  9. Real-time flood extent maps based on social media

    Science.gov (United States)

    Eilander, Dirk; van Loenen, Arnejan; Roskam, Ruud; Wagemaker, Jurjen

    2015-04-01

    During a flood event it is often difficult to get accurate information about the flood extent and the people affected. This information is very important for disaster risk reduction management and crisis relief organizations. In the post-flood phase, information about the flood extent is needed for damage estimation and calibrating hydrodynamic models. Currently, flood extent maps are derived from a few sources such as satellite images, aerial images and post-flooding flood marks. However, getting accurate real-time or maximum flood extent maps remains difficult. With the rise of social media, we now have a new source of information with large numbers of observations. In the city of Jakarta, Indonesia, the intensity of unique flood-related tweets during a flood event peaked at 8 tweets per second during floods in early 2014. A fair amount of these tweets also contain observations of water depth and location. Our hypothesis is that, based on the large numbers of tweets, it is possible to generate real-time flood extent maps. In this study we use tweets from the city of Jakarta, Indonesia, to generate these flood extent maps. The data-mining procedure looks for tweets with a mention of 'banjir', the Bahasa Indonesia word for flood. It then removes modified and retweeted messages in order to keep unique tweets only. Since tweets are not always sent directly from the location of observation, the geotag in the tweets is unreliable. We therefore extract location information using mentions of names of neighborhoods and points of interest. Finally, where encountered, a mention of a length measure is extracted as water depth. These tweets containing a location reference and a water level are considered to be flood observations. The strength of this method is that it can easily be extended to other regions and languages. Based on the intensity of tweets in Jakarta during a flood event we can provide a rough estimate of the flood extent. To provide more accurate flood extent
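
    A minimal sketch of the filtering and extraction step described above (keyword 'banjir', retweets removed, water depth and place name pulled from the text). The tweet structure, the gazetteer of place names and the regular expression are assumptions made for illustration, not the authors' code.

```python
import re

# Toy tweets; in practice these would come from a Twitter API stream.
tweets = [
    {"id": 1, "text": "Banjir 50 cm di Kemang, hati-hati", "retweet": False},
    {"id": 2, "text": "RT @user: Banjir 50 cm di Kemang", "retweet": True},
    {"id": 3, "text": "Macet parah di Sudirman", "retweet": False},
    {"id": 4, "text": "banjir sekitar 1 m dekat Monas", "retweet": False},
]

DEPTH_RE = re.compile(r"(\d+(?:[.,]\d+)?)\s*(cm|m)\b", re.IGNORECASE)
PLACES = {"kemang", "monas", "sudirman"}   # assumed gazetteer of neighbourhood names

observations = []
for tw in tweets:
    text = tw["text"].lower()
    if "banjir" not in text or tw["retweet"]:
        continue                            # keep unique flood-related tweets only
    place = next((p for p in PLACES if p in text), None)
    match = DEPTH_RE.search(text)
    if place and match:
        value = float(match.group(1).replace(",", "."))
        depth_m = value / 100.0 if match.group(2).lower() == "cm" else value
        observations.append({"place": place, "depth_m": depth_m, "tweet_id": tw["id"]})

print(observations)
# [{'place': 'kemang', 'depth_m': 0.5, 'tweet_id': 1},
#  {'place': 'monas', 'depth_m': 1.0, 'tweet_id': 4}]
```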

  10. Extent of reaction in open systems with multiple heterogeneous reactions

    Science.gov (United States)

    Friedly, John C.

    1991-01-01

    The familiar batch concept of extent of reaction is reexamined for systems of reactions occurring in open systems. Because species concentrations change as a result of transport processes as well as reactions in open systems, the extent of reaction has been less useful in practice in these applications. It is shown that by defining the extent of the equivalent batch reaction and a second contribution to the extent of reaction due to the transport processes, it is possible to treat the description of the dynamics of flow through porous media accompanied by many chemical reactions in a uniform, concise manner. This approach tends to isolate the reaction terms among themselves and away from the model partial differential equations, thereby enabling treatment of large problems involving both equilibrium and kinetically controlled reactions. Implications on the number of coupled partial differential equations necessary to be solved and on numerical algorithms for solving such problems are discussed. Examples provided illustrate the theory applied to solute transport in groundwater flow.

  11. Updated Vertical Extent of Collision Damage

    DEFF Research Database (Denmark)

    Tagg, R.; Bartzis, P.; Papanikolaou, P.

    2002-01-01

    The probabilistic distribution of the vertical extent of collision damage is an important and somewhat controversial component of the proposed IMO harmonized damage stability regulations for cargo and passenger ships. The only pre-existing vertical distribution, currently used in the international...

  12. The Geographic Extent of Global Supply Chains

    DEFF Research Database (Denmark)

    Machikita, Tomohiro; Ueki, Yasushi

    2012-01-01

    We study the extent to which inter-firm relationships are locally concentrated and what determines firm differences in geographic proximity to domestic or foreign suppliers and customers. From micro-data on selfreported customer and supplier data of firms in Indonesia, the Philippines, Thailand, ...

  13. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  14. Is Eurasian October snow cover extent increasing?

    International Nuclear Information System (INIS)

    Brown, R D; Derksen, C

    2013-01-01

    A number of recent studies present evidence of an increasing trend in Eurasian snow cover extent (SCE) in the October snow onset period based on analysis of the National Oceanic and Atmospheric Administration (NOAA) historical satellite record. These increases are inconsistent with fall season surface temperature warming trends across the region. Using four independent snow cover data sources (surface observations, two reanalyses, satellite passive microwave retrievals) we show that the increasing SCE is attributable to an internal trend in the NOAA CDR dataset to chart relatively more October snow cover extent over the dataset overlap period (1982–2005). Adjusting the series for this shift results in closer agreement with other independent datasets, stronger correlation with continentally-averaged air temperature anomalies, and a decrease in SCE over 1982–2011 consistent with surface air temperature warming trends over the same period. (letter)

  15. The extent of forest in dryland biomes

    Science.gov (United States)

    Jean-Francois Bastin; Nora Berrahmouni; Alan Grainger; Danae Maniatis; Danilo Mollicone; Rebecca Moore; Chiara Patriarca; Nicolas Picard; Ben Sparrow; Elena Maria Abraham; Kamel Aloui; Ayhan Atesoglu; Fabio Attore; Caglar Bassullu; Adia Bey; Monica Garzuglia; Luis G. GarcÌa-Montero; Nikee Groot; Greg Guerin; Lars Laestadius; Andrew J. Lowe; Bako Mamane; Giulio Marchi; Paul Patterson; Marcelo Rezende; Stefano Ricci; Ignacio Salcedo; Alfonso Sanchez-Paus Diaz; Fred Stolle; Venera Surappaeva; Rene Castro

    2017-01-01

    Dryland biomes cover two-fifths of Earth’s land surface, but their forest area is poorly known. Here, we report an estimate of global forest extent in dryland biomes, based on analyzing more than 210,000 0.5-hectare sample plots through a photo-interpretation approach using large databases of satellite imagery at (i) very high spatial resolution and (ii) very high...

  16. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

    Journal of Physics (Indian Academy of Sciences), Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F W Giacobbe, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.

  17. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

  18. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore

  19. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. End of the measurement is shown by a lamp switch out. The measurement time is shortened by means of a low thermal inertia platinum probe. This portable thermometer is fitted with cell test and calibration system [fr

  20. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system which appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  1. Research Misconduct—Definitions, Manifestations and Extent

    Directory of Open Access Journals (Sweden)

    Lutz Bornmann

    2013-10-01

    Full Text Available In recent years, the international scientific community has been rocked by a number of serious cases of research misconduct. In one of these, Woo Suk Hwang, a Korean stem cell researcher published two articles on research with ground-breaking results in Science in 2004 and 2005. Both articles were later revealed to be fakes. This paper provides an overview of what research misconduct is generally understood to be, its manifestations and the extent to which they are thought to exist.

  2. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  3. Maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. A number of papers, and practical examples of reactors with a central reflector, deal with spatial distributions of fuel elements that would result in a higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements; the weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux thus becomes a variational problem that is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself [sr

  4. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.

  5. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determining the maximum water hammer is considered one of the most important technical and economic issues that engineers and designers of pumping stations and conveyance pipelines must address. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...
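
    The study itself relies on the Hammer software, but a useful back-of-envelope reference for the maximum surge is the classical Joukowsky relation, delta_p = rho * a * delta_v, for an instantaneous velocity change. The values below are illustrative only, not taken from the study.

```python
# Joukowsky estimate of the maximum water-hammer pressure rise for an
# instantaneous valve closure (illustrative values, not from the study).
rho = 1000.0   # water density, kg/m^3
a = 1000.0     # pressure-wave speed (celerity) in the pipe, m/s
dv = 2.0       # velocity change at the valve, m/s

dp = rho * a * dv                  # pressure rise, Pa
head_rise = dp / (rho * 9.81)      # equivalent head rise, m of water

print(f"pressure rise: {dp / 1e5:.1f} bar, head rise: {head_rise:.0f} m")
# pressure rise: 20.0 bar, head rise: 204 m
```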

  6. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  7. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based......The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...

  8. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
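
    A back-of-envelope version of the argument: if essentially all of a 1000 W m⁻² absorbed flux has to be re-emitted as longwave radiation (sensible and ground heat fluxes small for a dry, poorly conducting soil), the Stefan-Boltzmann law alone places the surface near the quoted 90°-100°C range. The emissivity value below is an assumption.

```python
# Radiative-limit estimate of an extreme land surface temperature:
# absorbed flux balanced almost entirely by longwave emission.
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
absorbed = 1000.0   # absorbed shortwave flux, W m^-2 (upper value from the abstract)
emissivity = 0.95   # assumed soil emissivity

T_s = (absorbed / (emissivity * SIGMA)) ** 0.25
print(f"surface temperature ~ {T_s:.0f} K = {T_s - 273.15:.0f} deg C")
# surface temperature ~ 369 K = 96 deg C
```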

  9. Direct maximum parsimony phylogeny reconstruction from genotype data

    OpenAIRE

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-01-01

    Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of ge...

  10. System for memorizing maximum values

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of n driver output lines. The particular output line selected depends on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line and includes a microfuse that is blown when a signal appears on the associated driver output line.

  11. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, LP=c5/G . Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 LP . We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
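
    For reference, the Planck value quoted above, LP = c⁵/G, evaluates to roughly 3.6 x 10⁵² W, so the critical-collapse luminosities of about 0.2 LP are of order 7 x 10⁵¹ W. A quick check with standard values of c and G:

```python
c = 2.998e8     # speed of light, m/s
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

L_P = c**5 / G
print(f"Planck luminosity L_P = c^5/G ~ {L_P:.2e} W")
# ~ 3.63e+52 W, so the critical-collapse value of ~0.2 L_P is ~7e51 W
```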

  12. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  13. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  14. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  15. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  16. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  17. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

    This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  18. Last Glacial Maximum Salinity Reconstruction

    Science.gov (United States)

    Homola, K.; Spivack, A. J.

    2016-12-01

    It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3x10-6 g/mL, which translates to salinity uncertainty of 0.002 gms/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl-, and SO4-2) and cations (Na+, Mg+, Ca+2, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4-/Cl- and Mg+/Na+, and 0.4% for Ca+/Na+, and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3-, and CO3-2. Apparent partial molar densities in seawater were

  19. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
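
    For readers unfamiliar with the tree algorithms being extended, the sketch below is a minimal version of the Fitch small-parsimony pass on a fixed, rooted binary tree for a single character (unit substitution costs, no reticulations); it is not the network algorithm of the paper, and the tree and leaf states are made up.

```python
# Minimal Fitch (small-parsimony) count for one character on a fixed, rooted
# binary tree; the tree topology and leaf states are invented examples.
def fitch(tree, states):
    """tree: nested 2-tuples with leaf-name strings; states: leaf name -> state.
    Returns (candidate state set at this node, minimum substitution count)."""
    if isinstance(tree, str):                  # leaf node
        return {states[tree]}, 0
    left, right = tree
    set_l, cost_l = fitch(left, states)
    set_r, cost_r = fitch(right, states)
    common = set_l & set_r
    if common:                                 # children agree: no extra change
        return common, cost_l + cost_r
    return set_l | set_r, cost_l + cost_r + 1  # children disagree: one change

tree = (("A", "B"), ("C", ("D", "E")))
states = {"A": "G", "B": "G", "C": "T", "D": "T", "E": "G"}
root_set, score = fitch(tree, states)
print(root_set, score)   # {'G', 'T'} 2 (set order may vary): at least 2 changes
```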

  20. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Hence phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower-bound on the number of mutations that the genetic region has undergone.

  1. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)] exp[-((1/lambda)+(1/theta))t] with t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
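
    A minimal sketch of the estimator described above: under the exponential models, the maximum likelihood estimates of the means lambda and theta are the sample means, and they are plugged into the quoted expression for A(t). The failure and repair times below are invented for illustration.

```python
import numpy as np

# Invented failure and repair durations from n = 5 observed cycles (hours).
times_to_failure = np.array([120.0, 200.0, 95.0, 310.0, 150.0])
times_to_repair = np.array([4.0, 10.0, 6.0, 8.0, 12.0])

# For exponential models parameterized by their means, the MLEs are the sample means.
lam = times_to_failure.mean()    # estimated mean time to failure (lambda)
theta = times_to_repair.mean()   # estimated mean time to repair (theta)

def availability(t):
    """Instantaneous availability A(t) from the expression quoted above."""
    rate = 1.0 / lam + 1.0 / theta
    return lam / (lam + theta) + theta / (lam + theta) * np.exp(-rate * t)

print("steady-state availability:", lam / (lam + theta))
print("A(1 h) =", availability(1.0), " A(24 h) =", availability(24.0))
```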

  2. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy ... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
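
    A common concrete form of this maximum entropy procedure reweights the simulation frames as little as possible so that a back-calculated observable matches its experimental average; the weights then take the exponential form w_i proportional to exp(-lambda * f_i), with the single multiplier lambda fixed by the constraint. The observable values and the target below are invented for illustration, and this is only one of several variants discussed in that literature.

```python
import numpy as np
from scipy.optimize import brentq

# Back-calculated observable for each simulation frame (toy numbers) and the
# experimental average it should reproduce after reweighting.
f = np.random.default_rng(0).normal(loc=3.0, scale=1.0, size=1000)
f_exp = 2.5

def weights(lam):
    w = np.exp(-lam * (f - f.mean()))   # shift the exponent for stability
    return w / w.sum()

def constraint(lam):
    return float(np.dot(weights(lam), f)) - f_exp

# One observable, one Lagrange multiplier: solve <f>_w = f_exp for lambda.
lam = brentq(constraint, -50.0, 50.0)
w = weights(lam)

print("lambda =", lam)
print("reweighted mean =", float(np.dot(w, f)))
print("effective sample size =", 1.0 / float(np.sum(w ** 2)))
```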

  3. Multisensor Analyzed Sea Ice Extent - Northern Hemisphere (MASIE-NH)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Multisensor Analyzed Sea Ice Extent Northern Hemisphere (MASIE-NH) products provide measurements of daily sea ice extent and sea ice edge boundary for the...

  4. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  5. 25 CFR 141.36 - Maximum finance charges on pawn transactions.

    Science.gov (United States)

    2010-04-01

    25 CFR § 141.36 (Title 25, Indians; ...Practices on the Navajo, Hopi and Zuni Reservations; Pawnbroker Practices) - Maximum finance charges on pawn transactions. No pawnbroker may impose an annual finance charge greater than twenty-four percent...

  6. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
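
    A compact stand-in for the Toeplitz/Levinson step described above: the maximum-entropy (autoregressive) prediction-error filter follows from the Yule-Walker equations, which scipy solves with a Levinson-type recursion. The synthetic AR(2) series below stands in for the seismogram; this illustrates only the filter estimation, not the full receiver-function workflow.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Synthetic stationary AR(2) series standing in for the seismogram.
rng = np.random.default_rng(1)
x = np.zeros(2048)
for n in range(2, x.size):
    x[n] = 1.3 * x[n - 1] - 0.7 * x[n - 2] + rng.normal()

order = 2
# Biased autocorrelation estimates r[0..order].
r = np.array([np.dot(x[: x.size - k], x[k:]) / x.size for k in range(order + 1)])

# Yule-Walker (Toeplitz) equations R a = [r1, ..., rp]; solve_toeplitz uses a
# Levinson-type recursion internally.
a = solve_toeplitz(r[:-1], r[1:])
pef = np.concatenate(([1.0], -a))   # prediction-error filter [1, -a1, -a2]

print("estimated AR coefficients:", a)   # should be close to [1.3, -0.7]
print("prediction-error filter:", pef)
```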

  7. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
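
    The article finds the voltage of maximum power by differentiating P(V) = V I(V); the same point can be located numerically for an assumed single-diode I-V characteristic. All parameter values below are illustrative, not taken from the article.

```python
import numpy as np

# Illustrative single-diode model of a panel's I-V curve.
I_L = 5.0     # light-generated current, A
I_0 = 1e-9    # diode saturation current, A
n_VT = 1.6    # effective ideality factor times thermal voltage for the string, V

def current(V):
    return I_L - I_0 * (np.exp(V / n_VT) - 1.0)

V = np.linspace(0.0, 36.0, 100_000)
P = V * current(V)
k = np.argmax(P)

print(f"V_mp = {V[k]:.2f} V, I_mp = {current(V[k]):.2f} A, P_max = {P[k]:.1f} W")
```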

  8. To What Extent Can Motor Imagery Replace Motor Execution While Learning a Fine Motor Skill?

    NARCIS (Netherlands)

    Sobierajewicz, Jagna; Szarkiewicz, Sylwia; Prekoracka-Krawczyk, Anna; Jaskowski, Wojciech; van der Lubbe, Robert Henricus Johannes

    2016-01-01

    Motor imagery is generally thought to share common mechanisms with motor execution. In the present study, we examined to what extent learning a fine motor skill by motor imagery may substitute physical practice. Learning effects were assessed by manipulating the proportion of motor execution and

  9. a research tool for analysing and monitoring the Extent to which ...

    African Journals Online (AJOL)

    practice at the majority of under-resourced rural schools in the country. ... into the extent to which they integrate natural resource management issues. ... education is promoted as the best educational strategy to deal with the ... environmental sustainability and human wellbeing. ..... Comparative Education, 38(2), 171–187.

  10. Distracted walking: Examining the extent to pedestrian safety problems

    Directory of Open Access Journals (Sweden)

    Judith Mwakalonge

    2015-10-01

Full Text Available Pedestrians, much like drivers, have always engaged in multi-tasking, such as using hand-held devices, listening to music, snacking, or reading while walking. The effects are similar to those experienced by distracted drivers. However, distracted walking has not received the same policy attention and effective interventions as distracted driving has for improving pedestrian safety. This study reviewed the state of practice on policies, campaigns, available data, identified research needs, and opportunities pertaining to distracted walking. A comprehensive review of the literature revealed that some agencies/organizations disseminate useful information about certain distracting activities that pedestrians should avoid while walking to improve their safety. Various walking safety rules/tips have been given, such as not wearing headphones or talking on a cell phone while crossing a street, keeping the volume down, hanging up the phone while walking, being aware of traffic, and avoiding distractions such as texting while walking. The majority of the observational and experimental studies on distracted walking reviewed here agree that there is a positive correlation between distraction and unsafe walking behavior. However, limitations of the existing crash data suggest that distracted walking may not be a severe threat to public health. Current pedestrian crash data provide insufficient information for researchers to examine the extent to which distracted walking causes and/or contributes to actual pedestrian safety problems.

  11. The nature and extent of college student hazing.

    Science.gov (United States)

    Allan, Elizabeth J; Madden, Mary

    2012-01-01

    This study explored the nature and extent of college student hazing in the USA. Hazing, a form of interpersonal violence, can jeopardize the health and safety of students. Using a web-based survey, data were collected from 11,482 undergraduate students, aged 18-25 years, who attended one of 53 colleges and universities. Additionally, researchers interviewed 300 students and staff at 18 of the campuses. Results reveal hazing among USA college students is widespread and involves a range of student organizations and athletic teams. Alcohol consumption, humiliation, isolation, sleep-deprivation and sex acts are hazing practices common across student groups. Furthermore, there is a large gap between the number of students who report experience with hazing behaviors and those that label their experience as hazing. To date, hazing prevention efforts in post-secondary education have focused largely on students in fraternities/sororities and intercollegiate athletes. Findings from this study can inform development of more comprehensive and research-based hazing prevention efforts that target a wider range of student groups. Further, data can serve as a baseline from which to measure changes in college student hazing over time.

  12. MEGA5: Molecular Evolutionary Genetics Analysis Using Maximum Likelihood, Evolutionary Distance, and Maximum Parsimony Methods

    Science.gov (United States)

    Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir

    2011-01-01

Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven, making it easier to use for both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353

  13. Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules

    DEFF Research Database (Denmark)

    Gao, Junling; Chen, Min

    2013-01-01

Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated from the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations as the temperature changes. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds that the main cause is the influence of various currents on the produced electromotive potential. A simple and effective calibration method is proposed to minimize the deviations in specifying the maximum power. Experimental results validate the method with improved estimation accuracy.
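
For a thermoelectric module with an approximately linear current-voltage characteristic, the open-circuit/short-circuit estimate referred to above reduces to P_max ≈ V_oc·I_sc/4. The snippet below is a minimal sketch of that estimate with a hypothetical correction factor of the kind a calibration could supply; the numbers are illustrative, not measurements from the cited modules.

    def teg_max_power(v_oc, i_sc, correction=1.0):
        """Linear-model estimate P_max = V_oc * I_sc / 4, optionally scaled by a
        calibration factor to absorb switch-mode deviations (about 10% in the paper)."""
        return 0.25 * v_oc * i_sc * correction

    # Hypothetical open-circuit voltage (V) and short-circuit current (A)
    print(teg_max_power(4.2, 1.1))                    # uncalibrated estimate
    print(teg_max_power(4.2, 1.1, correction=0.93))   # with an assumed calibration factor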

  14. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage, resistance and temperature is examined. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC used in the design of an SFCL can be determined.

  15. Best-practice life expectancy: An extreme value approach

    Directory of Open Access Journals (Sweden)

    Anthony Medford

    2017-03-01

    Full Text Available Background: Whereas the rise in human life expectancy has been extensively studied, the evolution of maximum life expectancies, i.e., the rise in best-practice life expectancy in a group of populations, has not been examined to the same extent. The linear rise in best-practice life expectancy has been reported previously by various authors. Though remarkable, this is simply an empirical observation. Objective: We examine best-practice life expectancy more formally by using extreme value theory. Methods: Extreme value distributions are fit to the time series (1900 to 2012 of maximum life expectancies at birth and age 65, for both sexes, using data from the Human Mortality Database and the United Nations. Conclusions: Generalized extreme value distributions offer a theoretically justified way to model best-practice life expectancies. Using this framework one can straightforwardly obtain probability estimates of best-practice life expectancy levels or make projections about future maximum life expectancy. Comments: Our findings may be useful for policymakers and insurance/pension analysts who would like to obtain estimates and probabilities of future maximum life expectancies.
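
A minimal sketch of the fitting step described above, using SciPy's generalized extreme value distribution; the series below is hypothetical, and a full analysis of the 1900-2012 record would also have to address the strong upward trend (for example with a time-varying location parameter).

    import numpy as np
    from scipy import stats

    # Hypothetical record-holding (best-practice) life expectancies at birth, in years
    best_practice_e0 = np.array([72.4, 73.1, 73.9, 74.6, 75.2, 76.0, 76.9,
                                 77.5, 78.3, 79.1, 79.8, 80.6, 81.4, 82.1])

    # Fit a GEV distribution and read off an upper quantile
    shape, loc, scale = stats.genextreme.fit(best_practice_e0)
    print("95th percentile of best-practice e0:",
          stats.genextreme.ppf(0.95, shape, loc=loc, scale=scale))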

  16. On the maximum entropy distributions of inherently positive nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.

    2017-05-11

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
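
A one-dimensional illustration of the point made above: for a normal distribution truncated to positive values, the mean and standard deviation differ from the location and scale parameters, which is precisely why matching prescribed means and covariances is awkward in the multivariate case. The sketch uses SciPy's truncnorm; the numbers are arbitrary.

    import numpy as np
    from scipy import stats

    loc, scale = 1.0, 1.0
    a, b = (0.0 - loc) / scale, np.inf     # truncation bounds in standard-normal units
    tn = stats.truncnorm(a, b, loc=loc, scale=scale)

    print("location parameter:", loc, "-> mean of truncated law:", round(tn.mean(), 3))
    print("scale parameter:   ", scale, "-> std of truncated law: ", round(tn.std(), 3))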

  17. A method for determining the extent of thermal burns in elephants

    Directory of Open Access Journals (Sweden)

    A. Shakespeare

    2006-06-01

    Full Text Available A practical method was developed to assess the extent of burns suffered by elephants caught in bush fires. In developing this method, the surface areas of the different body parts of juvenile, subadult and adult elephants were first determined using standard equations, and then expressed as a percentage of the total body surface area. When viewed from a distance, the burnt proportion of all body segments is estimated, converted to percentages of total body surface area, and then summed to determine the extent of burns suffered.
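
The arithmetic of the method, summing visually estimated burnt fractions weighted by each body part's share of total surface area, can be written in a few lines. The body-part shares and burnt fractions below are purely hypothetical placeholders, not the values derived in the paper.

    # Hypothetical shares of total body surface area (percent)
    surface_share = {"head": 8.0, "ears": 9.0, "trunk_and_body": 46.0,
                     "legs": 29.0, "tail_and_other": 8.0}

    # Burnt fraction of each part, estimated visually from a distance
    burnt_fraction = {"head": 0.10, "ears": 0.50, "trunk_and_body": 0.25,
                      "legs": 0.05, "tail_and_other": 0.0}

    total_burn = sum(surface_share[p] * burnt_fraction[p] for p in surface_share)
    print(f"Estimated extent of burns: {total_burn:.1f}% of total body surface area")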

  18. Global Harmonization of Maximum Residue Limits for Pesticides.

    Science.gov (United States)

    Ambrus, Árpád; Yang, Yong Zhen

    2016-01-13

    International trade plays an important role in national economics. The Codex Alimentarius Commission develops harmonized international food standards, guidelines, and codes of practice to protect the health of consumers and to ensure fair practices in the food trade. The Codex maximum residue limits (MRLs) elaborated by the Codex Committee on Pesticide Residues are based on the recommendations of the FAO/WHO Joint Meeting on Pesticides (JMPR). The basic principles applied currently by the JMPR for the evaluation of experimental data and related information are described together with some of the areas in which further developments are needed.

  19. Occurrence and Impact of Insects in Maximum Growth Plantations

    Energy Technology Data Exchange (ETDEWEB)

    Nowak, J.T.; Berisford, C.W.

    2001-01-01

We investigated the relationships between intensive management practices and insect infestation using maximum growth potential studies of loblolly pine conducted over five years with a hierarchy of cultural treatments, monitoring differences in growth and insect infestation levels related to increasing management intensity. The study shows that tree fertilization can increase coneworm infestation and demonstrates an effect of tip moth management on tree growth, at least initially.

  20. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data, so phylogenetic applications for autosomal data must rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
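
One way to see why phasing-then-tree-building can mislead is to count how many haplotype pairs are consistent with a single unphased genotype. The sketch below enumerates them for a toy encoding (0/1 homozygous, 2 heterozygous); the encoding and example are assumptions for illustration, not the paper's algorithm.

    from itertools import product

    def possible_phasings(genotype):
        """Enumerate the unordered haplotype pairs consistent with an unphased
        genotype (0/1 = homozygous, 2 = heterozygous). With h heterozygous
        sites there are 2**(h-1) distinct pairs (for h >= 1)."""
        het = [i for i, g in enumerate(genotype) if g == 2]
        pairs = set()
        for bits in product((0, 1), repeat=len(het)):
            h1 = list(genotype)
            for i, b in zip(het, bits):
                h1[i] = b
            h2 = [g if g != 2 else 1 - h1[i] for i, g in enumerate(genotype)]
            pairs.add(tuple(sorted((tuple(h1), tuple(h2)))))
        return sorted(pairs)

    print(len(possible_phasings([0, 2, 1, 2, 2])))   # 4 == 2**(3-1) consistent phasings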

  1. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  2. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  3. Assessing the extent of non-stationary biases in GCMs

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-06-01

General circulation models (GCMs) are the main tools for estimating changes in the climate for the future. The imperfect representation of climate models introduces biases in the simulations that need to be corrected prior to their use for impact assessments. Bias correction methods generally assume that the bias calculated over the historical period does not change and can be applied to the future. This study investigates this assumption by considering the extent and nature of bias non-stationarity using 20th century precipitation and temperature simulations from six CMIP5 GCMs across Australia. Four statistics (mean, standard deviation, 10th and 90th quantiles) of the monthly and seasonal biases are obtained for three different time window lengths (10, 25 and 33 years) to examine the properties of the bias over time. This approach is repeated for two different phases of the Interdecadal Pacific Oscillation (IPO), which is known to have strong influences on the Australian climate. It is found that bias non-stationarity at decadal timescales is indeed an issue over some of Australia for some GCMs. When considering interdecadal variability there are significant differences in the bias between positive and negative phases of the IPO. Regional analyses confirmed these findings, with the largest differences seen on the east coast of Australia, where IPO impacts tend to be the strongest. The nature of the bias non-stationarity found in this study suggests that it will be difficult to modify existing bias correction approaches to account for non-stationary biases. A more practical approach for impact assessments that use bias correction may be to use a selection of GCMs for which the assumption of bias stationarity holds.
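
The windowed-bias diagnostic described above can be sketched in a few lines. The series, window length and statistics below are hypothetical and only illustrate the kind of check involved; the study itself works with monthly and seasonal CMIP5 fields and IPO phases.

    import numpy as np

    def windowed_bias_stats(gcm, obs, window):
        """Bias statistics (mean, std, 10th and 90th percentiles of GCM - obs)
        over non-overlapping windows, as a check on the stationarity assumption
        behind bias correction."""
        bias = np.asarray(gcm) - np.asarray(obs)
        out = []
        for start in range(0, len(bias) - window + 1, window):
            w = bias[start:start + window]
            out.append((w.mean(), w.std(), np.percentile(w, 10), np.percentile(w, 90)))
        return np.array(out)

    # Hypothetical 100-year annual series
    rng = np.random.default_rng(0)
    obs = rng.normal(20.0, 1.0, 100)
    gcm = obs + rng.normal(1.5, 0.5, 100)      # a "model" with a built-in bias
    print(windowed_bias_stats(gcm, obs, window=25))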

  4. LGM permafrost thickness and extent in the Northern Hemisphere derived from the earth system model iLOVECLIM

    NARCIS (Netherlands)

    Kitover, D.C.; van Balen, R.T.; Vandenberghe, J.F.; Roche, D.M.V.A.P.; Renssen, H.

    2016-01-01

    An estimate of permafrost extent and thickness in the northern hemisphere during the Last Glacial Maximum (LGM, ~ 21 ka) has been produced using the VU University Amsterdam Permafrost Snow (VAMPERS) model, forced by iLOVECLIM, an Earth System Model of Intermediate Complexity. We present model

  5. Maximum nondiffracting propagation distance of aperture-truncated Airy beams

    Science.gov (United States)

    Chu, Xingchun; Zhao, Shanghong; Fang, Yingwu

    2018-05-01

Airy beams have attracted the attention of many researchers due to their non-diffracting, self-healing and transverse accelerating properties. A key issue in the study of Airy beams and their applications is how to evaluate their nondiffracting propagation distance. In this paper, the critical transverse extent of physically realizable Airy beams is analyzed using the local spatial frequency methodology. The maximum nondiffracting propagation distance of aperture-truncated Airy beams is formulated and analyzed based on their local spatial frequency. The validity of the formula is verified by comparing the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam, an aperture-truncated exponentially decaying Airy beam and an exponentially decaying Airy beam. Results show that the formula can be used to accurately evaluate the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam. Therefore, it can guide the selection of appropriate parameters to generate Airy beams with a long nondiffracting propagation distance, which have potential applications in fields such as laser weapons and optical communications.

  6. Detection of maximum loadability limits and weak buses using Chaotic PSO considering security constraints

    International Nuclear Information System (INIS)

    Acharjee, P.; Mallick, S.; Thakur, S.S.; Ghoshal, S.P.

    2011-01-01

Highlights: → A unique cost function is derived considering practical security constraints. → New formulae for the PSO parameters are developed for better performance. → The inclusion and implementation of chaos in the PSO technique is original and unique. → Weak buses are identified where FACTS devices can be installed. → The CPSO technique gives the best performance for all the IEEE standard test systems. - Abstract: In the current research, chaotic search is combined with the optimization technique for solving complicated non-linear power system problems, because chaos can overcome the local-optima problem of the optimization technique. Power system problems, more specifically voltage stability, are practical examples of non-linear, complex, convex problems. Smart grids, restructured energy systems and socio-economic development introduce various uncertain events into power systems, and the level of uncertainty increases greatly day by day. In this context, analysis of voltage stability is essential. An efficient measure for assessing voltage stability is the maximum loadability limit (MLL). The MLL problem is formulated as a maximization problem considering practical security constraints (SCs). Detection of weak buses is also important for the analysis of power system stability. Both the MLL and the weak buses are identified by PSO methods, and FACTS devices can be applied to the detected weak buses to improve stability. Three particle swarm optimization (PSO) techniques, namely general PSO (GPSO), adaptive PSO (APSO) and chaotic PSO (CPSO), are presented for a comparative study of obtaining the MLL and weak buses under different SCs. In the APSO method the PSO parameters are made adaptive to the problem, and chaos is incorporated in the CPSO method to obtain reliable convergence and better performance. All three methods are applied to the standard IEEE 14-bus, 30-bus, 57-bus and 118-bus test systems to show their comparative computing effectiveness and
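
As a rough illustration of the chaos-enhanced PSO idea (not the authors' CPSO formulation or the MLL objective), the sketch below perturbs the inertia weight with a logistic map while minimizing a toy function; all parameter choices are assumptions.

    import numpy as np

    def chaotic_pso(f, bounds, n_particles=30, iters=200, seed=0):
        """Minimal chaotic PSO sketch: the inertia weight is modulated by a
        logistic map, one common way of injecting chaos to escape local optima."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        x = rng.uniform(lo, hi, (n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pval = x.copy(), np.array([f(p) for p in x])
        gbest = pbest[pval.argmin()]
        z = 0.7                                    # logistic-map state in (0, 1)
        for _ in range(iters):
            z = 4.0 * z * (1.0 - z)                # chaotic sequence
            w = 0.4 + 0.5 * z                      # chaos-modulated inertia weight
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            val = np.array([f(p) for p in x])
            improved = val < pval
            pbest[improved], pval[improved] = x[improved], val[improved]
            gbest = pbest[pval.argmin()]
        return gbest, pval.min()

    # Toy use: minimize a 2-D sphere function
    print(chaotic_pso(lambda p: float(np.sum(p**2)), [(-5, 5), (-5, 5)]))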

  7. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
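
A minimal sketch of the general idea, maximizing a likelihood over unknown parameters of a nonlinear model with Gaussian measurement noise; the first-order step-response system and all parameter values below are hypothetical, and this illustrates the technique rather than the MXLKID program itself.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical nonlinear model: y(t) = a * (1 - exp(-b t)) + noise
    t = np.linspace(0, 10, 50)
    a_true, b_true, sigma = 2.0, 0.7, 0.05
    rng = np.random.default_rng(1)
    y = a_true * (1 - np.exp(-b_true * t)) + rng.normal(0, sigma, t.size)

    def neg_log_likelihood(theta):
        # With known Gaussian noise, maximizing the likelihood is equivalent
        # to minimizing this weighted sum of squared residuals.
        a, b = theta
        resid = y - a * (1 - np.exp(-b * t))
        return 0.5 * np.sum(resid**2) / sigma**2

    fit = minimize(neg_log_likelihood, x0=[1.0, 1.0])
    print("identified parameters:", fit.x)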

  8. Efficacy of helical CT in evaluating local tumor extent of breast cancer

    International Nuclear Information System (INIS)

    Ozaki, Yutaka

    2001-01-01

    The purpose of this study is to clarify the diagnostic accuracy of helical CT (HCT) in the determination of local tumor extent of breast cancer. One hundred forty consecutive patients with breast cancer, including 87 invasive ductal carcinomas without extensive intraductal components (EIC), 44 invasive ductal carcinomas with EIC, 2 non-invasive ductal carcinomas, and 7 invasive lobular carcinomas, were included in the study. Three-dimensional tumor diameter including whole extent was measured on HCT, and the amount of invasion to fat tissue, skin, pectoral muscle, and chest wall was estimated using a three-step scale. These results were then compared with the pathological findings. Breast cancers appeared as areas of high attenuation compared with the surrounding breast tissue in all patients. Tumor extent was correctly diagnosed by HCT to within a maximum difference of 1 cm in 88 patients (63%) and within 2 cm in 122 patients (87%). Sensitivity, specificity, and accuracy in diagnosing muscular invasion of breast cancer using HCT were 100%, 99%, and 99%, respectively. Sensitivity, specificity, and accuracy in diagnosing skin invasion of breast cancer using HCT were 84%, 93%, and 91%, respectively. HCT was able to visualize all of the tumors and detect the correct tumor extent in most patients. (author)

  9. Efficacy of helical CT in evaluating local tumor extent of breast cancer

    Energy Technology Data Exchange (ETDEWEB)

    Ozaki, Yutaka [Juntendo Univ., Chiba (Japan). Urayasu Hospital

    2001-04-01

    The purpose of this study is to clarify the diagnostic accuracy of helical CT (HCT) in the determination of local tumor extent of breast cancer. One hundred forty consecutive patients with breast cancer, including 87 invasive ductal carcinomas without extensive intraductal components (EIC), 44 invasive ductal carcinomas with EIC, 2 non-invasive ductal carcinomas, and 7 invasive lobular carcinomas, were included in the study. Three-dimensional tumor diameter including whole extent was measured on HCT, and the amount of invasion to fat tissue, skin, pectoral muscle, and chest wall was estimated using a three-step scale. These results were then compared with the pathological findings. Breast cancers appeared as areas of high attenuation compared with the surrounding breast tissue in all patients. Tumor extent was correctly diagnosed by HCT to within a maximum difference of 1 cm in 88 patients (63%) and within 2 cm in 122 patients (87%). Sensitivity, specificity, and accuracy in diagnosing muscular invasion of breast cancer using HCT were 100%, 99%, and 99%, respectively. Sensitivity, specificity, and accuracy in diagnosing skin invasion of breast cancer using HCT were 84%, 93%, and 91%, respectively. HCT was able to visualize all of the tumors and detect the correct tumor extent in most patients. (author)

  10. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly from the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying the calculus of variations, i.e. by using the Pontryagin maximum principle. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications that make it suitable for treatment by the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are the roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  11. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads that can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations and the additional constraints applied to resolve the redundancy are probably the most important. To resolve the extra degrees of freedom introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy.

  12. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  13. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  14. Strategic environmental assessment in tourism planning - Extent of application and quality of documentation

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho Lemos, Clara, E-mail: clara@sc.usp.br [Environmental Engineering Science, Engineering School of Sao Carlos, University of Sao Paulo, Av. Trabalhador Saocarlense, 400, Caixa Postal 292, Sao Carlos, Sao Paulo, 13566-590 (Brazil); Fischer, Thomas B., E-mail: fischer@liverpool.ac.uk [Department of Civic Design, University of Liverpool, 74 Bedford Street South, Liverpool, L69 7ZQ (United Kingdom); Pereira Souza, Marcelo, E-mail: mps@usp.br [Environmental Engineering Science, Engineering School of Sao Carlos, University of Sao Paulo, Av. Trabalhador Saocarlense, 400, Caixa Postal 292, Sao Carlos, Sao Paulo, 13566-590 (Brazil)

    2012-07-15

Strategic environmental assessment (SEA) has been applied throughout the world in different sectors and in various ways. This paper reports on results of a PhD research on SEA applied to tourism development planning, reflecting the situation in mid-2010. First, the extent of tourism specific SEA application world-wide is established. Then, based on a review of the quality of 10 selected SEA reports, good practice, as well as challenges, trends and opportunities for tourism specific SEA are identified. Shortcomings of SEA in tourism planning are established and implications for future research are outlined. - Highlights: ► The extent of tourism specific SEA practice is identified. ► Selected SEA/Tourism reports are evaluated. ► SEA application to tourism planning is still limited. ► A number of shortcomings can be pointed out.

  15. Strategic environmental assessment in tourism planning — Extent of application and quality of documentation

    International Nuclear Information System (INIS)

    Carvalho Lemos, Clara; Fischer, Thomas B.; Pereira Souza, Marcelo

    2012-01-01

    Strategic environmental assessment (SEA) has been applied throughout the world in different sectors and in various ways. This paper reports on results of a PhD research on SEA applied to tourism development planning, reflecting the situation in mid-2010. First, the extent of tourism specific SEA application world-wide is established. Then, based on a review of the quality of 10 selected SEA reports, good practice, as well as challenges, trends and opportunities for tourism specific SEA are identified. Shortcomings of SEA in tourism planning are established and implications for future research are outlined. - Highlights: ► The extent of tourism specific SEA practice is identified. ► Selected SEA/Tourism reports are evaluated. ► SEA application to tourism planning is still limited. ► A number of shortcomings can be pointed out.

  16. Exercise-induced maximum metabolic rate scaled to body mass by ...

    African Journals Online (AJOL)

    user

    2016-10-27

    Oct 27, 2016 ... maximum aerobic metabolic rate (MMR) is proportional to the fractal extent ... metabolic rate with body mass can be obtained by taking body .... blood takes place. ..... MMR and BMR is that MMR is owing mainly to respiration in skeletal .... the spectra of surface area scaling strategies of cells and organisms:.

  17. Reconstructed North American Snow Extent, 1900-1993

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains reconstructed monthly North American snow extent values for November through March, 1900-1993. Investigators used a combination of satellite...

  18. Exploring the extent to which ELT students utilise smartphones for ...

    African Journals Online (AJOL)

    Zehra

    2015-11-09

    Nov 9, 2015 ... aimed to explore the extent to which English Language Teaching (ELT) students utilise ... Given the fact that almost all students have a personal smartphone, and use it ..... ears as a disadvantage for smartphones (Kétyi,.

  19. Maximum-performance fiber-optic irradiation with nonimaging designs.

    Science.gov (United States)

    Fang, Y; Feuermann, D; Gordon, J M

    1997-10-01

    A range of practical nonimaging designs for optical fiber applications is presented. Rays emerging from a fiber over a restricted angular range (small numerical aperture) are needed to illuminate a small near-field detector at maximum radiative efficiency. These designs range from pure reflector (all-mirror), to pure dielectric (refractive and based on total internal reflection) to lens-mirror combinations. Sample designs are shown for a specific infrared fiber-optic irradiation problem of practical interest. Optical performance is checked with computer three-dimensional ray tracing. Compared with conventional imaging solutions, nonimaging units offer considerable practical advantages in compactness and ease of alignment as well as noticeably superior radiative efficiency.

  20. Extent of Cropland and Related Soil Erosion Risk in Rwanda

    Directory of Open Access Journals (Sweden)

    Fidele Karamage

    2016-06-01

Full Text Available Land conversion to cropland is one of the major causes of severe soil erosion in Africa. This study assesses the current cropland extent and the related soil erosion risk in Rwanda, a country that experienced the most rapid population growth and cropland expansion in Africa over the last decade. The land cover land use (LCLU) map of Rwanda in 2015 was developed using Landsat-8 imagery. Based on the obtained LCLU map and the spatial datasets of precipitation, soil properties and elevation, the soil erosion rate of Rwanda was assessed at 30-m spatial resolution, using the Revised Universal Soil Loss Equation (RUSLE) model. According to the results, the mean soil erosion rate was 250 t·ha⁻¹·a⁻¹ over the entire country, with a total soil loss rate of approximately 595 million tons per year. The mean soil erosion rate over cropland, which occupied 56% of the national land area, was estimated at 421 t·ha⁻¹·a⁻¹ and was responsible for about 95% of the national soil loss. About 24% of the croplands in Rwanda had a soil erosion rate larger than 300 t·ha⁻¹·a⁻¹, indicating their unsuitability for cultivation. With a mean soil erosion rate of 1642 t·ha⁻¹·a⁻¹, these unsuitable croplands were responsible for 90% of the national soil loss. Most of the unsuitable croplands are distributed in the Congo Nile Ridge and Volcanic Range mountain areas in the west and the Buberuka highlands in the north, regions characterized by steep slopes (>30%) and strong rainfall. Soil conservation practices, such as the terracing cultivation method, are paramount to preserve the soil. According to our assessment, terracing alone could reduce the mean cropland soil erosion rate and the national soil loss by 79% and 75%, respectively. After terracing, only a small proportion of 7.6% of the current croplands would still be exposed to extreme soil erosion with a rate >300 t·ha⁻¹·a⁻¹. These irremediable cropland areas should be returned to mountain forest to
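
The RUSLE computation underlying such estimates multiplies five factor layers cell by cell, A = R·K·LS·C·P. The sketch below applies it to tiny hypothetical rasters and shows how a terracing support-practice factor would lower the result; all factor values are placeholders, not the Rwandan data.

    import numpy as np

    def rusle_soil_loss(R, K, LS, C, P):
        """RUSLE estimate of annual soil loss A = R * K * LS * C * P (t/ha/yr),
        applied element-wise to raster factor layers."""
        return R * K * LS * C * P

    # Hypothetical 2x2 "rasters" of the five factors, for illustration only
    R  = np.array([[4500., 5200.], [4800., 6000.]])   # rainfall erosivity
    K  = np.full((2, 2), 0.03)                        # soil erodibility
    LS = np.array([[2.0, 8.0], [15.0, 30.0]])         # slope length/steepness
    C  = np.full((2, 2), 0.25)                        # cover management (cropland)
    P  = np.ones((2, 2))                              # support practice (none)

    print(rusle_soil_loss(R, K, LS, C, P))            # per-cell soil loss, no conservation
    P_terraced = np.full((2, 2), 0.15)                # hypothetical factor with terraces
    print(rusle_soil_loss(R, K, LS, C, P_terraced))   # per-cell soil loss with terracing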

  1. Maximum Recoverable Gas from Hydrate Bearing Sediments by Depressurization

    KAUST Repository

    Terzariol, Marco

    2017-11-13

The estimation of gas production rates from hydrate bearing sediments requires complex numerical simulations. This manuscript presents a set of simple and robust analytical solutions to estimate the maximum depressurization-driven recoverable gas. These limiting-equilibrium solutions are established when the dissociation front reaches steady state conditions and ceases to expand further. Analytical solutions show the relevance of (1) relative permeabilities between the hydrate free sediment, the hydrate bearing sediment, and the aquitard layers, and (2) the extent of depressurization in terms of the fluid pressures at the well, at the phase boundary, and in the far field. Closed-form solutions for the size of the produced zone allow for expeditious financial analyses; results highlight the need for innovative production strategies in order to make hydrate accumulations an economically-viable energy resource. Horizontal directional drilling and multi-wellpoint seafloor dewatering installations may lead to advantageous production strategies in shallow seafloor reservoirs.

  2. Optimal Portfolio Strategy under Rolling Economic Maximum Drawdown Constraints

    Directory of Open Access Journals (Sweden)

    Xiaojian Yu

    2014-01-01

    Full Text Available This paper deals with the problem of optimal portfolio strategy under the constraints of rolling economic maximum drawdown. A more practical strategy is developed by using rolling Sharpe ratio in computing the allocation proportion in contrast to existing models. Besides, another novel strategy named “REDP strategy” is further proposed, which replaces the rolling economic drawdown of the portfolio with the rolling economic drawdown of the risky asset. The simulation tests prove that REDP strategy can ensure the portfolio to satisfy the drawdown constraint and outperforms other strategies significantly. An empirical comparison research on the performances of different strategies is carried out by using the 23-year monthly data of SPTR, DJUBS, and 3-month T-bill. The investment cases of single risky asset and two risky assets are both studied in this paper. Empirical results indicate that the REDP strategy successfully controls the maximum drawdown within the given limit and performs best in both return and risk.
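
A minimal sketch of the rolling economic drawdown quantity that the constraint is written on; the window length and toy wealth path below are arbitrary, and the actual REDP strategy additionally maps this drawdown into an allocation rule.

    import numpy as np

    def rolling_economic_drawdown(wealth, window):
        """Rolling economic drawdown: 1 - W(t) / max of W over the last `window`
        periods. A rolling-drawdown constraint requires this to stay below a limit."""
        wealth = np.asarray(wealth, dtype=float)
        redd = np.empty_like(wealth)
        for t in range(len(wealth)):
            peak = wealth[max(0, t - window + 1): t + 1].max()
            redd[t] = 1.0 - wealth[t] / peak
        return redd

    prices = np.array([100, 104, 101, 108, 95, 97, 110, 90, 102], dtype=float)
    print(rolling_economic_drawdown(prices, window=4))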

  3. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.

  4. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  5. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...

  6. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wave-length shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower max detector in the test beam are shown. (authors). 4 refs., 5 figs

  7. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  8. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  9. Identification of Biokinetic Models Using the Concept of Extents.

    Science.gov (United States)

    Mašić, Alma; Srinivasan, Sriniketh; Billeter, Julien; Bonvin, Dominique; Villez, Kris

    2017-07-05

    The development of a wide array of process technologies to enable the shift from conventional biological wastewater treatment processes to resource recovery systems is matched by an increasing demand for predictive capabilities. Mathematical models are excellent tools to meet this demand. However, obtaining reliable and fit-for-purpose models remains a cumbersome task due to the inherent complexity of biological wastewater treatment processes. In this work, we present a first study in the context of environmental biotechnology that adopts and explores the use of extents as a way to simplify and streamline the dynamic process modeling task. In addition, the extent-based modeling strategy is enhanced by optimal accounting for nonlinear algebraic equilibria and nonlinear measurement equations. Finally, a thorough discussion of our results explains the benefits of extent-based modeling and its potential to turn environmental process modeling into a highly automated task.

  10. Extent, accuracy, and credibility of breastfeeding information on the Internet.

    Science.gov (United States)

    Shaikh, Ulfat; Scott, Barbara J

    2005-05-01

    Our objective was to test and describe a model for evaluating Websites related to breastfeeding. Forty Websites most likely to be accessed by the public were evaluated for extent, accuracy, credibility, presentation, ease of use, and adherence to ethical and medical Internet publishing standards. Extent and accuracy of Website content were determined by a checklist of critical information. The majority of Websites reviewed provided accurate information and complied with the International Code of Marketing of Breast-milk Substitutes. Approximately half the Websites complied with standards of medical Internet publishing. While much information on breastfeeding on the Internet is accurate, there is wide variability in the extent of information, usability of Websites, and compliance with standards of medical Internet publishing. Results of this study may be helpful to health care professionals as a model for evaluating breastfeeding-related Websites and to highlight considerations when recommending or designing Websites.

  11. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. A maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived naturally from this principle. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis entropy, in deriving power laws.

  12. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  13. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  14. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.

  15. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.

  16. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_Pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_Pl is the Planck mass.
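
A quick order-of-magnitude check of the quoted scaling, with approximate inputs; this is only a consistency check of the formula as stated in the abstract, not a derivation.

    # Rough numerical check of v_h ~ T_BBN^2 / (M_Pl * y_e^5) in GeV units
    T_BBN = 1e-3            # GeV, onset of Big Bang nucleosynthesis (~1 MeV)
    M_Pl  = 1.22e19         # GeV, Planck mass
    m_e, v = 0.511e-3, 246.0
    y_e = 2**0.5 * m_e / v  # electron Yukawa coupling
    print(T_BBN**2 / (M_Pl * y_e**5))   # a few hundred GeV, consistent with O(300 GeV)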

  17. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Roč. 59, - (2003), s. 459-469 ISSN 0108-7673 Grant - others:DFG(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : maximum-entropy method, * aperiodic crystals * electron density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.558, year: 2003

  18. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  19. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  20. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy that is reachable in a finite time.

  1. Flood Extent Mapping for Namibia Using Change Detection and Thresholding with SAR

    Science.gov (United States)

    Long, Stephanie; Fatoyinbo, Temilola E.; Policelli, Frederick

    2014-01-01

A new method for flood detection, change detection and thresholding (CDAT), was used with synthetic aperture radar (SAR) imagery to delineate the extent of flooding for the Chobe floodplain in the Caprivi region of Namibia. This region experiences annual seasonal flooding and has seen a recent renewal of severe flooding after a long dry period in the 1990s. Flooding in this area has caused loss of life and livelihoods for the surrounding communities and has caught the attention of disaster relief agencies. There is a need for flood extent mapping techniques that can be used to process images quickly, providing near real-time flooding information to relief agencies. ENVISAT/ASAR and Radarsat-2 images were acquired for several flooding seasons from February 2008 to March 2013. The CDAT method was used to determine flooding from these images and includes the use of image subtraction, decision based classification with threshold values, and segmentation of SAR images. The total extent of flooding determined for 2009, 2011 and 2012 was about 542 km², 720 km², and 673 km² respectively. Pixels flooded under vegetation were also identified; flooding in vegetation was much greater (almost one third of the total flooded area). The time to maximum flooding for the 2013 flood season was determined to be about 27 days. Landsat water classification was used to compare with the results from the new CDAT-with-SAR method; the results show good spatial agreement with the Landsat scenes.
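
The core of a change-detection-and-thresholding step can be sketched as differencing a flood-season SAR scene against a dry reference and flagging strong backscatter drops. The threshold and the toy 2×2 "scenes" below are hypothetical; the actual CDAT method derives its threshold from the statistics of the difference image and adds decision-based classification and segmentation.

    import numpy as np

    def cdat_flood_mask(sar_flood, sar_reference, threshold):
        """Flag pixels whose backscatter (dB) dropped strongly relative to the
        dry reference; open water typically lowers SAR backscatter."""
        diff = sar_flood.astype(float) - sar_reference.astype(float)
        return diff < threshold

    ref   = np.array([[ -7.0,  -8.0], [ -6.5,  -7.5]])   # dry-season backscatter (dB)
    flood = np.array([[-16.0,  -8.5], [-15.0,  -7.0]])   # flood-season backscatter (dB)
    print(cdat_flood_mask(flood, ref, threshold=-5.0))   # True = flagged as flooded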

  2. A method for predicting the extent of petroleum hydrocarbon biodegradation in contaminated soils

    International Nuclear Information System (INIS)

    Huesemann, M.H.

    1994-01-01

    A series of solid- and slurry-phase soil bioremediation experiments involving different crude oils and refined petroleum products were performed to investigate the factors which affect the maximum extent of total petroleum hydrocarbon (TPH) biodegradation. The authors used a comprehensive petroleum hydrocarbon characterization procedure involving group-type separation analyses, boiling-point distributions, and hydrocarbon typing by field ionization mass spectroscopy. Initial and final concentrations of specified hydrocarbon classes were determined in each of seven different bioremediation treatments. Generally, they found that the degree of TPH biodegradation was affected mainly by the type of hydrocarbons in the contaminant matrix. In contrast, the influence of experimental variables such as soil type, fertilizer concentrations, microbial plate counts, and treatment type (slurry versus landfarming) on the overall extent of TPH biodegradation was insignificant. Based on these findings, a predictive algorithm was developed to estimate the extent of TPH biodegradation from the average reduction of 86 individual hydrocarbon classes and their respective initial concentrations. Model predictions for gravimetric TPH removals were in close agreement with analytical results from two independent laboratories
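
The predictive step described, estimating overall TPH removal from class-specific reductions weighted by initial concentrations, amounts to a weighted average. The class names, concentrations and reduction factors below are hypothetical; the paper's algorithm works with 86 individual hydrocarbon classes.

    def predicted_tph_removal(initial_conc, class_reduction):
        """Predicted overall TPH removal as the concentration-weighted average
        of per-class biodegradation extents (fractions)."""
        total = sum(initial_conc.values())
        removed = sum(initial_conc[c] * class_reduction[c] for c in initial_conc)
        return removed / total

    c0  = {"n-alkanes": 4000.0, "branched_alkanes": 2500.0,
           "aromatics": 1500.0, "polars_asphaltenes": 2000.0}   # mg/kg, hypothetical
    red = {"n-alkanes": 0.95, "branched_alkanes": 0.80,
           "aromatics": 0.70, "polars_asphaltenes": 0.25}       # assumed class reductions
    print(f"Predicted TPH removal: {100 * predicted_tph_removal(c0, red):.0f}%")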

  3. LEVEL AND EXTENT OF MERCURY CONTAMINATION IN OREGON LOTIC FISH

    Science.gov (United States)

    As part of the U.S. EPA's EMAP Oregon Pilot project, we conducted a probability survey of 154 Oregon streams and rivers to assess the spatial extent of mercury (Hg) contamination in fish tissue across the state. Samples consisted of whole fish analyses of both small (< 120 mm) a...

  4. Spatial extent in demographic research - approach and problems

    Directory of Open Access Journals (Sweden)

    Knežević Aleksandar

    2015-01-01

Full Text Available One of the initial methodological problems in demographic research is the definition of spatial extent, which mostly does not correspond to the spatial extent already defined by the different levels of administrative-territorial units used for the distribution of usable statistical data. Determining the spatial extent of demographic research is therefore closely tied to the administrative-territorial division of the territory being researched, while acknowledging that the differentiation of demographic phenomena and processes cannot be the only basis for setting the principles of regionalization. This problem is particularly common in historical demographic analyses of geographically determined wholes, which in the administrative-territorial sense are represented by one or more smaller territorial units whose borders have changed through history; this directly affects the comparability of the statistical data and makes it considerably more difficult to track demographic change over longer time intervals. The result of these efforts is usually a compromise solution that enables the dynamics of population change to be examined with little deviation from the already defined borders of regional geographic wholes. For that reason, this paper examines the problem of defining spatial extent in demographic research through several different approaches in the case of Eastern Serbia, viewed as a geographically determined region, a historic area, a spatially functioning whole and a statistical unit for demographic research, without prejudging any of the regionalization principles. [Projekat Ministarstva nauke Republike Srbije, br. III 47006

  5. The Extent of Immature Fish Harvesting by the Commercial Fishery ...

    African Journals Online (AJOL)

    The sustainability of a given fishery is a function of the number of sexually mature fish present in the water. If immature fish are harvested intensively, the population of fish reaching the stage of recruitment will decrease, which in turn results in lower yield and biomass. The present study was conducted to estimate the extent of ...

  6. Does Trust Influence the Extent of Inter-Organizational Barter?

    DEFF Research Database (Denmark)

    Sudzina, Frantisek

    2014-01-01

    The 1999 World Business Environment Survey investigated, among many other things, the extent of inter-organizational barter in various countries. Reported values differed a lot, e.g. it was less than 1% in Hungary but more than 30% in neighboring Croatia. Since in many such contracts goods and...

  7. To what extent does banks' credit stimulate economic growth ...

    African Journals Online (AJOL)

    This study examines the extent to which banks' credit affects economic growth in Nigeria. The data used was collected from the Central Bank of Nigeria statistical bulletin for a period of 24 years from 1990 to 2013. We used credit to the private sector, credit to the public sector and inflation to proxy commercial bank credit ...

  8. Extent and Distribution of Groundwater Resources in Parts of ...

    African Journals Online (AJOL)

    The extent and distribution of groundwater resources in parts of Anambra State, Nigeria has been investigated. The results show that the study area is directly underlain by four different geological formations including, Alluvial Plain Sands, Ogwashi-Asaba Formation, Ameki/Nanka Sands and Imo Shale, with varying water ...

  9. Extent of implementation of collection development policies in ...

    African Journals Online (AJOL)

    The study is a survey research on the extent of implementation of collection development policies in academic libraries in Imo state. The population of the study comprises five (5) academic libraries in the area of study. The academic libraries understudy are: Imo State University Owerri (IMSU), Federal University of ...

  10. An investigation into Nigerian teacher's attitude towards and extent ...

    African Journals Online (AJOL)

    The attitude of Biology teachers towards, and their extent of, improvisation were investigated. Eighty teachers from 50 randomly selected secondary schools in Oyo State, Nigeria participated in the study. Analysis of the twenty-item questionnaire administered to the teachers revealed that though many of them exhibited positive ...

  11. Extent of implementation of Collection Development Policies (CDP ...

    African Journals Online (AJOL)

    The study was on the extent of implementation of collection development policies by public University libraries in the Niger Delta Area, Nigeria. Descriptive survey design was employed. Population for the study consisted of all the 16 Collection Development Librarians in the Area studied. No sample was used because the ...

  12. The extent of groundwater use for domestic and irrigation activities ...

    African Journals Online (AJOL)

    AKMENSAH

    2015-06-04

    Jun 4, 2015 ... Albert Kobina Mensah (Department of Geography, Kenyatta University, Nairobi), Evans Appiah Kissi, Kwabena Krah and Okoree D. Mireku. ... catchment in Kiambu County in Kenya had limited themselves to the assessment of water quality. Little work has been done on the extent to ...

  13. Forest extent and deforestation in tropical Africa since 1900.

    Science.gov (United States)

    Aleman, Julie C; Jarzyna, Marta A; Staver, A Carla

    2018-01-01

    Accurate estimates of historical forest extent and associated deforestation rates are crucial for quantifying tropical carbon cycles and formulating conservation policy. In Africa, data-driven estimates of historical closed-canopy forest extent and deforestation at the continental scale are lacking, and existing modelled estimates diverge substantially. Here, we synthesize available palaeo-proxies and historical maps to reconstruct forest extent in tropical Africa around 1900, when European colonization accelerated markedly, and compare these historical estimates with modern forest extent to estimate deforestation. We find that forests were less extensive in 1900 than bioclimatic models predict. Resultantly, across tropical Africa, ~ 21.7% of forests have been deforested, yielding substantially slower deforestation than previous estimates (35-55%). However, deforestation was heterogeneous: West and East African forests have undergone almost complete decline (~ 83.3 and 93.0%, respectively), while Central African forests have expanded at the expense of savannahs (~ 1.4% net forest expansion, with ~ 135,270 km² of savannahs encroached). These results suggest that climate alone does not determine savannah and forest distributions and that many savannahs hitherto considered to be degraded forests are instead relatively old. These data-driven reconstructions of historical biome distributions will inform tropical carbon cycle estimates, carbon mitigation initiatives and conservation planning in both forest and savannah systems.

  14. The Extent of Reversibility of Polychlorinated Biphenyl Adsorption

    Science.gov (United States)

    The extent of reversibility of PCB bonding to sediments has been characterized in studies on the partitioning behavior of a hexachlorobiphenyl isomer. Linear non-singular isotherms have been observed for the adsorption and desorption of 2.4.5.2?,4?,5? hexachlorobiphenyl (HCBP) to...

  15. Determining wetland spatial extent and seasonal variations of the ...

    African Journals Online (AJOL)

    This study, done in the Witbank Dam Catchment in Mpumalanga Province of South Africa, explores a remote-sensing technique to delineate wetland extent and assesses the seasonal variations of the inundated area. The objective was to monitor the spatio-temporal changes of wetlands over time through remote sensing ...

  16. 32 CFR 728.12 - Extent of care.

    Science.gov (United States)

    2010-07-01

    ... § 728.12 Extent of care. Members who are away from their duty stations or are on duty where there is no... providing authorization for non-Federal care at DHHS expense. (b) Maternity episode for active duty female... facilities (once the mother has been admitted to the USMTF) from funds available for care of active duty...

  17. 27 CFR 24.158 - Extent of relief.

    Science.gov (United States)

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Extent of relief. 24.158 Section 24.158 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT..., until all tax is fully paid. (d) Wine vinegar plant bond. The surety will be relieved of liability for...

  18. Subsidies to energy in the world: their extent, their efficiency and their necessary reorientation

    International Nuclear Information System (INIS)

    Finon, D.

    2010-10-01

    This report aims at analyzing the extent of subsidies to energy in the world, at assessing theoretical and practical arguments against different forms of subsidy, and at synthesizing reflections on reforms of subsidies to energy, mainly in developing countries. In the first part, the author recalls the theoretical and practical backgrounds of subsidies to energy, indicates the different forms of support to energy production and consumption, and discusses the existing assessments in the world and in some regions while specifying subsidies to fuels in the transport sector. In a second part, he addresses theoretical and practical critics of subsidies (notably in terms of environmental and economical inefficiency), assessments of economical and environmental benefits of their withdrawal, and ways of reorienting subsidies for fossil fuels in developing countries

  19. 29 CFR 37.10 - To what extent are employment practices covered by this part?

    Science.gov (United States)

    2010-07-01

    ... AND EQUAL OPPORTUNITY PROVISIONS OF THE WORKFORCE INVESTMENT ACT OF 1998 (WIA) General Provisions § 37... Opportunity Commission (EEOC) regulations, guidance and appropriate case law in determining whether a... Immigration and Nationality Act should be aware of the obligations imposed by that provision. See 8 U.S.C...

  20. Application of the Maximum Entropy Method to Risk Analysis of Mergers and Acquisitions

    Science.gov (United States)

    Xie, Jigang; Song, Wenyun

    The maximum entropy (ME) method can be used to analyze the risk of mergers and acquisitions when only pre-acquisition information is available. A practical example of the risk analysis of mergers and acquisitions among Chinese listed firms is provided to demonstrate the feasibility and practicality of the method.

  1. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The valuation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de

  2. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  3. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
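    The single-constraint argument in the abstract is the standard maximum entropy derivation of a power law; a compact version, assuming a discrete observable k with a fixed mean logarithm, is sketched below.

```latex
% Maximise the Shannon entropy S = -\sum_k p_k \ln p_k subject to normalisation
% and a fixed mean logarithm \langle \ln k \rangle of the observable.
\begin{align*}
  \mathcal{L} &= -\sum_k p_k \ln p_k
     + \lambda_0\Big(\sum_k p_k - 1\Big)
     + \lambda_1\Big(\sum_k p_k \ln k - \langle \ln k\rangle\Big),\\
  \frac{\partial \mathcal{L}}{\partial p_k} &= -\ln p_k - 1 + \lambda_0 + \lambda_1 \ln k = 0
  \quad\Longrightarrow\quad
  p_k \propto k^{\lambda_1} = k^{-\alpha}, \qquad \alpha = -\lambda_1 ,
\end{align*}
% i.e. a pure power law (Zipf's law for \alpha \approx 1), with \alpha fixed by the constraint.
```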

  4. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.

  5. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  6. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.
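    The abstract's estimator itself is not reproduced here; the snippet below only sketches the assumed observation model (an integrated Ornstein-Uhlenbeck process sampled at discrete times and contaminated by Gaussian measurement error), with all parameter values chosen for illustration.

```python
# Sketch of the data-generating model described above (not the authors' simulated EM estimator).
import numpy as np

rng = np.random.default_rng(0)
theta, mu, sigma, tau = 0.5, 0.0, 0.3, 0.05   # OU parameters and measurement-error sd (assumed)
dt, n_fine, n_obs = 0.001, 100_000, 100

x = np.empty(n_fine)
x[0] = mu
for i in range(1, n_fine):                    # Euler-Maruyama simulation of the latent OU process
    x[i] = x[i-1] + theta * (mu - x[i-1]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

integral = np.cumsum(x) * dt                  # running integral of the latent process
idx = np.linspace(0, n_fine - 1, n_obs + 1).astype(int)
increments = np.diff(integral[idx])           # discrete-time samples of the integral
observations = increments + tau * rng.standard_normal(n_obs)   # contaminated by measurement error
print(observations[:5])
```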

  7. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.

  8. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
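    For readers unfamiliar with Fitch's method, the short sketch below implements its bottom-up pass on a toy rooted binary tree; the tree, character states, and node names are invented for illustration only.

```python
# Fitch's maximum parsimony: each internal node gets the intersection of its children's
# state sets when non-empty, otherwise their union (adding one change to the score).
def fitch(tree, leaf_states, root="root"):
    """tree: {internal node: (left child, right child)}; leaves are keys of leaf_states."""
    changes = 0

    def state_set(node):
        nonlocal changes
        if node in leaf_states:
            return {leaf_states[node]}
        left, right = tree[node]
        a, b = state_set(left), state_set(right)
        if a & b:
            return a & b
        changes += 1
        return a | b

    return state_set(root), changes

# Toy example: ((t1,t2),(t3,t4)) with one binary character.
tree = {"root": ("n1", "n2"), "n1": ("t1", "t2"), "n2": ("t3", "t4")}
states = {"t1": "A", "t2": "A", "t3": "G", "t4": "A"}
print(fitch(tree, states))   # ({'A'}, 1): root-state estimate and parsimony score
```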

  9. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  10. The Role of Deposition in Limiting the Hazard Extent of Dense-Gas Plumes

    Energy Technology Data Exchange (ETDEWEB)

    Dillon, M B

    2008-01-29

    Accidents involving release of large (multi-ton) quantities of toxic industrial chemicals often yield far fewer fatalities and casualties than standard, widely-used assessment and emergency response models predict. While recent work has suggested that models should incorporate the protection provided by buildings, more refined health effect methodologies, and more detailed consideration of the release process, investigations into the role of deposition onto outdoor surfaces have been lacking. In this paper, we examine the conditions under which dry deposition may significantly reduce the extent of the downwind hazard zone. We provide theoretical arguments that in congested environments (e.g. suburbs, forests), deposition to vertical surfaces (such as building walls) may play a significant role in reducing the hazard zone extent--particularly under low-wind, stable atmospheric conditions which are often considered to be the worst-case scenario for these types of releases. Our analysis suggests that in these urban or suburban environments, the amount of toxic chemicals lost to earth's surface is typically a small fraction of overall depositional losses. For isothermal gases such as chlorine, the degree to which the chemicals stick to (or react with) surfaces (i.e. surface resistance) is demonstrated to be a key parameter controlling hazard extent (the maximum distance from the release at which hazards to human health are expected). This analysis does not consider the depositional effects associated with particulate matter or gases that undergo significant thermal change in the atmosphere. While no controlled experiments were available to validate our hypothesis, our analysis results are qualitatively consistent with the observed downwind extent of vegetation damage in two chlorine accidents.

  11. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  12. Assessing the Global Extent of Rivers Observable by SWOT

    Science.gov (United States)

    Pavelsky, T.; Durand, M. T.; Andreadis, K.; Beighley, E.; Allen, G. H.; Miller, Z.

    2013-12-01

    Flow of water through rivers is among the key fluxes in the global hydrologic cycle and its knowledge would advance the understanding of flood hazards, water resources management, ecology, and climate. However, gauges providing publicly accessible measurements of river stage or discharge remain sparse in many regions. The Surface Water and Ocean Topography (SWOT) satellite mission is a joint project of NASA and the French Centre National d'Etudes Spatiales (CNES) that would provide the first high-resolution images of simultaneous terrestrial water surface height, inundation extent, and ocean surface elevation. Among SWOT's primary goals is the direct observation of variations in river water surface elevation and, where possible, estimation of river discharge from SWOT measurements. The mission science requirements specify that rivers wider than 100 m would be observed globally, with a goal of observing rivers wider than 50 m. However, the extent of anticipated SWOT river observations remains fundamentally unknown because no high-resolution, global dataset of river widths exists. Here, we estimate the global extent of rivers wider than the 50 m and 100 m thresholds using established relationships among river width, discharge, and drainage area. We combine a global digital elevation model with in situ river discharge data to estimate the global extent of SWOT-observable rivers, and validate these estimates against satellite-derived measurements of river width in two large river basins (the Yukon and the Ohio). We then compare the extent of SWOT-observed rivers with the current publicly-available, global gauge network included in the Global Runoff Data Centre (GRDC) database to examine the impact of SWOT on the availability of river observation over continental and global scales. Results suggest that if SWOT observes 100 m wide rivers, river basins with areas greater than 50,000 km² will commonly be measured. If SWOT could observe 50 m wide rivers, then most 10,000 km² basins ...
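    A toy version of the width-screening logic is sketched below, using generic downstream hydraulic-geometry relations (width from discharge, discharge from drainage area); all coefficients are placeholders for illustration and are not the values calibrated in the study.

```python
# Illustrative screening of SWOT observability: W = a*Q^b (hydraulic geometry) with
# Q = c*A^d (discharge from drainage area). Coefficients are assumed, not calibrated.
def estimated_width_m(drainage_area_km2, a=7.2, b=0.5, c=0.1, d=0.8):
    discharge_m3s = c * drainage_area_km2 ** d
    return a * discharge_m3s ** b

for area in (1_000, 10_000, 50_000, 200_000):
    w = estimated_width_m(area)
    flag = "meets 100 m requirement" if w >= 100 else ("within 50-100 m goal" if w >= 50 else "below 50 m goal")
    print(f"A = {area:>7} km^2 -> W ~ {w:6.1f} m  [{flag}]")
```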

  13. Probable Maximum Earthquake Magnitudes for the Cascadia Subduction

    Science.gov (United States)

    Rong, Y.; Jackson, D. D.; Magistrale, H.; Goldfinger, C.

    2013-12-01

    The concept of maximum earthquake magnitude (mx) is widely used in seismic hazard and risk analysis. However, absolute mx lacks a precise definition and cannot be determined from a finite earthquake history. The surprising magnitudes of the 2004 Sumatra and the 2011 Tohoku earthquakes showed that most methods for estimating mx underestimate the true maximum if it exists. Thus, we introduced the alternate concept of mp(T), probable maximum magnitude within a time interval T. The mp(T) can be solved using theoretical magnitude-frequency distributions such as Tapered Gutenberg-Richter (TGR) distribution. The two TGR parameters, β-value (which equals 2/3 b-value in the GR distribution) and corner magnitude (mc), can be obtained by applying maximum likelihood method to earthquake catalogs with additional constraint from tectonic moment rate. Here, we integrate the paleoseismic data in the Cascadia subduction zone to estimate mp. The Cascadia subduction zone has been seismically quiescent since at least 1900. Fortunately, turbidite studies have unearthed a 10,000 year record of great earthquakes along the subduction zone. We thoroughly investigate the earthquake magnitude-frequency distribution of the region by combining instrumental and paleoseismic data, and using the tectonic moment rate information. To use the paleoseismic data, we first estimate event magnitudes, which we achieve by using the time interval between events, rupture extent of the events, and turbidite thickness. We estimate three sets of TGR parameters: for the first two sets, we consider a geographically large Cascadia region that includes the subduction zone, and the Explorer, Juan de Fuca, and Gorda plates; for the third set, we consider a narrow geographic region straddling the subduction zone. In the first set, the β-value is derived using the GCMT catalog. In the second and third sets, the β-value is derived using both the GCMT and paleoseismic data. Next, we calculate the corresponding mc
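    The sketch below illustrates one common way to turn the quantities named in the abstract (a tapered Gutenberg-Richter magnitude distribution plus an event rate) into a probable maximum magnitude for a time window; the parameter values and the 50%-exceedance convention are assumptions for illustration, not the study's Cascadia results.

```python
# Hedged sketch: events above a threshold magnitude occur as a Poisson process whose
# magnitudes follow a tapered Gutenberg-Richter (TGR) survival function in seismic moment;
# mp(T) is taken here as the magnitude with a 50% chance of being exceeded within T years.
import math

def moment(m):                                    # seismic moment (N*m) from moment magnitude
    return 10 ** (1.5 * m + 9.05)

def tgr_survival(m, m_t, beta, m_corner):
    """Fraction of events at or above magnitude m, for m >= threshold m_t."""
    M, Mt, Mc = moment(m), moment(m_t), moment(m_corner)
    return (Mt / M) ** beta * math.exp((Mt - M) / Mc)

def mp(T_years, rate_above_mt, m_t=5.0, beta=0.65, m_corner=8.8, p=0.5):
    """Magnitude exceeded with probability p within T years (bisection search)."""
    lo, hi = m_t, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        prob_exceed = 1.0 - math.exp(-rate_above_mt * tgr_survival(mid, m_t, beta, m_corner) * T_years)
        lo, hi = (mid, hi) if prob_exceed > p else (lo, mid)
    return 0.5 * (lo + hi)

print(f"mp(50 yr)  ~ M{mp(50, rate_above_mt=0.2):.1f}")    # illustrative output only
print(f"mp(500 yr) ~ M{mp(500, rate_above_mt=0.2):.1f}")
```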

  14. The extent of emphysema in patients with COPD

    DEFF Research Database (Denmark)

    Shaker, Saher Burhan; Stavngaard, Trine; Hestad, Marianne

    2009-01-01

    BACKGROUND AND AIMS: The global initiative for COPD (GOLD) adopted the degree of airway obstruction as a measure of the severity of the disease. The objective of this study was to apply CT to assess the extent of emphysema in patients with chronic obstructive pulmonary disease (COPD) and relate this extent to the GOLD stage of airway obstruction. METHODS: Patients were assessed by lung function measurement and visual and quantitative assessment of CT, from which the relative area of emphysema below -910 Hounsfield units (RA-910) was extracted. RESULTS: Mean RA-910 was 7.4% (n = 5) in patients with GOLD stage I, 17.0% (n = 119) in stage II, 24.2% (n = 79) in stage III and 33.9% (n = 6) in stage IV. Regression analysis showed a change in RA-910 of 7.8% with increasing severity according to GOLD stage (P < 0.001). Combined visual and quantitative assessment of CT showed that 184 patients had radiological evidence of emphysema, whereas 25 patients had no emphysema. CONCLUSION: The extent of emphysema increases with increasing severity of COPD and most patients with COPD have emphysema.

  15. Extent of hippocampal atrophy predicts degree of deficit in recall.

    Science.gov (United States)

    Patai, Eva Zita; Gadian, David G; Cooper, Janine M; Dzieciol, Anna M; Mishkin, Mortimer; Vargha-Khadem, Faraneh

    2015-10-13

    Which specific memory functions are dependent on the hippocampus is still debated. The availability of a large cohort of patients who had sustained relatively selective hippocampal damage early in life enabled us to determine which type of mnemonic deficit showed a correlation with extent of hippocampal injury. We assessed our patient cohort on a test that provides measures of recognition and recall that are equated for difficulty and found that the patients' performance on the recall tests correlated significantly with their hippocampal volumes, whereas their performance on the equally difficult recognition tests did not and, indeed, was largely unaffected regardless of extent of hippocampal atrophy. The results provide new evidence in favor of the view that the hippocampus is essential for recall but not for recognition.

  16. A report on the extent of radioisotope usage in Malaysia

    International Nuclear Information System (INIS)

    1983-04-01

    A market survey was carried out to study the extent of radioisotope usage in Malaysia. From the survey, the radioisotopes and their activities/quantities that are used in Industry, Medicine and Research were identified. The radioisotopes that are frequently needed or routinely used were also determined and this formed the basis of the recommendations put forward in this report. It is proposed that PUSPATI adopt the concept of a Distribution Centre in order to provide a service to the Malaysian community. (author)

  17. The extent and impact of outsourcing: evidence from Germany

    OpenAIRE

    Craig P. Aubuchon; Subhayu Bandyopadhyay; Sumon Bhaumik

    2012-01-01

    The authors use data from several sources, including plant-level data from the manufacturing sector in Germany, to expand the literature on outsourcing. They find that, in Germany, the extent of outsourcing among manufacturing industries is higher than among service industries and that the outsourcing intensity of these industries did not change much between 1995 and 2005. They also find a statistically significantly positive impact of industry-level outsourcing intensity on German plant-leve...

  18. Statistics of Radial Ship Extent as Seen by a Seeker

    Science.gov (United States)

    2014-06-01

    ... does not demand contributions from two angle bins to one extent bin, unlike the rectangle; this is a very big advantage of the ellipse model. However ... waveform that mimics the full length of a ship. This allows more economical use to be made of available false-target generation resources.

  19. To what extent can the nuclear public relations be effective?

    Energy Technology Data Exchange (ETDEWEB)

    Ohnishi, Teruaki [CRC Research Institute, Tokyo (Japan)

    1996-06-01

    The effect of public relations (PRs) on the public's attitude to nuclear energy was assessed using a model developed under the assumption that the extent of attitude change of the public by the PRs activity is essentially the same as that by the nuclear information released by the newsmedia. The attitude change of the public was quantitatively estimated by setting variables explicitly manifesting the activities such as the circulation of exclusive publicity and the area of advertising messages in the newspaper as parameters. The public's attitude became clear to have a nonlinear dependence on the amount of activity, the extent of its change being varied considerably with demographic classes. Under a given financial condition, the offer of PRs information to the people, as many as possible in a target region, in spite of its little force of appeal, was found to be more effective for the amelioration of public attitude than the repeated offer of the information to a limited member of the public. It also became clear that there exists the most effective media mix for the activity depending on the extent of target region and on the target class of demography, therefore, it is quite significant to determine beforehand the proper conditions for the activity to be executed, such a situation indicating the need for the introduction of nuclear PRs management. (Author).

  20. Regional Mapping of Plantation Extent Using Multisensor Imagery

    Science.gov (United States)

    Torbick, N.; Ledoux, L.; Hagen, S.; Salas, W.

    2016-12-01

    Industrial forest plantations are expanding rapidly across the tropics and monitoring extent is critical for understanding environmental and socioeconomic impacts. In this study, new, multisensor imagery were evaluated and integrated to extract the strengths of each sensor for mapping plantation extent at regional scales. Three distinctly different landscapes with multiple plantation types were chosen to consider scalability and transferability. These were Tanintharyi, Myanmar, West Kalimantan, Indonesia, and southern Ghana. Landsat-8 Operational Land Imager (OLI), Phased Array L-band Synthetic Aperture Radar-2 (PALSAR-2), and Sentinel-1A images were fused within a Classification and Regression Tree (CART) framework using random forest and high-resolution surveys. Multi-criteria evaluations showed both L-and C-band gamma nought γ° backscatter decibel (dB), Landsat reflectance ρλ, and texture indices were useful for distinguishing oil palm and rubber plantations from other land types. The classification approach identified 750,822 ha or 23% of the Taninathryi, Myanmar, and 216,086 ha or 25% of western West Kalimantan as plantation with very high cross validation accuracy. The mapping approach was scalable and transferred well across the different geographies and plantation types. As archives for Sentinel-1, Landsat-8, and PALSAR-2 continue to grow, mapping plantation extent and dynamics at moderate resolution over large regions should be feasible.
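    The classification step described above can be pictured with the short scikit-learn sketch below; the stacked per-pixel features are synthetic placeholders standing in for the Landsat-8 reflectance, PALSAR-2 and Sentinel-1 backscatter, and texture layers, so the numbers carry no physical meaning.

```python
# Minimal random-forest classification sketch with a synthetic multisensor feature table.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 600
X = np.column_stack([
    rng.normal(0.25, 0.05, n),   # placeholder: Landsat-8 red reflectance
    rng.normal(0.35, 0.08, n),   # placeholder: Landsat-8 NIR reflectance
    rng.normal(-8.0, 2.0, n),    # placeholder: PALSAR-2 L-band gamma0 (dB)
    rng.normal(-12.0, 2.5, n),   # placeholder: Sentinel-1 C-band gamma0 (dB)
    rng.normal(0.5, 0.15, n),    # placeholder: texture index
])
y = rng.integers(0, 3, n)        # placeholder labels: 0 = oil palm, 1 = rubber, 2 = other

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())   # ~chance here, since data are random
```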

  1. To what extent can the nuclear public relations be effective?

    International Nuclear Information System (INIS)

    Ohnishi, Teruaki

    1996-01-01

    The effect of public relations (PRs) on the public's attitude to nuclear energy was assessed using a model developed under the assumption that the extent of attitude change of the public by the PRs activity is essentially the same as that by the nuclear information released by the newsmedia. The attitude change of the public was quantitatively estimated by setting variables explicitly manifesting the activities such as the circulation of exclusive publicity and the area of advertising messages in the newspaper as parameters. The public's attitude became clear to have a nonlinear dependence on the amount of activity, the extent of its change being varied considerably with demographic classes. Under a given financial condition, the offer of PRs information to the people, as many as possible in a target region, in spite of its little force of appeal, was found to be more effective for the amelioration of public attitude than the repeated offer of the information to a limited member of the public. It also became clear that there exists the most effective media mix for the activity depending on the extent of target region and on the target class of demography, therefore, it is quite significant to determine beforehand the proper conditions for the activity to be executed, such a situation indicating the need for the introduction of nuclear PRs management. (Author)

  2. Exploring the Origin, Extent, and Future of Life

    Science.gov (United States)

    Bertka, Constance M.

    2009-09-01

    1. Astrobiology in societal context Constance Bertka; Part I. Origin of Life: 2. Emergence and the experimental pursuit of the origin of life Robert Hazen; 3. From Aristotle to Darwin, to Freeman Dyson: changing definitions of life viewed in historical context James Strick; 4. Philosophical aspects of the origin-of-life problem: the emergence of life and the nature of science Iris Fry; 5. The origin of terrestrial life: a Christian perspective Ernan McMullin; 6. The alpha and the omega: reflections on the origin and future of life from the perspective of Christian theology and ethics Celia Deane-Drummond; Part II. Extent of Life: 7. A biologist's guide to the Solar System Lynn Rothschild; 8. The quest for habitable worlds and life beyond the Solar System Carl Pilcher; 9. A historical perspective on the extent and search for life Steven J. Dick; 10. The search for extraterrestrial life: epistemology, ethics, and worldviews Mark Lupisella; 11. The implications of discovering extraterrestrial life: different searches, different issues Margaret S. Race; 12. God, evolution, and astrobiology Cynthia S. W. Crysdale; Part III. Future of Life: 13. Planetary ecosynthesis on Mars: restoration ecology and environmental ethics Christopher P. McKay; 14. The trouble with intrinsic value: an ethical primer for astrobiology Kelly C. Smith; 15. God's preferential option for life: a Christian perspective on astrobiology Richard O. Randolph; 16. Comparing stories about the origin, extent, and future of life: an Asian religious perspective Francisca Cho; Index.

  3. Monitoring the Extent of Forests on National to Global Scales

    Science.gov (United States)

    Townshend, J.; Townshend, J.; Hansen, M.; DeFries, R.; DeFries, R.; Sohlberg, R.; Desch, A.; White, B.

    2001-05-01

    Information on forest extent and change is important for many purposes, including understanding the global carbon cycle and managing natural resources. International statistics on forest extent are generated using many different sources often producing inconsistent results spatially and through time. Results will be presented comparing forest extent derived from the recent global Food and Agricultural Organization's (FAO) FRA 2000 report with products derived using wall-to-wall Landsat, AVHRR and MODIS data sets. The remotely sensed data sets provide consistent results in terms of total area despite considerable differences in spatial resolution. Although the location of change can be satisfactorily detected with all three remotely sensed data sets, reliable measurement of change can only be achieved through use of Landsat-resolution data. Contrary to the FRA 2000 results we find evidence of an increase in deforestation rates in the late 1990s in several countries. Also we have found evidence of considerable changes in some countries for which little or no change is reported by FAO. The results indicate the benefits of globally consistent analyses of forest cover based on multiscale remotely sensed data sets rather than a reliance on statistics generated by individual countries with very different definitions of forest and methods used to derive them.

  4. The extent of emphysema in patients with COPD.

    Science.gov (United States)

    Shaker, Saher Burhan; Stavngaard, Trine; Hestad, Marianne; Bach, Karen Skjoelstrup; Tonnesen, Philip; Dirksen, Asger

    2009-01-01

    The global initiative for COPD (GOLD) adopted the degree of airway obstruction as a measure of the severity of the disease. The objective of this study was to apply CT to assess the extent of emphysema in patients with chronic obstructive pulmonary disease (COPD) and relate this extent to the GOLD stage of airway obstruction. We included 209 patients with COPD, defined as a reduced FEV(1)/FVC ratio together with a smoking history of at least 20 pack-years. Patients were assessed by lung function measurement and visual and quantitative assessment of CT, from which the relative area of emphysema below -910 Hounsfield units (RA-910) was extracted. Mean RA-910 was 7.4% (n = 5) in patients with GOLD stage I, 17.0% (n = 119) in stage II, 24.2% (n = 79) in stage III and 33.9% (n = 6) in stage IV. Regression analysis showed a change in RA-910 of 7.8% with increasing severity according to GOLD stage (P < 0.001). Combined visual and quantitative assessment of CT showed that 184 patients had radiological evidence of emphysema, whereas 25 patients had no emphysema. The extent of emphysema increases with increasing severity of COPD and most patients with COPD have emphysema. Tissue destruction by emphysema is therefore an important determinant of disease severity in COPD.

  5. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast

  6. Maximum power operation of interacting molecular motors

    DEFF Research Database (Denmark)

    Golubeva, Natalia; Imparato, Alberto

    2013-01-01

    We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.

  7. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  8. On the maximum drawdown during speculative bubbles

    Science.gov (United States)

    Rotundo, Giulia; Navarra, Mauro

    2007-08-01

    A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outlier with respect to the bulk of drawdown price movement distribution. This paper goes on deeper in the analysis providing a further characterization of the rising part of such selected bubbles through the examination of drawdown and maximum drawdown movement of indices prices. The analysis of drawdown duration is also performed and it is the core of the risk measure estimated here.

  9. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  10. Conductivity maximum in a charged colloidal suspension

    Energy Technology Data Exchange (ETDEWEB)

    Bastea, S

    2009-01-27

    Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.

  11. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  12. Maximum Temperature Detection System for Integrated Circuits

    Science.gov (United States)

    Frankiewicz, Maciej; Kos, Andrzej

    2015-03-01

    The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.

  13. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  14. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.

  15. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  16. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  17. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  18. Objective Bayesianism and the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    Jon Williamson

    2013-09-01

    Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.

  19. Field sampling for monitoring, migration and defining the areal extent of chemical contamination

    International Nuclear Information System (INIS)

    Thomas, J.M.; Skalski, J.R.; Eberhardt, L.L.; Simmons, M.A.

    1984-01-01

    As part of two studies funded by the U.S. Nuclear Regulatory Commission and the USEPA, the authors have investigated field sampling strategies and compositing as a means of detecting spills or migration at commercial low-level radioactive and chemical waste disposal sites and bioassays for detecting contamination at chemical waste sites. Compositing (pooling samples) for detection is discussed first, followed by the development of a statistical test to determine whether any component of a composite exceeds a prescribed maximum acceptable level. Subsequently, the authors explore the question of optimal field sampling designs and present the features of a microcomputer program designed to show the difficulties in constructing efficient field designs and using compositing schemes. Finally, they propose the use of bioassays as an adjunct or replacement for chemical analysis as a means of detecting and defining the areal extent of chemical migration
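    The compositing logic can be illustrated with a deliberately simple screening rule (not the authors' statistical test): if n samples are pooled, a single member can exceed the limit only if the composite reads above limit/n, with a rough allowance for analytical error. All numbers below are assumptions.

```python
# Conservative composite screening sketch: retest individual samples only when the
# composite result, corrected for a one-sided analytical-error allowance, exceeds limit/n.
def composite_screen_ok(composite_conc, n_samples, limit, analytic_cv=0.15, z=1.645):
    """True if the composite is consistent with every member being below `limit`."""
    threshold = limit / n_samples
    allowance = z * analytic_cv * composite_conc   # crude upper bound on measurement error
    return composite_conc - allowance <= threshold

# Example: 8 soil samples pooled, composite reads 0.9 ppm against a 10 ppm limit.
print(composite_screen_ok(composite_conc=0.9, n_samples=8, limit=10.0))   # True -> no retest needed
```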

  20. Computing multi-species chemical equilibrium with an algorithm based on the reaction extents

    DEFF Research Database (Denmark)

    Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.

    2013-01-01

    A mathematical model for the solution of a set of chemical equilibrium equations in a multi-species and multiphase chemical system is described. The computer-aided solution of the model is achieved by means of a Newton-Raphson method enhanced with a line-search scheme, which deals with the non-negative constraints. The residual function, representing the distance to the equilibrium, is defined from the chemical potential (or Gibbs energy) of the chemical system. Local minimums are potentially avoided by the prioritization of the aqueous reactions with respect to the heterogeneous reactions. The formation and release of gas bubbles is taken into account in the model, limiting the concentration of volatile aqueous species to a maximum value, given by the gas solubility constant. The reaction extents are used as state variables for the numerical method. As a result, the accepted solution satisfies the charge ...
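    A stripped-down version of the numerical idea, for a single aqueous reaction, is sketched below: the reaction extent is the state variable, the residual is the gap between the log activity product and log K, and the Newton step is damped so that all concentrations stay positive. Stoichiometry, K and initial amounts are invented for illustration, and activity corrections are ignored.

```python
# Damped Newton iteration on a single reaction extent with a positivity-preserving line search.
import numpy as np

nu = np.array([-1.0, 1.0, 1.0])        # A <-> B + C (stoichiometric coefficients)
c0 = np.array([1.0, 1e-6, 1e-6])       # initial molar concentrations (assumed)
logK = -3.0                            # assumed equilibrium constant

def residual(xi):
    c = c0 + nu * xi
    return float(np.sum(nu * np.log10(c)) - logK)   # distance from equilibrium

xi = 0.0
for _ in range(50):
    r = residual(xi)
    if abs(r) < 1e-10:
        break
    h = 1e-8
    drdxi = (residual(xi + h) - r) / h              # numerical derivative
    step, lam = -r / drdxi, 1.0
    while np.any(c0 + nu * (xi + lam * step) <= 0): # line search: keep concentrations positive
        lam *= 0.5
    xi += lam * step

print("equilibrium extent:", xi, "concentrations:", c0 + nu * xi)
```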

  1. A study on the strength of leg extention and leg curl exercise measured with the "Tremax System" and the "ADR" (Aero Dynamic Resistance System)

    OpenAIRE

    安藤, 勝英

    1995-01-01

    This is a study on the nature of the extensor and flexor muscles in the leg extension and leg curl exercises. The strength of the static muscles was measured with the "Tremax system" and for the strength of the dynamic muscles the "ADR" was used. Measuring the strength of the extensor muscles at bars no. 1 to 4 of the Tremax System, it reaches a maximum at bar no. 2 (extension 60°) but declines sharply when approaching extension 0°. Compared to the extensor muscles, the flexor muscles show...

  2. A Global Geospatial Database of 5000+ Historic Flood Event Extents

    Science.gov (United States)

    Tellman, B.; Sullivan, J.; Doyle, C.; Kettner, A.; Brakenridge, G. R.; Erickson, T.; Slayback, D. A.

    2017-12-01

    A key dataset that is missing for global flood model validation and understanding historic spatial flood vulnerability is a global historical geo-database of flood event extents. Decades of earth observing satellites and cloud computing now make it possible not only to detect floods in near real time, but to run these water detection algorithms back in time to capture the spatial extent of large numbers of specific events. This talk will show results from the largest global historical flood database developed to date. We use the Dartmouth Flood Observatory flood catalogue to map over 5000 floods (from 1985-2017) using MODIS, Landsat, and Sentinel-1 satellites. All events are available for public download via the Earth Engine Catalogue and via a website that allows the user to query floods by area or date, assess population exposure trends over time, and download flood extents in geospatial format. In this talk, we will highlight major trends in global flood exposure per continent, land use type, and eco-region. We will also make suggestions on how to use this dataset in conjunction with other global datasets to i) validate global flood models, ii) assess the potential role of climatic change in flood exposure, iii) understand how urbanization and other land change processes may influence spatial flood exposure, iv) assess how innovative flood interventions (e.g. wetland restoration) influence flood patterns, v) control for event magnitude to assess the role of social vulnerability and damage assessment, and vi) aid in rapid probabilistic risk assessment to enable microinsurance markets. Authors on this paper are already using the database for the latter three applications and will show examples of wetland intervention analysis in Argentina, social vulnerability analysis in the USA, and microinsurance in India.

  3. Extent of pyrolysis impacts on fast pyrolysis biochar properties.

    Science.gov (United States)

    Brewer, Catherine E; Hu, Yan-Yan; Schmidt-Rohr, Klaus; Loynachan, Thomas E; Laird, David A; Brown, Robert C

    2012-01-01

    A potential concern about the use of fast pyrolysis rather than slow pyrolysis biochars as soil amendments is that they may contain high levels of bioavailable C due to short particle residence times in the reactors, which could reduce the stability of biochar C and cause nutrient immobilization in soils. To investigate this concern, three corn (Zea mays L.) stover fast pyrolysis biochars prepared using different reactor conditions were chemically and physically characterized to determine their extent of pyrolysis. These biochars were also incubated in soil to assess their impact on soil CO2 emissions, nutrient availability, microorganism population growth, and water retention capacity. Elemental analysis and quantitative solid-state 13C nuclear magnetic resonance spectroscopy showed variation in O functional groups (associated primarily with carbohydrates) and aromatic C, which could be used to define extent of pyrolysis. A 24-wk incubation performed using a sandy soil amended with 0.5 wt% of corn stover biochar showed a small but significant decrease in soil CO2 emissions and a decrease in the bacteria:fungi ratios with extent of pyrolysis. Relative to the control soil, biochar-amended soils had small increases in CO2 emissions and extractable nutrients, but similar microorganism populations, extractable NO3 levels, and water retention capacities. Corn stover amendments, by contrast, significantly increased soil CO2 emissions and microbial populations, and reduced extractable NO3. These results indicate that C in fast pyrolysis biochar is stable in soil environments and will not appreciably contribute to nutrient immobilization. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  4. Detecting the Extent of Eutectoid Transformation in U-10Mo

    Energy Technology Data Exchange (ETDEWEB)

    Devaraj, Arun [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Jana, Saumyadeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McInnis, Colleen A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lombardo, Nicholas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Joshi, Vineet V. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sweet, Lucas E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Manandhar, Sandeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lavender, Curt A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-08-31

    During eutectoid transformation of U-10Mo alloy, the uniform metastable γ UMo phase is expected to transform to a mixture of α-U and γ′-U2Mo phases. The presence of transformation products in the final U-10Mo fuel, especially the α phase, is considered detrimental for fuel irradiation performance, so it is critical to accurately evaluate the extent of transformation in the final U-10Mo alloy. This phase transformation can cause a volume change that induces a density change in the final alloy. To understand this density and volume change, we developed a theoretical model to calculate the volume expansion and resultant density change of U-10Mo alloy as a function of the extent of eutectoid transformation. Based on the theoretically calculated density change for 0 to 100% transformation, we conclude that an experimental density measurement system will be challenging to employ to reliably detect and quantify the extent of transformation. Subsequently, to assess the ability of various methods to detect the transformation in U-10Mo, we annealed U-10Mo alloy samples at 500°C for various times to achieve low, medium, and high extents of transformation. After the heat treatment at 500°C, the samples were metallographically polished and subjected to optical microscopy and x-ray diffraction (XRD) methods. Based on our assessment, optical microscopy and image processing can be used to determine the transformed area fraction, which can then be correlated with the α phase volume fraction measured by XRD analysis. XRD analysis of U-10Mo aged at 500°C detected only α phase and no γ′ was detected. To further validate the XRD results, atom probe tomography (APT) was used to understand the composition of transformed regions in U-10Mo alloys aged at 500°C for 10 hours. Based on the APT results, the lamellar transformation product was found to comprise α phase with close to 0 at% Mo and γ phase with 28–32 at% Mo, and the Mo concentration was highest at the α/γ interface.

  5. Extent of the Immirzi ambiguity in quantum general relativity

    International Nuclear Information System (INIS)

    Marugan, Guillermo A Mena

    2002-01-01

    The Ashtekar-Barbero formulation of general relativity admits a one-parameter family of canonical transformations that preserves the expressions of the Gauss and diffeomorphism constraints. The loop quantization of the connection formalism based on each of these canonical sets leads to different predictions. This phenomenon is called the Immirzi ambiguity. It has been recently argued that this ambiguity could be generalized to the extent of a spatially dependent function instead of a parameter. This would ruin the predictability of loop quantum gravity. We prove that such expectations are not realized, so that the Immirzi ambiguity introduces exclusively a freedom in the choice of a real number. (letter to the editor)

  6. Determination of extent of surgical intervention for endometrial carcinoma

    International Nuclear Information System (INIS)

    Smakhtina, O.L.; Nugmanova, M.I.; Nigaj, S.V.

    1986-01-01

    Clinical, cytologic, histologic and X-ray procedures were used in examining 120 patients with endometrial carcinoma. The results of pre- and intraoperative determination of clinical stage were compared in 65 cases of uterine extirpation with appendages and lymphadenectomy. Errors in preoperative identification of the extent of tumor expansion were made in 9 cases (13.8±4.3%). It was found that determination of the site and extent of the tumor argues for hysterocervico-angiolymphography, whereas identification of tumor pattern and degree of cell differentiation calls for cytologic and histologic assays

  7. Detecting the Extent of Eutectoid Transformation in U-10Mo

    International Nuclear Information System (INIS)

    Devaraj, Arun; Jana, Saumyadeep; McInnis, Colleen A.; Lombardo, Nicholas J.; Joshi, Vineet V.; Sweet, Lucas E.; Manandhar, Sandeep; Lavender, Curt A.

    2016-01-01

    During eutectoid transformation of U-10Mo alloy, the uniform metastable γ UMo phase is expected to transform to a mixture of α-U and γ′-U2Mo phases. The presence of transformation products in the final U-10Mo fuel, especially the α phase, is considered detrimental for fuel irradiation performance, so it is critical to accurately evaluate the extent of transformation in the final U-10Mo alloy. This phase transformation can cause a volume change that induces a density change in the final alloy. To understand this density and volume change, we developed a theoretical model to calculate the volume expansion and resultant density change of U-10Mo alloy as a function of the extent of eutectoid transformation. Based on the theoretically calculated density change for 0 to 100% transformation, we conclude that an experimental density measurement system will be challenging to employ to reliably detect and quantify the extent of transformation. Subsequently, to assess the ability of various methods to detect the transformation in U-10Mo, we annealed U-10Mo alloy samples at 500°C for various times to achieve low, medium, and high extents of transformation. After the heat treatment at 500°C, the samples were metallographically polished and subjected to optical microscopy and x-ray diffraction (XRD) methods. Based on our assessment, optical microscopy and image processing can be used to determine the transformed area fraction, which can then be correlated with the α phase volume fraction measured by XRD analysis. XRD analysis of U-10Mo aged at 500°C detected only α phase and no γ′ was detected. To further validate the XRD results, atom probe tomography (APT) was used to understand the composition of transformed regions in U-10Mo alloys aged at 500°C for 10 hours. Based on the APT results, the lamellar transformation product was found to comprise α phase with close to 0 at% Mo and γ phase with 28-32 at% Mo, and the Mo concentration was highest at the α/γ interface.

  8. Measurement of extent of intense ion beam charge neutralization

    Energy Technology Data Exchange (ETDEWEB)

    Engelko, V [Efremov Institute of Electrophysical Apparatus, St. Petersburg (Russian Federation); Giese, H; Schalk, S [Forschungszentrum Karlsruhe (Germany). INR

    1997-12-31

    Various diagnostic tools were employed to study and optimize the extent of space charge neutralization in the pulsed intense proton beam facility PROFA, including Langmuir probes, capacitive probes, and a novel type of three-electrode collector. The latter not only allows ion and electron beam current densities to be measured in a high magnetic field environment, but also allows the density spectrum of the beam electrons to be deduced. Appropriate operating conditions were identified to attain complete space charge neutralization. (author). 5 figs., 4 refs.

  9. Rapid Estimates of Rupture Extent for Large Earthquakes Using Aftershocks

    Science.gov (United States)

    Polet, J.; Thio, H. K.; Kremer, M.

    2009-12-01

    The spatial distribution of aftershocks is closely linked to the rupture extent of the mainshock that preceded them and a rapid analysis of aftershock patterns therefore has potential for use in near real-time estimates of earthquake impact. The correlation between aftershocks and slip distribution has frequently been used to estimate the fault dimensions of large historic earthquakes for which no, or insufficient, waveform data is available. With the advent of earthquake inversions that use seismic waveforms and geodetic data to constrain the slip distribution, the study of aftershocks has recently been largely focused on enhancing our understanding of the underlying mechanisms in a broader earthquake mechanics/dynamics framework. However, in a near real-time earthquake monitoring environment, in which aftershocks of large earthquakes are routinely detected and located, these data may also be effective in determining a fast estimate of the mainshock rupture area, which would aid in the rapid assessment of the impact of the earthquake. We have analyzed a considerable number of large recent earthquakes and their aftershock sequences and have developed an effective algorithm that determines the rupture extent of a mainshock from its aftershock distribution, in a fully automatic manner. The algorithm automatically removes outliers by spatial binning, and subsequently determines the best fitting “strike” of the rupture and its length by projecting the aftershock epicenters onto a set of lines that cross the mainshock epicenter with incremental azimuths. For strike-slip or large dip-slip events, for which the surface projection of the rupture is recti-linear, the calculated strike correlates well with the strike of the fault and the corresponding length, determined from the distribution of aftershocks projected onto the line, agrees well with the rupture length. In the case of a smaller dip-slip rupture with an aspect ratio closer to 1, the procedure gives a measure
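
    A minimal sketch of the projection step described above: aftershock epicenters are projected onto trial strike directions through the mainshock epicenter, the direction with the tightest perpendicular scatter is taken as the strike, and the rupture length is read from the along-strike spread. The scoring rule, the flat-earth coordinate conversion, and the percentile-based length are simplifying assumptions, and the paper's spatial-binning outlier removal is omitted.

      import numpy as np

      def rupture_extent_from_aftershocks(mainshock, aftershocks, az_step_deg=5.0):
          """mainshock: (lon, lat); aftershocks: array of shape (n, 2) with (lon, lat)."""
          # Crude flat-earth conversion to kilometres around the mainshock (assumed adequate locally).
          dx = (aftershocks[:, 0] - mainshock[0]) * 111.32 * np.cos(np.radians(mainshock[1]))
          dy = (aftershocks[:, 1] - mainshock[1]) * 110.57
          best = None
          for az in np.arange(0.0, 180.0, az_step_deg):
              ue, un = np.sin(np.radians(az)), np.cos(np.radians(az))  # unit vector along trial strike
              along = dx * ue + dy * un
              perp = dx * un - dy * ue
              score = np.var(perp)  # tighter perpendicular scatter suggests a better-fitting strike
              if best is None or score < best[0]:
                  length = np.percentile(along, 97.5) - np.percentile(along, 2.5)
                  best = (score, az, length)
          return {"strike_deg": best[1], "length_km": best[2]}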

  10. Extent and application of patient diaries in Austria

    DEFF Research Database (Denmark)

    Heindl, Patrik; Bachlechner, Adelbert; Nydahl, Peter

    2017-01-01

    Background: Diaries written for patients in the intensive care unit (ICU) are offered in many European countries. In Austria, ICU diaries have been relatively unknown, but since 2012, they have started to emerge. Aim: The aim of this study was to explore the extent and application of ICU diaries...... in Austria in 2015. Method: The study had a prospective multiple methods design of survey and interviews. All ICUs in Austria were surveyed in 2015 to identify which ICUs used diaries. ICUs using diaries were selected for semi-structured key-informant telephone interviews on the application of ICU diaries...

  11. Integrating remotely sensed surface water extent into continental scale hydrology.

    Science.gov (United States)

    Revilla-Romero, Beatriz; Wanders, Niko; Burek, Peter; Salamon, Peter; de Roo, Ad

    2016-12-01

    In hydrological forecasting, data assimilation techniques are employed to improve estimates of initial conditions and to update incorrect model states with observational data. However, the limited availability of continuous and up-to-date ground streamflow data is one of the main constraints for large-scale flood forecasting models. This is the first study that assesses the impact of assimilating daily remotely sensed surface water extent at a 0.1° × 0.1° spatial resolution, derived from the Global Flood Detection System (GFDS), into a global rainfall-runoff model covering large ungauged areas at the continental scale in Africa and South America. Surface water extent is observed using a range of passive microwave remote sensors. The methodology uses the brightness temperature, as water bodies have a lower emissivity. In a time series, the satellite signal is expected to vary with changes in water surface, and anomalies can be correlated with flood events. The Ensemble Kalman Filter (EnKF), a Monte-Carlo implementation of data assimilation, is used here by applying random sampling perturbations to the precipitation inputs to account for uncertainty, obtaining ensemble streamflow simulations from the LISFLOOD model. Results of the updated streamflow simulation are compared to baseline simulations without assimilation of the satellite-derived surface water extent. Validation is done at over 100 in situ river gauges using daily streamflow observations on the African and South American continents over a one-year period. Some of the more commonly used metrics in hydrology were calculated: KGE', NSE, PBIAS%, R2, RMSE, and VE. Results show that, for example, the NSE score improved at 61 out of 101 stations, with significant improvements in both the timing and volume of the flow peaks, whereas validation at gauges located in lowland jungle showed the poorest performance, mainly due to the closed-forest influence on the satellite signal retrieval. The conclusion is that
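
    For readers unfamiliar with the EnKF machinery referred to above, the sketch below shows a generic stochastic ensemble Kalman update in which model states are corrected toward an observed surface-water-extent vector. The observation operator, error variance, and array shapes are placeholders; this is not the LISFLOOD implementation.

      import numpy as np

      def enkf_update(ensemble, obs, obs_operator, obs_err_var, rng=None):
          """ensemble: (n_state, n_members); obs: (n_obs,) satellite-derived water extent values."""
          rng = np.random.default_rng() if rng is None else rng
          n_members = ensemble.shape[1]
          Hx = obs_operator(ensemble)                        # predicted observations, (n_obs, n_members)
          X = ensemble - ensemble.mean(axis=1, keepdims=True)
          Y = Hx - Hx.mean(axis=1, keepdims=True)
          P_xy = X @ Y.T / (n_members - 1)                   # state-observation covariance
          P_yy = Y @ Y.T / (n_members - 1) + obs_err_var * np.eye(len(obs))
          K = P_xy @ np.linalg.inv(P_yy)                     # Kalman gain
          for m in range(n_members):                         # perturbed-observation update per member
              d = obs + rng.normal(0.0, np.sqrt(obs_err_var), size=len(obs))
              ensemble[:, m] += K @ (d - Hx[:, m])
          return ensemble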

  12. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
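
    The optimum described above can be illustrated numerically: the water supply to the leaf is the integral of the cavitation-limited conductivity between the soil and leaf water potentials, and the maximum sustainable transpiration is the plateau of that supply curve. The Weibull vulnerability curve, its parameter values, and the 95%-of-plateau criterion below are assumptions for illustration.

      import numpy as np

      def transpiration_supply_curve(k_max, b, c, psi_soil=-0.5, psi_min=-10.0, n=200):
          """E(psi_leaf) = integral of k(psi) d(psi) from psi_leaf to psi_soil, with an assumed
          Weibull vulnerability curve k(psi) = k_max * exp(-(-psi/b)**c); potentials in MPa (< 0)."""
          k = lambda psi: k_max * np.exp(-(-psi / b) ** c)
          psi_leaf = np.linspace(psi_soil, psi_min, n)
          E = np.array([np.trapz(k(np.linspace(pl, psi_soil, 200)), np.linspace(pl, psi_soil, 200))
                        for pl in psi_leaf])
          return psi_leaf, E

      # The supply curve saturates as cavitation shuts down the xylem: the plateau approximates the
      # maximum sustainable transpiration, and the leaf water potential where E first reaches 95% of
      # that plateau is one operational choice of "optimal" potential (the cutoff is an assumption).
      psi_leaf, E = transpiration_supply_curve(k_max=2.0, b=2.5, c=3.0)
      E_max = E[-1]
      psi_opt = psi_leaf[np.argmax(E >= 0.95 * E_max)]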

  13. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Mikhail, Zelikin

    2016-01-01

    A theorem analogous to Pontryagin's maximum principle for multiple integrals is proved. Unlike the usual maximum principle, the maximum should be taken not over all matrices, but only over matrices of rank one. Examples are given.

  14. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  15. Community senior first aid training in Western Australia: its extent and effect on knowledge and skills.

    Science.gov (United States)

    Lynch, Dania M; Gennat, Hanni C; Celenza, Tony; Jacobs, Ian G; O'Brien, Debra; Jelinek, George A

    2006-04-01

    To define the extent of Senior First Aid training in a sample of the Western Australian community, and to evaluate the effect of previous training on first aid knowledge and skills. A telephone survey of a random sample from suburban Perth and rural Western Australia; and practical assessment of first aid skills in a subsample of those surveyed. 30.4% of respondents had completed a Senior First Aid certificate. Trained individuals performed consistently better in theoretical tests (p=0.0001) and in the practical management of snakebite (p=0.021) than untrained individuals. However, many volunteers, both trained and untrained, demonstrated poor skills in applying pressure immobilisation bandaging and splinting the limb adequately, despite electing to do so in theory. Overall knowledge and performance of first aid skills by the community are poor, but are improved by first aid training courses.

  16. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  17. Maximum Profit Configurations of Commercial Engines

    Directory of Open Access Journals (Sweden)

    Yiran Chen

    2011-06-01

    An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state, market equilibrium, is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model with respect to the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.

  18. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  19. Modeling Mediterranean Ocean climate of the Last Glacial Maximum

    Directory of Open Access Journals (Sweden)

    U. Mikolajewicz

    2011-03-01

    A regional ocean general circulation model of the Mediterranean is used to study the climate of the Last Glacial Maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions complicated. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of salinity in the Mediterranean in spite of reduced net evaporation.

  20. Flood extent mapping for Namibia using change detection and thresholding with SAR

    International Nuclear Information System (INIS)

    Long, Stephanie; Fatoyinbo, Temilola E; Policelli, Frederick

    2014-01-01

    A new method for flood detection, change detection and thresholding (CDAT), was used with synthetic aperture radar (SAR) imagery to delineate the extent of flooding for the Chobe floodplain in the Caprivi region of Namibia. This region experiences annual seasonal flooding and has seen a recent renewal of severe flooding after a long dry period in the 1990s. Flooding in this area has caused loss of life and livelihoods for the surrounding communities and has caught the attention of disaster relief agencies. There is a need for flood extent mapping techniques that can be used to process images quickly, providing near real-time flooding information to relief agencies. ENVISAT/ASAR and Radarsat-2 images were acquired for several flooding seasons from February 2008 to March 2013. The CDAT method was used to determine flooding from these images and includes the use of image subtraction, decision-based classification with threshold values, and segmentation of SAR images. The total extent of flooding determined for 2009, 2011 and 2012 was about 542 km2, 720 km2, and 673 km2, respectively. Pixels determined to be flooded in vegetation were typically <0.5% of the entire scene, with the exception of 2009, where the detection of flooding in vegetation was much greater (almost one third of the total flooded area). The time to maximum flooding for the 2013 flood season was determined to be about 27 days. Landsat water classification was used to compare the results from the new CDAT with SAR method; the results show good spatial agreement with Landsat scenes. (paper)
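
    A bare-bones version of the change detection and thresholding idea is sketched below, assuming co-registered backscatter images in decibels. The threshold values are illustrative placeholders, not the calibrated values used in the study, and the flooded-vegetation branch is omitted.

      import numpy as np

      def cdat_flood_mask(flood_scene_db, reference_scene_db, change_thresh_db=-3.0, water_thresh_db=-15.0):
          """Flag pixels whose backscatter dropped markedly relative to a dry reference scene and is
          low in absolute terms (smooth open water scatters little energy back to the sensor)."""
          change = flood_scene_db - reference_scene_db
          return (change < change_thresh_db) & (flood_scene_db < water_thresh_db)

      # Example with small synthetic scenes (dB values):
      ref = np.full((3, 3), -8.0)
      flood = np.array([[-8.0, -20.0, -7.5],
                        [-19.0, -21.0, -8.2],
                        [-8.1, -18.5, -7.9]])
      print(cdat_flood_mask(flood, ref))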

  1. Reassessing the extent of the Q classification for containment paint

    International Nuclear Information System (INIS)

    Spires, G.

    1995-01-01

    A mounting number of site-specific paint debris transport and screen clogging analyses submitted to justify substandard containment paint work have been deemed persuasive by virtue of favorable U.S. Nuclear Regulatory Commission safety evaluation report (SER) findings. These lay a strong foundation for a standardized approach to redefining the extent to which paint in containment needs to be considered "Q." This information justifies an initiative by licensees to roll back paint work quality commitments made at the design phase. This paper questions the validity of the basic premise that all primary containment paint can significantly compromise core and containment cooling [emergency core cooling system/engineered safeguard feature (ECCS/ESF)]. It is posited that the physical extent of painted containment surfaces for which extant material qualification and quality control (QC) structures need apply can be limited to zones relatively proximate to ECCS/ESF suction points. For other painted containment surfaces, simplified criteria should be allowed

  2. The regional extent of suppression: strabismics versus nonstrabismics.

    Science.gov (United States)

    Babu, Raiju Jacob; Clavagnier, Simon R; Bobier, William; Thompson, Benjamin; Hess, Robert F

    2013-10-09

    Evidence is accumulating that suppression may be the cause of amblyopia rather than a secondary consequence of mismatched retinal images. For example, treatment interventions that target suppression may lead to better binocular and monocular outcomes. Furthermore, it has recently been demonstrated that the measurement of suppression may have prognostic value for patching therapy. For these reasons, the measurement of suppression in the clinic needs to be improved beyond the methods that are currently available, which provide a binary outcome. We describe a novel quantitative method for measuring the regional extent of suppression that is suitable for clinical use. The method involves a dichoptic perceptual matching procedure at multiple visual field locations. We compare a group of normal controls (mean age: 28 ± 5 years); a group with strabismic amblyopia (four with microesotropia, five with esotropia, and one with exotropia; mean age: 35 ± 10 years); and a group with nonstrabismic anisometropic amblyopia (mean age: 33 ± 12 years). The extent and magnitude of suppression was similar for observers with strabismic and nonstrabismic amblyopia. Suppression was strongest within the central field and extended throughout the 20° field that we measured. Suppression extends throughout the central visual field in both strabismic and anisometropic forms of amblyopia. The strongest suppression occurs within the region of the visual field corresponding to the fovea of the fixing eye.

  3. Corticocortical feedback increases the spatial extent of normalization.

    Science.gov (United States)

    Nassi, Jonathan J; Gómez-Laberge, Camille; Kreiman, Gabriel; Born, Richard T

    2014-01-01

    Normalization has been proposed as a canonical computation operating across different brain regions, sensory modalities, and species. It provides a good phenomenological description of non-linear response properties in primary visual cortex (V1), including the contrast response function and surround suppression. Despite its widespread application throughout the visual system, the underlying neural mechanisms remain largely unknown. We recently observed that corticocortical feedback contributes to surround suppression in V1, raising the possibility that feedback acts through normalization. To test this idea, we characterized area summation and contrast response properties in V1 with and without feedback from V2 and V3 in alert macaques and applied a standard normalization model to the data. Area summation properties were well explained by a form of divisive normalization, which computes the ratio between a neuron's driving input and the spatially integrated activity of a "normalization pool." Feedback inactivation reduced surround suppression by shrinking the spatial extent of the normalization pool. This effect was independent of the gain modulation thought to mediate the influence of contrast on area summation, which remained intact during feedback inactivation. Contrast sensitivity within the receptive field center was also unaffected by feedback inactivation, providing further evidence that feedback participates in normalization independent of the circuit mechanisms involved in modulating contrast gain and saturation. These results suggest that corticocortical feedback contributes to surround suppression by increasing the visuotopic extent of normalization and, via this mechanism, feedback can play a critical role in contextual information processing.

  4. Corticocortical feedback increases the spatial extent of normalization

    Science.gov (United States)

    Nassi, Jonathan J.; Gómez-Laberge, Camille; Kreiman, Gabriel; Born, Richard T.

    2014-01-01

    Normalization has been proposed as a canonical computation operating across different brain regions, sensory modalities, and species. It provides a good phenomenological description of non-linear response properties in primary visual cortex (V1), including the contrast response function and surround suppression. Despite its widespread application throughout the visual system, the underlying neural mechanisms remain largely unknown. We recently observed that corticocortical feedback contributes to surround suppression in V1, raising the possibility that feedback acts through normalization. To test this idea, we characterized area summation and contrast response properties in V1 with and without feedback from V2 and V3 in alert macaques and applied a standard normalization model to the data. Area summation properties were well explained by a form of divisive normalization, which computes the ratio between a neuron's driving input and the spatially integrated activity of a “normalization pool.” Feedback inactivation reduced surround suppression by shrinking the spatial extent of the normalization pool. This effect was independent of the gain modulation thought to mediate the influence of contrast on area summation, which remained intact during feedback inactivation. Contrast sensitivity within the receptive field center was also unaffected by feedback inactivation, providing further evidence that feedback participates in normalization independent of the circuit mechanisms involved in modulating contrast gain and saturation. These results suggest that corticocortical feedback contributes to surround suppression by increasing the visuotopic extent of normalization and, via this mechanism, feedback can play a critical role in contextual information processing. PMID:24910596

  5. Maximum Torque and Momentum Envelopes for Reaction Wheel Arrays

    Science.gov (United States)

    Markley, F. Landis; Reynolds, Reid G.; Liu, Frank X.; Lebsock, Kenneth L.

    2009-01-01

    Spacecraft reaction wheel maneuvers are limited by the maximum torque and/or angular momentum that the wheels can provide. For an n-wheel configuration, the torque or momentum envelope can be obtained by projecting the n-dimensional hypercube, representing the domain boundary of individual wheel torques or momenta, into three dimensional space via the 3xn matrix of wheel axes. In this paper, the properties of the projected hypercube are discussed, and algorithms are proposed for determining this maximal torque or momentum envelope for general wheel configurations. Practical strategies for distributing a prescribed torque or momentum among the n wheels are presented, with special emphasis on configurations of four, five, and six wheels.
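
    The projection described above can be sketched directly: the set of achievable momenta (or torques) is the image of the n-dimensional hypercube under the 3×n matrix of wheel axes, i.e., the convex hull of the projected corners. The four-wheel pyramid geometry below is illustrative, not taken from the paper.

      import numpy as np
      from itertools import product
      from scipy.spatial import ConvexHull

      def momentum_envelope(wheel_axes, h_max):
          """wheel_axes: (3, n) unit vectors of wheel spin axes; h_max: per-wheel momentum limit."""
          n = wheel_axes.shape[1]
          vertices = np.array(list(product([-h_max, h_max], repeat=n)))  # 2^n hypercube corners
          projected = vertices @ wheel_axes.T                            # map corners to 3-D momentum space
          return ConvexHull(projected)                                   # envelope as a convex hull

      # Example: a generic four-wheel pyramid configuration (illustrative geometry only).
      axes = np.array([[1.0, -1.0, 0.0,  0.0],
                       [0.0,  0.0, 1.0, -1.0],
                       [1.0,  1.0, 1.0,  1.0]]) / np.sqrt(2.0)
      hull = momentum_envelope(axes, h_max=1.0)
      print(hull.volume)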

  6. Venus atmosphere profile from a maximum entropy principle

    Directory of Open Access Journals (Sweden)

    L. N. Epele

    2007-10-01

    The variational method with constraints recently developed by Verkley and Gerkema to describe maximum-entropy atmospheric profiles is generalized to ideal gases but with temperature-dependent specific heats. In so doing, an extended and non standard potential temperature is introduced that is well suited for tackling the problem under consideration. This new formalism is successfully applied to the atmosphere of Venus. Three well defined regions emerge in this atmosphere up to a height of 100 km from the surface: the lowest one up to about 35 km is adiabatic, a transition layer located at the height of the cloud deck and finally a third region which is practically isothermal.

  7. Maximum power analysis of photovoltaic module in Ramadi city

    Energy Technology Data Exchange (ETDEWEB)

    Shahatha Salim, Majid; Mohammed Najim, Jassim [College of Science, University of Anbar (Iraq); Mohammed Salih, Salih [Renewable Energy Research Center, University of Anbar (Iraq)

    2013-07-01

    Performance of a photovoltaic (PV) module is greatly dependent on the solar irradiance, operating temperature, and shading. Solar irradiance can have a significant impact on the power output and energy yield of a PV module. In this paper, the maximum PV power that can be obtained in Ramadi city (100 km west of Baghdad) is analyzed in practice. The analysis is based on real irradiance values obtained for the first time by using a Soly2 sun tracker device. Proper and adequate information on solar radiation and its components at a given location is very essential in the design of solar energy systems. The solar irradiance data for Ramadi city were analyzed for the first three months of 2013. The solar irradiance data were measured at the earth's surface in the campus area of Anbar University. Actual average data readings were taken from the data logger of the sun tracker system, which was set to save the average readings every two minutes, each based on one-second samples. The data were analyzed from January to the end of March 2013. Maximum daily readings and monthly average readings of solar irradiance have been analyzed to optimize the output of photovoltaic solar modules. The results show that the system sizing of PV can be reduced by 12.5% if a tracking system is used instead of a fixed orientation of PV modules.

  8. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.

  9. [Evolutionary process unveiled by the maximum genetic diversity hypothesis].

    Science.gov (United States)

    Huang, Yi-Min; Xia, Meng-Ying; Huang, Shi

    2013-05-01

    As two major popular theories to explain evolutionary facts, the neutral theory and Neo-Darwinism, despite their proven virtues in certain areas, still fail to offer comprehensive explanations to such fundamental evolutionary phenomena as the genetic equidistance result, abundant overlap sites, increase in complexity over time, incomplete understanding of genetic diversity, and inconsistencies with fossil and archaeological records. Maximum genetic diversity hypothesis (MGD), however, constructs a more complete evolutionary genetics theory that incorporates all of the proven virtues of existing theories and adds to them the novel concept of a maximum or optimum limit on genetic distance or diversity. It has yet to meet a contradiction and explained for the first time the half-century old Genetic Equidistance phenomenon as well as most other major evolutionary facts. It provides practical and quantitative ways of studying complexity. Molecular interpretation using MGD-based methods reveal novel insights on the origins of humans and other primates that are consistent with fossil evidence and common sense, and reestablished the important role of China in the evolution of humans. MGD theory has also uncovered an important genetic mechanism in the construction of complex traits and the pathogenesis of complex diseases. We here made a series of sequence comparisons among yeasts, fishes and primates to illustrate the concept of limit on genetic distance. The idea of limit or optimum is in line with the yin-yang paradigm in the traditional Chinese view of the universal creative law in nature.

  10. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on a maximum current searching method, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side will simultaneously track the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
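
    A simplified stand-in for the current-maximizing control described above: a hill-climbing (perturb-and-observe) loop on the converter duty cycle that keeps stepping in whichever direction increases the measured SPE-side current. The step size, duty limits, and the omission of the PI/PWM details are simplifications, not the paper's exact scheme.

      class MaxCurrentMPPT:
          """Hill-climbing search for the duty cycle that maximizes converter output current."""

          def __init__(self, duty=0.5, step=0.005):
              self.duty, self.step = duty, step
              self.prev_current = 0.0

          def update(self, measured_current):
              # If the last perturbation decreased the current, reverse the search direction.
              if measured_current < self.prev_current:
                  self.step = -self.step
              self.prev_current = measured_current
              self.duty = min(max(self.duty + self.step, 0.05), 0.95)  # clamp duty cycle
              return self.duty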

  11. Probable maximum flood analysis, Richton Dome, Mississippi-Phase I: Technical report

    International Nuclear Information System (INIS)

    1987-03-01

    This report presents results of a preliminary analysis of the extent of inundation that would result from a probable maximum flood (PMF) event in the overdome area of Richton Dome, Mississippi. Bogue Homo and Thompson Creek watersheds drain the overdome area. The US Army Corps of Engineers' HEC-1 Flood Hydrograph Package was used to calculate runoff hydrographs, route computed flood hydrographs, and determine maximum flood stages at cross sections along overdome tributaries. The area and configuration of stream cross sections were determined from US Geological Survey topographic maps. Using maximum flood stages calculated by the HEC-1 analysis, areas of inundation were delineated on 10-ft (3-m) contour interval topographic maps. Approximately 10% of the overdome area, or 0.9 mi2 (2 km2), would be inundated by a PMF event. 34 refs., 3 figs., 1 tab

  12. Extent and application of ICU diaries in Germany in 2014

    DEFF Research Database (Denmark)

    Nydahl, Peter; Knueck, Dirk; Egerod, Ingrid

    2015-01-01

    ... newsletters, newspapers, lectures and publications in German nursing journals. AIM: The aim of the study was to update our knowledge of the extent and application of ICU diaries in Germany in 2014. DESIGN: The study had a prospective mixed methods multicenter design. METHOD: All 152 ICUs in the two German ... of Germany had implemented diaries and three units were planning to do so. Interviews were conducted with nurses at 14 selected ICUs. Informants reported successful adaption of the diary concept to their culture, but variability in application. No units were identified where all nursing staff participated ... in keeping ICU diaries. CONCLUSION: Six years after the introduction of ICU diaries, ICU nurses in Germany are becoming familiar with the concept. Nursing shortage and bureaucratic challenges have impeded the process of implementation, but the adaption of ICU diaries to German conditions appears ...

  13. Lymphadenectomy in bladder cancer: What should be the extent?

    Directory of Open Access Journals (Sweden)

    K Muruganandham

    2010-01-01

    The extent of lymph node dissection (LND) during radical cystectomy is a subject of increasing importance, with several studies suggesting that an extended LND may improve staging accuracy and outcome. Significant numbers of patients have lymph node metastasis above the boundaries of standard LND. Extended LND yields a higher number of lymph nodes, which may result in better staging. Various retrospective studies have reported better oncological outcomes with extended LND compared to limited LND. No difference in mortality or in the incidence of lymphocele formation has been found between 'standard' and 'extended' LND. Until we have a well-designed randomized controlled trial to address these issues with level 1 evidence, it is not justified to deny our patients the advantages of 'extended' lymphadenectomy based on the current level of evidence.

  14. Peritoneum and mesenterium. Radiological anatomy and extent of peritoneal diseases

    International Nuclear Information System (INIS)

    Ba-Ssalamah, A.; Bastati, N.; Uffmann, M.; Schima, W.

    2009-01-01

    The abdominal cavity is subdivided into the peritoneal cavity, lined by the parietal peritoneum, and the extraperitoneal space. It extends from the diaphragm to the pelvic floor. The visceral peritoneum covers the intraperitoneal organs and part of the pelvic organs. The parietal and visceral layers of the peritoneum are in sliding contact; the potential space between them is called the peritoneal cavity and is a part of the embryologic abdominal cavity or primitive coelomic duct. To understand the complex anatomical construction of the different variants of plicae and recesses of the peritoneum, an appreciation of the embryologic development of the peritoneal cavity is crucial. This knowledge reflects the understanding of the peritoneal anatomy, deep knowledge of which is very important in determining the cause and extent of peritoneal diseases as well as in decision making when choosing the appropriate therapeutic approach, whether surgery, conservative treatment, or interventional radiology. (orig.)

  15. Obesity and extent of emphysema depicted at CT

    International Nuclear Information System (INIS)

    Gu, S.; Li, R.; Leader, J.K.; Zheng, B.; Bon, J.; Gur, D.; Sciurba, F.; Jin, C.; Pu, J.

    2015-01-01

    Aim: To investigate the underlying relationship between obesity and the extent of emphysema depicted at CT. Methods and materials: A dataset of 477 CT examinations was retrospectively collected from a study of chronic obstructive pulmonary disease (COPD). The low attenuation areas (LAAs; ≤ -950 HU) of the lungs were identified. The extent of emphysema (denoted as %LAA) was defined as the percentage of LAA divided by the lung volume. The association between log-transformed %LAA and body mass index (BMI), adjusted for age, sex, the forced expiratory volume in one second as percent predicted value (FEV1% predicted), and smoking history (pack years), was assessed using multiple linear regression analysis. Results: After adjusting for age, gender, smoking history, and FEV1% predicted, BMI was negatively associated with severe emphysema in patients with COPD. Specifically, a one-unit increase in BMI is associated with a 0.93-fold change (95% CI: 0.91–0.96, p < 0.001) in %LAA; the estimated %LAA for males was 1.75 (95% CI: 1.36–2.26, p < 0.001) times that of females; each 10% increase in FEV1% predicted is associated with a 0.72-fold change (95% CI: 0.69–0.76, p < 0.001) in %LAA. Conclusion: Increasing obesity is negatively associated with severity of emphysema independent of gender, age, and smoking history. - Highlights: • BMI is inversely associated with emphysema depicted on CT. • Emphysema severity in men was higher than that in women. • ∼50% of the subjects with COPD in our dataset were either overweight or obese. • Age and smoking status are not significantly associated with %LAA.
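
    The adjusted association reported above amounts to a multiple linear regression on log-transformed %LAA, with fold changes recovered by exponentiating the coefficients. The sketch below assumes a hypothetical dataframe and column names; it is not the study's code.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical dataframe with columns: laa_pct, bmi, age, sex, fev1_pct_pred, pack_years
      df = pd.read_csv("copd_ct_cohort.csv")  # placeholder file name
      model = smf.ols("np.log(laa_pct) ~ bmi + age + C(sex) + fev1_pct_pred + pack_years",
                      data=df).fit()
      # On the log scale, a one-unit BMI increase corresponds to an exp(beta)-fold change in %LAA.
      fold_change_per_bmi_unit = np.exp(model.params["bmi"])
      print(fold_change_per_bmi_unit)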

  16. Measuring the extent of overlaps in protected area designations.

    Science.gov (United States)

    Deguignet, Marine; Arnell, Andy; Juffe-Bignoli, Diego; Shi, Yichuan; Bingham, Heather; MacSharry, Brian; Kingston, Naomi

    2017-01-01

    Over the past decades, a number of national policies and international conventions have been implemented to promote the expansion of the world's protected area network, leading to a diversification of protected area strategies, types and designations. As a result, many areas are protected by more than one convention, legal instrument, or other effective means, which may result in a lack of clarity around the governance and management regimes of particular locations. We assess the degree to which different designations overlap at global, regional and national levels to understand the extent of this phenomenon at different scales. We then compare the distribution and coverage of these multi-designated areas in the terrestrial and marine realms at the global level and among different regions, and we present the percentage of each country's protected area extent that is under more than one designation. Our findings show that almost a quarter of the world's protected area network is protected through more than one designation. In fact, we have documented up to eight overlapping designations. These overlaps in protected area designations occur in every region of the world, both in the terrestrial and marine realms, but are more common in the terrestrial realm and in some regions, notably Europe. In the terrestrial realm, the most common overlap is between one national and one international designation. In the marine realm, the most common overlap is between any two national designations. Multi-designation is therefore a widespread phenomenon, but its implications are not well understood. This analysis identifies, for the first time, multi-designated areas across all designation types. This is a key step to understand how these areas are managed and governed, to then move towards integrated and collaborative approaches that consider the different management and conservation objectives of each designation.

  17. Estimating Global Cropland Extent with Multi-year MODIS Data

    Directory of Open Access Journals (Sweden)

    Christopher O. Justice

    2010-07-01

    This study examines the suitability of 250 m MODIS (MODerate Resolution Imaging Spectroradiometer) data for mapping global cropland extent. A set of 39 multi-year MODIS metrics incorporating four MODIS land bands, NDVI (Normalized Difference Vegetation Index) and thermal data was employed to depict cropland phenology over the study period. Sub-pixel training datasets were used to generate a set of global classification tree models using a bagging methodology, resulting in a global per-pixel cropland probability layer. This product was subsequently thresholded to create a discrete cropland/non-cropland indicator map using data from the USDA-FAS (Foreign Agricultural Service) Production, Supply and Distribution (PSD) database describing per-country acreage of production field crops. Five global land cover products, four of which attempted to map croplands in the context of multiclass land cover classifications, were subsequently used to perform regional evaluations of the global MODIS cropland extent map. The global probability layer was further examined with reference to four principle global food crops: corn, soybeans, wheat and rice. Overall results indicate that the MODIS layer best depicts regions of intensive broadleaf crop production (corn and soybean), both in correspondence with existing maps and in associated high probability matching thresholds. Probability thresholds for wheat-growing regions were lower, while areas of rice production had the lowest associated confidence. Regions absent of agricultural intensification, such as Africa, are poorly characterized regardless of crop type. The results reflect the value of MODIS as a generic global cropland indicator for intensive agriculture production regions, but with little sensitivity in areas of low agricultural intensification. Variability in mapping accuracies between areas dominated by different crop types also points to the desirability of a crop-specific approach rather than attempting
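
    A compact sketch of the bagged classification-tree step described above: fit an ensemble on sub-pixel training samples of the 39 MODIS metrics, predict a per-pixel cropland probability, and threshold it. The file names, number of trees, and the 0.5 threshold are illustrative assumptions (the study tuned thresholds against reported national acreage).

      import numpy as np
      from sklearn.ensemble import BaggingClassifier

      # Hypothetical arrays: rows are pixels, columns are the 39 multi-year MODIS metrics.
      X_train = np.load("modis_metrics_train.npy")
      y_train = np.load("cropland_labels_train.npy")      # 1 = cropland, 0 = non-cropland

      bagged_trees = BaggingClassifier(n_estimators=25).fit(X_train, y_train)  # decision trees are the default base learner

      X_all = np.load("modis_metrics_global.npy")
      crop_probability = bagged_trees.predict_proba(X_all)[:, 1]  # per-pixel cropland probability layer
      crop_mask = crop_probability >= 0.5                         # discrete cropland/non-cropland indicator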

  18. Maximum mass of magnetic white dwarfs

    International Nuclear Information System (INIS)

    Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez

    2015-01-01

    We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10^13 G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10^13 G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)

  19. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fire weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years), derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are also discussed.

  20. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization
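
    As a rough illustration of deconvolving an image with a measured point-spread function, the sketch below uses Richardson-Lucy iteration as a simple stand-in; the paper's Bayesian maximum-entropy method differs in that it adds an entropy prior and explicit noise handling. The iteration count is arbitrary and the PSF is assumed to be normalized to sum to one.

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(blurred, psf, iterations=30):
          """Iterative deconvolution with a measured point-spread function (stand-in for MEM)."""
          estimate = np.full(blurred.shape, float(blurred.mean()))
          psf_mirror = psf[::-1, ::-1]
          for _ in range(iterations):
              denom = fftconvolve(estimate, psf, mode="same")
              denom = np.clip(denom, 1e-12, None)          # avoid division by zero
              estimate *= fftconvolve(blurred / denom, psf_mirror, mode="same")
          return estimate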

  1. Maximum Margin Clustering of Hyperspectral Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art of supervised learning methods for classification of hyperspectral data. However, the results of these algorithms mainly depend on the quality and quantity of available training data. To tackle the problems associated with training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC algorithm is a non-convex problem. Most of the existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms address two-class classification and so cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended to multi-class classification, and its performance is evaluated. The results of the proposed algorithm show that it has acceptable results for hyperspectral data clustering.

  2. Paving the road to maximum productivity.

    Science.gov (United States)

    Holland, C

    1998-01-01

    "Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.

  3. Maximum power flux of auroral kilometric radiation

    International Nuclear Information System (INIS)

    Benson, R.F.; Fainberg, J.

    1991-01-01

    The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 x 10^-13 W m^-2 Hz^-1 at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r^-2. A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3

  4. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, estimation of the time delay has been one of the key issues in leak locating with the time-of-arrival difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimate of the time delay. The method has been validated in experiments, in which it provided much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error was less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. In addition to the experiments, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which weights the significant frequencies.
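
    A minimal sketch of time delay estimation by cross-correlation with a frequency-domain weighting, in the spirit of the generalized cross-correlation framework to which the record above belongs. The PHAT-style whitening weight, the synthetic signals and the sampling rate are illustrative stand-ins, not the paper's actual maximum likelihood window.

    ```python
    # Time delay estimation between two leak-noise signals via weighted (generalized)
    # cross-correlation. The PHAT-style weight is an illustrative choice, not the ML window.
    import numpy as np

    def estimate_delay(x, y, fs, whiten=True):
        n = len(x)
        X, Y = np.fft.rfft(x), np.fft.rfft(y)
        cross = np.conj(X) * Y                       # circular cross-spectrum
        if whiten:
            cross = cross / (np.abs(cross) + 1e-12)  # frequency-domain weighting (whitening)
        cc = np.fft.irfft(cross, n)
        m = int(np.argmax(cc))
        if m > n // 2:                               # wrap negative lags
            m -= n
        return m / fs

    if __name__ == "__main__":
        fs = 10_000
        rng = np.random.default_rng(1)
        s = rng.normal(size=fs)                      # broadband "leak noise"
        true_delay = 37                              # samples
        x = s + 0.1 * rng.normal(size=fs)
        y = np.roll(s, true_delay) + 0.1 * rng.normal(size=fs)
        print(f"estimated delay: {estimate_delay(x, y, fs) * 1e3:.2f} ms, "
              f"true: {true_delay / fs * 1e3:.2f} ms")
    ```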

  5. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
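
    As a concrete illustration of maximum parsimony ancestral state inference on a fully bifurcating tree, here is a minimal sketch of the classical Fitch algorithm for a single site. The toy tree and character states are invented for the example and are not taken from the paper.

    ```python
    # Fitch's maximum parsimony algorithm for a single site on a rooted binary tree.
    # Bottom-up pass: intersect child state sets when possible, otherwise take the union
    # and count one state change. The toy tree below is purely illustrative.
    def fitch(tree, leaf_states):
        changes = 0

        def state_set(node):
            nonlocal changes
            if isinstance(node, str):                 # leaf: node is a taxon name
                return {leaf_states[node]}
            left, right = node                        # internal node: (left_subtree, right_subtree)
            s_left, s_right = state_set(left), state_set(right)
            common = s_left & s_right
            if common:
                return common
            changes += 1                              # union forces at least one change
            return s_left | s_right

        root_set = state_set(tree)
        return root_set, changes

    if __name__ == "__main__":
        tree = ((("t1", "t2"), "t3"), ("t4", "t5"))
        states = {"t1": "a", "t2": "a", "t3": "c", "t4": "a", "t5": "g"}
        root_states, n_changes = fitch(tree, states)
        print(f"most parsimonious root states: {root_states}, minimum changes: {n_changes}")
    ```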

  6. Extent of ESL teachers' access to, utilisation and production of ...

    African Journals Online (AJOL)

    Hennie

    South African Journal of Education, Volume 35, Number 3, August 2015 ... Second language teaching literature attests to a virtual explosion of research aimed ... teacher to a mere technician, who unquestioningly implements others' ideas ... Internationally, education and teaching are regarded as evidence-based practices.

  7. Creative Process: Its Use and Extent of Formalization by Corporations.

    Science.gov (United States)

    Fernald, Lloyd W., Jr.; Nickolenko, Pam

    1993-01-01

    This study reports creativity policies and practices used by Central Florida corporations. Survey responses (n=105) indicated that businesses are using a variety of creativity techniques with usage greater among the newer companies but that these techniques are not yet a formal part of business operations. (DB)

  8. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  9. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  10. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  11. Collaborative, Rapid Mapping of Water Extents During Hurricane Harvey Using Optical and Radar Satellite Sensors

    Science.gov (United States)

    Muench, R.; Jones, M.; Herndon, K. E.; Bell, J. R.; Anderson, E. R.; Markert, K. N.; Molthan, A.; Adams, E. C.; Shultz, L.; Cherrington, E. A.; Flores, A.; Lucey, R.; Munroe, T.; Layne, G.; Pulla, S. T.; Weigel, A. M.; Tondapu, G.

    2017-12-01

    On August 25, 2017, Hurricane Harvey made landfall between Port Aransas and Port O'Connor, Texas, bringing with it unprecedented amounts of rainfall and flooding. In times of natural disasters of this nature, emergency responders require timely and accurate information about the hazard in order to assess and plan for disaster response. Due to the extreme flooding impacts associated with Hurricane Harvey, delineations of water extent were crucial to inform resource deployment. Through the USGS's Hazards Data Distribution System, government and commercial vendors were able to acquire and distribute various satellite imagery to analysts to create value-added products that can be used by these emergency responders. Rapid-response water extent maps were created through a collaborative multi-organization and multi-sensor approach. One team of researchers created Synthetic Aperture Radar (SAR) water extent maps using modified Copernicus Sentinel data (2017), processed by ESA. This group used backscatter images, pre-processed by the Alaska Satellite Facility's Hybrid Pluggable Processing Pipeline (HyP3), to identify and apply a threshold to identify water in the image. Quality control was conducted by manually examining the image and correcting for potential errors. Another group of researchers and graduate student volunteers derived water masks from high resolution DigitalGlobe and SPOT images. Through a system of standardized image processing, quality control measures, and communication channels the team provided timely and fairly accurate water extent maps to support a larger NASA Disasters Program response. The optical imagery was processed through a combination of various band thresholds by using Normalized Difference Water Index (NDWI), Modified Normalized Water Index (MNDWI), Normalized Difference Vegetation Index (NDVI), and cloud masking. Several aspects of the pre-processing and image access were run on internal servers to expedite the provision of images to

  12. Collaborative, Rapid Mapping of Water Extents During Hurricane Harvey Using Optical and Radar Satellite Sensors

    Science.gov (United States)

    Muench, Rebekke; Jones, Madeline; Herndon, Kelsey; Schultz, Lori; Bell, Jordan; Anderson, Eric; Markert, Kel; Molthan, Andrew; Adams, Emily; Cherrington, Emil

    2017-01-01

    On August 25, 2017, Hurricane Harvey made landfall between Port Aransas and Port O'Connor, Texas, bringing with it unprecedented amounts of rainfall and record flooding. In times of natural disasters of this nature, emergency responders require timely and accurate information about the hazard in order to assess and plan for disaster response. Due to the extreme flooding impacts associated with Hurricane Harvey, delineations of water extent were crucial to inform resource deployment. Through the USGS's Hazards Data Distribution System, government and commercial vendors were able to acquire and distribute various satellite imagery to analysts to create value-added products that can be used by these emergency responders. Rapid-response water extent maps were created through a collaborative multi-organization and multi-sensor approach. One team of researchers created Synthetic Aperture Radar (SAR) water extent maps using modified Copernicus Sentinel data (2017), processed by ESA. This group used backscatter images, pre-processed by the Alaska Satellite Facility's Hybrid Pluggable Processing Pipeline (HyP3), to identify and apply a threshold to identify water in the image. Quality control was conducted by manually examining the image and correcting for potential errors. Another group of researchers and graduate student volunteers derived water masks from high resolution DigitalGlobe and SPOT images. Through a system of standardized image processing, quality control measures, and communication channels the team provided timely and fairly accurate water extent maps to support a larger NASA Disasters Program response. The optical imagery was processed through a combination of various band thresholds and by using Normalized Difference Water Index (NDWI), Modified Normalized Water Index (MNDWI), Normalized Difference Vegetation Index (NDVI), and cloud masking. Several aspects of the pre-processing and image access were run on internal servers to expedite the provision of
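
    A minimal sketch of the band-threshold approach to optical water mapping mentioned in the two records above: compute a normalized difference water index from green and near-infrared reflectance and threshold it. The synthetic band arrays and the 0.0 threshold are illustrative assumptions; the Harvey response combined several indices (NDWI, MNDWI, NDVI) with cloud masking.

    ```python
    # Illustrative water masking from optical imagery using NDWI (green vs. near-infrared).
    # The synthetic reflectance arrays and the fixed threshold are assumptions for the sketch.
    import numpy as np

    def ndwi(green, nir, eps=1e-6):
        """McFeeters-style NDWI: (green - NIR) / (green + NIR)."""
        return (green - nir) / (green + nir + eps)

    def water_mask(green, nir, threshold=0.0):
        return ndwi(green, nir) > threshold

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        green = rng.uniform(0.05, 0.3, size=(4, 4))
        nir = rng.uniform(0.05, 0.3, size=(4, 4))
        nir[:2, :2] = 0.02                      # open water absorbs strongly in the NIR
        green[:2, :2] = 0.10
        print(water_mask(green, nir).astype(int))
    ```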

  13. Wetland methane emissions during the Last Glacial Maximum estimated from PMIP2 simulations: climate, vegetation and geographic controls

    NARCIS (Netherlands)

    Weber, S.L.; Drury, A.J.; Toonen, W.H.J.; Weele, M. van

    2010-01-01

    It is an open question to what extent wetlands contributed to the interglacial‐glacial decrease in atmospheric methane concentration. Here we estimate methane emissions from glacial wetlands, using newly available PMIP2 simulations of the Last Glacial Maximum (LGM) climate from coupled

  14. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  15. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Background: The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results: We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion: Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  16. Significance of microscopic extension from 1162 esophageal carcinoma specimens

    International Nuclear Information System (INIS)

    Wang Jun; Zhu Shuchai; Han Chun; Zhang Xin; Xiao Aiqin; Ma Guoxin

    2007-01-01

    Objective: To examine the subclinical microscopic tumor extension along the long axis in 1162 specimens of esophageal carcinoma, so as to help define the clinical target volume (CTV) for radiotherapy according to the degree of microscopic extension (ME). Methods: 1162 resected esophageal carcinoma specimens originally located in the neck and thorax were studied with special reference to the correlation between the upper and lower resection length from the tumor and a positive microscopic margin. Another 52 resected esophageal carcinoma specimens were made into pathological giant sections: the actual resection length of the upper para-esophageal normal tissue was compared with that of the lower normal tissue from the tumor, and the ratio of shrinkage was thereby obtained and compared. Results: After fixation, the microscopic positive margin ratio of the upper resection border was higher in the length ≤0.5 cm group than in the length >0.5 cm group (16.4% vs 4.1%, P=0.000). The microscopic positive margin ratio of the lower resection border was higher in the length ≤1.5 cm group than in the length >1.5 cm group (8.1% vs 0.4%, P=0.000). The positive margin ratio of the upper border was higher than that of the lower border in the resection length >1.5 cm group (3.5% vs 0.4%, P=0.000). The actual length of the upper and lower normal esophageal tissue after having been made into pathological giant sections in 52 patients was 30% ± 14% and 44% ± 19%, respectively, of that measured at operation. Conclusions: Considering the shrinkage of the normal esophagus during fixation, a CTV margin of 2.0 cm along the upper long axis and 3.5 cm along the lower long axis should be chosen for radiotherapy of esophageal carcinoma, according to the ratio of shrinkage. The proportion of ascending invasion is higher than that of descending invasion in these tumors. (authors)
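
    A short worked check of the margin figures reported in the record above, under the assumption that the in-vivo CTV margin is obtained by dividing the safe margin measured on the fixed specimen by the corresponding shrinkage ratio (fixed length as a fraction of the in-vivo length):

    ```python
    # Back-of-envelope check of the CTV margins reported above: divide the safe margin
    # measured on the fixed specimen by the shrinkage ratio (fixed length / in-vivo length).
    upper_fixed_margin_cm, upper_shrinkage = 0.5, 0.30   # >0.5 cm safe; fixed length is 30% of in-vivo
    lower_fixed_margin_cm, lower_shrinkage = 1.5, 0.44   # >1.5 cm safe; fixed length is 44% of in-vivo

    upper_ctv = upper_fixed_margin_cm / upper_shrinkage  # ~1.7 cm, rounded up to 2.0 cm
    lower_ctv = lower_fixed_margin_cm / lower_shrinkage  # ~3.4 cm, rounded up to 3.5 cm
    print(f"upper CTV margin ~{upper_ctv:.1f} cm, lower CTV margin ~{lower_ctv:.1f} cm")
    ```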

  17. The extent of benchmarking in the South African financial sector

    OpenAIRE

    W Vermeulen

    2014-01-01

    Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective impleme...

  18. Maximum entropy production rate in quantum thermodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)

    2010-06-01

    In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible

  19. Estimation of steam-chamber extent using 4D seismic

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, M. [Waseda Univ., Waseda (Japan); Endo, K. [Japan Canada Oil Sands Ltd., Calgary, AB (Canada); Onozuka, S. [Japan Oil, Gas and Metals National Corp., Tokyo (Japan)

    2009-07-01

    The steam-assisted gravity drainage (SAGD) technique is among the most effective steam injection methods and is widely applied in Canadian oil-sand reservoirs. The SAGD technology uses hot steam to decrease bitumen viscosity and allow it to flow. Japan Canada Oil Sands Limited (JACOS) has been developing an oil-sand reservoir in the Hangingstone area of Alberta since 1997. This paper focused on the western area of the reservoir and reported on a study that estimated the steam-chamber extent generated by horizontal well pairs. It listed the steam injection start time for each well in the western area. Steam-chamber distribution was determined by distinguishing high temperature and high pore-pressure zones from low temperature and high pore-pressure zones. The bitumen recovery volume in the steam-chamber zone was estimated and compared with the actual cumulative production. This paper provided details of the methodology and interpretation procedures for the quantitative method to interpret 4D-seismic data for a SAGD process. A procedure to apply a petrophysical model was demonstrated first by scaling laboratory measurements to field-scale applications, and then by decoupling pressure and temperature effects. The first 3D seismic data in this study were already affected by higher pressures and temperatures. 11 refs., 3 tabs., 12 figs.

  20. The extent and nature of alcohol advertising on Australian television.

    Science.gov (United States)

    Pettigrew, Simone; Roberts, Michele; Pescud, Melanie; Chapman, Kathy; Quester, Pascale; Miller, Caroline

    2012-09-01

    Current alcohol guidelines in Australia recommend minimising alcohol consumption, especially among minors. This study investigated (i) the extent to which children and the general population are exposed to television advertisements that endorse alcohol consumption and (ii) the themes used in these advertisements. A content analysis was conducted on alcohol advertisements aired over two months in major Australian cities. The advertisements were coded according to the products that were promoted, the themes that were employed, and the time of exposure. Advertising placement expenditure was also captured. In total, 2810 alcohol advertisements were aired, representing one in 10 beverage advertisements. Advertisement placement expenditure for alcohol products in the five cities over the two months was $15.8 million. Around half of all alcohol advertisements appeared during children's popular viewing times. The most common themes used were humour, friendship/mateship and value for money. Children and adults are regularly exposed to advertisements that depict alcohol consumption as fun, social and inexpensive. Such messages may reinforce existing alcohol-related cultural norms that prevent many Australians from meeting current intake guidelines. © 2012 Australasian Professional Society on Alcohol and other Drugs.

  1. Extent and modes of physics instruction in European dental schools.

    Science.gov (United States)

    Letić, Milorad; Popović, Gorjana

    2013-01-01

    Changes in dental education towards integration of sciences and convergence of curricula have affected instruction in physics. Earlier studies of undergraduate curricula make possible comparisons in physics instruction. For this study, the websites of 245 European dental schools were explored, and information about the curriculum was found on 213 sites. Physics instruction in the form of a separate course was found in 63 percent of these schools, with eighty-two hours and 5.9 European Credit Transfer and Accumulation System (ECTS) credits on average. Physics integrated with other subjects or into modules was found in 19 percent of these schools. Half of these schools had on average sixty-one hours and 6.9 ECTS credits devoted to physics. Eighteen percent of the schools had no noticeable obligatory physics instruction, but in half of them physics was found to be required or accepted on admission, included in other subjects, or appeared as an elective course. In 122 dental schools, the extent of physics instruction was found to be between forty and 120 contact hours. Physics instruction has been reduced by up to 14 percent in the last fourteen years in the group of eleven countries that were members of the European Union (EU) in 1997, but by approximately 30 percent in last five years in the group of ten Accession Countries to the EU.

  2. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may represent a significant help to data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  3. Vliv klasických postupů při nácviku dovednosti psaní na rozsah slovní zásoby ve srovnání s metodou psaní tvůrčího : The Influence of Classical Practices in Writing Skills Training on Vocabulary’s Extent Compared to the Creative Writing Method

    Directory of Open Access Journals (Sweden)

    Zuzana Stárková

    2017-12-01

    The paper deals with the influence of creative writing on the extent of vocabulary in the written production of non-native speakers, compared with classical methods used in foreign language didactics. It presents the results of a qualitative action research probe supported by the Grant Agency of Charles University. The respondents were 30 students of Czech as a foreign/second language at levels A2–B2 according to the CEFR, from the Institute of Czech Studies, Charles University. The respondents were divided into two control and two experimental groups. To analyze the extent of vocabulary, Mistrík's index of word repetition was applied to the respondents' written work. In addition, knowledge of synonyms and antonyms was observed in entrance and final subtests. The probe showed that, after one semester of using the two different methods, a comparable decrease in word repetition occurred in only half of the respondents, in both control and experimental groups I. In control group II there was a slight decrease in repetition, whereas experimental group II showed a slight increase. The comparison of knowledge of synonyms and antonyms showed an increase in both control and experimental groups after the two different methods were used; however, the growth was comparable only between control and experimental groups I. Owing to the absence of similar research for Czech as a foreign/second language, other research on creative writing in foreign language tuition is briefly presented.

  4. Analysis of reaction schemes using maximum rates of constituent steps

    Science.gov (United States)

    Motagamwala, Ali Hussain; Dumesic, James A.

    2016-01-01

    We show that the steady-state kinetics of a chemical reaction can be analyzed analytically in terms of proposed reaction schemes composed of series of steps with stoichiometric numbers equal to unity by calculating the maximum rates of the constituent steps, rmax,i, assuming that all of the remaining steps are quasi-equilibrated. Analytical expressions can be derived in terms of rmax,i to calculate degrees of rate control for each step to determine the extent to which each step controls the rate of the overall stoichiometric reaction. The values of rmax,i can be used to predict the rate of the overall stoichiometric reaction, making it possible to estimate the observed reaction kinetics. This approach can be used for catalytic reactions to identify transition states and adsorbed species that are important in controlling catalyst performance, such that detailed calculations using electronic structure calculations (e.g., density functional theory) can be carried out for these species, whereas more approximate methods (e.g., scaling relations) are used for the remaining species. This approach to assess the feasibility of proposed reaction schemes is exact for reaction schemes where the stoichiometric coefficients of the constituent steps are equal to unity and the most abundant adsorbed species are in quasi-equilibrium with the gas phase and can be used in an approximate manner to probe the performance of more general reaction schemes, followed by more detailed analyses using full microkinetic models to determine the surface coverages by adsorbed species and the degrees of rate control of the elementary steps. PMID:27162366

  5. Hamstring Injuries in Professional Soccer Players: Extent of MRI-Detected Edema and the Time to Return to Play.

    Science.gov (United States)

    Crema, Michel D; Godoy, Ivan R B; Abdalla, Rene J; de Aquino, Jose Sanchez; Ingham, Sheila J McNeill; Skaf, Abdalla Y

    Discrepancies exist in the literature regarding the association of the extent of injuries assessed on magnetic resonance imaging (MRI) with recovery times. MRI-detected edema in grade 1 hamstring injuries does not affect the return to play (RTP). Retrospective cohort study. Level 4. Grade 1 hamstring injuries from 22 professional soccer players were retrospectively reviewed. The extent of edema-like changes on fluid-sensitive sequences from 1.5-T MRI were evaluated using craniocaudal length, percentage of cross-sectional area, and volume. The time needed to RTP was the outcome. Negative binomial regression analysis tested the measurements of MRI-detected edema-like changes as prognostic factors. The mean craniocaudal length was 7.6 cm (SD, 4.9 cm; range, 0.9-19.1 cm), the mean percentage of cross-sectional area was 23.6% (SD, 20%; range, 4.4%-89.6%), and the mean volume was 33.1 cm^3 (SD, 42.6 cm^3; range, 1.1-161.3 cm^3). The mean time needed to RTP was 13.6 days (SD, 8.9 days; range, 3-32 days). None of the parameters of extent was associated with RTP. The extent of MRI edema in hamstring injuries does not have prognostic value. Measuring the extent of edema in hamstring injuries using MRI does not add prognostic value in clinical practice.

  6. Influence of Dynamic Neuromuscular Stabilization Approach on Maximum Kayak Paddling Force

    Directory of Open Access Journals (Sweden)

    Davidek Pavel

    2018-03-01

    The purpose of this study was to examine the effect of Dynamic Neuromuscular Stabilization (DNS) exercise on maximum paddling force (PF) and self-reported pain perception in the shoulder girdle area in flatwater kayakers. Twenty male flatwater kayakers from a local club (age = 21.9 ± 2.4 years, body height = 185.1 ± 7.9 cm, body mass = 83.9 ± 9.1 kg) were randomly assigned to the intervention or control groups. During the 6-week study, subjects from both groups performed standard off-season training. Additionally, the intervention group engaged in a DNS-based core stabilization exercise program (quadruped exercise, side sitting exercise, sitting exercise and squat exercise) after each standard training session. Using a kayak ergometer, the maximum PF stroke was measured four times during the six weeks. All subjects completed the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire before and after the 6-week interval to evaluate subjective pain perception in the shoulder girdle area. Initially, no significant differences in maximum PF and the DASH questionnaire were identified between the two groups. Repeated measures analysis of variance indicated that the experimental group improved significantly compared to the control group on maximum PF (p = .004; Cohen's d = .85), but not on the DASH questionnaire score (p = .731) during the study. Integration of DNS with traditional flatwater kayak training may significantly increase maximum PF, but may not affect pain perception to the same extent.

  7. Maximum a posteriori covariance estimation using a power inverse wishart prior

    DEFF Research Database (Denmark)

    Nielsen, Søren Feodor; Sporring, Jon

    2012-01-01

    The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximum...
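
    A minimal sketch of MAP covariance estimation under a standard (not the power) inverse Wishart prior, to illustrate the regularizing effect in the high-dimension, low-sample regime the record above describes. The prior scale matrix and degrees of freedom below are illustrative assumptions, and the sketch uses the ordinary conjugate inverse-Wishart posterior mode rather than the paper's power variant.

    ```python
    # MAP covariance estimate with a conjugate inverse-Wishart prior IW(Psi, nu):
    # posterior mode = (Psi + S) / (nu + n + p + 1), where S is the scatter matrix.
    # This uses the standard inverse-Wishart, not the power inverse-Wishart of the paper.
    import numpy as np

    def map_covariance(X, Psi, nu):
        n, p = X.shape
        Xc = X - X.mean(axis=0)
        S = Xc.T @ Xc                         # scatter matrix (n times the sample covariance)
        return (Psi + S) / (nu + n + p + 1)   # posterior mode of the covariance

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        p, n = 20, 10                         # more dimensions than samples
        X = rng.normal(size=(n, p))
        Psi = np.eye(p)                       # illustrative prior scale
        nu = p + 2                            # illustrative prior degrees of freedom
        sample_cov = np.cov(X, rowvar=False)
        map_cov = map_covariance(X, Psi, nu)
        print("rank of sample covariance:", np.linalg.matrix_rank(sample_cov))
        print("smallest eigenvalue of MAP estimate:", np.linalg.eigvalsh(map_cov).min().round(4))
    ```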

  8. 19 CFR 212.07 - Rulemaking on maximum rates for attorney fees.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Rulemaking on maximum rates for attorney fees. 212.07 Section 212.07 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION INVESTIGATIONS OF UNFAIR PRACTICES IN IMPORT TRADE IMPLEMENTATION OF THE EQUAL ACCESS TO JUSTICE ACT General Provisions...

  9. Radiocarbon and seismic evidence of ice-sheet extent and the last deglaciation on the mid-Norwegian continental shelf

    International Nuclear Information System (INIS)

    Rokoengen, Kaare; Frengstad, Bjoern

    1999-01-01

    Reconstruction of the ice extent and glacier chronology on the continental shelf off mid-Norway has been severely hampered by the lack of dates from the glacial deposits. Seismic interpretation and new accelerator mass spectrometer radiocarbon dates show that the ice sheet extended to the edge of the continental shelf at the last glacial maximum. The two youngest till units near the shelf edge were deposited about 15000 and 13500 BP. The results indicate that the ice sheet partly reached the shelf break as late as 11000 BP, followed by a deglaciation of most of the continental shelf in less than 1000 years.

  10. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...
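
    A minimal brute-force sketch of the maximum-clique transversal set problem defined in the record above: enumerate the maximum cliques of a small graph and search for the smallest vertex set hitting all of them. The toy graph is invented for the example, and the exhaustive search is only feasible for very small instances (the paper targets special graph classes precisely because the general problem is hard).

    ```python
    # Brute-force minimum maximum-clique transversal set for a small graph.
    # Exhaustive search over vertex subsets; only feasible for toy instances.
    from itertools import combinations
    import networkx as nx

    def min_max_clique_transversal(G):
        cliques = list(nx.find_cliques(G))                      # all maximal cliques
        max_size = max(len(c) for c in cliques)
        max_cliques = [set(c) for c in cliques if len(c) == max_size]
        for k in range(1, G.number_of_nodes() + 1):
            for subset in combinations(G.nodes, k):
                s = set(subset)
                if all(s & c for c in max_cliques):             # hits every maximum clique
                    return s
        return set(G.nodes)

    if __name__ == "__main__":
        G = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6), (6, 7)])
        # the two triangles {1,2,3} and {4,5,6} are the maximum cliques;
        # the answer picks one vertex from each
        print(min_max_clique_transversal(G))
    ```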

  11. The extent of benchmarking in the South African financial sector

    Directory of Open Access Journals (Sweden)

    W Vermeulen

    2014-06-01

    Benchmarking is the process of identifying, understanding and adapting outstanding practices from within the organisation or from other businesses, to help improve performance. The importance of benchmarking as an enabler of business excellence has necessitated an in-depth investigation into the current state of benchmarking in South Africa. This research project highlights the fact that respondents realise the importance of benchmarking, but that various problems hinder the effective implementation of benchmarking. Based on the research findings, recommendations for achieving success are suggested.

  12. Targeted maximum likelihood estimation for a binary treatment: A tutorial.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E

    2018-04-23

    When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant had zero probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R-code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
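
    A minimal sketch of the TMLE steps for the average treatment effect of a binary treatment on a binary outcome, written in Python rather than the authors' R/Stata code: fit an initial outcome model and a propensity model, then perform the targeting (fluctuation) step with the clever covariate. The simulated data and the simple logistic working models are illustrative assumptions; in practice the initial fits would typically use flexible machine learning, as the tutorial recommends.

    ```python
    # Targeted maximum likelihood estimation (TMLE) of the ATE for binary treatment A,
    # binary outcome Y and covariates W. Logistic initial fits are used for simplicity;
    # the simulated data and model choices are illustrative, not from the tutorial.
    import numpy as np
    import statsmodels.api as sm
    from scipy.special import expit, logit
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    W = rng.normal(size=(n, 2))
    A = rng.binomial(1, expit(0.5 * W[:, 0] - 0.4 * W[:, 1]))
    Y = rng.binomial(1, expit(-0.5 + A + 0.8 * W[:, 0] + 0.3 * W[:, 1]))

    # 1) initial outcome model Q(A, W) = P(Y=1 | A, W)
    q_fit = LogisticRegression(max_iter=1000).fit(np.column_stack([A, W]), Y)
    Q1 = q_fit.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
    Q0 = q_fit.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]
    QA = np.where(A == 1, Q1, Q0)

    # 2) propensity model g(W) = P(A=1 | W), bounded to guard against near-positivity violations
    g = LogisticRegression(max_iter=1000).fit(W, A).predict_proba(W)[:, 1]
    g = np.clip(g, 0.01, 0.99)

    # 3) targeting step: clever covariate H and fluctuation parameter epsilon
    H = A / g - (1 - A) / (1 - g)
    eps = sm.GLM(Y, H[:, None], family=sm.families.Binomial(), offset=logit(QA)).fit().params[0]

    # 4) updated counterfactual predictions and ATE
    Q1_star = expit(logit(Q1) + eps / g)
    Q0_star = expit(logit(Q0) - eps / (1 - g))
    print("TMLE ATE estimate:", round(float(np.mean(Q1_star - Q0_star)), 3))
    ```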

  13. [The maximum heart rate in the exercise test: the 220-age formula or Sheffield's table?].

    Science.gov (United States)

    Mesquita, A; Trabulo, M; Mendes, M; Viana, J F; Seabra-Gomes, R

    1996-02-01

    To determine whether the maximum heart rate in the exercise test of apparently healthy individuals is more properly estimated by the 220-age formula (Astrand) or by the Sheffield table. Retrospective analysis of the clinical history and exercise tests of apparently healthy individuals submitted to cardiac check-up. Sequential sample of 170 healthy individuals submitted to cardiac check-up between April 1988 and September 1992. Comparison of the maximum heart rate of individuals studied with the Bruce and modified Bruce protocols, in exercise tests interrupted by fatigue, with the values estimated by the formulae: 220-age versus the Sheffield table. The maximum heart rate is similar with both protocols. In normal individuals this parameter is better predicted by the 220-age formula. The theoretical maximum heart rate determined by the 220-age formula should be recommended for healthy individuals, and for this reason the Sheffield table has been excluded from our clinical practice.

  14. Pattern formation, logistics, and maximum path probability

    Science.gov (United States)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  15. Extent of lymph node dissection for adenocarcinoma of the stomach.

    Science.gov (United States)

    Mocellin, Simone; McCulloch, Peter; Kazi, Hussain; Gama-Rodrigues, Joaquin J; Yuan, Yuhong; Nitti, Donato

    2015-08-12

    The impact of lymphadenectomy extent on the survival of patients with primary resectable gastric carcinoma is debated. We aimed to systematically review and meta-analyze the evidence on the impact of the three main types of progressively more extended lymph node dissection (that is, D1, D2 and D3 lymphadenectomy) on the clinical outcome of patients with primary resectable carcinoma of the stomach. The primary objective was to assess the impact of lymphadenectomy extent on survival (overall survival [OS], disease specific survival [DSS] and disease free survival [DFS]). The secondary aim was to assess the impact of lymphadenectomy on post-operative mortality. We searched CENTRAL, MEDLINE and EMBASE until 2001, including references from relevant articles and conference proceedings. We also contacted known researchers in the field. For the updated review, CENTRAL, MEDLINE and EMBASE were searched from 2001 to February 2015. We considered randomized controlled trials (RCTs) comparing the three main types of lymph node dissection (i.e., D1, D2 and D3 lymphadenectomy) in patients with primary non-metastatic resectable carcinoma of the stomach. Two authors independently extracted data from the included studies. Hazard ratios (HR) and relative risks (RR) along with their 95% confidence intervals (CI) were used to measure differences in survival and mortality rates between trial arms, respectively. Potential sources of between-study heterogeneity were investigated by means of subgroup and sensitivity analyses. The same two authors independently assessed the risk of bias of eligible studies according to the standards of the Cochrane Collaboration and the quality of the overall evidence based on the GRADE (Grades of Recommendation, Assessment, Development and Evaluation) criteria. Eight RCTs (enrolling 2515 patients) met the inclusion criteria. Three RCTs (all performed in Asian countries) compared D3 with D2 lymphadenectomy: data suggested no significant difference in OS

  16. The extent of continental crust beneath the Seychelles

    Science.gov (United States)

    Hammond, J. O. S.; Kendall, J.-M.; Collier, J. S.; Rümpker, G.

    2013-11-01

    The granitic islands of the Seychelles Plateau have long been recognised to overlie continental crust, isolated from Madagascar and India during the formation of the Indian Ocean. However, to date the extent of continental crust beneath the Seychelles region remains unknown. This is particularly true beneath the Mascarene Basin between the Seychelles Plateau and Madagascar and beneath the Amirante Arc. Constraining the size and shape of the Seychelles continental fragment is needed for accurate plate reconstructions of the breakup of Gondwana and has implications for the processes of continental breakup in general. Here we present new estimates of crustal thickness and VP/VS from H-κ stacking of receiver functions from a year long deployment of seismic stations across the Seychelles covering the topographic plateau, the Amirante Ridge and the northern Mascarene Basin. These results, combined with gravity modelling of historical ship track data, confirm that continental crust is present beneath the Seychelles Plateau. This is ˜30-33 km thick, but with a relatively high velocity lower crustal layer. This layer thins southwards from ˜10 km to ˜1 km over a distance of ˜50 km, which is consistent with the Seychelles being at the edge of the Deccan plume prior to its separation from India. In contrast, the majority of the Seychelles Islands away from the topographic plateau show no direct evidence for continental crust. The exception to this is the island of Desroche on the northern Amirante Ridge, where thicker low density crust, consistent with a block of continental material is present. We suggest that the northern Amirantes are likely continental in nature and that small fragments of continental material are a common feature of plume affected continental breakup.

  17. Migratory decisions in birds: Extent of genetic versus environmental control

    Science.gov (United States)

    Ogonowski, M.S.; Conway, C.J.

    2009-01-01

    Migration is one of the most spectacular of animal behaviors and is prevalent across a broad array of taxa. In birds, we know much about the physiological basis of how birds migrate, but less about the relative contribution of genetic versus environmental factors in controlling migratory tendency. To evaluate the extent to which migratory decisions are genetically determined, we examined whether individual western burrowing owls (Athene cunicularia hypugaea) change their migratory tendency from one year to the next at two sites in southern Arizona. We also evaluated the heritability of migratory decisions by using logistic regression to examine the association between the migratory tendency of burrowing owl parents and their offspring. The probability of migrating decreased with age in both sexes and adult males were less migratory than females. Individual owls sometimes changed their migratory tendency from one year to the next, but changes were one-directional: adults that were residents during winter 2004-2005 remained residents the following winter, but 47% of adults that were migrants in winter 2004-2005 became residents the following winter. We found no evidence for an association between the migratory tendency of hatch-year owls and their male or female parents. Migratory tendency of hatch-year owls did not differ between years, study sites or sexes or vary by hatching date. Experimental provision of supplemental food did not affect these relationships. All of our results suggest that heritability of migratory tendency in burrowing owls is low, and that intraspecific variation in migratory tendency is likely due to: (1) environmental factors, or (2) a combination of environmental factors and non-additive genetic variation. The fact that an individual's migratory tendency can change across years implies that widespread anthropogenic changes (i.e., climate change or changes in land use) could potentially cause widespread changes in the migratory tendency of

  18. Rate and extent of aqueous perchlorate removal by iron surfaces.

    Science.gov (United States)

    Moore, Angela M; De Leon, Corinne H; Young, Thomas M

    2003-07-15

    The rate and extent of perchlorate reduction on several types of iron metal was studied in batch and column reactors. Mass balances performed on the batch experiments indicate that perchlorate is initially sorbed to the iron surface, followed by a reduction to chloride. Perchlorate removal was proportional to the iron dosage in the batch reactors, with up to 66% removal in 336 h in the highest dosage system (1.25 g mL(-1)). Surface-normalized reaction rates among three commercial sources of iron filings were similar for acid-washed samples. The most significant perchlorate removal occurred in solutions with slightly acidic or near-neutral initial pH values. Surface mediation of the reaction is supported by the absence of reduction in batch experiments with soluble Fe2+ and also by the similarity in specific reaction rate constants (kSA) determined for three different iron types. Elevated soluble chloride concentrations significantly inhibited perchlorate reduction, and lower removal rates were observed for iron samples with higher amounts of background chloride contamination. Perchlorate reduction was not observed on electrolytic sources of iron or on a mixed-phase oxide (Fe3O4), suggesting that the reactive iron phase is neither pure zerovalent iron nor the mixed oxide alone. A mixed valence iron hydr(oxide) coating or a sorbed Fe2+ surface complex represent the most likely sites for the reaction. The observed reaction rates are too slow for immediate use in remediation system design, but the findings may provide a basis for future development of cost-effective abiotic perchlorate removal techniques.

  19. The extent of use of online pharmacies in Saudi Arabia.

    Science.gov (United States)

    Abanmy, Norah

    2017-09-01

    Online pharmacies sell medicines over the Internet and deliver them by mail. The main objective of this study is to explore the extent of use of online pharmacies in Saudi Arabia, which will be useful for the scientific community and regulators. An Arabic survey questionnaire was developed for this study. The questionnaire was distributed via email and social media. Four sections were created to cover the objectives: experience with online shopping in general, demographics, awareness of the existence of online pharmacies and customer experiences of buying medicine online, and reasons for buying/not buying medicine online. A total of 633 responses were collected. Around 69% (437) of respondents were female and the largest group (256, 40.4%) was in the age range 26-40. Only 23.1% (146) were aware of the existence of online pharmacies, and 2.7% (17) had bought a medicine over the Internet; 15 (88.2%) of these 17 respondents were satisfied with the process. Lack of awareness of the availability of such services was the main reason for not buying medicines online. Many respondents (263, 42.7%) were willing to try an online pharmacy, although a large share (243, 45.9%) were unable to differentiate between legal and illegal online pharmacies. The largest categories of products respondents were willing to buy online were nonprescription medicines and cosmetics. The popularity of purchasing medicines over the Internet is still low in Saudi Arabia. However, because many respondents are willing to purchase medicines online, efforts should be made by the Saudi FDA to set regulations and monitor this activity.

  20. A systematic review of the extent and measurement of healthcare provider racism.

    Science.gov (United States)

    Paradies, Yin; Truong, Mandy; Priest, Naomi

    2014-02-01

    Although considered a key driver of racial disparities in healthcare, relatively little is known about the extent of interpersonal racism perpetrated by healthcare providers, nor is there a good understanding of how best to measure such racism. This paper reviews worldwide evidence (from 1995 onwards) for racism among healthcare providers; as well as comparing existing measurement approaches to emerging best practice, it focuses on the assessment of interpersonal racism, rather than internalized or systemic/institutional racism. The following databases and electronic journal collections were searched for articles published between 1995 and 2012: Medline, CINAHL, PsycInfo, Sociological Abstracts. Included studies were published empirical studies of any design measuring and/or reporting on healthcare provider racism in the English language. Data on study design and objectives; method of measurement, constructs measured, type of tool; study population and healthcare setting; country and language of study; and study outcomes were extracted from each study. The 37 studies included in this review were almost solely conducted in the U.S. and with physicians. Statistically significant evidence of racist beliefs, emotions or practices among healthcare providers in relation to minority groups was evident in 26 of these studies. Although a number of measurement approaches were utilized, a limited range of constructs was assessed. Despite burgeoning interest in racism as a contributor to racial disparities in healthcare, we still know little about the extent of healthcare provider racism or how best to measure it. Studies using more sophisticated approaches to assess healthcare provider racism are required to inform interventions aimed at reducing racial disparities in health.

  1. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  2. Maximum power per VA control of vector controlled interior ...

    Indian Academy of Sciences (India)

    Thakur Sumeet Singh

    2018-04-11

    Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum-utilization of the drive-system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...

  3. Electron density distribution in Si and Ge using multipole, maximum ...

    Indian Academy of Sciences (India)

    Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.

  4. What intent, whose intent and to what extent?

    DEFF Research Database (Denmark)

    Nilsson, David; Minssen, Timo

    2012-01-01

    . The latter is commonly referred to as the knowledge requirement. The basic idea behind indirect infringement is that the patentee can already start legal action before his patent is actually infringed. It is thus potentially a powerful tool for patentees. In respect of the UK, it has been said...... that the indirect infringement rules extend the patent proprietor’s monopoly to cover the supply of means, relating to an essential element, for putting the invention into effect. Similarly, in Germany the instrument of indirect patent infringement has been described as containing the expansion of the protection...... patent infringement under German law is not easy to understand and its application causes problems in practice. One of the reasons why an indirect patent infringement could fail is because the knowledge requirement in an indirect patent infringement claim is not fulfilled. Why this is so, and also under...

  5. Evaluation of nature and extent of injuries during Dahihandi festival.

    Science.gov (United States)

    Nemade, P; Wade, R; Patwardhan, A R; Kale, S

    2012-01-01

    Injuries related to the Hindu festival of Dahihandi, celebrated in the state of Maharashtra, in which a human pyramid is formed and a pot of money kept at a height is broken, have seen a significant rise in the past few years. The human pyramid formed is multi-layered and carries with it a high risk of injury, including mortality. To evaluate the nature, extent and influencing factors of injuries related to the Dahihandi festival, we present a retrospective analysis of patients who presented to a tertiary care center with injuries during the Dahihandi festival in the year 2010. 124 patients' records were evaluated for timing of injury, height of the Dahihandi pyramid, position of the patient in the multi-layered pyramid, mode of pyramid collapse and mechanism of injury. A binary logistic regression analysis for risk factors was done at the 5% significance level. Univariate and multivariate binary logistic regression of the risk factors for occurrence of a major or minor injury was done using Minitab™ version 16.0 at 5% significance. Out of 139 patients who presented to the center, 15 were not directly involved in the formation of the pyramid; the remaining 124 were included in the analysis. A majority of the patients were above 15 years of age [110 (83.6%)]. 46 (37.1%) patients suffered major injuries. There were 39 fractures, 3 cases of chest wall trauma, 10 cases of head injuries and 1 death. More than half of the patients [78 (56.1%)] were injured after 1800 hours. 73 (58.9%) injured participants were part of a pyramid constructed to reach a Dahihandi placed at 30 feet or more above the ground. 72 (51.8%) participants were part of the middle layers of the pyramid. A fall of a participant from the upper layers onto the body was the main mechanism of injury, and the majority [101 (81.5%)] of the patients were injured during the descent phase of the pyramid. There is a considerable risk of serious, life-threatening injuries inherent to human pyramid formation and descent in the Dahihandi

  6. Geographic extent and variation of a coral reef trophic cascade.

    Science.gov (United States)

    McClanahan, T R; Muthiga, N A

    2016-07-01

    Trophic cascades caused by a reduction in predators of sea urchins have been reported in Indian Ocean and Caribbean coral reefs. Previous studies have been constrained by their site-specific nature and limited spatial replication, which has produced site- and species-specific understanding that can potentially preclude larger community-organization nuances and generalizations. In this study, we aimed to evaluate the extent and variability of the cascade community in response to fishing across ~23° of latitude and longitude in coral reefs in the southwestern Indian Ocean. The taxonomic composition of predators of sea urchins, the sea urchin community itself, and potential effects of changing grazer abundance on the calcifying benthic organisms were studied in 171 unique coral reef sites. We found that geography and habitat were less important than the predator-prey relationships. There were seven sea urchin community clusters that aligned with a gradient of declining fishable biomass and the abundance of a key predator, the orange-lined triggerfish (Balistapus undulatus). The orange-lined triggerfish dominated where sea urchin numbers and diversity were low, but the relative abundance of wrasses and emperors increased where sea urchin numbers were high. Two-thirds of the study sites had high sea urchin biomass (>2,300 kg/ha) and could be dominated by four different sea urchin species, Echinothrix diadema, Diadema savignyi, D. setosum, and Echinometra mathaei, depending on the community of sea urchin predators, geographic location, and water depth. One-third of the sites had low sea urchin biomass and diversity and were typified by high fish biomass, predators of sea urchins, and herbivore abundance, representing lightly fished communities with generally higher cover of calcifying algae. Calcifying algal cover was associated with low urchin abundance whereas noncalcifying fleshy algal cover was not clearly associated with herbivore abundance. Fishing of the orange

  7. A Fully Automated Classification for Mapping the Annual Cropland Extent

    Science.gov (United States)

    Waldner, F.; Defourny, P.

    2015-12-01

    Mapping the global cropland extent is of paramount importance for food security. Indeed, accurate and reliable information on cropland and the location of major crop types is required to make future policy, investment, and logistical decisions, as well as for production monitoring. Timely cropland information directly feeds early warning systems such as GIEWS and FEWS NET. In Africa, and particularly in the arid and semi-arid region, food security is at the center of debate (at least 10% of the population remains undernourished) and accurate cropland estimation is a challenge. Space-borne Earth Observation provides opportunities for global cropland monitoring in a spatially explicit, economic, efficient, and objective fashion. In both agriculture monitoring and climate modelling, cropland maps serve as a mask to isolate agricultural land (i) for time-series analysis for crop condition monitoring and (ii) to investigate how cropland responds to climatic evolution. A large diversity of mapping strategies, ranging from the local to the global scale and associated with various degrees of accuracy, can be found in the literature. At the global scale, despite efforts, cropland is generally one of the classes with the poorest accuracy, which makes its use for agricultural applications difficult. This research aims at improving cropland delineation from the local scale to the regional and global scales, as well as allowing near-real-time updates. To that aim, five temporal features were designed to target the key characteristics of crop spectral-temporal behavior. To ensure a high degree of automation, training data are extracted from available baseline land cover maps. The method delivers cropland maps with a high accuracy over contrasted agro-systems in Ukraine, Argentina, China and Belgium. The accuracies reached are comparable to those obtained with classifiers trained with in-situ data. Besides, it was found that the cropland class is associated with a low uncertainty. The temporal features
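
    The abstract does not name the classifier, so the following is only a generic sketch of the workflow it outlines: per-pixel temporal features, labels taken from an existing baseline land-cover map instead of in-situ data, and a supervised model used to delineate cropland. All feature values below are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical setup: five temporal features per pixel (e.g., seasonal metrics of a vegetation index),
# with training labels drawn from a baseline land-cover map rather than from field data.
rng = np.random.default_rng(42)
features_crop    = rng.normal(loc=0.6, scale=0.1, size=(500, 5))   # pixels labelled cropland by the baseline map
features_noncrop = rng.normal(loc=0.3, scale=0.1, size=(500, 5))   # pixels labelled non-cropland
X = np.vstack([features_crop, features_noncrop])
y = np.array([1] * 500 + [0] * 500)                                 # 1 = cropland

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Classify unseen pixels from their temporal features
new_pixels = rng.normal(loc=0.45, scale=0.15, size=(10, 5))
print(clf.predict(new_pixels))
```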

  8. PET/CT colonography: a novel non-invasive technique for assessment of extent and activity of ulcerative colitis

    Energy Technology Data Exchange (ETDEWEB)

    Das, Chandan J.; Sharma, Raju [All India Institute of Medical Sciences, Department of Radiodiagnosis, New Delhi (India); Makharia, Govind K.; Tiwari, Rajeew P. [All India Institute of Medical Sciences, Department of Gastroenterology and Human Nutrition, New Delhi (India); Kumar, Rakesh; Kumar, Rajender; Malhotra, Arun [All India Institute of Medical Sciences, Department of Nuclear Medicine, New Delhi (India)

    2010-04-15

    Extent of involvement and activity of ulcerative colitis (UC) is best evaluated by colonoscopy. Colonoscopy, however, carries risk during acute exacerbation. We investigated the utility of PET/CT colonography for assessment of the extent and activity of UC. Within a 1-week window, 15 patients with mild to moderately active UC underwent colonoscopy and PET/CT colonography 60 min after injection of 10 mCi of {sup 18}F-fluorodeoxyglucose (FDG). A PET activity score based on the amount of FDG uptake and the endoscopic mucosal activity in seven colonic segments of each patient were recorded. The mean maximum standardized uptake value (SUV{sub max}) of the seven segments was compared with activity in the liver. A PET activity grade of 0, 1, 2 or 3 was assigned to each region depending upon its SUV{sub max} ratio (colon segment to liver). The extent of disease was left-sided colitis in five and pancolitis in ten. The mean Ulcerative Colitis Disease Activity Index (UCDAI) was 7.6. The number of segments involved as per colonoscopic evaluation and PET/CT colonography was 67 and 66, respectively. There was a good correlation for extent evaluation between the two modalities (kappa 55.3%, p = 0.02). One patient had grade 0 PET activity, nine had grade 2 and five had grade 3 PET activity. In six patients, there was a one-to-one correlation between PET activity grade and endoscopic grade. One patient showed activity in the sacroiliac joint suggesting active sacroiliitis. PET/CT colonography is a novel non-invasive technique for the assessment of the extent and activity of disease in patients with UC. (orig.)

  9. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  10. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  11. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  12. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

    The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  13. Age, extent and carbon storage of the central Congo Basin peatland complex.

    Science.gov (United States)

    Dargie, Greta C; Lewis, Simon L; Lawson, Ian T; Mitchard, Edward T A; Page, Susan E; Bocko, Yannick E; Ifo, Suspense A

    2017-02-02

    Peatlands are carbon-rich ecosystems that cover just three per cent of Earth's land surface, but store one-third of soil carbon. Peat soils are formed by the build-up of partially decomposed organic matter under waterlogged anoxic conditions. Most peat is found in cool climatic regions where unimpeded decomposition is slower, but deposits are also found under some tropical swamp forests. Here we present field measurements from one of the world's most extensive regions of swamp forest, the Cuvette Centrale depression in the central Congo Basin. We find extensive peat deposits beneath the swamp forest vegetation (peat defined as material with an organic matter content of at least 65 per cent to a depth of at least 0.3 metres). Radiocarbon dates indicate that peat began accumulating from about 10,600 years ago, coincident with the onset of more humid conditions in central Africa at the beginning of the Holocene. The peatlands occupy large interfluvial basins, and seem to be largely rain-fed and ombrotrophic-like (of low nutrient status) systems. Although the peat layer is relatively shallow (with a maximum depth of 5.9 metres and a median depth of 2.0 metres), by combining in situ and remotely sensed data, we estimate the area of peat to be approximately 145,500 square kilometres (95 per cent confidence interval of 131,900-156,400 square kilometres), making the Cuvette Centrale the most extensive peatland complex in the tropics. This area is more than five times the maximum possible area reported for the Congo Basin in a recent synthesis of pantropical peat extent. We estimate that the peatlands store approximately 30.6 petagrams (30.6 × 10^15 grams) of carbon belowground (95 per cent confidence interval of 6.3-46.8 petagrams of carbon)-a quantity that is similar to the above-ground carbon stocks of the tropical forests of the entire Congo Basin. Our result for the Cuvette Centrale increases the best estimate of global tropical peatland carbon stocks by
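
    As a rough plausibility check of the reported stock, the sketch below combines the abstract's area and median depth with an assumed volumetric carbon density; the 105 kg C per cubic metre figure is a hypothetical round number chosen for illustration, not a value from the paper:

```python
# Back-of-envelope check of the ~30.6 Pg C figure using one assumed parameter.
area_m2        = 145_500 * 1e6    # 145,500 km^2 mapped peat extent, converted to m^2
median_depth_m = 2.0              # median peat depth reported in the abstract
carbon_density = 105.0            # kg C per m^3 of peat -- hypothetical, illustration only

stock_pg = area_m2 * median_depth_m * carbon_density / 1e12   # kg -> petagrams
print(f"{stock_pg:.1f} Pg C")     # ~30.6 Pg C with these assumed numbers
```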

  14. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    Frome, E.L; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low-LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
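
    A minimal sketch of the kind of Poisson maximum likelihood fit described; the linear-quadratic mean function follows the abstract, but the dose groups, cell counts and dicentric yields below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical acute-exposure data (illustration only): dose in Gy,
# lymphocytes scored per group, and dicentric aberrations observed.
dose  = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
cells = np.array([5000, 4000, 3000, 2000, 1500, 1000])
dics  = np.array([5, 30, 70, 190, 330, 520])

def neg_log_likelihood(theta):
    """Poisson negative log-likelihood for a linear-quadratic yield c0 + c1*d + c2*d^2 per cell."""
    c0, c1, c2 = theta
    lam = cells * (c0 + c1 * dose + c2 * dose**2)   # expected dicentrics per dose group
    if np.any(lam <= 0):
        return np.inf
    return -np.sum(dics * np.log(lam) - lam)        # constant log(y!) term omitted

fit = minimize(neg_log_likelihood, x0=[0.001, 0.02, 0.05], method="Nelder-Mead")
print("estimated yield coefficients:", fit.x)
```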

  15. Applications of the maximum entropy principle in nuclear physics

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1990-01-01

    Soon after the advent of information theory the principle of maximum entropy was recognized as furnishing the missing rationale for the familiar rules of classical thermodynamics. More recently it has also been applied successfully in nuclear physics. As an elementary example we derive a physically meaningful macroscopic description of the spectrum of neutrons emitted in nuclear fission, and compare the well known result with accurate data on ²⁵²Cf. A second example, derivation of an expression for resonance-averaged cross sections for nuclear reactions like scattering or fission, is less trivial. Entropy maximization, constrained by given transmission coefficients, yields probability distributions for the R- and S-matrix elements, from which average cross sections can be calculated. If constrained only by the range of the spectrum of compound-nuclear levels it produces the Gaussian Orthogonal Ensemble (GOE) of Hamiltonian matrices that again yields expressions for average cross sections. Both avenues give practically the same numbers in spite of the quite different cross section formulae. These results were employed in a new model-aided evaluation of the ²³⁸U neutron cross sections in the unresolved resonance region. (orig.) [de
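
    A hedged sketch of how such a maximum-entropy derivation of the fission-neutron spectrum is typically set up, assuming a mean-energy constraint and a √E density of states (the paper's exact constraints are not reproduced here):

```latex
\max_{p(E)} \; -\int_0^\infty p(E)\,\ln\!\frac{p(E)}{\sqrt{E}}\, dE
\quad \text{subject to} \quad
\int_0^\infty p(E)\, dE = 1 , \qquad
\int_0^\infty E\, p(E)\, dE = \langle E \rangle .
```

    Solving the constrained maximization gives the Maxwellian form p(E) ∝ √E exp(−E/T), with the temperature parameter fixed by ⟨E⟩ = 3T/2.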

  16. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    Frome, E.L.; DuFrain, R.J.

    1986-01-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure

  17. Mixed integer linear programming for maximum-parsimony phylogeny inference.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2008-01-01

    Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
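
    The paper's flow-based and polynomial-sized formulations are not given in the abstract, so the sketch below is only a toy ILP for the simpler small-parsimony problem (minimum number of state changes on a fixed topology with binary characters), to illustrate how parsimony maps onto integer linear programming; the tree, taxa and characters are made up:

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

# Toy fixed topology ((A,B)N1,(C,D)N2)ROOT with three binary characters per taxon.
edges = [("ROOT", "N1"), ("ROOT", "N2"), ("N1", "A"), ("N1", "B"), ("N2", "C"), ("N2", "D")]
leaves = {"A": [0, 0, 1], "B": [0, 1, 1], "C": [1, 1, 0], "D": [1, 0, 0]}
internal, n_sites = ["ROOT", "N1", "N2"], 3

prob = LpProblem("small_parsimony", LpMinimize)
state = {(v, s): LpVariable(f"x_{v}_{s}", cat=LpBinary) for v in internal for s in range(n_sites)}
change = {(u, v, s): LpVariable(f"c_{u}_{v}_{s}", cat=LpBinary) for u, v in edges for s in range(n_sites)}

prob += lpSum(change.values())                     # minimize the total number of state changes

def node_state(node, s):
    return leaves[node][s] if node in leaves else state[(node, s)]

for u, v in edges:
    for s in range(n_sites):
        # the change indicator must cover |state(u) - state(v)| on every edge and site
        prob += change[(u, v, s)] >= node_state(u, s) - node_state(v, s)
        prob += change[(u, v, s)] >= node_state(v, s) - node_state(u, s)

prob.solve()
print("parsimony score:", int(prob.objective.value()))
```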

  18. Maximum likelihood estimation for cytogenetic dose-response curves

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low-LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ(γd + g(t, τ)d²), where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.

  19. Extent and kinetics of recovery of occult spinal cord injury

    International Nuclear Information System (INIS)

    Ang, K. Kian; Jiang, G.-L.; Feng Yan; Stephens, L. Clifton; Tucker, Susan L.; Price, Roger E.

    2001-01-01

    Purpose: To obtain clinically useful quantitative data on the extent and kinetics of recovery of occult radiation injury in primate spinal cord, after a commonly administered elective radiation dose of 44 Gy, given in about 2 Gy per fraction. Methods and Materials: A group of 56 rhesus monkeys was assigned to receive two radiation courses to the cervical and upper thoracic spinal cord, given in 2.2 Gy per fraction. The dose of the initial course was 44 Gy in all monkeys. Reirradiation dose was 57.2 Gy, given after 1-year (n = 16) or 2-year (n = 20) intervals, or 66 Gy, given after 2-year (n = 4) or 3-year (n = 14) intervals. Two animals developed intramedullary tumors before reirradiation and, therefore, did not receive a second course. The study endpoint was myeloparesis, manifesting predominantly as lower extremity weakness and decrease in balance, occurring within 2.5 years after reirradiation, complemented by histologic examination of the spinal cord. The data obtained were analyzed along with data from a previous study addressing single-course tolerance, and data from a preliminary study of reirradiation tolerance. Results: Only 4 of 45 monkeys completing the required observation period (2-2.5 years after reirradiation, 3-5.5 years total) developed myeloparesis. The data revealed a substantial recovery of occult injury induced by 44 Gy within the first year, and suggested additional recovery between 1 and 3 years. Fitting the data with a model, assuming that all (single course and reirradiation) dose-response curves were parallel, yielded recovery estimates of 33.6 Gy (76%), 37.6 Gy (85%), and 44.6 Gy (101%) of the initial dose, after 1, 2, and 3 years, respectively, at the 5% incidence (D₅) level. The most conservative estimate, using a model in which it was assumed that there was no recovery between 1 and 3 years following initial irradiation and that the combined reirradiation curve was not necessarily parallel to the single-course curve, still showed an

  20. Estimation of the players maximum heart rate in real game situations in team sports: a practical propose ESTIMACIÓN DE LA FRECUENCIA CARDIACA MÁXIMA INDIVIDUAL EN SITUACIONES INTEGRADAS DE JUEGO EN DEPORTES COLECTIVOS: UNA PROPUESTA PRÁCTICA

    Directory of Open Access Journals (Sweden)

    Daniel Aguilar

    2011-05-01

    Full Text Available Abstract: This research developed a logarithmic equation for calculating the maximum heart rate (max HR) of team-sport players in game situations. The sample was made up of thirteen players (aged 24 ± 3 years) from a Division Two handball team. HR was initially measured by the Course Navette test. Later, twenty-one training sessions were conducted in which HR and the rating of perceived exertion (RPE) were continuously monitored in each task. A linear regression analysis was done to find a max HR prediction equation from the max HR of the three highest-intensity sessions. Results from this equation correlate significantly with data obtained in the Course Navette test and with those obtained by other indirect methods. The conclusion of this research is that this equation provides a very useful and easy way to measure max HR in real game situations, avoiding non-specific analytical tests and, therefore, laboratory testing. Key words: workout control, functional evaluation, prediction equation. Abstract (translated from Spanish): This study proposes a logarithmic equation for the indirect calculation of maximum heart rate (max HR) in team-sport players in integrated game situations. The experimental sample consisted of thirteen players (24 ± 3 years) belonging to a División de Honor B handball team. Max HR was initially measured by means of the Course Navette test. Subsequently, twenty-one training sessions were carried out in which HR was recorded continuously, together with the rating of perceived exertion (RPE), in each task. A linear regression analysis was performed that yielded a prediction equation for max HR from the maximum heart rates of the three highest-intensity sessions. The values predicted by this equation correlate significantly with the data obtained in the Course Navette test and have a lower error t
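
    A minimal numerical sketch of the regression step described; the paper fits a logarithmic prediction equation, whereas the illustration below uses a plain linear least-squares fit, and all heart-rate values are hypothetical:

```python
import numpy as np

# Hypothetical data: mean of the three highest in-game session HR maxima vs. Course Navette HRmax
session_max = np.array([188.0, 192.0, 185.0, 190.0, 195.0, 187.0, 193.0])
test_max    = np.array([191.0, 196.0, 189.0, 193.0, 199.0, 190.0, 197.0])

# Ordinary least-squares fit: test_max ~ a * session_max + b
A = np.vstack([session_max, np.ones_like(session_max)]).T
(a, b), *_ = np.linalg.lstsq(A, test_max, rcond=None)
print(f"predicted HRmax ≈ {a:.2f} * session_max + {b:.1f}")
```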

  1. The Spatial Extent of Epiretinal Electrical Stimulation in the Healthy Mouse Retina

    Directory of Open Access Journals (Sweden)

    Zohreh Hosseinazdeh

    2017-07-01

    Full Text Available Background/Aims: Retinal prostheses use electrical stimulation to restore functional vision to patients blinded by retinitis pigmentosa. A key detail is the spatial pattern of ganglion cells activated by stimulation. Therefore, we characterized the spatial extent of network-mediated electrical activation of retinal ganglion cells (RGCs in the epiretinal monopolar electrode configuration. Methods: Healthy mouse RGC activities were recorded with a micro-electrode array (MEA. The stimuli consisted of monophasic rectangular cathodic voltage pulses and cycling full-field light flashes. Results: Voltage tuning curves exhibited significant hysteresis, reflecting adaptation to electrical stimulation on the time scale of seconds. Responses decreased from 0 to 300 µm, and were also dependent on the strength of stimulation. Applying the Rayleigh criterion to the half-width at half-maximum of the electrical point spread function suggests a visual acuity limit of no better than 20/946. Threshold voltage showed only a modest increase across these distances. Conclusion: The existence of significant hysteresis requires that future investigations of electrical retinal stimulation control for such long-memory adaptation. The spread of electrical activation beyond 200 µm suggests that neighbouring electrodes in epiretinal implants based on indirect stimulation of RGCs may be indiscriminable at interelectrode spacings as large as 400 µm.

  2. [Autonomy: to what extent is the concept relevant in psychiatry?].

    Science.gov (United States)

    de Wit, F A

    2012-01-01

    Autonomy is an important concept in psychiatry, but because it is a somewhat abstract and ambiguous notion, it is not applicable in its entirety in a psychiatric context. This becomes obvious in situations where patients are receiving long term care and treatment. To modify the concept of autonomy in such a way that it acquires an extra dimension that renders it applicable to daily psychiatric practice. The literature was reviewed in order to find articles that reveal the tensions that arise between autonomy and dependence in psychiatry and that reflect the human characteristics that are concealed behind the modern concepts of autonomy, freedom and respect for autonomy. Concepts such as person, identity, acknowledgement, dialogical ethics and life histories are used as an addition to the concepts of autonomy of Kant and Mill. A phenomenological and a context sensitive conception of autonomy is needed within the perspective of dialogical ethics. A dialogical perspective requires from psychiatric professionals a susceptibility for what the patient as a human being really has to say. On the basis of a dialogue where there is space and attention for life histories, backgrounds and the potentials of patients, a new perspective can be developed that is shared by the persons involved. In psychiatry, statements about real autonomy and genuine respect for autonomy are only truly meaningful within the context of doctors, nurses and patients. A hermeneutic approach to patients which involves dialogue creates new opportunities in the field of staff-patient relations.

  3. The NBA’s Maximum Player Salary and the Distribution of Player Rents

    Directory of Open Access Journals (Sweden)

    Kelly M. Hastings

    2015-03-01

    Full Text Available The NBA’s 1999 Collective Bargaining Agreement (CBA) included provisions capping individual player pay in addition to team payrolls. This study examines the effect of the NBA’s maximum player salary on player rents by comparing player pay from the 1997–1998 and 2003–2004 seasons while controlling for player productivity and other factors related to player pay. The results indicate a large increase in the pay received by teams’ second highest and, to a lesser extent, third highest paid players. We interpret this result as evidence that the adoption of the maximum player salary shifted rents from stars to complementary players. We also show that the 1999 CBA’s rookie contract provisions reduced salaries of early career players.

  4. Maximum Acceptable Vibrato Excursion as a Function of Vibrato Rate in Musicians and Non-musicians

    DEFF Research Database (Denmark)

    Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels H.

    2014-01-01

    Human vibrato is mainly characterized by two parameters: vibrato extent and vibrato rate. These parameters have been found to exhibit an interaction both in physical recordings of singers’ voices and in listeners’ preference ratings. This study was concerned with the way in which the maximum ... and, in most listeners, exhibited a peak at medium vibrato rates (5–7 Hz). Large across-subject variability was observed, and no significant effect of musical experience was found. Overall, most listeners were not solely sensitive to the vibrato excursion and there was a listener-dependent rate ... for which larger vibrato excursions were favored. The observed interaction between maximum excursion thresholds and vibrato rate may be due to the listeners’ judgments relying on cues provided by the rate of frequency changes (RFC) rather than excursion per se. Further studies are needed to evaluate ...

  5. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  6. Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB

    CERN Document Server

    Millar, Russell B

    2011-01-01

    This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis

  7. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservation are considered as optimization constraints. The optimal steady-state enzyme rate constants computed in this way also yield the most uniform probability distribution of the enzyme states, which accounts for the maximal Shannon information entropy. By means of stability analysis it is also demonstrated that maximal density of entropy production in the enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme glucose isomerase.
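
    A generic numerical illustration of constrained Shannon-entropy maximization, not the paper's kinetic model: the four "state energies" and the mean-value constraint below are hypothetical placeholders for the mass and free-energy constraints the study uses:

```python
import numpy as np
from scipy.optimize import minimize

# Find the occupancy distribution over four enzyme states that maximizes Shannon entropy
# subject to normalization and a fixed mean "energy" (illustrative constraint only).
energies = np.array([0.0, 1.0, 2.0, 3.0])      # hypothetical state energies
target_mean = 1.2

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))               # minimizing this maximizes Shannon entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    {"type": "eq", "fun": lambda p: p @ energies - target_mean},
]
res = minimize(neg_entropy, x0=np.full(4, 0.25), bounds=[(0.0, 1.0)] * 4, constraints=constraints)
print(res.x)    # Boltzmann-like weights: the constrained maximum-entropy solution
```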

  8. Solar Maximum Mission Experiment - Ultraviolet Spectroscopy and Polarimetry on the Solar Maximum Mission

    Science.gov (United States)

    Tandberg-Hanssen, E.; Cheng, C. C.; Woodgate, B. E.; Brandt, J. C.; Chapman, R. D.; Athay, R. G.; Beckers, J. M.; Bruner, E. C.; Gurman, J. B.; Hyder, C. L.

    1981-01-01

    The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission spacecraft is described. It is pointed out that the instrument, which operates in the wavelength range 1150-3600 A, has a spatial resolution of 2-3 arcsec and a spectral resolution of 0.02 A FWHM in second order. A Gregorian telescope, with a focal length of 1.8 m, feeds a 1 m Ebert-Fastie spectrometer. A polarimeter comprising rotating MgF2 waveplates can be inserted behind the spectrometer entrance slit; it permits all four Stokes parameters to be determined. Among the observing modes are rasters, spectral scans, velocity measurements, and polarimetry. Examples of initial observations made since launch are presented.

  9. Influence of Arctic Sea Ice Extent on Polar Cloud Fraction and Vertical Structure and Implications for Regional Climate

    Science.gov (United States)

    Palm, Stephen P.; Strey, Sara T.; Spinhirne, James; Markus, Thorsten

    2010-01-01

    Recent satellite lidar measurements of cloud properties spanning a period of 5 years are used to examine a possible connection between Arctic sea ice amount and polar cloud fraction and vertical distribution. We find an anticorrelation between sea ice extent and cloud fraction with maximum cloudiness occurring over areas with little or no sea ice. We also find that over ice-free regions, there is greater low cloud frequency and average optical depth. Most of the optical depth increase is due to the presence of geometrically thicker clouds over water. In addition, our analysis indicates that over the last 5 years, October and March average polar cloud fraction has increased by about 7% and 10%, respectively, as year average sea ice extent has decreased by 5%-7%. The observed cloud changes are likely due to a number of effects including, but not limited to, the observed decrease in sea ice extent and thickness. Increasing cloud amount and changes in vertical distribution and optical properties have the potential to affect the radiative balance of the Arctic region by decreasing both the upwelling terrestrial longwave radiation and the downward shortwave solar radiation. Because longwave radiation dominates in the long polar winter, the overall effect of increasing low cloud cover is likely a warming of the Arctic and thus a positive climate feedback, possibly accelerating the melting of Arctic sea ice.

  10. The Influence of Arctic Sea Ice Extent on Polar Cloud Fraction and Vertical Structure and Implications for Regional Climate

    Science.gov (United States)

    Palm, Stephen P.; Strey, Sara T.; Spinhirne, James; Markus, Thorsten

    2010-01-01

    Recent satellite lidar measurements of cloud properties spanning a period of five years are used to examine a possible connection between Arctic sea ice amount and polar cloud fraction and vertical distribution. We find an anti-correlation between sea ice extent and cloud fraction with maximum cloudiness occurring over areas with little or no sea ice. We also find that over ice free regions, there is greater low cloud frequency and average optical depth. Most of the optical depth increase is due to the presence of geometrically thicker clouds over water. In addition, our analysis indicates that over the last 5 years, October and March average polar cloud fraction has increased by about 7 and 10 percent, respectively, as year average sea ice extent has decreased by 5 to 7 percent. The observed cloud changes are likely due to a number of effects including, but not limited to, the observed decrease in sea ice extent and thickness. Increasing cloud amount and changes in vertical distribution and optical properties have the potential to affect the radiative balance of the Arctic region by decreasing both the upwelling terrestrial longwave radiation and the downward shortwave solar radiation. Since longwave radiation dominates in the long polar winter, the overall effect of increasing low cloud cover is likely a warming of the Arctic and thus a positive climate feedback, possibly accelerating the melting of Arctic sea ice.

  11. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
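
    A sketch of the classical method-of-moments (Matheron) semivariogram estimator discussed in the abstract; the simulated coordinates and lognormal values below merely stand in for throughfall data, and the robust and residual-maximum-likelihood estimators the study compares are not shown:

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Matheron (method-of-moments) estimator of the semivariogram.
    coords: (n, 2) sampling locations; values: (n,) measurements; bin_edges: lag-distance bins."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :])**2
    iu = np.triu_indices(len(values), k=1)          # each pair counted once
    dist, semivar = d[iu], sq[iu]
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for i, (lo, hi) in enumerate(zip(bin_edges[:-1], bin_edges[1:])):
        mask = (dist >= lo) & (dist < hi)
        if mask.any():
            gamma[i] = semivar[mask].mean()         # average semivariance in each lag bin
    return gamma

rng = np.random.default_rng(0)
coords = rng.uniform(0, 50, size=(150, 2))          # 150 points on a hypothetical 50 m plot
values = rng.lognormal(mean=0.0, sigma=0.5, size=150)
print(empirical_variogram(coords, values, np.linspace(0, 25, 11)))
```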

  12. New results on the mid-latitude midnight temperature maximum

    Science.gov (United States)

    Mesquita, Rafael L. A.; Meriwether, John W.; Makela, Jonathan J.; Fisher, Daniel J.; Harding, Brian J.; Sanders, Samuel C.; Tesema, Fasil; Ridley, Aaron J.

    2018-04-01

    Fabry-Perot interferometer (FPI) measurements of thermospheric temperatures and winds show the detection and successful determination of the latitudinal distribution of the midnight temperature maximum (MTM) in the continental mid-eastern United States. These results were obtained through the operation of the five FPI observatories in the North American Thermosphere Ionosphere Observing Network (NATION) located at the Pisgah Astronomic Research Institute (PAR) (35.2° N, 82.8° W), Virginia Tech (VTI) (37.2° N, 80.4° W), Eastern Kentucky University (EKU) (37.8° N, 84.3° W), Urbana-Champaign (UAO) (40.2° N, 88.2° W), and Ann Arbor (ANN) (42.3° N, 83.8° W). A new approach for analyzing the MTM phenomenon is developed, which features the combination of a method of harmonic thermal background removal followed by a 2-D inversion algorithm to generate sequential 2-D temperature residual maps at 30 min intervals. The simultaneous study of the temperature data from these FPI stations represents a novel analysis of the MTM and its large-scale latitudinal and longitudinal structure. The major finding in examining these maps is the frequent detection of a secondary MTM peak occurring during the early evening hours, nearly 4.5 h prior to the timing of the primary MTM peak that generally appears after midnight. The analysis of these observations shows a strong night-to-night variability for this double-peaked MTM structure. A statistical study of the behavior of the MTM events was carried out to determine the extent of this variability with regard to the seasonal and latitudinal dependence. The results show the presence of the MTM peak(s) in 106 out of the 472 determinable nights (when the MTM presence, or lack thereof, can be determined with certainty in the data set) selected for analysis (22 %) out of the total of 846 nights available. The MTM feature is seen to appear slightly more often during the summer (27 %), followed by fall (22 %), winter (20 %), and spring

  13. Understanding the Role of Reservoir Size on Probable Maximum Precipitation

    Science.gov (United States)

    Woldemichael, A. T.; Hossain, F.

    2011-12-01

    This study addresses the question 'Does surface area of an artificial reservoir matter in the estimation of probable maximum precipitation (PMP) for an impounded basin?' The motivation of the study was based on the notion that the stationarity assumption that is implicit in the PMP for dam design can be undermined in the post-dam era due to an enhancement of extreme precipitation patterns by an artificial reservoir. In addition, the study lays the foundation for use of regional atmospheric models as one way to perform life cycle assessment for planned or existing dams to formulate best management practices. The American River Watershed (ARW) with the Folsom dam at the confluence of the American River was selected as the study region and the Dec-Jan 1996-97 storm event was selected for the study period. The numerical atmospheric model used for the study was the Regional Atmospheric Modeling System (RAMS). First, the numerical modeling system, RAMS, was calibrated and validated with selected station and spatially interpolated precipitation data. Best combinations of parameterization schemes in RAMS were accordingly selected. Second, to mimic the standard method of PMP estimation by the moisture maximization technique, relative humidity terms in the model were raised to 100% from the ground up to the 500 mb level. The obtained model-based maximum 72-hr precipitation values were named extreme precipitation (EP) as a distinction from the PMPs obtained by the standard methods. Third, six hypothetical reservoir size scenarios ranging from no-dam (all-dry) to the reservoir submerging half of the basin were established to test the influence of reservoir size variation on EP. For the case of the ARW, our study clearly demonstrated that the assumption of stationarity that is implicit in the traditional estimation of PMP can be rendered invalid in large part due to the very presence of the artificial reservoir. Cloud tracking procedures performed on the basin also give indication of the

  14. The nature, extent and effect of skills shortages on skills migration in South Africa

    Directory of Open Access Journals (Sweden)

    Fatima Rasool

    2011-07-01

    Full Text Available Orientation: South Africa is currently experiencing a serious shortage of skilled workers. It has a negative effect on South Africa’s economic prospects and on global participation in South Africa (SA. This skills shortage severely affects socioeconomic growth and development in SA. Research purpose: This study focuses on the causes and effects of the skills shortages in South Africa. Motivation for the study: The researchers undertook this study to highlight the role that skilled foreign workers can play in supplementing the shortage of skilled workers in South Africa. The shortage is partly because of the failure of the national education and training system to supply the economy with much-needed skills. Research design, approach and method: The researchers undertook a literature study to identify the nature, extent and effect of skills shortages in South Africa. They consulted a wide range of primary and secondary resources in order to acquire an in-depth understanding of the problem. The article explains the research approach and method comprehensively. It also outlines the research method the researchers used. Main findings: This study shows that several factors cause serious skills shortages in SA. Practical/managerial implications: The researchers mention only two significant implications. Firstly, this article provides a logical description of the nature, extent and effect of skills shortages on the economy. Secondly, it indicates clearly the implications of skills shortages for immigration policy. Contribution/value-add: This study confirms the findings of similar studies the Centre for Development and Enterprise (CDE conducted. Opening the doors to highly skilled immigrants can broaden the skills pool.

  15. Benefits of the maximum tolerated dose (MTD) and maximum tolerated concentration (MTC) concept in aquatic toxicology

    International Nuclear Information System (INIS)

    Hutchinson, Thomas H.; Boegi, Christian; Winter, Matthew J.; Owens, J. Willie

    2009-01-01

    There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the

  16. An essay on the extent and significance of the Greek athletic culture in the classical period

    DEFF Research Database (Denmark)

    Nielsen, Thomas Heine

    2014-01-01

    This article discusses the extent of the Greek athletic culture in the classical period. It is demonstrated that the athletic culture had a surprising extent, and the article goes on to discuss the historical significance of this fact.

  17. Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application

    International Nuclear Information System (INIS)

    Jiya, J. D.; Tahirou, G.

    2002-01-01

    This paper presents a microprocessor controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks to improve the obtained power by varying the duty cycle.
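
    A minimal perturb-and-observe style sketch of the duty-cycle search loop the abstract describes; the power-versus-duty curve below is a made-up single-peak function and the fixed step size is an arbitrary choice, not the paper's implementation:

```python
def perturb_and_observe(read_power, duty, step=0.01, iterations=200):
    """Minimal perturb-and-observe MPPT sketch: nudge the converter duty cycle and keep
    the perturbation direction whenever the measured output power improves."""
    last_power = read_power(duty)
    direction = 1
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        power = read_power(duty)
        if power < last_power:        # power dropped: reverse the perturbation direction
            direction = -direction
        last_power = power
    return duty

# Toy PV power curve with a single maximum near duty = 0.62 (purely illustrative)
print(perturb_and_observe(lambda d: -(d - 0.62)**2 + 1.0, duty=0.3))
```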

  18. Coupling Mars' Dust and Water Cycles: Effects on Dust Lifting Vigor, Spatial Extent and Seasonality

    Science.gov (United States)

    Kahre, M. A.; Hollingsworth, J. L.; Haberle, R. M.; Montmessin, F.

    2012-01-01

    , thereby modifying the thermal structure of the atmosphere and its circulation. Results presented in other papers at this workshop show that including the radiative effects of water ice clouds greatly influence the water cycle and the vigor of weather systems in both the northern and southern hemispheres. Our goal is to investigate the effects of fully coupling the dust and water cycles on the dust cycle. We show that including water ice clouds and their radiative effects greatly affect the magnitude, spatial extent and seasonality of dust lifting and the season of maximum atmospheric dust loading.

  19. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory

    2010-12-15

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms.

  20. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    International Nuclear Information System (INIS)

    Wollaber, Allan B.; Larsen, Edward W.; Densmore, Jeffery D.

    2011-01-01

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle'. Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms. (author)

  1. A reliable, fast and low cost maximum power point tracker for photovoltaic applications

    Energy Technology Data Exchange (ETDEWEB)

    Enrique, J.M.; Andujar, J.M.; Bohorquez, M.A. [Departamento de Ingenieria Electronica, de Sistemas Informaticos y Automatica, Universidad de Huelva (Spain)

    2010-01-15

    This work presents a new maximum power point tracker system for photovoltaic applications. The developed system is an analog version of the ''P and O-oriented'' algorithm. It maintains its main advantages: simplicity, reliability and easy practical implementation, and avoids its main disadvantages: inaccurateness and relatively slow response. Additionally, the developed system can be implemented in a practical way at a low cost, which represents an added value. The system also shows excellent behavior for very fast variations in incident radiation levels. (author)

  2. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...

  3. 78 FR 49370 - Inflation Adjustment of Maximum Forfeiture Penalties

    Science.gov (United States)

    2013-08-14

    ... ``civil monetary penalties provided by law'' at least once every four years. DATES: Effective September 13... increases the maximum civil monetary forfeiture penalties available to the Commission under its rules... maximum civil penalties established in that section to account for inflation since the last adjustment to...

  4. 22 CFR 201.67 - Maximum freight charges.

    Science.gov (United States)

    2010-04-01

    ..., commodity rate classification, quantity, vessel flag category (U.S.-or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...

  5. Maximum penetration level of distributed generation without violating voltage limits

    NARCIS (Netherlands)

    Morren, J.; Haan, de S.W.H.

    2009-01-01

    Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a

  6. Particle Swarm Optimization Based of the Maximum Photovoltaic ...

    African Journals Online (AJOL)

    Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power since the peak power point depends on the temperature and the irradiation level. A maximum peak power point tracking is then necessary for maximum efficiency. In this work, a Particle Swarm ...

  7. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
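
    A compact numerical sketch of a maximum-entropy style soft clustering update, with Gibbs/softmax memberships at a fixed "temperature" parameter beta; this follows the general idea of a soft generalization of hard C-means rather than the paper's exact algorithm, and the data are synthetic:

```python
import numpy as np

def max_entropy_clustering(X, k, beta=5.0, iters=100, seed=0):
    """Soft clustering with Gibbs memberships exp(-beta * distance^2);
    as beta grows, assignments approach hard C-means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
        w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))   # numerically stabilized softmax
        w /= w.sum(axis=1, keepdims=True)
        centers = (w[:, :, None] * X[:, None, :]).sum(0) / w.sum(0)[:, None]
    return centers, w

# Two synthetic 2-D clusters centred near 0 and 3
X = np.vstack([np.random.default_rng(1).normal(m, 0.3, size=(50, 2)) for m in (0.0, 3.0)])
centers, memberships = max_entropy_clustering(X, k=2)
print(centers)
```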

  8. Application of maximum entropy to neutron tunneling spectroscopy

    International Nuclear Information System (INIS)

    Mukhopadhyay, R.; Silver, R.N.

    1990-01-01

    We demonstrate the maximum entropy method for the deconvolution of high resolution tunneling data acquired with a quasielastic spectrometer. Given a precise characterization of the instrument resolution function, a maximum entropy analysis of lutidine data obtained with the IRIS spectrometer at ISIS results in an effective factor of three improvement in resolution. 7 refs., 4 figs

  9. The regulation of starch accumulation in Panicum maximum Jacq ...

    African Journals Online (AJOL)

    ... decrease the starch level. These observations are discussed in relation to the photosynthetic characteristics of P. maximum. Keywords: accumulation; botany; carbon assimilation; co2 fixation; growth conditions; mesophyll; metabolites; nitrogen; nitrogen levels; nitrogen supply; panicum maximum; plant physiology; starch; ...

  10. 32 CFR 842.35 - Depreciation and maximum allowances.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide” to...

  11. The maximum significant wave height in the Southern North Sea

    NARCIS (Netherlands)

    Bouws, E.; Tolman, H.L.; Holthuijsen, L.H.; Eldeberky, Y.; Booij, N.; Ferier, P.

    1995-01-01

    The maximum possible wave conditions along the Dutch coast, which seem to be dominated by the limited water depth, have been estimated in the present study with numerical simulations. Discussions with meteorologists suggest that the maximum possible sustained wind speed in North Sea conditions is

  12. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  13. 5 CFR 838.711 - Maximum former spouse survivor annuity.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the amount...

  14. On risk analysis for repositories in northern Switzerland: extent and probability of geological processes and events

    International Nuclear Information System (INIS)

    Buergisser, H.M.; Herrnberger, V.

    1981-01-01

    The literature study assesses, in the form of expert analysis, geological processes and events for a 1200 km² area of northern Switzerland, with regard to repositories for medium- and high-active waste (depth 100 to 600 m and 600 to 2500 m, respectively) over the next 10⁶ years. The area, which comprises parts of the Tabular Jura, the folded Jura and the Molasse Basin, the latter two being parts of the Alpine Orogene, has undergone a non-uniform geologic development since the Oligocene. Within the next 10⁴ to 10⁵ years a maximum earthquake intensity of VIII-IX (MSK-scale) has been predicted. After this period, particularly in the southern and eastern parts of the area, glaciations will probably occur, with associated erosion of possibly 200 to 300 m. Fluvial erosion as a response to an uplift could reach similar values after 10⁵ to 10⁶ years; however, there are no data on the recent relative vertical crustal movements of the area. The risk of a meteorite impact is considered small as compared to that of these factors. Seismic activity and the position and extent of faults are so poorly known within the area that the faulting probability cannot be derived at present. Flooding by the sea, intrusion of magma, diapirism, metamorphism and volcanic eruptions are not considered to be risk factors for final repositories in northern Switzerland. For the shallow-type repositories, the risks of denudation and landslides have to be judged when locality-bound projects have been proposed. (Auth.)

  15. The role of infarct transmural extent in infarct extension: A computational study.

    Science.gov (United States)

    Leong, Chin-Neng; Lim, Einly; Andriyana, Andri; Al Abed, Amr; Lovell, Nigel Hamilton; Hayward, Christopher; Hamilton-Craig, Christian; Dokos, Socrates

    2017-02-01

    Infarct extension, a process involving progressive extension of the infarct zone (IZ) into the normally perfused border zone (BZ), leads to continuous degradation of the myocardial function and adverse remodelling. Despite carrying a high risk of mortality, detailed understanding of the mechanisms leading to BZ hypoxia and infarct extension remains unexplored. In the present study, we developed a 3D truncated ellipsoidal left ventricular model incorporating realistic electromechanical properties and fibre orientation to examine the mechanical interaction among the remote, infarct and BZs in the presence of varying infarct transmural extent (TME). Localized highly abnormal systolic fibre stress was observed at the BZ, owing to the simultaneous presence of moderately increased stiffness and fibre strain at this region, caused by the mechanical tethering effect imposed by the overstretched IZ. Our simulations also demonstrated the greatest tethering effect and stress in BZ regions with fibre direction tangential to the BZ-remote zone boundary. This can be explained by the lower stiffness in the cross-fibre direction, which gave rise to a greater stretching of the IZ in this direction. The average fibre strain of the IZ, as well as the maximum stress in the sub-endocardial layer, increased steeply from 10% to 50% infarct TME, and slower thereafter. Based on our stress-strain loop analysis, we found impairment in the myocardial energy efficiency and elevated energy expenditure with increasing infarct TME, which we believe to place the BZ at further risk of hypoxia. Copyright © 2016 John Wiley & Sons, Ltd.

  16. The deep chlorophyll layer in Lake Ontario: Extent, mechanisms of formation, and abiotic predictors

    Science.gov (United States)

    Scofield, Anne E.; Watkins, James M.; Weidel, Brian C.; Luckey, Frederick J.; Rudstam, Lars G.

    2017-01-01

    Epilimnetic production has declined in Lake Ontario, but increased production in metalimnetic deep chlorophyll layers (DCLs) may compensate for these losses. We investigated the spatial and temporal extent of DCLs, the mechanisms driving DCL formation, and the use of physical variables for predicting the depth and concentration of the deep chlorophyll maximum (DCM) during April–September 2013. A DCL with DCM concentrations 2 to 3 times greater than those in the epilimnion was present when the euphotic depth extended below the epilimnion, which occurred primarily from late June through mid-August. In situ growth was important for DCL formation in June and July, but settling and photoadaptation likely also contributed to the later-season DCL. Supporting evidence includes: phytoplankton biovolume was 2.4 × greater in the DCL than in the epilimnion during July, the DCL phytoplankton community of July was different from that of May and the July epilimnion (p = 0.004), and there were concurrences of DCM with maxima in fine particle concentration and dissolved oxygen saturation. Higher nutrient levels in the metalimnion may also be a necessary condition for DCL formation because July metalimnetic concentrations were 1.5 × (nitrate) and 3.5 × (silica) greater than in the epilimnion. Thermal structure variables including epilimnion depth, thermocline depth, and thermocline steepness were useful for predicting DCM depth; the inclusion of euphotic depth only marginally improved these predictions. However, euphotic depth was critical for predicting DCM concentrations. The DCL is a productive and predictable feature of the Lake Ontario ecosystem during the stratified period.

  17. Risk analysis for repositories in north Switzerland. Extent and probability of geologic processes and events

    Energy Technology Data Exchange (ETDEWEB)

    Buergisser, H M; Herrnberger, V

    1981-07-01

    The literature study assesses, in the form of expert analysis, geological processes and events for a 1200 km² area of northern Switzerland, with regard to repositories for medium- and high-active waste (depth 100 to 600 m and 600 to 2500 m, respectively) over the next 10⁶ years. The area, which comprises parts of the Tabular Jura, the folded Jura and the Molasse Basin, the latter two being parts of the Alpine Orogene, has undergone a non-uniform geologic development since the Oligocene. Within the next 10⁴ to 10⁵ years a maximum earthquake intensity of VIII-IX (MSK-scale) has been predicted. After this period, particularly in the southern and eastern parts of the area, glaciations will probably occur, with associated erosion of possibly 200 to 300 m. Fluvial erosion as a response to an uplift could reach similar values after 10⁵ to 10⁶ years; however, there are no data on the recent relative vertical crustal movements of the area. The risk of a meteorite impact is considered small as compared to that of these factors. Seismic activity and the position and extent of faults are so poorly known within the area that the faulting probability cannot be derived at present. Flooding by the sea, intrusion of magma, diapirism, metamorphism and volcanic eruptions are not considered to be risk factors for final repositories in northern Switzerland. For the shallow-type repositories, the risks of denudation and landslides have to be judged when locality-bound projects have been proposed.

  18. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determin...... early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy....... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing...

  19. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, $u_x'$ (or $\Delta_t u_x$) $= k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x)$, $x \in \mathbb{Z}$. We prove weak and strong maximum and minimum principles for corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.

  20. Strong Maximum Principle for Multi-Term Time-Fractional Diffusion Equations and its Application to an Inverse Source Problem

    OpenAIRE

    Liu, Yikan

    2015-01-01

    In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...

  1. The Venture Capital-University Interface: Best Practices for Maximum Impact

    Science.gov (United States)

    Holly, Krisztina

    2010-01-01

    Entrepreneurial start-ups have left an indelible impression on much of the USA's recent economic history. As hotbeds for technological innovation, university research laboratories create groundbreaking innovations that have been at the heart of many successful start-ups. However, powerful ideas do not necessarily beget successful companies: great…

  2. Prediction of the extent of formation damage caused by water injection

    Energy Technology Data Exchange (ETDEWEB)

    Al-Homadhi, Emad S. [King Saud Univ., Riyadh (Saudi Arabia). Petroleum Engineering Dept.]

    2013-06-15

    As a general practice, water is injected along the O/W contact to maintain reservoir pressure during production. Downhole analysis of the injected water shows that, even after surface treatment, it can still contain a considerable amount of solid particles. These particles can bridge formation pores and cause a considerable reduction in the injectivity. To ensure good injectivity over a longer term, the concentration and size of these solids should not exceed certain limits. In this article, core flood tests were carried out to simulate high rate injectors. The injected brine contained solid particles in different concentrations and sizes. Particle concentration was between 5 and 20 ppm and the particle mean size was between 2 and 9 µm. The results were presented as damaging ratio versus pore volume injected. Contrary to previous studies, instead of using the experimental results to calibrate or evaluate certain theoretical models, the results in this study were fitted directly to produce equations which can predict the extent of damage caused by injected water from the mean size and concentration of the solid particles contained in that water. (orig.)

  3. Extent of East-African Nurse Leaders’ Participation in Health Policy Development

    Directory of Open Access Journals (Sweden)

    N. Shariff

    2012-01-01

    Full Text Available This paper reports part of a bigger study whose aim was to develop an empowerment model that could be used to enhance nurse leaders’ participation in health policy development. A Delphi survey was applied which included the following criteria: expert panelists, iterative rounds, statistical analysis, and consensus building. The expert panelists were purposively selected and included national nurse leaders in leadership positions at the nursing professional associations, nursing regulatory bodies, ministries of health, and universities in East Africa. The study was conducted in three iterative rounds. The results reported here were gathered as part of the first round of the study, which examined the extent of nurse leaders’ participation in health policy development. Seventy-eight (78) expert panelists were invited to participate in the study, and the response rate was 47%. Data collection was done with the use of a self-report questionnaire. Data analysis was done by use of SPSS and descriptive statistics were examined. The findings indicated that nurse leaders participate in health policy development, though participation is limited and not consistent across all the stages of health policy development. The recommendations from the findings are that the health policy development process needs to be pluralistic and inclusive of all nurse leaders practicing in positions related to policy development, and the process must be open to their ideas and suggestions.

  4. Innovative Human Resource Management Practices and Firm ...

    African Journals Online (AJOL)

    In this study, the effect of innovative HRM practices on the financial performance of banks in Nigeria is examined. Results indicate that strategic integration and devolvement of HRM are practiced to a moderate extent in the Nigerian banking sector. Findings also show that innovative HRM practices have significant positive ...

  5. 78 FR 9845 - Minimum and Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for a Violation of...

    Science.gov (United States)

    2013-02-12

    ... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...

  6. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.

  7. To what extent does variability of historical rainfall series influence extreme event statistics of sewer system surcharge and overflows?

    DEFF Research Database (Denmark)

    Schaarup-Jensen, Kjeld; Rasmussen, Michael R.; Thorndahl, Søren

    2008-01-01

    In urban drainage modeling long term extreme statistics has become an important basis for decision-making e.g. in connection with renovation projects. Therefore it is of great importance to minimize the uncertainties concerning long term prediction of maximum water levels and combined sewer...... overflow (CSO) in drainage systems. These uncertainties originate from large uncertainties regarding rainfall inputs, parameters, and assessment of return periods. This paper investigates how the choice of rainfall time series influences the extreme events statistics of max water levels in manholes and CSO...... gauges are located at a distance of max 20 kilometers from the catchment. All gauges are included in the Danish national rain gauge system which was launched in 1976. The paper describes to what extent the extreme events statistics based on these 9 series diverge from each other and how this diversity...

  8. To what extent does variability of historical rainfall series influence extreme event statistics of sewer system surcharge and overflows?

    DEFF Research Database (Denmark)

    Schaarup-Jensen, Kjeld; Rasmussen, Michael R.; Thorndahl, Søren

    2009-01-01

    In urban drainage modelling long term extreme statistics has become an important basis for decision-making e.g. in connection with renovation projects. Therefore it is of great importance to minimize the uncertainties concerning long term prediction of maximum water levels and combined sewer...... overflow (CSO) in drainage systems. These uncertainties originate from large uncertainties regarding rainfall inputs, parameters, and assessment of return periods. This paper investigates how the choice of rainfall time series influences the extreme events statistics of max water levels in manholes and CSO...... gauges are located at a distance of max 20 kilometers from the catchment. All gauges are included in the Danish national rain gauge system which was launched in 1976. The paper describes to what extent the extreme events statistics based on these 9 series diverge from each other and how this diversity...

  9. The radii of the Wolf-Rayet stars and the extent of their chromosphere-corona formation

    Energy Technology Data Exchange (ETDEWEB)

    Sahade, J [Instituto de Astronomia y Fisica del Espacio, Buenos Aires, Argentina]; Zorec, J [College de France, Paris, France]

    1981-03-01

    The radii of 14 Wolf-Rayet stars are computed on the basis of previously reported absolute fluxes in the region from 4150 to 8000 A for 10 WN stars and from 3650 to 8000 A for four WC stars. For comparison, the radii of 11 Of stars are also calculated. The Wolf-Rayet radii are found to range from about 10 to 35 solar radii, values that do not appear to provide supporting evidence for the hypothesis that Of stars evolve into late WN stars. Available UV observations of Gamma-2 Vel are used to investigate the extent of the chromosphere-corona structure in Wolf-Rayet stars. It is suggested that the second electron-temperature maximum in a recently proposed model for the extended envelopes of Wolf-Rayet stars should be further than about 300 solar radii from the center of a star.

  10. Parameters determining maximum wind velocity in a tropical cyclone

    International Nuclear Information System (INIS)

    Choudhury, A.M.

    1984-09-01

    The spiral structure of a tropical cyclone was earlier explained by a tangential velocity distribution which varies inversely as the distance from the cyclone centre outside the circle of maximum wind speed. The case has been extended in the present paper by adding a radial velocity. It has been found that a suitable combination of radial and tangential velocities can account for the spiral structure of a cyclone. This enables parametrization of the cyclone. Finally a formula has been derived relating maximum velocity in a tropical cyclone with angular momentum, radius of maximum wind speed and the spiral angle. The shapes of the spirals have been computed for various spiral angles. (author)
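
    As a hedged illustration of the velocity profile described above (the symbols v_max, r_max and the spiral angle α are assumed names, not taken from the paper), the tangential wind outside the radius of maximum wind can be written as

    % Modified Rankine-type tangential wind outside the radius of maximum wind
    \[
      v_\theta(r) \;=\; v_{\max}\,\frac{r_{\max}}{r}, \qquad r > r_{\max}.
    \]
    % Adding a radial component v_r = v_\theta \tan\alpha with a constant spiral
    % angle \alpha makes the streamlines logarithmic spirals,
    % r = r_0\, e^{\theta \tan\alpha}, which is the kind of spiral structure the
    % abstract refers to.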

  11. Prognostic value of medulloblastoma extent of resection after accounting for molecular subgroup: a retrospective integrated clinical and molecular analysis.

    Science.gov (United States)

    Thompson, Eric M; Hielscher, Thomas; Bouffet, Eric; Remke, Marc; Luu, Betty; Gururangan, Sridharan; McLendon, Roger E; Bigner, Darell D; Lipp, Eric S; Perreault, Sebastien; Cho, Yoon-Jae; Grant, Gerald; Kim, Seung-Ki; Lee, Ji Yeoun; Rao, Amulya A Nageswara; Giannini, Caterina; Li, Kay Ka Wai; Ng, Ho-Keung; Yao, Yu; Kumabe, Toshihiro; Tominaga, Teiji; Grajkowska, Wieslawa A; Perek-Polnik, Marta; Low, David C Y; Seow, Wan Tew; Chang, Kenneth T E; Mora, Jaume; Pollack, Ian F; Hamilton, Ronald L; Leary, Sarah; Moore, Andrew S; Ingram, Wendy J; Hallahan, Andrew R; Jouvet, Anne; Fèvre-Montange, Michelle; Vasiljevic, Alexandre; Faure-Conter, Cecile; Shofuda, Tomoko; Kagawa, Naoki; Hashimoto, Naoya; Jabado, Nada; Weil, Alexander G; Gayden, Tenzin; Wataya, Takafumi; Shalaby, Tarek; Grotzer, Michael; Zitterbart, Karel; Sterba, Jaroslav; Kren, Leos; Hortobágyi, Tibor; Klekner, Almos; László, Bognár; Pócza, Tímea; Hauser, Peter; Schüller, Ulrich; Jung, Shin; Jang, Woo-Youl; French, Pim J; Kros, Johan M; van Veelen, Marie-Lise C; Massimi, Luca; Leonard, Jeffrey R; Rubin, Joshua B; Vibhakar, Rajeev; Chambless, Lola B; Cooper, Michael K; Thompson, Reid C; Faria, Claudia C; Carvalho, Alice; Nunes, Sofia; Pimentel, José; Fan, Xing; Muraszko, Karin M; López-Aguilar, Enrique; Lyden, David; Garzia, Livia; Shih, David J H; Kijima, Noriyuki; Schneider, Christian; Adamski, Jennifer; Northcott, Paul A; Kool, Marcel; Jones, David T W; Chan, Jennifer A; Nikolic, Ana; Garre, Maria Luisa; Van Meir, Erwin G; Osuka, Satoru; Olson, Jeffrey J; Jahangiri, Arman; Castro, Brandyn A; Gupta, Nalin; Weiss, William A; Moxon-Emre, Iska; Mabbott, Donald J; Lassaletta, Alvaro; Hawkins, Cynthia E; Tabori, Uri; Drake, James; Kulkarni, Abhaya; Dirks, Peter; Rutka, James T; Korshunov, Andrey; Pfister, Stefan M; Packer, Roger J; Ramaswamy, Vijay; Taylor, Michael D

    2016-04-01

    benefit for gross total resection compared with near-total resection (HR 1·05, 0·71-1·53, p=0·8158 for progression-free survival and HR 1·14, 0·75-1·72, p=0·55 for overall survival). No significant survival benefit existed for greater extent of resection for patients with WNT, SHH, or group 3 tumours (HR 1·03, 0·67-1·58, p=0·89 for sub-total resection vs gross total resection). For patients with group 4 tumours, gross total resection conferred a benefit to progression-free survival compared with sub-total resection (HR 1·97, 1·22-3·17, p=0·0056), especially for those with metastatic disease (HR 2·22, 1·00-4·93, p=0·050). However, gross total resection had no effect on overall survival compared with sub-total resection in patients with group 4 tumours (HR 1·67, 0·93-2·99, p=0·084). The prognostic benefit of increased extent of resection for patients with medulloblastoma is attenuated after molecular subgroup affiliation is taken into account. Although maximum safe surgical resection should remain the standard of care, surgical removal of small residual portions of medulloblastoma is not recommended when the likelihood of neurological morbidity is high because there is no definitive benefit to gross total resection compared with near-total resection. Canadian Cancer Society Research Institute, Terry Fox Research Institute, Canadian Institutes of Health Research, National Institutes of Health, Pediatric Brain Tumor Foundation, and the Garron Family Chair in Childhood Cancer Research. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)

    Data.gov (United States)

    NSGIC Education | GIS Inventory — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...

  13. Probabilistic maximum-value wind prediction for offshore environments

    DEFF Research Database (Denmark)

    Staid, Andrea; Pinson, Pierre; Guikema, Seth D.

    2015-01-01

    statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed......, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop...... the full probabilistic distribution of maximum wind speed. Knowledge of the maximum wind speed for an offshore location within a given period can inform decision-making regarding turbine operations, planned maintenance operations and power grid scheduling in order to improve safety and reliability...
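
    A minimal sketch of the two-step idea described in this record (fit a model for the mean of the interval maximum, then use the training residuals to form a predictive distribution) is given below in Python; the covariates and data are synthetic placeholders, not the ECMWF forecast fields used in the study.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical training data: rows are 3-hour intervals, columns are forecast
    # covariates (e.g. gust speed, mean wind speed, pressure); y is the observed
    # maximum wind speed in each interval.  All names and values are illustrative.
    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(500, 3))
    y_train = 10 + X_train @ np.array([2.0, 1.0, 0.5]) + rng.normal(scale=1.5, size=500)

    # Step 1: model the conditional mean of the interval maximum.
    mean_model = LinearRegression().fit(X_train, y_train)

    # Step 2: use the training residuals to describe the spread around that mean,
    # giving a simple predictive distribution for new intervals.
    residuals = y_train - mean_model.predict(X_train)

    def predict_max_quantiles(X_new, qs=(0.05, 0.5, 0.95)):
        mu = mean_model.predict(X_new)
        return {q: mu + np.quantile(residuals, q) for q in qs}

    print(predict_max_quantiles(rng.normal(size=(2, 3))))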

  14. Parametric optimization of thermoelectric elements footprint for maximum power generation

    DEFF Research Database (Denmark)

    Rezania, A.; Rosendahl, Lasse; Yin, Hao

    2014-01-01

    The development studies in thermoelectric generator (TEG) systems are mostly disconnected to parametric optimization of the module components. In this study, optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost......-performance, and variation of efficiency in the uni-couple over a wide range of the heat transfer coefficient on the cold junction. The three-dimensional (3D) governing equations of the thermoelectricity and the heat transfer are solved using the finite element method (FEM) for temperature dependent properties of TE...... materials. The results, which are in good agreement with the previous computational studies, show that the maximum power generation and the maximum cost-performance in the module occur at An/Ap

  15. Ethylene Production Maximum Achievable Control Technology (MACT) Compliance Manual

    Science.gov (United States)

    This July 2006 document is intended to help owners and operators of ethylene processes understand and comply with EPA's maximum achievable control technology standards promulgated on July 12, 2002, as amended on April 13, 2005 and April 20, 2006.

  16. A technique for estimating maximum harvesting effort in a stochastic ...

    Indian Academy of Sciences (India)

    Unknown

    Estimation of maximum harvesting effort has a great impact on the ... fluctuating environment has been developed in a two-species competitive system, which shows that under realistic .... The existence and local stability properties of the equi-.

  17. Water Quality Assessment and Total Maximum Daily Loads Information (ATTAINS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Water Quality Assessment TMDL Tracking And Implementation System (ATTAINS) stores and tracks state water quality assessment decisions, Total Maximum Daily Loads...

  18. Post optimization paradigm in maximum 3-satisfiability logic programming

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of searching for the maximum number of satisfied clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network to hasten Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post optimization techniques are investigated, including the Elliot symmetric activation function, the Gaussian activation function, the Wavelet activation function and the hyperbolic tangent activation function. The performance of these post optimization techniques in accelerating MAX-3SAT logic programming is discussed in terms of the ratio of maximum satisfied clauses, the Hamming distance and the computation time. Dev-C++ was used as the platform for training, testing and validating the proposed techniques. The results show that the hyperbolic tangent activation function and the Elliot symmetric activation function can be used for MAX-3SAT logic programming.
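
    The "ratio of maximum satisfied clauses" used as a performance measure above can be illustrated with a short Python helper; the clause encoding (signed integers, DIMACS-like) is an assumption for the example and is not taken from the paper.

    def satisfied_ratio(clauses, assignment):
        """Fraction of 3-SAT clauses satisfied by a truth assignment.

        `clauses` is a list of 3-literal tuples; a positive integer i means
        variable i, a negative integer -i means its negation.  `assignment`
        maps variable index -> bool.
        """
        satisfied = 0
        for clause in clauses:
            if any(assignment[abs(lit)] == (lit > 0) for lit in clause):
                satisfied += 1
        return satisfied / len(clauses)

    # Tiny example: two clauses over three variables
    clauses = [(1, -2, 3), (-1, 2, -3)]
    assignment = {1: True, 2: False, 3: False}
    print(satisfied_ratio(clauses, assignment))  # -> 1.0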

  19. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
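
    As an illustrative sketch only (the actual price data are not reproduced here), a two-component normal mixture can be fitted by maximum likelihood via the EM algorithm, for example with scikit-learn:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic stand-in for paired series (e.g. a commodity price and a stock
    # index); the real study fits actual market data.
    rng = np.random.default_rng(0)
    data = np.vstack([
        rng.multivariate_normal([0.0, 0.0], [[1.0, -0.4], [-0.4, 1.0]], size=300),
        rng.multivariate_normal([2.0, -1.0], [[0.5, 0.1], [0.1, 0.5]], size=200),
    ])

    # Maximum likelihood fit of a two-component normal mixture via EM
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    gmm.fit(data)

    print("weights:", gmm.weights_)
    print("means:\n", gmm.means_)
    print("log-likelihood per sample:", gmm.score(data))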

  20. Encoding Strategy for Maximum Noise Tolerance Bidirectional Associative Memory

    National Research Council Canada - National Science Library

    Shen, Dan

    2003-01-01

    In this paper, the Basic Bidirectional Associative Memory (BAM) is extended by choosing weights in the correlation matrix, for a given set of training pairs, which result in a maximum noise tolerance set for BAM...

  1. Narrow band interference cancelation in OFDM: Astructured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  2. Revisions to the original extent of the Devonian Shale-Middle and Upper Paleozoic Total Petroleum System

    Science.gov (United States)

    Enomoto, Catherine B.; Rouse, William A.; Trippi, Michael H.; Higley, Debra K.

    2016-04-11

    Technically recoverable undiscovered hydrocarbon resources in continuous accumulations are present in Upper Devonian and Lower Mississippian strata in the Appalachian Basin Petroleum Province. The province includes parts of New York, Pennsylvania, Ohio, Maryland, West Virginia, Virginia, Kentucky, Tennessee, Georgia, and Alabama. The Upper Devonian and Lower Mississippian strata are part of the previously defined Devonian Shale-Middle and Upper Paleozoic Total Petroleum System (TPS) that extends from New York to Tennessee. This publication presents a revision to the extent of the Devonian Shale-Middle and Upper Paleozoic TPS. The most significant modification to the maximum extent of the Devonian Shale-Middle and Upper Paleozoic TPS is to the south and southwest, adding areas in Tennessee, Georgia, Alabama, and Mississippi where Devonian strata, including potential petroleum source rocks, are present in the subsurface up to the outcrop. The Middle to Upper Devonian Chattanooga Shale extends from southeastern Kentucky to Alabama and eastern Mississippi. Production from Devonian shale has been established in the Appalachian fold and thrust belt of northeastern Alabama. Exploratory drilling has encountered Middle to Upper Devonian strata containing organic-rich shale in west-central Alabama. The areas added to the TPS are located in the Valley and Ridge, Interior Low Plateaus, and Appalachian Plateaus physiographic provinces, including the portion of the Appalachian fold and thrust belt buried beneath Cretaceous and younger sediments that were deposited on the U.S. Gulf Coastal Plain.

  3. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were

  4. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
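
    A hedged numerical sketch of the first, deterministic approach (McGarr, 2014) is given below; the shear modulus and injected volume are illustrative values, and the moment-to-magnitude conversion uses the standard Hanks-Kanamori relation rather than anything specific to this record.

    import math

    def mcgarr_max_magnitude(injected_volume_m3, shear_modulus_pa=3.0e10):
        """Deterministic upper bound on induced-earthquake size (McGarr, 2014).

        Seismic moment is capped at shear modulus times net injected volume;
        the moment magnitude follows the Hanks-Kanamori relation.  The default
        shear modulus of 30 GPa is an assumed, typical crustal value.
        """
        m0_max = shear_modulus_pa * injected_volume_m3       # N*m
        mw_max = (2.0 / 3.0) * (math.log10(m0_max) - 9.1)
        return m0_max, mw_max

    # Illustrative example: 100,000 m^3 of net injected fluid
    m0, mw = mcgarr_max_magnitude(1.0e5)
    print(f"M0_max = {m0:.2e} N*m, Mw_max = {mw:.2f}")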

  5. Maximum organic carbon limits at different melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    This report documents the results of a study to assess the impact of varying melter feed rates on the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed. Topics discussed include: carbon content; feed rate; feed composition; melter vapor space temperature; combustion and dilution air; off-gas surges; earlier work on maximum TOC; overview of models; and the results of the work completed

  6. A tropospheric ozone maximum over the equatorial Southern Indian Ocean

    Directory of Open Access Journals (Sweden)

    L. Zhang

    2012-05-01

    Full Text Available We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Mapping Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamic factors. The O3 maximum is dominated by the O3 production driven by lightning nitrogen oxides (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. The O3 productions in the lightning outflow from Central Africa and South America both peak in May and are directly responsible for the O3 maximum over the western ESIO. The lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones followed by northward transport to the ESIO.

  7. Dinosaur Metabolism and the Allometry of Maximum Growth Rate

    OpenAIRE

    Myhrvold, Nathan P.

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data is reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth...

  8. MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY

    Directory of Open Access Journals (Sweden)

    B. Sizykh Grigory

    2017-01-01

    Full Text Available The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary, and only on the boundary, of the considered domain. This property is used when designing the shape of an aircraft with a maximum critical value of the Mach number: it is assumed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that the pressure is a function of density throughout the considered flow region. For an ideal and perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law holds), the pressure is a function of density if the entropy is constant in the entire considered flow region. An example is given of a stationary subsonic irrotational flow in which the entropy takes different values on different streamlines and the pressure is not a function of density. Applying the maximum principle for subsonic flow to such a flow would be unjustified. This example shows the relevance of the question of where the points of maximum velocity are located when the entropy is not constant. To clarify the regularities in the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed in the 3-D case. A new proof of the maximum principle for subsonic flow is proposed that does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow holds for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.

  9. On semidefinite programming relaxations of maximum k-section

    NARCIS (Netherlands)

    de Klerk, E.; Pasechnik, D.V.; Sotirov, R.; Dobre, C.

    2012-01-01

    We derive a new semidefinite programming bound for the maximum k-section problem. For k=2 (i.e. for maximum bisection), the new bound is at least as strong as a well-known bound by Poljak and Rendl (SIAM J Optim 5(3):467–487, 1995). For k ≥ 3 the new bound dominates a bound of Karisch and Rendl

  10. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is needed to obtain the maximum power from the limited solar panels. As the sun's illumination changes with the angle of incidence of solar radiation and with the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These MPPT techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to their degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires linguistic control rules for the maximum power point; no mathematical model is required, and therefore this control method is easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory where the hardware consists of the microchip's microcontroller unit control card and
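
    For comparison with the fuzzy controller, the perturbation and observation (hill climbing) baseline mentioned above can be sketched in a few lines of Python; the panel power curve here is a toy stand-in for real current/voltage measurements.

    def perturb_and_observe(measure_power, v_start=17.0, step=0.2, n_steps=200):
        """Minimal perturb-and-observe MPPT loop (hill climbing).

        `measure_power(v)` stands in for measuring panel power at operating
        voltage v; in a real system this comes from current/voltage sensors.
        """
        v = v_start
        p_prev = measure_power(v)
        direction = +1.0
        for _ in range(n_steps):
            v += direction * step
            p = measure_power(v)
            if p < p_prev:          # power dropped: reverse the perturbation
                direction = -direction
            p_prev = p
        return v

    # Toy panel curve with a single maximum near 17.5 V (purely illustrative)
    panel = lambda v: max(0.0, 100.0 - (v - 17.5) ** 2)
    print(round(perturb_and_observe(panel), 2))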

  11. Maximum spectral demands in the near-fault region

    Science.gov (United States)

    Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas

    2008-01-01

    The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.

  12. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 100 to 10⁶ km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.

  13. Investigation of nucleation events vertical extent: a long term study at two different altitude sites

    Directory of Open Access Journals (Sweden)

    J. Boulon

    2011-06-01

    Full Text Available In this work we present an analysis of the occurrence of nucleation events during more than three years of measurements at two different rural altitude sites, the puy de Dôme research station (1465 m a.s.l.) and the Opme station (660 m a.s.l.), central France. The collected database is a unique combination of scanning mobility particle sizer (10–400 nm), air ion spectrometer (from 0.8 to 42 nm for NTP-conditions), and neutral cluster and air ion spectrometer (from 0.8 to 42 nm for NTP-conditions) measurements at these two nearby research stations at different altitudes, from February 2007 to June 2010. The seasonality of the frequency of nucleation events was studied at the puy de Dôme station and the maximum event frequency was found during early spring and early autumn. During the measurement period, neither the particle formation rates (J2 = 1.382 ± 0.195 s−1) nor the growth rates (GR1.3−20 nm = 6.20 ± 0.12 nm h−1) differ from one site to the other on average. However, we found that, on 437 sampling days common to the two sites, the nucleation frequency was higher at the puy de Dôme station (35.9 %, 157 days) than at the low elevation station of Opme (20.8 %, 91 days). LIDAR measurements and the evolution of the potential equivalent temperature revealed that the nucleation could be triggered either (i) within the whole low tropospheric column at the same time, from the planetary boundary layer to the top of the interface layer (29.2 %, 47 events), (ii) above the planetary boundary layer upper limit (43.5 %, 70 events), or (iii) at low altitude and then transported, conserving its dynamics and properties, to high altitude (24.8 %, 40 events). This is the first time that the vertical extent of nucleation can be studied over a long observational period, allowing for a rigorous

  14. Theoretical assessment of the maximum power point tracking efficiency of photovoltaic facilities with different converter topologies

    Energy Technology Data Exchange (ETDEWEB)

    Enrique, J.M.; Duran, E.; Andujar, J.M. [Departamento de Ingenieria Electronica, de Sistemas Informaticos y Automatica, Universidad de Huelva (Spain)]; Sidrach-de-Cardona, M. [Departamento de Fisica Aplicada, II, Universidad de Malaga (Spain)]

    2007-01-15

    The operating point of a photovoltaic generator that is connected to a load is determined by the intersection point of its characteristic curves. In general, this point is not the same as the generator's maximum power point. This difference results in losses in system performance. DC/DC converters together with maximum power point tracking systems (MPPT) are used to avoid these losses. Different algorithms have been proposed for maximum power point tracking. Nevertheless, the choice of the right converter configuration has not been studied as widely, although this choice, as demonstrated in this work, has an important influence on the optimum performance of the photovoltaic system. In this article, we conduct a study of the three basic topologies of DC/DC converters with resistive load connected to photovoltaic modules. This article demonstrates that there is a limitation in the system's performance according to the type of converter used. Two fundamental conclusions are derived from this study: (1) the buck-boost DC/DC converter topology is the only one which allows tracking of the PV module's maximum power point regardless of temperature, irradiance and connected load and (2) the connection of a buck-boost DC/DC converter in a photovoltaic facility to the panel output could be a good practice to improve performance. (author)
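
    The first conclusion can be made concrete with the standard ideal, continuous-conduction-mode conversion ratios (a textbook sketch, not reproduced from the article): only the buck-boost stage can present an arbitrary effective input resistance to the module.

    % Ideal, lossless converters in continuous conduction mode, duty cycle D:
    %   buck:        V_o = D V_i            =>  R_in = R_L / D^2        (R_in >= R_L)
    %   boost:       V_o = V_i / (1 - D)    =>  R_in = R_L (1 - D)^2    (R_in <= R_L)
    %   buck-boost:  V_o = V_i D / (1 - D)  =>
    \[
      R_{\mathrm{in}} \;=\; R_L\,\frac{(1-D)^2}{D^2}, \qquad 0 < D < 1,
    \]
    % which can take any positive value, so a buck-boost stage can in principle
    % match the panel's maximum-power-point resistance for any load R_L.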

  15. A single European pharmaceutical market: Does maximum harmonization enhance medicinal product innovation?

    DEFF Research Database (Denmark)

    Faeh, Andrea Beata

    2013-01-01

    the orphan medicinal products scheme. The latter is subject to uniform Union rules specifically introduced to stimulate research and development and has led to the development of a number of new products. The article shows that the most radical positive integration depends to a large extent on the prospect...... – Innovation Union’ – market fragmentation to be one of the major causes of the lack of innovation. In order to establish if maximum harmonization benefits innovation, two distinct legal regimes in the pharmaceutical sector will be compared. The general rules for medicinal products are weighed against...... of it yielding revenue for the innovator. Hence, fuller harmonization can benefit innovation, but it is just as important, if not more important, to address other factors such as pricing, reimbursement and patent protection....

  16. 34 CFR 600.10 - Date, extent, duration, and consequence of eligibility.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false Date, extent, duration, and consequence of eligibility... EDUCATION ACT OF 1965, AS AMENDED General § 600.10 Date, extent, duration, and consequence of eligibility... statutory and regulatory requirements governing its eligibility. (e) Consequence of eligibility. (1) If, as...

  17. Prevalence, Vascular Distribution, and Multiterritorial Extent of Subclinical Atherosclerosis in a Middle-Aged Cohort

    DEFF Research Database (Denmark)

    Fernández-Friera, Leticia; Peñalvo, José L; Fernández-Ortiz, Antonio

    2015-01-01

    age, 45.8 years; 63% male) to evaluate the systemic extent of atherosclerosis in the carotid, abdominal aortic, and iliofemoral territories by 2-/3-dimensional ultrasound and coronary artery calcification by computed tomography. The extent of subclinical atherosclerosis, defined as presence of plaque...

  18. Extent of resection and timing of surgery in adult low grade glioma.

    Science.gov (United States)

    A Mirza, Farhan; Shamim, Muhammad Shahzad

    2017-06-01

    Low grade gliomas are a group of WHO grade II tumours including diffuse astrocytoma, oligodendroglioma, and oligoastrocytoma. Strong evidence now exists in the literature to support early surgery and a greater extent of safe resection to improve outcomes. In this review, we highlight some of the important studies of the last few years that specifically address timing of surgery and extent of resection.

  19. Effects of geographical extent on the determinants of woody plant diversity

    DEFF Research Database (Denmark)

    Wang, Zhiheng; Rahbek, Carsten; Fang, Jingyun

    2012-01-01

    the quantitative effects of geographical extent are rarely tested. Here, using distribution maps of 11,405 woody species found in China and associated environmental data to the domain, we investigated the influence of geographical extent on the determinants of species richness patterns. Our results revealed...

  20. The spatial extent of polycyclic aromatic hydrocarbons emission in the Herbig star HD 179218

    Science.gov (United States)

    Taha, A. S.; Labadie, L.; Pantin, E.; Matter, A.; Alvarez, C.; Esquej, P.; Grellmann, R.; Rebolo, R.; Telesco, C.; Wolf, S.

    2018-04-01

    Aim. We investigate, in the mid-infrared, the spatial properties of the polycyclic aromatic hydrocarbons (PAHs) emission in the disk of HD 179218, an intermediate-mass Herbig star at 300 pc. Methods: We obtained mid-infrared images in the PAH-1, PAH-2 and Si-6 filters centered at 8.6, 11.3, and 12.5 μm, and N-band low-resolution spectra using CanariCam on the 10-m Gran Telescopio Canarias (GTC). We compared the point spread function (PSF) profiles measured in the PAH filters to the profile derived in the Si-6 filter, where the thermal continuum emission dominates. We performed radiative transfer modeling of the spectral energy distribution (SED) and produced synthetic images in the three filters to investigate different spatial scenarios. Results: Our data show that the disk emission is spatially resolved in the PAH-1 and PAH-2 filters, while unresolved in the Si-6 filter. Thanks to very good observing conditions, an average full width at half maximum (FWHM) of 0.232'', 0.280'' and 0.293'' is measured in the three filters, respectively. Gaussian disk fitting and quadratic subtraction of the science and calibrator PSFs suggests a lower-limit characteristic angular diameter of the emission of 100 mas, or 30 au. The photometric and spectroscopic results are compatible with previous findings. Our radiative transfer (RT) modeling of the continuum suggests that the resolved emission should result from PAH molecules on the disk atmosphere being UV-excited by the central star. Simple geometrical models of the PAH component compared to the underlying continuum point to a PAH emission uniformly extended out to the physical limits of the disk model. Furthermore, our RT best model of the continuum requires a negative exponent of the surface density power-law, in contrast with earlier modeling pointing to a positive exponent. Conclusions: We have spatially resolved - for the first time to our knowledge - the PAHs emission in the disk of HD 179218 and set constraints on its
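
    The quadratic subtraction step mentioned above follows the usual Gaussian approximation; written out (with symbols assumed here, and noting that the calibrator FWHMs themselves are not quoted in the abstract):

    % Quadratic (Gaussian) subtraction used to estimate a marginally resolved
    % source size; FWHM_sci is measured on the science target and FWHM_cal on
    % the unresolved PSF calibrator.
    \[
      \theta_{\mathrm{src}} \;\approx\; \sqrt{\mathrm{FWHM}_{\mathrm{sci}}^{2} - \mathrm{FWHM}_{\mathrm{cal}}^{2}}
    \]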

  1. Seasonal regional forecast of the minimum sea ice extent in the Laptev Sea

    Science.gov (United States)

    Tremblay, B.; Brunette, C.; Newton, R.

    2017-12-01

    Late winter anomaly of sea ice export from the peripheral seas of the Arctic Ocean was found to be a useful predictor for the minimum sea ice extent (SIE) in the Arctic Ocean (Williams et al., 2017). In the following, we present a proof of concept for a regional seasonal forecast of the min SIE for the Laptev Sea based on late winter coastal divergence quantified using a Lagrangian Ice Tracking System (LITS) forced with satellite-derived sea-ice drifts from the Polar Pathfinder. Following Nikolaeva and Sesterikov (1970), we track an imaginary line just offshore of coastal polynyas in the Laptev Sea from December of the previous year to May 1 of the following year using LITS. Results show that coastal divergence in the Laptev Sea between February 1st and May 1st is best correlated (r = -0.61) with the following September minimum SIE, in accord with previous results from Krumpen et al. (2013, for the Laptev Sea) and Williams et al. (2017, for the pan-Arctic). This gives a maximum seasonal predictability of Laptev Sea min SIE anomalies from observations of approximately 40%. Coastal ice divergence leads to formation of thinner ice that melts earlier in early summer, hence creating areas of open water that have a lower albedo and trigger an ice-albedo feedback. In the Laptev Sea, we find that anomalies of coastal divergence in late winter are amplified threefold to result in the September SIE. We also find a correlation coefficient r = 0.49 between February-March-April (FMA) anomalies of coastal divergence and the FMA-averaged AO index. Interestingly, the correlation is stronger, r = 0.61, when comparing the FMA coastal divergence anomalies to the DJFMA-averaged AO index. It is hypothesized that the AO index at the beginning of the winter (and the associated anomalous sea ice export) also contains information that impacts the magnitude of coastal divergence opening later in the winter. Our approach differs from previous approaches (e.g. Krumpen et al and Williams et al
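
    The quoted ~40% predictability follows directly from the reported correlation of r = -0.61, since the explained variance is r squared. A minimal sketch of that relationship, using hypothetical data rather than the study's LITS-derived series:

        # Hedged illustration, not the authors' code: relate a correlation between
        # late-winter coastal divergence anomalies and September minimum sea-ice
        # extent (SIE) to the "explained variance" quoted above. Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        coastal_divergence = rng.normal(size=30)             # FMA divergence anomalies (arbitrary units)
        september_sie = -0.6 * coastal_divergence + rng.normal(scale=0.8, size=30)

        r = np.corrcoef(coastal_divergence, september_sie)[0, 1]
        print(f"correlation r = {r:.2f}, explained variance r^2 = {r**2:.0%}")
        # An observed r of about -0.61 corresponds to r^2 of roughly 0.37, i.e. the
        # approximately 40% seasonal predictability cited for the Laptev Sea min SIE.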

  2. Port practices

    OpenAIRE

    Grigorut Cornel; Anechitoae Constantin; Grigorut Lavinia-Maria

    2011-01-01

    Commercial practices are practices or rules applicable to contractual relations between participants in international trade activities. Commercial practices involve a specific objective element, namely a particular practice, attitude or behavior. They are characterized by continuity, consistency and uniformity, and they require duration, repeatability and stability. Depending on how many partners apply them, practices differ from the habits established between certain contracting parties.

  3. The Speed and Extent of New Venture Internationalisation in the Emerging Economy Context

    Directory of Open Access Journals (Sweden)

    Rūta Kazlauskaitė

    2015-06-01

    Full Text Available The objective of this paper is to explore to what extent the patterns of the internationalisation process described in the new venture (NV) internationalisation theory, developed on the experience and practice of advanced economy firms, apply to the emerging economy context. The paper is a systematic literature review developed on the basis of peer-reviewed journal articles on NV internationalisation in emerging economies. It critically evaluates the applicability of arguments proposed by the NV internationalisation theory to the emerging economy context. The paper contributes to the NV internationalisation theory by offering some propositions on the specifics of international entrepreneurship in the emerging economy context. Findings: In contrast to firms from advanced economies, internationalisation of NVs from emerging economies is mainly driven by push factors related to their domestic markets. Transportation, communication and digital technology play a less relevant role in emerging economies; besides, their significance is more context specific, while their absence does not inhibit rapid internationalisation. To better understand the process of NV internationalisation in the emerging economy context, it is necessary to study to what extent other theoretical logics contribute to its explication. Further research should also seek to synthesise findings of research on other major theoretical frameworks.

  4. Extent, Awareness and Perception of Dissemination Bias in Qualitative Research: An Explorative Survey.

    Science.gov (United States)

    Toews, Ingrid; Glenton, Claire; Lewin, Simon; Berg, Rigmor C; Noyes, Jane; Booth, Andrew; Marusic, Ana; Malicki, Mario; Munthe-Kaas, Heather M; Meerpohl, Joerg J

    2016-01-01

    Qualitative research findings are increasingly used to inform decision-making. Research has indicated that not all quantitative research on the effects of interventions is disseminated or published. The extent to which qualitative researchers also systematically underreport or fail to publish certain types of research findings, and the impact this may have, has received little attention. A survey was delivered online to gather data regarding non-dissemination and dissemination bias in qualitative research. We invited relevant stakeholders through our professional networks, authors of qualitative research identified through a systematic literature search, and further via snowball sampling. 1032 people took part in the survey of whom 859 participants identified as researchers, 133 as editors and 682 as peer reviewers. 68.1% of the researchers said that they had conducted at least one qualitative study that they had not published in a peer-reviewed journal. The main reasons for non-dissemination were that a publication was still intended (35.7%), resource constraints (35.4%), and that the authors gave up after the paper was rejected by one or more journals (32.5%). A majority of the editors and peer reviewers "(strongly) agreed" that the main reasons for rejecting a manuscript of a qualitative study were inadequate study quality (59.5%; 68.5%) and inadequate reporting quality (59.1%; 57.5%). Of 800 respondents, 83.1% "(strongly) agreed" that non-dissemination and possible resulting dissemination bias might undermine the willingness of funders to support qualitative research. 72.6% and 71.2%, respectively, "(strongly) agreed" that non-dissemination might lead to inappropriate health policy and health care. The proportion of non-dissemination in qualitative research is substantial. Researchers, editors and peer reviewers play an important role in this. Non-dissemination and resulting dissemination bias may impact on health care research, practice and policy. More

  5. To what extent does IQ 'explain' socio-economic variations in function?

    Directory of Open Access Journals (Sweden)

    van Eijk Jacques

    2007-07-01

    Full Text Available Background: The aims of this study were to examine the extent to which higher intellectual abilities protect higher socio-economic groups from functional decline and to examine whether the contribution of intellectual abilities is independent of childhood deprivation and low birth weight and other socio-economic and developmental factors in early life. Methods: The Maastricht Aging Study (MAAS) is a prospective cohort study based upon participants in a registration network of general practices in The Netherlands. Information was available on 1211 men and women, 24–81 years old, who were without cognitive impairment at baseline (1993–1995), who ever had a paid job, and who participated in the six-year follow-up. Main outcomes were longitudinal decline in important components of quality of life and successful aging, i.e., self-reported physical, affective, and cognitive functioning. Results: Persons with a low occupational level at baseline showed more functional decline than persons with a high occupational level. Socio-economic and developmental factors from early life hardly contributed to the adult socio-economic differences in functional decline. Intellectual abilities, however, accounted for more than one third of the association between adult socio-economic status and functional decline. The contribution of the intellectual abilities was independent of the early life factors. Conclusion: Rather than developmental and socio-economic characteristics of early life, the findings substantiate the importance of intellectual abilities for functional decline and their contribution – as potential, but neglected confounders – to socio-economic differences in functioning, successful aging, and quality of life. The higher intellectual abilities in the higher socio-economic status groups may also underlie the higher prevalences of mastery, self-efficacy and efficient coping styles in these groups.

  6. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41 to 76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and estimating past cabin temperatures for use in forensic analyses.
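
    The first of the two predictive models described above is essentially a regression of maximum cabin temperature on maximum ambient air temperature and average daily solar radiation. A minimal sketch of such a fit, with made-up observations rather than the Athens, GA dataset:

        # Hedged sketch of a cabin-temperature model of the kind described above:
        # ordinary least squares of maximum cabin temperature on maximum ambient
        # air temperature and average daily solar radiation. All numbers are
        # hypothetical and are not taken from the study.
        import numpy as np

        t_air_max = np.array([24.0, 28.0, 31.0, 33.0, 35.0])       # deg C
        solar_rad = np.array([180.0, 240.0, 300.0, 320.0, 350.0])  # W/m^2, daily average
        t_cabin_max = np.array([48.0, 56.0, 64.0, 68.0, 73.0])     # deg C

        # fit t_cabin_max ~ b0 + b1 * t_air_max + b2 * solar_rad
        X = np.column_stack([np.ones_like(t_air_max), t_air_max, solar_rad])
        beta, *_ = np.linalg.lstsq(X, t_cabin_max, rcond=None)
        print("fitted coefficients:", np.round(beta, 3))
        print("predicted cabin maximum for 30 deg C and 280 W/m^2:",
              round(beta @ np.array([1.0, 30.0, 280.0]), 1), "deg C")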

  7. Fractal Dimension and Maximum Sunspot Number in Solar Cycle

    Directory of Open Access Journals (Sweden)

    R.-S. Kim

    2006-09-01

    Full Text Available The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data analysis Center) and then estimated the cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. By using this relation we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the solar activity increasing phase. The successful prediction is supported by a good correlation (r = 0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
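
    Higuchi's method, referenced above, estimates a fractal dimension from how the mean curve length of a series scales with the sampling interval k. A minimal sketch of a standard implementation on a synthetic series (not the authors' code; the regression against cycle maxima is not reproduced):

        # Hedged sketch of Higuchi's method for the fractal dimension of a time
        # series, as used above for daily sunspot numbers. The input here is a
        # synthetic random walk, for which the expected dimension is about 1.5.
        import numpy as np

        def higuchi_fd(x, k_max=10):
            """Estimate the fractal dimension of a 1-D series x by Higuchi's method."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            ks, lengths = [], []
            for k in range(1, k_max + 1):
                lk = 0.0
                for m in range(k):
                    idx = np.arange(m, n, k)
                    if len(idx) < 2:
                        continue
                    # normalized curve length of the subseries x[m], x[m+k], ...
                    lm = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((len(idx) - 1) * k)
                    lk += lm / k
                ks.append(k)
                lengths.append(lk / k)          # average over the k offsets m
            # L(k) ~ k**(-D), so the slope of log L(k) vs log(1/k) estimates D
            slope, _ = np.polyfit(np.log(1.0 / np.array(ks)), np.log(lengths), 1)
            return slope

        signal = np.cumsum(np.random.default_rng(1).normal(size=5000))  # stand-in series
        print("Higuchi fractal dimension:", round(higuchi_fd(signal), 2))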

  8. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.; Ito, N.

    2013-01-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences / Società Italiana di Fisica / Springer-Verlag 2013.

  10. How long do centenarians survive? Life expectancy and maximum lifespan.

    Science.gov (United States)

    Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A

    2017-08-01

    The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  11. A Modified Levenberg-Marquardt Method for Nonsmooth Equations with Finitely Many Maximum Functions

    Directory of Open Access Journals (Sweden)

    Shou-qiang Du

    2008-01-01

    Full Text Available For solving nonsmooth systems of equations, the Levenberg-Marquardt method and its variants are of particular importance because of their locally fast convergence rates. Systems of finitely many maximum functions are very useful in the study of nonlinear complementarity problems, variational inequality problems, Karush-Kuhn-Tucker systems of nonlinear programming problems, and many problems in mechanics and engineering. In this paper, we present a modified Levenberg-Marquardt method for nonsmooth equations with finitely many maximum functions. Under mild assumptions, the present method is shown to be Q-linearly convergent. Some numerical results comparing the proposed method with classical reformulations indicate that the modified Levenberg-Marquardt algorithm works quite well in practice.

  12. Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model

    Science.gov (United States)

    Yang, Yuefang; Gan, Chunhui; Shen, Tingting

    2017-05-01

    To study the configuration of self-equipped tankers in a chemical logistics park, a minimum cost maximum flow model is adopted. Firstly, the transport capacity of the park's loading and unloading areas and the transportation demand for dangerous goods are taken as the constraint conditions of the model; then the capacity, flow and edge weight of each transport arc are determined in the transportation network diagram; finally, the model is solved by software. The results show that the tanker configuration problem can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical value for the management of tankers in railway transportation of dangerous goods in the chemical logistics park.
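
    A toy illustration of such a minimum-cost maximum-flow formulation is sketched below: arcs carry a capacity (handling capability) and a cost (edge weight), and the solver routes tanker flow at least cost. Node names, capacities and costs are hypothetical, not taken from the study.

        # Hedged toy example of a minimum-cost maximum-flow network for allocating
        # tanker movements between loading and unloading areas of a logistics park.
        import networkx as nx

        G = nx.DiGraph()
        # source -> loading areas (capacity = tankers/day the area can handle, weight = cost)
        G.add_edge("source", "loading_A", capacity=8, weight=2)
        G.add_edge("source", "loading_B", capacity=6, weight=3)
        # loading areas -> unloading areas inside the park
        G.add_edge("loading_A", "unloading_1", capacity=5, weight=1)
        G.add_edge("loading_A", "unloading_2", capacity=4, weight=2)
        G.add_edge("loading_B", "unloading_2", capacity=6, weight=1)
        # unloading areas -> sink (demand for dangerous-goods shipments)
        G.add_edge("unloading_1", "sink", capacity=5, weight=0)
        G.add_edge("unloading_2", "sink", capacity=7, weight=0)

        flow_dict = nx.max_flow_min_cost(G, "source", "sink")
        print("tanker flows:", flow_dict)
        print("total cost:", nx.cost_of_flow(G, flow_dict))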

  13. Modeling multisite streamflow dependence with maximum entropy copula

    Science.gov (United States)

    Hao, Z.; Singh, V. P.

    2013-10-01

    Synthetic streamflows at different sites in a river basin are needed for planning, operation, and management of water resources projects. Modeling the temporal and spatial dependence structure of monthly streamflow at different sites is generally required. In this study, the maximum entropy copula method is proposed for multisite monthly streamflow simulation, in which the temporal and spatial dependence structure is imposed as constraints to derive the maximum entropy copula. The monthly streamflows at different sites are then generated by sampling from the conditional distribution. A case study for the generation of monthly streamflow at three sites in the Colorado River basin illustrates the application of the proposed method. Simulated streamflow from the maximum entropy copula is in satisfactory agreement with observed streamflow.

  14. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  15. Mass mortality of the vermetid gastropod Ceraesignum maximum

    Science.gov (United States)

    Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.

    2016-09-01

    Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.

  16. Stationary neutrino radiation transport by maximum entropy closure

    International Nuclear Information System (INIS)

    Bludman, S.A.

    1994-11-01

    The authors obtain the angular distributions that maximize the entropy functional for Maxwell-Boltzmann (classical), Bose-Einstein, and Fermi-Dirac radiation. In the low and high occupancy limits, the maximum entropy closure is bounded by previously known variable Eddington factors that depend only on the flux. For intermediate occupancy, the maximum entropy closure depends on both the occupation density and the flux. The Fermi-Dirac maximum entropy variable Eddington factor shows a scale invariance, which leads to a simple, exact analytic closure for fermions. This two-dimensional variable Eddington factor gives results that agree well with exact (Monte Carlo) neutrino transport calculations out of a collapse residue during early phases of hydrostatic neutron star formation

  17. Predicting the distribution of the Asian tapir in Peninsular Malaysia using maximum entropy modeling.

    Science.gov (United States)

    Clements, Gopalasamy Reuben; Rayan, D Mark; Aziz, Sheema Abdul; Kawanishi, Kae; Traeholt, Carl; Magintan, David; Yazi, Muhammad Fadlli Abdul; Tingley, Reid

    2012-12-01

    In 2008, the IUCN threat status of the Asian tapir (Tapirus indicus) was reclassified from 'vulnerable' to 'endangered'. The latest distribution map from the IUCN Red List suggests that the tapirs' native range is becoming increasingly fragmented in Peninsular Malaysia, but distribution data collected by local researchers suggest a more extensive geographical range. Here, we compile a database of 1261 tapir occurrence records within Peninsular Malaysia, and demonstrate that this species, indeed, has a much broader geographical range than the IUCN range map suggests. However, extreme spatial and temporal bias in these records limits their utility for conservation planning. Therefore, we used maximum entropy (MaxEnt) modeling to elucidate the potential extent of the Asian tapir's occurrence in Peninsular Malaysia while accounting for bias in existing distribution data. Our MaxEnt model predicted that the Asian tapir has a wider geographic range than our fine-scale data and the IUCN range map both suggest. Approximately 37% of Peninsular Malaysia contains potentially suitable tapir habitats. Our results justify a revision to the Asian tapir's extent of occurrence in the IUCN Red List. Furthermore, our modeling demonstrated that selectively logged forests encompass 45% of potentially suitable tapir habitats, underscoring the importance of these habitats for the conservation of this species in Peninsular Malaysia. © 2012 Wiley Publishing Asia Pty Ltd, ISZS and IOZ/CAS.

  18. Results of quantitative myocardial scintigraphy with Thallium-201 at rest and after maximum exercise

    International Nuclear Information System (INIS)

    Schicha, H.; Rentrop, P.; Facorro, L.; Karsch, K.R.; Blanke, H.; Kreuzer, H.; Emrich, D.; Goettingen Univ.

    1980-01-01

    In 20 normal individuals and 60 patients with CAD, myocardial scintigraphy with thallium-201 was performed after maximum exercise and two hours later at rest. The evaluation of digitized scintigrams was performed quantitatively by means of a 14-halfsegment model. At a specificity of 90%, sensitivity of scintigraphy for CAD was 97% in 34 patients with previous myocardial infarction and 85% in 26 patients without infarction. Sensitivity for the extent of CAD was 93% for 44 vessels perfusing infarcted myocardium and 67% for 96 vessels perfusing non-infarcted myocardium. Sensitivity decreased with increasing extent of CAD and was higher for Cx than for LAD. The predictive value of a positive or negative scintigram was analyzed for different prevalences of CAD. At a low prevalence, e.g. 5%, the predictive value of a pathological scintigram is only 32%; consequently, thallium scintigraphy is not applicable as a general screening procedure. At a high prevalence, e.g. 90%, the predictive value of a normal scintigram is only 40%. Therefore thallium scintigraphy seems not to be able to differentiate whether a coronary artery stenosis is hemodynamically significant or not. This was in agreement with the data from exercise cineventriculography. A high predictive value of thallium scintigraphy of about 85% is obtained only in the case of a medium prevalence of CAD, e.g. in asymptomatic patients with pathological ECG or in patients with atypical angina pectoris. (orig.) [de
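
    The prevalence dependence described above follows from Bayes' rule. A short check, using the abstract's sensitivity of 85% (patients without infarction) and specificity of 90%, reproduces the quoted predictive values of roughly 32% and 40%:

        # Hedged worked example: positive and negative predictive values as a
        # function of disease prevalence, with sensitivity/specificity taken from
        # the abstract (85% sensitivity without infarction, 90% specificity).
        def ppv(sens, spec, prev):
            return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

        def npv(sens, spec, prev):
            return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

        sens, spec = 0.85, 0.90
        print("PPV at 5%% prevalence: %.0f%%" % (100 * ppv(sens, spec, 0.05)))   # ~31%
        print("NPV at 90%% prevalence: %.0f%%" % (100 * npv(sens, spec, 0.90)))  # ~40%
        print("PPV at 50%% prevalence: %.0f%%" % (100 * ppv(sens, spec, 0.50)))  # ~89%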

  19. Spatio-temporal observations of the tertiary ozone maximum

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2009-07-01

    Full Text Available We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd-hydrogen cause the subsequent decrease in odd-oxygen losses – models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, spatial and temporal observational distributions of the night-time ozone mixing ratio in the mesosphere to be obtained.

    The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to a long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can be observed also at very high latitudes, not only in the beginning and at the end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.

    Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.

  20. Estimating the maximum potential revenue for grid connected electricity storage :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
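
    As a rough illustration of the arbitrage-only calculation described above, the revenue-maximizing charge/discharge schedule can be written as a small linear program over hourly prices, subject to state-of-charge limits. The sketch below uses made-up prices and device parameters and omits the regulation market entirely:

        # Hedged sketch of an arbitrage-only revenue calculation: a linear program
        # choosing hourly charge/discharge so that revenue (sales minus purchases)
        # is maximized under power and state-of-charge limits. All numbers are
        # hypothetical; this is not the paper's model of the Tehachapi project.
        import numpy as np
        from scipy.optimize import linprog

        prices = np.array([20.0, 15.0, 18.0, 40.0, 55.0, 30.0])  # $/MWh, hypothetical
        T = len(prices)
        power_max, energy_max, eta = 1.0, 4.0, 0.9                # MW, MWh, round-trip efficiency

        # decision variables: x = [charge_1..T, discharge_1..T], MWh per hour
        c = np.concatenate([prices, -prices])                     # minimize purchases - sales
        bounds = [(0, power_max)] * (2 * T)

        # state of charge after hour t: soc_t = sum_{s<=t} (eta*charge_s - discharge_s)
        lower_tri = np.tril(np.ones((T, T)))
        A_soc = np.hstack([eta * lower_tri, -lower_tri])
        A_ub = np.vstack([A_soc, -A_soc])                         # 0 <= soc_t <= energy_max
        b_ub = np.concatenate([np.full(T, energy_max), np.zeros(T)])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        print("maximum arbitrage revenue: $", round(-res.fun, 2))
        print("charge schedule:   ", np.round(res.x[:T], 2))
        print("discharge schedule:", np.round(res.x[T:], 2))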

  1. Clinical outcomes from maximum-safe resection of primary and metastatic brain tumors using awake craniotomy.

    Science.gov (United States)

    Groshev, Anastasia; Padalia, Devang; Patel, Sephalie; Garcia-Getting, Rosemarie; Sahebjam, Solmaz; Forsyth, Peter A; Vrionis, Frank D; Etame, Arnold B

    2017-06-01

    To retrospectively analyze outcomes in patients undergoing awake craniotomies for tumor resection at our institution in terms of extent of resection, functional preservation and length of hospital stay. All cases of adults undergoing awake craniotomy from September 2012 to February 2015 were retrospectively reviewed based on an IRB-approved protocol. Information regarding patient age, sex, cancer type, procedure type, location, hospital stay, extent of resection, and postoperative complications was extracted. 76 patient charts were analyzed. Resected cancer types included metastasis to the brain (41%), glioblastoma (34%), WHO grade III anaplastic astrocytoma (18%), WHO grade II glioma (4%), WHO grade I glioma (1%), and meningioma (1%). Over half of the procedures were performed in the frontal lobes, followed by temporal and occipital locations. The most common indication was for motor cortex and primary somatosensory area lesions, followed by speech. Extent of resection was gross total for 59% of patients, near-gross total for 34%, and subtotal for 7%. Average hospital stay for the cohort was 1.7 days, with 75% of patients staying at the hospital for only 24 h or less post surgery. In the postoperative period, 67% of patients experienced improvement in neurological status, 21% of patients experienced no change, 7% experienced transient neurological deficits, which resolved within two months post op, 1% experienced a transient speech deficit, and 3% experienced permanent weakness. In a consecutive series of 76 patients undergoing maximum-safe resection for primary and metastatic brain tumors, awake craniotomy was associated with a short hospital stay and a low postoperative complication rate. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Discontinuity of maximum entropy inference and quantum phase transitions

    International Nuclear Information System (INIS)

    Chen, Jianxin; Ji, Zhengfeng; Yu, Nengkun; Zeng, Bei; Li, Chi-Kwong; Poon, Yiu-Tung; Shen, Yi; Zhou, Duanlu

    2015-01-01

    In this paper, we discuss the connection between two genuinely quantum phenomena—the discontinuity of quantum maximum entropy inference and quantum phase transitions at zero temperature. It is shown that the discontinuity of the maximum entropy inference of local observable measurements signals the non-local type of transitions, where local density matrices of the ground state change smoothly at the transition point. We then propose to use the quantum conditional mutual information of the ground state as an indicator to detect the discontinuity and the non-local type of quantum phase transitions in the thermodynamic limit. (paper)

  3. On an Objective Basis for the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    David J. Miller

    2015-01-01

    Full Text Available In this letter, we elaborate on some of the issues raised by a recent paper by Neapolitan and Jiang concerning the maximum entropy (ME principle and alternative principles for estimating probabilities consistent with known, measured constraint information. We argue that the ME solution for the “problematic” example introduced by Neapolitan and Jiang has stronger objective basis, rooted in results from information theory, than their alternative proposed solution. We also raise some technical concerns about the Bayesian analysis in their work, which was used to independently support their alternative to the ME solution. The letter concludes by noting some open problems involving maximum entropy statistical inference.

  4. The maximum economic depth of groundwater abstraction for irrigation

    Science.gov (United States)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to be still economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. Most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of
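
    The study's cost-revenue comparison can be illustrated with a very simple break-even calculation: the maximum economic depth is the largest lift at which annual crop revenue still covers annualized drilling cost plus the energy cost of pumping. The sketch below uses entirely hypothetical parameter values, not those of the global model described above:

        # Hedged sketch of the economic comparison: find the largest pumping depth
        # at which annual irrigation revenue still exceeds annualized drilling cost
        # plus the energy cost of lifting the water. All parameters are hypothetical.
        import numpy as np

        g, rho = 9.81, 1000.0                  # m/s^2, kg/m^3
        volume = 5.0e5                         # m^3 of irrigation water per year
        energy_price = 0.10                    # $/kWh
        pump_eff = 0.5                         # overall pump efficiency
        drill_cost_per_m = 100.0               # $ per metre of well, annualized over 20 yr
        annual_revenue = 60_000.0              # $ from the irrigated crop

        def annual_cost(depth_m):
            energy_kwh = rho * g * depth_m * volume / pump_eff / 3.6e6  # J -> kWh
            return energy_kwh * energy_price + drill_cost_per_m * depth_m / 20.0

        depths = np.arange(10, 1000, 10)
        mask = np.array([annual_cost(d) < annual_revenue for d in depths])
        economic = depths[mask]
        print("maximum economic depth ~", economic.max() if economic.size else 0, "m")
        # with these toy numbers the break-even depth is a few hundred metres,
        # of the same order as the 50-500 m range reported above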

  5. Efficiency of autonomous soft nanomachines at maximum power.

    Science.gov (United States)

    Seifert, Udo

    2011-01-14

    We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.

  6. A comparison of methods of predicting maximum oxygen uptake.

    OpenAIRE

    Grant, S; Corbett, K; Amjad, A M; Wilson, J; Aitchison, T

    1995-01-01

    The aim of this study was to compare the results from a Cooper walk run test, a multistage shuttle run test, and a submaximal cycle test with the direct measurement of maximum oxygen uptake on a treadmill. Three predictive tests of maximum oxygen uptake--linear extrapolation of heart rate of VO2 collected from a submaximal cycle ergometer test (predicted L/E), the Cooper 12 min walk, run test, and a multi-stage progressive shuttle run test (MST)--were performed by 22 young healthy males (mean...

  7. Maximum length scale in density based topology optimization

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Wang, Fengwen

    2017-01-01

    The focus of this work is on two new techniques for imposing maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for broader range...... of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on combination of several filters and builds on top of the classical density filtering which can be viewed as a low pass filter applied to the design parametrization. The main idea...

  8. A Maximum Entropy Method for a Robust Portfolio Problem

    Directory of Open Access Journals (Sweden)

    Yingying Xu

    2014-06-01

    Full Text Available We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of the asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.

  9. Capturing heterogeneity: The role of a study area's extent for estimating mean throughfall

    Science.gov (United States)

    Zimmermann, Alexander; Voss, Sebastian; Metzger, Johanna Clara; Hildebrandt, Anke; Zimmermann, Beate

    2016-11-01

    The selection of an appropriate spatial extent of a sampling plot is one among several important decisions involved in planning a throughfall sampling scheme. In fact, the choice of the extent may determine whether or not a study can adequately characterize the hydrological fluxes of the studied ecosystem. Previous attempts to optimize throughfall sampling schemes focused on the selection of an appropriate sample size, support, and sampling design, while comparatively little attention has been given to the role of the extent. In this contribution, we investigated the influence of the extent on the representativeness of mean throughfall estimates for three forest ecosystems of varying stand structure. Our study is based on virtual sampling of simulated throughfall fields. We derived these fields from throughfall data sampled in a simply structured forest (young tropical forest) and two heterogeneous forests (old tropical forest, unmanaged mixed European beech forest). We then sampled the simulated throughfall fields with three common extents and various sample sizes for a range of events and for accumulated data. Our findings suggest that the size of the study area should be carefully adapted to the complexity of the system under study and to the required temporal resolution of the throughfall data (i.e. event-based versus accumulated). Generally, event-based sampling in complex structured forests (conditions that favor comparatively long autocorrelations in throughfall) requires the largest extents. For event-based sampling, the choice of an appropriate extent can be as important as using an adequate sample size.

  10. Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays

    International Nuclear Information System (INIS)

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition with Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contamination on qualitative or semi-quantitative bases. The aim was to use the observed effects of two ecotoxicological assays for estimating the extent of a Benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with Benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations though the global pattern is close to it. This suggests MaxEnt is a valuable method to build a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements, in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand. - Highlights: ► Ecotoxicological assays show significant benefits for detecting on-site contamination. ► MaxEnt to rebuild qualitative link on concentration and ecotoxicological assays. ► MaxEnt shows similar pattern when compared with concentrations map of groundwater. ► MaxEnt is a valuable method especially when quantitative relation is not at hand. - A Maximum Entropy method to rebuild qualitative relationships between Benzene groundwater concentrations and their ecotoxicological effect.

  11. A Fast Algorithm for Maximum Likelihood Estimation of Harmonic Chirp Parameters

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The analysis of (approximately) periodic signals is an important element in numerous applications. One generalization of standard periodic signals often occurring in practice are harmonic chirp signals where the instantaneous frequency increases/decreases linearly as a function of time... A statistically efficient estimator for extracting the parameters of the harmonic chirp model in additive white Gaussian noise is the maximum likelihood (ML) estimator which recently has been demonstrated to be robust to noise and accurate --- even when the model order is unknown. The main drawback of the ML...
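
    In white Gaussian noise, maximizing the likelihood of a harmonic chirp model amounts to a nonlinear least-squares fit: choose the fundamental frequency and chirp rate whose harmonic chirp basis captures the most signal energy. The sketch below is a brute-force grid search on a toy signal; it is not the fast algorithm proposed in the paper:

        # Hedged sketch: ML estimation of a harmonic chirp's fundamental frequency
        # and chirp rate via exhaustive grid search (white-noise assumption makes
        # ML equivalent to least squares). Signal, grid and parameters are toy.
        import numpy as np

        fs, dur, n_harm = 8000, 0.05, 3
        t = np.arange(int(fs * dur)) / fs
        f0_true, alpha_true = 200.0, 800.0                      # Hz, Hz/s
        phase = 2 * np.pi * (f0_true * t + 0.5 * alpha_true * t**2)
        x = sum(np.cos(l * phase) / l for l in range(1, n_harm + 1))
        x = x + 0.1 * np.random.default_rng(2).normal(size=t.size)

        def fit_energy(f0, alpha):
            ph = 2 * np.pi * (f0 * t + 0.5 * alpha * t**2)
            Z = np.column_stack([np.cos(l * ph) for l in range(1, n_harm + 1)] +
                                [np.sin(l * ph) for l in range(1, n_harm + 1)])
            coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
            return np.linalg.norm(Z @ coef) ** 2                # energy captured by the model

        f0_grid = np.arange(190.0, 210.0, 0.5)
        alpha_grid = np.arange(0.0, 1600.0, 100.0)
        best = max(((fit_energy(f, a), f, a) for f in f0_grid for a in alpha_grid))
        print("estimated f0 = %.1f Hz, chirp rate = %.0f Hz/s" % (best[1], best[2]))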

  12. Spectral density analysis of time correlation functions in lattice QCD using the maximum entropy method

    International Nuclear Information System (INIS)

    Fiebig, H. Rudolf

    2002-01-01

    We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss the practical issues of the approach

  13. The Maximum Entropy Production Principle: Its Theoretical Foundations and Applications to the Earth System

    Directory of Open Access Journals (Sweden)

    Axel Kleidon

    2010-03-01

    Full Text Available The Maximum Entropy Production (MEP) principle has been remarkably successful in producing accurate predictions for non-equilibrium states. We argue that this is because the MEP principle is an effective inference procedure that produces the best predictions from the available information. Since all Earth system processes are subject to the conservation of energy, mass and momentum, we argue that in practical terms the MEP principle should be applied to Earth system processes in terms of the already established framework of non-equilibrium thermodynamics, with the assumption of local thermodynamic equilibrium at the appropriate scales.

  14. Maximum repulsed magnetization of a bulk superconductor with low pulsed field

    International Nuclear Information System (INIS)

    Tsuchimoto, M.; Kamijo, H.; Fujimoto, H.

    2005-01-01

    Pulsed field magnetization of a bulk high-Tc superconductor (HTS) is an important technique, especially for practical applications of a bulk superconducting magnet. Full magnetization is not obtained with a low pulsed field, and the trapped field is decreased by reversed current in the HTS. The trapped field distribution produced by repulsed magnetization was previously reported in experiments with temperature control. In this study, the repulsed magnetization technique with a low pulsed field is numerically analyzed under the assumption of a variable shielding current controlled by temperature. The shielding current densities required to obtain the maximum trapped field with two successive low pulsed field magnetizations are discussed.

  15. MAXIMUM RUNOFF OF THE FLOOD ON WADIS OF NORTHERN ...

    African Journals Online (AJOL)

    lanez

    The technique for calculating the maximum flood runoff of the rivers of the northern part of Algeria is based on the theory of ... north to south: 1) the coastal Tel – a fertile, highly cultivated and sown zone; 2) the territory of the Atlas Mountains ... In the first case the empirical dependence between the maximum intensity of precipitation for some calculation ...

  16. Scientific substantination of maximum allowable concentration of fluopicolide in water

    Directory of Open Access Journals (Sweden)

    Pelo I.М.

    2014-03-01

    Full Text Available Research was carried out in order to substantiate the maximum allowable concentration of fluopicolide in the water of water reservoirs. Methods of study: a laboratory hygienic experiment using organoleptic, sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The influence of fluopicolide on the organoleptic properties of water and on the sanitary regimen of reservoirs for household purposes was determined, and its subthreshold concentration in water by the sanitary-toxicological hazard index was calculated. The threshold concentrations of the substance by the main hazard criteria were established and the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: the fluopicolide threshold concentration in water by the organoleptic hazard index (limiting criterion – the smell) is 0.15 mg/dm3; by the general sanitary hazard index (limiting criteria – impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification) it is 0.015 mg/dm3; the maximum noneffective concentration is 0.14 mg/dm3; and the maximum allowable concentration is 0.015 mg/dm3.
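
    The final value follows the usual rule of taking the lowest (most protective) threshold concentration across the hazard indices. A trivial check with the figures quoted above:

        # Worked check: the maximum allowable concentration (MAC) is set by the most
        # restrictive hazard index among those quoted in the abstract (mg/dm3).
        thresholds = {
            "organoleptic (smell)": 0.15,
            "general sanitary": 0.015,
            "sanitary-toxicological (maximum noneffective)": 0.14,
        }
        limiting_index = min(thresholds, key=thresholds.get)
        print("limiting index:", limiting_index, "-> MAC =", thresholds[limiting_index], "mg/dm3")
        # prints the general sanitary index -> MAC = 0.015 mg/dm3, matching the substantiated value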

  17. Image coding based on maximum entropy partitioning for identifying ...

    Indian Academy of Sciences (India)

    A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization ...

  19. Computing the Maximum Volume Inscribed Ellipsoid of a Polytopic Projection

    NARCIS (Netherlands)

    Zhen, J.; den Hertog, D.

    2015-01-01

    We introduce a novel scheme based on a blending of Fourier-Motzkin elimination (FME) and adjustable robust optimization techniques to compute the maximum volume inscribed ellipsoid (MVE) in a polytopic projection. It is well-known that deriving an explicit description of a projected polytope is

  20. Maximum super angle optimization method for array antenna pattern synthesis

    DEFF Research Database (Denmark)

    Wu, Ji; Roederer, A. G

    1991-01-01

    Different optimization criteria related to antenna pattern synthesis are discussed. Based on the maximum criteria and vector space representation, a simple and efficient optimization method is presented for array and array fed reflector power pattern synthesis. A sector pattern synthesized by a 2...

  1. correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameter. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5] using standard proctor compaction method carried out an ...

  2. Molecular markers linked to apomixis in Panicum maximum Jacq ...

    African Journals Online (AJOL)

    Panicum maximum Jacq. is an important forage grass of African origin largely used in the tropics. The genetic breeding of this species is based on the hybridization of sexual and apomictic genotypes and selection of apomictic F1 hybrids. The objective of this work was to identify molecular markers linked to apomixis in P.

  3. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  4. On a Weak Discrete Maximum Principle for hp-FEM

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Vejchodský, Tomáš

    -, č. 209 (2007), s. 54-65 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA102/05/0629 Institutional research plan: CEZ:AV0Z20570509; CEZ:AV0Z10190503 Keywords : discrete maximum principle * hp-FEM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.943, year: 2007

  5. Gamma-ray spectra deconvolution by maximum-entropy methods

    International Nuclear Information System (INIS)

    Los Arcos, J.M.

    1996-01-01

    A maximum-entropy method which includes the response of detectors and the statistical fluctuations of spectra is described and applied to the deconvolution of γ-ray spectra. Resolution enhancement of 25% can be reached for experimental peaks and up to 50% for simulated ones, while the intensities are conserved within 1-2%. (orig.)

  6. Modeling maximum daily temperature using a varying coefficient regression model

    Science.gov (United States)

    Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith

    2014-01-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

  7. Maximum Interconnectedness and Availability for Directional Airborne Range Extension Networks

    Science.gov (United States)

    2016-08-29

    Tactical military networks both on land and at sea often have restricted transmission...a standard definition in graph theoretic and networking literature that is related to, but different from, the metric we consider.

  8. Maximum of difference assessment of typical semitrailers: a global study

    CSIR Research Space (South Africa)

    Kienhofer, F

    2016-11-01

    Full Text Available the maximum allowable width and frontal overhang as stipulated by legislation from Australia, the European Union, Canada, the United States and South Africa. The majority of the Australian, EU and Canadian semitrailer combinations and all of the South African...

  9. The constraint rule of the maximum entropy principle

    NARCIS (Netherlands)

    Uffink, J.

    1995-01-01

    The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability

  10. 24 CFR 232.565 - Maximum loan amount.

    Science.gov (United States)

    2010-04-01

    ... URBAN DEVELOPMENT MORTGAGE AND LOAN INSURANCE PROGRAMS UNDER NATIONAL HOUSING ACT AND OTHER AUTHORITIES MORTGAGE INSURANCE FOR NURSING HOMES, INTERMEDIATE CARE FACILITIES, BOARD AND CARE HOMES, AND ASSISTED... Fire Safety Equipment Eligible Security Instruments § 232.565 Maximum loan amount. The principal amount...

  11. 5 CFR 531.221 - Maximum payable rate rule.

    Science.gov (United States)

    2010-01-01

    ... before the reassignment. (ii) If the rate resulting from the geographic conversion under paragraph (c)(2... previous rate (i.e., the former special rate after the geographic conversion) with the rates on the current... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Maximum payable rate rule. 531.221...

  12. Effects of bruxism on the maximum bite force

    Directory of Open Access Journals (Sweden)

    Todić Jelena T.

    2017-01-01

    Full Text Available Background/Aim. Bruxism is a parafunctional activity of the masticatory system, which is characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups were submitted to the procedure of measuring the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism influences the increase in the maximum bite force as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
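
    Since maximal bite force is obtained here as the product of maximal bite pressure and occlusal contact area, a tiny worked example (with made-up measurements, not the study's data) looks like this:

        # Hedged worked example of the force = pressure x area calculation described
        # above. The pressure and contact-area values are hypothetical.
        max_bite_pressure_mpa = 8.0          # MPa = N/mm^2, read from the pressure-sensitive film
        occlusal_contact_area_mm2 = 95.0     # mm^2

        max_bite_force_n = max_bite_pressure_mpa * occlusal_contact_area_mm2
        print(f"maximal bite force ~ {max_bite_force_n:.0f} N")   # 8 N/mm^2 * 95 mm^2 = 760 N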

  13. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  14. Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik

    2017-01-01

    The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...
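
    The record tunes the measurement-noise covariance from data. The sketch below isolates the covariance-matching idea only (the MLE variant and the UKF itself are omitted): average the outer products of recent innovations and subtract the filter's predicted innovation covariance. The function name, the PSD projection, and the synthetic check are assumptions for illustration, not details from the paper.

        import numpy as np

        def covariance_matching_R(innovations, predicted_innov_covs):
            """Estimate measurement-noise covariance R from a window of innovations.

            innovations: iterable of innovation vectors nu_k = y_k - y_pred_k
            predicted_innov_covs: matching list of H P_k|k-1 H^T terms from the filter
            Returns R_hat = mean(nu nu^T) - mean(H P H^T), projected to stay PSD.
            """
            nus = np.asarray(innovations)
            emp = sum(np.outer(nu, nu) for nu in nus) / len(nus)
            pred = sum(predicted_innov_covs) / len(predicted_innov_covs)
            r_hat = emp - pred
            w, v = np.linalg.eigh(r_hat)            # symmetric eigendecomposition
            return (v * np.clip(w, 1e-9, None)) @ v.T

        # Tiny synthetic check: innovations drawn with true R = diag(0.5, 0.1) and a
        # zero predicted part, so R_hat should recover roughly diag(0.5, 0.1).
        rng = np.random.default_rng(0)
        nus = rng.multivariate_normal([0.0, 0.0], np.diag([0.5, 0.1]), size=500)
        print(covariance_matching_R(nus, [np.zeros((2, 2))] * 500))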

  15. Handelman's hierarchy for the maximum stable set problem

    NARCIS (Netherlands)

    Laurent, M.; Sun, Z.

    2014-01-01

    The maximum stable set problem is a well-known NP-hard problem in combinatorial optimization, which can be formulated as the maximization of a quadratic square-free polynomial over the (Boolean) hypercube. We investigate a hierarchy of linear programming relaxations for this problem, based on a

  16. New shower maximum trigger for electrons and photons at CDF

    International Nuclear Information System (INIS)

    Amidei, D.; Burkett, K.; Gerdes, D.; Miao, C.; Wolinski, D.

    1994-01-01

    For the 1994 Tevatron collider run, CDF has upgraded the electron and photon trigger hardware to make use of shower position and size information from the central shower maximum detector. For electrons, the upgrade has resulted in a 50% reduction in backgrounds while retaining approximately 90% of the signal. The new trigger also eliminates the background to photon triggers from single-phototube spikes

  17. New shower maximum trigger for electrons and photons at CDF

    International Nuclear Information System (INIS)

    Gerdes, D.

    1994-08-01

    For the 1994 Tevatron collider run, CDF has upgraded the electron and photon trigger hardware to make use of shower position and size information from the central shower maximum detector. For electrons, the upgrade has resulted in a 50% reduction in backgrounds while retaining approximately 90% of the signal. The new trigger also eliminates the background to photon triggers from single-phototube discharge

  18. Maximum drawdown and the allocation to real estate

    NARCIS (Netherlands)

    Hamelink, F.; Hoesli, M.

    2004-01-01

    The role of real estate in a mixed-asset portfolio is investigated when the maximum drawdown (hereafter MaxDD), rather than the standard deviation, is used as the measure of risk. In particular, it is analysed whether the discrepancy between the optimal allocation to real estate and the actual

  19. A Family of Maximum SNR Filters for Noise Reduction

    DEFF Research Database (Denmark)

    Huang, Gongping; Benesty, Jacob; Long, Tao

    2014-01-01

    significantly increase the SNR but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR...

  20. 5 CFR 581.402 - Maximum garnishment limitations.

    Science.gov (United States)

    2010-01-01

    ... PROCESSING GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Consumer Credit Protection Act Restrictions..., pursuant to section 1673(b)(2) (A) and (B) of title 15 of the United States Code (the Consumer Credit... local law, the maximum part of the aggregate disposable earnings subject to garnishment to enforce any...

  1. Distribution of phytoplankton groups within the deep chlorophyll maximum

    KAUST Repository

    Latasa, Mikel; Cabello, Ana María; Moran, Xose Anxelu G.; Massana, Ramon; Scharek, Renate

    2016-01-01

    and optical and FISH microscopy. All groups presented minimum abundances at the surface and a maximum in the DCM layer. The cell distribution was not vertically symmetrical around the DCM peak and cells tended to accumulate in the upper part of the DCM layer

  2. 44 CFR 208.12 - Maximum Pay Rate Table.

    Science.gov (United States)

    2010-10-01

    ...) Physicians. DHS uses the latest Special Salary Rate Table Number 0290 for Medical Officers (Clinical... Personnel, in which case the Maximum Pay Rate Table would not apply. (3) Compensation for Sponsoring Agency... organizations, e.g., HMOs or medical or engineering professional associations, under the revised definition of...

  3. Anti-nutrient components of guinea grass ( Panicum maximum ...

    African Journals Online (AJOL)

    Yomi

    2012-01-31

    Jan 31, 2012 ... A true measure of forage quality is animal ... The anti-nutritional contents of a pasture could be ... nutrient factors in P. maximum; (2) assess the effect of nitrogen ..... 3. http://www.clemson.edu/Fairfield/local/news/quality.

  4. SIMULATION OF NEW SIMPLE FUZZY LOGIC MAXIMUM POWER ...

    African Journals Online (AJOL)

    2010-06-30

    Jun 30, 2010 ... Basic structure photovoltaic system Solar array mathematic ... The equivalent circuit model of a solar cell consists of a current generator and a diode ... control of boost converter (tracker) such that maximum power is achieved at the output of the solar panel. (Fig. 11: the membership function of input.)

  5. On maximum likelihood estimators in multiplicative models ...

    African Journals Online (AJOL)

    Abstract. We are interested in the existence and uniqueness of maximum likelihood estimators of parameters in the two multiplicative regression models, with Poisson or negative binomial probability distributions. Following his work on the multiplicative Poisson model with two factors without repeated measures, Haberman ...

  6. Gravitational Waves and the Maximum Spin Frequency of Neutron Stars

    NARCIS (Netherlands)

    Patruno, A.; Haskell, B.; D'Angelo, C.

    2012-01-01

    In this paper, we re-examine the idea that gravitational waves are required as a braking mechanism to explain the observed maximum spin frequency of neutron stars. We show that for millisecond X-ray pulsars, the existence of spin equilibrium as set by the disk/magnetosphere interaction is sufficient

  7. Applications of the Maximum Entropy Method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš

    2004-01-01

    Vol. 305 (2004), pp. 57-62. ISSN 0015-0193. Grants - others: DFG and FCI (DE). Institutional research plan: CEZ:AV0Z1010914. Keywords: Maximum Entropy Method * modulated structures * charge density. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 0.517, year: 2004

  8. Phytophthora stricta isolated from Rhododendron maximum in Pennsylvania

    Science.gov (United States)

    During a survey in October 2013, in the Michaux State Forest in Pennsylvania, necrotic Rhododendron maximum leaves were noticed on mature plants alongside a stream. Symptoms were nondescript necrotic lesions at the tips of mature leaves. Colonies resembling a Phytophthora sp. were observed from c...

  9. Transversals and independence in linear hypergraphs with maximum degree two

    DEFF Research Database (Denmark)

    Henning, Michael A.; Yeo, Anders

    2017-01-01

    , k-uniform hypergraphs with maximum degree 2. It is known [European J. Combin. 36 (2014), 231–236] that if H ∈ Hk, then (k + 1)τ(H) ≤ n + m, and there are only two hypergraphs that achieve equality in the bound. In this paper, we prove a much more powerful result, and establish tight upper bounds...

  10. A comparison of optimum and maximum reproduction using the rat ...

    African Journals Online (AJOL)

    of pigs to increase reproduction rate of sows (te Brake, 1978; Walker et al., 1979; Kemm et al., 1980). However, no experimental evidence exists that this strategy would in fact improve biological efficiency. In this pilot experiment, an attempt was made to compare systems of optimum or maximum reproduction using the rat.

  11. Revision of regional maximum flood (RMF) estimation in Namibia ...

    African Journals Online (AJOL)

    Extreme flood hydrology in Namibia for the past 30 years has largely been based on the South African Department of Water Affairs Technical Report 137 (TR 137) of 1988. This report proposes an empirically established upper limit of flood peaks for regions called the regional maximum flood (RMF), which could be ...

  12. Maximum entropy estimation via Gauss-LP quadratures

    NARCIS (Netherlands)

    Thély, Maxime; Sutter, Tobias; Mohajerin Esfahani, P.; Lygeros, John; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    We present an approximation method to a class of parametric integration problems that naturally appear when solving the dual of the maximum entropy estimation problem. Our method builds up on a recent generalization of Gauss quadratures via an infinite-dimensional linear program, and utilizes a

  13. Current opinion about maximum entropy methods in Moessbauer spectroscopy

    International Nuclear Information System (INIS)

    Szymanski, K

    2009-01-01

    Current opinion about Maximum Entropy Methods in Moessbauer Spectroscopy is presented. The most important advantage offered by the method is correct data processing under circumstances of incomplete information. A disadvantage is the sophisticated algorithm and its application to specific problems.

  14. The maximum number of minimal codewords in long codes

    DEFF Research Database (Denmark)

    Alahmadi, A.; Aldred, R.E.L.; dela Cruz, R.

    2013-01-01

    Upper bounds on the maximum number of minimal codewords in a binary code follow from the theory of matroids. Random coding provides lower bounds. In this paper, we compare these bounds with analogous bounds for the cycle code of graphs. This problem (in the graphic case) was considered in 1981 by...

  15. Inverse feasibility problems of the inverse maximum flow problems

    Indian Academy of Sciences (India)

    pp. 199–209, © Indian Academy of Sciences. Inverse feasibility problems of the inverse maximum flow problems. Adrian Deaconu and Eleonor Ciurea, Department of Mathematics and Computer Science, Faculty of Mathematics and Informatics, Transilvania University of Brasov, Iuliu Maniu st. 50, Brasov, Romania.

  16. Maximum Permissible Concentrations and Negligible Concentrations for pesticides

    NARCIS (Netherlands)

    Crommentuijn T; Kalf DF; Polder MD; Posthumus R; Plassche EJ van de; CSR

    1997-01-01

    Maximum Permissible Concentrations (MPCs) and Negligible Concentrations (NCs) derived for a series of pesticides are presented in this report. These MPCs and NCs are used by the Ministry of Housing, Spatial Planning and the Environment (VROM) to set Environmental Quality Objectives. For some of the

  17. Maximum Safety Regenerative Power Tracking for DC Traction Power Systems

    Directory of Open Access Journals (Sweden)

    Guifu Du

    2017-02-01

    Full Text Available Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually serving as return conductors. When traction current flows through the running rails, a voltage known as “rail potential” develops between the rails and ground. Abnormal rises of rail potential currently occur on many railway lines during operation. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed to control the maximum absolute rail potential and the energy consumption of DC traction power systems during operation. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ) are optimized with an improved particle swarm optimization (PSO) algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and the energy consumption of DC traction power systems can be reduced. Operation data from Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme effectively reduces the maximum absolute rail potential and energy consumption while guaranteeing safe energy-saving operation of DC traction power systems.
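
    The record optimizes station dwell times and the READ trigger voltage with an improved particle swarm optimizer. The sketch below is a plain (not "improved") PSO driven by a placeholder objective; the bounds, the quadratic objective and all numerical values are invented stand-ins for the rail-potential and energy model used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def objective(x):
            # Placeholder for simulated energy consumption plus a penalty on the
            # maximum absolute rail potential; not the model from the paper.
            dwell, v_trigger = x[:-1], x[-1]
            return np.sum((dwell - 30.0) ** 2) + 5.0 * (v_trigger - 800.0) ** 2

        def pso(obj, lower, upper, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            dim = len(lower)
            x = rng.uniform(lower, upper, size=(n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lower, upper)
                f = np.array([obj(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest, pbest_f.min()

        # Three station dwell times (s) plus one trigger voltage (V); bounds are illustrative.
        best, best_f = pso(objective,
                           lower=np.array([20.0, 20.0, 20.0, 700.0]),
                           upper=np.array([60.0, 60.0, 60.0, 900.0]))
        print(best, best_f)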

  18. Maximum Mass of Hybrid Stars in the Quark Bag Model

    Science.gov (United States)

    Alaverdyan, G. B.; Vartanyan, Yu. L.

    2017-12-01

    The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.
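
    The record locates the first-order phase transition with a Maxwell construction, i.e. the point where the hadronic and quark phases have equal pressure at equal baryon chemical potential. A toy sketch of that root-finding step, using invented analytic pressure functions rather than the Bonn-potential and MIT-bag equations of state of the paper:

        from scipy.optimize import brentq

        # Toy equations of state: pressure as a function of baryon chemical potential.
        # Purely illustrative functional forms and constants.
        def p_hadron(mu):
            return 1e-9 * (mu - 930.0) ** 4 if mu > 930.0 else 0.0

        def p_quark(mu, bag_constant=8.0):
            return 2e-9 * (mu - 980.0) ** 4 - bag_constant if mu > 980.0 else -bag_constant

        # Maxwell construction: the transition sits where the two pressure curves cross.
        mu_c = brentq(lambda mu: p_hadron(mu) - p_quark(mu), 1000.0, 3000.0)
        print(f"toy transition at mu_B ~ {mu_c:.1f}, p ~ {p_hadron(mu_c):.3g}")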

  19. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
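
    The record contrasts maximum-likelihood (ground-state) decoding with finite-temperature maximum-entropy decoding, where each bit is read off from its marginal under a Boltzmann distribution over all states. A brute-force sketch on a tiny random Ising chain, with exact enumeration standing in for the annealer's sampling; the chain length, couplings, fields and temperature are arbitrary assumptions.

        import itertools
        import numpy as np

        rng = np.random.default_rng(1)
        n = 8
        J = rng.normal(size=n - 1)           # chain couplings (arbitrary)
        h = rng.normal(scale=0.5, size=n)    # local fields, e.g. noisy received bits

        def energy(s):
            s = np.asarray(s)
            return -np.sum(J * s[:-1] * s[1:]) - np.sum(h * s)

        states = np.array(list(itertools.product([-1, 1], repeat=n)))
        energies = np.array([energy(s) for s in states])

        # Maximum-likelihood decoding: the single lowest-energy configuration.
        ml_decode = states[energies.argmin()]

        # Maximum-entropy decoding: sign of each spin's Boltzmann-weighted marginal.
        beta = 1.0                            # inverse temperature (arbitrary)
        w = np.exp(-beta * (energies - energies.min()))
        marginals = (w[:, None] * states).sum(axis=0) / w.sum()
        maxent_decode = np.where(marginals >= 0, 1, -1)

        print(ml_decode, maxent_decode, sep="\n")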

  20. Multilevel maximum likelihood estimation with application to covariance matrices

    Czech Academy of Sciences Publication Activity Database

    Turčičová, Marie; Mandel, J.; Eben, Kryštof

    Published online: 23 January 2018. ISSN 0361-0926. R&D Projects: GA ČR GA13-34856S. Institutional support: RVO:67985807. Keywords: Fisher information * High dimension * Hierarchical maximum likelihood * Nested parameter spaces * Spectral diagonal covariance model * Sparse inverse covariance model. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.311, year: 2016

  1. Heat Convection at the Density Maximum Point of Water

    Science.gov (United States)

    Balta, Nuri; Korganci, Nuri

    2018-01-01

    Water exhibits a maximum in density at normal pressure at around 4 °C. This paper demonstrates that during cooling, at around 4 °C, the temperature remains constant for a while because of heat exchange associated with convective currents inside the water. A superficial approach suggests this is a new anomaly of water, but actually it…

  2. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results...

  3. Optimal item discrimination and maximum information for logistic IRT models

    NARCIS (Netherlands)

    Veerkamp, W.J.J.; Veerkamp, Wim J.J.; Berger, Martijn P.F.; Berger, Martijn

    1999-01-01

    Items with the highest discrimination parameter values in a logistic item response theory model do not necessarily give maximum information. This paper derives discrimination parameter values, as functions of the guessing parameter and distances between person parameters and item difficulty, that

  4. Effect of Training Frequency on Maximum Expiratory Pressure

    Science.gov (United States)

    Anand, Supraja; El-Bashiti, Nour; Sapienza, Christine

    2012-01-01

    Purpose: To determine the effects of expiratory muscle strength training (EMST) frequency on maximum expiratory pressure (MEP). Method: We assigned 12 healthy participants to 2 groups of training frequency (3 days per week and 5 days per week). They completed a 4-week training program on an EMST trainer (Aspire Products, LLC). MEP was the primary…

  5. Assessment of the phytoremediation potential of Panicum maximum ...

    African Journals Online (AJOL)

    Obvious signs of phytotoxicity, however, appeared in plants exposed to 120 ppm Pb2+ and Cd2+ at day twenty-three, suggesting that P. maximum may be a moderate metal accumulator. Keywords: phytoremediation, heavy metals, uptake, tissues, accumulator. African Journal of Biotechnology, Vol 13(19), 1979-1984 ...

  6. Atlantic Meridional Overturning Circulation During the Last Glacial Maximum.

    NARCIS (Netherlands)

    Lynch-Stieglitz, J.; Adkins, J.F.; Curry, W.B.; Dokken, T.; Hall, I.R.; Herguera, J.C.; Hirschi, J.J.-M.; Ivanova, E.V.; Kissel, C.; Marchal, O.; Marchitto, T.M.; McCave, I.N.; McManus, J.F.; Mulitza, S.; Ninnemann, U.; Peeters, F.J.C.; Yu, E.-F.; Zahn, R.

    2007-01-01

    The circulation of the deep Atlantic Ocean during the height of the last ice age appears to have been quite different from today. We review observations implying that Atlantic meridional overturning circulation during the Last Glacial Maximum was neither extremely sluggish nor an enhanced version of

  7. Impact of Sowing Date Induced Temperature and Management Practices on Development Events and Yield of Mustard

    Directory of Open Access Journals (Sweden)

    MSA Khan, MA Aziz

    2015-12-01

    Full Text Available The experiment was conducted at the research field of the Agronomy Division, Bangladesh Agricultural Research Institute (BARI), Joydebpur, Gazipur, during the rabi season of 2014-2015 to find out the relationship between different development events of the mustard crop and sowing-date-induced temperature, as well as to minimize the yield reduction of the crop by adopting appropriate management practices. The mustard var. BARI Sarisha-15 was sown on 06 November, 25 November and 14 December 2014. The crop accumulated lower growing degree days (GDD), i.e., 72.15, 521.10 and 1070 to 1154 °C for the events of emergence, 50% flowering and maturity, respectively, in the 14 December sowing. Late-sown plants took the minimum time from flowering to maturity (36 days) due to increased temperature and high variability in both maximum and minimum temperature. The highest seed yield (1569 kg ha-1) was recorded from the 06 November sowing with high management practices, while the lowest seed yield (435 kg ha-1) came from the 14 December sowing with low management practices. With high management practices the crop yielded 1183 kg ha-1 at the 14 December sowing. The yield reduction under late sowing was thus reduced to some extent with high management practices. The seed yield reductions at the 14 December sowing, compared with high management practices at the 06 November sowing, were 72, 43 and 25% under low, medium and high management, respectively.
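
    The development events in this record are tracked through accumulated growing degree days. A minimal GDD accumulator, assuming the common formula GDD = sum of max(0, (Tmax + Tmin)/2 - Tbase); the base temperature of 5 °C and the sample temperatures are assumptions, not values from the experiment.

        def growing_degree_days(daily_tmax, daily_tmin, t_base=5.0):
            """Accumulate GDD (degree-days) from daily max/min temperatures in deg C."""
            total = 0.0
            for tmax, tmin in zip(daily_tmax, daily_tmin):
                total += max(0.0, (tmax + tmin) / 2.0 - t_base)
            return total

        # An illustrative week of rabi-season temperatures (deg C), not data from the study.
        print(growing_degree_days([28, 27, 29, 30, 26, 25, 27],
                                  [14, 13, 15, 16, 12, 11, 13]))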

  8. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*; but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot.
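
    Both the standard and the approximate bootstrap procedures discussed in this record start from nonparametric resampling of alignment columns. The sketch below shows only that resampling step (the parsimony tree search on each replicate is omitted); it is a generic illustration and does not reproduce MPBoot's own implementation or command line.

        import numpy as np

        rng = np.random.default_rng(42)

        def bootstrap_alignments(alignment, n_replicates=100):
            """Nonparametric bootstrap: resample alignment columns with replacement.

            alignment: array of shape (n_taxa, n_sites). Each replicate keeps the taxa
            but draws n_sites columns at random; scoring each replicate is left out here.
            """
            n_sites = alignment.shape[1]
            for _ in range(n_replicates):
                cols = rng.integers(0, n_sites, size=n_sites)
                yield alignment[:, cols]

        # Toy alignment: 4 taxa, 10 sites, characters encoded as small integers.
        aln = rng.integers(0, 4, size=(4, 10))
        replicates = list(bootstrap_alignments(aln, n_replicates=5))
        print(len(replicates), replicates[0].shape)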

  9. Modelling information flow along the human connectome using maximum flow.

    Science.gov (United States)

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provide insight into how network structure shapes information flow, in contrast to graph theory, and suggest future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
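
    The framework in this record treats connection strengths as capacities and computes the maximum flow between pairs of regions. A compact Edmonds-Karp sketch on a toy weighted "connectome"; the region names and capacity values are invented for illustration.

        from collections import deque

        def max_flow(capacity, source, sink):
            """Edmonds-Karp maximum flow; capacity is a dict-of-dicts of edge capacities."""
            residual = {u: {} for u in capacity}
            for u, nbrs in capacity.items():
                for v, c in nbrs.items():
                    residual[u][v] = residual[u].get(v, 0) + c
                    residual.setdefault(v, {}).setdefault(u, 0)  # reverse edge
            flow = 0
            while True:
                parent = {source: None}          # BFS for a shortest augmenting path
                queue = deque([source])
                while queue and sink not in parent:
                    u = queue.popleft()
                    for v, c in residual[u].items():
                        if c > 0 and v not in parent:
                            parent[v] = u
                            queue.append(v)
                if sink not in parent:
                    return flow
                path, v = [], sink               # recover the path and its bottleneck
                while parent[v] is not None:
                    path.append((parent[v], v))
                    v = parent[v]
                bottleneck = min(residual[u][v] for u, v in path)
                for u, v in path:
                    residual[u][v] -= bottleneck
                    residual[v][u] += bottleneck
                flow += bottleneck

        # Toy weighted connectome: connection strengths used as capacities (invented).
        connectome = {
            "V1": {"V2": 3.0, "LGN": 2.0},
            "LGN": {"V2": 1.5},
            "V2": {"IT": 2.5},
        }
        print(max_flow(connectome, "V1", "IT"))  # limited by the V2 -> IT bottleneck: 2.5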

  10. Reconstructed North American, Eurasian, and Northern Hemisphere Snow Cover Extent, 1915-1997

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains time series of monthly snow cover extent (SCE) for North America, Eurasia, and the Northern Hemisphere from 1915 to 1997, based on snow cover...

  11. Quantifying emphysema extent from weakly labeled CT scans of the lungs using label proportions learning

    DEFF Research Database (Denmark)

    Ørting, Silas Nyboe; Petersen, Jens; Wille, Mathilde

    2016-01-01

    Quantification of emphysema extent is important in diagnosing and monitoring patients with chronic obstructive pulmonary disease (COPD). Several studies have shown that emphysema quantification by supervised texture classification is more robust and accurate than traditional densitometry. Current techniques require highly time consuming manual annotations of patches or use only weak labels indicating overall disease status (e.g., COPD or healthy). We show how visual scoring of regional emphysema extent can be exploited in a learning with label proportions (LLP) framework to both predict presence of emphysema in smaller patches and estimate regional extent. We evaluate performance on 195 visually scored CT scans and achieve an intraclass correlation of 0.72 (0.65–0.78) between predicted region extent and expert raters. To our knowledge this is the first time that LLP methods have been applied...
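
    The record trains a patch-level emphysema predictor from region-level label proportions (LLP). A minimal sketch of that idea: a logistic patch model whose mean predicted probability per region is pushed toward the region's scored proportion by gradient descent. The feature dimension, the regions and the squared-error proportion loss are assumptions for illustration, not the authors' formulation.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy data: three regions ("bags") of patch feature vectors with known proportions.
        bags = [rng.normal(size=(50, 5)) + shift for shift in (0.0, 0.5, 1.5)]
        proportions = np.array([0.05, 0.30, 0.70])   # visual scores per region (invented)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        w, b, lr = np.zeros(5), 0.0, 0.5
        for _ in range(500):
            grad_w, grad_b = np.zeros(5), 0.0
            for X, target in zip(bags, proportions):
                p = sigmoid(X @ w + b)                 # patch-level probabilities
                diff = p.mean() - target               # bag proportion mismatch
                g = 2.0 * diff * p * (1 - p) / len(p)  # gradient of (mean(p) - target)^2
                grad_w += X.T @ g
                grad_b += g.sum()
            w -= lr * grad_w
            b -= lr * grad_b

        for X, target in zip(bags, proportions):
            print(f"target {target:.2f}  predicted {sigmoid(X @ w + b).mean():.2f}")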

  12. extent of use of ict by fish farmers in isoko agricultural zone of delta

    African Journals Online (AJOL)

    Mr. TONY A

    Abstract. The study examined the extent of use of ICTs by fish farmers in Isoko ... (Table 1: percentage distribution of respondents by selected socioeconomic characteristics) ... fish breeds, feeds and management), and made inquiries about market predictions.

  13. To what extent do science ESP learning materials fit the purpose for ...

    African Journals Online (AJOL)

    To what extent do science ESP learning materials fit the purpose for which they have been devised? An evaluation in terms of Cronje's (1993) criteria. ... Journal for Language Teaching.

  14. LBA-ECO LC-07 Wetland Extent, Vegetation, and Inundation: Lowland Amazon Basin

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set provides a map of wetland extent, vegetation type, and dual-season flooding state of the entire lowland Amazon basin. The map was derived from mosaics...

  15. Sea Ice Edge Location and Extent in the Russian Arctic, 1933-2006

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Sea Ice Edge Location and Extent in the Russian Arctic, 1933-2006 data are derived from sea ice charts from the Arctic and Antarctic Research Institute (AARI),...

  16. NOAA Climate Data Record (CDR) of Northern Hemisphere (NH) Snow Cover Extent (SCE), Version 1

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This NOAA Climate Data Record (CDR) is a record for the Northern Hemisphere (NH) Snow Cover Extent (SCE) spanning from October 4, 1966 to present, updated monthly...

  17. The extent of unwanted infrared photoacoustic signals from polymer sampling tubings exposed to ultraviolet radiation

    NARCIS (Netherlands)

    Bicanic, D.; Solyom, A.; Angeli, G.; Wegh, H.; Postumus, M.; Jalink, H.

    1995-01-01

    The extent of unwanted photoacoustic (PA) signals due to volatiles released from various polymer tubing materials [transparent, red and black polyethylene (PE), polymer of tetrafluorethylene (PTFE) and copolymer of tetrafluorethylene and hexafluorethylene (FEP)] when exposed to 245 nm radiation was

  18. Comment on “Canonical formalism for Lagrangians with nonlocality of finite extent”

    International Nuclear Information System (INIS)

    Llosa, Josep

    2003-01-01

    The paper by Woodward [Phys. Rev. A 62, 052105 (2000)] claimed to have proved that Lagrangian theories with a nonlocality of finite extent are necessarily unstable. In this Comment we propose that this conclusion is false.

  19. Cook Inlet and Kenai Peninsula, Alaska ESI: ICE (Ice Extent Lines)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains locations of ice extent in Cook Inlet, Alaska. Vector lines in the data set represent 50 percent ice coverage. Location-specific type and...

  20. Global extent and determinants of savanna and forest as alternative biome states

    CSIR Research Space (South Africa)

    Staver, C

    2011-10-01

    Full Text Available Theoretically, fire–tree cover feedbacks can maintain savanna and forest as alternative stable states. However, the global extent of fire-driven discontinuities in tree cover is unknown, especially accounting for seasonality and soils. The authors...