WorldWideScience

Sample records for strength thresholds dataset

  1. Nuclear threshold effects and neutron strength function

    International Nuclear Information System (INIS)

    Hategan, Cornel; Comisel, Horia

    2003-01-01

    One proves that a Nuclear Threshold Effect is dependent, via the Neutron Strength Function, on the spectroscopy of the ancestral Neutron Threshold State, and that the magnitude of the Nuclear Threshold Effect is proportional to the Neutron Strength Function. Evidence for the relation of Nuclear Threshold Effects to Neutron Strength Functions is obtained from the Isotopic Threshold Effect and the Deuteron Stripping Threshold Anomaly. Empirical and computational analysis of the Isotopic Threshold Effect and of the Deuteron Stripping Threshold Anomaly demonstrates their close relationship to Neutron Strength Functions. It is established that Nuclear Threshold Effects depend, in addition to genuine Nuclear Reaction Mechanisms, on the spectroscopy of the (ancestral) Neutron Threshold State; the magnitude of the effect tracks the Neutron Strength Function in its dependence on mass number. This result also constitutes a proof that the origins of these threshold effects are Neutron Single Particle States at zero energy. (author)

  2. Dataset of the relationship between unconfined compressive strength and tensile strength of rock mass

    International Nuclear Information System (INIS)

    Sugita, Yutaka; Yui, Mikazu

    2002-02-01

    This report summarizes the dataset of the relationship between unconfined compressive strength and tensile strength of the rock mass described in Supporting Report 2 (Repository Design and Engineering Technology) of the second progress report (H12 report) on research and development for the geological disposal of HLW in Japan. (author)

  3. Dataset on predictive compressive strength model for self-compacting concrete.

    Science.gov (United States)

    Ofuyatan, O M; Edeki, S O

    2018-04-01

    The determination of compressive strength is affected by many variables, such as the water cement (WC) ratio, the superplasticizer (SP), the aggregate combination, and the binder combination. In this dataset article, 7, 28, and 90-day compressive strength models are derived using statistical analysis. The response surface methodology is used to investigate the effect of the parameters (varying percentages of ash, cement, WC, and SP) on the hardened property of compressive strength at 7, 28 and 90 days. The levels of the independent parameters are determined based on preliminary experiments. The experimental values for compressive strength at 7, 28 and 90 days and modulus of elasticity under different treatment conditions are also discussed and presented. This dataset can effectively be used for modelling and prediction in concrete production settings.

  4. An open, multi-vendor, multi-field-strength brain MR dataset and analysis of publicly available skull stripping methods agreement.

    Science.gov (United States)

    Souza, Roberto; Lucena, Oeslle; Garrafa, Julia; Gobbi, David; Saluzzi, Marina; Appenzeller, Simone; Rittner, Letícia; Frayne, Richard; Lotufo, Roberto

    2018-04-15

    This paper presents an open, multi-vendor, multi-field strength magnetic resonance (MR) T1-weighted volumetric brain imaging dataset, named Calgary-Campinas-359 (CC-359). The dataset is composed of images of older healthy adults (29-80 years) acquired on scanners from three vendors (Siemens, Philips and General Electric) at both 1.5 T and 3 T. CC-359 comprises 359 datasets, approximately 60 subjects per vendor and magnetic field strength. The dataset is approximately age and gender balanced, subject to the constraints of the available images. It provides consensus brain extraction masks for all volumes, generated using supervised classification. Manual segmentation results for twelve randomly selected subjects, performed by an expert, are also provided. The CC-359 dataset allows investigation of 1) the influences of both vendor and magnetic field strength on quantitative analysis of brain MR; 2) parameter optimization for automatic segmentation methods; and potentially 3) machine learning classifiers with big data, specifically those based on deep learning methods, as these approaches require a large amount of data. To illustrate the utility of this dataset, we compared, against the results of a supervised classifier, the results of eight publicly available skull stripping methods and one publicly available consensus algorithm. A linear mixed effects model analysis indicated that vendor and field strength (p-value < 0.001) have statistically significant impacts on skull stripping results. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Landslide triggering thresholds for Switzerland based on a new gridded precipitation dataset

    Science.gov (United States)

    Leonarduzzi, Elena; Molnar, Peter; McArdell, Brian W.

    2017-04-01

    In Switzerland floods are responsible for most of the damage caused by rainfall-triggered natural hazards (89%), followed by landslides (6%, ca. 520 M Euros) as reported in Hilker et al. (2009) for the period 1972-2007. The prediction of landslide occurrence is particularly challenging because of their wide distribution in space and the complex interdependence of predisposing and triggering factors. The overall goal of our research is to develop an Early Warning System for landsliding in Switzerland based on hydrological modelling and rainfall forecasts. In order to achieve this, we first analyzed rainfall triggering thresholds for landslides from a new gridded daily precipitation dataset (RhiresD, MeteoSwiss) for Switzerland combined with landslide events recorded in the Swiss Damage Database (Hilker et al.,2009). The high-resolution gridded precipitation dataset allows us to collocate rainfall and landslides accurately in space, which is an advantage over many previous studies. Each of the 2272 landslides in the database in the period 1972-2012 was assigned to the corresponding 2x2 km precipitation cell. For each of these cells, precipitation events were defined as series of consecutive rainy days and the following event parameters were computed: duration (day), maximum and mean daily intensity (mm/day), total rainfall depth (mm) and maximum daily intensity divided by Mean Daily Precipitation (MDP). The events were classified as triggering or non-triggering depending on whether a landslide was recorded in the cell during the event. This classification of observations was compared to predictions based on a threshold for each of the parameters. The predictive power of each parameter and the best threshold value were quantified by ROC analysis and statistics such as AUC and the True Skill Statistic (TSS). Event parameters based on rainfall intensity were found to have similarly high predictive power (TSS=0.54-0.59, AUC=0.85-0.86), while rainfall duration had a
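
    The event classification and scoring described above can be sketched in a few lines: label each precipitation event as triggering or non-triggering, then scan candidate thresholds and keep the one with the best True Skill Statistic. The snippet below is a minimal illustration on synthetic data (hypothetical variable names, not the RhiresD or Swiss Damage Database records).

      # Minimal sketch: score candidate rainfall-intensity thresholds with the
      # True Skill Statistic (TSS = hit rate - false alarm rate). Synthetic data.
      import numpy as np

      rng = np.random.default_rng(0)
      max_intensity = rng.gamma(shape=2.0, scale=15.0, size=5000)   # mm/day, one value per event
      triggering = rng.random(5000) < 1.0 / (1.0 + np.exp(-(max_intensity - 60.0) / 10.0))

      def true_skill_statistic(threshold, intensity, observed):
          predicted = intensity >= threshold
          tp = np.sum(predicted & observed)
          fn = np.sum(~predicted & observed)
          fp = np.sum(predicted & ~observed)
          tn = np.sum(~predicted & ~observed)
          hit_rate = tp / (tp + fn) if (tp + fn) else 0.0
          false_alarm_rate = fp / (fp + tn) if (fp + tn) else 0.0
          return hit_rate - false_alarm_rate

      thresholds = np.linspace(0.0, max_intensity.max(), 200)
      tss = np.array([true_skill_statistic(t, max_intensity, triggering) for t in thresholds])
      print(f"best threshold ~{thresholds[np.argmax(tss)]:.1f} mm/day, TSS = {tss.max():.2f}")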

  6. Strength and Pain Threshold Handheld Dynamometry Test Reliability in Patellofemoral Pain.

    Science.gov (United States)

    van der Heijden, R A; Vollebregt, T; Bierma-Zeinstra, S M A; van Middelkoop, M

    2015-12-01

    Patellofemoral pain syndrome (PFPS), characterized by peri- and retropatellar pain, is a common disorder in young, active people. The etiology is unclear; however, quadriceps strength seems to be a contributing factor, and sensitization might play a role. The purpose of the study is to determine the inter-rater reliability of handheld dynamometry for testing both quadriceps strength and pressure pain threshold (PPT), a measure of sensitization, in patients with PFPS. This cross-sectional case-control study comprises three quadriceps strength measurements and one PPT measurement performed by 2 independent investigators in 22 PFPS patients and 16 matched controls. Inter-rater reliability was analyzed using intraclass correlation coefficients (ICC) and Bland-Altman plots. Inter-rater reliability of quadriceps strength testing was fair to good in PFPS patients (ICC=0.72) and controls (ICC=0.63). Bland-Altman plots showed an increased difference between assessors when average quadriceps strength values exceeded 250 N. Inter-rater reliability of PPT was excellent in patients (ICC=0.79) and fair to good in controls (ICC=0.52). Handheld dynamometry seems to be a reliable method to test both quadriceps strength and PPT in PFPS patients. Inter-rater reliability was higher in PFPS patients compared to control subjects. With regard to quadriceps testing, a higher variance between assessors occurs when quadriceps strength increases. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    Science.gov (United States)

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher
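
    The threshold-selection step can be illustrated with a short sketch. In the paper the per-threshold linkage quality is estimated from EM-derived match probabilities on Bloom-filter-encoded data; the simplified version below assumes true match labels are known for synthetic pairwise scores and simply picks the cut-off that maximizes the F-measure.

      # Sketch: choose a record-linkage threshold by scanning candidate cut-offs
      # over pairwise match scores and maximizing the F-measure. Synthetic scores.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 20000
      is_match = rng.random(n) < 0.02
      scores = np.where(is_match,
                        rng.normal(18.0, 3.0, n),    # scores of true matches
                        rng.normal(2.0, 3.0, n))     # scores of non-matches

      def f_measure(threshold):
          pred = scores >= threshold
          tp = np.sum(pred & is_match)
          fp = np.sum(pred & ~is_match)
          fn = np.sum(~pred & is_match)
          precision = tp / (tp + fp) if (tp + fp) else 0.0
          recall = tp / (tp + fn) if (tp + fn) else 0.0
          return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

      cands = np.linspace(scores.min(), scores.max(), 400)
      f = np.array([f_measure(t) for t in cands])
      print(f"estimated threshold = {cands[np.argmax(f)]:.2f}, F-measure = {f.max():.3f}")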

  8. Privacy preserving data anonymization of spontaneous ADE reporting system dataset.

    Science.gov (United States)

    Lin, Wen-Yang; Yang, Duen-Chuan; Wang, Jie-Teng

    2016-07-18

    To facilitate long-term safety surveillance of marketed drugs, many spontaneous reporting systems (SRSs) of ADR events have been established worldwide. Since the data collected by SRSs contain sensitive personal health information that should be protected to prevent the identification of individuals, this raises the issue of privacy-preserving data publishing (PPDP), that is, how to sanitize (anonymize) raw data before publishing. Although much work has been done on PPDP, very few studies have focused on protecting the privacy of SRS data, and none of the existing anonymization methods suits SRS datasets, which exhibit characteristics such as rare events, multiple individual records, and multi-valued sensitive attributes. We propose a new privacy model called MS(k, θ*)-bounding for protecting published spontaneous ADE reporting data from privacy attacks. Our model has the flexibility of varying privacy thresholds, i.e., θ*, for different sensitive values and takes the characteristics of SRS data into consideration. We also propose an anonymization algorithm for sanitizing the raw data to meet the requirements specified through the proposed model. Our algorithm adopts a greedy-based clustering strategy to group the records into clusters, conforming to an innovative anonymization metric aiming to minimize the privacy risk as well as maintain the data utility for ADR detection. An empirical study was conducted using the FAERS dataset from 2004Q1 to 2011Q4. We compared our model with four prevailing methods, including k-anonymity, (X, Y)-anonymity, Multi-sensitive l-diversity, and (α, k)-anonymity, evaluated via two measures, Danger Ratio (DR) and Information Loss (IL), and considered three different scenarios of threshold setting for θ*, including uniform setting, level-wise setting and frequency-based setting. We also conducted experiments to inspect the impact of anonymized data on the strengths of discovered ADR signals. With all three

  9. Intermediate structure and threshold phenomena

    International Nuclear Information System (INIS)

    Hategan, Cornel

    2004-01-01

    The Intermediate Structure, evidenced through microstructures of the neutron strength function, is reflected in open reaction channels as fluctuations in excitation function of nuclear threshold effects. The intermediate state supporting both neutron strength function and nuclear threshold effect is a micro-giant neutron threshold state. (author)

  10. Interobserver reproducibility of the assessment of severity of complaints, grip strength, and pressure pain threshold in patients with lateral epicondylitis.

    NARCIS (Netherlands)

    Smidt, N.; Windt, A. van der; Assendelft, W.J.; Mourits, A.J.; Devillé, W.L.; Winter, F. de; Bouter, L.M.

    2002-01-01

    Objective: To evaluate the interobserver reproducibility of the assessment of severity of complaints, grip strength, and pressure pain threshold in patients with lateral epicondylitis in primary care. Design: Two physiotherapists assessed independently, and in randomized order, the severity of

  11. Interobserver reproducibility of the assessment of severity of complaints, grip strength, and pressure pain threshold in patients with lateral epicondylitis

    NARCIS (Netherlands)

    Smidt, N; van der Windt, DA; Assendelft, WJ; Mourits, AJ; Deville, WL; de Winter, AF; Bouter, LM

    Objective: To evaluate the interobserver reproducibility of the assessment of severity of complaints, grip strength, and pressure pain threshold in patients with lateral epicondylitis in primary care. Design: Two physiotherapists assessed independently, and in randomized order, the severity of

  12. Earth-strength magnetic field affects the rheotactic threshold of zebrafish swimming in shoals.

    Science.gov (United States)

    Cresci, Alessandro; De Rosa, Rosario; Putman, Nathan F; Agnisola, Claudio

    2017-02-01

    Rheotaxis, the unconditioned orienting response to water currents, is a main component of fish behavior. Rheotaxis is achieved using multiple sensory systems, including visual and tactile cues. Rheotactic orientation in open or low-visibility waters might also benefit from the stable frame of reference provided by the geomagnetic field, but this possibility has not been explored before. Zebrafish (Danio rerio) form shoals living in freshwater systems with low visibility, show a robust positive rheotaxis, and respond to geomagnetic fields. Here, we investigated whether a static magnetic field in the Earth-strength range influenced the rheotactic threshold of zebrafish in a swimming tunnel. The direction of the horizontal component of the magnetic field relative to water flow influenced the rheotactic threshold of fish as part of a shoal, but not of fish tested alone. Results obtained after disabling the lateral line of shoaling individuals with Co²⁺ suggest that this organ system is involved in the observed magneto-rheotactic response. These findings constitute preliminary evidence that magnetic fields influence rheotaxis and suggest new avenues for further research. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Estimating the Threshold Level of Inflation for Thailand

    OpenAIRE

    Jiranyakul, Komain

    2017-01-01

    Abstract. This paper analyzes the relationship between inflation and economic growth in Thailand using an annual dataset covering 1990 to 2015. The threshold model is estimated for different levels of the threshold inflation rate. The results suggest that the threshold level of inflation, above which inflation significantly slows growth, is estimated at 3 percent. The negative relationship between inflation and growth is apparent above this threshold level of inflation. In other words, the inflation rat...

  14. Measurements of NN → dπ near threshold

    International Nuclear Information System (INIS)

    Hutcheon, D.A.

    1990-09-01

    New, precise measurements of the differential cross sections for np → dπ⁰ and π⁺d → pp and of analyzing powers for pp → dπ⁺ have been made at energies within 10 MeV (c.m.) of threshold. They allow the pion s-wave and p-wave parts of the production strength to be distinguished unambiguously, yielding an s-wave strength at threshold which is significantly smaller than the previously accepted value. There is no evidence for charge independence breaking nor for πNN resonances near threshold. (Author) (17 refs., 17 figs., tab.)

  15. Gene expression analysis supports tumor threshold over 2.0 cm for T-category breast cancer.

    Science.gov (United States)

    Solvang, Hiroko K; Frigessi, Arnoldo; Kaveh, Fateme; Riis, Margit L H; Lüders, Torben; Bukholm, Ida R K; Kristensen, Vessela N; Andreassen, Bettina K

    2016-12-01

    Tumor size, as indicated by the T-category, is known as a strong prognostic indicator for breast cancer. It is common practice to distinguish the T1 and T2 groups at a tumor size of 2.0 cm. We investigated the 2.0-cm rule from a new point of view. Here, we try to find the optimal threshold based on the differences between the gene expression profiles of the T1 and T2 groups (as defined by the threshold). We developed a numerical algorithm to measure the overall differential gene expression between patients with smaller tumors and those with larger tumors among multiple expression datasets from different studies. We confirmed the performance of the proposed algorithm by a simulation study and then applied it to three different studies conducted at two Norwegian hospitals. We found that the maximum difference in gene expression is obtained at a threshold of 2.2-2.4 cm, and we confirmed that the optimum threshold was over 2.0 cm, as indicated by a validation study using five publicly available expression datasets. Furthermore, we observed a significant differentiation between the two threshold groups in terms of time to local recurrence for the Norwegian datasets. In addition, we performed associated network and canonical pathway analyses for the genes differentially expressed between tumors below and above the given thresholds, 2.0 and 2.4 cm, using the Norwegian datasets. The associated network analysis highlighted a cellular assembly function for the genes at the 2.0-cm threshold, an energy production function at the 2.4-cm threshold, and an enrichment in lipid metabolism for the genes in the intersection of the 2.0- and 2.4-cm thresholds.
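
    The threshold-scanning idea can be sketched as follows. This is a toy stand-in for the authors' algorithm (which aggregates evidence across multiple datasets): each candidate tumor-size cut-off splits synthetic samples into two groups, and the cut-off with the largest aggregate expression difference is reported.

      # Sketch: scan tumor-size thresholds and score the overall differential
      # expression between the two groups each threshold defines. Synthetic data.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      n_samples, n_genes = 200, 1000
      tumor_size = rng.uniform(0.5, 5.0, n_samples)          # cm
      expr = rng.normal(0.0, 1.0, (n_samples, n_genes))
      expr[tumor_size > 2.3, :50] += 1.0                      # 50 genes shift above ~2.3 cm

      def overall_difference(threshold):
          small = expr[tumor_size <= threshold]
          large = expr[tumor_size > threshold]
          t, _ = stats.ttest_ind(small, large, axis=0, equal_var=False)
          return np.nanmean(t ** 2)                           # aggregate differential signal

      grid = np.arange(1.5, 3.6, 0.1)
      scores = [overall_difference(th) for th in grid]
      print(f"optimal threshold ~{grid[int(np.argmax(scores))]:.1f} cm")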

  16. The Glacial-Interglacial summer monsoon recorded in southwest Sulawesi speleothems: Evidence for sea level thresholds driving tropical monsoon strength

    Science.gov (United States)

    Kimbrough, A. K.; Gagan, M. K.; Dunbar, G. B.; Krause, C.; Di Nezio, P. N.; Hantoro, W. S.; Cheng, H.; Edwards, R. L.; Shen, C. C.; Sun, H.; Cai, B.; Rifai, H.

    2016-12-01

    Southwest Sulawesi lies within the Indo-Pacific Warm Pool (IPWP), at the center of atmospheric convection for two of the largest circulation cells on the planet, the meridional Hadley Cell and zonal Indo-Pacific Walker Circulation. Due to the geographic coincidence of these circulation cells, southwest Sulawesi serves as a hotspot for changes in tropical Pacific climate variability and Australian-Indonesian summer monsoon (AISM) strength over glacial-interglacial (G-I) timescales. The work presented here spans 386 - 127 ky BP, including glacial terminations IV (~340 ky BP) and both phases of TIII (TIII, ~248 ky BP, and TIIIa, ~217 ky BP). This record, along with previous work from southwest Sulawesi spanning the last 40 kyr, reveals coherent climatic features over three complete G-I cycles. The multi-stalagmite Sulawesi speleothem δ18O record demonstrates that on G-I timescales, the strength of the AISM is most sensitive to changes in sea level and its impact on the regional distribution of land and shallow ocean. Stalagmite δ18O and trace element (Mg/Ca) data indicate a rapid increase in rainfall at glacial terminations and wet interglacials. TIV, TIII, TIIIa, and TI are each characterized by an abrupt 3‰ decrease in δ18O that coincides with sea level rise and flooding of the Sunda and Sahul shelves. Strong evidence for a sea level (flooding/exposure) threshold is found throughout the southwest Sulawesi record. This is most clearly demonstrated over the period 230 - 212 ky BP (MIS 7d-7c), when a sea level fall to only -80 to -60 m for 10 kyr results in a weakened AISM and glacial conditions, followed by a full termination. Taken together, both glaciations and glacial terminations imply a sea level threshold driving the AISM between two primary levels of intensity ('interglacial' & 'glacial'). These massive, sea-level driven shifts in AISM strength are superimposed on precession-scale variability associated with boreal fall insolation at the equator, indicating

  17. Mechanistic dissimilarities between environmentally-influenced fatigue-crack propagation at near-threshold and higher growth rates in lower-strength steels

    Energy Technology Data Exchange (ETDEWEB)

    Suresh, S.; Ritchie, R. O.

    1981-11-01

    The role of hydrogen gas in influencing fatigue crack propagation is examined for several classes of lower strength pressure vessel and piping steels. Based on measurements over a wide range of growth rates from 10⁻⁸ to 10⁻² mm/cycle, crack propagation rates are found to be significantly higher in dehumidified gaseous hydrogen compared to moist air in two distinct regimes of crack growth, namely (i) at the intermediate range of growth typically above approx. 10⁻⁵ mm/cycle, and (ii) at the near-threshold region below approx. 10⁻⁶ mm/cycle approaching lattice dimensions per cycle. Both effects are seen at maximum stress intensities (K_max) far below the sustained-load threshold stress intensity for hydrogen-assisted cracking (K_Iscc). Characteristics of environmentally influenced fatigue crack growth in each regime are shown to be markedly different with regard to fractography and the effect of such variables as load ratio and frequency. It is concluded that the primary mechanisms responsible for the influence of the environment in each regime are distinctly different. Whereas corrosion fatigue behavior at intermediate growth rates can be attributed to hydrogen embrittlement processes, the primary role of moist environments at near-threshold levels is shown to involve a contribution from enhanced crack closure due to the formation of crack surface corrosion deposits at low load ratios.

  18. Imaging Shear Strength Along Subduction Faults

    Science.gov (United States)

    Bletery, Quentin; Thomas, Amanda M.; Rempel, Alan W.; Hardebeck, Jeanne L.

    2017-11-01

    Subduction faults accumulate stress during long periods of time and release this stress suddenly, during earthquakes, when it reaches a threshold. This threshold, the shear strength, controls the occurrence and magnitude of earthquakes. We consider a 3-D model to derive an analytical expression for how the shear strength depends on the fault geometry, the convergence obliquity, frictional properties, and the stress field orientation. We then use estimates of these different parameters in Japan to infer the distribution of shear strength along a subduction fault. We show that the 2011 Mw9.0 Tohoku earthquake ruptured a fault portion characterized by unusually small variations in static shear strength. This observation is consistent with the hypothesis that large earthquakes preferentially rupture regions with relatively homogeneous shear strength. With increasing constraints on the different parameters at play, our approach could, in the future, help identify favorable locations for large earthquakes.

  19. Imaging shear strength along subduction faults

    Science.gov (United States)

    Bletery, Quentin; Thomas, Amanda M.; Rempel, Alan W.; Hardebeck, Jeanne L.

    2017-01-01

    Subduction faults accumulate stress during long periods of time and release this stress suddenly, during earthquakes, when it reaches a threshold. This threshold, the shear strength, controls the occurrence and magnitude of earthquakes. We consider a 3-D model to derive an analytical expression for how the shear strength depends on the fault geometry, the convergence obliquity, frictional properties, and the stress field orientation. We then use estimates of these different parameters in Japan to infer the distribution of shear strength along a subduction fault. We show that the 2011 Mw9.0 Tohoku earthquake ruptured a fault portion characterized by unusually small variations in static shear strength. This observation is consistent with the hypothesis that large earthquakes preferentially rupture regions with relatively homogeneous shear strength. With increasing constraints on the different parameters at play, our approach could, in the future, help identify favorable locations for large earthquakes.

  20. Testing for a Debt-Threshold Effect on Output Growth.

    Science.gov (United States)

    Lee, Sokbae; Park, Hyunmin; Seo, Myung Hwan; Shin, Youngki

    2017-12-01

    Using the Reinhart-Rogoff dataset, we find a debt threshold not around 90 per cent but around 30 per cent, above which the median real gross domestic product (GDP) growth falls abruptly. Our work is the first to formally test for threshold effects in the relationship between public debt and median real GDP growth. The null hypothesis of no threshold effect is rejected at the 5 per cent significance level for most cases. While we find no evidence of a threshold around 90 per cent, our findings from the post-war sample suggest that the debt threshold for economic growth may exist around a relatively small debt-to-GDP ratio of 30 per cent. Furthermore, countries with debt-to-GDP ratios above 30 per cent have GDP growth that is 1 percentage point lower at the median.
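
    The flavour of such a threshold search can be sketched with a toy least-absolute-deviation (median) threshold regression on synthetic data; this is not the authors' estimator or its formal inference procedure, only an illustration of grid-searching the break point.

      # Sketch: grid-search a debt-to-GDP threshold by minimizing the least-absolute-
      # deviation loss around each regime's median growth. Synthetic data.
      import numpy as np

      rng = np.random.default_rng(3)
      debt = rng.uniform(0, 150, 2000)                         # debt-to-GDP ratio, per cent
      growth = np.where(debt > 30, 1.5, 2.5) + rng.normal(0, 1.5, 2000)

      def lad_loss(threshold):
          lo, hi = growth[debt <= threshold], growth[debt > threshold]
          return np.abs(lo - np.median(lo)).sum() + np.abs(hi - np.median(hi)).sum()

      grid = np.arange(10.0, 120.0, 1.0)
      best = grid[int(np.argmin([lad_loss(t) for t in grid]))]
      lo, hi = growth[debt <= best], growth[debt > best]
      print(f"estimated threshold ~{best:.0f}% of GDP; "
            f"median growth {np.median(lo):.2f} vs {np.median(hi):.2f}")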

  1. Muscle Weakness Thresholds for Prediction of Diabetes in Adults.

    Science.gov (United States)

    Peterson, Mark D; Zhang, Peng; Choksi, Palak; Markides, Kyriakos S; Al Snih, Soham

    2016-05-01

    Despite the known links between weakness and early mortality, what remains to be fully understood is the extent to which strength preservation is associated with protection from cardiometabolic diseases, such as diabetes. The purposes of this study were to determine the association between muscle strength and diabetes among adults, and to identify age- and sex-specific thresholds of low strength for detection of risk. A population-representative sample of 4066 individuals, aged 20-85 years, was included from the combined 2011-2012 National Health and Nutrition Examination Survey (NHANES) data sets. Strength was assessed using a handheld dynamometer, and the single highest reading from either hand was normalized to body mass. A logistic regression model was used to assess the association between normalized grip strength and risk of diabetes, as determined by haemoglobin A1c levels ≥6.5 % (≥48 mmol/mol), while controlling for sociodemographic characteristics, anthropometric measures and television viewing time. For every 0.05 decrement in normalized strength, there were 1.26 times increased adjusted odds for diabetes in men and women. Women were at lower odds of having diabetes (odds ratio 0.49; 95 % confidence interval 0.29-0.82). Age, waist circumference and lower income were also associated with diabetes. The optimal sex- and age-specific weakness thresholds to detect diabetes were 0.56, 0.50 and 0.45 for men at ages of 20-39, 40-59 and 60-80 years, respectively, and 0.42, 0.38 and 0.33 for women at ages of 20-39, 40-59 and 60-80 years, respectively. We present thresholds of strength that can be incorporated into a clinical setting for identifying adults who are at risk of developing diabetes and might benefit from lifestyle interventions to reduce risk.
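
    One common way to derive such weakness cut-points is to maximize the Youden index (sensitivity + specificity - 1) over candidate thresholds of normalized grip strength. The sketch below does this on synthetic data; it is not the NHANES analysis, and the published thresholds may rest on a different optimization criterion.

      # Sketch: pick a weakness cut-point for normalized grip strength (grip / body mass)
      # by maximizing the Youden index for detecting diabetes. Synthetic data.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 3000
      norm_grip = rng.normal(0.55, 0.12, n).clip(0.1, 1.2)
      diabetes = rng.random(n) < 1.0 / (1.0 + np.exp((norm_grip - 0.45) / 0.05))

      def youden(cutoff):
          weak = norm_grip < cutoff
          sens = np.sum(weak & diabetes) / max(np.sum(diabetes), 1)
          spec = np.sum(~weak & ~diabetes) / max(np.sum(~diabetes), 1)
          return sens + spec - 1.0

      grid = np.arange(0.25, 0.80, 0.01)
      best = grid[int(np.argmax([youden(c) for c in grid]))]
      print(f"weakness threshold ~{best:.2f} (grip strength / body mass)")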

  2. Effects of low strength pedaling exercise on stress sensitivity and pain threshold

    OpenAIRE

    坂野, 裕洋

    2017-01-01

     This study conducted a comparative assessment of the effects of low intensity lower limb pedaling exercise on the stress sensitivity and pain threshold in healthy subjects and those with chronic stiff neck or lower back pain. The results showed a reduction in pain threshold depending on the applied mechanical stress in both healthy and chronic pain groups. The individuals with chronic pain felt pain more intensely compared to the healthy individuals, and showed a significant reduction in pai...

  3. Alcoholic beverage strength discrimination by taste may have an upper threshold.

    Science.gov (United States)

    Lachenmeier, Dirk W; Kanteres, Fotis; Rehm, Jürgen

    2014-09-01

    Given the association between alcohol consumption and negative health consequences, there is a need for individuals to be aware of their consumption of ethanol, which requires knowledge of serving sizes and alcoholic strength. This study is one of the first to systematically investigate the ability to discriminate alcoholic strength by taste. Nine discrimination tests (total n = 413) according to International Standardization Organization (ISO) 4120 sensory analysis methodology "triangle test" were performed. A perceptible difference was found for vodka in orange juice (0.0 vs. 0.5% vol; 0 vs. 1% vol), pilsner and wheat beer (0.5 vs. 5% vol), and vodka in orange juice (5 vs. 10% vol, 20 vs. 30% vol, and 30 vs. 40% vol). The percentage of the population perceiving a difference between the beverages varied between 36 and 73%. Alcoholic strength (higher vs. lower) was correctly assigned in only 4 of the 7 trials at a significant level, with 30 to 66% of the trial groups assigning the correct strength. For the trials that included beverages above 40% vol (vodka unmixed, 40 vs. 50% vol and vodka in orange juice, 40 vs. 50% vol), testers could neither perceive a difference between the samples nor assign correct alcoholic strength. Discrimination of alcoholic strength by taste was possible to a limited degree in a window of intermediate alcoholic strengths, but not at higher concentrations. This result is especially relevant for drinkers of unlabeled, over-proof unrecorded alcoholic beverages who would potentially ingest more alcohol than if they were to ingest commercial alcohol. Our study provides strong evidence for the strict implementation and enforcement of labeling requirements for all alcoholic beverages to allow informed decision making by consumers. Copyright © 2014 by the Research Society on Alcoholism.

  4. Condensing Massive Satellite Datasets For Rapid Interactive Analysis

    Science.gov (United States)

    Grant, G.; Gallaher, D. W.; Lv, Q.; Campbell, G. G.; Fowler, C.; LIU, Q.; Chen, C.; Klucik, R.; McAllister, R. A.

    2015-12-01

    Our goal is to enable users to interactively analyze massive satellite datasets, identifying anomalous data or values that fall outside of thresholds. To achieve this, the project seeks to create a derived database containing only the most relevant information, accelerating the analysis process. The database is designed to be an ancillary tool for the researcher, not an archival database to replace the original data. This approach is aimed at improving performance by reducing the overall size by way of condensing the data. The primary challenges of the project include: - The nature of the research question(s) may not be known ahead of time. - The thresholds for determining anomalies may be uncertain. - Problems associated with processing cloudy, missing, or noisy satellite imagery. - The contents and method of creation of the condensed dataset must be easily explainable to users. The architecture of the database will reorganize spatially-oriented satellite imagery into temporally-oriented columns of data (a.k.a., "data rods") to facilitate time-series analysis. The database itself is an open-source parallel database, designed to make full use of clustered server technologies. A demonstration of the system capabilities will be shown. Applications for this technology include quick-look views of the data, as well as the potential for on-board satellite processing of essential information, with the goal of reducing data latency.
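
    The "data rods" reorganization can be sketched in a few lines of array manipulation: a spatially organized image stack (time, row, col) is transposed into per-pixel time series so that time-series and threshold queries touch contiguous memory. Array sizes and the threshold below are illustrative only.

      # Sketch of the "data rods" idea: one contiguous time series per pixel.
      import numpy as np

      n_time, n_rows, n_cols = 365, 180, 360
      stack = np.random.rand(n_time, n_rows, n_cols).astype(np.float32)   # daily grids

      # One rod per pixel: shape (n_rows * n_cols, n_time)
      rods = stack.reshape(n_time, -1).T.copy()              # copy() makes each rod contiguous

      # Example anomaly query: pixels whose values ever exceed a threshold
      threshold = 0.999
      anomalous_pixels = np.flatnonzero((rods > threshold).any(axis=1))
      print(f"{anomalous_pixels.size} pixels exceed {threshold} at least once")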

  5. Auditory-nerve single-neuron thresholds to electrical stimulation from scala tympani electrodes.

    Science.gov (United States)

    Parkins, C W; Colombo, J

    1987-12-31

    Single auditory-nerve neuron thresholds were studied in sensory-deafened squirrel monkeys to determine the effects of electrical stimulus shape and frequency on single-neuron thresholds. Frequency was separated into its components, pulse width and pulse rate, which were analyzed separately. Square and sinusoidal pulse shapes were compared. There were no, or only questionably significant, threshold differences in charge per phase between sinusoidal and square pulses of the same pulse width. There was a small (less than 0.5 dB) but significant threshold advantage for 200 microseconds/phase pulses delivered at low pulse rates (156 pps) compared to higher pulse rates (625 pps and 2500 pps). Pulse width was demonstrated to be the prime determinant of single-neuron threshold, resulting in strength-duration curves similar to other mammalian myelinated neurons, but with longer chronaxies. The most efficient electrical stimulus pulse width to use for cochlear implant stimulation was determined to be 100 microseconds/phase. This pulse width delivers the lowest charge/phase at threshold. The single-neuron strength-duration curves were compared to strength-duration curves of a computer model based on the specific anatomy of auditory-nerve neurons. The membrane capacitance and resulting chronaxie of the model can be varied by altering the length of the unmyelinated termination of the neuron, representing the unmyelinated portion of the neuron between the habenula perforata and the hair cell. This unmyelinated segment of the auditory-nerve neuron may be subject to aminoglycoside damage. Simulating a 10 micron unmyelinated termination for this model neuron produces a strength-duration curve that closely fits the single-neuron data obtained from aminoglycoside-deafened animals. Both the model and the single-neuron strength-duration curves differ significantly from behavioral threshold data obtained from monkeys and humans with cochlear implants. This discrepancy can best be explained by
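
    For reference, strength-duration data of this kind are often summarized by Lapicque's expression I_th(d) = I_rh * (1 + c / d), where I_rh is the rheobase and c the chronaxie. The sketch below fits that textbook form to synthetic threshold measurements; it is not the anatomy-based auditory-nerve model described in the abstract.

      # Sketch: fit a Lapicque strength-duration curve and report rheobase and chronaxie.
      import numpy as np
      from scipy.optimize import curve_fit

      def lapicque(duration_us, rheobase_ua, chronaxie_us):
          return rheobase_ua * (1.0 + chronaxie_us / duration_us)

      pulse_width_us = np.array([20, 50, 100, 200, 400, 800, 1600], dtype=float)
      rng = np.random.default_rng(5)
      threshold_ua = lapicque(pulse_width_us, 40.0, 350.0) * rng.normal(1.0, 0.03, pulse_width_us.size)

      (rheobase, chronaxie), _ = curve_fit(lapicque, pulse_width_us, threshold_ua, p0=(30.0, 200.0))
      charge_nc = lapicque(100.0, rheobase, chronaxie) * 100.0 / 1000.0   # uA * us = pC; /1000 -> nC
      print(f"rheobase ~{rheobase:.1f} uA, chronaxie ~{chronaxie:.0f} us, "
            f"threshold charge at 100 us ~{charge_nc:.1f} nC")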

  6. Can we set a global threshold age to define mature forests?

    Directory of Open Access Journals (Sweden)

    Philip Martin

    2016-02-01

    Full Text Available Globally, mature forests appear to be increasing in biomass density (BD). There is disagreement whether these increases are the result of increases in atmospheric CO2 concentrations or a legacy effect of previous land-use. Recently, it was suggested that a threshold of 450 years should be used to define mature forests and that many forests increasing in BD may be younger than this. However, the study making these suggestions failed to account for the interactions between forest age and climate. Here we revisit the issue to identify: (1) how climate and forest age control global forest BD and (2) whether we can set a threshold age for mature forests. Using data from previously published studies we modelled the impacts of forest age and climate on BD using linear mixed effects models. We examined the potential biases in the dataset by comparing how representative it was of global mature forests in terms of its distribution, the climate space it occupied, and the ages of the forests used. BD increased with forest age, mean annual temperature and annual precipitation. Importantly, the effect of forest age increased with increasing temperature, but the effect of precipitation decreased with increasing temperatures. The dataset was biased towards northern hemisphere forests in relatively dry, cold climates. The dataset was also clearly biased towards forests <250 years of age. Our analysis suggests that there is not a single threshold age for forest maturity. Since climate interacts with forest age to determine BD, a threshold age at which they reach equilibrium can only be determined locally. We caution against using BD as the only determinant of forest maturity since this ignores forest biodiversity and tree size structure which may take longer to recover. Future research should address the utility and cost-effectiveness of different methods for determining whether forests should be classified as mature.

  7. Development of a landslide EWS based on rainfall thresholds for Tuscany Region, Italy

    Science.gov (United States)

    Rosi, Ascanio; Segoni, Samuele; Battistini, Alessandro; Rossi, Guglielmo; Catani, Filippo; Casagli, Nicola

    2017-04-01

    We present the set-up of a landslide EWS based on rainfall thresholds for the Tuscany region (central Italy), which shows a heterogeneous distribution of reliefs and precipitation. The work started with the definition of a single set of thresholds for the whole region, but this proved unsuitable for EWS purposes because of the heterogeneity of the Tuscan territory and the non-repeatability of the analyses, which were affected by a high degree of subjectivity. To overcome this problem, we implemented a software tool capable of objectively defining the rainfall thresholds, since some of the main issues with these thresholds are the subjectivity of the analysis and therefore its non-repeatability. This software, named MaCumBA, is largely automated and can analyze, in a short time, a high number of rainfall events to define several parameters of the threshold, such as the intensity (I) and the duration (D) of the rainfall event, the no-rain time gap (NRG: how many hours without rain are needed to consider two events as separate) and the equation describing the threshold. The possibility of quickly performing several analyses led to the decision to divide the territory into 25 homogeneous areas (named alert zones, AZ), so that a single threshold could be defined for each AZ. For the definition of the thresholds, two independent datasets (of joint rainfall-landslide occurrences) were used: a calibration dataset (data from 2000 to 2007) and a validation dataset (2008-2009). Once the thresholds were defined, a WebGIS-based EWS was implemented. In this system it is possible to focus both on monitoring of real-time data and on forecasting at different lead times up to 48 h; forecasting data are collected from LAMI (Limited Area Model Italy) rainfall forecasts. The EWS works on the basis of the threshold parameters defined by MaCumBA (I, D, NRG). An important feature of the warning system is that the visualization of the thresholds in the Web

  8. Comparison of hand grip strength and upper limb pressure pain threshold between older adults with or without non-specific shoulder pain

    Directory of Open Access Journals (Sweden)

    Cesar Calvo Lobo

    2017-02-01

    Full Text Available Background There is a high prevalence of non-specific shoulder pain associated with upper limb functional limitations in older adults. The purpose of this study was to determine the minimal clinically important differences (MCID) of grip strength and pressure pain threshold (PPT) in the upper limb between older adults with or without non-specific shoulder pain. Methods A case-control study was carried out following the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) criteria. A sample of 132 shoulders from older adults with (n = 66; age, mean ± SD, 76.04 ± 7.58 years) and without (n = 66; 75.05 ± 6.26 years) non-specific pain was recruited. The grip strength and PPT of the anterior deltoid and extensor carpi radialis brevis (ECRB) muscles were assessed. Results There were statistically significant differences (mean ± SD; P-value) for anterior deltoid PPT (2.51 ± 0.69 vs 3.68 ± 0.65 kg/cm2; P < .001), ECRB PPT (2.20 ± 0.60 vs 3.35 ± 0.38 kg/cm2; P < .001) and grip strength (20.78 ± 10.94 vs 24.63 ± 9.38 kg; P = .032) between shoulders with and without non-specific pain, respectively. Discussion MCIDs of 1.17 kg/cm2, 1.15 kg/cm2 and 3.84 kg were proposed for anterior deltoid PPT, ECRB PPT and grip strength, respectively, to assess the upper limb of older adults with non-specific shoulder pain after treatment. In addition, univariate and multivariate (linear regression and regression trees) analyses may be used to consider age distribution, sex, pain intensity, grip strength and PPT in older adults, including clinical and epidemiological studies with non-specific shoulder pain.

  9. Influence of uncertain identification of triggering rainfall on the assessment of landslide early warning thresholds

    Science.gov (United States)

    Peres, David J.; Cancelliere, Antonino; Greco, Roberto; Bogaard, Thom A.

    2018-03-01

    Uncertainty in rainfall datasets and landslide inventories is known to have negative impacts on the assessment of landslide-triggering thresholds. In this paper, we perform a quantitative analysis of the impacts of uncertain knowledge of landslide initiation instants on the assessment of rainfall intensity-duration landslide early warning thresholds. The analysis is based on a synthetic database of rainfall and landslide information, generated by coupling a stochastic rainfall generator and a physically based hydrological and slope stability model, and is therefore error-free in terms of knowledge of triggering instants. This dataset is then perturbed according to hypothetical reporting scenarios that allow simulation of possible errors in landslide-triggering instants as retrieved from historical archives. The impact of these errors is analysed jointly using different criteria to single out rainfall events from a continuous series and two typical temporal aggregations of rainfall (hourly and daily). The analysis shows that the impacts of the above uncertainty sources can be significant, especially when errors exceed 1 day or the actual instants follow the erroneous ones. Errors generally lead to underestimated thresholds, i.e. lower than those that would be obtained from an error-free dataset. Potentially, the amount of the underestimation can be enough to induce an excessive number of false positives, hence limiting possible landslide mitigation benefits. Moreover, the uncertain knowledge of triggering rainfall limits the possibility to set up links between thresholds and physio-geographical factors.

  10. Influence of uncertain identification of triggering rainfall on the assessment of landslide early warning thresholds

    Directory of Open Access Journals (Sweden)

    D. J. Peres

    2018-03-01

    Full Text Available Uncertainty in rainfall datasets and landslide inventories is known to have negative impacts on the assessment of landslide-triggering thresholds. In this paper, we perform a quantitative analysis of the impacts of uncertain knowledge of landslide initiation instants on the assessment of rainfall intensity–duration landslide early warning thresholds. The analysis is based on a synthetic database of rainfall and landslide information, generated by coupling a stochastic rainfall generator and a physically based hydrological and slope stability model, and is therefore error-free in terms of knowledge of triggering instants. This dataset is then perturbed according to hypothetical reporting scenarios that allow simulation of possible errors in landslide-triggering instants as retrieved from historical archives. The impact of these errors is analysed jointly using different criteria to single out rainfall events from a continuous series and two typical temporal aggregations of rainfall (hourly and daily). The analysis shows that the impacts of the above uncertainty sources can be significant, especially when errors exceed 1 day or the actual instants follow the erroneous ones. Errors generally lead to underestimated thresholds, i.e. lower than those that would be obtained from an error-free dataset. Potentially, the amount of the underestimation can be enough to induce an excessive number of false positives, hence limiting possible landslide mitigation benefits. Moreover, the uncertain knowledge of triggering rainfall limits the possibility to set up links between thresholds and physio-geographical factors.

  11. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    Science.gov (United States)

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had been previously analyzed in great detail, but while taking a brute force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed more rapidly in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby the variables with F-ratio values below the threshold can be ignored as not class distinguishing, which provides the analyst with confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology while consistently excluding all but one of the benchmarked nineteen false positive metabolites previously identified. Copyright © 2016 Elsevier B.V. All rights reserved.
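
    The null-distribution thresholding step can be sketched as follows: compute a per-variable Fisher ratio, then permute the class labels many times and set the F-ratio threshold at, say, the 95th percentile of the permutation maxima. The code below uses synthetic two-class data as a stand-in for, not a reproduction of, the tile-based GC×GC-TOFMS workflow.

      # Sketch: per-variable Fisher ratios with a permutation-based null threshold.
      import numpy as np

      rng = np.random.default_rng(6)
      n_per_class, n_vars = 6, 500
      class_a = rng.normal(0.0, 1.0, (n_per_class, n_vars))
      class_b = rng.normal(0.0, 1.0, (n_per_class, n_vars))
      class_b[:, :25] += 3.0                                  # 25 truly class-distinguishing variables

      def fisher_ratio(a, b):
          grand = np.vstack([a, b]).mean(axis=0)
          between = len(a) * (a.mean(0) - grand) ** 2 + len(b) * (b.mean(0) - grand) ** 2
          within = ((a - a.mean(0)) ** 2).sum(0) + ((b - b.mean(0)) ** 2).sum(0)
          return between / (within / (len(a) + len(b) - 2))   # 2 classes -> between-class df = 1

      observed = fisher_ratio(class_a, class_b)

      pooled = np.vstack([class_a, class_b])
      null_max = []
      for _ in range(200):                                    # permute class labels
          idx = rng.permutation(len(pooled))
          null_max.append(fisher_ratio(pooled[idx[:n_per_class]], pooled[idx[n_per_class:]]).max())

      threshold = np.percentile(null_max, 95)
      print(f"F-ratio threshold = {threshold:.1f}; variables above it: {(observed > threshold).sum()}")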

  12. Thresholds of probable problematic gambling involvement for the German population: Results of the Pathological Gambling and Epidemiology (PAGE) Study.

    Science.gov (United States)

    Brosowski, Tim; Hayer, Tobias; Meyer, Gerhard; Rumpf, Hans-Jürgen; John, Ulrich; Bischof, Anja; Meyer, Christian

    2015-09-01

    Consumption measures in gambling research may help to establish thresholds of low-risk gambling as one part of evidence-based responsible gambling strategies. The aim of this study is to replicate existing Canadian thresholds of probable low-risk gambling (Currie et al., 2006) in a representative dataset of German gambling behavior (Pathological Gambling and Epidemiology [PAGE]; N = 15,023). Receiver-operating characteristic curves applied in a training dataset (60%) extracted robust thresholds of low-risk gambling across 4 nonexclusive definitions of gambling problems (1+ to 4+ Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition [DSM-5] Composite International Diagnostic Interview [CIDI] symptoms), different indicators of gambling involvement (across all game types; form-specific) and different timeframes (lifetime; last year). Logistic regressions applied in a test dataset (40%) to cross-validate the heuristics of probable low-risk gambling incorporated confounding covariates (age, gender, education, migration, and unemployment) and confirmed the strong concurrent validity of the thresholds. Moreover, it was possible to establish robust form-specific thresholds of low-risk gambling (only for gaming machines and poker). Possible implications for early detection of problem gamblers in offline or online environments are discussed. Results substantiate international knowledge about problem gambling prevention and contribute to a German discussion about empirically based guidelines of low-risk gambling. (c) 2015 APA, all rights reserved.

  13. SPICE: exploration and analysis of post-cytometric complex multivariate datasets.

    Science.gov (United States)

    Roederer, Mario; Nozzi, Joshua L; Nason, Martha C

    2011-02-01

    Polychromatic flow cytometry results in complex, multivariate datasets. To date, tools for the aggregate analysis of these datasets across multiple specimens grouped by different categorical variables, such as demographic information, have not been optimized. Often, the exploration of such datasets is accomplished by visualization of patterns with pie charts or bar charts, without easy access to statistical comparisons of measurements that comprise multiple components. Here we report on algorithms and a graphical interface we developed for these purposes. In particular, we discuss thresholding necessary for accurate representation of data in pie charts, the implications for display and comparison of normalized versus unnormalized data, and the effects of averaging when samples with significant background noise are present. Finally, we define a statistic for the nonparametric comparison of complex distributions to test for difference between groups of samples based on multi-component measurements. While originally developed to support the analysis of T cell functional profiles, these techniques are amenable to a broad range of datatypes. Published 2011 Wiley-Liss, Inc.

  14. Threshold model of cascades in empirical temporal networks

    Science.gov (United States)

    Karimi, Fariba; Holme, Petter

    2013-08-01

    Threshold models try to explain the consequences of social influence like the spread of fads and opinions. Along with models of epidemics, they constitute a major theoretical framework of social spreading processes. In threshold models on static networks, an individual changes her state if a certain fraction of her neighbors has done the same. When there are strong correlations in the temporal aspects of contact patterns, it is useful to represent the system as a temporal network. In such a system, not only contacts but also the time of the contacts are represented explicitly. In many cases, bursty temporal patterns slow down disease spreading. However, as we will see, this is not a universal truth for threshold models. In this work we propose an extension of Watts’s classic threshold model to temporal networks. We do this by assuming that an agent is influenced by contacts which lie a certain time into the past. I.e., the individuals are affected by contacts within a time window. In addition to thresholds in the fraction of contacts, we also investigate the number of contacts within the time window as a basis for influence. To elucidate the model’s behavior, we run the model on real and randomized empirical contact datasets.
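
    A minimal simulation of this windowed threshold rule might look like the sketch below (illustrative contact stream and parameters; fractional-threshold variant only): at every contact, a node adopts if the fraction of adopters among its contacts inside a sliding time window reaches its threshold.

      # Sketch: Watts-style threshold model on a temporal contact list with a time window.
      import random
      from collections import defaultdict, deque

      random.seed(7)
      n_nodes, n_contacts, window, phi = 50, 4000, 50.0, 0.3

      # (time, i, j) contact events, sorted by time
      contacts = sorted((random.uniform(0, 1000), random.randrange(n_nodes), random.randrange(n_nodes))
                        for _ in range(n_contacts))

      adopted = {0, 1}                              # seed adopters
      recent = defaultdict(deque)                   # node -> deque of (time, neighbour)

      for t, i, j in contacts:
          if i == j:
              continue
          for node, nbr in ((i, j), (j, i)):
              dq = recent[node]
              dq.append((t, nbr))
              while dq and dq[0][0] < t - window:   # drop contacts outside the window
                  dq.popleft()
              if node not in adopted:
                  nbrs = {m for _, m in dq}
                  if len(nbrs & adopted) / len(nbrs) >= phi:
                      adopted.add(node)

      print(f"{len(adopted)} of {n_nodes} nodes adopted")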

  15. Sensory and Motor Peripheral Nerve Function and Longitudinal Changes in Quadriceps Strength

    DEFF Research Database (Denmark)

    Ward, R. E.; Boudreau, R. M.; Caserotti, P.

    2015-01-01

    Background. Poor peripheral nerve function is common in older adults and may be a risk factor for strength decline, although this has not been assessed longitudinally. Methods. We assessed whether sensorimotor peripheral nerve function predicts strength longitudinally in 1,830 participants (age...... was assessed with 10-g and 1.4-g monofilaments and average vibration detection threshold at the toe. Lower-extremity neuropathy symptoms were self-reported. Results. Worse vibration detection threshold predicted 2.4% lower strength in men and worse motor amplitude and two symptoms predicted 2.5% and 8.1% lower...

  16. The strength of friendship ties in proximity sensor data.

    Directory of Open Access Journals (Sweden)

    Vedran Sekara

    Full Text Available Understanding how people interact and socialize is important in many contexts from disease control to urban planning. Datasets that capture this specific aspect of human life have increased in size and availability over the last few years. We have yet to understand, however, to what extent such electronic datasets may serve as a valid proxy for real life social interactions. For an observational dataset, gathered using mobile phones, we analyze the problem of identifying transient and non-important links, as well as how to highlight important social interactions. Applying the Bluetooth signal strength parameter to distinguish between observations, we demonstrate that weak links, compared to strong links, have a lower probability of being observed at later times, while such links, on average, also have lower link weights and a lower probability of sharing an online friendship. Further, the role of link strength is investigated in relation to social network properties.

  17. The strength of friendship ties in proximity sensor data.

    Science.gov (United States)

    Sekara, Vedran; Lehmann, Sune

    2014-01-01

    Understanding how people interact and socialize is important in many contexts from disease control to urban planning. Datasets that capture this specific aspect of human life have increased in size and availability over the last few years. We have yet to understand, however, to what extent such electronic datasets may serve as a valid proxy for real life social interactions. For an observational dataset, gathered using mobile phones, we analyze the problem of identifying transient and non-important links, as well as how to highlight important social interactions. Applying the Bluetooth signal strength parameter to distinguish between observations, we demonstrate that weak links, compared to strong links, have a lower probability of being observed at later times, while such links, on average, also have lower link weights and a lower probability of sharing an online friendship. Further, the role of link strength is investigated in relation to social network properties.

  18. Poles near the thresholds in the coupled ΛN - ΣN system

    International Nuclear Information System (INIS)

    Yamamura, H.; Miyagawa, K.

    1999-01-01

    We find t-matrix poles near the ΣN threshold for the meson theoretical Nijmegen YN interactions including hard-core models. These poles are connected with the strength of the ΛN - ΣN coupling. We also observe antibound-state poles below the ΛN threshold which correlate with scattering lengths. Refs. 4, tabs. 2 (author)

  19. Threshold values characterizing iodine-induced SCC of zircaloys

    International Nuclear Information System (INIS)

    Une, K.

    1981-01-01

    In this paper, threshold values of stress, stress intensity factor, strain, strain rate and iodine concentration for SCC of unirradiated and irradiated Zircaloys are reviewed. The ratio σ_th/σ_y adequately represents the effects of cold-work and irradiation on the SCC susceptibility, where the threshold stress σ_th is defined as the minimum stress that causes SCC failure after 10-20 hours and σ_y is the yield stress obtained in an inert atmosphere. The ratio becomes gradually smaller with larger σ_y and is less than 1 for materials with yield strengths above about 350 MPa. Plastic strain appears to be necessary for SCC; plastic strains to failure range from 0.1 to 1% for high strength materials, even when data for irradiated materials are included. Strain rate significantly affects the susceptibility. A comparison of SCC data between constant strain rate and constant stress tests is presented. (author)

  20. Geochemical Fingerprinting of Coltan Ores by Machine Learning on Uneven Datasets

    International Nuclear Information System (INIS)

    Savu-Krohn, Christian; Rantitsch, Gerd; Auer, Peter; Melcher, Frank; Graupner, Torsten

    2011-01-01

    Two modern machine learning techniques, Linear Programming Boosting (LPBoost) and Support Vector Machines (SVMs), are introduced and applied to a geochemical dataset of niobium–tantalum (“coltan”) ores from Central Africa to demonstrate how such information may be used to distinguish ore provenance, i.e., place of origin. The compositional data used include uni- and multivariate outliers, and the elemental distributions are not described by parametric frequency distribution functions. The “soft margin” techniques of LPBoost and SVMs can be applied to such data. Optimization of their learning parameters results in an average accuracy of up to c. 92%, if spot measurements are assessed to estimate the provenance of ore samples originating from two geographically defined source areas. A parameterized performance measure, together with common methods for its optimization, was evaluated to account for the presence of uneven datasets. Optimization of the classification function threshold improves the performance, as class importance is shifted towards one of those classes. For this dataset, the average performance of the SVMs is significantly better than that of LPBoost.
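
    A minimal sketch (not the authors' code or data) of the two ingredients named above, a soft-margin SVM on an uneven two-class dataset and a shifted classification-function threshold, using scikit-learn on synthetic "spot measurements"; all names and values here are illustrative assumptions:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import balanced_accuracy_score

        rng = np.random.default_rng(0)
        # synthetic spot measurements: 6 trace-element features, uneven class sizes
        X_a = rng.normal(0.0, 1.0, size=(300, 6))   # source area A (majority)
        X_b = rng.normal(1.0, 1.2, size=(60, 6))    # source area B (minority)
        X = np.vstack([X_a, X_b])
        y = np.array([0] * 300 + [1] * 60)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        # soft-margin SVM; class_weight compensates for the uneven dataset
        clf = SVC(kernel="rbf", C=10.0, gamma="scale",
                  class_weight="balanced").fit(X_tr, y_tr)

        # shift the classification-function threshold and keep the value that
        # maximises balanced accuracy on the held-out spots
        scores = clf.decision_function(X_te)
        best = max((balanced_accuracy_score(y_te, (scores > t).astype(int)), t)
                   for t in np.linspace(scores.min(), scores.max(), 101))
        print("best balanced accuracy %.2f at threshold %.2f" % best)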

  1. Threshold couplings of phase-conjugate mirrors with two interaction regions.

    Science.gov (United States)

    Belić, M; Petrović, M; Sandfuchs, O; Kaiser, F

    1998-03-01

    Using the grating-action method, we determine the threshold coupling strengths of three generic examples of phase-conjugate mirrors with two interaction regions: the cat conjugator, the mutually incoherent beam coupler, and the interconnected ring mirror.

  2. High-Damage-Threshold Pinhole for Glass Fusion Laser Applications

    International Nuclear Information System (INIS)

    Kurnit, N.A.; Letzring, S.A.; Johnson, R.P.

    1998-01-01

    We are investigating methods to fabricate high-damage-threshold spatial-filter pinholes that might not be susceptible to plasma closure for relatively high energies and long pulses. These are based on the observation that grazing-incidence reflection from glass can withstand in excess of 5 kJ/cm² (normal to the beam) without plasma formation. The high damage threshold results from both the cos θ spreading of the energy across the surface and the reflection of a large fraction of the energy from the surface, thereby greatly reducing the field strength within the medium.

  3. Comparison of Threshold Detection Methods for the Generalized Pareto Distribution (GPD): Application to the NOAA-NCDC Daily Rainfall Dataset

    Science.gov (United States)

    Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas

    2015-04-01

    One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: a) non-parametric methods that locate the changing point between extreme and non-extreme regions of the data, b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u for which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General
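
    As a hedged sketch of class c) above, the fragment below fits a GPD to the excesses over a rising sequence of candidate thresholds and keeps the lowest threshold whose fit is not rejected by a simple Kolmogorov-Smirnov check (synthetic rainfall, illustrative 5% significance level; a KS test with estimated parameters is only approximate and is not the specific GoF metric reviewed in the study):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        rain = rng.gamma(shape=0.8, scale=12.0, size=20000)   # synthetic daily totals (mm)

        candidates = np.percentile(rain, np.arange(70, 99))   # candidate thresholds u
        chosen = None
        for u in candidates:
            excess = rain[rain > u] - u
            if excess.size < 100:                 # too few excesses to fit reliably
                break
            c, loc, scale = stats.genpareto.fit(excess, floc=0.0)
            pval = stats.kstest(excess, "genpareto", args=(c, 0.0, scale)).pvalue
            if pval > 0.05:                       # GPD not rejected: accept this u
                chosen = u
                break
        print("selected threshold u =", chosen)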

  4. The study on the threshold strain of microvoid formation in TRIP steels during tensile deformation

    International Nuclear Information System (INIS)

    Wang Wurong; Guo Bimeng; Ji Yurong; He Changwei; Wei Xicheng

    2012-01-01

    Highlights: ► The tensile mechanical behaviors of TRIP steels were studied under high-rate deformation conditions. ► The threshold strain of microvoid formation was examined quantitatively. ► The effect of retained austenite in TRIP steels on suppressing microvoid formation during the tensile process is discussed. - Abstract: Transformation Induced Plasticity (TRIP) steels exhibit a better combination of strength and ductility than conventional high strength low alloy (HSLA) steels, and therefore receive considerable attention in the automotive industry. In this work, the tensile mechanical behaviors of TRIP-aided steels were studied under quasi-static and high strain rate conditions. The deformed specimens were observed by scanning electron microscopy (SEM) along the tensile axis. The threshold strain of microvoid formation was examined quantitatively according to the evolution of deformation. The results showed that the yield and tensile strengths of TRIP steels increase with the strain rate, whereas their elongations decrease. However, the threshold strain for TRIP steels at high strain rate is larger than that at low strain rate. Compared with the deformed microstructure and microvoids formed in the necking zone of dual phase (DP) steel, the progressive deformation-induced transformation of retained austenite in TRIP steels remarkably increases the threshold strain of microvoid formation and further postpones microvoid growth and coalescence.

  5. Picosecond Electric-Field-Induced Threshold Switching in Phase-Change Materials.

    Science.gov (United States)

    Zalden, Peter; Shu, Michael J; Chen, Frank; Wu, Xiaoxi; Zhu, Yi; Wen, Haidan; Johnston, Scott; Shen, Zhi-Xun; Landreman, Patrick; Brongersma, Mark; Fong, Scott W; Wong, H-S Philip; Sher, Meng-Ju; Jost, Peter; Kaes, Matthias; Salinga, Martin; von Hoegen, Alexander; Wuttig, Matthias; Lindenberg, Aaron M

    2016-08-05

    Many chalcogenide glasses undergo a breakdown in electronic resistance above a critical field strength. Known as threshold switching, this mechanism enables field-induced crystallization in emerging phase-change memory. Purely electronic as well as crystal-nucleation-assisted models have been employed to explain the electronic breakdown. Here, picosecond electric pulses are used to excite amorphous Ag₄In₃Sb₆₇Te₂₆. Field-dependent reversible changes in conductivity and pulse-driven crystallization are observed. The present results show that threshold switching can take place within the electric pulse on subpicosecond time scales, faster than crystals can nucleate. This supports purely electronic models of threshold switching and reveals potential applications as an ultrafast electronic switch.

  6. Characteristics of Omega-Optimized Portfolios at Different Levels of Threshold Returns

    Directory of Open Access Journals (Sweden)

    Renaldas Vilkancas

    2014-12-01

    Full Text Available There is little literature considering the effects that the loss-gain threshold used for dividing good and bad outcomes by all downside (upside) risk measures has on portfolio optimization and performance. The purpose of this study is to assess the performance of portfolios optimized with respect to the Omega function developed by Keating and Shadwick at different levels of the threshold return. The most common choices of threshold values used in various Omega studies are the risk-free rate, the average market return or simply a zero return, even though the inventors of this risk measure warn that "using the values of the Omega function at particular points can be critically misleading" and that "only the entire Omega function contains information on distribution". The obtained results demonstrate the importance of the selected threshold return for portfolio performance: higher levels of the threshold lead to an increase in portfolio returns, albeit at the expense of higher risk. In fact, within a certain threshold interval, Omega-optimized portfolios achieved the highest net return compared with all other portfolio optimization strategies on three different test datasets. However, beyond a certain limit, high threshold values actually start hurting portfolio performance; meta-heuristic optimizers will typically still produce a solution at any threshold level, but the obtained results would most likely be financially meaningless.
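
    For reference, a small sketch of the Omega function itself (the ratio of expected gains above the threshold to expected losses below it), evaluated on hypothetical monthly returns at a few threshold levels; this is not the study's optimizer or data:

        import numpy as np

        def omega(returns, threshold):
            """Ratio of average gains above the threshold to average losses below it."""
            gains = np.clip(returns - threshold, 0.0, None).mean()
            losses = np.clip(threshold - returns, 0.0, None).mean()
            return gains / losses

        rng = np.random.default_rng(2)
        monthly = rng.normal(0.006, 0.04, size=240)      # hypothetical monthly returns

        for theta in (0.0, 0.002, 0.005, 0.01):          # candidate threshold returns
            print(f"theta={theta:.3f}  Omega={omega(monthly, theta):.2f}")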

  7. Electric field and electron density thresholds for coherent auroral echo onset

    International Nuclear Information System (INIS)

    Kustov, A.V.; Uspensky, M.V.; Sofko, G.J.; Koehler, J.A.; Jones, G.O.L.; Williams, P.J.S.

    1993-01-01

    The authors study the electron density and electric field thresholds for the onset of coherent auroral echoes. They make use of the Polar Geophysical Institute 83 MHz auroral radar and the EISCAT facility in Scandinavia to obtain simultaneous plasma parameter information and coherent scatter observations. They observe an electron density threshold of roughly 2.5×10¹¹ m⁻³ for electric fields of 15-20 mV/m (near the Farley-Buneman instability threshold). For electric fields of 5-10 mV/m, echoes are not observed even at twice that electron density. Echo strength is observed to have other parametric dependences.

  8. Preferential access to emotion under attentional blink: evidence for threshold phenomenon

    Directory of Open Access Journals (Sweden)

    Szczepanowski Remigiusz

    2015-03-01

    Full Text Available The present study provides evidence that the activation strength produced by emotional stimuli must exceed a threshold level in order to be consciously perceived, contrary to the assumption of a continuous quality of representation. An analysis of receiver operating characteristics (ROCs) for attentional blink performance was used to distinguish between two models of emotion perception (continuous vs. threshold) by inspecting their different ROC shapes. Across all conditions, the results showed that performance in the attentional blink task was better described by the two-limbed ROC predicted by the Krantz threshold model than by the curvilinear ROC implied by signal-detection theory.

  9. Reactive Strength Index: A Poor Indicator of Reactive Strength?

    Science.gov (United States)

    Healy, Robin; Kenny, Ian; Harrison, Drew

    2017-11-28

    The primary aim was to assess the relationships between reactive strength measures and associated kinematic and kinetic performance variables achieved during drop jumps. A secondary aim was to highlight issues with the use of reactive strength measures as performance indicators. Twenty-eight national and international level sprinters, fourteen men and fourteen women, participated in this cross-sectional analysis. Athletes performed drop jumps from a 0.3 m box onto a force platform, with the dependent variables contact time (CT), landing time (TLand), push-off time (TPush), flight time (FT), jump height (JH), reactive strength index (RSI, calculated as JH / CT), reactive strength ratio (RSR, calculated as FT / CT) and vertical leg spring stiffness (Kvert) recorded. Pearson's correlation test found very high to near perfect relationships between RSI and RSR (r = 0.91 to 0.97), with mixed relationships found between RSI, RSR and the key performance variables (Men: r = -0.86 to -0.71 between RSI/RSR and CT, r = 0.80 to 0.92 between RSI/RSR and JH; Women: r = -0.85 to -0.56 between RSR and CT, r = 0.71 between RSI and JH). This study demonstrates that the method of assessing reactive strength (RSI versus RSR) may be influenced by the performance strategies adopted, i.e., whether an athlete achieves their best reactive strength scores via low CTs, high JHs or a combination. Coaches are advised to limit the variability in performance strategies by implementing upper and/or lower CT thresholds to accurately compare performances between individuals.
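
    A small sketch of the two measures as defined in the abstract, computed for one hypothetical drop-jump trial (all values are invented for illustration):

        # reactive strength measures from drop-jump timing data
        def reactive_strength_index(jump_height_m, contact_time_s):
            return jump_height_m / contact_time_s       # RSI = JH / CT

        def reactive_strength_ratio(flight_time_s, contact_time_s):
            return flight_time_s / contact_time_s       # RSR = FT / CT

        ct, ft = 0.16, 0.52                 # hypothetical contact and flight times (s)
        jh = 9.81 * ft ** 2 / 8.0           # jump height estimated from flight time (m)
        print("RSI:", round(reactive_strength_index(jh, ct), 2))
        print("RSR:", round(reactive_strength_ratio(ft, ct), 2))
        # a contact-time ceiling (e.g. CT <= 0.25 s, an illustrative cut-off) could be
        # applied before comparing athletes, as the authors suggest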

  10. Creep behavior and threshold stress of an extruded Al-6Mg-2Sc-1Zr alloy

    International Nuclear Information System (INIS)

    Deshmukh, S.P.; Mishra, R.S.; Kendig, K.L.

    2004-01-01

    Creep experiments were performed on extruded Al-6Mg-2Sc-1Zr (wt.%) alloy in a temperature range of 423-533 K. A threshold type creep behavior was measured and explained by observed dislocation-particle interactions. The experimental threshold stress values at various temperatures were compared with existing theoretical models. None of the available models could account for the decrease in threshold creep strength with increasing temperature

  11. A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events

    Science.gov (United States)

    Kholodovsky, V.

    2017-12-01

    Extreme weather and climate events such as heavy precipitation, heat waves and strong winds can cause extensive damage to society in terms of human lives and financial losses. As the climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often independently used to model those phenomena. To better assess the performance of climate models, a variety of spatial forecast verification methods have been developed. However, the spatial verification metrics that are widely used in comparing mean states, in most cases, do not have an adequate theoretical justification to benchmark extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: 1) dimension reduction; 2) geometric domain mapping; and 3) threshold clustering. We apply this approach to an observed precipitation dataset over CONUS. The results are evaluated by displaying the threshold distribution seasonally, monthly and annually. The method offers the user the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high-threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.

  12. Efficient computational model for classification of protein localization images using Extended Threshold Adjacency Statistics and Support Vector Machines.

    Science.gov (United States)

    Tahir, Muhammad; Jan, Bismillah; Hayat, Maqsood; Shah, Shakir Ullah; Amin, Muhammad

    2018-04-01

    Discriminative and informative feature extraction is the core requirement for accurate and efficient classification of protein subcellular localization images so that drug development could be more effective. The objective of this paper is to propose a novel modification in the Threshold Adjacency Statistics technique and enhance its discriminative power. In this work, we utilized Threshold Adjacency Statistics from a novel perspective to enhance its discrimination power and efficiency. In this connection, we utilized seven threshold ranges to produce seven distinct feature spaces, which are then used to train seven SVMs. The final prediction is obtained through the majority voting scheme. The proposed ETAS-SubLoc system is tested on two benchmark datasets using 5-fold cross-validation technique. We observed that our proposed novel utilization of TAS technique has improved the discriminative power of the classifier. The ETAS-SubLoc system has achieved 99.2% accuracy, 99.3% sensitivity and 99.1% specificity for Endogenous dataset outperforming the classical Threshold Adjacency Statistics technique. Similarly, 91.8% accuracy, 96.3% sensitivity and 91.6% specificity values are achieved for Transfected dataset. Simulation results validated the effectiveness of ETAS-SubLoc that provides superior prediction performance compared to the existing technique. The proposed methodology aims at providing support to pharmaceutical industry as well as research community towards better drug designing and innovation in the fields of bioinformatics and computational biology. The implementation code for replicating the experiments presented in this paper is available at: https://drive.google.com/file/d/0B7IyGPObWbSqRTRMcXI2bG5CZWs/view?usp=sharing. Copyright © 2018 Elsevier B.V. All rights reserved.
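
    The ensemble logic described above (several threshold settings, one SVM per resulting feature space, majority vote) can be sketched as follows; the threshold-adjacency feature extractor is stubbed with a toy statistic and the images and labels are synthetic, so this is not the ETAS-SubLoc implementation:

        import numpy as np
        from sklearn.svm import SVC

        def stub_threshold_features(images, lo, hi):
            """Placeholder for Threshold Adjacency Statistics over one threshold range."""
            binary = ((images > lo) & (images < hi)).astype(np.uint8)
            return np.stack([binary.mean(axis=(1, 2)),
                             np.abs(np.diff(binary, axis=2)).mean(axis=(1, 2))], axis=1)

        rng = np.random.default_rng(3)
        imgs = rng.random((200, 32, 32))
        labels = (imgs.mean(axis=(1, 2)) > 0.5).astype(int)    # toy "localization" labels

        ranges = [(0.1 * k, 0.1 * k + 0.4) for k in range(7)]  # seven threshold ranges
        models = []
        for lo, hi in ranges:
            X = stub_threshold_features(imgs, lo, hi)
            models.append((lo, hi, SVC(kernel="rbf", gamma="scale").fit(X, labels)))

        def predict(images):
            votes = np.stack([m.predict(stub_threshold_features(images, lo, hi))
                              for lo, hi, m in models])
            return (votes.mean(axis=0) >= 0.5).astype(int)     # majority voting

        print("training agreement:", (predict(imgs) == labels).mean())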

  13. Estimates of pitch strength for musicians and nonmusicians

    Science.gov (United States)

    Clarkson, Marsha G.; Zettler, Cynthia M.; Follmer, Michelle J.; Faulk, Margaret; Takagi, Michael J.

    2003-04-01

    To measure the strength of the pitch of iterated rippled noise (IRN), 19 adults were tested in an operant conditioning procedure. Seven adults had music training and currently played an instrument; 12 adults had no training and did not currently play an instrument. To generate IRN, a 500-ms Gaussian noise stimulus was delayed by 5 or 6 ms (pitches of 200 or 166 Hz) and added to the original for 16 iterations. IRN stimuli having one delay were presented repeatedly. On signal trials the delay changed for 6 s. Stimulus level roved from 63-67 dBA (background of 28 dBA). Adults learned to press a button when the stimulus changed. Testing started with IRN stimuli having 0-dB attenuation (i.e., maximal pitch strength). Stimuli having weaker pitches (i.e., progressively greater attenuation applied to the delayed noise) followed. Strength of pitch was quantified as the maximum attenuation for which pitch was discerned. For each subject, threshold attenuation for pitch strength was extrapolated as the 71% point on a psychometric function depicting percent correct performance as a function of attenuation. Mean thresholds revealed that the pitch percept was similar for both nonmusically trained (18.70 dB) and musically trained adults (18.73 dB).
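
    A sketch of the threshold-extrapolation step, here with a logistic psychometric function fitted to made-up percent-correct scores and inverted at the 71% point (the study's actual fitting procedure may differ):

        import numpy as np
        from scipy.optimize import curve_fit

        attenuation_db = np.array([0, 5, 10, 15, 20, 25, 30])
        pct_correct = np.array([0.98, 0.97, 0.93, 0.85, 0.72, 0.58, 0.52])

        def psychometric(x, x50, slope):
            # falls from ~1.0 towards chance (0.5) as attenuation increases
            return 0.5 + 0.5 / (1.0 + np.exp((x - x50) / slope))

        (x50, slope), _ = curve_fit(psychometric, attenuation_db, pct_correct,
                                    p0=(20.0, 5.0))
        # invert the fitted curve at 71% correct
        threshold = x50 + slope * np.log(0.5 / (0.71 - 0.5) - 1.0)
        print("71% threshold attenuation: %.1f dB" % threshold)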

  14. Threshold Studies of the Microwave Instability in Electron Storage Rings

    International Nuclear Information System (INIS)

    Bane, Karl

    2010-01-01

    We use a Vlasov-Fokker-Planck program and a linearized Vlasov solver to study the microwave instability threshold of impedance models: (1) a Q = 1 resonator and (2) shielded coherent synchrotron radiation (CSR), and find the results of the two programs agree well. For shielded CSR we show that only two dimensionless parameters, the shielding parameter Π and the strength parameter S_csr, are needed to describe the system. We further show that there is a strong instability associated with CSR, and that the threshold, to good approximation, is given by (S_csr)th = 0.5 + 0.12Π. In particular, this means that shielding has little effect in stabilizing the beam for Π ∼ -3/2. We, in addition, find another instability in the vicinity of Π = 0.7 with a lower threshold, (S_csr)th ∼ 0.2. We find that the threshold of this instability depends strongly on damping time, (S_csr)th ∼ τ_p^(-1/2), and that the tune spread at threshold is small; both are hallmarks of a weak instability.

  15. Social Thresholds and their Translation into Social-ecological Management Practices

    Directory of Open Access Journals (Sweden)

    Lisa Christensen

    2012-03-01

    Full Text Available The objective of this paper is to provide a preliminary discussion of how to improve our conceptualization of social thresholds using (1) a more sociological analysis of social resilience, and (2) results from research carried out in collaboration with the Champagne and Aishihik First Nations of the Yukon Territory, Canada. Our sociological analysis of the concept of resilience begins with a review of the literature, followed by placement of the concept in the domain of sociological theory to gain insight into its strengths and limitations. A new notion of social thresholds is proposed and case study research discussed to support the proposition. Our findings suggest that, rather than viewing social thresholds as breakpoints between two regimes, as thresholds are typically conceived in the resilience literature, they be viewed in terms of collectively recognized points that signify new experiences. Some examples of thresholds identified in our case study include power in decision making, level of healing from historical events, and a preference for small-scale development over large capital-intensive projects.

  16. Determining the precipitable water vapor thresholds under different rainfall strengths in Taiwan

    Science.gov (United States)

    Yeh, Ta-Kang; Shih, Hsuan-Chang; Wang, Chuan-Sheng; Choy, Suelynn; Chen, Chieh-Hung; Hong, Jing-Shan

    2018-02-01

    Precipitable Water Vapor (PWV) plays an important role in weather forecasting. It is helpful for evaluating changes in a weather system via the distribution of water vapor. The ability to calculate PWV from Global Positioning System (GPS) signals is useful for understanding particular weather phenomena. In this study, 95 ground-based GPS and rainfall stations in Taiwan were utilized from 2006 to 2012 to analyze the relationship between PWV and rainfall. The PWV data were classified into four classes (no, light, moderate and heavy rainfall), the vertical gradients of the PWV were obtained, and the variations of the PWV were analyzed. The results indicated that as the GPS elevation increased by 100 m, the PWV values decreased by 9.5 mm, 11.0 mm, 12.2 mm and 12.3 mm under the no, light, moderate and heavy rainfall conditions, respectively. After correction using the vertical gradients mentioned above, the average PWV thresholds were 41.8 mm, 52.9 mm, 62.5 mm and 64.4 mm under the no, light, moderate and heavy rainfall conditions, respectively. This study offers another type of empirical threshold to assist rainfall prediction and can be used to distinguish rainfall features between different areas of Taiwan.
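
    A hedged sketch of how the quoted gradients and thresholds might be applied in practice: correct a station's PWV to a common reference height with the regime-specific gradient, then compare it against the corresponding threshold (the station height and PWV value are invented):

        # gradients (mm per 100 m) and thresholds (mm) quoted in the abstract
        GRADIENT_MM_PER_100M = {"no": 9.5, "light": 11.0, "moderate": 12.2, "heavy": 12.3}
        THRESHOLD_MM = {"no": 41.8, "light": 52.9, "moderate": 62.5, "heavy": 64.4}

        def reduce_to_reference(pwv_mm, station_height_m, regime, reference_height_m=0.0):
            """Correct a GPS PWV value to a common reference height for one rain regime."""
            dh_100m = (station_height_m - reference_height_m) / 100.0
            return pwv_mm + GRADIENT_MM_PER_100M[regime] * dh_100m

        pwv_obs, height = 48.0, 120.0                    # hypothetical station
        for regime in ("light", "moderate", "heavy"):
            corrected = reduce_to_reference(pwv_obs, height, regime)
            flag = "exceeds" if corrected >= THRESHOLD_MM[regime] else "is below"
            print(f"{regime:>8}: corrected PWV {corrected:.1f} mm {flag} the "
                  f"{THRESHOLD_MM[regime]} mm threshold")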

  17. Tensile rock mass strength estimated using InSAR

    KAUST Repository

    Jonsson, Sigurjon

    2012-11-01

    The large-scale strength of rock is known to be lower than the strength determined from small-scale samples in the laboratory. However, it is not well known how strength scales with sample size. I estimate kilometer-scale tensional rock mass strength by measuring offsets across new tensional fractures (joints), formed above a shallow magmatic dike intrusion in western Arabia in 2009. I use satellite radar observations to derive 3D ground displacements and by quantifying the extension accommodated by the joints and the maximum extension that did not result in a fracture, I put bounds on the joint initiation threshold of the surface rocks. The results indicate that the kilometer-scale tensile strength of the granitic rock mass is 1–3 MPa, almost an order of magnitude lower than typical laboratory values.

  19. Collision strengths and oscillator strengths for excitation to the n = 3 and 4 levels of neon-like ions

    International Nuclear Information System (INIS)

    Zhang, H.; Sampson, D.H.; Clark, R.E.H.; Mann, J.B.

    1987-01-01

    Collision strengths are given for the 88 possible fine-structure transitions between the ground level and the n = 3 and 4 levels in 20 neon-like ions with nuclear charge number Z in the range 18 ≤ Z ≤ 74. The results are given for the nine impact-electron energies in threshold units X = 1.0, 1.2, 1.5, 1.9, 2.5, 4.0, 6.0, 10.0, and 15.0. In addition, electric dipole oscillator strengths obtained by various methods are given. copyright 1987 Academic Press, Inc

  20. The Crack Initiation and Propagation in threshold regime and S-N curves of High Strength Spring Steels

    International Nuclear Information System (INIS)

    Gubeljak, N; Predan, J; Senčič, B; Chapetti, M D

    2016-01-01

    An integrated fracture mechanics approach is proposed for estimating the fatigue resistance of a component. Applications and estimations showed very good agreement with experimental results. The model is simple to apply, accounts for the main geometrical, mechanical and material parameters that define the fatigue resistance, and allows accurate predictions. It offers a change in design philosophy: it can be used for design while simultaneously dealing with crack propagation thresholds. Furthermore, it allows quantification of the material's defect sensitivity. For the set of fatigue tests carried out by rotational bending of specimens without residual stresses, the estimated results showed good agreement and indicated that an initial crack length of 0.5 mm can conservatively explain the experimental data. For the fatigue tests carried out on the springs in their final condition, with bending at R = 0.1, the data show the influence of compressive residual stresses on fatigue strength. The results also showed that the procedure allows us to analyze different combinations of initial crack length and residual stress level, and how much the fatigue resistance changes with that configuration. For this set of tests, the fatigue resistance estimated for an initial crack length of 0.35 mm can explain all the testing data observed for the springs. (paper)

  1. Investigation of excimer laser ablation threshold of polymers using a microphone

    Energy Technology Data Exchange (ETDEWEB)

    Krueger, Joerg; Niino, Hiroyuki; Yabe, Akira

    2002-09-30

    KrF excimer laser ablation of polyethylene terephthalate (PET), polyimide (PI) and polycarbonate (PC) in air was studied by an in situ monitoring technique using a microphone. The microphone signal generated by a short acoustic pulse represented the etch rate of laser ablation as a function of the laser fluence, i.e., the ablation 'strength'. From a linear relationship between the microphone output voltage and the laser fluence, the ablation thresholds were found to be 30 mJ cm⁻² for PET, 37 mJ cm⁻² for PI (single-pulse thresholds) and 51 mJ cm⁻² for PC (20-pulse threshold). The ablation thresholds of PET and PI were not influenced by the number of pulses per spot, while PC showed an incubation phenomenon. The microphone technique provides a simple method to determine the excimer laser ablation threshold of polymer films.

  2. Establishing a threshold for the number of missing days using 7 d pedometer data.

    Science.gov (United States)

    Kang, Minsoo; Hart, Peter D; Kim, Youngdeok

    2012-11-01

    The purpose of this study was to examine the threshold for the number of missing days that can be recovered using the individual information (II)-centered approach. Data for this study came from 86 participants, aged 17 to 79 years, who had 7 consecutive days of complete pedometer (Yamax SW-200) wear. Missing datasets (1 d through 5 d missing) were created by a SAS random process 10,000 times each. All missing values were replaced using the II-centered approach. A 7 d average was calculated for each dataset, including the complete dataset. Repeated-measures ANOVA was used to determine the differences between the 1 d through 5 d missing datasets and the complete dataset. Mean absolute percentage error (MAPE) was also computed. The mean (SD) daily step count for the complete 7 d dataset was 7979 (3084). Mean (SD) values for the 1 d through 5 d missing datasets were 8072 (3218), 8066 (3109), 7968 (3273), 7741 (3050) and 8314 (3529), respectively (p > 0.05). The lowest MAPEs were estimated for 1 d missing (5.2%, 95% confidence interval (CI) 4.4-6.0) and 2 d missing (8.4%, 95% CI 7.0-9.8), while all others were greater than 10%. The results of this study show that the 1 d through 5 d missing datasets, with replaced values, were not significantly different from the complete dataset. Based on the MAPE results, it is not recommended to replace more than two days of missing step counts.
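
    A simplified sketch of the evaluation logic (the replacement here is a plain individual-mean substitution, not the II-centered approach itself): delete days at random from complete 7-day records, replace them, and compute the MAPE of the resulting weekly means:

        import numpy as np

        rng = np.random.default_rng(4)
        steps = rng.normal(7979, 3084, size=(86, 7)).clip(min=1000)   # synthetic cohort
        true_weekly_mean = steps.mean(axis=1)

        def mape_for_missing(n_missing, n_rep=200):
            errs = []
            for _ in range(n_rep):
                data = steps.copy()
                for i in range(data.shape[0]):
                    drop = rng.choice(7, size=n_missing, replace=False)
                    keep = np.delete(data[i], drop)
                    data[i, drop] = keep.mean()     # simple individual-mean replacement
                est = data.mean(axis=1)
                errs.append(np.abs(est - true_weekly_mean) / true_weekly_mean * 100)
            return np.mean(errs)

        for m in range(1, 6):
            print(f"{m} missing day(s): MAPE of weekly mean ~ {mape_for_missing(m):.1f}%")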

  3. EPA Nanorelease Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — EPA Nanorelease Dataset. This dataset is associated with the following publication: Wohlleben, W., C. Kingston, J. Carter, E. Sahle-Demessie, S. Vazquez-Campos, B....

  4. CLARA-A1: a cloud, albedo, and radiation dataset from 28 yr of global AVHRR data

    Directory of Open Access Journals (Sweden)

    K.-G. Karlsson

    2013-05-01

    Full Text Available A new satellite-derived climate dataset – denoted CLARA-A1 ("The CM SAF cLoud, Albedo and RAdiation dataset from AVHRR data") – is described. The dataset covers the 28 yr period from 1982 until 2009 and consists of cloud, surface albedo, and radiation budget products derived from the AVHRR (Advanced Very High Resolution Radiometer) sensor carried by polar-orbiting operational meteorological satellites. Its content, anticipated accuracies, limitations, and potential applications are described. The dataset is produced by the EUMETSAT Climate Monitoring Satellite Application Facility (CM SAF) project. The dataset has its strengths in the long duration, its foundation upon a homogenized AVHRR radiance data record, and in some unique features, e.g. the availability of 28 yr of summer surface albedo and cloudiness parameters over the polar regions. Quality characteristics are also well investigated and particularly useful results can be found over the tropics, mid to high latitudes and over nearly all oceanic areas. Being the first CM SAF dataset of its kind, an intensive evaluation of the quality of the datasets was performed and major findings with regard to merits and shortcomings of the datasets are reported. However, the CM SAF's long-term commitment to perform two additional reprocessing events within the time frame 2013–2018 will allow proper handling of limitations as well as upgrading of the dataset with new features (e.g. uncertainty estimates) and extension of the temporal coverage.

  5. Enhanced radiative strength in the quasicontinuum of ¹¹⁷Sn.

    Science.gov (United States)

    Agvaanluvsan, U; Larsen, A C; Chankova, R; Guttormsen, M; Mitchell, G E; Schiller, A; Siem, S; Voinov, A

    2009-04-24

    The radiative strength function of ¹¹⁷Sn has been measured up to the neutron separation energy using the (³He, ³He′γ) reaction. An increase in the slope of the strength function around Eγ = 4.5 MeV indicates the onset of a resonancelike structure, giving a significant enhancement of the radiative strength function compared to standard models in the energy region between 4.5 MeV and the neutron threshold in the quasicontinuum region.

  6. Soil chemistry in lithologically diverse datasets: the quartz dilution effect

    Science.gov (United States)

    Bern, Carleton R.

    2009-01-01

    National- and continental-scale soil geochemical datasets are likely to move our understanding of broad soil geochemistry patterns forward significantly. Patterns of chemistry and mineralogy delineated from these datasets are strongly influenced by the composition of the soil parent material, which itself is largely a function of lithology and particle size sorting. Such controls present a challenge by obscuring subtler patterns arising from subsequent pedogenic processes. Here the effect of quartz concentration is examined in moist-climate soils from a pilot dataset of the North American Soil Geochemical Landscapes Project. Due to variable and high quartz contents (6.2–81.7 wt.%), and its residual and inert nature in soil, quartz is demonstrated to influence broad patterns in soil chemistry. A dilution effect is observed whereby concentrations of various elements are significantly and strongly negatively correlated with quartz. Quartz content drives artificial positive correlations between concentrations of some elements and obscures negative correlations between others. Unadjusted soil data show the highly mobile base cations Ca, Mg, and Na to be often strongly positively correlated with intermediately mobile Al or Fe, and generally uncorrelated with the relatively immobile high-field-strength elements (HFS) Ti and Nb. Both patterns are contrary to broad expectations for soils being weathered and leached. After transforming bulk soil chemistry to a quartz-free basis, the base cations are generally uncorrelated with Al and Fe, and negative correlations generally emerge with the HFS elements. Quartz-free element data may be a useful tool for elucidating patterns of weathering or parent-material chemistry in large soil datasets.
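
    The renormalisation itself is simple; a sketch with invented concentrations shows how strongly quartz content rescales the non-quartz matrix:

        def quartz_free(concentration, quartz_wt_pct):
            """Re-express a bulk concentration on a quartz-free basis."""
            return concentration / (1.0 - quartz_wt_pct / 100.0)

        # hypothetical soils: quartz in wt.%, Ca and Ti in mg/kg
        samples = [
            {"quartz": 75.0, "Ca": 4000.0, "Ti": 2100.0},
            {"quartz": 20.0, "Ca": 9500.0, "Ti": 4400.0},
        ]
        for s in samples:
            ca_qf = quartz_free(s["Ca"], s["quartz"])
            ti_qf = quartz_free(s["Ti"], s["quartz"])
            print(f"quartz {s['quartz']:.0f} wt.%: Ca {s['Ca']:.0f} -> {ca_qf:.0f}, "
                  f"Ti {s['Ti']:.0f} -> {ti_qf:.0f} (quartz-free)")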

  7. Validity and reliability of stillbirth data using linked self-reported and administrative datasets.

    Science.gov (United States)

    Hure, Alexis J; Chojenta, Catherine L; Powers, Jennifer R; Byles, Julie E; Loxton, Deborah

    2015-01-01

    A high rate of stillbirth was previously observed in the Australian Longitudinal Study of Women's Health (ALSWH). Our primary objective was to test the validity and reliability of self-reported stillbirth data linked to state-based administrative datasets. Self-reported data, collected as part of the ALSWH cohort born in 1973-1978, were linked to three administrative datasets for women in New South Wales, Australia (n = 4374): the Midwives Data Collection; Admitted Patient Data Collection; and Perinatal Death Review Database. Linkages were obtained from the Centre for Health Record Linkage for the period 1996-2009. True cases of stillbirth were defined by being consistently recorded in two or more independent data sources. Sensitivity, specificity, positive predictive value, negative predictive value, percent agreement, and kappa statistics were calculated for each dataset. Forty-nine women reported 53 stillbirths. No dataset was 100% accurate. The administrative datasets performed better than self-reported data, with high accuracy and agreement. Self-reported data showed high sensitivity (100%) but low specificity (30%), meaning women who had a stillbirth always reported it, but there was also over-reporting of stillbirths. About half of the misreported cases in the ALSWH were able to be removed by identifying inconsistencies in longitudinal data. Data linkage provides great opportunity to assess the validity and reliability of self-reported study data. Conversely, self-reported study data can help to resolve inconsistencies in administrative datasets. Quantifying the strengths and limitations of both self-reported and administrative data can improve epidemiological research, especially by guiding methods and interpretation of findings.

  8. Mammogram segmentation using maximal cell strength updation in cellular automata.

    Science.gov (United States)

    Anitha, J; Peter, J Dinesh

    2015-08-01

    Breast cancer is the most frequently diagnosed type of cancer among women. Mammography is one of the most effective tools for early detection of breast cancer. Various computer-aided systems have been introduced to detect breast cancer from mammogram images. In a computer-aided diagnosis system, detection and segmentation of breast masses from the background tissue is an important issue. In this paper, an automatic segmentation method is proposed to identify and segment the suspicious mass regions of a mammogram using a modified transition rule named maximal cell strength updation in cellular automata (CA). In coarse-level segmentation, the proposed method performs adaptive global thresholding based on histogram peak analysis to obtain a rough region of interest. An automatic seed point selection is proposed using a gray-level co-occurrence matrix-based sum average feature in the coarse segmented image. Finally, the method utilizes CA with the identified initial seed point and the modified transition rule to segment the mass region. The proposed approach is evaluated on a dataset of 70 mammograms with masses from the mini-MIAS database. Experimental results show that the proposed approach yields promising results for segmenting the mass region in mammograms, with a sensitivity of 92.25% and an accuracy of 93.48%.

  9. The Lunar Source Disk: Old Lunar Datasets on a New CD-ROM

    Science.gov (United States)

    Hiesinger, H.

    1998-01-01

    A compilation of previously published datasets on CD-ROM is presented. This Lunar Source Disk is intended to be a first step in the improvement/expansion of the Lunar Consortium Disk, in order to create an "image-cube"-like data pool that can be easily accessed and might be useful for a variety of future lunar investigations. All datasets were transformed to a standard map projection that allows direct comparison of different types of information on a pixel-by-pixel basis. Lunar observations have a long history and have been important to mankind for centuries, notably since the work of Plutarch and Galileo. As a consequence of centuries of lunar investigations, knowledge of the characteristics and properties of the Moon has accumulated over time. However, a side effect of this accumulation is that it has become more and more complicated for scientists to review all the datasets obtained through different techniques, to interpret them properly, to recognize their weaknesses and strengths in detail, and to combine them synoptically in geologic interpretations. Such synoptic geologic interpretations are crucial for the study of planetary bodies through remote-sensing data in order to avoid misinterpretation. In addition, many of the modern datasets, derived from Earth-based telescopes as well as from spacecraft missions, are acquired at different geometric and radiometric conditions. These differences make it challenging to compare or combine datasets directly or to extract information from different datasets on a pixel-by-pixel basis. Also, as there is no convention for the presentation of lunar datasets, different authors choose different map projections, depending on the location of the investigated areas and their personal interests. Insufficient or incomplete information on the map parameters used by different authors further complicates the reprojection of these datasets to a standard geometry. The goal of our efforts was to transfer previously published lunar

  10. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    The datasets presented in this article are related to the research articles entitled “Neutrophil Extracellular Traps in Ulcerative Colitis: A Proteome Analysis of Intestinal Biopsies” (Bennike et al., 2015 [1]), and “Proteome Analysis of Rheumatoid Arthritis Gut Mucosa” (Bennike et al., 2017 [2]). The datasets have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples.

  11. A derivation of the stable cavitation threshold accounting for bubble-bubble interactions.

    Science.gov (United States)

    Guédra, Matthieu; Cornu, Corentin; Inserra, Claude

    2017-09-01

    The subharmonic emission of sound coming from the nonlinear response of a bubble population is the most used indicator for stable cavitation. When driven at twice their resonance frequency, bubbles can exhibit subharmonic spherical oscillations if the acoustic pressure amplitude exceeds a threshold value. Although various theoretical derivations exist for the subharmonic emission by free or coated bubbles, they all rest on the single bubble model. In this paper, we propose an analytical expression of the subharmonic threshold for interacting bubbles in a homogeneous, monodisperse cloud. This theory predicts a shift of the subharmonic resonance frequency and a decrease of the corresponding pressure threshold due to the interactions. For a given sonication frequency, these results show that an optimal value of the interaction strength (i.e. the number density of bubbles) can be found for which the subharmonic threshold is minimum, which is consistent with recently published experiments conducted on ultrasound contrast agents. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Neurocomputational account of memory and perception: Thresholded and graded signals in the hippocampus.

    Science.gov (United States)

    Elfman, Kane W; Aly, Mariam; Yonelinas, Andrew P

    2014-12-01

    Recent evidence suggests that the hippocampus, a region critical for long-term memory, also supports certain forms of high-level visual perception. A seemingly paradoxical finding is that, unlike the thresholded hippocampal signals associated with memory, the hippocampus produces graded, strength-based signals in perception. This article tests a neurocomputational model of the hippocampus, based on the complementary learning systems framework, to determine if the same model can account for both memory and perception, and whether it produces the appropriate thresholded and strength-based signals in these two types of tasks. The simulations showed that the hippocampus, and most prominently the CA1 subfield, produced graded signals when required to discriminate between highly similar stimuli in a perception task, but generated thresholded patterns of activity in recognition memory. A threshold was observed in recognition memory because pattern completion occurred for only some trials and completely failed to occur for others; conversely, in perception, pattern completion always occurred because of the high degree of item similarity. These results offer a neurocomputational account of the distinct hippocampal signals associated with perception and memory, and are broadly consistent with proposals that CA1 functions as a comparator of expected versus perceived events. We conclude that the hippocampal computations required for high-level perceptual discrimination are congruous with current neurocomputational models that account for recognition memory, and fit neatly into a broader description of the role of the hippocampus for the processing of complex relational information. © 2014 Wiley Periodicals, Inc.

  13. Study of the Threshold Anomaly in the Scattering of Li Isotopes on ²⁷Al

    International Nuclear Information System (INIS)

    Fernandez Niello, J.O.; Figueira, J.M.; Abriola, D.; Arazi, A.; Capurro, O.A.; Marti, G.V.; Martinez Heinmann, D.; Pacheco, A.J.; Barbara, E. de; Padron, I.; Gomes, P.R.S.; Lubian, J.

    2007-01-01

    Angular distributions of ⁶,⁷Li scattered by ²⁷Al have been measured at different bombarding energies between 6 and 18 MeV. The results obtained from an optical model analysis using several potentials indicate that there is an isotopic dependence in the energy variation of the real and imaginary strengths. Whereas the ⁷Li + ²⁷Al system shows no indication of any threshold anomaly, the ⁶Li + ²⁷Al reaction suggests the presence of the so-called breakup threshold anomaly.

  14. The Statistical Differences Between the Gridded Temperature Datasets, and its Implications for Stochastic Modelling

    Science.gov (United States)

    Fredriksen, H. B.; Løvsletten, O.; Rypdal, M.; Rypdal, K.

    2014-12-01

    Several research groups around the world collect instrumental temperature data and combine them in different ways to obtain global gridded temperature fields. The three best-known datasets are HadCRUT4, produced by the Climatic Research Unit and the Met Office Hadley Centre in the UK; one produced by NASA GISS; and one produced by NOAA. Recently, Berkeley Earth has also developed a gridded dataset. All four are compared in our analysis. The statistical properties we focus on are the standard deviation and the Hurst exponent. These two parameters are sufficient to describe the temperatures as long-range memory stochastic processes; the standard deviation describes the general fluctuation level, while the Hurst exponent relates the strength of the long-term variability to the strength of the short-term variability. A higher Hurst exponent means that the slow variations are stronger compared to the fast ones, and that the autocovariance function has a heavier tail. Hence the Hurst exponent gives us information about the persistence, or memory, of the process. We make use of these data to show that data averaged over a larger area exhibit higher Hurst exponents and lower variance than data averaged over a smaller area, which provides information about the relationship between temporal and spatial correlations of the temperature fluctuations. Interpolation in space has some similarities with averaging over space, although interpolation is weighted more towards the measurement locations. We demonstrate that the degree of spatial interpolation used can explain some of the differences observed between the variances and memory exponents computed from the various datasets.
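
    A minimal sketch of estimating the two parameters discussed above, the standard deviation and the Hurst exponent, using the aggregated-variance method on a synthetic series (white noise, so H should come out near 0.5); the estimators actually used for the gridded datasets may differ:

        import numpy as np

        def hurst_aggregated_variance(x, block_sizes=(4, 8, 16, 32, 64, 128)):
            logs_m, logs_v = [], []
            for m in block_sizes:
                n_blocks = len(x) // m
                if n_blocks < 8:
                    continue
                block_means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
                logs_m.append(np.log(m))
                logs_v.append(np.log(block_means.var()))
            slope, _ = np.polyfit(logs_m, logs_v, 1)
            return 1.0 + slope / 2.0        # Var(block mean) ~ m^(2H - 2)

        rng = np.random.default_rng(5)
        series = rng.normal(size=4096)      # stand-in for a monthly anomaly series
        print("standard deviation: %.2f" % series.std())
        print("estimated Hurst exponent: %.2f" % hurst_aggregated_variance(series))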

  15. Two Types of Social Grooming discovered in Primitive and Modern Communication Data-Sets

    OpenAIRE

    Takano, Masanori

    2017-01-01

    Social networking sites (SNS) provide innovative social bonding methods known as social grooming. These have drastically decreased the time and distance constraints of social grooming. Here we show two types of social grooming (elaborate social grooming and lightweight social grooming) discovered in a model constructed from thirty communication datasets including face-to-face interaction, SNS, mobile phones, and Chacma baboons. This demarcation is caused by a trade-off between the number and strength of social re...

  16. Detection threshold for sound distortion resulting from noise reduction in normal-hearing and hearing-impaired listeners.

    Science.gov (United States)

    Brons, Inge; Dreschler, Wouter A; Houben, Rolph

    2014-09-01

    Hearing-aid noise reduction should reduce background noise, but not disturb the target speech. This objective is difficult because noise reduction suffers from a trade-off between the amount of noise removed and signal distortion. It is unknown if this important trade-off differs between normal-hearing (NH) and hearing-impaired (HI) listeners. This study separated the negative effect of noise reduction (distortion) from the positive effect (reduction of noise) to allow the measurement of the detection threshold for noise-reduction (NR) distortion. Twelve NH subjects and 12 subjects with mild to moderate sensorineural hearing loss participated in this study. The detection thresholds for distortion were determined using an adaptive procedure with a three-interval, two-alternative forced-choice paradigm. Different levels of distortion were obtained by changing the maximum amount of noise reduction. Participants were also asked to indicate their preferred NR strength. The detection threshold for overall distortion was higher for HI subjects than for NH subjects, suggesting that stronger noise reduction can be applied for HI listeners without affecting the perceived sound quality. However, the preferred NR strength of HI listeners was closer to their individual detection threshold for distortion than in NH listeners. This implies that HI listeners tolerate fewer audible distortions than NH listeners.

  17. Can we set a global threshold age to define mature forests?

    DEFF Research Database (Denmark)

    Martin, Philip; Jung, Martin; Brearley, Francis Q.

    2016-01-01

    Globally, mature forests appear to be increasing in biomass density (BD). There is disagreement whether these increases are the result of increases in atmospheric CO2 concentrations or a legacy effect of previous land-use. Recently, it was suggested that a threshold of 450 years should be used to define mature forests and that many forests increasing in BD may be younger than this. However, the study making these suggestions failed to account for the interactions between forest age and climate. Here we revisit the issue to identify: (1) how climate and forest age control global forest BD and (2) whether we can set a threshold age for mature forests. Using data from previously published studies we modelled the impacts of forest age and climate on BD using linear mixed effects models. We examined the potential biases in the dataset by comparing how representative it was of global mature forests.

  18. Evidence for the contribution of a threshold retrieval process to semantic memory.

    Science.gov (United States)

    Kempnich, Maria; Urquhart, Josephine A; O'Connor, Akira R; Moulin, Chris J A

    2017-10-01

    It is widely held that episodic retrieval can recruit two processes: a threshold context retrieval process (recollection) and a continuous signal strength process (familiarity). Conversely the processes recruited during semantic retrieval are less well specified. We developed a semantic task analogous to single-item episodic recognition to interrogate semantic recognition receiver-operating characteristics (ROCs) for a marker of a threshold retrieval process. We fitted observed ROC points to three signal detection models: two models typically used in episodic recognition (unequal variance and dual-process signal detection models) and a novel dual-process recollect-to-reject (DP-RR) signal detection model that allows a threshold recollection process to aid both target identification and lure rejection. Given the nature of most semantic questions, we anticipated the DP-RR model would best fit the semantic task data. Experiment 1 (506 participants) provided evidence for a threshold retrieval process in semantic memory, with overall best fits to the DP-RR model. Experiment 2 (316 participants) found within-subjects estimates of episodic and semantic threshold retrieval to be uncorrelated. Our findings add weight to the proposal that semantic and episodic memory are served by similar dual-process retrieval systems, though the relationship between the two threshold processes needs to be more fully elucidated.

  19. Threshold models of recognition and the recognition heuristic

    Directory of Open Access Journals (Sweden)

    Edgar Erdfelder

    2011-02-01

    Full Text Available According to the recognition heuristic (RH) theory, decisions follow the recognition principle: Given a high validity of the recognition cue, people should prefer recognized choice options compared to unrecognized ones. Assuming that the memory strength of choice options is strongly correlated with both the choice criterion and recognition judgments, the RH is a reasonable strategy that approximates optimal decisions with a minimum of cognitive effort (Davis-Stober, Dana, and Budescu, 2010). However, theories of recognition memory are not generally compatible with this assumption. For example, some threshold models of recognition presume that recognition judgments can arise from two types of cognitive states: (1) certainty states in which judgments are almost perfectly correlated with memory strength and (2) uncertainty states in which recognition judgments reflect guessing rather than differences in memory strength. We report an experiment designed to test the prediction that the RH applies to certainty states only. Our results show that memory states rather than recognition judgments affect use of recognition information in binary decisions.
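
    To make the contrast concrete, a small sketch generating the two ROC shapes at issue: a continuous equal-variance signal-detection model (curvilinear ROC) versus a simple two-state threshold model in which certainty states produce detection and uncertainty states produce guessing (linear ROC limbs). Parameter values are arbitrary, and this is a generic illustration rather than the specific models fitted in the study:

        import numpy as np
        from scipy.stats import norm

        def sdt_roc(d_prime, criteria=np.linspace(-2.5, 2.5, 11)):
            fa = norm.sf(criteria)                     # false-alarm rate
            hit = norm.sf(criteria - d_prime)          # curvilinear ROC
            return fa, hit

        def threshold_roc(p_detect_old, p_detect_new, guess=np.linspace(0.05, 0.95, 11)):
            # certainty states give correct responses; uncertainty states give guesses
            hit = p_detect_old + (1.0 - p_detect_old) * guess
            fa = (1.0 - p_detect_new) * guess          # straight-line ROC
            return fa, hit

        for name, (fa, hit) in [("SDT d'=1.5", sdt_roc(1.5)),
                                ("threshold Do=Dn=0.4", threshold_roc(0.4, 0.4))]:
            points = " ".join(f"({f:.2f},{h:.2f})" for f, h in zip(fa, hit))
            print(name + ":", points)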

  20. Global-scale evaluation of 22 precipitation datasets using gauge observations and hydrological modeling

    Directory of Open Access Journals (Sweden)

    H. E. Beck

    2017-12-01

    , and hence the importance of P dataset selection in both research and operational applications. The good performance of MSWEP emphasizes that careful data merging can exploit the complementary strengths of gauge-, satellite-, and reanalysis-based P estimates.

  1. Nuclear molecular halo: the ubiquitous occurrence of van der Waals molecular states near threshold in molecular, nuclear and particle physics

    International Nuclear Information System (INIS)

    Gai, Moshe

    1999-01-01

    The observation of large E1 strength near threshold in the electromagnetic dissociation of ¹¹Li poses a fundamental question: Is the large E1 strength due to the threshold or is it due to a low-lying E1 state? Such molecular cluster states were observed in ¹⁸O and in several nuclei near the drip line. We discuss the nature of the threshold effect as well as review the situation in Molecular (and Particle) Physics where such Molecular States are observed near the dissociation limit. We suggest that the situation in ¹¹Li is reminiscent of the argon-benzene molecule where the argon atom is loosely bound by a polarization (van der Waals) mechanism and thus leads to a very extended object lying near the dissociation limit. Such states are also suggested to dominate the structure of mesons [a₀(980), f₀(975)] and baryons [Λ(1405)] with a proposed kaon molecular structure (Dalitz) near threshold. The inspection of such states throughout Physics allows us to gain insight into this phenomenon and suggests that a new collective Molecular Dipole Degree of Freedom plays a major role in the structure of hadrons (halo nuclei, mesons and baryons), and that quantitative tools such as the E1 Molecular Sum Rule are useful for elucidating the nature of the observed low-lying E1 strength in halo nuclei. (author)

  2. Importance of tie strengths in the prisoner's dilemma game on social networks

    Science.gov (United States)

    Xu, Bo; Liu, Lu; You, Weijia

    2011-06-01

    Though numerous studies have shown that tie strengths play a key role in the formation of collective behavior in social networks, little work has been done to explore their impact on the outcome of evolutionary games. In this Letter, we study the effect of tie strength on the dynamics of evolutionary prisoner's dilemma games using online social network datasets. The results show that the fraction of cooperators has a non-trivial dependence on tie strength. Weak ties, just as previous research on epidemics and information diffusion has shown, play a key role in the maintenance of cooperators in evolutionary prisoner's dilemma games.

  3. The rubber hand illusion increases heat pain threshold.

    Science.gov (United States)

    Hegedüs, G; Darnai, G; Szolcsányi, T; Feldmann, Á; Janszky, J; Kállai, J

    2014-09-01

    Accumulating evidence shows that manipulations of cortical body representation, for example by simply viewing one's own body, can relieve pain in healthy subjects. Despite the widespread use of the rubber hand illusion (RHI) as an effective experimental tool for the manipulation of bodily awareness, previous studies examining the analgesic effect of the RHI have produced conflicting results. We used noxious heat stimuli to induce finger pain in 29 healthy subjects, and we recorded the participants' pain thresholds and subjective pain ratings during the RHI and during control conditions. Two control conditions were included in our experiment: a standard one with reduced illusion strength (asynchronous stroking control) and an additional one in which the participants viewed their own hand. Raw data showed that both the RHI and viewing one's own hand resulted in slightly higher pain thresholds than the asynchronous stroking control (illusion: 47.79 °C; own-hand: 47.99 °C; asynchronous: 47.52 °C). After logarithmic transformation to achieve normality, paired t-tests revealed that both increases in pain threshold were significant (illusion/asynchronous: p = 0.036; own-hand/asynchronous: p = 0.007). In contrast, there was no significant difference in pain threshold between the illusion and the own-hand conditions (p = 0.656). Pain rating scores were not log-normal, and Wilcoxon signed-rank tests found no significant differences in pain ratings between the study conditions. The RHI increases heat pain threshold, and the analgesic effect of the RHI is comparable with that of seeing one's own hand. The latter finding may have clinical implications. © 2014 European Pain Federation - EFIC®

  4. Direct infusion mass spectrometry metabolomics dataset: a benchmark for data processing and quality control

    Science.gov (United States)

    Kirwan, Jennifer A; Weber, Ralf J M; Broadhurst, David I; Viant, Mark R

    2014-01-01

    Direct-infusion mass spectrometry (DIMS) metabolomics is an important approach for characterising molecular responses of organisms to disease, drugs and the environment. Increasingly large-scale metabolomics studies are being conducted, necessitating improvements in both bioanalytical and computational workflows to maintain data quality. This dataset represents a systematic evaluation of the reproducibility of a multi-batch DIMS metabolomics study of cardiac tissue extracts. It comprises twenty biological samples (cow vs. sheep) that were analysed repeatedly, in 8 batches across 7 days, together with a concurrent set of quality control (QC) samples. Data are presented from each step of the workflow and are available in MetaboLights. The strength of the dataset is that intra- and inter-batch variation can be corrected using QC spectra and the quality of this correction assessed independently using the repeatedly-measured biological samples. Originally designed to test the efficacy of a batch-correction algorithm, it will enable others to evaluate novel data processing algorithms. Furthermore, this dataset serves as a benchmark for DIMS metabolomics, derived using best-practice workflows and rigorous quality assessment. PMID:25977770

  5. Threshold states in /sup 26/Al revisited

    Energy Technology Data Exchange (ETDEWEB)

    Champagne, A E; McDonald, A B; Wang, T F; Howard, A J; Magnus, P V; Parker, P D

    1986-03-31

    Threshold states in 26Al have been re-examined using the 27Al(3He,α)26Al and 25Mg(3He,dγ)26Al reactions in order to resolve apparent ambiguities in some of the previously reported properties of these states. In particular, the s-wave resonance strength reported at E = 37 keV is now found to be located at E_c.m. = 57.54 keV, and the proton width for the 374 keV resonance has been revised to Γ_p = 0.82 eV. These results have been used to calculate a new resonance strength of ωγ = 1.6 x 10^-13 eV for the 57.54 keV resonance. As a result, the stellar production rate for 26Al is increased by a factor of ≈3-38 for temperatures T_9 = 0.05-0.1.
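
    For context, such ωγ values enter the stellar rate through the standard narrow-resonance relations (textbook formulas, not taken from this record), where J_r, J_p and J_T are the resonance, projectile and target spins, Γ_p and Γ_γ the partial widths, μ the reduced mass in amu, E_r the resonance energy and T_9 the temperature in GK:

      \omega\gamma \;=\; \frac{2J_r+1}{(2J_p+1)(2J_T+1)}\,\frac{\Gamma_p\,\Gamma_\gamma}{\Gamma},
      \qquad
      N_A\langle\sigma v\rangle \;\simeq\; 1.54\times10^{11}\,(\mu T_9)^{-3/2}\,
      (\omega\gamma)_{\mathrm{MeV}}\,
      \exp\!\left(-\frac{11.605\,E_r[\mathrm{MeV}]}{T_9}\right)\;
      \mathrm{cm^{3}\,s^{-1}\,mol^{-1}}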

  6. RARD: The Related-Article Recommendation Dataset

    OpenAIRE

    Beel, Joeran; Carevic, Zeljko; Schaible, Johann; Neusch, Gabor

    2017-01-01

    Recommender-system datasets are used for recommender-system evaluations, training machine-learning algorithms, and exploring user behavior. While there are many datasets for recommender systems in the domains of movies, books, and music, there are rather few datasets from research-paper recommender systems. In this paper, we introduce RARD, the Related-Article Recommendation Dataset, from the digital library Sowiport and the recommendation-as-a-service provider Mr. DLib. The dataset contains ...

  7. Handling limited datasets with neural networks in medical applications: A small-data approach.

    Science.gov (United States)

    Shaikhina, Torgyn; Khovanova, Natalia A

    2017-01-01

    Single-centre studies in medical domain are often characterised by limited samples due to the complexity and high costs of patient data collection. Machine learning methods for regression modelling of small datasets (less than 10 observations per predictor variable) remain scarce. Our work bridges this gap by developing a novel framework for application of artificial neural networks (NNs) for regression tasks involving small medical datasets. In order to address the sporadic fluctuations and validation issues that appear in regression NNs trained on small datasets, the method of multiple runs and surrogate data analysis were proposed in this work. The approach was compared to the state-of-the-art ensemble NNs; the effect of dataset size on NN performance was also investigated. The proposed framework was applied for the prediction of compressive strength (CS) of femoral trabecular bone in patients suffering from severe osteoarthritis. The NN model was able to estimate the CS of osteoarthritic trabecular bone from its structural and biological properties with a standard error of 0.85MPa. When evaluated on independent test samples, the NN achieved accuracy of 98.3%, outperforming an ensemble NN model by 11%. We reproduce this result on CS data of another porous solid (concrete) and demonstrate that the proposed framework allows for an NN modelled with as few as 56 samples to generalise on 300 independent test samples with 86.5% accuracy, which is comparable to the performance of an NN developed with 18 times larger dataset (1030 samples). The significance of this work is two-fold: the practical application allows for non-destructive prediction of bone fracture risk, while the novel methodology extends beyond the task considered in this study and provides a general framework for application of regression NNs to medical problems characterised by limited dataset sizes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
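
    A minimal sketch of the "multiple runs" idea (repeatedly retraining the same small regression network with different random initialisations and averaging its predictions); the data, network size and library choice below are placeholders rather than the paper's actual setup.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor

      # Placeholder data standing in for a ~56-sample compressive-strength dataset.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(56, 5))
      y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(0, 0.3, 56)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      # "Multiple runs": retrain the same small network with different seeds and average
      # the predictions to damp the run-to-run fluctuations typical of NNs fitted to
      # very small datasets.
      predictions = []
      for seed in range(30):
          net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=seed)
          net.fit(X_tr, y_tr)
          predictions.append(net.predict(X_te))
      averaged = np.mean(predictions, axis=0)
      print("RMSE of averaged runs:", float(np.sqrt(np.mean((averaged - y_te) ** 2))))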

  8. Determination of erosion thresholds and aeolian dune stabilization mechanisms via robotic shear strength measurements

    Science.gov (United States)

    Qian, F.; Lee, D. B.; Bodek, S.; Roberts, S.; Topping, T. T.; Robele, Y.; Koditschek, D. E.; Jerolmack, D. J.

    2017-12-01

    Understanding the parameters that control the spatial variation in aeolian soil erodibility is crucial to the development of sediment transport models. Currently, in-situ measurements of erodibility are time consuming and lack robustness. In an attempt to remedy this issue, we perform field and laboratory tests to determine the suitability of a novel mechanical shear strength method to assess soil erodibility. These tests can be performed quickly (~1 minute) by a semi-autonomous robot using its direct-drive leg, while environmental controls such as soil moisture and grain size are simultaneously characterized. The robot was deployed at White Sands National Monument to delineate and understand erodibility gradients at two different scales: (1) from dry dune crest to moist interdune (distance ~10s of m), where we determined that shear strength increases by a factor of three with increasing soil moisture; and (2) from barren barchan dunes to vegetated and crusted parabolics downwind (distance ~5 km), where we found that shear strength was enhanced by a factor of two relative to loose sand. Interestingly, shear strength varied little from carbonate-crusted dune surfaces to bio-crust covered interdunes in the downwind parabolic region, indicating that varied surface crusts contribute similarly to erosion resistance. To isolate the control of soil moisture on erodibility, we performed laboratory experiments in a sandbox. These results verify that the observed increase in soil erodibility from barchan crest to interdune at White Sands is dominated by soil moisture, and the variation in parabolic dune and barchan interdune areas results from a combination of soil moisture, bio-activity, and crust development. This study highlights that spatial variation of soil erodibility in arid environments is large enough to significantly affect sediment transport, and that probing soil erodibility with a robot has the potential to improve our understanding of this multifaceted problem.

  9. Isfahan MISP Dataset.

    Science.gov (United States)

    Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein

    2017-01-01

    An online repository was introduced to share clinical ground truth with the public and provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database managing. The website was entitled "biosigdata.com." It was a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users could download the datasets and could also share their own supplementary materials while maintaining their privacy (citation and fee). Commenting was also available for all datasets, and automatic sitemap and semi-automatic SEO indexing have been set for the site. A comprehensive list of available websites for medical datasets is also presented as a Supplementary (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf).

  10. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer assisted procedures such as dental implant, orthodontic planning, face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and variational level set. The proposed method consists of five steps as follows: first, we extract a mask from the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the proposed method is followed by estimating the arc of the upper and lower jaws and panoramic re-sampling of the dataset. Separation of upper and lower jaws and initial segmentation of teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above-mentioned procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Because this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was more successful in teeth segmentation than previous techniques. (orig.)

  11. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    International Nuclear Information System (INIS)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza

    2008-01-01

    Quantification of teeth is of clinical importance for various computer assisted procedures such as dental implant, orthodontic planning, face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and variational level set. The proposed method consists of five steps as follows: first, we extract a mask from the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of teeth in the jaws. Third, the proposed method is followed by estimating the arc of the upper and lower jaws and panoramic re-sampling of the dataset. Separation of upper and lower jaws and initial segmentation of teeth are performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above-mentioned procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique was utilized to trace the contour of the teeth. Because this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was more successful in teeth segmentation than previous techniques. (orig.)

  12. Threshold quantum cryptography

    International Nuclear Information System (INIS)

    Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki

    2005-01-01

    We present the concept of threshold collaborative unitary transformation or threshold quantum cryptography, which is a kind of quantum version of threshold cryptography. Threshold quantum cryptography states that classical shared secrets are distributed to several parties and a subset of them, whose number is greater than a threshold, collaborates to compute a quantum cryptographic function, while keeping each share secretly inside each party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed protocol (with threshold) of conjugate coding

  13. Measurements of NN→dπ very near threshold. Pt. 2

    International Nuclear Information System (INIS)

    Korkmaz, E.; Li, Jin; Elliott, J.B.; Mack, D.J.; Rodning, N.L.; Hutcheon, D.A.; Abegg, R.; Greeniaus, L.G.; Miller, C.A.

    1991-01-01

    We have measured analyzing powers for the reaction pp→dπ+ at beam energies 3 and 7 MeV above pion-production threshold. With the use of Watson's theorem to fix phases and effective range estimates of pion d-wave strength, we have been able to determine the NN→dπ s-wave amplitude and the major p-wave amplitude at these energies. The results are compared to the Faddeev model predictions of Blankleider. (orig.)

  14. Open University Learning Analytics dataset.

    Science.gov (United States)

    Kuzilek, Jakub; Hlosta, Martin; Zdrahal, Zdenek

    2017-11-28

    Learning Analytics focuses on the collection and analysis of learners' data to improve their learning experience by providing informed guidance and to optimise learning materials. To support the research in this area we have developed a dataset, containing data from courses presented at the Open University (OU). What makes the dataset unique is the fact that it contains demographic data together with aggregated clickstream data of students' interactions in the Virtual Learning Environment (VLE). This enables the analysis of student behaviour, represented by their actions. The dataset contains the information about 22 courses, 32,593 students, their assessment results, and logs of their interactions with the VLE represented by daily summaries of student clicks (10,655,280 entries). The dataset is freely available at https://analyse.kmi.open.ac.uk/open_dataset under a CC-BY 4.0 license.

  15. Acute effects of dynamic exercises on the relationship between the motor unit firing rate and the recruitment threshold.

    Science.gov (United States)

    Ye, Xin; Beck, Travis W; DeFreitas, Jason M; Wages, Nathan P

    2015-04-01

    The aim of this study was to compare the acute effects of concentric versus eccentric exercise on motor control strategies. Fifteen men performed six sets of 10 repetitions of maximal concentric exercises or eccentric isokinetic exercises with their dominant elbow flexors on separate experimental visits. Before and after the exercise, maximal strength testing and submaximal trapezoid isometric contractions (40% of the maximal force) were performed. Both exercise conditions caused significant strength loss in the elbow flexors, but the loss was greater following the eccentric exercise (t=2.401, P=.031). The surface electromyographic signals obtained from the submaximal trapezoid isometric contractions were decomposed into individual motor unit action potential trains. For each submaximal trapezoid isometric contraction, the relationship between the average motor unit firing rate and the recruitment threshold was examined using linear regression analysis. In contrast to the concentric exercise, which did not cause significant changes in the mean linear slope coefficient and y-intercept of the linear regression line, the eccentric exercise resulted in a lower mean linear slope and an increased mean y-intercept, thereby indicating that increasing the firing rates of low-threshold motor units may be more important than recruiting high-threshold motor units to compensate for eccentric exercise-induced strength loss. Copyright © 2014 Elsevier B.V. All rights reserved.
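
    The per-contraction analysis described above can be sketched as a simple linear regression of average firing rate on recruitment threshold, comparing slope and y-intercept before and after exercise; the motor unit numbers below are illustrative, not the study data.

      import numpy as np
      from scipy import stats

      # Illustrative decomposed motor unit data for one 40% MVC trapezoid contraction:
      # recruitment threshold (% MVC) and average firing rate (pulses per second).
      thresholds = np.array([5.0, 8.0, 12.0, 15.0, 20.0, 25.0, 30.0, 35.0])
      rate_pre = np.array([22.0, 21.0, 19.0, 18.0, 16.0, 15.0, 13.0, 12.0])   # before exercise
      rate_post = np.array([26.0, 24.0, 22.0, 20.0, 17.0, 15.0, 12.0, 10.0])  # after eccentric exercise

      pre = stats.linregress(thresholds, rate_pre)
      post = stats.linregress(thresholds, rate_post)
      # A steeper (more negative) slope with a larger y-intercept after eccentric exercise
      # points to higher firing rates of low-threshold units rather than extra recruitment.
      print(f"pre : slope = {pre.slope:.2f}, intercept = {pre.intercept:.1f}")
      print(f"post: slope = {post.slope:.2f}, intercept = {post.intercept:.1f}")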

  16. Threshold resummation and the total cross section for top quark production

    International Nuclear Information System (INIS)

    Berger, E.L.; Contopanagos, H.

    1997-01-01

    We discuss the motivation for resummation of the effects of initial-state soft gluon radiation, to all orders in the strong coupling strength, for processes in which the near-threshold region in the partonic subenergy is important. We summarize our calculation of the total cross section for top quark production at hadron colliders. Comments are included on the differences between our treatment of subleading logarithmic terms and other methods

  17. Improved protein structure reconstruction using secondary structures, contacts at higher distance thresholds, and non-contacts.

    Science.gov (United States)

    Adhikari, Badri; Cheng, Jianlin

    2017-08-29

    Residue-residue contacts are key features for accurate de novo protein structure prediction. For the optimal utilization of these predicted contacts in folding proteins accurately, it is important to study the challenges of reconstructing protein structures using true contacts. Because the contact-guided protein modeling approach is valuable for predicting the folds of proteins that do not have structural templates, it is necessary for reconstruction studies to focus on hard-to-predict protein structures. In this work, using a dataset of 496 structural domains released in recent CASP experiments and a dataset of 150 representative protein structures, we discuss three techniques to improve reconstruction accuracy using true contacts: adding secondary structures, increasing contact distance thresholds, and adding non-contacts. We find that reconstruction using secondary structures and contacts can deliver accuracy higher than using full contact maps. Similarly, we demonstrate that non-contacts can improve reconstruction accuracy not only when the used non-contacts are true but also when they are predicted. On the dataset of 150 proteins, we find that simply using low-ranked predicted contacts as non-contacts and adding them as additional restraints can increase reconstruction accuracy by 5% when the reconstructed models are evaluated using TM-score. Our findings suggest that secondary structures are invaluable companions of contacts for accurate reconstruction. Confirming some earlier findings, we also find that larger distance thresholds are useful for folding many protein structures which cannot be folded using the standard definition of contacts. Our findings also suggest that for more accurate reconstruction using predicted contacts it is useful to predict contacts at higher distance thresholds (beyond 8 Å) and to predict non-contacts.
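
    The two ingredients discussed above, contact maps at different distance thresholds and low-ranked predicted contacts reused as non-contacts, can be sketched as follows; the coordinates and confidence scores are made up, and the reconstruction step itself is not shown.

      import numpy as np

      # Made-up C-beta coordinates for a 60-residue chain (a random walk, not a real protein).
      rng = np.random.default_rng(2)
      coords = np.cumsum(rng.normal(0.0, 2.0, size=(60, 3)), axis=0)

      def contact_map(xyz, threshold=8.0):
          # Contacts: residue pairs whose pairwise distance is below the threshold (angstroms).
          d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
          return d < threshold

      contacts_8a = contact_map(coords, 8.0)     # standard contact definition
      contacts_12a = contact_map(coords, 12.0)   # a looser threshold, as explored above

      # Hypothetical predicted contact list (i, j, confidence), sorted by confidence.
      predicted = sorted(((i, j, float(rng.random()))
                          for i in range(60) for j in range(i + 5, 60)),
                         key=lambda t: -t[2])
      contact_restraints = predicted[:60]        # top-L pairs used as contact restraints
      non_contact_restraints = predicted[-60:]   # lowest-ranked pairs reused as non-contacts

      n8 = int(np.triu(contacts_8a, k=1).sum())
      n12 = int(np.triu(contacts_12a, k=1).sum())
      print(len(contact_restraints), "contact and", len(non_contact_restraints),
            "non-contact restraints;", n8, "true contacts at 8 A,", n12, "at 12 A")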

  18. Differential reconstructed gene interaction networks for deriving toxicity threshold in chemical risk assessment.

    Science.gov (United States)

    Yang, Yi; Maxwell, Andrew; Zhang, Xiaowei; Wang, Nan; Perkins, Edward J; Zhang, Chaoyang; Gong, Ping

    2013-01-01

    Pathway alterations reflected as changes in gene expression regulation and gene interaction can result from cellular exposure to toxicants. Such information is often used to elucidate toxicological modes of action. From a risk assessment perspective, alterations in biological pathways are a rich resource for setting toxicant thresholds, which may be more sensitive and mechanism-informed than traditional toxicity endpoints. Here we developed a novel differential networks (DNs) approach to connect pathway perturbation with toxicity threshold setting. Our DNs approach consists of 6 steps: time-series gene expression data collection, identification of altered genes, gene interaction network reconstruction, differential edge inference, mapping of genes with differential edges to pathways, and establishment of causal relationships between chemical concentration and perturbed pathways. A one-sample Gaussian process model and a linear regression model were used to identify genes that exhibited significant profile changes across an entire time course and between treatments, respectively. Interaction networks of differentially expressed (DE) genes were reconstructed for different treatments using a state space model and then compared to infer differential edges/interactions. DE genes possessing differential edges were mapped to biological pathways in databases such as KEGG pathways. Using the DNs approach, we analyzed a time-series Escherichia coli live cell gene expression dataset consisting of 4 treatments (control, 10, 100, 1000 mg/L naphthenic acids, NAs) and 18 time points. Through comparison of reconstructed networks and construction of differential networks, 80 genes were identified as DE genes with a significant number of differential edges, and 22 KEGG pathways were altered in a concentration-dependent manner. Some of these pathways were perturbed to a degree as high as 70% even at the lowest exposure concentration, implying a high sensitivity of our DNs approach
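
    A deliberately simplified stand-in for the differential-edge step is sketched below: it reconstructs crude correlation-based interaction networks for control and treated expression matrices and keeps edges present in one network but not the other. The paper's state space model and Gaussian process gene selection are not reproduced, and the data are synthetic.

      import numpy as np

      # Synthetic time-series expression matrices (time points x genes) for two treatments.
      rng = np.random.default_rng(3)
      control = rng.normal(size=(18, 20))
      treated = control + rng.normal(0.0, 0.8, size=(18, 20))

      def edge_matrix(expr, cutoff=0.5):
          # Crude network reconstruction: an edge wherever |pairwise correlation| > cutoff.
          corr = np.corrcoef(expr, rowvar=False)
          np.fill_diagonal(corr, 0.0)
          return np.abs(corr) > cutoff

      edges_control = edge_matrix(control)
      edges_treated = edge_matrix(treated)

      # Differential edges: present in one reconstructed network but not the other.
      differential = edges_control ^ edges_treated
      genes_with_differential_edges = np.where(differential.any(axis=0))[0]
      print("genes carrying differential edges:", genes_with_differential_edges.tolist())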

  19. The Associations between Pain Sensitivity and Knee Muscle Strength in Healthy Volunteers

    DEFF Research Database (Denmark)

    Henriksen, Marius; Klokker, Louise; Bartholdy, Cecilie

    2013-01-01

    Objectives. To investigate associations between muscle strength and pain sensitivity among healthy volunteers and associations between different pain sensitivity measures. Methods. Twenty-eight healthy volunteers (21 females) participated. Pressure pain thresholds (PPTs) were obtained from the vastus lateralis, deltoid, and infrapatellar fat pad. Quadriceps and hamstring muscle strength was assessed isometrically at 60-degree knee flexion using a dynamometer. Associations between pain sensitivity and muscle strength were investigated using multiple regressions including age, gender, and body mass index as covariates. Results. Knee extension strength was associated with computer-controlled PPT on the vastus lateralis muscle. Computer-controlled PPTs were significantly correlated between sites (r > 0.72) and with cuff PPT (r > 0.4). Saline induced pain intensity and duration were correlated between sites (r > 0...).

  20. STOCHASTIC MODELING OF COMPRESSIVE STRENGTH OF PHOSPHORUS SLAG CONTENT CEMENT

    Directory of Open Access Journals (Sweden)

    Ali Allahverdi

    2016-07-01

    One of the common methods for quick determination of compressive strength, one of the most important properties for assessment of cement quality, is to apply various modeling approaches. This study is aimed at finding a model for estimating the compressive strength of phosphorus slag content cements. For this purpose, the compressive strengths of chemically activated high phosphorus slag content cement prepared from phosphorus slag (80 wt.%), Portland cement (14 wt.%) and a compound chemical activator containing sodium sulfate and anhydrite (6 wt.%) were measured at various Blaine finenesses and curing times. Based on the obtained results, a primary stochastic model in terms of curing time and Blaine fineness has been developed. Then, another dataset was used to incorporate composition variables, including the weight fractions of phosphorus slag, cement, and activator, into the model. This model can be effectively used to predict the compressive strength of phosphorus slag content cements at various Blaine finenesses, curing times, and compositions.
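
    A sketch of fitting such a strength model by ordinary least squares, with compressive strength expressed as a function of curing time and Blaine fineness; the observations and the functional form below are invented for illustration and do not reproduce the published model.

      import numpy as np

      # Invented observations: curing time (days), Blaine fineness (cm^2/g), strength (MPa).
      time_d = np.array([3, 7, 28, 90, 3, 7, 28, 90], dtype=float)
      blaine = np.array([3000, 3000, 3000, 3000, 4500, 4500, 4500, 4500], dtype=float)
      strength = np.array([8, 14, 30, 42, 12, 20, 38, 52], dtype=float)

      # Simple design matrix: intercept, log(curing time), fineness, and an interaction term.
      X = np.column_stack([np.ones_like(time_d), np.log(time_d), blaine, np.log(time_d) * blaine])
      coef, *_ = np.linalg.lstsq(X, strength, rcond=None)

      def predict(curing_days, blaine_cm2_g):
          x = np.array([1.0, np.log(curing_days), blaine_cm2_g, np.log(curing_days) * blaine_cm2_g])
          return float(x @ coef)

      print("predicted 28-day strength at 4000 cm^2/g:", round(predict(28, 4000), 1), "MPa")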

  1. Dipole strength distributions from HIGS Experiments

    Directory of Open Access Journals (Sweden)

    Werner V.

    2015-01-01

    A series of photon scattering experiments has been performed on the double-beta decay partners 76Ge and 76Se, in order to investigate their dipole response up to the neutron separation threshold. Gamma-ray beams from bremsstrahlung at the S-DALINAC and from Compton-backscattering at HIGS have been used to measure absolute cross sections and parities of dipole excited states, respectively. The HIGS data allows for indirect measurement of averaged branching ratios, which leads to significant corrections in the observed excitation cross sections. Results are compared to statistical calculations, to test photon strength functions and the Axel-Brink hypothesis.

  2. CARA Risk Assessment Thresholds

    Science.gov (United States)

    Hejduk, M. D.

    2016-01-01

    Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).

  3. Mridangam stroke dataset

    OpenAIRE

    CompMusic

    2014-01-01

    The audio examples were recorded from a professional Carnatic percussionist in semi-anechoic studio conditions by Akshay Anantapadmanabhan using SM-58 microphones and an H4n ZOOM recorder. The audio was sampled at 44.1 kHz and stored as 16 bit wav files. The dataset can be used for training models for each Mridangam stroke. A detailed description of the Mridangam and its strokes can be found in the paper below. A part of the dataset was used in the following paper. Akshay Anantapadman...

  4. Strategies for fracture toughness, strength and reliability optimisation of ceramic-ceramic laminates

    Czech Academy of Sciences Publication Activity Database

    Šestáková, L.; Bermejo, R.; Chlup, Zdeněk; Danzer, R.

    2011-01-01

    Roč. 102, č. 6 (2011), s. 613-626 ISSN 1862-5282 Institutional research plan: CEZ:AV0Z20410507 Keywords : Ceramic laminates * Layered ceramics * Residual stress * Fracture toughness * Threshold strength Subject RIV: JL - Materials Fatigue, Friction Mechanics Impact factor: 0.830, year: 2011

  5. Measurements on NN → dπ very near threshold. II

    International Nuclear Information System (INIS)

    Korkmaz, E.; Li, J.; Hutcheon, D.A.; Abegg, R.; Elliott, J.B.; Greeniaus, L.G.; Mack, D.J.; Miller, C.A.; Rodning, N.L.

    1991-04-01

    We have measured analyzing powers for the reaction pp → dπ+ at beam energies 3 and 7 MeV above pion production threshold. With the use of Watson's Theorem to fix phases and effective range estimates of pion d-wave strength, we have been able to determine the NN → dπ s-wave amplitude and the two p-wave amplitudes at these energies. The results are compared to the Faddeev model predictions of Blankleider. (Author) 11 refs., 2 tabs., 11 figs

  6. 2008 TIGER/Line Nationwide Dataset

    Data.gov (United States)

    California Natural Resource Agency — This dataset contains a nationwide build of the 2008 TIGER/Line datasets from the US Census Bureau downloaded in April 2009. The TIGER/Line Shapefiles are an extract...

  7. Statistical segmentation of multidimensional brain datasets

    Science.gov (United States)

    Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro

    2001-07-01

    This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques like partial volume effects (PVE), processing speed and difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold-standard. Our results were more robust and closer to the gold-standard.
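
    The central step, an Expectation-Maximization fit of a three-component Gaussian mixture with a full covariance matrix to co-registered T1/T2 intensities, can be sketched as below, using synthetic intensities in place of real MRI voxels.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Synthetic (T1, T2) intensity pairs standing in for non-background brain voxels.
      rng = np.random.default_rng(4)
      csf = rng.multivariate_normal([40, 200], [[60, 10], [10, 80]], 2000)
      grey = rng.multivariate_normal([110, 120], [[50, 20], [20, 60]], 3000)
      white = rng.multivariate_normal([160, 80], [[40, 15], [15, 50]], 3000)
      voxels = np.vstack([csf, grey, white])

      # EM estimation of the mixture PDF; covariance_type='full' keeps the complete
      # covariance matrix, as advocated above, instead of a diagonal approximation.
      gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
      labels = gmm.fit_predict(voxels)
      print("voxels assigned per tissue class:", np.bincount(labels))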

  8. Design of an audio advertisement dataset

    Science.gov (United States)

    Fu, Yutao; Liu, Jihong; Zhang, Qi; Geng, Yuting

    2015-12-01

    As more and more advertisements are broadcast on radio, it is necessary to establish an audio advertisement dataset that can be used to analyze and classify the advertisements. A method for establishing a complete audio advertisement dataset is presented in this paper. The dataset is divided into four different kinds of advertisements. Each advertisement sample is given in *.wav file format and annotated with a txt file which contains its file name, sampling frequency, channel number, broadcasting time and its class. The soundness of the advertisement classification in this dataset is demonstrated by clustering the different advertisements based on Principal Component Analysis (PCA). The experimental results show that this audio advertisement dataset offers a reliable set of samples for related audio advertisement studies.

  9. Background qualitative analysis of the European reference life cycle database (ELCD) energy datasets - part II: electricity datasets.

    Science.gov (United States)

    Garraín, Daniel; Fazio, Simone; de la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda; Mathieux, Fabrice

    2015-01-01

    The aim of this paper is to identify areas of potential improvement of the European Reference Life Cycle Database (ELCD) electricity datasets. The revision is based on the data quality indicators described by the International Life Cycle Data system (ILCD) Handbook, applied on a sectorial basis. These indicators evaluate the technological, geographical and time-related representativeness of the dataset and its appropriateness in terms of completeness, precision and methodology. Results show that the ELCD electricity datasets have very good quality in general terms; nevertheless, some findings and recommendations for improving the quality of Life-Cycle Inventories have been derived. Moreover, these results ensure the quality of the electricity-related datasets for any LCA practitioner, and provide insights into the limitations and assumptions underlying the modelling of the datasets. Given this information, the LCA practitioner will be able to decide whether the use of the ELCD electricity datasets is appropriate based on the goal and scope of the analysis to be conducted. The methodological approach would also be useful for dataset developers and reviewers, in order to improve the overall Data Quality Requirements of databases.

  10. Datasets collected in general practice: an international comparison using the example of obesity.

    Science.gov (United States)

    Sturgiss, Elizabeth; van Boven, Kees

    2018-06-04

    International datasets from general practice enable the comparison of how conditions are managed within consultations in different primary healthcare settings. The Australian Bettering the Evaluation and Care of Health (BEACH) and TransHIS from the Netherlands collect in-consultation general practice data that have been used extensively to inform local policy and practice. Obesity is a global health issue with different countries applying varying approaches to management. The objective of the present paper is to compare the primary care management of obesity in Australia and the Netherlands using data collected from consultations. Despite the different prevalence of obesity in the two countries, the number of patients per 1000 patient-years seen with obesity is similar. Patients in Australia with obesity are referred to allied health practitioners more often than Dutch patients. Without quality general practice data, primary care researchers will not have data about the management of conditions within consultations. We use obesity to highlight the strengths of these general practice data sources and to compare their differences. What is known about the topic? Australia had one of the longest-running consecutive datasets about general practice activity in the world, but it has recently lost government funding. The Netherlands has a longitudinal general practice dataset of information collected within consultations since 1985. What does this paper add? We discuss the benefits of general practice-collected data in two countries. Using obesity as a case example, we compare management in general practice between Australia and the Netherlands. This type of analysis should start all international collaborations of primary care management of any health condition. Having a national general practice dataset allows international comparisons of the management of conditions within primary care. Without a current, quality general practice dataset, primary care researchers will not have data about how conditions are managed within consultations.

  11. Determination of electric field threshold for electrofusion of erythrocyte ghosts. Comparison of pulse-first and contact-first protocols.

    OpenAIRE

    Wu, Y; Montes, J G; Sjodin, R A

    1992-01-01

    Rabbit erythrocyte ghosts were fused by means of electric pulses to determine the electrofusion thresholds for these membranes. Two protocols were used to investigate fusion events: contact-first, and pulse-first. Electrical capacitance discharge (CD) pulses were used to induce fusion. Plots of fusion yield vs peak field strength yielded curves that intersected the field strength axis at positive values (pseudothresholds) which depended on the protocol and decay half time of the pulses. It wa...

  12. Importance of tie strengths in the prisoner's dilemma game on social networks

    International Nuclear Information System (INIS)

    Xu, Bo; Liu, Lu; You, Weijia

    2011-01-01

    Though numerous studies have shown that tie strengths play a key role in the formation of collective behavior in social networks, little work has been done to explore their impact on the outcome of evolutionary games. In this Letter, we studied the effect of tie strength on the dynamics of evolutionary prisoner's dilemma games by using online social network datasets. The results show that the fraction of cooperators has a non-trivial dependence on tie strength. Weak ties, just as previous research on epidemics and information diffusion has shown, play a key role in the maintenance of cooperators in evolutionary prisoner's dilemma games. -- Highlights: Tie strength is used to measure heterogeneous influences of different pairs of nodes. Weak ties play a role in maintaining cooperation in prisoner's dilemma games. Micro-dynamics of nodes are illustrated to explain the conclusion.

  13. The GTZAN dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge...... of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN...

  14. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    Science.gov (United States)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that the point cloud can be viewed as a mixture of Gaussian models, so that separating ground points from non-ground points can be recast as separating the components of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation: EM is used to calculate maximum likelihood estimates of the mixture parameters, and from the estimated parameters the likelihood of each point belonging to ground or object can be computed. After several iterations, points are labelled as the component with the larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
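
    A simplified illustration of the approach: fit a two-component Gaussian mixture to per-point elevation residuals and label each point by the component with the larger likelihood. The features and point cloud below are placeholders, and the intensity-based refinement described above is omitted.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Placeholder point cloud: residual heights above a locally fitted surface (metres).
      rng = np.random.default_rng(5)
      ground = rng.normal(0.0, 0.15, 5000)    # ground returns scatter tightly around zero
      objects = rng.normal(4.0, 2.0, 1500)    # vegetation/buildings sit well above it
      residuals = np.concatenate([ground, objects]).reshape(-1, 1)

      # EM fit of a two-component Gaussian mixture; each point is labelled with the
      # component of larger likelihood, and the lower-mean component is taken as ground.
      gmm = GaussianMixture(n_components=2, random_state=0).fit(residuals)
      labels = gmm.predict(residuals)
      ground_component = int(np.argmin(gmm.means_.ravel()))
      is_ground = labels == ground_component
      print("points labelled as ground:", int(is_ground.sum()), "of", residuals.shape[0])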

  15. Environmental effects on the tensile strength of chemically vapor deposited silicon carbide fibers

    Science.gov (United States)

    Bhatt, R. T.; Kraitchman, M. D.

    1985-01-01

    The room temperature and elevated temperature tensile strengths of commercially available chemically vapor-deposited (CVD) silicon carbide fibers were measured after 15 min heat treatment to 1600 °C in various environments. These environments included oxygen, air, argon and nitrogen at one atmosphere and vacuum at 10^-9 atmosphere. Two types of fibers were examined which differed in the SiC content of their carbon-rich coatings. Threshold temperature for fiber strength degradation was observed to be dependent on the as-received fiber-flaw structure, on the environment and on the coating. Fractographic analyses and flexural strength measurements indicate that tensile strength losses were caused by surface degradation. Oxidation of the surface coating is suggested as one possible degradation mechanism. The SiC fibers containing the higher percentage of SiC near the surface of the carbon-rich coating show better strength retention and higher elevated temperature strength.

  16. Theory of threshold phenomena

    International Nuclear Information System (INIS)

    Hategan, Cornel

    2002-01-01

    Theory of Threshold Phenomena in Quantum Scattering is developed in terms of Reduced Scattering Matrix. Relationships of different types of threshold anomalies both to nuclear reaction mechanisms and to nuclear reaction models are established. Magnitude of threshold effect is related to spectroscopic factor of zero-energy neutron state. The Theory of Threshold Phenomena, based on Reduced Scattering Matrix, does establish relationships between different types of threshold effects and nuclear reaction mechanisms: the cusp and non-resonant potential scattering, s-wave threshold anomaly and compound nucleus resonant scattering, p-wave anomaly and quasi-resonant scattering. A threshold anomaly related to resonant or quasi resonant scattering is enhanced provided the neutron threshold state has large spectroscopic amplitude. The Theory contains, as limit cases, Cusp Theories and also results of different nuclear reactions models as Charge Exchange, Weak Coupling, Bohr and Hauser-Feshbach models. (author)

  17. Marsh collapse thresholds for coastal Louisiana estimated using elevation and vegetation index data

    Science.gov (United States)

    Couvillion, Brady R.; Beck, Holly

    2013-01-01

    Forecasting marsh collapse in coastal Louisiana as a result of changes in sea-level rise, subsidence, and accretion deficits necessitates an understanding of thresholds beyond which inundation stress impedes marsh survival. The variability in thresholds at which different marsh types cease to occur (i.e., marsh collapse) is not well understood. We utilized remotely sensed imagery, field data, and elevation data to help gain insight into the relationships between vegetation health and inundation. A Normalized Difference Vegetation Index (NDVI) dataset was calculated using remotely sensed data at peak biomass (August) and used as a proxy for vegetation health and productivity. Statistics were calculated for NDVI values by marsh type for intermediate, brackish, and saline marsh in coastal Louisiana. Marsh-type specific NDVI values of 1.5 and 2 standard deviations below the mean were used as upper and lower limits to identify conditions indicative of collapse. As marshes seldom occur beyond these values, they are believed to represent a range within which marsh collapse is likely to occur. Inundation depth was selected as the primary candidate for evaluation of marsh collapse thresholds. Elevation relative to mean water level (MWL) was calculated by subtracting MWL from an elevation dataset compiled from multiple data types including light detection and ranging (lidar) and bathymetry. A polynomial cubic regression was used to examine a random subset of pixels to determine the relationship between elevation (relative to MWL) and NDVI. The marsh collapse uncertainty range values were found by locating the intercept of the regression line with the 1.5 and 2 standard deviations below the mean NDVI value for each marsh type. Results indicate marsh collapse uncertainty ranges of 30.7–35.8 cm below MWL for intermediate marsh, 20–25.6 cm below MWL for brackish marsh, and 16.9–23.5 cm below MWL for saline marsh. These values are thought to represent the ranges of inundation depth within which collapse of each marsh type is likely to occur.
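
    The threshold-finding step can be sketched as a cubic polynomial fit of NDVI against elevation relative to MWL, solved for the elevation at which the fitted curve crosses an NDVI cutoff of the mean minus 1.5 standard deviations; all values below are illustrative.

      import numpy as np

      # Illustrative samples: elevation relative to MWL (m) and peak-biomass NDVI.
      rng = np.random.default_rng(6)
      elev = rng.uniform(-0.6, 0.6, 400)
      ndvi = 0.55 + 0.6 * elev - 0.4 * elev**2 + 0.2 * elev**3 + rng.normal(0, 0.05, 400)

      coeffs = np.polyfit(elev, ndvi, deg=3)          # cubic regression of NDVI on elevation
      cutoff = ndvi.mean() - 1.5 * ndvi.std()         # NDVI 1.5 standard deviations below the mean

      # Solve fitted_cubic(elevation) = cutoff and keep real roots inside the observed range.
      roots = np.roots(coeffs - np.array([0.0, 0.0, 0.0, cutoff]))
      real = roots[np.isreal(roots)].real
      candidates = real[(real >= elev.min()) & (real <= elev.max())]
      print("estimated collapse-threshold elevation(s) relative to MWL (m):", np.round(candidates, 3))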

  18. Fatigue Strength of Titanium Risers - Defect Sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Babalola, Olusegun Tunde

    2001-07-01

    This study is centred on assessment of the fatigue strength of titanium fusion welds for deep-water riser applications. Deep-water risers are subjected to significant fatigue loading, and relevant fatigue data for titanium fusion welds are very scarce; hence there is a need for fatigue data and life prediction models for such weldments. The study has covered three topics: fatigue testing, fractography and defect assessment, and fracture mechanics modelling of fatigue crack growth. Two series of welded titanium specimens, 14 in each series, were fatigue tested under constant amplitude loading. Prior to fatigue testing, strain gauge measurements on some specimens were conducted to enable the definition of the stress range in the fatigue assessment procedure. The results were compared with finite solid element analysis and related to fatigue stresses in a riser pipe wall. Distribution and geometry of internal and surface defects, both in the as-welded and in the post-weld machined conditions, were assessed using fractography. This served as a tool to determine the fatigue initiation point in the welds. Fracture mechanics was applied to model the fatigue strength of titanium welds with initiation from weld defects. Two different stress intensity factor formulations for embedded, eccentrically placed cracks were used for analysis of elliptical cracks with the major axis parallel and close to one of the free surfaces. The methods were combined to give a satisfactory model for crack growth analysis. The model analyses crack growth of elliptical and semi-elliptical cracks in two directions, with updating of the crack geometry. Fatigue strength assessment was conducted using two crack growth models, the Paris-Erdogan relation with no threshold and the Donahue et al. relation with an implied threshold. The model was validated against experimental data, with a discussion on the choice of crack growth model. (author)
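
    For reference, the two crack growth relations named above are commonly written as follows (textbook forms, with C, m and the threshold range ΔK_th as fitted material parameters; the thesis' specific fits are not reproduced here):

      \frac{da}{dN} \;=\; C\,(\Delta K)^{m}
      \qquad\text{(Paris--Erdogan, no threshold)}

      \frac{da}{dN} \;=\; C\,\bigl(\Delta K - \Delta K_{\mathrm{th}}\bigr)^{m}
      \qquad\text{(Donahue et al., implied threshold)}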

  19. Particles near threshold

    International Nuclear Information System (INIS)

    Bhattacharya, T.; Willenbrock, S.

    1993-01-01

    We propose returning to the definition of the width of a particle in terms of the pole in the particle's propagator. Away from thresholds, this definition of width is equivalent to the standard perturbative definition, up to next-to-leading order; however, near a threshold, the two definitions differ significantly. The width as defined by the pole position provides more information in the threshold region than the standard perturbative definition and, in contrast with the perturbative definition, does not vanish when a two-particle s-wave threshold is approached from below

  20. Use of country of birth as an indicator of refugee background in health datasets

    Science.gov (United States)

    2014-01-01

    Background Routine public health databases contain a wealth of data useful for research among vulnerable or isolated groups, who may be under-represented in traditional medical research. Identifying specific vulnerable populations, such as resettled refugees, can be particularly challenging; often country of birth is the sole indicator of whether an individual has a refugee background. The objective of this article was to review strengths and weaknesses of different methodological approaches to identifying resettled refugees and comparison groups from routine health datasets and to propose the application of additional methodological rigour in future research. Discussion Methodological approaches to selecting refugee and comparison groups from existing routine health datasets vary widely and are often explained in insufficient detail. Linked data systems or datasets from specialized refugee health services can accurately select resettled refugee and asylum seeker groups but have limited availability and can be selective. In contrast, country of birth is commonly collected in routine health datasets but a robust method for selecting humanitarian source countries based solely on this information is required. The authors recommend use of national immigration data to objectively identify countries of birth with high proportions of humanitarian entrants, matched by time period to the study dataset. When available, additional migration indicators may help to better understand migration as a health determinant. Methodologically, if multiple countries of birth are combined, the proportion of the sample represented by each country of birth should be included, with sub-analysis of individual countries of birth potentially providing further insights, if population size allows. United Nations-defined world regions provide an objective framework for combining countries of birth when necessary. A comparison group of economic migrants from the same world region may be appropriate

  1. 25-hydroxyvitamin D status and change in physical performance and strength in older adults : the Health, Aging, and Body Composition Study

    NARCIS (Netherlands)

    Houston, Denise K; Tooze, Janet A; Neiberg, Rebecca H; Hausman, Dorothy B; Johnson, Mary Ann; Cauley, Jane A; Bauer, Doug C; Cawthon, Peggy M; Shea, M Kyla; Schwartz, Gary G; Williamson, Jeff D; Tylavsky, Frances A; Visser, Marjolein; Simonsick, Eleanor M; Harris, Tamara B; Kritchevsky, Stephen B

    2012-01-01

    Low 25-hydroxyvitamin D (25(OH)D) concentrations are common among older adults and are associated with poorer physical performance and strength, but results from longitudinal studies have been inconsistent. The 25(OH)D threshold for physical performance and strength was determined, and both

  2. Threshold states in /sup 26/Al. Pt. 2. Extraction of resonance strengths

    Energy Technology Data Exchange (ETDEWEB)

    Champagne, A E; Howard, A J; Parker, P D [Yale Univ., New Haven, CT (USA). Wright Nuclear Structure Lab.

    1983-06-20

    The total width of the E_c.m. = 376 keV resonance in the 25Mg+p system has been measured using the 25Mg(p,γ)26Al reaction and is found to be 460±70 eV. From this information, a resonance strength ωγ = 5.7x10^-16 eV is obtained for the astrophysically important 37.2 keV resonance in 26Al through an R-matrix parameterization of the relative energy dependence of the resonance width. In a similar manner, an upper limit ωγ ≤ 1.0x10^-11 eV is deduced for a possible resonance at E_c.m. = 94.0 keV.

  3. Editorial: Datasets for Learning Analytics

    NARCIS (Netherlands)

    Dietze, Stefan; George, Siemens; Davide, Taibi; Drachsler, Hendrik

    2018-01-01

    The European LinkedUp and LACE (Learning Analytics Community Exchange) projects have been responsible for setting up a series of data challenges at the LAK conferences 2013 and 2014 around the LAK dataset. The LAK dataset consists of a rich collection of full-text publications in the domain of learning analytics.

  4. The Geometry of Finite Equilibrium Datasets

    DEFF Research Database (Denmark)

    Balasko, Yves; Tvede, Mich

    We investigate the geometry of finite datasets defined by equilibrium prices, income distributions, and total resources. We show that the equilibrium condition imposes no restrictions if total resources are collinear, a property that is robust to small perturbations. We also show that the set of equilibrium datasets is path-connected when the equilibrium condition does impose restrictions on datasets, as for example when total resources are widely non-collinear.

  5. Threshold intensity factors as lower boundaries for crack propagation in ceramics

    Directory of Open Access Journals (Sweden)

    Walter Per-Ole

    2004-11-01

    Background. Slow crack growth can be described in a v (crack velocity) versus KI (stress intensity factor) diagram. Slow crack growth in ceramics is attributed to corrosion-assisted stress at the crack tip or at any pre-existing defect in the ceramic. The combined effect of high stresses at the crack tip and the presence of water or body fluid molecules (reducing surface energy at the crack tip) induces crack propagation, which eventually may result in fatigue. The presence of a threshold in the stress intensity factor, below which no crack propagation occurs, has been the subject of important research in recent years. The higher this threshold, the higher the reliability of the ceramic, and consequently the longer its lifetime. Methods. We utilize the Irwin K-field displacement relation to deduce crack tip stress intensity factors from the near-crack-tip profile. Cracks are initiated by indentation impressions. The threshold stress intensity factor is determined as the time limit of the tip stress intensity when the residual stresses have (nearly) disappeared. Results. We determined the threshold stress intensity factors for most of the all-ceramic materials presently important for dental restorations in Europe. Of special significance is the finding that alumina ceramic has a threshold limit nearly identical to that of zirconia. Conclusion. The intention of the present paper is to stress the point that the threshold stress intensity factor represents a more intrinsic property of a given ceramic material than the widely used toughness (bend strength or fracture toughness), which refers only to fast crack growth. Considering two ceramics with identical threshold limits, although with different critical stress intensity limits, means that both ceramics have identical starting points for slow crack growth. Fast catastrophic crack growth leading to spontaneous fatigue, however, is different. This growth starts later in those ceramic materials
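
    The Irwin near-tip relation used in the Methods can be sketched in its standard plane-strain form (a general fracture mechanics result, not a formula quoted from this record): the crack opening 2u_y measured a distance r behind the tip gives the tip stress intensity directly,

      2u_y(r) \;=\; \frac{8K_I}{E'}\sqrt{\frac{r}{2\pi}},
      \qquad E' = \frac{E}{1-\nu^{2}}
      \quad\Longrightarrow\quad
      K_I \;=\; \frac{E'}{8}\,\bigl(2u_y(r)\bigr)\sqrt{\frac{2\pi}{r}}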

  6. The evolution of hillslope strength following large earthquakes

    Science.gov (United States)

    Brain, Matthew; Rosser, Nick; Tunstall, Neil

    2017-04-01

    Earthquake-induced landslides play an important role in the evolution of mountain landscapes. Earthquake ground shaking triggers near-instantaneous landsliding, but has also been shown to weaken hillslopes, preconditioning them for failure during subsequent seismicity and/or precipitation events. The temporal evolution of hillslope strength during and following primary seismicity, and if and how this ultimately results in failure, is poorly constrained due to the rarity of high-magnitude earthquakes and limited availability of suitable field datasets. We present results obtained from novel geotechnical laboratory tests to better constrain the mechanisms that control strength evolution in Earth materials of differing rheology. We consider how the strength of hillslope materials responds to ground-shaking events of different magnitude and if and how this persists to influence landslide activity during interseismic periods. We demonstrate the role of stress path and stress history, strain rate and foreshock and aftershock sequences in controlling the evolution of hillslope strength and stability. Critically, we show how hillslopes can be strengthened rather than weakened in some settings, challenging conventional assumptions. On the basis of our laboratory data, we consider the implications for earthquake-induced geomorphic perturbations in mountain landscapes over multiple timescales and in different seismogenic settings.

  7. Development and validation of a national data registry for midwife-led births: the Midwives Alliance of North America Statistics Project 2.0 dataset.

    Science.gov (United States)

    Cheyney, Melissa; Bovbjerg, Marit; Everson, Courtney; Gordon, Wendy; Hannibal, Darcy; Vedam, Saraswathi

    2014-01-01

    In 2004, the Midwives Alliance of North America's (MANA's) Division of Research developed a Web-based data collection system to gather information on the practices and outcomes associated with midwife-led births in the United States. This system, called the MANA Statistics Project (MANA Stats), grew out of a widely acknowledged need for more reliable data on outcomes by intended place of birth. This article describes the history and development of the MANA Stats birth registry and provides an analysis of the 2.0 dataset's content, strengths, and limitations. Data collection and review procedures for the MANA Stats 2.0 dataset are described, along with methods for the assessment of data accuracy. We calculated descriptive statistics for client demographics and contributing midwife credentials, and assessed the quality of data by calculating point estimates, 95% confidence intervals, and kappa statistics for key outcomes on pre- and postreview samples of records. The MANA Stats 2.0 dataset (2004-2009) contains 24,848 courses of care, 20,893 of which are for women who planned a home or birth center birth at the onset of labor. The majority of these records were planned home births (81%). Births were attended primarily by certified professional midwives (73%), and clients were largely white (92%), married (87%), and college-educated (49%). Data quality analyses of 9932 records revealed no differences between pre- and postreviewed samples for 7 key benchmarking variables (kappa, 0.98-1.00). The MANA Stats 2.0 data were accurately entered by participants; any errors in this dataset are likely random and not systematic. The primary limitation of the 2.0 dataset is that the sample was captured through voluntary participation; thus, it may not accurately reflect population-based outcomes. The dataset's primary strength is that it will allow for the examination of research questions on normal physiologic birth and midwife-led birth outcomes by intended place of birth.

  8. Dipole strength distributions from HIGS Experiments

    Science.gov (United States)

    Werner, V.; Cooper, N.; Goddard, P. M.; Humby, P.; Ilieva, R. S.; Rusev, G.; Beller, J.; Bernards, C.; Crider, B. P.; Isaak, J.; Kelley, J. H.; Kwan, E.; Löher, B.; Peters, E. E.; Pietralla, N.; Romig, C.; Savran, D.; Scheck, M.; Tonchev, A. P.; Tornow, W.; Yates, S. W.; Zweidinger, M.

    2015-05-01

    A series of photon scattering experiments has been performed on the double-beta decay partners 76Ge and 76Se, in order to investigate their dipole response up to the neutron separation threshold. Gamma-ray beams from bremsstrahlung at the S-DALINAC and from Compton-backscattering at HIGS have been used to measure absolute cross sections and parities of dipole excited states, respectively. The HIGS data allows for indirect measurement of averaged branching ratios, which leads to significant corrections in the observed excitation cross sections. Results are compared to statistical calculations, to test photon strength functions and the Axel-Brink hypothesis

  9. Music effect on pain threshold evaluated with current perception threshold

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    AIM: Music relieves anxiety and psychotic tension. This effect of music is applied to surgical operations in hospitals and dental offices. It is still unclear whether the effect of music is limited to the psychological aspect rather than the physical aspect, or whether it is influenced by the mood or emotion of the listener. To elucidate these issues, we evaluated the effect of music on pain threshold using the current perception threshold (CPT) and the Profile of Mood States (POMS) test. METHODS: Thirty healthy subjects (12 men, 18 women, 25-49 years old, mean age 34.9) were tested. (1) After the POMS test, the pain threshold of all subjects was evaluated with CPT by Neurometer (Radionics, USA) under six conditions: silence, listening to slow-tempo classical music, nursery music, hard rock music, classical piano music and relaxation music, with 30-second intervals. (2) After the Stroop color-word test as the stressor, pain threshold was evaluated with CPT under two conditions: silence and listening to slow-tempo classical music. RESULTS: While listening to music, CPT scores increased, especially at the 2 000 Hz level, which is related to compression, warmth and pain sensation. Type of music, preference for the music and stress also affected the CPT score. CONCLUSION: The present study demonstrated that concentration on music raises the pain threshold and that stress and mood influence the effect of music on the pain threshold.

  10. The SAIL databank: linking multiple health and social care datasets.

    Science.gov (United States)

    Lyons, Ronan A; Jones, Kerina H; John, Gareth; Brooks, Caroline J; Verplancke, Jean-Philippe; Ford, David V; Brown, Ginevra; Leake, Ken

    2009-01-16

    Vast amounts of data are collected about patients and service users in the course of health and social care service delivery. Electronic data systems for patient records have the potential to revolutionise service delivery and research. But in order to achieve this, it is essential that the ability to link the data at the individual record level be retained whilst adhering to the principles of information governance. The SAIL (Secure Anonymised Information Linkage) databank has been established using disparate datasets, and over 500 million records from multiple health and social care service providers have been loaded to date, with further growth in progress. Having established the infrastructure of the databank, the aim of this work was to develop and implement an accurate matching process to enable the assignment of a unique Anonymous Linking Field (ALF) to person-based records to make the databank ready for record-linkage research studies. An SQL-based matching algorithm (MACRAL, Matching Algorithm for Consistent Results in Anonymised Linkage) was developed for this purpose. Firstly, the suitability of using a valid NHS number as the basis of a unique identifier was assessed using MACRAL. Secondly, MACRAL was applied in turn to match primary care, secondary care and social services datasets to the NHS Administrative Register (NHSAR), to assess the efficacy of this process, and the optimum matching technique. The validation of using the NHS number yielded specificity values > 99.8% and sensitivity values > 94.6% using probabilistic record linkage (PRL) at the 50% threshold, and error rates were low. The SAIL databank represents a research-ready platform for record-linkage studies.
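
    The matching step above links records when a probabilistic score clears a threshold (50% in the abstract) and is then judged by sensitivity and specificity. The sketch below is a generic illustration of that evaluation, not MACRAL itself; the score values and the gold-standard labels are hypothetical.

      def evaluate_linkage(candidate_pairs, threshold=0.5):
          """candidate_pairs: list of (match_probability, truly_same_person) tuples."""
          tp = fp = tn = fn = 0
          for score, is_true_match in candidate_pairs:
              linked = score >= threshold
              if linked and is_true_match:
                  tp += 1
              elif linked and not is_true_match:
                  fp += 1
              elif not linked and is_true_match:
                  fn += 1
              else:
                  tn += 1
          sensitivity = tp / (tp + fn) if tp + fn else float("nan")
          specificity = tn / (tn + fp) if tn + fp else float("nan")
          return sensitivity, specificity

      # Hypothetical scored record pairs: (probabilistic match score, gold-standard truth)
      pairs = [(0.97, True), (0.81, True), (0.42, True), (0.55, False), (0.08, False), (0.61, True)]
      sens, spec = evaluate_linkage(pairs, threshold=0.5)
      print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")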

  11. An Annotated Dataset of 14 Meat Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated images of meat. Points of correspondence are placed on each image. As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given.

  12. An iterative two-threshold analysis for single-subject functional MRI of the human brain

    Energy Technology Data Exchange (ETDEWEB)

    Auer, Tibor; Schweizer, Renate; Frahm, Jens [Biomedizinische NMR Forschungs GmbH am Max-Planck-Institut fuer Biophysikalische Chemie, Goettingen (Germany)

    2011-11-15

    Current thresholding strategies for the analysis of functional MRI (fMRI) datasets may suffer from specific limitations (e.g. with respect to the required smoothness) or lead to reduced performance for a low signal-to-noise ratio (SNR). Although a previously proposed two-threshold (TT) method offers a promising solution to these problems, the use of preset settings limits its performance. This work presents an optimised TT approach that estimates the required parameters in an iterative manner. The iterative TT (iTT) method is compared with the original TT method, as well as other established voxel-based and cluster-based thresholding approaches and spatial mixture modelling (SMM) for both simulated data and fMRI of a hometown walking task at different experimental settings (spatial resolution, filtering and SNR). In general, the iTT method presents with remarkable sensitivity and good specificity that outperforms all conventional approaches tested except for SMM in a few cases. This also holds true for challenging conditions such as high spatial resolution, the absence of filtering, high noise level, or a low number of task repetitions. Thus, iTT emerges as a good candidate for both scientific fMRI studies at high spatial resolution and more routine applications for clinical purposes. (orig.)
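
    The paper's iterative two-threshold method estimates its parameters from the data, and that estimation is not reproduced here. As a rough sketch of the general two-threshold idea it builds on, the code below keeps voxels above a low threshold only if they belong to a connected cluster that also contains a voxel above the high threshold (hysteresis-style thresholding); both threshold values and the toy map are arbitrary placeholders.

      import numpy as np
      from scipy import ndimage

      def two_threshold_mask(stat_map, low, high):
          """Keep low-threshold clusters that contain at least one high-threshold voxel."""
          low_mask = stat_map > low
          labels, n_clusters = ndimage.label(low_mask)      # connected components of the low mask
          keep = np.zeros_like(low_mask, dtype=bool)
          for lab in range(1, n_clusters + 1):
              cluster = labels == lab
              if (stat_map[cluster] > high).any():          # cluster is "anchored" by a strong voxel
                  keep |= cluster
          return keep

      # Toy statistical map with one strong blob and one weak, unanchored blob
      rng = np.random.default_rng(0)
      stat_map = rng.normal(0, 1, size=(32, 32, 16))
      stat_map[5:9, 5:9, 5:8] += 4.0
      stat_map[20:24, 20:24, 5:8] += 1.5
      mask = two_threshold_mask(stat_map, low=2.0, high=3.5)
      print("suprathreshold voxels kept:", int(mask.sum()))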

  13. Comparison of recent SnIa datasets

    International Nuclear Information System (INIS)

    Sanchez, J.C. Bueno; Perivolaropoulos, L.; Nesseris, S.

    2009-01-01

    We rank the six latest Type Ia supernova (SnIa) datasets (Constitution (C), Union (U), ESSENCE (Davis) (E), Gold06 (G), SNLS 1yr (S) and SDSS-II (D)) in the context of the Chevallier-Polarski-Linder (CPL) parametrization w(a) = w0 + w1(1 − a), according to their Figure of Merit (FoM), their consistency with the cosmological constant (ΛCDM), their consistency with standard rulers (Cosmic Microwave Background (CMB) and Baryon Acoustic Oscillations (BAO)) and their mutual consistency. We find a significant improvement of the FoM (defined as the inverse area of the 95.4% parameter contour) with the number of SnIa of these datasets ((C) highest FoM, (U), (G), (D), (E), (S) lowest FoM). Standard rulers (CMB+BAO) have a better FoM by about a factor of 3, compared to the highest FoM SnIa dataset (C). We also find that the ranking sequence based on consistency with ΛCDM is identical with the corresponding ranking based on consistency with standard rulers ((S) most consistent, (D), (C), (E), (U), (G) least consistent). The ranking sequence of the datasets however changes when we consider the consistency with an expansion history corresponding to evolving dark energy (w0, w1) = (−1.4, 2) crossing the phantom divide line w = −1 (it is practically reversed to (G), (U), (E), (S), (D), (C)). The SALT2 and MLCS2k2 fitters are also compared and some peculiar features of the SDSS-II dataset when standardized with the MLCS2k2 fitter are pointed out. Finally, we construct a statistic to estimate the internal consistency of a collection of SnIa datasets. We find that even though there is good consistency among most samples taken from the above datasets, this consistency decreases significantly when the Gold06 (G) dataset is included in the sample
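
    For readers who want to reproduce the style of comparison described above, the sketch below evaluates the CPL equation of state w(a) = w0 + w1(1 − a) and computes a Figure of Merit as the inverse area of the 95.4% (Δχ² ≈ 6.17 for two parameters) confidence ellipse implied by a parameter covariance matrix. The covariance values are invented, and FoM normalisation conventions vary between papers.

      import numpy as np

      def w_cpl(a, w0, w1):
          """Chevallier-Polarski-Linder dark-energy equation of state."""
          return w0 + w1 * (1.0 - a)

      def fom_from_covariance(cov, delta_chi2=6.17):
          """Inverse area of the Δχ² confidence ellipse of a 2-parameter Gaussian posterior."""
          area = np.pi * delta_chi2 * np.sqrt(np.linalg.det(cov))
          return 1.0 / area

      # Hypothetical (w0, w1) covariance matrix from a SnIa fit
      cov_w0_w1 = np.array([[ 0.04, -0.10],
                            [-0.10,  0.45]])
      print("w(a=0.5) for (w0, w1) = (-1.4, 2):", w_cpl(0.5, -1.4, 2.0))
      print("FoM:", fom_from_covariance(cov_w0_w1))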

  14. SIMADL: Simulated Activities of Daily Living Dataset

    Directory of Open Access Journals (Sweden)

    Talal Alshammari

    2018-04-01

    With the realisation of the Internet of Things (IoT) paradigm, the analysis of the Activities of Daily Living (ADLs) in a smart home environment is becoming an active research domain. The existence of representative datasets is a key requirement to advance the research in smart home design. Such datasets are an integral part of the visualisation of new smart home concepts as well as the validation and evaluation of emerging machine learning models. Machine learning techniques that can learn ADLs from sensor readings are used to classify, predict and detect anomalous patterns. Such techniques require data that represent relevant smart home scenarios, for training, testing and validation. However, the development of such machine learning techniques is limited by the lack of real smart home datasets, due to the excessive cost of building real smart homes. This paper provides two datasets for classification and anomaly detection. The datasets are generated using OpenSHS (Open Smart Home Simulator), which is a simulation software for dataset generation. OpenSHS records the daily activities of a participant within a virtual environment. Seven participants simulated their ADLs for different contexts, e.g., weekdays, weekends, mornings and evenings. Eighty-four files in total were generated, representing approximately 63 days' worth of activities. Forty-two files of classification of ADLs were simulated in the classification dataset and the other forty-two files are for anomaly detection problems in which anomalous patterns were simulated and injected into the anomaly detection dataset.

  15. The NOAA Dataset Identifier Project

    Science.gov (United States)

    de la Beaujardiere, J.; Mccullough, H.; Casey, K. S.

    2013-12-01

    The US National Oceanic and Atmospheric Administration (NOAA) initiated a project in 2013 to assign persistent identifiers to datasets archived at NOAA and to create informational landing pages about those datasets. The goals of this project are to enable the citation of datasets used in products and results in order to help provide credit to data producers, to support traceability and reproducibility, and to enable tracking of data usage and impact. A secondary goal is to encourage the submission of datasets for long-term preservation, because only archived datasets will be eligible for a NOAA-issued identifier. A team was formed with representatives from the National Geophysical, Oceanographic, and Climatic Data Centers (NGDC, NODC, NCDC) to resolve questions including which identifier scheme to use (answer: Digital Object Identifier - DOI), whether or not to embed semantics in identifiers (no), the level of granularity at which to assign identifiers (as coarsely as reasonable), how to handle ongoing time-series data (do not break into chunks), creation mechanism for the landing page (stylesheet from formal metadata record preferred), and others. Decisions made and implementation experience gained will inform the writing of a Data Citation Procedural Directive to be issued by the Environmental Data Management Committee in 2014. Several identifiers have been issued as of July 2013, with more on the way. NOAA is now reporting the number as a metric to federal Open Government initiatives. This paper will provide further details and status of the project.

  16. Control Measure Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — The EPA Control Measure Dataset is a collection of documents describing air pollution control available to regulated facilities for the control and abatement of air...

  17. The Kinetics Human Action Video Dataset

    OpenAIRE

    Kay, Will; Carreira, Joao; Simonyan, Karen; Zhang, Brian; Hillier, Chloe; Vijayanarasimhan, Sudheendra; Viola, Fabio; Green, Tim; Back, Trevor; Natsev, Paul; Suleyman, Mustafa; Zisserman, Andrew

    2017-01-01

    We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some ...

  18. Prediction of bone strength by μCT and MDCT-based finite-element-models: How much spatial resolution is needed?

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, Jan S., E-mail: jsb@tum.de [Department of Radiology, Technische Universität München, Munich (Germany); Department of Radiology, University of California, San Francisco, CA (United States); Max Planck Institute for Extraterrestrial Physics, Garching (Germany); Sidorenko, Irina [Max Planck Institute for Extraterrestrial Physics, Garching (Germany); Mueller, Dirk [Department of Radiology, Universität Köln (Germany); Baum, Thomas [Department of Radiology, Technische Universität München, Munich (Germany); Department of Radiology, University of California, San Francisco, CA (United States); Max Planck Institute for Extraterrestrial Physics, Garching (Germany); Issever, Ahi Sema [Department of Radiology, University of California, San Francisco, CA (United States); Department of Radiology, Charite, Berlin (Germany); Eckstein, Felix [Institute of Anatomy and Musculoskeletal Research, Paracelsus Medical University, Salzburg (Austria); Rummeny, Ernst J. [Department of Radiology, Technische Universität München, Munich (Germany); Link, Thomas M. [Department of Radiology, University of California, San Francisco, CA (United States); Raeth, Christoph W. [Max Planck Institute for Extraterrestrial Physics, Garching (Germany)

    2014-01-15

    Objectives: Finite-element-models (FEM) are a promising technology to predict bone strength and fracture risk. Usually, the highest spatial resolution technically available is used, but this requires excessive computation time and memory in numerical simulations of large volumes. Thus, FEM were compared at decreasing resolutions with respect to local strain distribution and prediction of failure load to (1) validate MDCT-based FEM and to (2) optimize spatial resolution to save computation time. Materials and methods: 20 cylindrical trabecular bone specimens (diameter 12 mm, length 15–20 mm) were harvested from elderly formalin-fixed human thoracic spines. All specimens were examined by micro-CT (isotropic resolution 30 μm) and whole-body multi-row-detector computed tomography (MDCT, 250 μm × 250 μm × 500 μm). The resolution of all datasets was lowered in eight steps to ∼2000 μm × 2000 μm × 500 μm and FEM were calculated at all resolutions. Failure load was determined by biomechanical testing. Probability density functions of local micro-strains were compared in all datasets and correlations between FEM-based and biomechanically measured failure loads were determined. Results: The distribution of local micro-strains was similar for micro-CT and MDCT at comparable resolutions and showed a shift toward higher average values with decreasing resolution, corresponding to the increasing apparent trabecular thickness. Small micro-strains (ε_eff < 0.005) could be calculated down to 250 μm × 250 μm × 500 μm. Biomechanically determined failure load showed significant correlations with all FEM, up to r = 0.85 and did not significantly change with lower resolution but decreased with high thresholds, due to loss of trabecular connectivity. Conclusion: When choosing connectivity-preserving thresholds, both micro-CT- and MDCT-based finite-element-models well predicted failure load and still accurately revealed the distribution of local micro-strains in

  19. Prediction of bone strength by μCT and MDCT-based finite-element-models: How much spatial resolution is needed?

    International Nuclear Information System (INIS)

    Bauer, Jan S.; Sidorenko, Irina; Mueller, Dirk; Baum, Thomas; Issever, Ahi Sema; Eckstein, Felix; Rummeny, Ernst J.; Link, Thomas M.; Raeth, Christoph W.

    2014-01-01

    Objectives: Finite-element-models (FEM) are a promising technology to predict bone strength and fracture risk. Usually, the highest spatial resolution technically available is used, but this requires excessive computation time and memory in numerical simulations of large volumes. Thus, FEM were compared at decreasing resolutions with respect to local strain distribution and prediction of failure load to (1) validate MDCT-based FEM and to (2) optimize spatial resolution to save computation time. Materials and methods: 20 cylindrical trabecular bone specimens (diameter 12 mm, length 15–20 mm) were harvested from elderly formalin-fixed human thoracic spines. All specimens were examined by micro-CT (isotropic resolution 30 μm) and whole-body multi-row-detector computed tomography (MDCT, 250 μm × 250 μm × 500 μm). The resolution of all datasets was lowered in eight steps to ∼2000 μm × 2000 μm × 500 μm and FEM were calculated at all resolutions. Failure load was determined by biomechanical testing. Probability density functions of local micro-strains were compared in all datasets and correlations between FEM-based and biomechanically measured failure loads were determined. Results: The distribution of local micro-strains was similar for micro-CT and MDCT at comparable resolutions and showed a shift toward higher average values with decreasing resolution, corresponding to the increasing apparent trabecular thickness. Small micro-strains (ε eff < 0.005) could be calculated down to 250 μm × 250 μm × 500 μm. Biomechanically determined failure load showed significant correlations with all FEM, up to r = 0.85 and did not significantly change with lower resolution but decreased with high thresholds, due to loss of trabecular connectivity. Conclusion: When choosing connectivity-preserving thresholds, both micro-CT- and MDCT-based finite-element-models well predicted failure load and still accurately revealed the distribution of local micro-strains in spatial

  20. Comparison of CORA and EN4 in-situ datasets validation methods, toward a better quality merged dataset.

    Science.gov (United States)

    Szekely, Tanguy; Killick, Rachel; Gourrion, Jerome; Reverdin, Gilles

    2017-04-01

    CORA and EN4 are both global delayed-time-mode validated in-situ ocean temperature and salinity datasets distributed by the Met Office (http://www.metoffice.gov.uk/) and Copernicus (www.marine.copernicus.eu). A large part of the profiles distributed by CORA and EN4 in recent years are Argo profiles from the Argo DAC, but profiles are also extracted from the World Ocean Database, along with TESAC profiles from GTSPP. In the case of CORA, data coming from the EuroGOOS Regional Operational Observing Systems (ROOS) operated by European institutes not managed by National Data Centres, as well as other sets of profiles provided by scientific sources, can also be found (sea mammal profiles from MEOP, XBT datasets from cruises, ...). (EN4 also takes data from the ASBO dataset to supplement observations in the Arctic.) The first advantage of this new merged product is to enhance the space and time coverage at global and European scales for the period from 1950 up to a year before the current year. This product is updated once a year, and T&S gridded fields are also generated for the period from 1990 to year n-1. The enhancement compared to the previous CORA product will be presented. Although the profiles distributed by both datasets are mostly the same, the quality-control procedures developed by the Met Office and Copernicus teams differ, sometimes leading to different quality-control flags for the same profile. A new study, started in 2016, aims to compare both validation procedures in order to move towards a Copernicus Marine Service dataset with the best features of CORA and EN4 validation. A reference dataset composed of the full set of in-situ temperature and salinity measurements collected by Coriolis during 2015 is used. These measurements have been made with a wide range of instruments (XBTs, CTDs, Argo floats, instrumented sea mammals, ...), covering the global ocean. The reference dataset has been validated simultaneously by both teams. An exhaustive comparison of the

  1. Determination of action thresholds for electromagnetic tracking system-guided hypofractionated prostate radiotherapy using volumetric modulated arc therapy

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Pengpeng; Mah, Dennis; Happersett, Laura; Cox, Brett; Hunt, Margie; Mageras, Gig [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10021 (United States); Department of Radiation Oncology, Montefiore Medical Center, Bronx, New York 10467 (United States); Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10021 (United States); Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York 10021 (United States); Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10021 (United States)

    2011-07-15

    Purpose: Hypofractionated prostate radiotherapy may benefit from both volumetric modulated arc therapy (VMAT) due to shortened treatment time and intrafraction real-time monitoring provided by implanted radiofrequency(RF) transponders. The authors investigate dosimetrically driven action thresholds (whether treatment needs to be interrupted and patient repositioned) in VMAT treatment with electromagnetic (EM) tracking. Methods: VMAT plans for five patients are generated for prescription doses of 32.5 and 42.5 Gy in five fractions. Planning target volume (PTV) encloses the clinical target volume (CTV) with a 3 mm margin at the prostate-rectal interface and 5 mm elsewhere. The VMAT delivery is modeled using 180 equi-spaced static beams. Intrafraction prostate motion is simulated in the plan by displacing the beam isocenter at each beam assuming rigid organ motion according to a previously recorded trajectory of the transponder centroid. The cumulative dose delivered in each fraction is summed over all beams. Two sets of 57 prostate motion trajectories were randomly selected to form a learning and a testing dataset. Dosimetric end points including CTV D95%, rectum wall D1cc, bladder wall D1cc, and urethra Dmax, are analyzed against motion characteristics including the maximum amplitude of the anterior-posterior (AP), superior-inferior (SI), and left-right components. Action thresholds are triggered when intrafraction motion causes any violations of dose constraints to target and organs at risk (OAR), so that treatment is interrupted and patient is repositioned. Results: Intrafraction motion has a little effect on CTV D95%, indicating PTV margins are adequate. Tight posterior and inferior action thresholds around 1 mm need to be set in a patient specific manner to spare organs at risk, especially when the prescription dose is 42.5 Gy. Advantages of setting patient specific action thresholds are to reduce false positive alarms by 25% when prescription dose is low, and

  2. Determination of action thresholds for electromagnetic tracking system-guided hypofractionated prostate radiotherapy using volumetric modulated arc therapy

    International Nuclear Information System (INIS)

    Zhang, Pengpeng; Mah, Dennis; Happersett, Laura; Cox, Brett; Hunt, Margie; Mageras, Gig

    2011-01-01

    Purpose: Hypofractionated prostate radiotherapy may benefit from both volumetric modulated arc therapy (VMAT) due to shortened treatment time and intrafraction real-time monitoring provided by implanted radiofrequency(RF) transponders. The authors investigate dosimetrically driven action thresholds (whether treatment needs to be interrupted and patient repositioned) in VMAT treatment with electromagnetic (EM) tracking. Methods: VMAT plans for five patients are generated for prescription doses of 32.5 and 42.5 Gy in five fractions. Planning target volume (PTV) encloses the clinical target volume (CTV) with a 3 mm margin at the prostate-rectal interface and 5 mm elsewhere. The VMAT delivery is modeled using 180 equi-spaced static beams. Intrafraction prostate motion is simulated in the plan by displacing the beam isocenter at each beam assuming rigid organ motion according to a previously recorded trajectory of the transponder centroid. The cumulative dose delivered in each fraction is summed over all beams. Two sets of 57 prostate motion trajectories were randomly selected to form a learning and a testing dataset. Dosimetric end points including CTV D95%, rectum wall D1cc, bladder wall D1cc, and urethra Dmax, are analyzed against motion characteristics including the maximum amplitude of the anterior-posterior (AP), superior-inferior (SI), and left-right components. Action thresholds are triggered when intrafraction motion causes any violations of dose constraints to target and organs at risk (OAR), so that treatment is interrupted and patient is repositioned. Results: Intrafraction motion has a little effect on CTV D95%, indicating PTV margins are adequate. Tight posterior and inferior action thresholds around 1 mm need to be set in a patient specific manner to spare organs at risk, especially when the prescription dose is 42.5 Gy. Advantages of setting patient specific action thresholds are to reduce false positive alarms by 25% when prescription dose is low, and

  3. Towards the high-accuracy determination of the 238U fission cross section at the threshold region at CERN – n_TOF

    Directory of Open Access Journals (Sweden)

    Diakaki M.

    2016-01-01

    The 238U fission cross section is an international standard beyond 2 MeV where the fission plateau starts. However, due to its importance in fission reactors, this cross-section should be very accurately known also in the threshold region below 2 MeV. The 238U fission cross section has been measured relative to the 235U fission cross section at CERN – n_TOF with different detection systems. These datasets have been collected and suitably combined to increase the counting statistics in the threshold region from about 300 keV up to 3 MeV. The results are compared with other experimental data, evaluated libraries, and the IAEA standards.
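
    The abstract states that datasets from different detection systems were suitably combined to increase the counting statistics, without giving the combination procedure. The sketch below shows a generic inverse-variance (statistics-weighted) average of two hypothetical cross-section ratio measurements on a common energy grid, purely as an illustration of the idea; all numbers are invented.

      import numpy as np

      def inverse_variance_combine(values, sigmas):
          """Combine independent measurements of the same quantity, weighting by 1/sigma^2."""
          values, sigmas = np.asarray(values, float), np.asarray(sigmas, float)
          weights = 1.0 / sigmas**2
          combined = np.sum(weights * values, axis=0) / np.sum(weights, axis=0)
          combined_sigma = 1.0 / np.sqrt(np.sum(weights, axis=0))
          return combined, combined_sigma

      # Hypothetical 238U/235U fission cross-section ratios from two detection systems
      energy_keV = np.array([300.0, 500.0, 1000.0, 2000.0, 3000.0])
      ratio_sys1 = np.array([1.2e-4, 4.0e-4, 9.0e-3, 0.35, 0.41])
      ratio_sys2 = np.array([1.4e-4, 3.6e-4, 8.6e-3, 0.34, 0.42])
      sigma_sys1 = 0.15 * ratio_sys1   # assumed 15% statistical uncertainty
      sigma_sys2 = 0.10 * ratio_sys2   # assumed 10% statistical uncertainty

      combined, sigma = inverse_variance_combine([ratio_sys1, ratio_sys2], [sigma_sys1, sigma_sys2])
      for e, r, s in zip(energy_keV, combined, sigma):
          print(f"{e:7.0f} keV  ratio = {r:.3e} +/- {s:.1e}")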

  4. Clusterflock: a flocking algorithm for isolating congruent phylogenomic datasets.

    Science.gov (United States)

    Narechania, Apurva; Baker, Richard; DeSalle, Rob; Mathema, Barun; Kolokotronis, Sergios-Orestis; Kreiswirth, Barry; Planet, Paul J

    2016-10-24

    Collective animal behavior, such as the flocking of birds or the shoaling of fish, has inspired a class of algorithms designed to optimize distance-based clusters in various applications, including document analysis and DNA microarrays. In a flocking model, individual agents respond only to their immediate environment and move according to a few simple rules. After several iterations the agents self-organize, and clusters emerge without the need for partitional seeds. In addition to its unsupervised nature, flocking offers several computational advantages, including the potential to reduce the number of required comparisons. In the tool presented here, Clusterflock, we have implemented a flocking algorithm designed to locate groups (flocks) of orthologous gene families (OGFs) that share an evolutionary history. Pairwise distances that measure phylogenetic incongruence between OGFs guide flock formation. We tested this approach on several simulated datasets by varying the number of underlying topologies, the proportion of missing data, and evolutionary rates, and show that in datasets containing high levels of missing data and rate heterogeneity, Clusterflock outperforms other well-established clustering techniques. We also verified its utility on a known, large-scale recombination event in Staphylococcus aureus. By isolating sets of OGFs with divergent phylogenetic signals, we were able to pinpoint the recombined region without forcing a pre-determined number of groupings or defining a pre-determined incongruence threshold. Clusterflock is an open-source tool that can be used to discover horizontally transferred genes, recombined areas of chromosomes, and the phylogenetic 'core' of a genome. Although we used it here in an evolutionary context, it is generalizable to any clustering problem. Users can write extensions to calculate any distance metric on the unit interval, and can use these distances to 'flock' any type of data.
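
    Clusterflock itself is distributed as an open-source tool; the code below is only a toy sketch of the underlying idea, in which agents (standing in for orthologous gene families) move in a 2-D arena, attracting one another when their pairwise incongruence distance is small and repelling otherwise, after which flocks are read off as connected components of the final proximity graph. The distance matrix, parameters and update rule are all simplified assumptions, not the published algorithm.

      import numpy as np
      from scipy.sparse.csgraph import connected_components

      rng = np.random.default_rng(1)

      # Toy pairwise incongruence distances on [0, 1]: two hidden groups of "gene families"
      n = 20
      group = np.array([0] * 10 + [1] * 10)
      dist = rng.uniform(0.6, 1.0, size=(n, n))                 # unrelated pairs: large distance
      same = group[:, None] == group[None, :]
      dist[same] = rng.uniform(0.0, 0.3, size=same.sum())       # congruent pairs: small distance
      dist = (dist + dist.T) / 2
      np.fill_diagonal(dist, 0.0)

      pos = rng.uniform(0, 10, size=(n, 2))                     # agent positions in a 2-D arena

      for _ in range(300):                                      # flocking iterations
          for i in range(n):
              delta = pos - pos[i]
              r = np.linalg.norm(delta, axis=1)
              near = (r > 0) & (r < 3.0)                        # react only to nearby agents
              if not near.any():
                  continue
              # attract when incongruence is low, repel when high (0.5 is an arbitrary pivot)
              sign = np.where(dist[i, near] < 0.5, 1.0, -1.0)
              step = (sign[:, None] * delta[near] / r[near, None]).mean(axis=0)
              pos[i] += 0.05 * step

      # Flocks = connected components of the "who ended up close to whom" graph
      final_r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
      n_flocks, labels = connected_components(final_r < 1.0, directed=False)
      print("flocks found:", n_flocks)
      print("labels:", labels)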

  5. Impacts of a lengthening open water season on Alaskan coastal communities: deriving locally relevant indices from large-scale datasets and community observations

    Science.gov (United States)

    Rolph, Rebecca J.; Mahoney, Andrew R.; Walsh, John; Loring, Philip A.

    2018-05-01

    Using thresholds of physical climate variables developed from community observations, together with two large-scale datasets, we have produced local indices directly relevant to the impacts of a reduced sea ice cover on Alaska coastal communities. The indices include the number of false freeze-ups defined by transient exceedances of ice concentration prior to a corresponding exceedance that persists, false break-ups, timing of freeze-up and break-up, length of the open water duration, number of days when the winds preclude hunting via boat (wind speed threshold exceedances), the number of wind events conducive to geomorphological work or damage to infrastructure from ocean waves, and the number of these wind events with on- and along-shore components promoting water setup along the coastline. We demonstrate how community observations can inform use of large-scale datasets to derive these locally relevant indices. The two primary large-scale datasets are the Historical Sea Ice Atlas for Alaska and the atmospheric output from a regional climate model used to downscale the ERA-Interim atmospheric reanalysis. We illustrate the variability and trends of these indices by application to the rural Alaska communities of Kotzebue, Shishmaref, and Utqiaġvik (previously Barrow), although the same procedure and metrics can be applied to other coastal communities. Over the 1979-2014 time period, there has been a marked increase in the number of combined false freeze-ups and false break-ups as well as the number of days too windy for hunting via boat for all three communities, especially Utqiaġvik. At Utqiaġvik, there has been an approximate tripling of the number of wind events conducive to coastline erosion from 1979 to 2014. We have also found a delay in freeze-up and earlier break-up, leading to a lengthened open water period for all of the communities examined.

  6. Impacts of a lengthening open water season on Alaskan coastal communities: deriving locally relevant indices from large-scale datasets and community observations

    Directory of Open Access Journals (Sweden)

    R. J. Rolph

    2018-05-01

    Using thresholds of physical climate variables developed from community observations, together with two large-scale datasets, we have produced local indices directly relevant to the impacts of a reduced sea ice cover on Alaska coastal communities. The indices include the number of false freeze-ups defined by transient exceedances of ice concentration prior to a corresponding exceedance that persists, false break-ups, timing of freeze-up and break-up, length of the open water duration, number of days when the winds preclude hunting via boat (wind speed threshold exceedances), the number of wind events conducive to geomorphological work or damage to infrastructure from ocean waves, and the number of these wind events with on- and along-shore components promoting water setup along the coastline. We demonstrate how community observations can inform use of large-scale datasets to derive these locally relevant indices. The two primary large-scale datasets are the Historical Sea Ice Atlas for Alaska and the atmospheric output from a regional climate model used to downscale the ERA-Interim atmospheric reanalysis. We illustrate the variability and trends of these indices by application to the rural Alaska communities of Kotzebue, Shishmaref, and Utqiaġvik (previously Barrow), although the same procedure and metrics can be applied to other coastal communities. Over the 1979–2014 time period, there has been a marked increase in the number of combined false freeze-ups and false break-ups as well as the number of days too windy for hunting via boat for all three communities, especially Utqiaġvik. At Utqiaġvik, there has been an approximate tripling of the number of wind events conducive to coastline erosion from 1979 to 2014. We have also found a delay in freeze-up and earlier break-up, leading to a lengthened open water period for all of the communities examined.
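
    The indices described in the two records above are threshold-and-persistence counts over daily time series. The sketch below illustrates two of them on synthetic data: false freeze-ups (transient exceedances of an ice-concentration threshold before the first exceedance that persists) and the number of days too windy for boat-based hunting. The 15% concentration threshold, 14-day persistence window and 10 m/s wind threshold are placeholder assumptions, not the values used in the study.

      import numpy as np

      def freeze_up_indices(ice_conc, threshold=0.15, persist_days=14):
          """Return (day index of persistent freeze-up, number of false freeze-ups)."""
          above = ice_conc >= threshold
          false_freeze_ups = 0
          day = 0
          while day < len(above):
              if above[day]:
                  run = 1
                  while day + run < len(above) and above[day + run]:
                      run += 1
                  if run >= persist_days:
                      return day, false_freeze_ups      # persistent freeze-up found
                  false_freeze_ups += 1                 # transient exceedance only
                  day += run
              else:
                  day += 1
          return None, false_freeze_ups

      # Synthetic autumn ice-concentration series (fraction) and daily wind speed (m/s)
      rng = np.random.default_rng(42)
      ice = np.clip(np.linspace(0.0, 0.6, 120) + rng.normal(0, 0.08, 120), 0, 1)
      wind = rng.gamma(shape=2.0, scale=4.0, size=120)

      freeze_day, false_events = freeze_up_indices(ice)
      too_windy_days = int((wind > 10.0).sum())

      print("freeze-up on day:", freeze_day)
      print("false freeze-ups:", false_events)
      print("days too windy for boat hunting:", too_windy_days)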

  7. A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie.

    Science.gov (United States)

    Hanke, Michael; Baumgartner, Florian J; Ibe, Pierre; Kaule, Falko R; Pollmann, Stefan; Speck, Oliver; Zinke, Wolf; Stadler, Jörg

    2014-01-01

    Here we present a high-resolution functional magnetic resonance (fMRI) dataset - 20 participants recorded at high field strength (7 Tesla) during prolonged stimulation with an auditory feature film ("Forrest Gump"). In addition, a comprehensive set of auxiliary data (T1w, T2w, DTI, susceptibility-weighted image, angiography) as well as measurements to assess technical and physiological noise components have been acquired. An initial analysis confirms that these data can be used to study common and idiosyncratic brain response patterns to complex auditory stimulation. Among the potential uses of this dataset are the study of auditory attention and cognition, language and music perception, and social perception. The auxiliary measurements enable a large variety of additional analysis strategies that relate functional response patterns to structural properties of the brain. Alongside the acquired data, we provide source code and detailed information on all employed procedures - from stimulus creation to data analysis. In order to facilitate replicative and derived works, only free and open-source software was utilized.

  8. Quality-control of an hourly rainfall dataset and climatology of extremes for the UK.

    Science.gov (United States)

    Blenkinsop, Stephen; Lewis, Elizabeth; Chan, Steven C; Fowler, Hayley J

    2017-02-01

    Sub-daily rainfall extremes may be associated with flash flooding, particularly in urban areas but, compared with extremes on daily timescales, have been relatively little studied in many regions. This paper describes a new, hourly rainfall dataset for the UK based on ∼1600 rain gauges from three different data sources. This includes tipping bucket rain gauge data from the UK Environment Agency (EA), which has been collected for operational purposes, principally flood forecasting. Significant problems in the use of such data for the analysis of extreme events include the recording of accumulated totals, high frequency bucket tips, rain gauge recording errors and the non-operation of gauges. Given the prospect of an intensification of short-duration rainfall in a warming climate, the identification of such errors is essential if sub-daily datasets are to be used to better understand extreme events. We therefore first describe a series of procedures developed to quality control this new dataset. We then analyse ∼380 gauges with near-complete hourly records for 1992-2011 and map the seasonal climatology of intense rainfall based on UK hourly extremes using annual maxima, n-largest events and fixed threshold approaches. We find that the highest frequencies and intensities of hourly extreme rainfall occur during summer when the usual orographically defined pattern of extreme rainfall is replaced by a weaker, north-south pattern. A strong diurnal cycle in hourly extremes, peaking in late afternoon to early evening, is also identified in summer and, for some areas, in spring. This likely reflects the different mechanisms that generate sub-daily rainfall, with convection dominating during summer. The resulting quality-controlled hourly rainfall dataset will provide considerable value in several contexts, including the development of standard, globally applicable quality-control procedures for sub-daily data, the validation of the new generation of very high
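
    The climatology above is built from indices such as annual maxima, n-largest events and fixed-threshold exceedance counts. As a minimal sketch (not the paper's processing chain), the code below derives annual maxima and yearly counts of hourly totals above a fixed threshold from a synthetic hourly series; the 4 mm/h threshold and the synthetic record are arbitrary placeholders.

      import numpy as np
      import pandas as pd

      # Synthetic hourly rainfall record (mm), 1992-2011
      rng = np.random.default_rng(7)
      index = pd.date_range("1992-01-01", "2011-12-31 23:00", freq="h")
      wet = rng.random(len(index)) < 0.08                          # roughly 8% of hours are wet
      rain = pd.Series(np.where(wet, rng.gamma(0.8, 2.0, len(index)), 0.0), index=index)

      by_year = rain.groupby(rain.index.year)
      annual_maxima = by_year.max()                                # AMAX series, one value per year
      exceedances = by_year.apply(lambda x: int((x > 4.0).sum()))  # fixed-threshold counts (>4 mm/h)

      print(annual_maxima.head())
      print(exceedances.head())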

  9. Importance of tie strengths in the prisoner's dilemma game on social networks

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Bo, E-mail: xubosuper@163.com [Department of Information Systems, School of Economics and Management, Beihang University (China); Liu, Lu; You, Weijia [Department of Information Systems, School of Economics and Management, Beihang University (China)

    2011-06-13

    Though numerous studies have shown that tie strengths play a key role in the formation of collective behavior in social networks, little work has been done to explore their impact on the outcome of evolutionary games. In this Letter, we studied the effect of tie strength on the dynamics of evolutionary prisoner's dilemma games by using online social network datasets. The results show that the fraction of cooperators has a non-trivial dependence on tie strength. Weak ties, as previous research on epidemics and information diffusion has shown, play a key role in the maintenance of cooperators in evolutionary prisoner's dilemma games. -- Highlights: → Tie strength is used to measure heterogeneous influences of different pairs of nodes. → Weak ties play a role in maintaining cooperation in prisoner's dilemma games. → Micro-dynamics of nodes are illustrated to explain the conclusion.
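
    As a toy illustration of the kind of simulation described above (not the authors' setup), the sketch below runs a prisoner's dilemma on a small random weighted graph: each node plays all neighbours, accumulating a payoff weighted by tie strength, and then imitates a strength-weighted random neighbour with a Fermi probability. The network, the weak-PD payoffs (R = 1, P = S = 0, T = b) and the noise parameter are all assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 50
      # Random symmetric tie-strength matrix; roughly 10% of pairs are connected
      w = np.triu(rng.random((n, n)) * (rng.random((n, n)) < 0.1), k=1)
      w = w + w.T

      b = 1.4                                   # temptation to defect
      K = 0.1                                   # noise in the Fermi imitation rule
      strategy = rng.integers(0, 2, size=n)     # 1 = cooperate, 0 = defect

      def payoffs(strategy, w):
          pay = np.zeros(n)
          for i in range(n):
              for j in np.nonzero(w[i])[0]:
                  if strategy[i] == 1 and strategy[j] == 1:
                      pay[i] += w[i, j] * 1.0   # mutual cooperation
                  elif strategy[i] == 0 and strategy[j] == 1:
                      pay[i] += w[i, j] * b     # defector exploits cooperator
          return pay

      for step in range(200):
          pay = payoffs(strategy, w)
          for i in range(n):
              neigh = np.nonzero(w[i])[0]
              if len(neigh) == 0:
                  continue
              j = rng.choice(neigh, p=w[i, neigh] / w[i, neigh].sum())  # strength-weighted choice
              if rng.random() < 1.0 / (1.0 + np.exp(-(pay[j] - pay[i]) / K)):
                  strategy[i] = strategy[j]

      print("final fraction of cooperators:", strategy.mean())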

  10. Threshold Signature Schemes Application

    Directory of Open Access Journals (Sweden)

    Anastasiya Victorovna Beresneva

    2015-10-01

    This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generating and verifying threshold signatures were explored, and the practical applicability of threshold schemes to mobile agents, Internet banking and e-currency was shown. Topics for further investigation were given, which could reduce the level of counterfeit electronic documents signed by a group of users.
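
    As background to the Lagrange-interpolation constructions mentioned above, the sketch below implements plain Shamir (t, n) secret sharing over a prime field, the building block that lets any t of n parties reconstruct a shared signing secret. It is a textbook illustration, not any of the schemes surveyed in the article, and the prime and secret are for demonstration only.

      import random

      P = 2**127 - 1          # a Mersenne prime large enough for a demo secret

      def make_shares(secret, t, n):
          """Split `secret` into n shares, any t of which reconstruct it."""
          coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
          def f(x):
              return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
          return [(x, f(x)) for x in range(1, n + 1)]

      def reconstruct(shares):
          """Lagrange interpolation at x = 0 over GF(P)."""
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num, den = 1, 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = (num * (-xj)) % P
                      den = (den * (xi - xj)) % P
              secret = (secret + yi * num * pow(den, -1, P)) % P
          return secret

      shares = make_shares(secret=123456789, t=3, n=5)
      print(reconstruct(shares[:3]))        # any 3 shares recover the secret
      print(reconstruct(shares[1:4]))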

  11. Thresholds in radiobiology

    International Nuclear Information System (INIS)

    Katz, R.; Hofmann, W.

    1982-01-01

    Interpretations of biological radiation effects frequently use the word 'threshold'. The meaning of this word is explored together with its relationship to the fundamental character of radiation effects and to the question of perception. It is emphasised that although the existence of either a dose or an LET threshold can never be settled by experimental radiobiological investigations, it may be argued on fundamental statistical grounds that for all statistical processes, and especially where the number of observed events is small, the concept of a threshold is logically invalid. (U.K.)

  12. Fluxnet Synthesis Dataset Collaboration Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Deborah A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Humphrey, Marty [Univ. of Virginia, Charlottesville, VA (United States); van Ingen, Catharine [Microsoft. San Francisco, CA (United States); Beekwilder, Norm [Univ. of Virginia, Charlottesville, VA (United States); Goode, Monte [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jackson, Keith [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rodriguez, Matt [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Weber, Robin [Univ. of California, Berkeley, CA (United States)

    2008-02-06

    The Fluxnet synthesis dataset originally compiled for the La Thuile workshop contained approximately 600 site years. Since the workshop, several additional site years have been added and the dataset now contains over 920 site years from over 240 sites. A data refresh update is expected to increase those numbers in the next few months. The ancillary data describing the sites continues to evolve as well. There are on the order of 120 site contacts, and 60 proposals have been approved to use the data. These proposals involve around 120 researchers. The size and complexity of the dataset and collaboration have led to a new approach to providing data access and collaboration support. The support team attended the workshop and worked closely with the attendees and the Fluxnet project office to define the requirements for the support infrastructure. As a result of this effort, a new website (http://www.fluxdata.org) has been created to provide access to the Fluxnet synthesis dataset. This new web site is based on a scientific data server which enables browsing of the data on-line, data download, and version tracking. We leverage database and data analysis tools such as OLAP data cubes and web reports to enable browser and Excel pivot table access to the data.

  13. Simulation of Smart Home Activity Datasets

    Directory of Open Access Journals (Sweden)

    Jonathan Synnott

    2015-06-01

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendation for future work in intelligent environment simulation.

  14. Simulation of Smart Home Activity Datasets.

    Science.gov (United States)

    Synnott, Jonathan; Nugent, Chris; Jeffers, Paul

    2015-06-16

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendation for future work in intelligent environment simulation.

  15. High Temperature Strength of Oxide Dispersion Strengthened Aluminium

    DEFF Research Database (Denmark)

    Clauer, A.H.; Hansen, Niels

    1984-01-01

    The tensile flow stress of coarse-grained dispersion strengthened Al-Al2O3 materials was measured as a function of temperature (77–873 K) and volume fraction (0.19–0.92 vol.%) of aluminium oxide. For the same material, the creep strength was determined as a function of temperature in the range 573–873 K. The modulus-corrected yield stress (0.01 offset) is found to be temperature independent at low temperature (195–472 K). Between 473 and 573 K, the yield stress starts to decrease with increasing temperature. At high temperatures (573–873 K), the modulus-corrected yield stress is approximately constant (except for the material with the lowest oxide content). The high temperature values of the modulus-corrected yield stresses are approximately two-thirds of the low temperature value. During high temperature creep, there is a definite indication of a threshold stress.
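
    The threshold stress indicated by the creep data above is commonly discussed in terms of the standard phenomenological creep law for dispersion-strengthened alloys, in which the strain rate depends on the stress in excess of a threshold. The expression below is that generic textbook form, quoted here as context rather than as the fitting model used by the authors:

      \dot{\varepsilon} = A \, \left( \sigma - \sigma_{\mathrm{th}} \right)^{n} \exp\!\left( -\frac{Q_{c}}{R T} \right), \qquad \sigma > \sigma_{\mathrm{th}},

    where σ_th is the threshold stress, n the stress exponent, Q_c the activation energy for creep, R the gas constant and T the absolute temperature.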

  16. Solar Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    NREL is working on a Solar Integration National Dataset (SIND) Toolkit to enable researchers to perform U.S. regional solar generation integration studies. It will provide modeled, coherent subhourly solar power data

  17. Estimation of Received Signal Strength Distribution for Smart Meters with Biased Measurement Data Set

    DEFF Research Database (Denmark)

    Kielgast, Mathias Rønholt; Rasmussen, Anders Charly; Laursen, Mathias Hjorth

    2017-01-01

    This letter presents an experimental study and a novel modelling approach of the wireless channel of smart utility meters placed in basements or sculleries. The experimental data consist of signal strength measurements of consumption report packets. Since such packets are only registered if they can be decoded by the receiver, the part of the signal strength distribution that falls below the receiver sensitivity threshold is not observable. We combine a Rician fading model with a bias function that captures the cut-off in the observed signal strength measurements. Two sets of experimental data are analysed. It is shown that the proposed method offers an approximation of the distribution of the signal strength measurements that is better than a naïve Rician fitting.
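
    A minimal sketch of the modelling idea described above (not the authors' estimator): fit a Rician fading model to signal-strength samples that are only observed above a receiver-sensitivity cut-off, by maximising the likelihood of the left-truncated Rician density. The parameterisation follows scipy.stats.rice (shape b = ν/σ, scale σ); the cut-off value and synthetic data are assumptions.

      import numpy as np
      from scipy.stats import rice
      from scipy.optimize import minimize

      cutoff = 2.0                                   # receiver sensitivity threshold (assumed units)

      # Synthetic "true" channel and the biased (truncated) observation process
      rng = np.random.default_rng(11)
      true_b, true_scale = 1.5, 1.2
      samples = rice.rvs(true_b, scale=true_scale, size=5000, random_state=rng)
      observed = samples[samples >= cutoff]          # packets below the threshold are never decoded

      def negative_log_likelihood(params):
          b, scale = params
          if b <= 0 or scale <= 0:
              return np.inf
          tail = rice.sf(cutoff, b, scale=scale)     # probability a packet is observable at all
          if tail <= 0:
              return np.inf
          logpdf = rice.logpdf(observed, b, scale=scale)
          nll = -(logpdf.sum() - len(observed) * np.log(tail))
          return nll if np.isfinite(nll) else np.inf

      fit = minimize(negative_log_likelihood, x0=[1.0, 1.0], method="Nelder-Mead")
      print("estimated (b, scale):", fit.x, " true:", (true_b, true_scale))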

  18. Relative Error Evaluation to Typical Open Global dem Datasets in Shanxi Plateau of China

    Science.gov (United States)

    Zhao, S.; Zhang, S.; Cheng, W.

    2018-04-01

    Produced by radar data or stereo remote sensing image pairs, global DEM datasets are one of the most important types for DEM data. Relative error relates to surface quality created by DEM data, so it relates to geomorphology and hydrologic applications using DEM data. Taking Shanxi Plateau of China as the study area, this research evaluated the relative error to typical open global DEM datasets including Shuttle Radar Terrain Mission (SRTM) data with 1 arc second resolution (SRTM1), SRTM data with 3 arc second resolution (SRTM3), ASTER global DEM data in the second version (GDEM-v2) and ALOS world 3D-30m (AW3D) data. Through process and selection, more than 300,000 ICESat/GLA14 points were used as the GCP data, and the vertical error was computed and compared among four typical global DEM datasets. Then, more than 2,600,000 ICESat/GLA14 point pairs were acquired using the distance threshold between 100 m and 500 m. Meanwhile, the horizontal distance between every point pair was computed, so the relative error was achieved using slope values based on vertical error difference and the horizontal distance of the point pairs. Finally, false slope ratio (FSR) index was computed through analyzing the difference between DEM and ICESat/GLA14 values for every point pair. Both relative error and FSR index were categorically compared for the four DEM datasets under different slope classes. Research results show: Overall, AW3D has the lowest relative error values in mean error, mean absolute error, root mean square error and standard deviation error; then the SRTM1 data, its values are a little higher than AW3D data; the SRTM3 and GDEM-v2 data have the highest relative error values, and the values for the two datasets are similar. Considering different slope conditions, all the four DEM data have better performance in flat areas but worse performance in sloping regions; AW3D has the best performance in all the slope classes, a litter better than SRTM1; with slope increasing
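
    As a simplified illustration of the relative-error measure used above (not the authors' full workflow), the sketch below takes hypothetical DEM and ICESat/GLAS elevations at the two ends of each point pair and converts the difference of their vertical errors into a slope-like relative error using the horizontal separation; all values are invented.

      import numpy as np

      # Hypothetical point pairs: DEM and ICESat elevations (m) at both ends, and separation (m)
      dem_a    = np.array([812.4, 905.1, 1010.8, 1203.6])
      icesat_a = np.array([810.9, 903.8, 1012.2, 1201.1])
      dem_b    = np.array([816.0, 901.7, 1018.3, 1199.0])
      icesat_b = np.array([813.7, 900.9, 1017.1, 1197.8])
      horizontal_dist = np.array([180.0, 240.0, 310.0, 450.0])

      vertical_error_a = dem_a - icesat_a
      vertical_error_b = dem_b - icesat_b
      relative_error = (vertical_error_a - vertical_error_b) / horizontal_dist  # slope of the error

      print("mean relative error:", relative_error.mean())
      print("RMSE of relative error:", np.sqrt((relative_error ** 2).mean()))
      print("std of relative error:", relative_error.std(ddof=1))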

  19. Identifying the most appropriate age threshold for TNM stage grouping of well-differentiated thyroid cancer.

    Science.gov (United States)

    Hendrickson-Rebizant, J; Sigvaldason, H; Nason, R W; Pathak, K A

    2015-08-01

    Age is integrated in most risk stratification systems for well-differentiated thyroid cancer (WDTC). The most appropriate age threshold for stage grouping of WDTC is debatable. The objective of this study was to evaluate the best age threshold for stage grouping by comparing multivariable models designed to evaluate the independent impact of various prognostic factors, including age based stage grouping, on the disease specific survival (DSS) of our population-based cohort. Data from population-based thyroid cancer cohort of 2125 consecutive WDTC, diagnosed during 1970-2010, with a median follow-up of 11.5 years, was used to calculate DSS using the Kaplan Meier method. Multivariable analysis with Cox proportional hazard model was used to assess independent impact of different prognostic factors on DSS. The Akaike information criterion (AIC), a measure of statistical model fit, was used to identify the most appropriate age threshold model. Delta AIC, Akaike weight, and evidence ratios were calculated to compare the relative strength of different models. The mean age of the patients was 47.3 years. DSS of the cohort was 95.6% and 92.8% at 10 and 20 years respectively. A threshold of 55 years, with the lowest AIC, was identified as the best model. Akaike weight indicated an 85% chance that this age threshold is the best among the compared models, and is 16.8 times more likely to be the best model as compared to a threshold of 45 years. The age threshold of 55 years was found to be the best for TNM stage grouping. Copyright © 2015 Elsevier Ltd. All rights reserved.
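
    The model comparison above relies on ΔAIC, Akaike weights and evidence ratios, which are simple transforms of the AIC values. The sketch below shows those transforms with invented AIC numbers for a handful of candidate age thresholds; the values are placeholders, not the study's results.

      import numpy as np

      # Hypothetical AIC values for survival models using different age thresholds
      thresholds = [40, 45, 50, 55, 60]
      aic = np.array([612.4, 610.1, 608.9, 604.5, 607.2])

      delta = aic - aic.min()                         # delta-AIC relative to the best model
      weights = np.exp(-delta / 2.0)
      weights /= weights.sum()                        # Akaike weights
      evidence_ratio = weights.max() / weights        # how many times more likely the best model is

      for t, d, w, er in zip(thresholds, delta, weights, evidence_ratio):
          print(f"age {t}: dAIC={d:5.2f}  weight={w:.3f}  evidence ratio={er:6.2f}")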

  20. PROVIDING GEOGRAPHIC DATASETS AS LINKED DATA IN SDI

    Directory of Open Access Journals (Sweden)

    E. Hietanen

    2016-06-01

    In this study, a prototype service to provide data from a Web Feature Service (WFS) as linked data is implemented. At first, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in the Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset information content using the Open Geospatial Consortium's (OGC) GeoSPARQL vocabulary. The existing data model is modified in order to take into account the linked data principles. The implemented service produces an HTTP response dynamically. The data for the response is first fetched from the existing WFS. Then the Geography Markup Language (GML) format output of the WFS is transformed on-the-fly to the RDF format. Content negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to with URIs. Furthermore, the needed information content of the objects can be easily extracted from the RDF serializations available from those URIs. A solution for linking data objects to the dataset URI is also introduced by using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets and each subset is given its persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.

  1. Wind Integration National Dataset Toolkit | Grid Modernization | NREL

    Science.gov (United States)

    The Wind Integration National Dataset (WIND) Toolkit is an update and expansion of the Eastern Wind Integration Data Set and Western Wind Integration Data Set. It supports the next generation of wind integration studies. WIND

  2. Repeated-Sprint Sequences During Female Soccer Matches Using Fixed and Individual Speed Thresholds.

    Science.gov (United States)

    Nakamura, Fábio Y; Pereira, Lucas A; Loturco, Irineu; Rosseti, Marcelo; Moura, Felipe A; Bradley, Paul S

    2017-07-01

    Nakamura, FY, Pereira, LA, Loturco, I, Rosseti, M, Moura, FA, and Bradley, PS. Repeated-sprint sequences during female soccer matches using fixed and individual speed thresholds. J Strength Cond Res 31(7): 1802-1810, 2017-The main objective of this study was to characterize the occurrence of single sprint and repeated-sprint sequences (RSS) during elite female soccer matches, using fixed (20 km·h) and individually based speed thresholds (>90% of the mean speed from a 20-m sprint test). Eleven elite female soccer players from the same team participated in the study. All players performed a 20-m linear sprint test, and were assessed in up to 10 official matches using Global Positioning System technology. Magnitude-based inferences were used to test for meaningful differences. Results revealed that irrespective of adopting fixed or individual speed thresholds, female players produced only a few RSS during matches (2.3 ± 2.4 sequences using the fixed threshold and 3.3 ± 3.0 sequences using the individually based threshold), with most sequences composing of just 2 sprints. Additionally, central defenders performed fewer sprints (10.2 ± 4.1) than other positions (fullbacks: 28.1 ± 5.5; midfielders: 21.9 ± 10.5; forwards: 31.9 ± 11.1; with the differences being likely to almost certainly associated with effect sizes ranging from 1.65 to 2.72), and sprinting ability declined in the second half. The data do not support the notion that RSS occurs frequently during soccer matches in female players, irrespective of using fixed or individual speed thresholds to define sprint occurrence. However, repeated-sprint ability development cannot be ruled out from soccer training programs because of its association with match-related performance.
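
    The analysis above counts individual sprints and repeated-sprint sequences against a speed threshold. The sketch below shows one plausible way to do this on a 10 Hz speed trace: a sprint is taken to be a run of samples above the threshold lasting at least 1 s, and a repeated-sprint sequence is two or more sprints separated by less than 60 s of recovery. The 1 s minimum duration, the 60 s recovery window and the synthetic trace are assumptions, not the paper's exact definitions.

      import numpy as np

      def detect_sprints(speed_kmh, threshold=20.0, hz=10, min_duration_s=1.0):
          """Return (start_time_s, end_time_s) of runs above the speed threshold."""
          above = speed_kmh >= threshold
          sprints, start = [], None
          for i, flag in enumerate(np.append(above, False)):   # sentinel closes a trailing run
              if flag and start is None:
                  start = i
              elif not flag and start is not None:
                  if (i - start) / hz >= min_duration_s:
                      sprints.append((start / hz, i / hz))
                  start = None
          return sprints

      def repeated_sprint_sequences(sprints, max_recovery_s=60.0):
          """Group sprints into sequences of two or more sprints with short recoveries."""
          sequences, current = [], [sprints[0]] if sprints else []
          for prev, nxt in zip(sprints, sprints[1:]):
              if nxt[0] - prev[1] < max_recovery_s:
                  current.append(nxt)
              else:
                  if len(current) >= 2:
                      sequences.append(current)
                  current = [nxt]
          if len(current) >= 2:
              sequences.append(current)
          return sequences

      # Synthetic 10 Hz speed trace (km/h) with a few high-speed bursts
      rng = np.random.default_rng(5)
      speed = np.clip(rng.normal(8, 3, size=10 * 60 * 5), 0, None)   # 5 minutes of movement
      for burst_start in (300, 700, 2500):                            # samples where bursts begin
          speed[burst_start:burst_start + 25] = 24.0

      sprints = detect_sprints(speed, threshold=20.0)
      print("sprints:", sprints)
      print("repeated-sprint sequences:", repeated_sprint_sequences(sprints))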

  3. Shifts in the relationship between motor unit recruitment thresholds versus derecruitment thresholds during fatigue.

    Science.gov (United States)

    Stock, Matt S; Mota, Jacob A

    2017-12-01

    Muscle fatigue is associated with diminished twitch force amplitude. We examined changes in the motor unit recruitment versus derecruitment threshold relationship during fatigue. Nine men (mean age = 26 years) performed repeated isometric contractions at 50% maximal voluntary contraction (MVC) knee extensor force until exhaustion. Surface electromyographic signals were detected from the vastus lateralis, and were decomposed into their constituent motor unit action potential trains. Motor unit recruitment and derecruitment thresholds and firing rates at recruitment and derecruitment were evaluated at the beginning, middle, and end of the protocol. On average, 15 motor units were studied per contraction. For the initial contraction, three subjects showed greater recruitment thresholds than derecruitment thresholds for all motor units. Five subjects showed greater recruitment thresholds than derecruitment thresholds for only low-threshold motor units at the beginning, with a mean cross-over of 31.6% MVC. As the muscle fatigued, many motor units were derecruited at progressively higher forces. In turn, decreased slopes and increased y-intercepts were observed. These shifts were complemented by increased firing rates at derecruitment relative to recruitment. As the vastus lateralis fatigued, the central nervous system's compensatory adjustments resulted in a shift of the regression line of the recruitment versus derecruitment threshold relationship. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
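
    A small sketch of the regression described above, with fabricated numbers rather than the study's data: regress derecruitment threshold on recruitment threshold and locate the cross-over force at which the fitted line meets the identity line, i.e. where recruitment and derecruitment thresholds are equal.

      import numpy as np

      # Hypothetical motor-unit thresholds (% MVC) for one contraction
      recruitment   = np.array([ 5.0, 10.0, 15.0, 22.0, 28.0, 35.0, 41.0, 47.0])
      derecruitment = np.array([12.6, 15.8, 19.7, 24.2, 28.8, 33.3, 37.9, 41.6])

      slope, intercept = np.polyfit(recruitment, derecruitment, deg=1)

      # Cross-over: solve intercept + slope * x = x
      crossover = intercept / (1.0 - slope) if slope != 1.0 else float("nan")

      print(f"derecruitment = {slope:.2f} * recruitment + {intercept:.2f}")
      print(f"cross-over at {crossover:.1f}% MVC")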

  4. Analysis of bound-state spectra near the threshold of neutral particle interaction potentials

    International Nuclear Information System (INIS)

    Ou Fang; Cao Zhuangqi; Chen Jianping; Xu Junjie

    2006-01-01

    It is understood that conventional semiclassical approximations deteriorate towards threshold in a typical neutral particle interaction potential, which is important for the study of ultra-cold atoms and molecules. In this Letter we give an example of the Lennard-Jones potential with tuning of the strength parameter on the basis of the analytical transfer matrix (ATM) method. Highly accurate quantum mechanical results, such as the number of bound states, the energy level density, and the eigenvalues at extremely low energies, have been derived.

  5. A global dataset of sub-daily rainfall indices

    Science.gov (United States)

    Fowler, H. J.; Lewis, E.; Blenkinsop, S.; Guerreiro, S.; Li, X.; Barbero, R.; Chan, S.; Lenderink, G.; Westra, S.

    2017-12-01

    It is still uncertain how hydrological extremes will change with global warming, as we do not fully understand the processes that cause extreme precipitation under current climate variability. The INTENSE project is using a novel and fully-integrated data-modelling approach to provide a step-change in our understanding of the nature and drivers of global precipitation extremes and change on societally relevant timescales, leading to improved high-resolution climate model representation of extreme rainfall processes. The INTENSE project is run in conjunction with the World Climate Research Programme (WCRP)'s Grand Challenge on 'Understanding and Predicting Weather and Climate Extremes' and the Global Water and Energy Exchanges Project (GEWEX) science questions. A new global sub-daily precipitation dataset has been constructed (data collection is ongoing). Metadata for each station have been calculated, detailing record lengths, missing data, and station locations. A set of global hydroclimatic indices has been produced based upon stakeholder recommendations, including indices that describe maximum rainfall totals and timing, the intensity, duration and frequency of storms, the frequency of storms above specific thresholds, and information about the diurnal cycle. This will provide a unique global data resource on sub-daily precipitation whose derived indices will be freely available to the wider scientific community.

  6. Threshold factorization redux

    Science.gov (United States)

    Chay, Junegone; Kim, Chul

    2018-05-01

    We reanalyze the factorization theorems for the Drell-Yan process and for deep inelastic scattering near threshold, as constructed in the framework of the soft-collinear effective theory (SCET), from a new, consistent perspective. In order to formulate the factorization near threshold in SCET, we should include an additional degree of freedom with small energy, collinear to the beam direction. The corresponding collinear-soft mode is included to describe the parton distribution function (PDF) near threshold. The soft function is modified by subtracting the contribution of the collinear-soft modes in order to avoid double counting in the overlap region. As a result, the proper soft function becomes infrared finite, and all the factorized parts are free of rapidity divergence. Furthermore, the separation of the relevant scales in each factorized part becomes manifest. We apply the same idea to dihadron production in e+e- annihilation near threshold, and show that the resultant soft function is also free of infrared and rapidity divergences.

  7. Water's Interfacial Hydrogen Bonding Structure Reveals the Effective Strength of Surface-Water Interactions.

    Science.gov (United States)

    Shin, Sucheol; Willard, Adam P

    2018-06-05

    We combine all-atom molecular dynamics simulations with a mean field model of interfacial hydrogen bonding to analyze the effect of surface-water interactions on the structural and energetic properties of the liquid water interface. We show that the molecular structure of water at a weakly interacting (i.e., hydrophobic) surface is resistant to change unless the strength of surface-water interactions is above a certain threshold. We find that below this threshold water's interfacial structure is homogeneous and insensitive to the details of the disordered surface; above this threshold, however, water's interfacial structure is heterogeneous. Despite this heterogeneity, we demonstrate that the equilibrium distribution of molecular orientations can be used to quantify the energetic component of the surface-water interactions that contributes specifically to modifying the interfacial hydrogen bonding network. We identify this specific energetic component as a new measure of hydrophilicity, which we refer to as the intrinsic hydropathy.

  8. Comparison between intensity- duration thresholds and cumulative rainfall thresholds for the forecasting of landslide

    Science.gov (United States)

    Lagomarsino, Daniela; Rosi, Ascanio; Rossi, Guglielmo; Segoni, Samuele; Catani, Filippo

    2014-05-01

    This work makes a quantitative comparison between the results of landslide forecasting obtained using two different rainfall threshold models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds, in an area of northern Tuscany of 116 km2. The first methodology identifies rainfall intensity-duration thresholds by means of a software tool called MaCumBA (Massive CUMulative Brisk Analyzer) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events can be used to identify the threshold conditions associated with the fewest false alarms. The second method (SIGMA) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviation in the proposed methodology. The definition of intensity-duration rainfall thresholds requires the combined use of rainfall measurements and an inventory of dated landslides, whereas the SIGMA model can be implemented using only rainfall data. These two methodologies were applied in an area of 116 km2 where a database of 1200 landslides was available for the period 2000-2012. The results obtained are compared and discussed. Although several examples of visual comparisons between different intensity-duration rainfall thresholds are reported in the international literature, a quantitative comparison between thresholds obtained in the same area using different techniques and approaches is a relatively undebated research topic.
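
    As an illustration of the SIGMA idea described above (flagging rainfall totals that exceed multiples of the standard deviation of the historical series), the Python sketch below computes such thresholds for a synthetic rain-gauge record. This is a minimal sketch: adding the mean as an offset and the particular multiples of σ are illustrative choices, not the published model.

```python
import numpy as np

def sigma_thresholds(rainfall_mm, multiples=(1.5, 2.0, 3.0)):
    """Illustrative SIGMA-style thresholds: rainfall totals exceeding selected
    multiples of the standard deviation of the historical series (offset by the mean)."""
    series = np.asarray(rainfall_mm, dtype=float)
    mu, sigma = series.mean(), series.std(ddof=1)
    return {m: mu + m * sigma for m in multiples}

# Example: synthetic daily rainfall totals (mm) standing in for a rain-gauge record.
rng = np.random.default_rng(0)
history = rng.gamma(shape=0.8, scale=6.0, size=4000)
thresholds = sigma_thresholds(history)
today = 75.0
exceeded = [m for m, t in thresholds.items() if today > t]
print(thresholds, "-> exceeded multiples:", exceeded)
```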

  9. A New Outlier Detection Method for Multidimensional Datasets

    KAUST Repository

    Abdel Messih, Mario A.

    2012-07-01

    This study develops a novel hybrid method for outlier detection (HMOD) that combines the ideas of distance-based and density-based methods. The proposed method has two main advantages over most other outlier detection methods. The first advantage is that it works well on both dense and sparse datasets. The second advantage is that, unlike most other outlier detection methods that require careful parameter setting and prior knowledge of the data, HMOD is not very sensitive to small changes in parameter values within certain parameter ranges. The only parameter that needs to be set is the number of nearest neighbors. In addition, we provide a fully parallelized implementation of HMOD, which makes it very efficient in applications. Moreover, we propose a new way of using outlier detection for redundancy reduction in datasets, in which users can specify a confidence level evaluating how accurately the reduced dataset represents the original one. HMOD is evaluated on synthetic datasets (dense and mixed “dense and sparse”) and on a bioinformatics problem of redundancy reduction of a dataset of position weight matrices (PWMs) of transcription factor binding sites. In addition, in the process of assessing the performance of our redundancy reduction method, we developed a simple tool that can be used to evaluate the confidence level with which the reduced dataset represents the original dataset. The evaluation of the results shows that our method can be used in a wide range of problems.

  10. An experimental assessment of hysteresis in near-threshold fatigue crack propagation regime of a low alloy ferritic steel under closure-free testing conditions

    International Nuclear Information System (INIS)

    Vaidya, W.V.

    1991-01-01

    Near-threshold fatigue crack propagation behavior of a high strength steel was investigated in laboratory air under closure-free testing conditions at R = 0.7 (= Reff), and at two different K-gradients. Depending on the criterion assumed, the threshold value differed; the criterion of non-propagation gave a lower threshold value than that obtained with the propagation criterion. Nevertheless, the subsequent propagation following a load increase was discontinuous in both cases, and da/dN vs ΔK curves obtained on the same specimen during the K-decreasing and the K-increasing tests were not necessarily identical in the threshold regime. This behavior, hysteresis, is analyzed mainly from the experimental viewpoint, and it is shown that hysteresis is not an artifact. (orig.)

  11. Detection thresholds of macaque otolith afferents.

    Science.gov (United States)

    Yu, Xiong-Jie; Dickman, J David; Angelaki, Dora E

    2012-06-13

    The vestibular system is our sixth sense and is important for spatial perception functions, yet the sensory detection and discrimination properties of vestibular neurons remain relatively unexplored. Here we have used signal detection theory to measure detection thresholds of otolith afferents using 1 Hz linear accelerations delivered along three cardinal axes. Direction detection thresholds were measured by comparing mean firing rates centered on response peak and trough (full-cycle thresholds) or by comparing peak/trough firing rates with spontaneous activity (half-cycle thresholds). Thresholds were similar for utricular and saccular afferents, as well as for lateral, fore/aft, and vertical motion directions. When computed along the preferred direction, full-cycle direction detection thresholds were 7.54 and 3.01 cm/s² for regular and irregular firing otolith afferents, respectively. Half-cycle thresholds were approximately double, with excitatory thresholds being half as large as inhibitory thresholds. The variability in threshold among afferents was directly related to neuronal gain and did not depend on spike count variance. The exact threshold values depended on both the time window used for spike count analysis and the filtering method used to calculate mean firing rate, although differences between regular and irregular afferent thresholds were independent of analysis parameters. The fact that minimum thresholds measured in macaque otolith afferents are of the same order of magnitude as human behavioral thresholds suggests that the vestibular periphery might determine the limit on our ability to detect or discriminate small differences in head movement, with little noise added during downstream processing.

  12. NP-PAH Interaction Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  13. A dataset on tail risk of commodities markets.

    Science.gov (United States)

    Powell, Robert J; Vo, Duc H; Pham, Thach N; Singh, Abhay K

    2017-12-01

    This article contains the datasets related to the research article "The long and short of commodity tails and their relationship to Asian equity markets" (Powell et al., 2017) [1]. The datasets contain the daily prices (and price movements) of 24 different commodities decomposed from the S&P GSCI index and the daily prices (and price movements) of three share market indices including World, Asia, and South East Asia for the period 2004-2015. The dataset is then divided into annual periods, showing the worst 5% of price movements for each year. The datasets are convenient for examining the tail risk of different commodities as measured by Conditional Value at Risk (CVaR), as well as its changes across periods. The datasets can also be used to investigate the association between commodity markets and share markets.
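
    To illustrate how the tail-risk measure named above can be computed from such a dataset, the following sketch estimates historical 95% Value at Risk and Conditional Value at Risk (the mean loss over the worst 5% of daily price movements) for one return series. The synthetic returns and the 95% level are assumptions for illustration, not values from the published dataset.

```python
import numpy as np

def var_cvar(returns, alpha=0.95):
    """Historical VaR and CVaR: CVaR is the mean loss over the worst
    (1 - alpha) fraction of daily returns (losses are negative returns)."""
    r = np.sort(np.asarray(returns, dtype=float))
    cutoff = int(np.ceil((1 - alpha) * len(r)))   # size of the worst tail
    tail = r[:cutoff]
    var = -r[cutoff - 1]                          # loss at the alpha quantile
    cvar = -tail.mean()                           # average loss beyond VaR
    return var, cvar

# Example with synthetic daily returns standing in for one commodity series.
rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0, 0.02, size=252)
print(var_cvar(daily_returns))
```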

  14. Threshold guidance update

    International Nuclear Information System (INIS)

    Wickham, L.E.

    1986-01-01

    The Department of Energy (DOE) is developing the concept of threshold quantities for use in determining which waste materials must be handled as radioactive waste and which may be disposed of as nonradioactive waste at its sites. Waste above this concentration level would be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. Last year's activities (1984) included the development of a threshold guidance dose, the development of threshold concentrations corresponding to the guidance dose, the development of supporting documentation, review by a technical peer review committee, and review by the DOE community. As a result of the comments, areas have been identified for more extensive analysis, including an alternative basis for selection of the guidance dose and the development of quality assurance guidelines. Development of quality assurance guidelines will provide a reasonable basis for determining that a given waste stream qualifies as a threshold waste stream and can then be the basis for a more extensive cost-benefit analysis. The threshold guidance and supporting documentation will be revised, based on the comments received. The revised documents will be provided to DOE by early November. DOE-HQ has indicated that the revised documents will be available for review by DOE field offices and their contractors.

  15. Hydrodynamic modelling and global datasets: Flow connectivity and SRTM data, a Bangkok case study.

    Science.gov (United States)

    Trigg, M. A.; Bates, P. B.; Michaelides, K.

    2012-04-01

    The rise in the global interconnected manufacturing supply chains requires an understanding and consistent quantification of flood risk at a global scale. Flood risk is often better quantified (or at least more precisely defined) in regions where there has been an investment in comprehensive topographical data collection such as LiDAR coupled with detailed hydrodynamic modelling. Yet in regions where these data and modelling are unavailable, the implications of flooding and the knock on effects for global industries can be dramatic, as evidenced by the recent floods in Bangkok, Thailand. There is a growing momentum in terms of global modelling initiatives to address this lack of a consistent understanding of flood risk and they will rely heavily on the application of available global datasets relevant to hydrodynamic modelling, such as Shuttle Radar Topography Mission (SRTM) data and its derivatives. These global datasets bring opportunities to apply consistent methodologies on an automated basis in all regions, while the use of coarser scale datasets also brings many challenges such as sub-grid process representation and downscaled hydrology data from global climate models. There are significant opportunities for hydrological science in helping define new, realistic and physically based methodologies that can be applied globally as well as the possibility of gaining new insights into flood risk through analysis of the many large datasets that will be derived from this work. We use Bangkok as a case study to explore some of the issues related to using these available global datasets for hydrodynamic modelling, with particular focus on using SRTM data to represent topography. Research has shown that flow connectivity on the floodplain is an important component in the dynamics of flood flows on to and off the floodplain, and indeed within different areas of the floodplain. A lack of representation of flow connectivity, often due to data resolution limitations, means

  16. Proteomics dataset

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Carlsen, Thomas Gelsing; Ellingsen, Torkell

    2017-01-01

    … patients (Morgan et al., 2012; Abraham and Medzhitov, 2011; Bennike, 2014) [8–10]. Therefore, we characterized the proteome of colon mucosa biopsies from 10 inflammatory bowel disease ulcerative colitis (UC) patients, 11 gastrointestinal healthy rheumatoid arthritis (RA) patients, and 10 controls. … The data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifiers PXD001608 for ulcerative colitis and control samples, and PXD003082 for rheumatoid arthritis samples.

  17. Comparison of Shallow Survey 2012 Multibeam Datasets

    Science.gov (United States)

    Ramirez, T. M.

    2012-12-01

    The purpose of the Shallow Survey common dataset is a comparison of the different technologies utilized for data acquisition in the shallow survey marine environment. The common dataset consists of a series of surveys conducted over a common area of seabed using a variety of systems. It provides equipment manufacturers the opportunity to showcase their latest systems while giving hydrographic researchers and scientists a chance to test their latest algorithms on the dataset so that rigorous comparisons can be made. Five companies collected data for the Common Dataset in the Wellington Harbor area in New Zealand between May 2010 and May 2011, including Kongsberg, Reson, R2Sonic, GeoAcoustics, and Applied Acoustics. The Wellington harbor and surrounding coastal area was selected since it has a number of well-defined features, including the HMNZS South Seas and HMNZS Wellington wrecks, an armored seawall constructed of Tetrapods and Akmons, aquifers, wharves and marinas. The seabed inside the harbor basin is largely fine-grained sediment, with gravel and reefs around the coast. The area outside the harbor on the southern coast is an active environment, with moving sand and exposed reefs. A marine reserve is also in this area. For consistency between datasets, the coastal research vessel R/V Ikatere and crew were used for all surveys conducted for the common dataset. Using Triton's Perspective processing software, multibeam datasets collected for the Shallow Survey were processed for detailed analysis. Datasets from each sonar manufacturer were processed using the CUBE algorithm developed by the Center for Coastal and Ocean Mapping/Joint Hydrographic Center (CCOM/JHC). Each dataset was gridded at 0.5 and 1.0 meter resolutions for cross comparison and compliance with International Hydrographic Organization (IHO) requirements. Detailed comparisons were made of equipment specifications (transmit frequency, number of beams, beam width), data density, total uncertainty, and

  18. National Hydrography Dataset (NHD)

    Data.gov (United States)

    Kansas Data Access and Support Center — The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that comprise the...

  19. The SAIL databank: linking multiple health and social care datasets

    Directory of Open Access Journals (Sweden)

    Ford David V

    2009-01-01

    Full Text Available Abstract Background Vast amounts of data are collected about patients and service users in the course of health and social care service delivery. Electronic data systems for patient records have the potential to revolutionise service delivery and research. But in order to achieve this, it is essential that the ability to link the data at the individual record level be retained whilst adhering to the principles of information governance. The SAIL (Secure Anonymised Information Linkage) databank has been established using disparate datasets, and over 500 million records from multiple health and social care service providers have been loaded to date, with further growth in progress. Methods Having established the infrastructure of the databank, the aim of this work was to develop and implement an accurate matching process to enable the assignment of a unique Anonymous Linking Field (ALF) to person-based records to make the databank ready for record-linkage research studies. An SQL-based matching algorithm (MACRAL, Matching Algorithm for Consistent Results in Anonymised Linkage) was developed for this purpose. Firstly the suitability of using a valid NHS number as the basis of a unique identifier was assessed using MACRAL. Secondly, MACRAL was applied in turn to match primary care, secondary care and social services datasets to the NHS Administrative Register (NHSAR), to assess the efficacy of this process, and the optimum matching technique. Results The validation of using the NHS number yielded specificity values > 99.8% and sensitivity values > 94.6% using probabilistic record linkage (PRL) at the 50% threshold, and error rates were … Conclusion With the infrastructure that has been put in place, the reliable matching process that has been developed enables an ALF to be consistently allocated to records in the databank. The SAIL databank represents a research-ready platform for record-linkage studies.

  20. The Harvard organic photovoltaic dataset.

    Science.gov (United States)

    Lopez, Steven A; Pyzer-Knapp, Edward O; Simm, Gregor N; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-09-27

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications.

  1. Tables and figure datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — Soil and air concentrations of asbestos in Sumas study. This dataset is associated with the following publication: Wroble, J., T. Frederick, A. Frame, and D....

  2. Pressure pain thresholds and musculoskeletal morbidity in automobile manufacturing workers.

    Science.gov (United States)

    Gold, Judith E; Punnett, Laura; Katz, Jeffrey N

    2006-02-01

    Reduced pressure pain thresholds (PPTs) have been reported in occupational groups with symptoms of upper extremity musculoskeletal disorders (UEMSDs). The purpose of this study was to determine whether automobile manufacturing workers (n=460) with signs and symptoms of UEMSDs had reduced PPTs (greater sensitivity to pain through pressure applied to the skin) when compared with unaffected members of the cohort, which served as the reference group. The association of PPTs with symptom severity and localization of physical examination (PE) findings was investigated, as was the hypothesis that reduced thresholds would be found on the affected side in those with unilateral PE findings. PPTs were measured during the workday at 12 upper extremity sites. A PE for signs of UEMSDs and a symptom questionnaire were administered. After comparison of potential covariates using t tests, linear regression multivariable models were constructed with the average of the 12 sites (avgPPT) as the outcome. Subjects with PE findings and/or symptoms had a statistically significant lower avgPPT than non-cases. AvgPPT was reduced in those with more widespread PE findings, in those with greater symptom severity (test for trend, P < …), and in those with grip strength below the gender-adjusted mean. After adjusting for the above confounders, avgPPT was associated with muscle/tendon PE findings and symptom severity in multivariable models. PPTs were associated with signs and symptoms of UEMSDs, after adjusting for gender, age and grip strength. The utility of this noninvasive testing modality should be assessed on the basis of prospective large cohort studies to determine whether low PPTs are predictive of UEMSDs in asymptomatic individuals or of progression and spread of UEMSDs from localized to more diffuse disorders.

  3. Degree-strength correlation reveals anomalous trading behavior.

    Science.gov (United States)

    Sun, Xiao-Qian; Shen, Hua-Wei; Cheng, Xue-Qi; Wang, Zhao-Yang

    2012-01-01

    Manipulation is an important issue for both developed and emerging stock markets. Many efforts have been made to detect manipulation in stock markets. However, it is still an open problem to identify the fraudulent traders, especially when they collude with each other. In this paper, we focus on the problem of identifying the anomalous traders using the transaction data of eight manipulated stocks and forty-four non-manipulated stocks during a one-year period. By analyzing the trading networks of stocks, we find that the trading networks of manipulated stocks exhibit significantly higher degree-strength correlation than the trading networks of non-manipulated stocks and the randomized trading networks. We further propose a method to detect anomalous traders of manipulated stocks based on statistical significance analysis of degree-strength correlation. Experimental results demonstrate that our method is effective at distinguishing the manipulated stocks from non-manipulated ones. Our method outperforms the traditional weight-threshold method at identifying the anomalous traders in manipulated stocks. More importantly, our method is difficult to be fooled by colluded traders.
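
    The degree-strength correlation statistic at the heart of the method above can be illustrated with a short sketch that builds degree (number of distinct counterparties) and strength (total traded volume) from a weighted edge list and reports their Pearson correlation. The edge-list format and the choice of the Pearson coefficient are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from collections import defaultdict

def degree_strength_correlation(edges):
    """edges: iterable of (trader_a, trader_b, traded_volume) tuples.
    Degree = number of distinct counterparties; strength = summed volume."""
    neighbours, strength = defaultdict(set), defaultdict(float)
    for u, v, w in edges:
        neighbours[u].add(v); neighbours[v].add(u)
        strength[u] += w; strength[v] += w
    nodes = list(neighbours)
    k = np.array([len(neighbours[n]) for n in nodes], dtype=float)
    s = np.array([strength[n] for n in nodes])
    return np.corrcoef(k, s)[0, 1]   # Pearson correlation of degree vs strength

# Example: a few anonymised trades (trader_a, trader_b, volume).
trades = [("A", "B", 100.0), ("A", "C", 40.0), ("B", "C", 10.0), ("A", "D", 250.0)]
print(degree_strength_correlation(trades))
```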

  4. Degree-strength correlation reveals anomalous trading behavior.

    Directory of Open Access Journals (Sweden)

    Xiao-Qian Sun

    Full Text Available Manipulation is an important issue for both developed and emerging stock markets. Many efforts have been made to detect manipulation in stock markets. However, it is still an open problem to identify the fraudulent traders, especially when they collude with each other. In this paper, we focus on the problem of identifying the anomalous traders using the transaction data of eight manipulated stocks and forty-four non-manipulated stocks during a one-year period. By analyzing the trading networks of stocks, we find that the trading networks of manipulated stocks exhibit significantly higher degree-strength correlation than the trading networks of non-manipulated stocks and the randomized trading networks. We further propose a method to detect anomalous traders of manipulated stocks based on statistical significance analysis of degree-strength correlation. Experimental results demonstrate that our method is effective at distinguishing the manipulated stocks from non-manipulated ones. Our method outperforms the traditional weight-threshold method at identifying the anomalous traders in manipulated stocks. More importantly, our method is difficult to be fooled by colluded traders.

  5. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    Science.gov (United States)

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M < …) earthquakes … angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance < …) … stress and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (~2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ~2- to 50 km length scales can be explained by relatively small variations (~30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  6. PHYSICS PERFORMANCE AND DATASET (PPD)

    CERN Multimedia

    L. Silvestris

    2013-01-01

    The first part of the Long Shutdown period has been dedicated to the preparation of the samples for the analysis targeting the summer conferences. In particular, the 8 TeV data acquired in 2012, including most of the “parked datasets”, have been reconstructed profiting from improved alignment and calibration conditions for all the sub-detectors. A careful planning of the resources was essential in order to deliver the datasets well in time to the analysts, and to schedule the update of all the conditions and calibrations needed at the analysis level. The newly reprocessed data have undergone detailed scrutiny by the Dataset Certification team allowing to recover some of the data for analysis usage and further improving the certification efficiency, which is now at 91% of the recorded luminosity. With the aim of delivering a consistent dataset for 2011 and 2012, both in terms of conditions and release (53X), the PPD team is now working to set up a data re-reconstruction and a new MC pro...

  7. Integrated Surface Dataset (Global)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Integrated Surface Dataset (ISD) is composed of worldwide surface weather observations from over 35,000 stations, though the best spatial coverage is...

  8. Aaron Journal article datasets

    Data.gov (United States)

    U.S. Environmental Protection Agency — All figures used in the journal article are in netCDF format. This dataset is associated with the following publication: Sims, A., K. Alapaty , and S. Raman....

  9. Market Squid Ecology Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset contains ecological information collected on the major adult spawning and juvenile habitats of market squid off California and the US Pacific Northwest....

  10. Retrospective Attention Interacts with Stimulus Strength to Shape Working Memory Performance.

    Science.gov (United States)

    Wildegger, Theresa; Humphreys, Glyn; Nobre, Anna C

    2016-01-01

    Orienting attention retrospectively to selective contents in working memory (WM) influences performance. A separate line of research has shown that stimulus strength shapes perceptual representations. There is little research on how stimulus strength during encoding shapes WM performance, and how effects of retrospective orienting might vary with changes in stimulus strength. We explore these questions in three experiments using a continuous-recall WM task. In Experiment 1 we show that the benefits of cueing spatial attention retrospectively during WM maintenance (retrocueing) vary according to stimulus contrast during encoding. Retrocueing effects emerge for supraliminal but not sub-threshold stimuli. However, once stimuli are supraliminal, performance is no longer influenced by stimulus contrast. In Experiments 2 and 3 we used a mixture-model approach to examine how different sources of error in WM are affected by contrast and retrocueing. For high-contrast stimuli (Experiment 2), retrocues increased the precision of successfully remembered items. For low-contrast stimuli (Experiment 3), retrocues decreased the probability of mistaking a target with distracters. These results suggest that the processes by which retrospective attentional orienting shape WM performance are dependent on the quality of WM representations, which in turn depends on stimulus strength during encoding.

  11. Automatic segmentation of coronary arteries from computed tomography angiography data cloud using optimal thresholding

    Science.gov (United States)

    Ansari, Muhammad Ahsan; Zai, Sammer; Moon, Young Shik

    2017-01-01

    Manual analysis of the bulk data generated by computed tomography angiography (CTA) is time consuming, and interpretation of such data requires previous knowledge and expertise of the radiologist. Therefore, an automatic method that can isolate the coronary arteries from a given CTA dataset is required. We present an automatic yet effective segmentation method to delineate the coronary arteries from a three-dimensional CTA data cloud. Instead of a region growing process, which is usually time consuming and prone to leakages, the method is based on the optimal thresholding, which is applied globally on the Hessian-based vesselness measure in a localized way (slice by slice) to track the coronaries carefully to their distal ends. Moreover, to make the process automatic, we detect the aorta using the Hough transform technique. The proposed segmentation method is independent of the starting point to initiate its process and is fast in the sense that coronary arteries are obtained without any preprocessing or postprocessing steps. We used 12 real clinical datasets to show the efficiency and accuracy of the presented method. Experimental results reveal that the proposed method achieves 95% average accuracy.
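
    The general pipeline described above (a Hessian-based vesselness measure followed by a global threshold applied slice by slice, with the aorta located separately) can be sketched with off-the-shelf filters. The snippet below uses scikit-image's Frangi vesselness and Otsu's threshold as stand-ins for the authors' vesselness measure and optimal-thresholding step; it is an illustrative approximation, not the published method, and it omits the Hough-transform aorta detection.

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu

def segment_vessels(volume):
    """volume: 3-D CTA intensity array (slices, rows, cols).
    Returns a boolean mask of bright tubular structures, thresholded per slice."""
    mask = np.zeros(volume.shape, dtype=bool)
    for z, axial_slice in enumerate(volume):
        vesselness = frangi(axial_slice, black_ridges=False)  # enhance bright vessels
        if vesselness.max() > 0:
            mask[z] = vesselness > threshold_otsu(vesselness)  # global threshold per slice
    return mask

# Example with a small synthetic volume standing in for a CTA dataset.
toy = np.random.default_rng(2).normal(size=(4, 64, 64))
print(segment_vessels(toy).sum(), "voxels flagged")
```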

  12. Effects of a concurrent strength and endurance training on running performance and running economy in recreational marathon runners.

    Science.gov (United States)

    Ferrauti, Alexander; Bergermann, Matthias; Fernandez-Fernandez, Jaime

    2010-10-01

    The purpose of this study was to investigate the effects of a concurrent strength and endurance training program on running performance and running economy of middle-aged runners during their marathon preparation. Twenty-two (8 women and 14 men) recreational runners (mean ± SD: age 40.0 ± 11.7 years; body mass index 22.6 ± 2.1 kg·m⁻²) were separated into 2 groups (n = 11; combined endurance running and strength training program [ES]: 9 men, 2 women; and endurance running [E]: 7 men and 4 women). Both completed an 8-week intervention period that consisted of either endurance training (E: 276 ± 108 minutes of running per week) or a combined endurance and strength training program (ES: 240 ± 121 minutes of running plus 2 strength training sessions per week [120 minutes]). Strength training was focused on trunk (strength endurance program) and leg muscles (high-intensity program). Before and after the intervention, subjects completed an incremental treadmill run and maximal isometric strength tests. The initial values for VO2peak (ES: 52.0 ± 6.1 vs. E: 51.1 ± 7.5 ml·kg⁻¹·min⁻¹) and anaerobic threshold (ES: 3.5 ± 0.4 vs. E: 3.4 ± 0.5 m·s⁻¹) were identical in both groups. A significant time × intervention effect was found for maximal isometric force of knee extension (ES: from 4.6 ± 1.4 to 6.2 ± 1.0 N·kg⁻¹, p < …) … marathon running velocities (2.4 and 2.8 m·s⁻¹) and submaximal blood lactate thresholds (2.0, 3.0, and 4.0 mmol·L⁻¹). Stride length and stride frequency also remained unchanged. The results suggest no benefits of an 8-week concurrent strength training for running economy and coordination of recreational marathon runners despite a clear improvement in leg strength, maybe because of an insufficient sample size or a short intervention period.

  13. Proton-threshold states in ²⁷Al and the production of ²⁷Al at low stellar temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Champagne, A E [Princeton Univ., NJ (USA). Dept. of Physics; Magnus, P V; Smith, M S [Yale Univ., New Haven, CT (USA). Wright Nuclear Structure Lab.; Howard, A J [Trinity Coll., Hartford, CT (USA). Dept. of Physics and Astronomy

    1990-06-04

    The ²⁶Mg(³He, d)²⁷Al reaction has been employed to measure excitation energies and proton spectroscopic factors for states corresponding to ²⁶Mg+p resonances in the vicinity of the proton-capture threshold. The width ratio Γγ/Γ was measured for three previously established resonances via a study of the ²⁶Mg(³He, dγ)²⁷Al reaction, and corresponding values of the proton widths were obtained. Combining this information establishes strengths for four of the states lying within 150 keV of the proton threshold. A ²⁶Mg+p reaction rate is deduced, and its astrophysical implications are discussed. (orig.).

  14. ATLAS File and Dataset Metadata Collection and Use

    CERN Document Server

    Albrand, S; The ATLAS collaboration; Lambert, F; Gallas, E J

    2012-01-01

    The ATLAS Metadata Interface (“AMI”) was designed as a generic cataloguing system, and as such it has found many uses in the experiment including software release management, tracking of reconstructed event sizes and control of dataset nomenclature. The primary use of AMI is to provide a catalogue of datasets (file collections) which is searchable using physics criteria. In this paper we discuss the various mechanisms used for filling the AMI dataset and file catalogues. By correlating information from different sources we can derive aggregate information which is important for physics analysis; for example the total number of events contained in a dataset, and possible reasons for missing events such as a lost file. Finally we will describe some specialized interfaces which were developed for the Data Preparation and reprocessing coordinators. These interfaces manipulate information from both the dataset domain held in AMI, and the run-indexed information held in the ATLAS COMA application (Conditions and ...

  15. Norwegian Hydrological Reference Dataset for Climate Change Studies

    Energy Technology Data Exchange (ETDEWEB)

    Magnussen, Inger Helene; Killingland, Magnus; Spilde, Dag

    2012-07-01

    Based on the Norwegian hydrological measurement network, NVE has selected a Hydrological Reference Dataset for studies of hydrological change. The dataset meets international standards with high data quality. It is suitable for monitoring and studying the effects of climate change on the hydrosphere and cryosphere in Norway. The dataset includes streamflow, groundwater, snow, glacier mass balance and length change, lake ice and water temperature in rivers and lakes.(Author)

  16. The Harvard organic photovoltaic dataset

    Science.gov (United States)

    Lopez, Steven A.; Pyzer-Knapp, Edward O.; Simm, Gregor N.; Lutzow, Trevor; Li, Kewei; Seress, Laszlo R.; Hachmann, Johannes; Aspuru-Guzik, Alán

    2016-01-01

    The Harvard Organic Photovoltaic Dataset (HOPV15) presented in this work is a collation of experimental photovoltaic data from the literature, and corresponding quantum-chemical calculations performed over a range of conformers, each with quantum chemical results using a variety of density functionals and basis sets. It is anticipated that this dataset will be of use in both relating electronic structure calculations to experimental observations through the generation of calibration schemes, as well as for the creation of new semi-empirical methods and the benchmarking of current and future model chemistries for organic electronic applications. PMID:27676312

  17. Synthetic and Empirical Capsicum Annuum Image Dataset

    NARCIS (Netherlands)

    Barth, R.

    2016-01-01

    This dataset consists of per-pixel annotated synthetic (10500) and empirical images (50) of Capsicum annuum, also known as sweet or bell pepper, situated in a commercial greenhouse. Furthermore, the source models to generate the synthetic images are included. The aim of the datasets is to

  18. Comparison of Four Precipitation Forcing Datasets in Land Information System Simulations over the Continental U.S.

    Science.gov (United States)

    Case, Jonathan L.; Kumar, Sujay V.; Kuligowski, Robert J.; Langston, Carrie

    2013-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, AL is running a real-time configuration of the NASA Land Information System (LIS) with the Noah land surface model (LSM). Output from the SPoRT-LIS run is used to initialize land surface variables for local modeling applications at select National Weather Service (NWS) partner offices, and can be displayed in decision support systems for situational awareness and drought monitoring. The SPoRT-LIS is run over a domain covering the southern and eastern United States, fully nested within the National Centers for Environmental Prediction Stage IV precipitation analysis grid, which provides precipitation forcing to the offline LIS-Noah runs. The SPoRT Center seeks to expand the real-time LIS domain to the entire Continental U.S. (CONUS); however, geographical limitations with the Stage IV analysis product have inhibited this expansion. Therefore, a goal of this study is to test alternative precipitation forcing datasets that can enable the LIS expansion by improving upon the current geographical limitations of the Stage IV product. The four precipitation forcing datasets that are inter-compared on a 4-km resolution CONUS domain include the Stage IV, an experimental GOES quantitative precipitation estimate (QPE) from NESDIS/STAR, the National Mosaic and QPE (NMQ) product from the National Severe Storms Laboratory, and the North American Land Data Assimilation System phase 2 (NLDAS-2) analyses. The NLDAS-2 dataset is used as the control run, with each of the other three datasets considered experimental runs compared against the control. The regional strengths, weaknesses, and biases of each precipitation analysis are identified relative to the NLDAS-2 control in terms of accumulated precipitation pattern and amount, and the impacts on the subsequent LSM spin-up simulations. The ultimate goal is to identify an alternative precipitation forcing dataset that can best support an

  19. The responsiveness of sensibility and strength tests in patients undergoing carpal tunnel decompression

    Directory of Open Access Journals (Sweden)

    Miller Leanne

    2011-10-01

    Full Text Available Abstract Background Several clinical measures of sensory and motor function are used alongside patient-rated questionnaires to assess outcomes of carpal tunnel decompression. However, there is a lack of evidence regarding which clinical tests are most responsive to clinically important change over time. Methods In a prospective cohort study, 63 patients undergoing carpal tunnel decompression were assessed using standardised clinician-derived and patient-reported outcomes before surgery and at 4 and 8 months of follow up. Clinical sensory assessments included: touch threshold with monofilaments (WEST), the shape-texture identification (STI™) test, static two-point discrimination (Mackinnon-Dellon Disk-Criminator), and the locognosia test. Motor assessments included: grip and tripod pinch strength using a digital grip analyser (MIE), and manual muscle testing of abductor pollicis brevis and opponens pollicis using the Rotterdam Intrinsic Handheld Myometer (RIHM). The Boston Carpal Tunnel Questionnaire (BCTQ) was used as a patient-rated outcome measure. Results Relative responsiveness at 4 months was highest for the BCTQ symptom severity scale with moderate to large effect sizes (ES = -1.43), followed by the BCTQ function scale (ES = -0.71). The WEST and STI™ were the most responsive sensory tests at 4 months, showing moderate effect sizes (WEST ES = 0.55, STI ES = 0.52). Grip and pinch strength had a relatively higher responsiveness compared to thenar muscle strength, but effect sizes for all motor tests were very small (ES ≤ 0.10) or negative, indicating a decline compared to baseline in some patients. Conclusions For clinical assessment of sensibility, touch threshold assessed by monofilaments (WEST) and tactile gnosis measured with the STI™ test are the most responsive tests and are recommended for future studies. The use of handheld myometry (RIHM) for manual muscle testing, despite more specifically targeting thenar muscles, was less responsive than grip or tripod

  20. Mixed maximal and explosive strength training in recreational endurance runners.

    Science.gov (United States)

    Taipale, Ritva S; Mikkola, Jussi; Salo, Tiina; Hokka, Laura; Vesterinen, Ville; Kraemer, William J; Nummela, Ari; Häkkinen, Keijo

    2014-03-01

    Supervised periodized mixed maximal and explosive strength training added to endurance training in recreational endurance runners was examined during an 8-week intervention preceded by an 8-week preparatory strength training period. Thirty-four subjects (21-45 years) were divided into experimental groups: men (M, n = 9), women (W, n = 9), and control groups: men (MC, n = 7), women (WC, n = 9). The experimental groups performed mixed maximal and explosive exercises, whereas control subjects performed circuit training with body weight. Endurance training included running at an intensity below the lactate threshold. Strength, power, endurance performance characteristics, and hormones were monitored throughout the study. Significance was set at p ≤ 0.05. Increases were observed in both experimental groups that were more systematic than in the control groups in explosive strength (12 and 13% in men and women, respectively), muscle activation, maximal strength (6 and 13%), and peak running speed (Speak; 14.9 ± 1.2 to 15.6 ± 1.2 and 12.9 ± 0.9 to 13.5 ± 0.8 km·h⁻¹). The control groups showed significant improvements in maximal and explosive strength, but Speak increased only in MC. Submaximal running characteristics (blood lactate and heart rate) improved in all groups. Serum hormones fluctuated significantly in men (testosterone) and in women (thyroid stimulating hormone) but returned to baseline by the end of the study. Mixed strength training combined with endurance training may be more effective than circuit training in recreational endurance runners to benefit overall fitness that may be important for other adaptive processes and larger training loads associated with, e.g., marathon training.

  1. Electric field strength determination in filamentary DBDs by CARS-based four-wave mixing

    Science.gov (United States)

    Boehm, Patrick; Kettlitz, Manfred; Brandenburg, Ronny; Hoeft, Hans; Czarnetzki, Uwe

    2016-09-01

    The electric field strength is a basic parameter of non-thermal plasmas. Therefore, a profound knowledge of the electric field distribution is crucial. In this contribution a four wave mixing technique based on Coherent Anti-Stokes Raman spectroscopy (CARS) is used to measure electric field strengths in filamentary dielectric barrier discharges (DBDs). The discharges are operated with a pulsed voltage in nitrogen at atmospheric pressure. Small amounts of hydrogen (10 vol%) are admixed as a tracer gas to evaluate the electric field strength in the 1 mm discharge gap. Absolute values of the electric field strength are determined by calibration of the CARS setup with high voltage amplitudes below the ignition threshold of the arrangement. Alteration of the electric field strength has been observed during the internal polarity reversal and the breakdown process. In this case the major advantage over emission based methods is that this technique can be used independently from emission, e.g. in the pre-phase and in between two consecutive, opposite discharge pulses where no emission occurs at all. This work was supported by the Deutsche Forschungsgemeinschaft, Forschergruppe FOR 1123 and Sonderforschungsbereich TRR 24 ``Fundamentals of complex plasmas''.

  2. Characterization of precipitation features over CONUS derived from satellite, radar, and rain gauge datasets (2002-2012)

    Science.gov (United States)

    Prat, O. P.; Nelson, B. R.

    2013-12-01

    We use a suite of quantitative precipitation estimates (QPEs) derived from satellite, radar, surface observations, and models to derive precipitation characteristics over CONUS for the period 2002-2012. This comparison effort includes the satellite multi-sensor datasets TMPA 3B42, CMORPH, and PERSIANN. The satellite-based QPEs are compared over the concurrent period with the NCEP Stage IV product, which is a near real time product providing precipitation data at the hourly temporal scale gridded at a nominal 4-km spatial resolution. In addition, remotely sensed precipitation datasets are compared with surface observations from the Global Historical Climatology Network (GHCN-Daily) and from PRISM (Parameter-elevation Regressions on Independent Slopes Model), which provides gridded precipitation estimates that are used as a baseline for multi-sensor QPE product comparison. The comparisons are performed at the annual, seasonal, monthly, and daily scales with focus on selected river basins (Southeastern US, Pacific Northwest, Great Plains). While unconditional annual rain rates present satisfying agreement between all products, results suggest that satellite QPE datasets exhibit important biases, in particular at higher rain rates (≥4 mm/day). Conversely, on seasonal scales differences between remotely sensed data and ground surface observations can be greater than 50% and up to 90% for low daily accumulation (≤1 mm/day), such as in the Western US (summer) and Central US (winter). The conditional analysis performed using different daily rainfall accumulation thresholds (from low rainfall intensity to intense precipitation) shows that while intense events measured at the ground are infrequent (around 2% for daily accumulation above 2 inches/day), remotely sensed products displayed differences from 20-50% and up to 90-100%. A discussion on the impact of differing spatial and temporal resolutions with respect to the datasets' ability to capture extreme

  3. Determining Threshold Distance Providing Less Interference for Wireless Medical Implant Communication Systems in Coexisting Environments under Shadow Fading Conditions

    Directory of Open Access Journals (Sweden)

    Selman KULAÇ

    2017-08-01

    Full Text Available Important interference problems are likely to be encountered in the near future, especially in areas close to hospitals where the communication traffic of wireless implantable medical systems is heavy. Such interference could cause wireless implant devices to malfunction, with harmful effects on patients. In this study, a threshold distance is determined in order to reduce interference for wireless implantable medical systems under shadow fading conditions where MICS-band and MetAids-band users coexist simultaneously and intensively. In the proposed method, the monitoring threshold power specified by the FCC is lowered by adding an extra distance margin, derived from confidence interval calculations, in order to minimize interference effects on MICS systems. This is necessary because a received signal strength just below the FCC monitoring threshold power still causes considerable interference to MICS systems, even when a listen-before-talk technique is applied.

  4. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    Directory of Open Access Journals (Sweden)

    Alireza Alemi

    2015-08-01

    Full Text Available Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the
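
    A compact sketch of the three-threshold update described above, applied to the incoming weights of a single binary neuron, is given below. The specific threshold values, the learning rate, and the real-valued weights are illustrative assumptions; the paper's exact parameterization is not reproduced.

```python
import numpy as np

def three_threshold_update(w, x, theta_low, theta_mid, theta_high, lr=0.05):
    """w: weights onto one neuron; x: binary (0/1) afferent input pattern.
    Plasticity acts only on synapses with active inputs (x == 1):
    no change if the local field is above theta_high or below theta_low;
    potentiation if it lies in (theta_mid, theta_high];
    depression if it lies in [theta_low, theta_mid)."""
    h = w @ x                      # local field (total synaptic input)
    if theta_mid < h <= theta_high:
        w = w + lr * x             # potentiate active synapses
    elif theta_low <= h < theta_mid:
        w = w - lr * x             # depress active synapses
    return w

# Example: one neuron with 100 plastic synapses and a random binary pattern.
rng = np.random.default_rng(3)
w = rng.normal(0, 0.1, 100)
x = (rng.random(100) < 0.3).astype(float)
w = three_threshold_update(w, x, theta_low=-2.0, theta_mid=0.0, theta_high=2.0)
```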

  5. A New Wavelet Threshold Function and Denoising Application

    Directory of Open Access Journals (Sweden)

    Lu Jing-yi

    2016-01-01

    Full Text Available In order to improve denoising performance, this paper introduces the basic principles of wavelet threshold denoising and the structure of traditional threshold functions. It then proposes an improved wavelet threshold function and an improved fixed-threshold formula. First, the paper examines the problems of traditional wavelet threshold functions and introduces adjustment factors to construct a new threshold function based on the soft threshold function. Then, it studies the fixed threshold and incorporates a logarithmic function of the number of wavelet decomposition levels to design a new fixed-threshold formula. Finally, the paper uses the hard threshold, soft threshold, Garrote threshold, and the improved threshold function to denoise different signals, and computes the signal-to-noise ratio (SNR) and mean square error (MSE) obtained with each function after denoising. Theoretical analysis and experimental results showed that the proposed approach addresses the constant-bias problem of the soft threshold function and the discontinuity problem of the hard threshold function, avoids applying the same threshold value across different decomposition scales, effectively filters the noise in the signals, and improves the SNR while reducing the MSE of the output signals.
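
    For reference, the sketch below implements the standard hard, soft, and garrote threshold functions that the paper compares against; the authors' improved threshold function and fixed-threshold formula are not reproduced here, and the example coefficients are arbitrary.

```python
import numpy as np

def hard_threshold(w, t):
    """Keep coefficients whose magnitude exceeds t, zero the rest (discontinuous at ±t)."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    """Shrink surviving coefficients toward zero by t (constant bias of t)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def garrote_threshold(w, t):
    """Non-negative garrote: continuous, with bias that decays for large coefficients."""
    out = np.zeros_like(w, dtype=float)
    keep = np.abs(w) > t
    out[keep] = w[keep] - t**2 / w[keep]
    return out

coeffs = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
t = 1.0
print(hard_threshold(coeffs, t), soft_threshold(coeffs, t), garrote_threshold(coeffs, t))
```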

  6. EEG datasets for motor imagery brain-computer interface.

    Science.gov (United States)

    Cho, Hohyun; Ahn, Minkyu; Ahn, Sangtae; Kwon, Moonyoung; Jun, Sung Chan

    2017-07-01

    Most investigators of brain-computer interface (BCI) research believe that BCI can be achieved through induced neuronal activity from the cortex, but not by evoked neuronal activity. Motor imagery (MI)-based BCI is one of the standard concepts of BCI, in that the user can generate induced activity by imagining motor movements. However, variations in performance over sessions and subjects are too severe to overcome easily; therefore, a basic understanding and investigation of BCI performance variation is necessary to find critical evidence of performance variation. Here we present not only EEG datasets for MI BCI from 52 subjects, but also the results of a psychological and physiological questionnaire, EMG datasets, the locations of 3D EEG electrodes, and EEGs for non-task-related states. We validated our EEG datasets by using the percentage of bad trials, event-related desynchronization/synchronization (ERD/ERS) analysis, and classification analysis. After conventional rejection of bad trials, we showed contralateral ERD and ipsilateral ERS in the somatosensory area, which are well-known patterns of MI. Finally, we showed that 73.08% of datasets (38 subjects) included reasonably discriminative information. Our EEG datasets included the information necessary to determine statistical significance; they consisted of well-discriminated datasets (38 subjects) and less-discriminative datasets. These may provide researchers with opportunities to investigate human factors related to MI BCI performance variation, and may also achieve subject-to-subject transfer by using metadata, including a questionnaire, EEG coordinates, and EEGs for non-task-related states. © The Authors 2017. Published by Oxford University Press.

  7. Comparison of Machine Learning Techniques for the Prediction of Compressive Strength of Concrete

    Directory of Open Access Journals (Sweden)

    Palika Chopra

    2018-01-01

    Full Text Available A comparative analysis for the prediction of the compressive strength of concrete at the ages of 28, 56, and 91 days has been carried out using machine learning techniques in the “R” software environment. R is gaining a strong foothold in the statistical realm and is becoming an indispensable tool for researchers. The dataset was generated under controlled laboratory conditions. Using R miner, the most widely used data mining techniques, namely the decision tree (DT) model, the random forest (RF) model, and the neural network (NN) model, have been applied and compared with the help of the coefficient of determination (R2) and the root-mean-square error (RMSE), and it is inferred that the NN model predicts the compressive strength of concrete with the highest accuracy.
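
    The paper performs the comparison in R; a rough scikit-learn analogue is sketched below to show the same workflow (train DT, RF and NN regressors, then compare them by R2 and RMSE on held-out data). The synthetic features and target are placeholders for the laboratory mix data, which are not reproduced here.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(400, 5))          # stand-ins for mix proportions, age, etc.
    y = 30 + 25 * X[:, 0] - 10 * X[:, 1] + 5 * X[:, 2] * X[:, 3] + rng.normal(0, 2, 400)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    models = {
        "DT": DecisionTreeRegressor(random_state=0),
        "RF": RandomForestRegressor(n_estimators=200, random_state=0),
        "NN": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0),
    }
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        rmse = mean_squared_error(y_te, pred) ** 0.5
        print(f"{name}: R2 = {r2_score(y_te, pred):.3f}, RMSE = {rmse:.2f}")
    ```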

  8. ZERODUR: deterministic approach for strength design

    Science.gov (United States)

    Hartmann, Peter

    2012-12-01

    There is an increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain the distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed infeasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to fit the data much better. Moreover, it delivers a lower threshold value, i.e. a minimum value for breakage stress, which allows statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations taken from the theory of fracture mechanics, which have proven reliable in proof-test qualifications of delicate structures made from brittle materials, make it possible to include fatigue due to stress corrosion in a straightforward way. With the formulae derived, either the lifetime can be calculated from a given stress or the allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution.
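
    As a rough illustration of the statistical step described above, the sketch below fits a three-parameter Weibull distribution to synthetic breakage-stress data (SciPy's weibull_min, whose location parameter plays the role of the strength threshold) and reads off a low-probability design strength. The data, the 0.1% level and the plain maximum-likelihood fit are assumptions; the paper's actual qualification procedure is not reproduced.

    ```python
    from scipy import stats

    # Synthetic breakage stresses (MPa) with a true threshold of 50 MPa.
    data = 50.0 + stats.weibull_min.rvs(c=2.5, scale=40.0, size=200, random_state=7)

    # Maximum-likelihood fit of shape, location (threshold) and scale.
    c, loc, scale = stats.weibull_min.fit(data)
    print(f"shape = {c:.2f}, threshold (loc) = {loc:.1f} MPa, scale = {scale:.1f} MPa")

    # Stress level at 0.1% failure probability; with a positive threshold this
    # stays above 'loc' instead of collapsing toward zero as in a 2-parameter fit.
    print(f"0.1% design strength: {stats.weibull_min.ppf(1e-3, c, loc, scale):.1f} MPa")
    ```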

  9. A high-resolution European dataset for hydrologic modeling

    Science.gov (United States)

    Ntegeka, Victor; Salamon, Peter; Gomes, Goncalo; Sint, Hadewij; Lorini, Valerio; Thielen, Jutta

    2013-04-01

    There is an increasing demand for large scale hydrological models, not only for modeling the impact of climate change on water resources but also for disaster risk assessments and flood or drought early warning systems. These large scale models need to be calibrated and verified against large amounts of observations in order to judge their capability to predict the future. However, the creation of large scale datasets is challenging, as it requires the collection, harmonization, and quality checking of large amounts of observations. For this reason, only a limited number of such datasets exist. In this work, we present a pan-European, high-resolution gridded dataset of meteorological observations (EFAS-Meteo) which was designed with the aim to drive a large scale hydrological model. Similar European and global gridded datasets already exist, such as the HadGHCND (Caesar et al., 2006), the JRC MARS-STAT database (van der Goot and Orlandi, 2003) and the E-OBS gridded dataset (Haylock et al., 2008). However, none of those provide a similarly high spatial resolution and/or a complete set of variables to force a hydrologic model. EFAS-Meteo contains daily maps of precipitation, surface temperature (mean, minimum and maximum), wind speed and vapour pressure at a spatial grid resolution of 5 x 5 km for the time period 1 January 1990 - 31 December 2011. It furthermore contains radiation, calculated with a staggered approach depending on the availability of sunshine duration, cloud cover and minimum and maximum temperature, as well as evapotranspiration (potential evapotranspiration, bare soil and open water evapotranspiration). The potential evapotranspiration was calculated using the Penman-Monteith equation with the above-mentioned meteorological variables. The dataset was created as part of the development of the European Flood Awareness System (EFAS) and has been continuously updated throughout the last years. The dataset variables are used as
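
    The sketch below shows a textbook FAO-56 form of the Penman-Monteith reference evapotranspiration calculation mentioned above; the exact EFAS-Meteo implementation (surface types, radiation scheme, parameter choices) may differ, and the sample inputs are made up.

    ```python
    import math

    def fao56_penman_monteith(t_mean_c, rn_mj_m2_day, u2_m_s, ea_kpa,
                              pressure_kpa=101.3, g_mj_m2_day=0.0):
        """Daily reference ET0 in mm/day (FAO-56, well-watered grass reference)."""
        es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))  # saturation vapour pressure
        delta = 4098.0 * es / (t_mean_c + 237.3) ** 2                  # slope of the es curve
        gamma = 0.000665 * pressure_kpa                                # psychrometric constant
        num = (0.408 * delta * (rn_mj_m2_day - g_mj_m2_day)
               + gamma * 900.0 / (t_mean_c + 273.0) * u2_m_s * (es - ea_kpa))
        return num / (delta + gamma * (1.0 + 0.34 * u2_m_s))

    # Example with assumed values for a mild summer day.
    print("ET0 = %.2f mm/day" % fao56_penman_monteith(
        t_mean_c=20.0, rn_mj_m2_day=15.0, u2_m_s=2.0, ea_kpa=1.4))
    ```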

  10. ASSISTments Dataset from Multiple Randomized Controlled Experiments

    Science.gov (United States)

    Selent, Douglas; Patikorn, Thanaporn; Heffernan, Neil

    2016-01-01

    In this paper, we present a dataset consisting of data generated from 22 previously and currently running randomized controlled experiments inside the ASSISTments online learning platform. This dataset provides data mining opportunities for researchers to analyze ASSISTments data in a convenient format across multiple experiments at the same time.…

  11. Would the ‘real’ observed dataset stand up? A critical examination of eight observed gridded climate datasets for China

    International Nuclear Information System (INIS)

    Sun, Qiaohong; Miao, Chiyuan; Duan, Qingyun; Kong, Dongxian; Ye, Aizhong; Di, Zhenhua; Gong, Wei

    2014-01-01

    This research compared and evaluated the spatio-temporal similarities and differences of eight widely used gridded datasets. The datasets include daily precipitation over East Asia (EA), the Climate Research Unit (CRU) product, the Global Precipitation Climatology Centre (GPCC) product, the University of Delaware (UDEL) product, Precipitation Reconstruction over Land (PREC/L), the Asian Precipitation Highly Resolved Observational (APHRO) product, the Institute of Atmospheric Physics (IAP) dataset from the Chinese Academy of Sciences, and the National Meteorological Information Center dataset from the China Meteorological Administration (CN05). The meteorological variables focus on surface air temperature (SAT) or precipitation (PR) in China. All datasets presented general agreement on the whole spatio-temporal scale, but some differences appeared for specific periods and regions. On a temporal scale, EA shows the highest amount of PR, while APHRO shows the lowest. CRU and UDEL show higher SAT than IAP or CN05. On a spatial scale, the most significant differences occur in western China for PR and SAT. For PR, the difference between EA and CRU is the largest. When compared with CN05, CRU shows higher SAT in the central and southern Northwest river drainage basin, UDEL exhibits higher SAT over the Southwest river drainage system, and IAP has lower SAT in the Tibetan Plateau. The differences in annual mean PR and SAT primarily come from summer and winter, respectively. Finally, potential factors impacting agreement among gridded climate datasets are discussed, including raw data sources, quality control (QC) schemes, orographic correction, and interpolation techniques. The implications and challenges of these results for climate research are also briefly addressed. (paper)
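
    A minimal sketch of the kind of intercomparison described above: given two fields on a common grid (random placeholders below, standing in for, e.g., a candidate product and CN05), compute the mean bias, RMSE and spatial pattern correlation over a region mask. Grid size, field values and the mask are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    reference = rng.gamma(shape=2.0, scale=300.0, size=(60, 100))    # mm/yr, reference product
    candidate = reference * rng.normal(1.05, 0.15, reference.shape)  # mm/yr, compared product
    region_mask = np.zeros(reference.shape, dtype=bool)
    region_mask[:, :40] = True                                       # an assumed sub-region box

    ref, cand = reference[region_mask], candidate[region_mask]
    bias = np.mean(cand - ref)
    rmse = np.sqrt(np.mean((cand - ref) ** 2))
    pattern_r = np.corrcoef(cand, ref)[0, 1]
    print(f"bias = {bias:+.1f} mm/yr, RMSE = {rmse:.1f} mm/yr, pattern r = {pattern_r:.3f}")
    ```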

  12. Viking Seismometer PDS Archive Dataset

    Science.gov (United States)

    Lorenz, R. D.

    2016-12-01

    The Viking Lander 2 seismometer operated successfully for over 500 Sols on the Martian surface, recording at least one likely candidate Marsquake. The Viking mission, in an era when data handling hardware (both on board and on the ground) was limited in capability, predated modern planetary data archiving, and ad-hoc repositories of the data, and the very low-level record at NSSDC, were neither convenient to process nor well-known. In an effort supported by the NASA Mars Data Analysis Program, we have converted the bulk of the Viking dataset (namely the 49,000 and 270,000 records made in High- and Event- modes at 20 and 1 Hz respectively) into a simple ASCII table format. Additionally, since wind-generated lander motion is a major component of the signal, contemporaneous meteorological data are included in summary records to facilitate correlation. These datasets are being archived at the PDS Geosciences Node. In addition to brief instrument and dataset descriptions, the archive includes code snippets in the freely-available language 'R' to demonstrate plotting and analysis. Further, we present examples of lander-generated noise, associated with the sampler arm, instrument dumps and other mechanical operations.

  13. Effects of the network structure and coupling strength on the noise-induced response delay of a neuronal network

    International Nuclear Information System (INIS)

    Ozer, Mahmut; Uzuntarla, Muhammet

    2008-01-01

    The Hodgkin-Huxley (H-H) neuron model driven by stimuli just above threshold shows a noise-induced delay of the time to the first spike for a certain range of noise strengths, an effect called 'noise delayed decay' (NDD). We study the response time of a network of coupled H-H neurons, and investigate how the NDD can be affected by the connection topology of the network and the coupling strength. We show that the NDD effect exists for weak and intermediate coupling strengths, whereas it disappears for strong coupling strength regardless of the connection topology. We also show that although the network structure has very little effect on the NDD for a weak coupling strength, the network structure plays a key role for an intermediate coupling strength by decreasing the NDD effect with an increasing number of random shortcuts, and thus provides an additional operating regime, absent in the regular network, in which the neurons may also exploit a spike time code.
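
    The sketch below reproduces only the single-neuron ingredient of this measurement: a standard Hodgkin-Huxley neuron driven by a just-above-threshold current plus additive noise, with the time to the first spike recorded for several noise strengths (the NDD signature is a non-monotonic dependence of this latency on noise). The network, coupling and topology aspects of the paper are not implemented, and the stimulus amplitude, noise scaling and spike criterion are assumptions.

    ```python
    import numpy as np

    def hh_first_spike(i_stim, sigma, t_max=100.0, dt=0.01, seed=0):
        """Time (ms) of the first upward crossing of 0 mV, or NaN if no spike."""
        rng = np.random.default_rng(seed)
        gna, gk, gl = 120.0, 36.0, 0.3                # mS/cm^2
        ena, ek, el, cm = 50.0, -77.0, -54.4, 1.0     # mV, mV, mV, uF/cm^2

        def rates(v):
            # Standard H-H rate functions (singularities at v = -40/-55 mV ignored).
            am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
            bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
            ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
            bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
            an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
            bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
            return am, bm, ah, bh, an, bn

        v = -65.0
        am, bm, ah, bh, an, bn = rates(v)
        m, h, n = am / (am + bm), ah / (ah + bh), an / (an + bn)  # steady-state gates
        for step in range(int(t_max / dt)):
            am, bm, ah, bh, an, bn = rates(v)
            i_ion = gna * m**3 * h * (v - ena) + gk * n**4 * (v - ek) + gl * (v - el)
            v_new = v + dt * (i_stim - i_ion) / cm + sigma * np.sqrt(dt) * rng.standard_normal()
            m += dt * (am * (1.0 - m) - bm * m)
            h += dt * (ah * (1.0 - h) - bh * h)
            n += dt * (an * (1.0 - n) - bn * n)
            if v < 0.0 <= v_new:                      # first upward crossing of 0 mV
                return (step + 1) * dt
            v = v_new
        return float("nan")

    if __name__ == "__main__":
        for sigma in (0.0, 0.5, 1.0, 2.0):            # assumed noise strengths
            times = [hh_first_spike(i_stim=7.0, sigma=sigma, seed=s) for s in range(20)]
            print(f"sigma = {sigma}: mean first-spike time = {np.nanmean(times):.2f} ms")
    ```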

  14. 3D printing from MRI Data: Harnessing strengths and minimizing weaknesses.

    Science.gov (United States)

    Ripley, Beth; Levin, Dmitry; Kelil, Tatiana; Hermsen, Joshua L; Kim, Sooah; Maki, Jeffrey H; Wilson, Gregory J

    2017-03-01

    3D printing facilitates the creation of accurate physical models of patient-specific anatomy from medical imaging datasets. While the majority of models to date are created from computed tomography (CT) data, there is increasing interest in creating models from other datasets, such as ultrasound and magnetic resonance imaging (MRI). MRI, in particular, holds great potential for 3D printing, given its excellent tissue characterization and lack of ionizing radiation. There are, however, challenges to 3D printing from MRI data as well. Here we review the basics of 3D printing, explore the current strengths and weaknesses of printing from MRI data as they pertain to model accuracy, and discuss considerations in the design of MRI sequences for 3D printing. Finally, we explore the future of 3D printing and MRI, including creative applications and new materials. Level of Evidence: 5. J. Magn. Reson. Imaging 2017;45:635-645. © 2016 International Society for Magnetic Resonance in Medicine.

  15. Threshold behavior in electron-atom scattering

    International Nuclear Information System (INIS)

    Sadeghpour, H.R.; Greene, C.H.

    1996-01-01

    Ever since the classic work of Wannier in 1953, the process of treating two threshold electrons in the continuum of a positively charged ion has been an active field of study. The authors have developed a treatment motivated by the physics below the double ionization threshold. By modeling the double ionization as a series of Landau-Zener transitions, they obtain an analytical formulation of the absolute threshold probability which has a leading power law behavior, akin to Wannier's law. Noteworthy aspects of this derivation are that it can be conveniently continued below threshold, giving rise to a "cusp" at threshold, and that absolute values of the cross sections are obtained on both sides of the threshold.

  16. Homogenised Australian climate datasets used for climate change monitoring

    International Nuclear Information System (INIS)

    Trewin, Blair; Jones, David; Collins, Dean; Jovanovic, Branislava; Braganza, Karl

    2007-01-01

    Full text: The Australian Bureau of Meteorology has developed a number of datasets for use in climate change monitoring. These datasets typically cover 50-200 stations distributed as evenly as possible over the Australian continent, and have been subject to detailed quality control and homogenisation. The time period over which data are available for each element is largely determined by the availability of data in digital form. Whilst nearly all Australian monthly and daily precipitation data have been digitised, a significant quantity of pre-1957 data (for temperature and evaporation) or pre-1987 data (for some other elements) remains to be digitised, and is not currently available for use in the climate change monitoring datasets. In the case of temperature and evaporation, the start date of the datasets is also determined by major changes in instruments or observing practices for which no adjustment is feasible at the present time. The datasets currently available cover: Monthly and daily precipitation (most stations commence 1915 or earlier, with many extending back to the late 19th century, and a few to the mid-19th century); Annual temperature (commences 1910); Daily temperature (commences 1910, with limited station coverage pre-1957); Twice-daily dewpoint/relative humidity (commences 1957); Monthly pan evaporation (commences 1970); Cloud amount (commences 1957) (Jovanovic et al. 2007). As well as the station-based datasets listed above, an additional dataset being developed for use in climate change monitoring (and other applications) covers tropical cyclones in the Australian region. This is described in more detail in Trewin (2007). The datasets already developed are used in analyses of observed climate change, which are available through the Australian Bureau of Meteorology website (http://www.bom.gov.au/silo/products/cli_chg/). They are also used as a basis for routine climate monitoring, and in the datasets used for the development of seasonal

  17. Double Photoionization Near Threshold

    Science.gov (United States)

    Wehlitz, Ralf

    2007-01-01

    The threshold region of the double-photoionization cross section is of particular interest because both ejected electrons move slowly in the Coulomb field of the residual ion. Near threshold both electrons have time to interact with each other and with the residual ion. Also, different theoretical models compete to describe the double-photoionization cross section in the threshold region. We have investigated that cross section for lithium and beryllium and have analyzed our data with respect to the latest results in the Coulomb-dipole theory. We find that our data support the idea of a Coulomb-dipole interaction.

  18. Coastal monitoring solutions of the geomorphological response of beach-dune systems using multi-temporal LiDAR datasets (Vendée coast, France)

    Science.gov (United States)

    Le Mauff, Baptiste; Juigner, Martin; Ba, Antoine; Robin, Marc; Launeau, Patrick; Fattal, Paul

    2018-03-01

    Three beach and dune systems located in the northeastern part of the Bay of Biscay in France were monitored over 5 years with a time series of three airborne LiDAR datasets. The three study sites illustrate a variety of morphological beach types found in this region. Reproducible monitoring solutions adapted to basic and complex beach and dune morphologies using LiDAR time series were investigated over two periods bounded by the three surveys. The first period (between May 2008 and August 2010) is characterized by a higher prevalence of storm events, and thus has a greater potential for eroding the coast, than the second period (between August 2010 and September 2013). During the first period, the central and northeastern part of the Bay of Biscay was notably impacted by Storm Xynthia, with water levels and wave heights exceeding the 10-year return period and 1-year return period, respectively. Despite differences in dune morphology between the sites, the dune crest (Dhigh) and the dune base (Dlow) are efficiently extracted from each DEM. Based on the extracted dune base, an original shoreline mobility indicator is built displaying a combination of the horizontal and vertical migrations of this geomorphic indicator between two LiDAR datasets. A 'Geomorphic Change Detection' is also completed by computing DEMs of Difference (DoD), resulting in segregated maps of erosion and deposition and in sediment budgets. Accounting for the accuracy of the LiDAR datasets, a probabilistic approach at a 95% confidence interval is used as a threshold for the Geomorphic Change Detection, yielding more reliable results. However, caution should be taken when interpreting thresholded maps of changes and sediment budgets because some beach processes may be masked, especially on wide tidal beaches, by only keeping the most significant changes. The results of the shoreline mobility and Geomorphic Change Detection analyses show a high variability in the beach responses between and within the three study sites.
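
    The core of the 'Geomorphic Change Detection' step can be sketched in a few lines: subtract the two DEMs, keep only cells whose change exceeds a propagated-error threshold at the 95% confidence level, and sum the remaining cells into erosion and deposition volumes. The grids, per-survey uncertainties and cell size below are placeholders, not the Vendée LiDAR data.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    dem_2008 = 2.0 + 0.5 * rng.standard_normal((200, 200))        # elevations (m), placeholder
    dem_2010 = dem_2008 + rng.normal(0.0, 0.15, dem_2008.shape)   # second survey, placeholder

    sigma1, sigma2 = 0.10, 0.10                 # per-survey vertical uncertainty (m), assumed
    threshold = 1.96 * np.hypot(sigma1, sigma2) # 95% confidence level on the difference

    dod = dem_2010 - dem_2008                   # DEM of Difference
    significant = np.abs(dod) > threshold       # keep only detectable change
    cell_area = 1.0                             # m^2 per grid cell, assumed

    erosion = dod[significant & (dod < 0)].sum() * cell_area      # negative volume (lowering)
    deposition = dod[significant & (dod > 0)].sum() * cell_area   # positive volume (accretion)
    print(f"threshold = {threshold:.2f} m")
    print(f"erosion = {erosion:.1f} m^3, deposition = {deposition:.1f} m^3, "
          f"net = {erosion + deposition:.1f} m^3")
    ```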

  19. Introduction of a simple-model-based land surface dataset for Europe

    Science.gov (United States)

    Orth, Rene; Seneviratne, Sonia I.

    2015-04-01

    Land surface hydrology can play a crucial role during extreme events such as droughts, floods and even heat waves. We introduce in this study a new hydrological dataset for Europe that consists of soil moisture, runoff and evapotranspiration (ET). It is derived with a simple water balance model (SWBM) forced with precipitation, temperature and net radiation. The SWBM dataset extends over the period 1984-2013 with a daily time step and 0.5° × 0.5° resolution. We employ a novel calibration approach, in which we consider 300 random parameter sets chosen from an observation-based range. Using several independent validation datasets representing soil moisture (or terrestrial water content), ET and streamflow, we identify the best performing parameter set and hence the new dataset. To illustrate its usefulness, the SWBM dataset is compared against several state-of-the-art datasets (ERA-Interim/Land, MERRA-Land, GLDAS-2-Noah, simulations of the Community Land Model Version 4), using all validation datasets as reference. For soil moisture dynamics it outperforms the benchmarks. Therefore the SWBM soil moisture dataset constitutes a reasonable alternative to sparse measurements, little validated model results, or proxy data such as precipitation indices. Also in terms of runoff the SWBM dataset performs well, whereas the evaluation of the SWBM ET dataset is overall satisfactory, but the dynamics are less well captured for this variable. This highlights the limitations of the dataset, as it is based on a simple model that uses uniform parameter values. Hence some processes impacting ET dynamics may not be captured, and quality issues may occur in regions with complex terrain. Even though the SWBM is well calibrated, it cannot replace more sophisticated models; but as their calibration is a complex task the present dataset may serve as a benchmark in future. In addition we investigate the sources of skill of the SWBM dataset and find that the parameter set has a similar
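
    The calibration idea (run the model with many random parameter sets drawn from plausible ranges and keep the best-performing one) can be illustrated with a toy bucket-type water balance model, shown below. This is not the SWBM formulation itself; the storage, runoff and ET functions, the parameter ranges and the synthetic forcing are simplified assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    days = 3 * 365
    precip = rng.gamma(0.5, 6.0, days)                              # mm/day, synthetic forcing
    pet = 2.0 + 1.5 * np.sin(2 * np.pi * np.arange(days) / 365.0)   # mm/day, seasonal cycle

    def bucket_model(precip, pet, s_max, alpha, gamma):
        """Toy bucket: runoff and ET fractions grow with relative soil wetness."""
        s, runoff = 0.5 * s_max, np.empty(len(precip))
        for t, (p, e) in enumerate(zip(precip, pet)):
            q = p * (s / s_max) ** alpha
            et = e * (s / s_max) ** gamma
            s = np.clip(s + p - q - et, 0.0, s_max)
            runoff[t] = q
        return runoff

    # Synthetic "observations" from a hidden truth, then 300 random parameter sets.
    obs = bucket_model(precip, pet, s_max=150.0, alpha=3.0, gamma=0.6)
    best, best_r = None, -np.inf
    for _ in range(300):
        params = (rng.uniform(50, 400), rng.uniform(1, 6), rng.uniform(0.2, 1.5))
        r = np.corrcoef(bucket_model(precip, pet, *params), obs)[0, 1]
        if r > best_r:
            best, best_r = params, r
    print("best (s_max, alpha, gamma):", np.round(best, 2), " correlation:", round(best_r, 3))
    ```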

  20. Data Mining for Imbalanced Datasets: An Overview

    Science.gov (United States)

    Chawla, Nitesh V.

    A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years brought increased interest in applying machine learning techniques to difficult "real-world" problems, many of which are characterized by imbalanced data. Additionally the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating performance of a classifier, might not be appropriate when the data is imbalanced and/or the costs of different errors vary markedly. In this Chapter, we discuss some of the sampling techniques used for balancing the datasets, and the performance measures more appropriate for mining imbalanced datasets.
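
    Two of the points made here, rebalancing the training set by sampling and judging the classifier with measures other than plain accuracy, are sketched below with scikit-learn (random oversampling of the minority class; F1 and ROC AUC as alternative measures). The synthetic, roughly 1:20 imbalanced dataset and the logistic-regression classifier are assumptions.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
    from sklearn.utils import resample

    X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Random oversampling: resample the minority class up to the majority count.
    X_min, X_maj = X_tr[y_tr == 1], X_tr[y_tr == 0]
    X_min_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=0)
    X_bal = np.vstack([X_maj, X_min_up])
    y_bal = np.concatenate([np.zeros(len(X_maj)), np.ones(len(X_min_up))])

    for name, (Xf, yf) in {"imbalanced": (X_tr, y_tr), "oversampled": (X_bal, y_bal)}.items():
        clf = LogisticRegression(max_iter=1000).fit(Xf, yf)
        proba = clf.predict_proba(X_te)[:, 1]
        pred = clf.predict(X_te)
        print(f"{name:>11}: acc = {accuracy_score(y_te, pred):.3f}, "
              f"F1 = {f1_score(y_te, pred):.3f}, AUC = {roc_auc_score(y_te, proba):.3f}")
    ```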

  1. Determination of the off-shell Higgs boson signal strength in the high-mass $ZZ$ and $WW$ final states with the ATLAS detector

    CERN Document Server

    Aad, Georges; Abdallah, Jalal; Abdinov, Ovsat; Aben, Rosemarie; Abolins, Maris; AbouZeid, Ossama; Abramowicz, Halina; Abreu, Henso; Abreu, Ricardo; Abulaiti, Yiming; Acharya, Bobby Samir; Adamczyk, Leszek; Adams, David; Adelman, Jahred; Adomeit, Stefanie; Adye, Tim; Affolder, Tony; Agatonovic-Jovin, Tatjana; Aguilar-Saavedra, Juan Antonio; Agustoni, Marco; Ahlen, Steven; Ahmadov, Faig; Aielli, Giulio; Akerstedt, Henrik; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov, Andrei; Alberghi, Gian Luigi; Albert, Justin; Albrand, Solveig; Alconada Verzini, Maria Josefina; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexopoulos, Theodoros; Alhroob, Muhammad; Alimonti, Gianluca; Alio, Lion; Alison, John; Allbrooke, Benedict; Allport, Phillip; Aloisio, Alberto; Alonso, Alejandro; Alonso, Francisco; Alpigiani, Cristiano; Altheimer, Andrew David; Alvarez Gonzalez, Barbara; Άlvarez Piqueras, Damián; Alviggi, Mariagrazia; Amako, Katsuya; Amaral Coutinho, Yara; Amelung, Christoph; Amidei, Dante; Amor Dos Santos, Susana Patricia; Amorim, Antonio; Amoroso, Simone; Amram, Nir; Amundsen, Glenn; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Angelidakis, Stylianos; Angelozzi, Ivan; Anger, Philipp; Angerami, Aaron; Anghinolfi, Francis; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Arabidze, Giorgi; Arai, Yasuo; Araque, Juan Pedro; Arce, Ayana; Arduh, Francisco Anuar; Arguin, Jean-Francois; Argyropoulos, Spyridon; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnal, Vanessa; Arnold, Hannah; Arratia, Miguel; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Asai, Shoji; Asbah, Nedaa; Ashkenazi, Adi; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astalos, Robert; Atkinson, Markus; Atlay, Naim Bora; Auerbach, Benjamin; Augsten, Kamil; Aurousseau, Mathieu; Avolio, Giuseppe; Axen, Bradley; Ayoub, Mohamad Kassem; Azuelos, Georges; Baak, Max; Baas, Alessandra; Bacci, Cesare; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Badescu, Elisabeta; Bagiacchi, Paolo; Bagnaia, Paolo; Bai, Yu; Bain, Travis; Baines, John; Baker, Oliver Keith; Balek, Petr; Balestri, Thomas; Balli, Fabrice; Banas, Elzbieta; Banerjee, Swagato; Bannoura, Arwa A E; Bansil, Hardeep Singh; Barak, Liron; Baranov, Sergei; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnes, Sarah Louise; Barnett, Bruce; Barnett, Michael; Barnovska, Zuzana; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Bassalat, Ahmed; Basye, Austin; Bates, Richard; Batista, Santiago Juan; Batley, Richard; Battaglia, Marco; Bauce, Matteo; Bauer, Florian; Bawa, Harinder Singh; Beacham, James Baker; Beattie, Michael David; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Bechtle, Philip; Beck, Hans Peter; Becker, Anne Kathrin; Becker, Maurice; Becker, Sebastian; Beckingham, Matthew; Becot, Cyril; Beddall, Andrew; Beddall, Ayda; Bednyakov, Vadim; Bee, Christopher; Beemster, Lars; Beermann, Thomas; Begel, Michael; Behr, Katharina; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; Bellerive, Alain; Bellomo, Massimiliano; Belotskiy, 
Konstantin; Beltramello, Olga; Benary, Odette; Benchekroun, Driss; Bender, Michael; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez Garcia, Jorge-Armando; Benjamin, Douglas; Bensinger, James; Bentvelsen, Stan; Beresford, Lydia; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Beringer, Jürg; Bernard, Clare; Bernard, Nathan Rogers; Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertoli, Gabriele; Bertolucci, Federico; Bertsche, Carolyn; Bertsche, David; Besana, Maria Ilaria; Besjes, Geert-Jan; Bessidskaia Bylund, Olga; Bessner, Martin Florian; Besson, Nathalie; Betancourt, Christopher; Bethke, Siegfried; Bevan, Adrian John; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianchini, Louis; Bianco, Michele; Biebel, Otmar; Bieniek, Stephen Paul; Biglietti, Michela; Bilbao De Mendizabal, Javier; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Black, Curtis; Black, James; Black, Kevin; Blackburn, Daniel; Blair, Robert; Blanchard, Jean-Baptiste; Blanco, Jacobo Ezequiel; Blazek, Tomas; Bloch, Ingo; Blocker, Craig; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Bock, Christopher; Boehler, Michael; Bogaerts, Joannes Andreas; Bogdanchikov, Alexander; Bohm, Christian; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Boldyrev, Alexey; Bomben, Marco; Bona, Marcella; Boonekamp, Maarten; Borisov, Anatoly; Borissov, Guennadi; Borroni, Sara; Bortfeldt, Jonathan; Bortolotto, Valerio; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boudreau, Joseph; Bouffard, Julian; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Bousson, Nicolas; Boutouil, Sara; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozic, Ivan; Bracinik, Juraj; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Brazzale, Simone Federico; Brendlinger, Kurt; Brennan, Amelia Jean; Brenner, Lydia; Brenner, Richard; Bressler, Shikma; Bristow, Kieran; Bristow, Timothy Michael; Britton, Dave; Britzger, Daniel; Brochu, Frederic; Brock, Ian; Brock, Raymond; Bronner, Johanna; Brooijmans, Gustaaf; Brooks, Timothy; Brooks, William; Brosamer, Jacquelyn; Brost, Elizabeth; Brown, Jonathan; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Bryngemark, Lene; Buanes, Trygve; Buat, Quentin; Buchholz, Peter; Buckley, Andrew; Buda, Stelian Ioan; Budagov, Ioulian; Buehrer, Felix; Bugge, Lars; Bugge, Magnar Kopangen; Bulekov, Oleg; Burckhart, Helfried; Burdin, Sergey; Burghgrave, Blake; Burke, Stephen; Burmeister, Ingo; Busato, Emmanuel; Büscher, Daniel; Büscher, Volker; Bussey, Peter; Buszello, Claus-Peter; Butler, John; Butt, Aatif Imtiaz; Buttar, Craig; Butterworth, Jonathan; Butti, Pierfrancesco; Buttinger, William; Buzatu, Adrian; Buzykaev, Aleksey; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calandri, Alessandro; Calderini, Giovanni; Calfayan, Philippe; Caloba, Luiz; Calvet, David; Calvet, Samuel; Camacho Toro, Reina; Camarda, Stefano; Cameron, David; Caminada, Lea Michaela; Caminal Armadans, Roger; Campana, Simone; Campanelli, Mario; Campoverde, Angel; Canale, Vincenzo; Canepa, Anadi; Cano Bret, Marc; Cantero, Josu; Cantrill, Robert; Cao, Tingting; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Caputo, Regina; Cardarelli, Roberto; Carli, 
Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Sascha; Carquin, Edson; Carrillo-Montoya, German D; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Casolino, Mirkoantonio; Castaneda-Miranda, Elizabeth; Castelli, Angelantonio; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Catastini, Pierluigi; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Caudron, Julien; Cavaliere, Viviana; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerio, Benjamin; Cerny, Karel; Santiago Cerqueira, Augusto; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cerv, Matevz; Cervelli, Alberto; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chalupkova, Ina; Chang, Philip; Chapleau, Bertrand; Chapman, John Derek; Charlton, Dave; Chau, Chav Chhiv; Chavez Barajas, Carlos Alberto; Cheatham, Susan; Chegwidden, Andrew; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Karen; Chen, Liming; Chen, Shenjian; Chen, Xin; Chen, Ye; Cheng, Hok Chuen; Cheng, Yangyang; Cheplakov, Alexander; Cheremushkina, Evgenia; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Chevalier, Laurent; Chiarella, Vitaliano; Childers, John Taylor; Chiodini, Gabriele; Chisholm, Andrew; Chislett, Rebecca Thalatta; Chitan, Adrian; Chizhov, Mihail; Choi, Kyungeon; Chouridou, Sofia; Chow, Bonnie Kar Bo; Christodoulou, Valentinos; Chromek-Burckhart, Doris; Chu, Ming-Lee; Chudoba, Jiri; Chuinard, Annabelle Julia; Chwastowski, Janusz; Chytka, Ladislav; Ciapetti, Guido; Ciftci, Abbas Kenan; Cinca, Diane; Cindro, Vladimir; Cioara, Irina Antonela; Ciocio, Alessandra; Citron, Zvi Hirsh; Ciubancan, Mihai; Clark, Allan G; Clark, Brian Lee; Clark, Philip James; Clarke, Robert; Cleland, Bill; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coffey, Laurel; Cogan, Joshua Godfrey; Cole, Brian; Cole, Stephen; Colijn, Auke-Pieter; Collot, Johann; Colombo, Tommaso; Compostella, Gabriele; Conde Muiño, Patricia; Coniavitis, Elias; Connell, Simon Henry; Connelly, Ian; Consonni, Sofia Maria; Consorti, Valerio; Constantinescu, Serban; Conta, Claudio; Conti, Geraldine; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Côté, David; Cottin, Giovanna; Cowan, Glen; Cox, Brian; Cranmer, Kyle; Cree, Graham; Crépé-Renaudin, Sabine; Crescioli, Francesco; Cribbs, Wayne Allen; Crispin Ortuzar, Mireia; Cristinziani, Markus; Croft, Vince; Crosetti, Giovanni; Cuhadar Donszelmann, Tulay; Cummings, Jane; Curatolo, Maria; Cuthbert, Cameron; Czirr, Hendrik; Czodrowski, Patrick; D'Auria, Saverio; D'Onofrio, Monica; Da Cunha Sargedas De Sousa, Mario Jose; Da Via, Cinzia; Dabrowski, Wladyslaw; Dafinca, Alexandru; Dai, Tiesheng; Dale, Orjan; Dallaire, Frederick; Dallapiccola, Carlo; Dam, Mogens; Dandoy, Jeffrey Rogers; Daniells, Andrew Christopher; Danninger, Matthias; Dano Hoffmann, Maria; Dao, Valerio; Darbo, Giovanni; Darmora, Smita; Dassoulas, James; Dattagupta, Aparajita; Davey, Will; David, Claire; Davidek, Tomas; Davies, Eleanor; Davies, Merlin; Davison, Peter; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Daya-Ishmukhametova, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Cecco, Sandro; De Groot, Nicolo; de Jong, Paul; De la Torre, Hector; De Lorenzi, Francesco; De Nooij, 
Lucie; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dedovich, Dmitri; Deigaard, Ingrid; Del Peso, Jose; Del Prete, Tarcisio; Delgove, David; Deliot, Frederic; Delitzsch, Chris Malena; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Dell'Orso, Mauro; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; Deluca, Carolina; DeMarco, David; Demers, Sarah; Demichev, Mikhail; Demilly, Aurelien; Denisov, Sergey; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deterre, Cecile; Deviveiros, Pier-Olivier; Dewhurst, Alastair; Dhaliwal, Saminder; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Domenico, Antonio; Di Donato, Camilla; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Mattia, Alessandro; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Di Valentino, David; Diaconu, Cristinel; Diamond, Miriam; Dias, Flavia; Diaz, Marco Aurelio; Diehl, Edward; Dietrich, Janet; Diglio, Sara; Dimitrievska, Aleksandra; Dingfelder, Jochen; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; Djuvsland, Julia Isabell; Barros do Vale, Maria Aline; Dobos, Daniel; Dobre, Monica; Doglioni, Caterina; Dohmae, Takeshi; Dolejsi, Jiri; Dolezal, Zdenek; Dolgoshein, Boris; Donadelli, Marisilvia; Donati, Simone; Dondero, Paolo; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dova, Maria-Teresa; Doyle, Tony; Drechsler, Eric; Dris, Manolis; Dubreuil, Emmanuelle; Duchovni, Ehud; Duckeck, Guenter; Ducu, Otilia Anamaria; Duda, Dominik; Dudarev, Alexey; Duflot, Laurent; Duguid, Liam; Dührssen, Michael; Dunford, Monica; Duran Yildiz, Hatice; Düren, Michael; Durglishvili, Archil; Duschinger, Dirk; Dwuznik, Michal; Dyndal, Mateusz; Eckardt, Christoph; Ecker, Katharina Maria; Edson, William; Edwards, Nicholas Charles; Ehrenfeld, Wolfgang; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Elliot, Alison; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Enari, Yuji; Endner, Oliver Chris; Endo, Masaki; Engelmann, Roderich; Erdmann, Johannes; Ereditato, Antonio; Ernis, Gunar; Ernst, Jesse; Ernst, Michael; Errede, Steven; Ertel, Eugen; Escalier, Marc; Esch, Hendrik; Escobar, Carlos; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Ezhilov, Alexey; Fabbri, Laura; Facini, Gabriel; Fakhrutdinov, Rinat; Falciano, Speranza; Falla, Rebecca Jane; Faltova, Jana; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farooque, Trisha; Farrell, Steven; Farrington, Sinead; Farthouat, Philippe; Fassi, Farida; Fassnacht, Patrick; Fassouliotis, Dimitrios; Favareto, Andrea; Fayard, Louis; Federic, Pavol; Fedin, Oleg; Fedorko, Wojciech; Feigl, Simon; Feligioni, Lorenzo; Feng, Cunfeng; Feng, Eric; Feng, Haolu; Fenyuk, Alexander; Fernandez Martinez, Patricia; Fernandez Perez, Sonia; Ferrag, Samir; Ferrando, James; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrere, Didier; Ferretti, Claudio; Ferretto Parodi, Andrea; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filipuzzi, Marco; Filthaut, Frank; Fincke-Keeler, Margret; Finelli, Kevin Daniel; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Adam; Fischer, Cora; Fischer, Julia; Fisher, Wade Cameron; Fitzgerald, Eric Andrew; Flechl, Martin; Fleck, Ivor; 
Fleischmann, Philipp; Fleischmann, Sebastian; Fletcher, Gareth Thomas; Fletcher, Gregory; Flick, Tobias; Floderus, Anders; Flores Castillo, Luis; Flowerdew, Michael; Formica, Andrea; Forti, Alessandra; Fournier, Daniel; Fox, Harald; Fracchia, Silvia; Francavilla, Paolo; Franchini, Matteo; Francis, David; Franconi, Laura; Franklin, Melissa; Fraternali, Marco; Freeborn, David; French, Sky; Friedrich, Felix; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fulsom, Bryan Gregory; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gabrielli, Alessandro; Gabrielli, Andrea; Gadatsch, Stefan; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Galhardo, Bruno; Gallas, Elizabeth; Gallop, Bruce; Gallus, Petr; Galster, Gorm Aske Gram Krohn; Gan, KK; Gao, Jun; Gao, Yanyan; Gao, Yongsheng; Garay Walls, Francisca; Garberson, Ford; García, Carmen; García Navarro, José Enrique; Garcia-Sciveres, Maurice; Gardner, Robert; Garelli, Nicoletta; Garonne, Vincent; Gatti, Claudio; Gaudiello, Andrea; Gaudio, Gabriella; Gaur, Bakul; Gauthier, Lea; Gauzzi, Paolo; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Ge, Peng; Gecse, Zoltan; Gee, Norman; Geerts, Daniël Alphonsus Adrianus; Geich-Gimbel, Christoph; Geisler, Manuel Patrice; Gemme, Claudia; Genest, Marie-Hélène; Gentile, Simonetta; George, Matthias; George, Simon; Gerbaudo, Davide; Gershon, Avi; Ghazlane, Hamid; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giangiobbe, Vincent; Giannetti, Paola; Gibbard, Bruce; Gibson, Stephen; Gilchriese, Murdock; Gillam, Thomas; Gillberg, Dag; Gilles, Geoffrey; Gingrich, Douglas; Giokaris, Nikos; Giordani, MarioPaolo; Giorgi, Filippo Maria; Giorgi, Francesco Michelangelo; Giraud, Pierre-Francois; Giromini, Paolo; Giugni, Danilo; Giuliani, Claudia; Giulini, Maddalena; Gjelsten, Børge Kile; Gkaitatzis, Stamatios; Gkialas, Ioannis; Gkougkousis, Evangelos Leonidas; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glaysher, Paul; Glazov, Alexandre; Goblirsch-Kolb, Maximilian; Goddard, Jack Robert; Godlewski, Jan; Goldfarb, Steven; Golling, Tobias; Golubkov, Dmitry; Gomes, Agostinho; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; González de la Hoz, Santiago; Gonzalez Parra, Garoe; Gonzalez-Sevilla, Sergio; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Goshaw, Alfred; Gössling, Claus; Gostkin, Mikhail Ivanovitch; Goujdami, Driss; Goussiou, Anna; Govender, Nicolin; Grabas, Herve Marie Xavier; Graber, Lars; Grabowska-Bold, Iwona; Grafström, Per; Grahn, Karl-Johan; Gramling, Johanna; Gramstad, Eirik; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Gray, Heather; Graziani, Enrico; Greenwood, Zeno Dixon; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Griffiths, Justin; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Gris, Philippe Luc Yves; Grishkevich, Yaroslav; Grivaz, Jean-Francois; Grohs, Johannes Philipp; Grohsjean, Alexander; Gross, Eilam; Grosse-Knetter, Joern; Grossi, Giulio Cornelio; Grout, Zara Jane; Guan, Liang; Guenther, Jaroslav; Guescini, Francesco; Guest, Daniel; Gueta, Orel; Guido, Elisa; Guillemin, Thibault; Guindon, Stefan; Gul, Umar; Gumpert, Christian; Guo, Jun; Gupta, Shaun; Gutierrez, Phillip; Gutierrez Ortiz, Nicolas Gilberto; Gutschow, Christian; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haber, Carl; Hadavand, Haleh Khani; Haddad, Nacim; Haefner, Petra; Hageböck, 
Stephan; Hajduk, Zbigniew; Hakobyan, Hrachya; Haleem, Mahsana; Haley, Joseph; Hall, David; Halladjian, Garabed; Hallewell, Gregory David; Hamacher, Klaus; Hamal, Petr; Hamano, Kenji; Hamer, Matthias; Hamilton, Andrew; Hamilton, Samuel; Hamity, Guillermo Nicolas; Hamnett, Phillip George; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Hanke, Paul; Hanna, Remie; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Maike Christina; Hansen, Peter Henrik; Hara, Kazuhiko; Hard, Andrew; Harenberg, Torsten; Hariri, Faten; Harkusha, Siarhei; Harrington, Robert; Harrison, Paul Fraser; Hartjes, Fred; Hasegawa, Makoto; Hasegawa, Satoshi; Hasegawa, Yoji; Hasib, A; Hassani, Samira; Haug, Sigve; Hauser, Reiner; Hauswald, Lorenz; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Anthony David; Hayashi, Takayasu; Hayden, Daniel; Hays, Chris; Hays, Jonathan Michael; Hayward, Helen; Haywood, Stephen; Head, Simon; Heck, Tobias; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heim, Timon; Heinemann, Beate; Heinrich, Lukas; Hejbal, Jiri; Helary, Louis; Hellman, Sten; Hellmich, Dennis; Helsens, Clement; Henderson, James; Henderson, Robert; Heng, Yang; Hengler, Christopher; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Herbert, Geoffrey Henry; Hernández Jiménez, Yesenia; Herrberg-Schubert, Ruth; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Hetherly, Jeffrey Wayne; Hickling, Robert; Higón-Rodriguez, Emilio; Hill, Ewan; Hill, John; Hiller, Karl Heinz; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hinman, Rachel Reisner; Hirose, Minoru; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoenig, Friedrich; Hohlfeld, Marc; Hohn, David; Holmes, Tova Ray; Hong, Tae Min; Hooft van Huysduynen, Loek; Hopkins, Walter; Horii, Yasuyuki; Horton, Arthur James; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howard, Jacob; Howarth, James; Hrabovsky, Miroslav; Hristova, Ivana; Hrivnac, Julius; Hryn'ova, Tetiana; Hrynevich, Aliaksei; Hsu, Catherine; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Hu, Diedi; Hu, Qipeng; Hu, Xueye; Huang, Yanping; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Huhtinen, Mika; Hülsing, Tobias Alexander; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Ideal, Emma; Idrissi, Zineb; Iengo, Paolo; Igonkina, Olga; Iizawa, Tomoya; Ikegami, Yoichi; Ikematsu, Katsumasa; Ikeno, Masahiro; Ilchenko, Iurii; Iliadis, Dimitrios; Ilic, Nikolina; Inamaru, Yuki; Ince, Tayfun; Ioannou, Pavlos; Iodice, Mauro; Iordanidou, Kalliopi; Ippolito, Valerio; Irles Quiles, Adrian; Isaksson, Charlie; Ishino, Masaya; Ishitsuka, Masaki; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Iturbe Ponce, Julia Mariana; Iuppa, Roberto; Ivarsson, Jenny; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jabbar, Samina; Jackson, Brett; Jackson, Matthew; Jackson, Paul; Jaekel, Martin; Jain, Vivek; Jakobs, Karl; Jakobsen, Sune; Jakoubek, Tomas; Jakubek, Jan; Jamin, David Olivier; Jana, Dilip; Jansen, Eric; Jansky, Roland; Janssen, Jens; Janus, Michel; Jarlskog, Göran; Javadov, Namig; Javůrek, Tomáš; Jeanty, Laura; Jejelava, Juansher; Jeng, Geng-yuan; Jennens, David; Jenni, Peter; Jentzsch, Jennifer; Jeske, Carl; Jézéquel, Stéphane; Ji, Haoshuang; Jia, Jiangyong; Jiang, Yi; Jimenez Pena, Javier; Jin, Shan; Jinaru, 
Adam; Jinnouchi, Osamu; Joergensen, Morten Dam; Johansson, Per; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tim; Jongmanns, Jan; Jorge, Pedro; Joshi, Kiran Daniel; Jovicevic, Jelena; Ju, Xiangyang; Jung, Christian; Jussel, Patrick; Juste Rozas, Aurelio; Kaci, Mohammed; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kahn, Sebastien Jonathan; Kajomovitz, Enrique; Kalderon, Charles William; Kama, Sami; Kamenshchikov, Andrey; Kanaya, Naoko; Kaneda, Michiru; Kaneti, Steven; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kar, Deepak; Karakostas, Konstantinos; Karamaoun, Andrew; Karastathis, Nikolaos; Kareem, Mohammad Jawad; Karnevskiy, Mikhail; Karpov, Sergey; Karpova, Zoya; Karthik, Krishnaiyengar; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kass, Richard; Kastanas, Alex; Kataoka, Yousuke; Katre, Akshay; Katzy, Judith; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kazama, Shingo; Kazanin, Vassili; Kazarinov, Makhail; Keeler, Richard; Kehoe, Robert; Keil, Markus; Keller, John; Kempster, Jacob Julian; Keoshkerian, Houry; Kepka, Oldrich; Kerševan, Borut Paul; Kersten, Susanne; Keyes, Robert; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharlamov, Alexey; Khoo, Teng Jian; Khoriauli, Gia; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hee Yeun; Kim, Hyeon Jin; Kim, Shinhong; Kim, Young-Kee; Kimura, Naoki; Kind, Oliver Maria; King, Barry; King, Matthew; King, Robert Steven Beaufoy; King, Samuel Burton; Kirk, Julie; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kiss, Florian; Kiuchi, Kenji; Kladiva, Eduard; Klein, Matthew Henry; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klimek, Pawel; Klimentov, Alexei; Klingenberg, Reiner; Klinger, Joel Alexander; Klioutchnikova, Tatiana; Klok, Peter; Kluge, Eike-Erik; Kluit, Peter; Kluth, Stefan; Kneringer, Emmerich; Knoops, Edith; Knue, Andrea; Kobayashi, Dai; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Koffas, Thomas; Koffeman, Els; Kogan, Lucy Anne; Kohlmann, Simon; Kohout, Zdenek; Kohriki, Takashi; Koi, Tatsumi; Kolanoski, Hermann; Koletsou, Iro; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Kondrashova, Nataliia; Köneke, Karsten; König, Adriaan; König, Sebastian; Kono, Takanori; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kopeliansky, Revital; Koperny, Stefan; Köpke, Lutz; Kopp, Anna Katharina; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Kortner, Oliver; Kortner, Sandra; Kosek, Tomas; Kostyukhin, Vadim; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumeli-Charalampidi, Athina; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kramarenko, Viktor; Kramberger, Gregor; Krasnopevtsev, Dimitriy; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, Jana; Kravchenko, Anton; Kreiss, Sven; Kretz, Moritz; Kretzschmar, Jan; Kreutzfeldt, Kristof; Krieger, Peter; Krizka, Karol; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Krumnack, Nils; Krumshteyn, Zinovii; Kruse, Amanda; Kruse, Mark; Kruskal, Michael; Kubota, Takashi; Kucuk, Hilal; Kuday, Sinan; Kuehn, Susanne; Kugel, Andreas; Kuger, Fabian; Kuhl, Andrew; Kuhl, Thorsten; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kuna, Marine; Kunigo, Takuto; Kupco, Alexander; Kurashige, Hisaya; Kurochkin, Yurii; Kurumida, Rie; Kus, Vlastimil; Kuwertz, 
Emma Sian; Kuze, Masahiro; Kvita, Jiri; Kwan, Tony; Kyriazopoulos, Dimitrios; La Rosa, Alessandro; La Rosa Navarro, Jose Luis; La Rotonda, Laura; Lacasta, Carlos; Lacava, Francesco; Lacey, James; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Lambourne, Luke; Lammers, Sabine; Lampen, Caleb; Lampl, Walter; Lançon, Eric; Landgraf, Ulrich; Landon, Murrough; Lang, Valerie Susanne; Lange, J örn Christian; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Lasagni Manghi, Federico; Lassnig, Mario; Laurelli, Paolo; Lavrijsen, Wim; Law, Alexander; Laycock, Paul; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Claire Alexandra; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Guillaume; Lefebvre, Michel; Legger, Federica; Leggett, Charles; Lehan, Allan; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leight, William Axel; Leisos, Antonios; Leister, Andrew Gerard; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Lemmer, Boris; Leney, Katharine; Lenz, Tatjana; Lenzen, Georg; Lenzi, Bruno; Leone, Robert; Leone, Sandra; Leonidopoulos, Christos; Leontsinis, Stefanos; Leroy, Claude; Lester, Christopher; Levchenko, Mikhail; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Levy, Mark; Lewis, Adrian; Leyko, Agnieszka; Leyton, Michael; Li, Bing; Li, Haifeng; Li, Ho Ling; Li, Lei; Li, Liang; Li, Shu; Li, Yichen; Liang, Zhijun; Liao, Hongbo; Liberti, Barbara; Liblong, Aaron; Lichard, Peter; Lie, Ki; Liebal, Jessica; Liebig, Wolfgang; Limbach, Christian; Limosani, Antonio; Lin, Simon; Lin, Tai-Hua; Linde, Frank; Lindquist, Brian Edward; Linnemann, James; Lipeles, Elliot; Lipniacka, Anna; Lisovyi, Mykhailo; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Bo; Liu, Dong; Liu, Jian; Liu, Jianbei; Liu, Kun; Liu, Lulu; Liu, Miaoyuan; Liu, Minghui; Liu, Yanwen; Livan, Michele; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lo Sterzo, Francesco; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Loebinger, Fred; Loevschall-Jensen, Ask Emil; Loginov, Andrey; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Long, Brian Alexander; Long, Jonathan; Long, Robin Eamonn; Looper, Kristina Anne; Lopes, Lourenco; Lopez Mateos, David; Lopez Paredes, Brais; Lopez Paz, Ivan; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Loscutoff, Peter; Lösel, Philipp Jonathan; Lou, XinChou; Lounis, Abdenour; Love, Jeremy; Love, Peter; Lu, Nan; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Luehring, Frederick; Lukas, Wolfgang; Luminari, Lamberto; Lundberg, Olof; Lund-Jensen, Bengt; Lungwitz, Matthias; Lynn, David; Lysak, Roman; Lytken, Else; Ma, Hong; Ma, Lian Liang; Maccarrone, Giovanni; Macchiolo, Anna; Macdonald, Calum Michael; Machado Miguens, Joana; Macina, Daniela; Madaffari, Daniele; Madar, Romain; Maddocks, Harvey Jonathan; Mader, Wolfgang; Madsen, Alexander; Maeland, Steffen; Maeno, Tadashi; Maevskiy, Artem; Magradze, Erekle; Mahboubi, Kambiz; Mahlstedt, Joern; Maiani, Camilla; Maidantchik, Carmen; Maier, Andreas Alexander; Maier, Thomas; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Malaescu, Bogdan; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mamuzic, Judita; Mancini, Giada; Mandelli, Beatrice; Mandelli, Luciano; Mandić, 
Igor; Mandrysch, Rocco; Maneira, José; Manfredini, Alessandro; Manhaes de Andrade Filho, Luciano; Manjarres Ramos, Joany; Mann, Alexander; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Mantifel, Rodger; Mantoani, Matteo; Mapelli, Livio; March, Luis; Marchiori, Giovanni; Marcisovsky, Michal; Marino, Christopher; Marjanovic, Marija; Marroquim, Fernando; Marsden, Stephen Philip; Marshall, Zach; Marti, Lukas Fritz; Marti-Garcia, Salvador; Martin, Brian Thomas; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martinez, Mario; Martin-Haugh, Stewart; Martoiu, Victor Sorin; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massa, Lorenzo; Massol, Nicolas; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mättig, Peter; Mattmann, Johannes; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; Mazini, Rachid; Mazza, Simone Michele; Mazzaferro, Luca; Mc Goldrick, Garrin; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McCubbin, Norman; McFarlane, Kenneth; Mcfayden, Josh; Mchedlidze, Gvantsa; McMahon, Steve; McPherson, Robert; Medinnis, Michael; Meehan, Samuel; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meineck, Christian; Meirose, Bernhard; Mellado Garcia, Bruce Rafael; Meloni, Federico; Mengarelli, Alberto; Menke, Sven; Meoni, Evelin; Mercurio, Kevin Michael; Mergelmeyer, Sebastian; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Middleton, Robin; Miglioranzi, Silvia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Milesi, Marco; Milic, Adriana; Miller, David; Mills, Corrinne; Milov, Alexander; Milstead, David; Minaenko, Andrey; Minami, Yuto; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mitani, Takashi; Mitrevski, Jovan; Mitsou, Vasiliki A; Miucci, Antonio; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Mochizuki, Kazuya; Mohapatra, Soumya; Mohr, Wolfgang; Molander, Simon; Moles-Valls, Regina; Mönig, Klaus; Monini, Caterina; Monk, James; Monnier, Emmanuel; Montejo Berlingen, Javier; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Morange, Nicolas; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morgenstern, Marcus; Morii, Masahiro; Morisbak, Vanja; Moritz, Sebastian; Morley, Anthony Keith; Mornacchi, Giuseppe; Morris, John; Mortensen, Simon Stark; Morton, Alexander; Morvaj, Ljiljana; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Motohashi, Kazuki; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Muanza, Steve; Mudd, Richard; Mueller, Felix; Mueller, James; Mueller, Klemens; Mueller, Ralph Soeren Peter; Mueller, Thibaut; Muenstermann, Daniel; Mullen, Paul; Munwes, Yonathan; Murillo Quijada, Javier Alberto; Murray, Bill; Musheghyan, Haykuhi; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nackenhorst, Olaf; Nadal, Jordi; Nagai, Koichi; Nagai, Ryo; Nagai, Yoshikazu; Nagano, Kunihiro; Nagarkar, Advait; Nagasaka, Yasushi; Nagata, Kazuki; Nagel, Martin; Nagy, Elemer; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Namasivayam, Harisankar; Nanava, Gizo; Naranjo Garcia, Roger Felipe; Narayan, Rohin; Naumann, Thomas; Navarro, Gabriela; Nayyar, Ruchika; Neal, Homer; Nechaeva, Polina; Neep, Thomas James; Nef, 
Pascal Daniel; Negri, Andrea; Negrini, Matteo; Nektarijevic, Snezana; Nellist, Clara; Nelson, Andrew; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neumann, Manuel; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen, Duong Hai; Nickerson, Richard; Nicolaidou, Rosy; Nicquevert, Bertrand; Nielsen, Jason; Nikiforou, Nikiforos; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Jon Kerr; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nisius, Richard; Nobe, Takuya; Nomachi, Masaharu; Nomidis, Ioannis; Nooney, Tamsin; Norberg, Scarlet; Nordberg, Markus; Novgorodova, Olga; Nowak, Sebastian; Nozaki, Mitsuaki; Nozka, Libor; Ntekas, Konstantinos; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; Nuti, Francesco; O'Brien, Brendan Joseph; O'grady, Fionnbarr; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Obermann, Theresa; Ocariz, Jose; Ochi, Atsuhiko; Ochoa, Ines; Oda, Susumu; Odaka, Shigeru; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohman, Henrik; Oide, Hideyuki; Okamura, Wataru; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Olivares Pino, Sebastian Andres; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olszewski, Andrzej; Olszowska, Jolanta; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlando, Nicola; Oropeza Barrera, Cristina; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Otero y Garzon, Gustavo; Otono, Hidetoshi; Ouchrif, Mohamed; Ouellette, Eric; Ould-Saada, Farid; Ouraou, Ahmimed; Oussoren, Koen Pieter; Ouyang, Qun; Ovcharova, Ana; Owen, Mark; Owen, Rhys Edward; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pachal, Katherine; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Pagáčová, Martina; Pagan Griso, Simone; Paganis, Efstathios; Pahl, Christoph; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Palestini, Sandro; Palka, Marek; Pallin, Dominique; Palma, Alberto; Pan, Yibin; Panagiotopoulou, Evgenia; Pandini, Carlo Enrico; Panduro Vazquez, William; Pani, Priscilla; Panitkin, Sergey; Paolozzi, Lorenzo; Papadopoulou, Theodora; Papageorgiou, Konstantinos; Paramonov, Alexander; Paredes Hernandez, Daniela; Parker, Michael Andrew; Parker, Kerry Ann; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passaggio, Stefano; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Pauly, Thilo; Pearce, James; Pearson, Benjamin; Pedersen, Lars Egholm; Pedersen, Maiken; Pedraza Lopez, Sebastian; Pedro, Rute; Peleganchuk, Sergey; Pelikan, Daniel; Peng, Haiping; Penning, Bjoern; Penwell, John; Perepelitsa, Dennis; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perini, Laura; Pernegger, Heinz; Perrella, Sabrina; Peschke, Richard; Peshekhonov, Vladimir; Peters, Krisztian; Peters, Yvonne; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Pettersson, Nora Emilia; Pezoa, Raquel; Phillips, Peter William; Piacquadio, Giacinto; Pianori, Elisabetta; Picazio, Attilio; Piccaro, Elisa; Piccinini, Maurizio; Pickering, Mark Andrew; Piegaia, Ricardo; Pignotti, David; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinfold, James; Pingel, Almut; Pinto, Belmiro; Pires, Sylvestre; Pitt, Michael; Pizio, Caterina; Plazak, Lukas; Pleier, Marc-Andre; Pleskot, Vojtech; Plotnikova, Elena; Plucinski, Pawel; Pluth, Daniel; 
Poettgen, Ruth; Poggioli, Luc; Pohl, David-leon; Polesello, Giacomo; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Pollard, Christopher Samuel; Polychronakos, Venetios; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Pospisil, Stanislav; Potamianos, Karolos; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Pozdnyakov, Valery; Pralavorio, Pascal; Pranko, Aliaksandr; Prasad, Srivas; Prell, Soeren; Price, Darren; Price, Joe; Price, Lawrence; Primavera, Margherita; Prince, Sebastien; Proissl, Manuel; Prokofiev, Kirill; Prokoshin, Fedor; Protopapadaki, Eftychia-sofia; Protopopescu, Serban; Proudfoot, James; Przybycien, Mariusz; Ptacek, Elizabeth; Puddu, Daniele; Pueschel, Elisa; Puldon, David; Purohit, Milind; Puzo, Patrick; Qian, Jianming; Qin, Gang; Qin, Yang; Quadt, Arnulf; Quarrie, David; Quayle, William; Queitsch-Maitland, Michaela; Quilty, Donnchadha; Radeka, Veljko; Radescu, Voica; Radhakrishnan, Sooraj Krishnan; Radloff, Peter; Rados, Pere; Ragusa, Francesco; Rahal, Ghita; Rajagopalan, Srinivasan; Rammensee, Michael; Rangel-Smith, Camila; Rauscher, Felix; Rave, Stefan; Ravenscroft, Thomas; Raymond, Michel; Read, Alexander Lincoln; Readioff, Nathan Peter; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Rehnisch, Laura; Reisin, Hernan; Relich, Matthew; Rembser, Christoph; Ren, Huan; Renaud, Adrien; Rescigno, Marco; Resconi, Silvia; Rezanova, Olga; Reznicek, Pavel; Rezvani, Reyhaneh; Richter, Robert; Richter, Stefan; Richter-Was, Elzbieta; Ricken, Oliver; Ridel, Melissa; Rieck, Patrick; Riegel, Christian Johann; Rieger, Julia; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Ristić, Branislav; Ritsch, Elmar; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Roda, Chiara; Roe, Shaun; Røhne, Ole; Rolli, Simona; Romaniouk, Anatoli; Romano, Marino; Romano Saez, Silvestre Marino; Romero Adam, Elena; Rompotis, Nikolaos; Ronzani, Manfredi; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Peyton; Rosendahl, Peter Lundgaard; Rosenthal, Oliver; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rosten, Rachel; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexandre; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rubinskiy, Igor; Rud, Viacheslav; Rudolph, Christian; Rudolph, Matthew Scott; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, Nikolai; Ruschke, Alexander; Russell, Heather; Rutherfoord, John; Ruthmann, Nils; Ryabov, Yury; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Saavedra, Aldo; Sabato, Gabriele; Sacerdoti, Sabrina; Saddique, Asif; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Saimpert, Matthias; Sakamoto, Hiroshi; Sakurai, Yuki; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Sanchez, Arturo; Sánchez, Javier; Sanchez Martinez, Victoria; Sandaker, Heidi; Sandbach, Ruth Laura; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Carlos; Sandstroem, Rikard; Sankey, Dave; Sansoni, Andrea; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Santoyo Castillo, Itzebelt; Sapp, Kevin; Sapronov, Andrey; 
Saraiva, João; Sarrazin, Bjorn; Sasaki, Osamu; Sasaki, Yuichi; Sato, Koji; Sauvage, Gilles; Sauvan, Emmanuel; Savage, Graham; Savard, Pierre; Sawyer, Craig; Sawyer, Lee; Saxon, James; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Scarcella, Mark; Scarfone, Valerio; Schaarschmidt, Jana; Schacht, Peter; Schaefer, Douglas; Schaefer, Ralph; Schaeffer, Jan; Schaepe, Steffen; Schaetzel, Sebastian; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R~Dean; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Schiavi, Carlo; Schillo, Christian; Schioppa, Marco; Schlenker, Stefan; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitt, Sebastian; Schmitt, Stefan; Schneider, Basil; Schnellbach, Yan Jie; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schoenrock, Bradley Daniel; Schopf, Elisabeth; Schorlemmer, Andre Lukas; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schramm, Steven; Schreyer, Manuel; Schroeder, Christian; Schuh, Natascha; Schultens, Martin Johannes; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwarz, Thomas Andrew; Schwegler, Philipp; Schwemling, Philippe; Schwienhorst, Reinhard; Schwindling, Jerome; Schwindt, Thomas; Schwoerer, Maud; Sciacca, Gianfranco; Scifo, Estelle; Sciolla, Gabriella; Scuri, Fabrizio; Scutti, Federico; Searcy, Jacob; Sedov, George; Sedykh, Evgeny; Seema, Pienpen; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Sekula, Stephen; Selbach, Karoline Elfriede; Seliverstov, Dmitry; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Serre, Thomas; Seuster, Rolf; Severini, Horst; Sfiligoj, Tina; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shang, Ruo-yu; Shank, James; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Shcherbakova, Anna; Shehu, Ciwake Yusufu; Sherwood, Peter; Shi, Liaoshan; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shiyakova, Mariya; Shmeleva, Alevtina; Shoaleh Saadi, Diane; Shochet, Mel; Shojaii, Seyedruhollah; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Shushkevich, Stanislav; Sicho, Petr; Sidiropoulou, Ourania; Sidorov, Dmitri; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silva, José; Silver, Yiftah; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; Simon, Dorian; Simoniello, Rosa; Sinervo, Pekka; Sinev, Nikolai; Siragusa, Giovanni; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skinner, Malcolm Bruce; Skottowe, Hugh Philip; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Slawinska, Magdalena; Sliwa, Krzysztof; Smakhtin, Vladimir; Smart, Ben; Smestad, Lillian; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Matthew; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snidero, Giacomo; Snyder, Scott; Sobie, Randall; Socher, Felix; Soffer, Abner; Soh, Dart-yin; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solodkov, Alexander; Soloshenko, Alexei; Solovyanov, Oleg; Solovyev, Victor; Sommer, Philip; Song, Hong Ye; Soni, Nitesh; Sood, Alexander; Sopczak, Andre; Sopko, Bruno; Sopko, Vit; Sorin, Veronica; Sosa, David; Sosebee, Mark; Sotiropoulou, Calliope Louisa; Soualah, Rachik; Soueid, Paul; Soukharev, Andrey; South, David; Spagnolo, Stefania; Spalla, Margherita; Spanò, Francesco; Spearman, 
William Robert; Spettel, Fabian; Spighi, Roberto; Spigo, Giancarlo; Spiller, Laurence Anthony; Spousta, Martin; Spreitzer, Teresa; St Denis, Richard Dante; Staerz, Steffen; Stahlman, Jonathan; Stamen, Rainer; Stamm, Soren; Stanecka, Ewa; Stanescu, Cristian; Stanescu-Bellu, Madalina; Stanitzki, Marcel Michael; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staszewski, Rafal; Stavina, Pavel; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stern, Sebastian; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoebe, Michael; Stoicea, Gabriel; Stolte, Philipp; Stonjek, Stefan; Stradling, Alden; Straessner, Arno; Stramaglia, Maria Elena; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strubig, Antonia; Stucci, Stefania Antonia; Stugu, Bjarne; Styles, Nicholas Adam; Su, Dong; Su, Jun; Subramaniam, Rajivalochan; Succurro, Antonella; Sugaya, Yorihito; Suhr, Chad; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Siyuan; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Susinno, Giancarlo; Sutton, Mark; Suzuki, Shota; Suzuki, Yu; Svatos, Michal; Swedish, Stephen; Swiatlowski, Maximilian; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Taccini, Cecilia; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tam, Jason; Tan, Kong Guan; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tannenwald, Benjamin Bordy; Tannoury, Nancy; Tapprogge, Stefan; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teischinger, Florian Alfred; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Teoh, Jia Jian; Tepel, Fabian-Phillipp; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thomas, Juergen; Thomas-Wilsker, Joshuha; Thompson, Emily; Thompson, Paul; Thompson, Ray; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Thomson, Mark; Thun, Rudolf; Tibbetts, Mark James; Ticse Torres, Royer Edson; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tiouchichine, Elodie; Tipton, Paul; Tisserant, Sylvain; Todorov, Theodore; Todorova-Nova, Sharka; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tollefson, Kirsten; Tolley, Emma; Tomlinson, Lee; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Trischuk, William; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trovatelli, Monica; True, Patrick; Trzebinski, Maciej; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsuno, Soshi; Tsybychev, Dmitri; Tudorache, Alexandra; Tudorache, Valentina; Tuna, Alexander Naip; Tupputi, Salvatore; Turchikhin, Semen; Turecek, 
Daniel; Turra, Ruggero; Turvey, Andrew John; Tuts, Michael; Tykhonov, Andrii; Tylmad, Maja; Tyndel, Mike; Ueda, Ikuo; Ueno, Ryuichi; Ughetto, Michael; Ugland, Maren; Uhlenbrock, Mathias; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Unverdorben, Christopher; Urban, Jozef; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Usanova, Anna; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Valderanis, Chrysostomos; Valencic, Nika; Valentinetti, Sara; Valero, Alberto; Valery, Loic; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Den Wollenberg, Wouter; Van Der Deijl, Pieter; van der Geer, Rogier; van der Graaf, Harry; Van Der Leeuw, Robin; van Eldik, Niels; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vanguri, Rami; Vaniachine, Alexandre; Vannucci, Francois; Vardanyan, Gagik; Vari, Riccardo; Varnes, Erich; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vazeille, Francois; Vazquez Schroeder, Tamara; Veatch, Jason; Veloso, Filipe; Velz, Thomas; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Venturini, Alessio; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Vigne, Ralph; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vladoiu, Dan; Vlasak, Michal; Vogel, Marcelo; Vokac, Petr; Volpi, Guido; Volpi, Matteo; von der Schmitt, Hans; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorobev, Konstantin; Vos, Marcel; Voss, Rudiger; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vuillermet, Raphael; Vukotic, Ilija; Vykydal, Zdenek; Wagner, Peter; Wagner, Wolfgang; Wahlberg, Hernan; Wahrmund, Sebastian; Wakabayashi, Jun; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wang, Chao; Wang, Fuquan; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Kuhan; Wang, Rui; Wang, Song-Ming; Wang, Tan; Wang, Xiaoxiao; Wanotayaroj, Chaowaroj; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Warsinsky, Markus; Washbrook, Andrew; Wasicki, Christoph; Watkins, Peter; Watson, Alan; Watson, Ian; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Samuel; Weber, Michele; Weber, Stefan Wolf; Webster, Jordan S; Weidberg, Anthony; Weinert, Benjamin; Weingarten, Jens; Weiser, Christian; Weits, Hartger; Wells, Phillippa; Wenaus, Torre; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Wessels, Martin; Wetter, Jeffrey; Whalen, Kathleen; Wharton, Andrew Mark; White, Andrew; White, Martin; White, Ryan; White, Sebastian; Whiteson, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wildauer, Andreas; Wilkens, Henric George; Williams, Hugh; Williams, Sarah; Willis, Christopher; Willocq, Stephane; Wilson, Alan; Wilson, John; Wingerter-Seez, Isabelle; Winklmeier, Frank; Winter, Benedict Tobias; Wittgen, Matthias; Wittkowski, Josephine; Wollstadt, Simon Jakob; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wu, Mengqing; Wu, Miles; Wu, Sau Lan; Wu, Xin; Wu, 
Yusheng; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xu, Da; Xu, Lailin; Yabsley, Bruce; Yacoob, Sahal; Yakabe, Ryota; Yamada, Miho; Yamaguchi, Yohei; Yamamoto, Akira; Yamamoto, Shimpei; Yamanaka, Takashi; Yamauchi, Katsuya; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Yi; Yao, Liwen; Yao, Weiming; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yeletskikh, Ivan; Yen, Andy L; Yildirim, Eda; Yorita, Kohei; Yoshida, Rikutaro; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Youssef, Saul; Yu, David Ren-Hwa; Yu, Jaehoon; Yu, Jiaming; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Yusuff, Imran; Zabinski, Bartlomiej; Zaidan, Remi; Zaitsev, Alexander; Zalieckas, Justas; Zaman, Aungshuman; Zambito, Stefano; Zanello, Lucia; Zanzi, Daniele; Zeitnitz, Christian; Zeman, Martin; Zemla, Andrzej; Zengel, Keith; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zhang, Dongliang; Zhang, Fangzhou; Zhang, Jinlong; Zhang, Lei; Zhang, Ruiqi; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Xiandong; Zhao, Yongke; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Chen; Zhou, Lei; Zhou, Li; Zhou, Ning; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhukov, Konstantin; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Christoph; Zimmermann, Robert; Zimmermann, Stephanie; Zinonos, Zinonas; Zinser, Markus; Ziolkowski, Michael; Živković, Lidija; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zurzolo, Giovanni; Zwalinski, Lukasz

    2015-07-17

    Measurements of the $ZZ$ and $WW$ final states in the mass range above the $2m_Z$ and $2m_W$ thresholds provide a unique opportunity to measure the off-shell coupling strength of the Higgs boson. This paper presents a determination of the off-shell Higgs boson event yields normalised to the Standard Model prediction (signal strength) in the $ZZ \\rightarrow 4\\ell$, $ZZ\\rightarrow 2\\ell2\\nu$, …

  2. Analysis of k-means clustering approach on the breast cancer Wisconsin dataset.

    Science.gov (United States)

    Dubey, Ashutosh Kumar; Gupta, Umesh; Jain, Sonal

    2016-11-01

    Breast cancer is one of the most common cancers worldwide and the most frequent cancer in women. Early detection of breast cancer offers the possibility of cure; therefore, a large number of studies are under way to identify methods that can detect breast cancer in its early stages. This study aimed to quantify the effects of the k-means clustering algorithm under different computational settings, such as centroid initialization, distance measure, split method, epoch, attribute count, and iteration count, and to identify the combination of settings with the potential for the most accurate clustering. The k-means algorithm was used to evaluate the impact of centroid initialization, distance measures, and split methods. The experiments were performed on the breast cancer Wisconsin (BCW) diagnostic dataset. Foggy and random centroids were used for centroid initialization: the foggy centroid was calculated from random values, whereas the random centroid used (0, 0) as the initial centroid. Results were obtained by running k-means over different cases with variable parameters: centroid (foggy/random), distance (Euclidean/Manhattan/Pearson), split (simple/variance), threshold (constant epoch/same centroid), attributes (2-9), and iterations (4-10). Approximately 92% average positive prediction accuracy was obtained with this approach. Better results were found for the same-centroid threshold and the highest-variance split, and results achieved with the Euclidean and Manhattan distances were better than those with the Pearson correlation. These findings provide an extensive understanding of the computational parameters that can be used with k-means and indicate that k-means has the potential to classify the BCW dataset.
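
    A minimal sketch, not the authors' implementation, of running k-means with a pluggable distance measure; scikit-learn's bundled Wisconsin diagnostic data stands in for the BCW dataset, and the agreement score is only an illustrative proxy for the paper's positive-prediction accuracy.

        import numpy as np
        from sklearn.datasets import load_breast_cancer

        def kmeans(X, k, distance="euclidean", n_iter=20, seed=0):
            """Plain k-means with a pluggable distance measure (Euclidean or Manhattan)."""
            rng = np.random.default_rng(seed)
            centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
            for _ in range(n_iter):
                diff = X[:, None, :] - centroids[None, :, :]
                if distance == "euclidean":
                    d = np.sqrt((diff ** 2).sum(axis=2))
                else:                                    # "manhattan"
                    d = np.abs(diff).sum(axis=2)
                labels = d.argmin(axis=1)
                for j in range(k):
                    if np.any(labels == j):              # leave empty clusters untouched
                        centroids[j] = X[labels == j].mean(axis=0)
            return labels

        X, y = load_breast_cancer(return_X_y=True)       # Wisconsin diagnostic data (stand-in)
        for metric in ("euclidean", "manhattan"):
            labels = kmeans(X, k=2, distance=metric)
            agreement = max((labels == y).mean(), ((1 - labels) == y).mean())
            print(f"{metric}: cluster/label agreement = {agreement:.3f}")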

  3. A hybrid organic-inorganic perovskite dataset

    Science.gov (United States)

    Kim, Chiho; Huan, Tran Doan; Krishnan, Sridevi; Ramprasad, Rampi

    2017-05-01

    Hybrid organic-inorganic perovskites (HOIPs) have been attracting a great deal of attention due to their versatility of electronic properties and fabrication methods. We prepare a dataset of 1,346 HOIPs, which features 16 organic cations, 3 group-IV cations and 4 halide anions. Using a combination of an atomic structure search method and density functional theory calculations, the optimized structures, the bandgap, the dielectric constant, and the relative energies of the HOIPs are uniformly prepared and validated by comparing with relevant experimental and/or theoretical data. We make the dataset available at Dryad Digital Repository, NoMaD Repository, and Khazana Repository (http://khazana.uconn.edu/), hoping that it could be useful for future data-mining efforts that can explore possible structure-property relationships and phenomenological models. Progressive extension of the dataset is expected as new organic cations become appropriate within the HOIP framework, and as additional properties are calculated for the new compounds found.

  4. Genomics dataset of unidentified disclosed isolates

    Directory of Open Access Journals (Sweden)

    Bhagwan N. Rekadwad

    2016-09-01

    Full Text Available Analysis of DNA sequences is necessary for the higher hierarchical classification of organisms; it gives clues about the characteristics of organisms and their taxonomic position. This dataset was chosen to examine complexities in the unidentified DNA disclosed in patents. A total of 17 unidentified DNA sequences were thoroughly analyzed. Quick response (QR) codes were generated, and an analysis of the AT/GC content of the DNA sequences was carried out. The QR codes are helpful for quick identification of isolates, and the AT/GC content is helpful for studying their stability at different temperatures. Additionally, a dataset of cleavage codes and enzyme codes obtained from the restriction digestion study, which is helpful for performing studies using short DNA sequences, is reported. The dataset disclosed here provides new data for the exploration of unique DNA sequences for evaluation, identification, comparison and analysis. Keywords: BioLABs, Blunt ends, Genomics, NEB cutter, Restriction digestion, Short DNA sequences, Sticky ends
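
    A minimal sketch of the AT/GC-content calculation mentioned above, in plain Python; the example sequence is made up for illustration.

        def at_gc_content(seq):
            """Return (AT%, GC%) of a DNA sequence, counting only unambiguous bases."""
            seq = seq.upper()
            at = seq.count("A") + seq.count("T")
            gc = seq.count("G") + seq.count("C")
            total = at + gc
            return 100.0 * at / total, 100.0 * gc / total

        at_pct, gc_pct = at_gc_content("ATGCGCGTATTAGCGGCCAT")   # hypothetical sequence
        print(f"AT: {at_pct:.1f}%  GC: {gc_pct:.1f}%")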

  5. Modeling the residual effects and threshold saturation of training: a case study of Olympic swimmers.

    Science.gov (United States)

    Hellard, Philippe; Avalos, Marta; Millet, Gregoire; Lacoste, Lucien; Barale, Frederic; Chatard, Jean-Claude

    2005-02-01

    The aim of this study was to model the residual effects of training on swimming performance and to compare a model that includes threshold saturation (MM) with the Banister model (BM). Seven Olympic swimmers were studied over a period of 4 +/- 2 years. For 3 training loads (low-intensity w(LIT), high-intensity w(HIT), and strength training w(ST)), 3 residual training effects were determined: short-term (STE) during the taper phase (i.e., 3 weeks before the performance [weeks 0, 1, and 2]), intermediate-term (ITE) during the intensity phase (weeks 3, 4, and 5), and long-term (LTE) during the volume phase (weeks 6, 7, and 8). ITE and LTE were positive for w(HIT) and w(LIT), respectively (p < …). … measures indicated that MM compares favorably with BM. Identifying individual training thresholds may help individualize the distribution of training loads.
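
    The Banister impulse-response model compared above has the standard fitness-minus-fatigue form; the sketch below uses generic, made-up gain and decay constants rather than the swimmers' fitted values.

        import numpy as np

        def banister(w, p0=500.0, k1=1.0, k2=2.0, tau1=45.0, tau2=15.0):
            """Banister impulse-response model: fitness minus fatigue accumulated
            from a series of training loads w (all constants are illustrative)."""
            n = len(w)
            p = np.full(n, p0, dtype=float)
            for t in range(1, n):
                lags = t - np.arange(t)                 # time since each past load
                fitness = np.sum(w[:t] * np.exp(-lags / tau1))
                fatigue = np.sum(w[:t] * np.exp(-lags / tau2))
                p[t] = p0 + k1 * fitness - k2 * fatigue
            return p

        loads = np.r_[np.full(8, 100.0), np.linspace(100, 20, 3)]   # volume block, then taper
        print(banister(loads).round(1))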

  6. IPCC Socio-Economic Baseline Dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — The Intergovernmental Panel on Climate Change (IPCC) Socio-Economic Baseline Dataset consists of population, human development, economic, water resources, land...

  7. Fatigue damage mechanism and strength of woven laminates

    International Nuclear Information System (INIS)

    Xiao, J.; Bathias, C.

    1993-01-01

    Changes in apparent secant stiffness with cycle number were observed for both unnotched and notched woven laminated specimens (two orthotropic and one quasi-isotropic) during tensile fatigue tests at a fixed ratio of maximum fatigue load to ultimate tensile strength (UTS). Observable damage initiation and evolution as a function of cycle number were measured directly at the notched specimen surface with a video-camera system, and the fatigue strengths of the unnotched and notched specimens were determined. The results show that the normalized apparent secant stiffness curves as a function of cycle number can be divided into three stages. In the first and second stages of the notched specimens, and over the total life of the unnotched specimens, damage could not be clearly observed or verified with traditional experimental methods such as radiography and microscopy, although many acoustic emission signals were recorded. The last stage for the notched specimens (N/Nf > 0.4, where the secant stiffness decreases rapidly) corresponds to the initiation and evolution of observable damage. The fatigue strength of these woven composite laminates is dominated by this third stage, during which observable damage develops along the specimen ligament until fracture. During the third stage, a critical dimension at the specimen ligament and a life threshold can be identified, beyond which final catastrophic fracture occurs immediately. The quasi-isotropic laminate has a lower fatigue strength than the two orthotropic laminates, whose fatigue strengths are close to each other. Fatigue life is also influenced by the stacking sequence. (orig.)

  8. The LANDFIRE Refresh strategy: updating the national dataset

    Science.gov (United States)

    Nelson, Kurtis J.; Connot, Joel A.; Peterson, Birgit E.; Martin, Charley

    2013-01-01

    The LANDFIRE Program provides comprehensive vegetation and fuel datasets for the entire United States. As with many large-scale ecological datasets, vegetation and landscape conditions must be updated periodically to account for disturbances, growth, and natural succession. The LANDFIRE Refresh effort was the first attempt to consistently update these products nationwide. It incorporated a combination of specific systematic improvements to the original LANDFIRE National data, remote sensing based disturbance detection methods, field collected disturbance information, vegetation growth and succession modeling, and vegetation transition processes. This resulted in the creation of two complete datasets for all 50 states: LANDFIRE Refresh 2001, which includes the systematic improvements, and LANDFIRE Refresh 2008, which includes the disturbance and succession updates to the vegetation and fuel data. The new datasets are comparable for studying landscape changes in vegetation type and structure over a decadal period, and provide the most recent characterization of fuel conditions across the country. The applicability of the new layers is discussed and the effects of using the new fuel datasets are demonstrated through a fire behavior modeling exercise using the 2011 Wallow Fire in eastern Arizona as an example.

  9. Effects of botulinum toxin on strength-duration properties.

    Science.gov (United States)

    Yerdelen, Deniz; Koc, Filiz; Sarica, Yakup

    2007-10-01

    Axonal excitability studies have been used in several diseases to investigate the underlying pathophysiology. The threshold tracking technique was developed to measure noninvasively several indices of axonal excitability, such as strength-duration properties. This study investigated the possible effects of botulinum toxin on the strength-duration time constant (SDTC) in patients with the symptoms and signs of botulism. The clinical and electrophysiological findings of 13 patients admitted to the authors' clinic with botulism signs and symptoms were evaluated prospectively over a 5-day period after exposure to the toxin. After routine diagnostic electroneuromyographic examinations and electromyography with repetitive nerve stimulation at 20-50 Hz, SDTC was studied. The results were compared with those of 13 age- and sex-matched healthy volunteers. The SDTCs were 381 +/- 60 micros and 471 +/- 84 micros in patients and controls, respectively, a statistically significant difference between the two groups (p = .003, Mann-Whitney U test). These findings suggest a possible effect of botulinum toxin, known to act at the neuromuscular junction, on Na(+)/K(+) pump activity and on Na(+) or K(+) conductance.
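
    The strength-duration time constant is conventionally estimated from Weiss's law, in which threshold charge Q(t) = I_rh (t + SDTC) is linear in stimulus duration; the sketch below fits that line to made-up threshold currents, not to the patient data above.

        import numpy as np

        # Hypothetical threshold currents (mA) measured at four stimulus durations (ms)
        durations_ms = np.array([0.2, 0.4, 0.8, 1.0])
        threshold_mA = np.array([7.5, 5.0, 3.75, 3.5])

        # Weiss's law: threshold charge Q = I * t = I_rh * (t + SDTC), i.e. linear in t
        charge = threshold_mA * durations_ms
        slope, intercept = np.polyfit(durations_ms, charge, 1)
        rheobase = slope                        # I_rh is the slope of the charge-duration line
        sdtc_us = 1000.0 * intercept / slope    # SDTC is the magnitude of the x-axis intercept
        print(f"rheobase = {rheobase:.2f} mA, SDTC = {sdtc_us:.0f} microseconds")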

  10. Retrospective Attention Interacts with Stimulus Strength to Shape Working Memory Performance.

    Directory of Open Access Journals (Sweden)

    Theresa Wildegger

    Full Text Available Orienting attention retrospectively to selective contents in working memory (WM) influences performance. A separate line of research has shown that stimulus strength shapes perceptual representations. There is little research on how stimulus strength during encoding shapes WM performance, and how effects of retrospective orienting might vary with changes in stimulus strength. We explore these questions in three experiments using a continuous-recall WM task. In Experiment 1 we show that benefits of cueing spatial attention retrospectively during WM maintenance (retrocueing) vary according to stimulus contrast during encoding. Retrocueing effects emerge for supraliminal but not sub-threshold stimuli. However, once stimuli are supraliminal, performance is no longer influenced by stimulus contrast. In Experiments 2 and 3 we used a mixture-model approach to examine how different sources of error in WM are affected by contrast and retrocueing. For high-contrast stimuli (Experiment 2), retrocues increased the precision of successfully remembered items. For low-contrast stimuli (Experiment 3), retrocues decreased the probability of mistaking a target for distracters. These results suggest that the processes by which retrospective attentional orienting shape WM performance depend on the quality of WM representations, which in turn depends on stimulus strength during encoding.
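
    A minimal sketch of the mixture-model analysis referred to above, a von Mises "memory" component plus a uniform "guessing" component fitted to continuous-recall errors by maximum likelihood; the simulated errors and starting values are illustrative.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import vonmises

        def neg_log_lik(params, errors):
            """Negative log-likelihood of a Zhang & Luck-style mixture model."""
            g, kappa = params                     # guess rate, memory precision
            pdf = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
            return -np.sum(np.log(pdf))

        rng = np.random.default_rng(1)
        # Simulated recall errors (radians): 80% remembered (concentrated), 20% guesses
        errors = np.concatenate([vonmises.rvs(8.0, size=160, random_state=1),
                                 rng.uniform(-np.pi, np.pi, size=40)])

        res = minimize(neg_log_lik, x0=[0.3, 4.0], args=(errors,),
                       bounds=[(1e-3, 0.999), (0.1, 100.0)])
        g_hat, kappa_hat = res.x
        print(f"guess rate = {g_hat:.2f}, precision (kappa) = {kappa_hat:.1f}")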

  11. On the existence of a threshold for preventive behavioral responses to suppress epidemic spreading.

    Science.gov (United States)

    Sahneh, Faryad Darabi; Chowdhury, Fahmida N; Scoglio, Caterina M

    2012-01-01

    The spontaneous behavioral responses of individuals to the progress of an epidemic are recognized to have a significant impact on how the infection spreads. One observation is that, even if the infection strength is larger than the classical epidemic threshold, the initially growing infection can diminish as a result of preventive behavioral patterns adopted by the individuals. In order to investigate such dynamics of epidemic spreading, we use a simple behavioral model coupled with the individual-based SIS epidemic model, where susceptible individuals adopt a preventive behavior when sensing infection. We show that, given any infection strength and contact topology, there exists a region in the behavior-related parameter space such that the infection cannot survive in the long run and is completely contained. Several simulation results, including a spreading scenario in a realistic contact network from a rural district in the State of Kansas, are presented to support our analytical arguments.
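
    A toy individual-based SIS simulation in the spirit of the coupled model described above, where susceptible nodes that sense infected neighbours reduce their exposure; the contact graph, rates, and alertness rule are illustrative assumptions, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        A = (rng.random((n, n)) < 0.03).astype(float)    # random contact graph
        A = np.triu(A, 1)
        A = A + A.T                                      # symmetric, no self-loops

        beta, delta, alpha = 0.06, 0.1, 0.7   # infection, recovery, contact reduction when alert
        state = np.zeros(n, dtype=int)        # 0 = susceptible, 1 = infected
        state[rng.choice(n, 5, replace=False)] = 1

        for t in range(200):
            infected_neighbours = A @ (state == 1)
            alert = (state == 0) & (infected_neighbours > 0)         # behavioural response
            eff_beta = np.where(alert, beta * (1 - alpha), beta)     # reduced exposure if alert
            p_inf = 1 - (1 - eff_beta) ** infected_neighbours
            new_inf = (state == 0) & (rng.random(n) < p_inf)
            new_rec = (state == 1) & (rng.random(n) < delta)
            state[new_inf] = 1
            state[new_rec] = 0

        print("prevalence after 200 steps:", state.mean())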

  12. Omicseq: a web-based search engine for exploring omics datasets

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S.; Xu, Tianlei; Chen, Li; Zwick, Michael E.; Jiang, Xiaoqian; Wang, Fusheng

    2017-01-01

    Abstract The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve ‘findability’ of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic, NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. PMID:28402462

  13. Nanoparticle-organic pollutant interaction dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Dataset presents concentrations of organic pollutants, such as polyaromatic hydrocarbon compounds, in water samples. Water samples of known volume and concentration...

  14. Small-threshold behaviour of two-loop self-energy diagrams: two-particle thresholds

    International Nuclear Information System (INIS)

    Berends, F.A.; Davydychev, A.I.; Moskovskij Gosudarstvennyj Univ., Moscow; Smirnov, V.A.; Moskovskij Gosudarstvennyj Univ., Moscow

    1996-01-01

    The behaviour of two-loop two-point diagrams at non-zero thresholds corresponding to two-particle cuts is analyzed. The masses involved in a cut and the external momentum are assumed to be small as compared to some of the other masses of the diagram. By employing general formulae of asymptotic expansions of Feynman diagrams in momenta and masses, we construct an algorithm to derive analytic approximations to the diagrams. In such a way, we calculate several first coefficients of the expansion. Since no conditions on relative values of the small masses and the external momentum are imposed, the threshold irregularities are described analytically. Numerical examples, using diagrams occurring in the standard model, illustrate the convergence of the expansion below the first large threshold. (orig.)

  15. Framework for Interactive Parallel Dataset Analysis on the Grid

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, David A.; Ananthan, Balamurali; /Tech-X Corp.; Johnson, Tony; Serbo, Victor; /SLAC

    2007-01-10

    We present a framework for use at a typical Grid site to facilitate custom interactive parallel dataset analysis targeting terabyte-scale datasets of the type typically produced by large multi-institutional science experiments. We summarize the needs for interactive analysis and show a prototype solution that satisfies those needs. The solution consists of a desktop client tool and a set of Web Services that allow scientists to sign onto a Grid site, compose analysis script code to carry out physics analysis on datasets, distribute the code and datasets to worker nodes, collect the results back to the client, and construct professional-quality visualizations of the results.

  16. The effect of concrete strength and reinforcement on toughness of reinforced concrete beams

    OpenAIRE

    Carneiro, Joaquim A. O.; Jalali, Said; Teixeira, Vasco M. P.; Tomás, M.

    2005-01-01

    The objective of this work is to evaluate the strength and the total energy absorption capacity (toughness) of reinforced concrete beams with different amounts of steel-bar reinforcement. The experimental campaign covers the evaluation of the threshold load prior to collapse, the ultimate load and deformation, as well as the beam's total energy absorption capacity, using a three-point bending test. The beam half-span displacement was measured using a displacement transducer,...
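
    Toughness in a three-point bending test is commonly taken as the area under the load-deflection curve; a generic numpy sketch with fabricated data follows (the trapezoid rule is written out explicitly).

        import numpy as np

        # Hypothetical load-deflection record from a three-point bending test
        deflection_mm = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
        load_kN = np.array([0.0, 12.0, 22.0, 30.0, 33.0, 31.0, 18.0])

        # Area under the curve by the trapezoid rule (1 kN*mm = 1 J)
        toughness_J = np.sum((load_kN[1:] + load_kN[:-1]) / 2 * np.diff(deflection_mm))
        print(f"ultimate load = {load_kN.max():.1f} kN, toughness = {toughness_J:.1f} J")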

  17. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    Science.gov (United States)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as the ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  18. An Affinity Propagation Clustering Algorithm for Mixed Numeric and Categorical Datasets

    Directory of Open Access Journals (Sweden)

    Kang Zhang

    2014-01-01

    Full Text Available Clustering has been widely used in different fields of science, technology, social science, and so forth. In the real world, numeric as well as categorical features are usually used to describe the data objects. Accordingly, many clustering methods can process datasets that are either numeric or categorical. Recently, algorithms that can handle mixed data clustering problems have been developed. The affinity propagation (AP) algorithm is an exemplar-based clustering method which has demonstrated good performance on a wide variety of datasets. However, it has limitations in processing mixed datasets. In this paper, we propose a novel similarity measure for mixed-type datasets and an adaptive AP clustering algorithm to cluster the mixed datasets. Several real-world datasets are studied to evaluate the performance of the proposed algorithm. Comparisons with other clustering algorithms demonstrate that the proposed method works well not only on mixed datasets but also on pure numeric and categorical datasets.
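
    The paper's mixed similarity measure is not given in the record above, so the sketch below substitutes a simple Gower-style similarity as a stand-in and feeds it to scikit-learn's affinity propagation with a precomputed affinity matrix; the data and feature weights are illustrative.

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        rng = np.random.default_rng(0)
        # Toy mixed data: 20 objects with two numeric features in [0, 1] and one categorical
        numeric = np.vstack([rng.uniform(0.0, 0.3, (10, 2)), rng.uniform(0.7, 1.0, (10, 2))])
        categorical = np.array(["a"] * 10 + ["b"] * 10)

        # Gower-style similarity used in place of the paper's mixed measure
        num_sim = 1 - np.abs(numeric[:, None, :] - numeric[None, :, :]).mean(axis=2)
        cat_sim = (categorical[:, None] == categorical[None, :]).astype(float)
        similarity = (2 * num_sim + cat_sim) / 3          # weight by number of features

        ap = AffinityPropagation(affinity="precomputed", damping=0.7, random_state=0)
        labels = ap.fit_predict(similarity)
        print("clusters found:", len(set(labels)), labels)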

  19. Effect of whole body vibration exercise on muscle strength and proprioception in females with knee osteoarthritis

    DEFF Research Database (Denmark)

    Trans, T; Aaboe, J; Henriksen, M

    2009-01-01

    The purpose of this study was to assess the effect of whole body vibration (WBV) exercise on muscle strength and proprioception in female patients with osteoarthritis in the knee (knee-OA). A single-blinded, randomised, controlled trial was performed in an outpatient clinic on 52 female patients. … groups trained twice a week for 8 weeks, with a progressively increasing intensity; the WBV groups performed unloaded static WBV exercise. The following were measured: knee muscle strength (extension/flexion), proprioception (threshold for detection of passive movement, TDPM) and self-reported disease status (WOMAC). It was found that muscle strength increased significantly (p < …). Isometric knee-extension strength increased significantly (p=0.021) in VibM compared to Con, and TDPM was significantly improved (p=0.033) in VibF compared to Con, while there was a tendency …

  20. Beyond the fragmentation threshold hypothesis: regime shifts in biodiversity across fragmented landscapes.

    Directory of Open Access Journals (Sweden)

    Renata Pardini

    Full Text Available Ecological systems are vulnerable to irreversible change when key system properties are pushed over thresholds, resulting in the loss of resilience and the precipitation of a regime shift. Perhaps the most important of such properties in human-modified landscapes is the total amount of remnant native vegetation. In a seminal study Andrén proposed the existence of a fragmentation threshold in the total amount of remnant vegetation, below which landscape-scale connectivity is eroded and local species richness and abundance become dependent on patch size. Despite the fact that species patch-area effects have been a mainstay of conservation science there has yet to be a robust empirical evaluation of this hypothesis. Here we present and test a new conceptual model describing the mechanisms and consequences of biodiversity change in fragmented landscapes, identifying the fragmentation threshold as a first step in a positive feedback mechanism that has the capacity to impair ecological resilience, and drive a regime shift in biodiversity. The model considers that local extinction risk is defined by patch size, and immigration rates by landscape vegetation cover, and that the recovery from local species losses depends upon the landscape species pool. Using a unique dataset on the distribution of non-volant small mammals across replicate landscapes in the Atlantic forest of Brazil, we found strong evidence for our model predictions--that patch-area effects are evident only at intermediate levels of total forest cover, where landscape diversity is still high and opportunities for enhancing biodiversity through local management are greatest. Furthermore, high levels of forest loss can push native biota through an extinction filter, and result in the abrupt, landscape-wide loss of forest-specialist taxa, ecological resilience and management effectiveness. The proposed model links hitherto distinct theoretical approaches within a single framework

  1. Using Multiple Big Datasets and Machine Learning to Produce a New Global Particulate Dataset: A Technology Challenge Case Study

    Science.gov (United States)

    Lary, D. J.

    2013-12-01

    A BigData case study is described in which multiple datasets from several satellites, high-resolution global meteorological data, social media and in-situ observations are combined by machine learning on a distributed cluster with an automated workflow. The global particulate dataset is relevant to global public health studies and would not be possible to produce without the use of the multiple big datasets, in-situ data and machine learning. To greatly reduce development time and enhance functionality, a high-level language capable of parallel processing (Matlab) has been used. Key considerations for the system are high-speed access due to the large data volume, persistence of the large data volumes, and a precise process-time scheduling capability.

  2. In-season monitoring of hip and groin strength, health and function in elite youth soccer

    DEFF Research Database (Denmark)

    Wollin, Martin; Thorborg, Kristian; Welvaert, Marijke

    2018-01-01

    OBJECTIVES: The primary purpose of this study was to describe an early detection and management strategy for monitoring in-season hip and groin strength, health and function in soccer, and secondly to compare pre-season to in-season test results. DESIGN: Longitudinal cohort study. METHODS: Twenty-seven elite male youth soccer players (age: 15.07±0.73 years) volunteered to participate in the study. Monitoring tests included adductor strength, adductor/abductor strength ratio and hip and groin outcome scores (HAGOS). Data were recorded at pre-season and at 22 monthly intervals in-season. Thresholds … 0.09, CI95%: 0.04, 0.13, respectively. HAGOS subscale scores were lowest at baseline, with all subscales except Physical Activity showing significant improvements at time-point one (p < …). … time-loss were classified as minimal or mild. CONCLUSIONS: In-season monitoring aimed at early detection...

  3. Chemical product and function dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Merged product weight fraction and chemical function data. This dataset is associated with the following publication: Isaacs, K., M. Goldsmith, P. Egeghy, K....

  4. Hyper-arousal decreases human visual thresholds.

    Directory of Open Access Journals (Sweden)

    Adam J Woods

    Full Text Available Arousal has long been known to influence behavior and serves as an underlying component of cognition and consciousness. However, the consequences of hyper-arousal for visual perception remain unclear. The present study evaluates the impact of hyper-arousal on two aspects of visual sensitivity: visual stereoacuity and contrast thresholds. Sixty-eight participants participated in two experiments. Thirty-four participants were randomly divided into two groups in each experiment: Arousal Stimulation or Sham Control. The Arousal Stimulation group underwent a 50-second cold pressor stimulation (immersing the foot in 0-2 °C water), a technique known to increase arousal. In contrast, the Sham Control group immersed their foot in room temperature water. Stereoacuity thresholds (Experiment 1) and contrast thresholds (Experiment 2) were measured before and after stimulation. The Arousal Stimulation groups demonstrated significantly lower stereoacuity and contrast thresholds following cold pressor stimulation, whereas the Sham Control groups showed no difference in thresholds. These results provide the first evidence that hyper-arousal from sensory stimulation can lower visual thresholds. Hyper-arousal's ability to decrease visual thresholds has important implications for survival, sports, and everyday life.

  5. General Purpose Multimedia Dataset - GarageBand 2008

    DEFF Research Database (Denmark)

    Meng, Anders

    This document describes a general purpose multimedia data-set to be used in cross-media machine learning problems. In more detail, we describe the genre taxonomy applied at http://www.garageband.com, from where the data-set was collected, and how that taxonomy has been fused into a more human-understandable taxonomy. Finally, a description of various features extracted from both the audio and text is presented....

  6. Omicseq: a web-based search engine for exploring omics datasets.

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S; Xu, Tianlei; Chen, Li; Zwick, Michael E; Jiang, Xiaoqian; Wang, Fusheng; Qin, Zhaohui S

    2017-07-03

    The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve 'findability' of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic, NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. A numerical study of threshold states

    International Nuclear Information System (INIS)

    Ata, M.S.; Grama, C.; Grama, N.; Hategan, C.

    1979-01-01

    There is some experimental evidence of charged-particle threshold states. On the statistical background of levels, some simple structures were observed in the excitation spectrum. They occur near the Coulomb threshold and have a large reduced width for decay in the threshold channel. These states were identified as charged-cluster threshold states. Such threshold states were observed in sup(15,16,17,18)O, sup(18,19)F, sup(19,20)Ne, sup(24)Mg and sup(32)S. The types of clusters involved were d, t, 3 He, α and even 12 C. They were observed as strongly excited levels of the residual nucleus in heavy-ion transfer reactions. The charged-particle threshold states occur as simple structures at high excitation energy. They could be interesting from both the nuclear structure and the nuclear reaction mechanism points of view, and could be excited as simple structures in both the compound and the residual nucleus. (author)

  8. Quantifying uncertainty in observational rainfall datasets

    Science.gov (United States)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    The CO-ordinated Regional Downscaling Experiment (CORDEX) has to date seen the publication of at least ten journal papers that examine the African domain during 2012 and 2013. Five of these papers consider Africa generally (Nikulin et al. 2012, Kim et al. 2013, Hernandes-Dias et al. 2013, Laprise et al. 2013, Panitz et al. 2013) and five have regional foci: Tramblay et al. (2013) on Northern Africa, Mariotti et al. (2014) and Gbobaniyi et al. (2013) on West Africa, Endris et al. (2013) on East Africa and Kalagnoumou et al. (2013) on southern Africa. There are also a further three papers known to the authors to be under review. These papers all use observed rainfall and/or temperature data to evaluate/validate the regional model output and often proceed to assess projected changes in these variables due to climate change in the context of these observations. The most popular reference rainfall data used are the CRU, GPCP, GPCC, TRMM and UDEL datasets. However, as Kalagnoumou et al. (2013) point out, there are many other rainfall datasets available for consideration, for example, CMORPH, FEWS, TAMSAT & RIANNAA, TAMORA and the WATCH & WATCH-DEI data. They, with others (Nikulin et al. 2012, Sylla et al. 2012), show that the observed datasets can have a very wide spread at a particular space-time coordinate. As more ground-, space- and reanalysis-based rainfall products become available, all of which use different methods to produce precipitation data, the selection of reference data is becoming an important factor in model evaluation. A number of factors can contribute to uncertainty in the reliability and validity of the datasets, such as radiance conversion algorithms, the quantity and quality of available station data, interpolation techniques, and the blending methods used to combine satellite and gauge-based products. However, to date no comprehensive study has been performed to evaluate the uncertainty in these observational datasets. We assess 18 gridded
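
    A minimal sketch of quantifying inter-product spread at each grid cell across several gridded rainfall datasets; the random arrays merely stand in for real products such as CRU or GPCP.

        import numpy as np

        rng = np.random.default_rng(42)
        # Hypothetical stack: 5 gridded rainfall products on a common 10 x 12 grid (mm/month)
        products = rng.gamma(shape=2.0, scale=40.0, size=(5, 10, 12))

        ens_mean = products.mean(axis=0)
        ens_std = products.std(axis=0)
        cv = ens_std / ens_mean                          # coefficient of variation across products
        spread = products.max(axis=0) - products.min(axis=0)

        print("median inter-product CV:", round(float(np.median(cv)), 2))
        print("max absolute spread (mm):", round(float(spread.max()), 1))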

  9. Conceptions of nuclear threshold status

    International Nuclear Information System (INIS)

    Quester, G.H.

    1991-01-01

    This paper reviews some alternative definitions of nuclear threshold status. Each of them is important, and major analytical confusions would result if one sense of the term is mistaken for another. The motives for nations entering into such threshold status are a blend of civilian and military gains, and of national interests versus parochial or bureaucratic interests. A portion of the rationale for threshold status emerges inevitably from the pursuit of economic goals, and another portion is made more attractive by the drives of the domestic political process. Yet the impact on international security cannot be dismissed, especially where conflicts among the states remain real. Among the military or national security motives are basic deterrence, psychological warfare, war-fighting and, more generally, national prestige. In the end, as the threshold phenomenon is assayed for lessons concerning the role of nuclear weapons more generally in international relations and security, one might conclude that threshold status and outright proliferation converge to a degree in the motives for all of the states involved and in the advantages attained. As this paper has illustrated, nuclear threshold status is more subtle and more ambiguous than outright proliferation, and it takes considerable time to sort out the complexities. Yet the world has now had a substantial amount of time to deal with this ambiguous status, and this may tempt more states to exploit it.

  10. Turkey Run Landfill Emissions Dataset

    Data.gov (United States)

    U.S. Environmental Protection Agency — Landfill emissions measurements for the Turkey Run landfill in Georgia. This dataset is associated with the following publication: De la Cruz, F., R. Green, G....

  11. Topic modeling for cluster analysis of large biological and medical datasets.

    Science.gov (United States)

    Zhao, Weizhong; Zou, Wen; Chen, James J

    2014-01-01

    The big data moniker is nowhere better deserved than to describe the ever-increasing prodigiousness and complexity of biological and medical datasets. New methods are needed to generate and test hypotheses, foster biological interpretation, and build validated predictors. Although multivariate techniques such as cluster analysis may allow researchers to identify groups, or clusters, of related variables, the accuracies and effectiveness of traditional clustering methods diminish for large and hyper-dimensional datasets. Topic modeling is an active research field in machine learning and has been mainly used as an analytical tool to structure large textual corpora for data mining. Its ability to reduce high dimensionality to a small number of latent variables makes it suitable as a means for clustering or overcoming clustering difficulties in large biological and medical datasets. In this study, three topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, are proposed and tested on the cluster analysis of three large datasets: a Salmonella pulsed-field gel electrophoresis (PFGE) dataset, a lung cancer dataset, and a breast cancer dataset, which represent various types of large biological or medical datasets. All three methods are shown to improve the efficacy/effectiveness of clustering results on the three datasets in comparison to traditional methods. A preferable cluster analysis method emerged for each of the three datasets on the basis of replicating known biological truths. Topic modeling could be advantageously applied to the large datasets of biological or medical research. The three proposed topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, yield clustering improvements for the three different data types. Clusters more efficaciously represent truthful groupings and subgroupings in the data than traditional methods, suggesting
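
    A minimal sketch of the "highest probable topic assignment" clustering named above, using scikit-learn's latent Dirichlet allocation on a fabricated count matrix as a stand-in for the PFGE or expression data.

        import numpy as np
        from sklearn.decomposition import LatentDirichletAllocation

        rng = np.random.default_rng(0)
        # Hypothetical count matrix: 60 samples x 30 features (e.g. binned band counts)
        X = rng.poisson(lam=3.0, size=(60, 30))

        lda = LatentDirichletAllocation(n_components=4, random_state=0)
        doc_topic = lda.fit_transform(X)             # per-sample topic proportions

        # "Highest probable topic assignment": cluster label = most probable latent topic
        labels = doc_topic.argmax(axis=1)
        print(np.bincount(labels))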

  12. An Analysis of the GTZAN Music Genre Dataset

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2012-01-01

    Most research in automatic music genre recognition has used the dataset assembled by Tzanetakis et al. in 2001. The composition and integrity of this dataset, however, has never been formally analyzed. For the first time, we provide an analysis of its composition, and create a machine...

  13. Dataset definition for CMS operations and physics analyses

    Science.gov (United States)

    Franzoni, Giovanni; Compact Muon Solenoid Collaboration

    2016-04-01

    Data recorded at the CMS experiment are funnelled into streams, integrated in the HLT menu, and further organised in a hierarchical structure of primary datasets and secondary datasets/dedicated skims. Datasets are defined according to the final-state particles reconstructed by the high level trigger, the data format and the use case (physics analysis, alignment and calibration, performance studies). During the first LHC run, new workflows were added to this canonical scheme to best exploit the flexibility of the CMS trigger and data acquisition systems. The concepts of data parking and data scouting have been introduced to extend the physics reach of CMS, offering the opportunity of defining physics triggers with extremely loose selections (e.g. a dijet resonance trigger collecting data at a rate of 1 kHz). In this presentation, we review the evolution of the dataset definition during LHC Run I, and we discuss the plans for Run II.

  14. Dataset definition for CMS operations and physics analyses

    CERN Document Server

    AUTHOR|(CDS)2051291

    2016-01-01

    Data recorded at the CMS experiment are funnelled into streams, integrated in the HLT menu, and further organised in a hierarchical structure of primary datasets, secondary datasets, and dedicated skims. Datasets are defined according to the final-state particles reconstructed by the high level trigger, the data format and the use case (physics analysis, alignment and calibration, performance studies). During the first LHC run, new workflows were added to this canonical scheme to best exploit the flexibility of the CMS trigger and data acquisition systems. The concepts of data parking and data scouting have been introduced to extend the physics reach of CMS, offering the opportunity of defining physics triggers with extremely loose selections (e.g. a dijet resonance trigger collecting data at a rate of 1 kHz). In this presentation, we review the evolution of the dataset definition during the first run, and we discuss the plans for the second LHC run.

  15. Dataset of NRDA emission data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Emissions data from open air oil burns. This dataset is associated with the following publication: Gullett, B., J. Aurell, A. Holder, B. Mitchell, D. Greenwell, M....

  16. Medical Image Data and Datasets in the Era of Machine Learning-Whitepaper from the 2016 C-MIMI Meeting Dataset Session.

    Science.gov (United States)

    Kohli, Marc D; Summers, Ronald M; Geis, J Raymond

    2017-08-01

    At the first annual Conference on Machine Intelligence in Medical Imaging (C-MIMI), held in September 2016, a conference session on medical image data and datasets for machine learning identified multiple issues. The common theme from attendees was that everyone participating in medical image evaluation with machine learning is data-starved. There is an urgent need to find better ways to collect, annotate, and reuse medical imaging data. Unique domain issues with medical image datasets require further study, development, and dissemination of best practices and standards, and a coordinated effort among medical imaging domain experts, medical imaging informaticists, government and industry data scientists, and interested commercial, academic, and government entities. High-level attributes of reusable medical image datasets suitable to train, test, validate, verify, and regulate ML products should be better described. NIH and other government agencies should promote and, where applicable, enforce access to medical image datasets. We should improve communication among medical imaging domain experts, medical imaging informaticists, academic clinical and basic science researchers, government and industry data scientists, and interested commercial entities.

  17. Discovery and Reuse of Open Datasets: An Exploratory Study

    Directory of Open Access Journals (Sweden)

    Sara

    2016-07-01

    Full Text Available Objective: This article analyzes twenty cited or downloaded datasets and the repositories that house them, in order to produce insights that can be used by academic libraries to encourage discovery and reuse of research data in institutional repositories. Methods: Using Thomson Reuters’ Data Citation Index and repository download statistics, we identified twenty cited/downloaded datasets. We documented the characteristics of the cited/downloaded datasets and their corresponding repositories in a self-designed rubric. The rubric includes six major categories: basic information; funding agency and journal information; linking and sharing; factors to encourage reuse; repository characteristics; and data description. Results: Our small-scale study suggests that cited/downloaded datasets generally comply with basic recommendations for facilitating reuse: data are documented well; formatted for use with a variety of software; and shared in established, open access repositories. Three significant factors also appear to contribute to dataset discovery: publishing in discipline-specific repositories; indexing in more than one location on the web; and using persistent identifiers. The cited/downloaded datasets in our analysis came from a few specific disciplines, and tended to be funded by agencies with data publication mandates. Conclusions: The results of this exploratory research provide insights that can inform academic librarians as they work to encourage discovery and reuse of institutional datasets. Our analysis also suggests areas in which academic librarians can target open data advocacy in their communities in order to begin to build open data success stories that will fuel future advocacy efforts.

  18. 77 FR 15052 - Dataset Workshop-U.S. Billion Dollar Disasters Dataset (1980-2011): Assessing Dataset Strengths...

    Science.gov (United States)

    2012-03-14

    ..., located at 151 Patton Avenue, Asheville, North Carolina 28801. FOR FURTHER INFORMATION CONTACT: Adam Smith...-4183, Email: Adam.Smith@noaa.gov ) For RSVP responses, use the email address noted above ( Karen.L...

  19. Visualization of conserved structures by fusing highly variable datasets.

    Science.gov (United States)

    Silverstein, Jonathan C; Chhadia, Ankur; Dech, Fred

    2002-01-01

    Skill, effort, and time are required to identify and visualize anatomic structures in three dimensions from radiological data. Fundamentally, automating these processes requires a technique that uses symbolic information not in the dynamic range of the voxel data. We were developing such a technique based on mutual information for automatic multi-modality image fusion (MIAMI Fuse, University of Michigan). This system previously demonstrated facility at fusing one voxel dataset with integrated symbolic structure information to a CT dataset (different scale and resolution) from the same person. The next step of development of our technique was aimed at accommodating the variability of anatomy from patient to patient by using warping to fuse our standard dataset to arbitrary patient CT datasets. A standard symbolic information dataset was created from the full color Visible Human Female by segmenting the liver parenchyma, portal veins, and hepatic veins and overwriting each set of voxels with a fixed color. Two arbitrarily selected patient CT scans of the abdomen were used for reference datasets. We used the warping functions in MIAMI Fuse to align the standard structure data to each patient scan. The key to successful fusion was the focused use of multiple warping control points that place themselves around the structure of interest automatically. The user assigns only a few initial control points to align the scans. Fusions 1 and 2 transformed the atlas with 27 points around the liver to CT1 and CT2, respectively. Fusion 3 transformed the atlas with 45 control points around the liver to CT1 and Fusion 4 transformed the atlas with 5 control points around the portal vein. The CT dataset is augmented with the transformed standard structure dataset, such that the warped structure masks are visualized in combination with the original patient dataset. This combined volume visualization is then rendered interactively in stereo on the ImmersaDesk in an immersive Virtual
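
    Mutual information, the similarity measure underlying the fusion technique described above, can be estimated from a joint intensity histogram; the sketch below is a generic illustration, not the MIAMI Fuse implementation, and the test images are synthetic.

        import numpy as np

        def mutual_information(img_a, img_b, bins=32):
            """Mutual information between two equally shaped images via a joint histogram."""
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0                                  # avoid log(0)
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        rng = np.random.default_rng(0)
        fixed = rng.random((64, 64))
        moving = np.clip(fixed + 0.1 * rng.standard_normal((64, 64)), 0, 1)   # noisy copy
        shuffled = rng.permutation(moving.ravel()).reshape(64, 64)

        print("MI(fixed, moving)   =", round(mutual_information(fixed, moving), 3))
        print("MI(fixed, shuffled) =", round(mutual_information(fixed, shuffled), 3))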

  20. Threshold Concepts in Finance: Student Perspectives

    Science.gov (United States)

    Hoadley, Susan; Kyng, Tim; Tickle, Leonie; Wood, Leigh N.

    2015-01-01

    Finance threshold concepts are the essential conceptual knowledge that underpin well-developed financial capabilities and are central to the mastery of finance. In this paper we investigate threshold concepts in finance from the point of view of students, by establishing the extent to which students are aware of threshold concepts identified by…

  1. An Annotated Dataset of 14 Cardiac MR Images

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This note describes a dataset consisting of 14 annotated cardiac MR images. Points of correspondence are placed on each image at the left ventricle (LV). As such, the dataset can be readily used for building statistical models of shape. Further, format specifications and terms of use are given....

  2. Thresholding magnetic resonance images of human brain

    Institute of Scientific and Technical Information of China (English)

    Qing-mao HU; Wieslaw L NOWINSKI

    2005-01-01

    In this paper, methods are proposed and validated to determine low and high thresholds to segment out gray matter and white matter for MR images of different pulse sequences of human brain. First, a two-dimensional reference image is determined to represent the intensity characteristics of the original three-dimensional data. Then a region of interest of the reference image is determined where brain tissues are present. The non-supervised fuzzy c-means clustering is employed to determine: the threshold for obtaining head mask, the low threshold for T2-weighted and PD-weighted images, and the high threshold for T1-weighted, SPGR and FLAIR images. Supervised range-constrained thresholding is employed to determine the low threshold for T1-weighted, SPGR and FLAIR images. Thresholding based on pairs of boundary pixels is proposed to determine the high threshold for T2- and PD-weighted images. Quantification against public data sets with various noise and inhomogeneity levels shows that the proposed methods can yield segmentation robust to noise and intensity inhomogeneity. Qualitatively the proposed methods work well with real clinical data.
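
    The record above derives its thresholds from unsupervised fuzzy c-means clustering of image intensities. The sketch below is a minimal, generic two-class fuzzy c-means on 1-D intensity values that returns the midpoint of the cluster centres as a threshold; it illustrates the idea only, is not the authors' pipeline, and the sample data are synthetic:

        # Minimal two-class fuzzy c-means on 1-D intensities; the threshold is
        # taken as the midpoint of the two cluster centres. Generic sketch, not
        # the authors' pipeline; the bimodal sample below is synthetic.
        import numpy as np

        def fcm_threshold(intensities, m=2.0, n_iter=100, tol=1e-5):
            x = np.asarray(intensities, dtype=float).ravel()
            centers = np.percentile(x, [25.0, 75.0])              # deterministic start
            for _ in range(n_iter):
                d = np.abs(x[:, None] - centers[None, :]) + 1e-12  # |x - c_k|
                # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
                u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
                new_centers = (u ** m).T @ x / np.sum(u ** m, axis=0)
                converged = np.max(np.abs(new_centers - centers)) < tol
                centers = new_centers
                if converged:
                    break
            return float(centers.mean())                           # midpoint between centres

        # Synthetic bimodal intensities (e.g. background vs. tissue).
        rng = np.random.default_rng(1)
        sample = np.concatenate([rng.normal(40, 8, 5000), rng.normal(120, 15, 5000)])
        print(round(fcm_threshold(sample), 1))                     # roughly midway between the modes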

  3. Near threshold fatigue testing

    Science.gov (United States)

    Freeman, D. C.; Strum, M. J.

    1993-01-01

    Measurement of the near-threshold fatigue crack growth rate (FCGR) behavior provides a basis for the design and evaluation of components subjected to high cycle fatigue. Typically, the near-threshold fatigue regime describes crack growth rates below approximately 10^-5 mm/cycle (4 x 10^-7 inch/cycle). One such evaluation was recently performed for the binary alloy U-6Nb. The procedures developed for this evaluation are described in detail to provide a general test method for near-threshold FCGR testing. In particular, techniques for high-resolution measurements of crack length performed in-situ through a direct current, potential drop (DCPD) apparatus, and a method which eliminates crack closure effects through the use of loading cycles with constant maximum stress intensity are described.
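
    A common way to report an operational threshold from such near-threshold data is to fit the lower end of the log(da/dN) versus log(delta-K) curve and extrapolate to a low reference growth rate (1 x 10^-7 mm/cycle is a widely used convention). The sketch below illustrates that generic calculation; it is not the procedure of the report above, and the data points are invented:

        # Generic sketch (not the report's procedure): estimate an operational
        # fatigue threshold delta-K by fitting log(da/dN) vs log(delta-K) over
        # near-threshold data and extrapolating to a low reference growth rate.
        import numpy as np

        def delta_k_threshold(delta_k, dadn, ref_rate=1e-7):
            """delta_k in MPa*sqrt(m), dadn in mm/cycle; returns delta-K at ref_rate."""
            slope, intercept = np.polyfit(np.log10(delta_k), np.log10(dadn), 1)
            return 10 ** ((np.log10(ref_rate) - intercept) / slope)

        # Invented near-threshold data (growth rates below ~1e-5 mm/cycle).
        dk = np.array([3.0, 3.5, 4.0, 4.5, 5.0])
        rate = np.array([2e-7, 8e-7, 2.5e-6, 6e-6, 1.2e-5])
        print(f"estimated delta-K threshold ~ {delta_k_threshold(dk, rate):.2f} MPa*sqrt(m)")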

  4. Using natural range of variation to set decision thresholds: a case study for great plains grasslands

    Science.gov (United States)

    Symstad, Amy J.; Jonas, Jayne L.; Edited by Guntenspergen, Glenn R.

    2014-01-01

    Natural range of variation (NRV) may be used to establish decision thresholds or action assessment points when ecological thresholds are either unknown or do not exist for attributes of interest in a managed ecosystem. The process for estimating NRV involves identifying spatial and temporal scales that adequately capture the heterogeneity of the ecosystem; compiling data for the attributes of interest via study of historic records, analysis and interpretation of proxy records, modeling, space-for-time substitutions, or analysis of long-term monitoring data; and quantifying the NRV from those data. At least 19 National Park Service (NPS) units in North America’s Great Plains are monitoring plant species richness and evenness as indicators of vegetation integrity in native grasslands, but little information on natural, temporal variability of these indicators is available. In this case study, we use six long-term vegetation monitoring datasets to quantify the temporal variability of these attributes in reference conditions for a variety of Great Plains grassland types, and then illustrate the implications of using different NRVs based on these quantities for setting management decision thresholds. Temporal variability of richness (as measured by the coefficient of variation, CV) is fairly consistent across the wide variety of conditions occurring in Colorado shortgrass prairie to Minnesota tallgrass sand savanna (CV 0.20–0.45) and generally less than that of production at the same sites. Temporal variability of evenness spans a greater range of CV than richness, and it is greater than that of production in some sites but less in other sites. This natural temporal variability may mask undesirable changes in Great Plains grasslands vegetation. Consequently, we suggest that managers consider using a relatively narrow NRV (interquartile range of all richness or evenness values observed in reference conditions) for designating a surveillance threshold, at which
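
    As a minimal illustration of the quantities discussed above, the sketch below computes the temporal coefficient of variation of a monitored attribute and an interquartile-range band of reference-condition values that could serve as a surveillance threshold. The yearly richness values are invented, not NPS monitoring data:

        # Sketch of the NRV quantities above: temporal coefficient of variation
        # (CV) and an interquartile-range (IQR) surveillance band computed from
        # reference-condition monitoring values. Data below are invented.
        import numpy as np

        richness = np.array([18, 22, 20, 25, 19, 23, 21, 24, 17, 22], dtype=float)

        cv = richness.std(ddof=1) / richness.mean()       # temporal variability
        lower, upper = np.percentile(richness, [25, 75])  # narrow NRV (IQR)

        print(f"CV of species richness: {cv:.2f}")
        print(f"surveillance band (IQR): {lower:.1f} - {upper:.1f} species")
        # A new monitoring value falling outside the band would trigger closer scrutiny.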

  5. Dataset - Adviesregel PPL 2010

    NARCIS (Netherlands)

    Evert, van F.K.; Schans, van der D.A.; Geel, van W.C.A.; Slabbekoorn, J.J.; Booij, R.; Jukema, J.N.; Meurs, E.J.J.; Uenk, D.

    2011-01-01

    This dataset contains experimental data from a number of field experiments with potato in The Netherlands (Van Evert et al., 2011). The data are presented as an SQL dump of a PostgreSQL database (version 8.4.4). An outline of the entity-relationship diagram of the database is given in an

  6. Tension in the recent Type Ia supernovae datasets

    International Nuclear Information System (INIS)

    Wei, Hao

    2010-01-01

    In the present work, we investigate the tension in the recent Type Ia supernovae (SNIa) datasets Constitution and Union. We show that they are in tension not only with the observations of the cosmic microwave background (CMB) anisotropy and the baryon acoustic oscillations (BAO), but also with other SNIa datasets such as Davis and SNLS. Then, we find the main sources responsible for the tension. Further, we make this more robust by employing the method of random truncation. Based on the results of this work, we suggest two truncated versions of the Union and Constitution datasets, namely the UnionT and ConstitutionT SNIa samples, whose behaviors are more regular.

  7. Viability of Controlling Prosthetic Hand Utilizing Electroencephalograph (EEG) Dataset Signal

    Science.gov (United States)

    Miskon, Azizi; A/L Thanakodi, Suresh; Raihan Mazlan, Mohd; Mohd Haziq Azhar, Satria; Nooraya Mohd Tawil, Siti

    2016-11-01

    This project presents the development of an artificial hand controlled by Electroencephalograph (EEG) signal datasets for prosthetic applications. The EEG signal datasets were used to improve the way the prosthetic hand is controlled compared with the Electromyograph (EMG). EMG has disadvantages for a person who has not used the muscle for a long time, and also for a person with degenerative issues due to age; the EEG datasets were therefore found to be an alternative to EMG. The datasets used in this work were taken from a Brain Computer Interface (BCI) project and were already classified for open, close and combined movement operations. They served as the input to control the prosthetic hand through an interface between Microsoft Visual Studio and Arduino. The results show the prosthetic hand to be more efficient and faster in responding to the EEG datasets with an additional LiPo (Lithium Polymer) battery attached to the prosthetic. Some limitations were also identified in terms of the hand movements and the weight of the prosthetic, and suggestions for improvement are given. Overall, the objective of this paper was achieved, as the prosthetic hand was found to be feasible in operation using the EEG datasets.

  8. Technical note: An inorganic water chemistry dataset (1972–2011 ...

    African Journals Online (AJOL)

    A national dataset of inorganic chemical data of surface waters (rivers, lakes, and dams) in South Africa is presented and made freely available. The dataset comprises more than 500 000 complete water analyses from 1972 up to 2011, collected from more than 2 000 sample monitoring stations in South Africa. The dataset ...

  9. At-Risk-of-Poverty Threshold

    Directory of Open Access Journals (Sweden)

    Táňa Dvornáková

    2012-06-01

    Full Text Available European Statistics on Income and Living Conditions (EU-SILC) is a survey on households’ living conditions. The main aim of the survey is to get long-term comparable data on the social and economic situation of households. Data collected in the survey are used mainly in connection with the evaluation of income poverty and the determination of the at-risk-of-poverty rate. This article deals with the calculation of the at-risk-of-poverty threshold based on data from EU-SILC 2009. The main task is to compare two approaches to the computation of the at-risk-of-poverty threshold. The first approach is based on the calculation of the threshold for each country separately, while the second one is based on the calculation of the threshold for all states together. The introduction summarizes common attributes in the calculation of the at-risk-of-poverty threshold, such as disposable household income and equivalised household income. Further, different approaches to both calculations are introduced and advantages and disadvantages of these approaches are stated. Finally, the at-risk-of-poverty rate calculation is described and a comparison of the at-risk-of-poverty rates based on these two different approaches is made.
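
    Assuming the usual EU-SILC convention of setting the threshold at 60% of the median equivalised disposable income, the sketch below contrasts the two approaches compared in the article: per-country thresholds versus one threshold for all states pooled. The income figures are invented illustrative values, not EU-SILC 2009 microdata:

        # Sketch of the two approaches compared above, assuming the usual
        # convention of 60% of the median equivalised disposable income.
        # Incomes below are invented, not EU-SILC 2009 microdata.
        import numpy as np

        incomes = {  # equivalised disposable income per person
            "country_A": np.array([8_000, 12_000, 15_000, 18_000, 25_000]),
            "country_B": np.array([15_000, 21_000, 27_000, 33_000, 48_000]),
        }

        # Approach 1: a separate threshold for each country.
        per_country = {c: 0.6 * np.median(x) for c, x in incomes.items()}

        # Approach 2: one threshold computed from all states pooled together.
        pooled = 0.6 * np.median(np.concatenate(list(incomes.values())))

        for country, threshold in per_country.items():
            rate = np.mean(incomes[country] < threshold)
            print(f"{country}: national threshold {threshold:,.0f}, poverty rate {rate:.0%}")
        print(f"pooled threshold for all states: {pooled:,.0f}")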

  10. Wind and wave dataset for Matara, Sri Lanka

    Science.gov (United States)

    Luo, Yao; Wang, Dongxiao; Priyadarshana Gamage, Tilak; Zhou, Fenghua; Madusanka Widanage, Charith; Liu, Taiwei

    2018-01-01

    We present a continuous in situ hydro-meteorology observational dataset from a set of instruments first deployed in December 2012 in the south of Sri Lanka, facing toward the north Indian Ocean. In these waters, simultaneous records of wind and wave data are sparse due to difficulties in deploying measurement instruments, although the area hosts one of the busiest shipping lanes in the world. This study describes the survey, deployment, and measurements of wind and waves, with the aim of offering future users of the dataset the most comprehensive and as much information as possible. This dataset advances our understanding of the nearshore hydrodynamic processes and wave climate, including sea waves and swells, in the north Indian Ocean. Moreover, it is a valuable resource for ocean model parameterization and validation. The archived dataset (Table 1) is examined in detail, including wave data at two locations with water depths of 20 and 10 m comprising synchronous time series of wind, ocean astronomical tide, air pressure, etc. In addition, we use these wave observations to evaluate the ERA-Interim reanalysis product. Based on Buoy 2 data, the swells are the main component of waves year-round, although monsoons can markedly alter the proportion between swell and wind sea. The dataset (Luo et al., 2017) is publicly available from Science Data Bank (https://doi.org/10.11922/sciencedb.447).

  11. Wind and wave dataset for Matara, Sri Lanka

    Directory of Open Access Journals (Sweden)

    Y. Luo

    2018-01-01

    Full Text Available We present a continuous in situ hydro-meteorology observational dataset from a set of instruments first deployed in December 2012 in the south of Sri Lanka, facing toward the north Indian Ocean. In these waters, simultaneous records of wind and wave data are sparse due to difficulties in deploying measurement instruments, although the area hosts one of the busiest shipping lanes in the world. This study describes the survey, deployment, and measurements of wind and waves, with the aim of offering future users of the dataset the most comprehensive and as much information as possible. This dataset advances our understanding of the nearshore hydrodynamic processes and wave climate, including sea waves and swells, in the north Indian Ocean. Moreover, it is a valuable resource for ocean model parameterization and validation. The archived dataset (Table 1) is examined in detail, including wave data at two locations with water depths of 20 and 10 m comprising synchronous time series of wind, ocean astronomical tide, air pressure, etc. In addition, we use these wave observations to evaluate the ERA-Interim reanalysis product. Based on Buoy 2 data, the swells are the main component of waves year-round, although monsoons can markedly alter the proportion between swell and wind sea. The dataset (Luo et al., 2017) is publicly available from Science Data Bank (https://doi.org/10.11922/sciencedb.447).

  12. Heuristics for Relevancy Ranking of Earth Dataset Search Results

    Science.gov (United States)

    Lynnes, Christopher; Quinn, Patrick; Norton, James

    2016-01-01

    As the variety of Earth science datasets increases, science researchers find it more challenging to discover and select the datasets that best fit their needs. The most common way for search providers to address this problem is to rank the datasets returned for a query by their likely relevance to the user. Large web page search engines typically use text matching supplemented with reverse link counts, semantic annotations and user intent modeling. However, this produces uneven results when applied to dataset metadata records simply externalized as a web page. Fortunately, data and search providers have decades of experience in serving data user communities, allowing them to form heuristics that leverage the structure in the metadata together with knowledge about the user community. Some of these heuristics include specific ways of matching the user input to the essential measurements in the dataset and determining overlaps of time range and spatial areas. Heuristics based on the novelty of the datasets can prioritize later, better versions of data over similar predecessors. And knowledge of how different user types and communities use data can be brought to bear in cases where characteristics of the user (discipline, expertise) or their intent (applications, research) can be divined. The Earth Observing System Data and Information System has begun implementing some of these heuristics in the relevancy algorithm of its Common Metadata Repository search engine.
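
    The sketch below illustrates the kind of heuristics described above by combining a keyword match with temporal-overlap and spatial-overlap scores into a single relevance value. The metadata fields, weights, and example records are invented for illustration and are not the Common Metadata Repository implementation:

        # Toy relevance heuristic: keyword match plus overlap of time range and
        # spatial bounding box. Fields, weights, and records are invented.
        def overlap_fraction(lo1, hi1, lo2, hi2):
            """Fraction of the query interval [lo2, hi2] covered by [lo1, hi1]."""
            covered = max(0.0, min(hi1, hi2) - max(lo1, lo2))
            return covered / (hi2 - lo2) if hi2 > lo2 else 0.0

        def relevance(dataset, query, weights=(0.5, 0.3, 0.2)):
            keyword = sum(t in dataset["summary"].lower() for t in query["terms"]) / len(query["terms"])
            temporal = overlap_fraction(*dataset["years"], *query["years"])
            (dlat, dlon), (qlat, qlon) = dataset["bbox"], query["bbox"]
            spatial = overlap_fraction(*dlat, *qlat) * overlap_fraction(*dlon, *qlon)
            w_k, w_t, w_s = weights
            return w_k * keyword + w_t * temporal + w_s * spatial

        query = {"terms": ["sea", "surface", "temperature"],
                 "years": (2000, 2010), "bbox": ((-10, 10), (100, 140))}
        dataset = {"summary": "Global sea surface temperature analysis",
                   "years": (1981, 2020), "bbox": ((-90, 90), (-180, 180))}
        print(f"relevance score: {relevance(dataset, query):.2f}")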

  13. QSAR ligand dataset for modelling mutagenicity, genotoxicity, and rodent carcinogenicity

    Directory of Open Access Journals (Sweden)

    Davy Guan

    2018-04-01

    Full Text Available Five datasets were constructed from ligand and bioassay result data from the literature. These datasets include bioassay results from the Ames mutagenicity assay, Greenscreen GADD-45a-GFP assay, Syrian Hamster Embryo (SHE) assay, and 2-year rat carcinogenicity assay results. These datasets provide information about chemical mutagenicity, genotoxicity and carcinogenicity.

  14. Strength Training Prior to Endurance Exercise: Impact on the Neuromuscular System, Endurance Performance and Cardiorespiratory Responses

    Directory of Open Access Journals (Sweden)

    Conceição Matheus

    2014-12-01

    Full Text Available This study aimed to investigate the acute effects of two strength-training protocols on the neuromuscular and cardiorespiratory responses during endurance exercise. Thirteen young males (23.2 ± 1.6 years old) participated in this study. The hypertrophic strength-training protocol was composed of 6 sets of 8 squats at 75% of maximal dynamic strength. The plyometric strength-training protocol was composed of 6 sets of 8 jumps performed with the body weight as the workload. Endurance exercise was performed on a cycle ergometer at a power corresponding to the second ventilatory threshold until exhaustion. Before and after each protocol, a maximal voluntary contraction was performed, and the rate of force development and electromyographic parameters were assessed. After the hypertrophic strength-training and plyometric strength-training protocols, significant decreases were observed in the maximal voluntary contraction and rate of force development, whereas no changes were observed in the electromyographic parameters. Oxygen uptake and heart rate during endurance exercise were not significantly different among the protocols. However, the time-to-exhaustion was significantly higher during endurance exercise alone than when performed after hypertrophic strength-training or plyometric strength-training (p < 0.05). These results suggest that endurance performance may be impaired when preceded by strength-training, with no oxygen uptake or heart rate changes during the exercise.

  15. Effects of densified silica fume on microstructure and compressive strength of blended cement pastes

    International Nuclear Information System (INIS)

    Ji Yajun; Cahyadi, Jong Herman

    2003-01-01

    Some experimental investigations on the microstructure and compressive strength development of silica fume blended cement pastes are presented in this paper. The silica fume replacement varies from 0% to 20% by weight and the water/binder ratio (w/b) is 0.4. The pore structure by mercury intrusion porosimetry (MIP), the micromorphology by scanning electron microscopy (SEM) and the compressive strength at 3, 7, 14, 28, 56 and 90 days have been studied. The test results indicate that the improvements on both microstructure and mechanical properties of hardened cement pastes by silica fume replacement are not effective due to the agglomeration of silica fume particles. The unreacted silica fume remained in cement pastes, the threshold diameter was not reduced and the increase in compressive strength was insignificant up to 28 days. It is suggested that the proper measures should be taken to disperse silica fume agglomeration to make it more effective on improving the properties of materials

  16. [Correlations Between Joint Proprioception, Muscle Strength, and Functional Ability in Patients with Knee Osteoarthritis].

    Science.gov (United States)

    Chen, Yoa; Yu, Yong; He, Cheng-qi

    2015-11-01

    To establish correlations between joint proprioception, muscle flexion and extension peak torque, and functional ability in patients with knee osteoarthritis (OA). Fifty-six patients with symptomatic knee OA were recruited in this study. Both proprioceptive acuity and muscle strength were measured using the isomed-2000 isokinetic dynamometer. Proprioceptive acuity was evaluated by establishing the joint motion detection threshold (JMDT). Muscle strength was evaluated by maximal torque (Nm) and maximal torque/weight (Nm/kg). Functional ability was assessed by the Western Ontario and McMaster Universities Osteoarthritis Index physical function (WOMAC-PF) questionnaire. Correlational analyses were performed between proprioception, muscle strength, and functional ability. A multiple stepwise regression model was established, with WOMAC-PF as the dependent variable and patient age, body mass index (BMI), visual analogue scale (VAS) score, mean Kellgren-Lawrence grade of both knees, mean strength of the quadriceps and hamstring muscles of both knees, and mean JMDT of both knees as independent variables. Poor proprioception (high JMDT) was negatively correlated with muscle strength. High JMDT (regression coefficient (B) = 0.385, P < 0.05) and high VAS score (B = 0.347, P < 0.05) were significant predictors of the WOMAC-PF score. Poor proprioception is associated with poor muscle strength and limitations in functional ability. Patients with symptomatic knee OA commonly endure moderate to considerable dysfunction, which is associated with poor proprioception (high JMDT) and high VAS scores.

  17. Summary of DOE threshold limits efforts

    International Nuclear Information System (INIS)

    Wickham, L.E.; Smith, C.F.; Cohen, J.J.

    1987-01-01

    The Department of Energy (DOE) has been developing the concept of threshold quantities for use in determining which waste materials may be disposed of as nonradioactive waste in DOE sanitary landfills. Waste above a threshold level could be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. After extensive review of a draft threshold guidance document in 1985, a second draft threshold background document was produced in March 1986. The second draft included a preliminary cost-benefit analysis and quality assurance considerations. The review of the second draft has been completed. Final changes to be incorporated include an in-depth cost-benefit analysis of two example sites and recommendations of how to further pursue (i.e. employ) the concept of threshold quantities within the DOE. 3 references

  18. The Dataset of Countries at Risk of Electoral Violence

    OpenAIRE

    Birch, Sarah; Muchlinski, David

    2017-01-01

    Electoral violence is increasingly affecting elections around the world, yet researchers have been limited by a paucity of granular data on this phenomenon. This paper introduces and describes a new dataset of electoral violence – the Dataset of Countries at Risk of Electoral Violence (CREV) – that provides measures of 10 different types of electoral violence across 642 elections held around the globe between 1995 and 2013. The paper provides a detailed account of how and why the dataset was ...

  19. Towards interoperable and reproducible QSAR analyses: Exchange of datasets.

    Science.gov (United States)

    Spjuth, Ola; Willighagen, Egon L; Guha, Rajarshi; Eklund, Martin; Wikberg, Jarl Es

    2010-06-30

    QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constrain collaborations and re-use of data. We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. This makes it easy to join, extend, and combine datasets and hence work collectively, but

  20. Towards interoperable and reproducible QSAR analyses: Exchange of datasets

    Directory of Open Access Journals (Sweden)

    Spjuth Ola

    2010-06-01

    Full Text Available Abstract Background QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constrain collaborations and re-use of data. Results We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Conclusions Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. This makes it easy to join

  1. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    Science.gov (United States)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  2. Toward computational cumulative biology by combining models of biological datasets.

    Science.gov (United States)

    Faisal, Ali; Peltonen, Jaakko; Georgii, Elisabeth; Rung, Johan; Kaski, Samuel

    2014-01-01

    A main challenge of data-driven sciences is how to make maximal use of the progressively expanding databases of experimental datasets in order to keep research cumulative. We introduce the idea of a modeling-based dataset retrieval engine designed for relating a researcher's experimental dataset to earlier work in the field. The search is (i) data-driven to enable new findings, going beyond the state of the art of keyword searches in annotations, (ii) modeling-driven, to include both biological knowledge and insights learned from data, and (iii) scalable, as it is accomplished without building one unified grand model of all data. Assuming each dataset has been modeled beforehand, by the researchers or automatically by database managers, we apply a rapidly computable and optimizable combination model to decompose a new dataset into contributions from earlier relevant models. By using the data-driven decomposition, we identify a network of interrelated datasets from a large annotated human gene expression atlas. While tissue type and disease were major driving forces for determining relevant datasets, the found relationships were richer, and the model-based search was more accurate than the keyword search; moreover, it recovered biologically meaningful relationships that are not straightforwardly visible from annotations-for instance, between cells in different developmental stages such as thymocytes and T-cells. Data-driven links and citations matched to a large extent; the data-driven links even uncovered corrections to the publication data, as two of the most linked datasets were not highly cited and turned out to have wrong publication entries in the database.

  3. GUIDEseq: a bioconductor package to analyze GUIDE-Seq datasets for CRISPR-Cas nucleases.

    Science.gov (United States)

    Zhu, Lihua Julie; Lawrence, Michael; Gupta, Ankit; Pagès, Hervé; Kucukural, Alper; Garber, Manuel; Wolfe, Scot A

    2017-05-15

    Genome editing technologies developed around the CRISPR-Cas9 nuclease system have facilitated the investigation of a broad range of biological questions. These nucleases also hold tremendous promise for treating a variety of genetic disorders. In the context of their therapeutic application, it is important to identify the spectrum of genomic sequences that are cleaved by a candidate nuclease when programmed with a particular guide RNA, as well as the cleavage efficiency of these sites. Powerful new experimental approaches, such as GUIDE-seq, facilitate the sensitive, unbiased genome-wide detection of nuclease cleavage sites within the genome. Flexible bioinformatics analysis tools for processing GUIDE-seq data are needed. Here, we describe an open source, open development software suite, GUIDEseq, for GUIDE-seq data analysis and annotation as a Bioconductor package in R. The GUIDEseq package provides a flexible platform with more than 60 adjustable parameters for the analysis of datasets associated with custom nuclease applications. These parameters allow data analysis to be tailored to different nuclease platforms with different length and complexity in their guide and PAM recognition sequences or their DNA cleavage position. They also enable users to customize sequence aggregation criteria, and vary peak calling thresholds that can influence the number of potential off-target sites recovered. GUIDEseq also annotates potential off-target sites that overlap with genes based on genome annotation information, as these may be the most important off-target sites for further characterization. In addition, GUIDEseq enables the comparison and visualization of off-target site overlap between different datasets for a rapid comparison of different nuclease configurations or experimental conditions. For each identified off-target, the GUIDEseq package outputs mapped GUIDE-Seq read count as well as cleavage score from a user specified off-target cleavage score prediction

  4. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

    Full Text Available The Scanning Electron Microscope (SEM) as a 2D imaging instrument has been widely used in many scientific disciplines including biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide anatomic shape of micro-samples which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. Keywords: 3D microscopy dataset, 3D microscopy vision, 3D SEM surface reconstruction, Scanning Electron Microscope (SEM)

  5. Active Semisupervised Clustering Algorithm with Label Propagation for Imbalanced and Multidensity Datasets

    Directory of Open Access Journals (Sweden)

    Mingwei Leng

    2013-01-01

    Full Text Available The accuracy of most of the existing semisupervised clustering algorithms based on small size of labeled dataset is low when dealing with multidensity and imbalanced datasets, and labeling data is quite expensive and time consuming in many real-world applications. This paper focuses on active data selection and semisupervised clustering algorithm in multidensity and imbalanced datasets and proposes an active semisupervised clustering algorithm. The proposed algorithm uses an active mechanism for data selection to minimize the amount of labeled data, and it utilizes multithreshold to expand labeled datasets on multidensity and imbalanced datasets. Three standard datasets and one synthetic dataset are used to demonstrate the proposed algorithm, and the experimental results show that the proposed semisupervised clustering algorithm has a higher accuracy and a more stable performance in comparison to other clustering and semisupervised clustering algorithms, especially when the datasets are multidensity and imbalanced.

  6. A reanalysis dataset of the South China Sea

    Science.gov (United States)

    Zeng, Xuezhi; Peng, Shiqiu; Li, Zhijin; Qi, Yiquan; Chen, Rongyu

    2014-01-01

    Ocean reanalysis provides a temporally continuous and spatially gridded four-dimensional estimate of the ocean state for a better understanding of the ocean dynamics and its spatial/temporal variability. Here we present a 19-year (1992–2010) high-resolution ocean reanalysis dataset of the upper ocean in the South China Sea (SCS) produced from an ocean data assimilation system. A wide variety of observations, including in-situ temperature/salinity profiles, ship-measured and satellite-derived sea surface temperatures, and sea surface height anomalies from satellite altimetry, are assimilated into the outputs of an ocean general circulation model using a multi-scale incremental three-dimensional variational data assimilation scheme, yielding a daily high-resolution reanalysis dataset of the SCS. Comparisons between the reanalysis and independent observations support the reliability of the dataset. The presented dataset provides the research community of the SCS an important data source for studying the thermodynamic processes of the ocean circulation and meso-scale features in the SCS, including their spatial and temporal variability. PMID:25977803

  7. A dataset of forest biomass structure for Eurasia.

    Science.gov (United States)

    Schepaschenko, Dmitry; Shvidenko, Anatoly; Usoltsev, Vladimir; Lakyda, Petro; Luo, Yunjian; Vasylyshyn, Roman; Lakyda, Ivan; Myklush, Yuriy; See, Linda; McCallum, Ian; Fritz, Steffen; Kraxner, Florian; Obersteiner, Michael

    2017-05-16

    The most comprehensive dataset of in situ destructive sampling measurements of forest biomass in Eurasia has been compiled from a combination of experiments undertaken by the authors and from scientific publications. Biomass is reported as four components: live trees (stem, bark, branches, foliage, roots); understory (above- and below ground); green forest floor (above- and below ground); and coarse woody debris (snags, logs, dead branches of living trees and dead roots), consisting of 10,351 unique records of sample plots and 9,613 sample trees from ca 1,200 experiments for the period 1930-2014 where there is overlap between these two datasets. The dataset also contains other forest stand parameters such as tree species composition, average age, tree height, growing stock volume, etc., when available. Such a dataset can be used for the development of models of biomass structure, biomass extension factors, change detection in biomass structure, investigations into biodiversity and species distribution and the biodiversity-productivity relationship, as well as the assessment of the carbon pool and its dynamics, among many others.

  8. A Dataset for Visual Navigation with Neuromorphic Methods

    Directory of Open Access Journals (Sweden)

    Francisco eBarranco

    2016-02-01

    Full Text Available Standardized benchmarks in Computer Vision have greatly contributed to the advance of approaches to many problems in the field. If we want to enhance the visibility of event-driven vision and increase its impact, we will need benchmarks that allow comparison among different neuromorphic methods as well as comparison to conventional Computer Vision approaches. We present datasets to evaluate the accuracy of frame-free and frame-based approaches for tasks of visual navigation. Similar to conventional Computer Vision datasets, we provide synthetic and real scenes, with the synthetic data created with graphics packages, and the real data recorded using a mobile robotic platform carrying a dynamic and active pixel vision sensor (DAVIS) and an RGB+Depth sensor. For both datasets the cameras move with a rigid motion in a static scene, and the data includes the images, events, optic flow, 3D camera motion, and the depth of the scene, along with calibration procedures. Finally, we also provide simulated event data generated synthetically from well-known frame-based optical flow datasets.

  9. Sparse Group Penalized Integrative Analysis of Multiple Cancer Prognosis Datasets

    Science.gov (United States)

    Liu, Jin; Huang, Jian; Xie, Yang; Ma, Shuangge

    2014-01-01

    SUMMARY In cancer research, high-throughput profiling studies have been extensively conducted, searching for markers associated with prognosis. Because of the “large d, small n” characteristic, results generated from the analysis of a single dataset can be unsatisfactory. Recent studies have shown that integrative analysis, which simultaneously analyzes multiple datasets, can be more effective than single-dataset analysis and classic meta-analysis. In most of existing integrative analysis, the homogeneity model has been assumed, which postulates that different datasets share the same set of markers. Several approaches have been designed to reinforce this assumption. In practice, different datasets may differ in terms of patient selection criteria, profiling techniques, and many other aspects. Such differences may make the homogeneity model too restricted. In this study, we assume the heterogeneity model, under which different datasets are allowed to have different sets of markers. With multiple cancer prognosis datasets, we adopt the AFT (accelerated failure time) model to describe survival. This model may have the lowest computational cost among popular semiparametric survival models. For marker selection, we adopt a sparse group MCP (minimax concave penalty) approach. This approach has an intuitive formulation and can be computed using an effective group coordinate descent algorithm. Simulation study shows that it outperforms the existing approaches under both the homogeneity and heterogeneity models. Data analysis further demonstrates the merit of heterogeneity model and proposed approach. PMID:23938111

  10. Parton distributions with threshold resummation

    CERN Document Server

    Bonvini, Marco; Rojo, Juan; Rottoli, Luca; Ubiali, Maria; Ball, Richard D.; Bertone, Valerio; Carrazza, Stefano; Hartland, Nathan P.

    2015-01-01

    We construct a set of parton distribution functions (PDFs) in which fixed-order NLO and NNLO calculations are supplemented with soft-gluon (threshold) resummation up to NLL and NNLL accuracy respectively, suitable for use in conjunction with any QCD calculation in which threshold resummation is included at the level of partonic cross sections. These resummed PDF sets, based on the NNPDF3.0 analysis, are extracted from deep-inelastic scattering, Drell-Yan, and top quark pair production data, for which resummed calculations can be consistently used. We find that, close to threshold, the inclusion of resummed PDFs can partially compensate the enhancement in resummed matrix elements, leading to resummed hadronic cross-sections closer to the fixed-order calculation. On the other hand, far from threshold, resummed PDFs reduce to their fixed-order counterparts. Our results demonstrate the need for a consistent use of resummed PDFs in resummed calculations.

  11. Effect of threshold quantization in opportunistic splitting algorithm

    KAUST Repository

    Nam, Haewoon

    2011-12-01

    This paper discusses algorithms to find the optimal threshold and also investigates the impact of threshold quantization on the scheduling outage performance of the opportunistic splitting scheduling algorithm. Since this algorithm aims at finding the user with the highest channel quality within the minimal number of mini-slots by adjusting the threshold every mini-slot, optimizing the threshold is of paramount importance. Hence, in this paper we first discuss how to compute the optimal threshold along with two tight approximations for the optimal threshold. Closed-form expressions are provided for those approximations for simple calculations. Then, we consider linear quantization of the threshold to take the limited number of bits for signaling messages in practical systems into consideration. Due to the limited granularity for the quantized threshold value, an irreducible scheduling outage floor is observed. The numerical results show that the two approximations offer lower scheduling outage probability floors compared to the conventional algorithm when the threshold is quantized. © 2006 IEEE.

  12. Point of no return: experimental determination of the lethal hydraulic threshold during drought for loblolly pine (Pinus taeda)

    Science.gov (United States)

    Hammond, W.; Yu, K.; Wilson, L. A.; Will, R.; Anderegg, W.; Adams, H. D.

    2017-12-01

    The strength of the terrestrial carbon sink—dominated by forests—remains one of the greatest uncertainties in climate change modelling. How forests will respond to increased variability in temperature and precipitation is poorly understood, and experimental study to better inform global vegetation models in this area is needed. Necessary for achieving this goal is an understanding of how increased temperatures and drought will affect landscape level distributions of plant species. Quantifying physiological thresholds representing a point of no return from drought stress, including thresholds in hydraulic function, is critical to this end. Recent theoretical, observational, and modelling research has converged upon a threshold of 60 percent loss of hydraulic conductivity at mortality (PLClethal). However, direct experimental determination of lethal points in conductivity and cavitation during drought is lacking. We quantified thresholds in hydraulic function in Loblolly pine, Pinus taeda, a commercially important timber species. In a greenhouse experiment, we exposed saplings (n = 96 total) to drought and rewatered treatment groups at variable levels of increasing water stress determined by pre-selected targets in pre-dawn water potential. Treatments also included a watered control with no drought, and drought with no rewatering. We measured physiological responses to water stress, including hydraulic conductivity, native PLC, water potential, foliar color, canopy die-back, and dark-adapted chlorophyll fluorescence. Following the rewatering treatment, we observed saplings for at least two months to determine which survived and which died. Using these data we calculated lethal physiological thresholds in water potential, directly measured PLC, and PLC inferred from water potential using a hydraulic vulnerability curve. We found that PLClethal inferred from water potential agreed with the 60% threshold suggested by previous research. However, directly
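
    Percent loss of conductivity is conventionally computed as PLC = 100 x (1 - k/k_max), and PLC can also be inferred from xylem water potential through a sigmoidal vulnerability curve. The sketch below illustrates both calculations and a check against a 60% lethal threshold; the curve parameters and water potentials are generic placeholders, not the study's fitted values:

        # Sketch of the hydraulic quantities discussed above. The vulnerability
        # curve form and its parameters are illustrative, not the study's fit.
        import numpy as np

        def plc_from_conductivity(k_native, k_max):
            return 100.0 * (1.0 - k_native / k_max)

        def plc_from_water_potential(psi, psi_50=-2.5, slope=40.0):
            """Sigmoidal vulnerability curve: psi and psi_50 in MPa, slope in %/MPa at psi_50."""
            return 100.0 / (1.0 + np.exp(slope / 25.0 * (psi - psi_50)))

        PLC_LETHAL = 60.0   # threshold suggested by earlier work and tested above

        psi_predawn = np.array([-1.0, -2.0, -3.0, -4.0])   # MPa (invented values)
        inferred_plc = plc_from_water_potential(psi_predawn)
        for psi, plc in zip(psi_predawn, inferred_plc):
            status = "past lethal threshold" if plc >= PLC_LETHAL else "recoverable"
            print(f"psi = {psi:4.1f} MPa -> PLC ~ {plc:5.1f} %  ({status})")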

  13. Log canonical thresholds of smooth Fano threefolds

    International Nuclear Information System (INIS)

    Cheltsov, Ivan A; Shramov, Konstantin A

    2008-01-01

    The complex singularity exponent is a local invariant of a holomorphic function determined by the integrability of fractional powers of the function. The log canonical thresholds of effective Q-divisors on normal algebraic varieties are algebraic counterparts of complex singularity exponents. For a Fano variety, these invariants have global analogues. In the former case, it is the so-called α-invariant of Tian; in the latter case, it is the global log canonical threshold of the Fano variety, which is the infimum of log canonical thresholds of all effective Q-divisors numerically equivalent to the anticanonical divisor. An appendix to this paper contains a proof that the global log canonical threshold of a smooth Fano variety coincides with its α-invariant of Tian. The purpose of the paper is to compute the global log canonical thresholds of smooth Fano threefolds (altogether, there are 105 deformation families of such threefolds). The global log canonical thresholds are computed for every smooth threefold in 64 deformation families, and the global log canonical thresholds are computed for a general threefold in 20 deformation families. Some bounds for the global log canonical thresholds are computed for 14 deformation families. Appendix A is due to J.-P. Demailly.

  14. Parameterization of disorder predictors for large-scale applications requiring high specificity by using an extended benchmark dataset

    Directory of Open Access Journals (Sweden)

    Eisenhaber Frank

    2010-02-01

    Full Text Available Abstract Background Algorithms designed to predict protein disorder play an important role in structural and functional genomics, as disordered regions have been reported to participate in important cellular processes. Consequently, several methods with different underlying principles for disorder prediction have been independently developed by various groups. For assessing their usability in automated workflows, we are interested in identifying parameter settings and threshold selections, under which the performance of these predictors becomes directly comparable. Results First, we derived a new benchmark set that accounts for different flavours of disorder complemented with a similar amount of order annotation derived for the same protein set. We show that, using the recommended default parameters, the programs tested are producing a wide range of predictions at different levels of specificity and sensitivity. We identify settings, in which the different predictors have the same false positive rate. We assess conditions when sets of predictors can be run together to derive consensus or complementary predictions. This is useful in the framework of proteome-wide applications where high specificity is required such as in our in-house sequence analysis pipeline and the ANNIE webserver. Conclusions This work identifies parameter settings and thresholds for a selection of disorder predictors to produce comparable results at a desired level of specificity over a newly derived benchmark dataset that accounts equally for ordered and disordered regions of different lengths.
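
    A simple way to realize the equal-false-positive-rate comparison described above is to pick, for each predictor, the score threshold whose false positive rate stays within a common budget, for example by reading it off an ROC curve. The sketch below does this with scikit-learn on synthetic scores; the two "predictors" are invented and are not the benchmarked disorder predictors:

        # Tune per-predictor thresholds to a common false positive rate (FPR).
        # Scores below are synthetic, not output of any real disorder predictor.
        import numpy as np
        from sklearn.metrics import roc_curve

        def threshold_at_fpr(y_true, scores, target_fpr=0.05):
            """Smallest score threshold whose FPR does not exceed target_fpr."""
            fpr, _, thresholds = roc_curve(y_true, scores)
            ok = fpr <= target_fpr
            return thresholds[ok][-1]   # last threshold still within the FPR budget

        rng = np.random.default_rng(0)
        y = np.concatenate([np.zeros(500), np.ones(500)])   # 0 = ordered, 1 = disordered
        predictor_a = np.concatenate([rng.normal(0.3, 0.15, 500), rng.normal(0.7, 0.15, 500)])
        predictor_b = np.concatenate([rng.normal(0.4, 0.10, 500), rng.normal(0.6, 0.10, 500)])

        for name, scores in [("predictor_a", predictor_a), ("predictor_b", predictor_b)]:
            t = threshold_at_fpr(y, scores, target_fpr=0.05)
            print(f"{name}: use threshold {t:.2f} for a 5% false positive rate")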

  15. An Analysis on Better Testing than Training Performances on the Iris Dataset

    NARCIS (Netherlands)

    Schutten, Marten; Wiering, Marco

    2016-01-01

    The Iris dataset is a well known dataset containing information on three different types of Iris flowers. A typical and popular method for solving classification problems on datasets such as the Iris set is the support vector machine (SVM). In order to do so the dataset is separated in a set used
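
    A minimal sketch of the setup referred to above: split the Iris data into training and testing parts, fit an SVM, and compare the two accuracies. Hyperparameters are scikit-learn defaults and are not tuned to reproduce the paper's results:

        # Train/test split on Iris with a support vector machine classifier.
        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=0)

        clf = SVC(kernel="rbf").fit(X_train, y_train)
        print(f"train accuracy: {clf.score(X_train, y_train):.3f}")
        print(f"test accuracy:  {clf.score(X_test, y_test):.3f}")
        # Comparing the two scores is the starting point for analyzing why testing
        # performance can sometimes exceed training performance.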

  16. Threshold Concepts and Information Literacy

    Science.gov (United States)

    Townsend, Lori; Brunetti, Korey; Hofer, Amy R.

    2011-01-01

    What do we teach when we teach information literacy in higher education? This paper describes a pedagogical approach to information literacy that helps instructors focus content around transformative learning thresholds. The threshold concept framework holds promise for librarians because it grounds the instructor in the big ideas and underlying…

  17. A Threshold Continuum for Aeolian Sand Transport

    Science.gov (United States)

    Swann, C.; Ewing, R. C.; Sherman, D. J.

    2015-12-01

    The threshold of motion for aeolian sand transport marks the initial entrainment of sand particles by the force of the wind. This is typically defined and modeled as a singular wind speed for a given grain size and is based on field and laboratory experimental data. However, the definition of threshold varies significantly between these empirical models, largely because the definition is based on visual observations of initial grain movement. For example, in his seminal experiments, Bagnold defined threshold of motion when he observed that 100% of the bed was in motion. Others have used 50% and lesser values. Differences in threshold models, in turn, result in large errors in predicting the fluxes associated with sand and dust transport. Here we use a wind tunnel and novel sediment trap to capture the fractions of sand in creep, reptation and saltation at Earth and Mars pressures and show that the threshold of motion for aeolian sand transport is best defined as a continuum in which grains progress through stages defined by the proportion of grains in creep and saltation. We propose the use of scale dependent thresholds modeled by distinct probability distribution functions that differentiate the threshold based on micro to macro scale applications. For example, a geologic timescale application corresponds to a threshold when 100% of the bed is in motion whereas a sub-second application corresponds to a threshold when a single particle is set in motion. We provide quantitative measurements (number and mode of particle movement) corresponding to visual observations, percent of bed in motion and degrees of transport intermittency for Earth and Mars. Understanding transport as a continuum provides a basis for re-evaluating sand transport thresholds on Earth, Mars and Titan.

  18. Characterization of Depleted-Uranium Strength and Damage Behavior

    Energy Technology Data Exchange (ETDEWEB)

    Gray, III, George T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chen, Shuh-Rong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bronkhorst, Curt A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dennis-Koller, Darcie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cerreta, Ellen K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cady, Carl M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); McCabe, Rodney J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Addessio, Francis L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Schraad, Mark W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Thoma, Dan J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lopez, Mike F. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mason, Thomas A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Papin, Pallas A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Trujillo, Carl P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Korzekwa, Deniece R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Luscher, Darby J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hixson, Robert S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Maudlin, Paul J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelly, A. M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2012-12-17

    The intent of this report is to document the status of our knowledge of the mechanical and damage behavior of Depleted Uranium (DU hereafter). This report briefly summarizes the motivation of the experimental and modeling research conducted at Los Alamos National Laboratory (LANL) on DU since the early 1980’s and thereafter the current experimental data quantifying the strength and damage behavior of DU as a function of a number of experimental variables including processing, strain rate, temperature, stress state, and shock prestraining. The effect of shock prestraining on the structure-property response of DU is described and the effect on post-shock mechanical behavior of DU is discussed. The constitutive experimental data utilized to support the derivation of two constitutive strength (plasticity) models, the Preston-Tonks-Wallace (PTW) and Mechanical Threshold Stress (MTS) models, for both annealed and shock prestrained DU are detailed and the Taylor cylinder validation tests and finite-element modeling (FEM) utilized to validate these strength models are discussed. The similarities and differences in the PTW and MTS model descriptions for DU are discussed for both the annealed and shock prestrained conditions. Quasi-static tensile data as a function of triaxial constraint and spallation test data are described. An appendix additionally briefly describes low-pressure equation-of-state data for DU utilized to support the spallation experiments. The constitutive behavior of DU screw/bolt material is presented. The response of DU subjected to dynamic tensile extrusion testing as a function of temperature is also described. This integrated experimental technique is planned to provide an additional validation test in the future. The damage data as a function of triaxiality, tensile and spallation data, is thereafter utilized to support derivation of the Tensile Plasticity (TEPLA) damage model and simulations for comparison to the DU spallation data are presented

  19. Interactive visualization and analysis of multimodal datasets for surgical applications.

    Science.gov (United States)

    Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James

    2012-12-01

    Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.

  20. Histogram-Based Thresholding for Detection and Quantification of Hemorrhages in Retinal Images

    Directory of Open Access Journals (Sweden)

    Hussain Fadhel Hamdan Jaafar

    2016-12-01

    Full Text Available Retinal image analysis is commonly used for the detection and quantification of retinal diabetic retinopathy. In retinal images, dark lesions including hemorrhages and microaneurysms are the earliest warnings of vision loss. In this paper, new algorithm for extraction and quantification of hemorrhages in fundus images is presented. Hemorrhage candidates are extracted in a preliminary step as a coarse segmentation followed by a fine segmentation step. Local variation processes are applied in the coarse segmentation step to determine boundaries of all candidates with distinct edges. Fine segmentation processes are based on histogram thresholding to extract real hemorrhages from the segmented candidates locally. The proposed method was trained and tested using an image dataset of 153 manually labeled retinal images. At the pixel level, the proposed method could identify abnormal retinal images with 90.7% sensitivity and 85.1% predictive value. Due to its distinctive performance measurements, this technique demonstrates that it could be used for a computer-aided mass screening of retinal diseases.
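
    As a simplified stand-in for the histogram-based thresholding described above, the sketch below applies Otsu's global histogram threshold to the green channel of a fundus image to flag dark-lesion candidate pixels; it is not the paper's local coarse-to-fine segmentation, and the file name is hypothetical:

        # Simplified histogram-based thresholding for dark-lesion candidates.
        # "fundus.png" is a hypothetical RGB fundus photograph.
        import numpy as np
        from skimage import io, filters

        image = io.imread("fundus.png")             # RGB fundus photograph (hypothetical path)
        green = image[:, :, 1].astype(float)        # dark lesions contrast best in the green channel

        threshold = filters.threshold_otsu(green)   # histogram-based global threshold
        candidates = green < threshold              # hemorrhages/microaneurysms appear dark

        print(f"Otsu threshold: {threshold:.1f}")
        print(f"candidate pixels: {int(candidates.sum())} ({candidates.mean():.1%} of image)")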

  1. Something From Nothing (There): Collecting Global IPv6 Datasets from DNS

    NARCIS (Netherlands)

    Fiebig, T.; Borgolte, Kevin; Hao, Shuang; Kruegel, Christopher; Vigna, Giovanny; Spring, Neil; Riley, George F.

    2017-01-01

    Current large-scale IPv6 studies mostly rely on non-public datasets, as most public datasets are domain specific. For instance, traceroute-based datasets are biased toward network equipment. In this paper, we present a new methodology to collect IPv6 address datasets that does not require access to

  2. Automatic processing of multimodal tomography datasets.

    Science.gov (United States)

    Parsons, Aaron D; Price, Stephen W T; Wadeson, Nicola; Basham, Mark; Beale, Andrew M; Ashton, Alun W; Mosselmans, J Frederick W; Quinn, Paul D

    2017-01-01

    With the development of fourth-generation high-brightness synchrotrons on the horizon, the already large volume of data that will be collected on imaging and mapping beamlines is set to increase by orders of magnitude. As such, an easy and accessible way of dealing with such large datasets as quickly as possible is required in order to be able to address the core scientific problems during the experimental data collection. Savu is an accessible and flexible big data processing framework that is able to deal with both the variety and the volume of data of multimodal and multidimensional scientific datasets output such as those from chemical tomography experiments on the I18 microfocus scanning beamline at Diamond Light Source.

  3. Axillary Lymph Node Evaluation Utilizing Convolutional Neural Networks Using MRI Dataset.

    Science.gov (United States)

    Ha, Richard; Chang, Peter; Karcich, Jenika; Mutasa, Simukayi; Fardanesh, Reza; Wynn, Ralph T; Liu, Michael Z; Jambawalikar, Sachin

    2018-04-25

    The aim of this study is to evaluate the role of convolutional neural networks (CNN) in predicting axillary lymph node metastasis, using a breast MRI dataset. An institutional review board (IRB)-approved retrospective review of our database from 1/2013 to 6/2016 identified 275 axillary lymph nodes for this study. A total of 133 biopsy-proven metastatic axillary lymph nodes and 142 negative control lymph nodes were identified, the latter based on benign biopsies (100) and on healthy MRI screening patients (42) with at least 3 years of negative follow-up. For each breast MRI, the axillary lymph node was identified on the first T1 post-contrast dynamic images and underwent 3D segmentation using the open source software platform 3D Slicer. A 32 × 32 patch was then extracted from the center slice of the segmented tumor data. A CNN was designed for lymph node prediction based on each of these cropped images. The CNN consisted of seven convolutional layers and max-pooling layers, with 50% dropout applied in the linear layer. In addition, data augmentation and L2 regularization were performed to limit overfitting. Training was implemented using the Adam optimizer, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments. Code for this study was written in Python using the TensorFlow module (1.0.0). Experiments and CNN training were done on a Linux workstation with an NVIDIA GTX 1070 Pascal GPU. Two-class axillary lymph node metastasis prediction models were evaluated. For each lymph node, a final softmax score threshold of 0.5 was used for classification. Based on this, the CNN achieved a mean five-fold cross-validation accuracy of 84.3%. It is feasible for current deep CNN architectures to be trained to predict the likelihood of axillary lymph node metastasis. A larger dataset will likely improve our prediction model and can potentially offer a non-invasive alternative to core needle biopsy and even sentinel lymph node biopsy.
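
    A network of roughly the shape described above (32 × 32 input patches, seven stacked convolutional layers with max-pooling, 50% dropout before the final linear layer, L2 weight decay and the Adam optimizer) can be sketched with tf.keras as below. This is not the authors' code: the filter counts, kernel sizes, learning rate and regularization strength are placeholders, and data augmentation is omitted.

        import tensorflow as tf
        from tensorflow.keras import layers, models, regularizers

        def build_node_classifier(input_shape=(32, 32, 1), weight_decay=1e-4):
            """Small CNN for two-class lymph-node patch classification (illustrative)."""
            reg = regularizers.l2(weight_decay)
            model = models.Sequential()
            model.add(layers.Conv2D(32, 3, padding="same", activation="relu",
                                    kernel_regularizer=reg, input_shape=input_shape))
            # Six more convolutional layers, with max-pooling after every second one.
            for i, filters in enumerate([32, 64, 64, 128, 128, 256]):
                model.add(layers.Conv2D(filters, 3, padding="same", activation="relu",
                                        kernel_regularizer=reg))
                if i % 2 == 0:
                    model.add(layers.MaxPooling2D(2))
            model.add(layers.Flatten())
            model.add(layers.Dropout(0.5))        # 50% dropout before the linear layer
            model.add(layers.Dense(2, activation="softmax", kernel_regularizer=reg))
            model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                          loss="sparse_categorical_crossentropy",
                          metrics=["accuracy"])
            return model

        # Classification rule from the abstract: positive if the softmax score >= 0.5.
        # probs = build_node_classifier().predict(patches)   # patches: (N, 32, 32, 1)
        # predicted_metastatic = probs[:, 1] >= 0.5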

  4. Iran: the next nuclear threshold state?

    OpenAIRE

    Maurer, Christopher L.

    2014-01-01

    Approved for public release; distribution is unlimited A nuclear threshold state is one that could quickly operationalize its peaceful nuclear program into one capable of producing a nuclear weapon. This thesis compares two known threshold states, Japan and Brazil, with Iran to determine if the Islamic Republic could also be labeled a threshold state. Furthermore, it highlights the implications such a status could have on U.S. nonproliferation policy. Although Iran's nuclear program is mir...

  5. GUDM: Automatic Generation of Unified Datasets for Learning and Reasoning in Healthcare.

    Science.gov (United States)

    Ali, Rahman; Siddiqi, Muhammad Hameed; Idris, Muhammad; Ali, Taqdir; Hussain, Shujaat; Huh, Eui-Nam; Kang, Byeong Ho; Lee, Sungyoung

    2015-07-02

    A wide array of biomedical data are generated and made available to healthcare experts. However, due to the diverse nature of these data, it is difficult to predict outcomes from them. It is therefore necessary to combine these diverse data sources into a single unified dataset. This paper proposes a global unified data model (GUDM) to provide a global unified data structure for all data sources and to generate a unified dataset with a "data modeler" tool. The proposed tool implements a user-centric, priority-based approach which can easily resolve the problems of unified data modeling and overlapping attributes across multiple datasets. The tool is illustrated using sample diabetes mellitus data. The diverse data sources used to generate the unified dataset for diabetes mellitus include clinical trial information, a social media interaction dataset and physical activity data collected using different sensors. To demonstrate the significance of the unified dataset, we adopted a well-known rough set theory based rules creation process to create rules from the unified dataset. The evaluation of the tool on six different sets of locally created diverse datasets shows that the tool, on average, reduces the time efforts of the experts and knowledge engineer by 94.1% while creating unified datasets.

  6. Fatigue crack Behaviour in a High Strength Tool Steel

    DEFF Research Database (Denmark)

    Højerslev, Christian; Carstensen, Jesper V.; Brøndsted, Povl

    2002-01-01

    The influence of microstructure on fatigue crack initiation and crack growth of a hardened and tempered high speed steel was investigated. The evolution of fatigue cracks was followed in four point bending at room temperature. It was found that a carbide damage zone exists above a threshold load value of maximally 80% of the yield strength of the steel. The size of this carbide damage zone increases with increasing load amplitude, and the zone is apparently associated with crack nucleation. On fatigue crack propagation, plastic deformation of the matrix occurs in a radius of approximately 4 microns in front of the fatigue crack tip, which is comparable with the relevant mean free carbide spacing.

  7. A Research Graph dataset for connecting research data repositories using RD-Switchboard.

    Science.gov (United States)

    Aryani, Amir; Poblet, Marta; Unsworth, Kathryn; Wang, Jingbo; Evans, Ben; Devaraju, Anusuriya; Hausstein, Brigitte; Klas, Claus-Peter; Zapilko, Benjamin; Kaplun, Samuele

    2018-05-29

    This paper describes the open access graph dataset that shows the connections between Dryad, CERN, ANDS and other international data repositories to publications and grants across multiple research data infrastructures. The graph dataset was created using the Research Graph data model and the Research Data Switchboard (RD-Switchboard), a collaborative project by the Research Data Alliance DDRI Working Group (DDRI WG) with the aim to discover and connect the related research datasets based on publication co-authorship or jointly funded grants. The graph dataset allows researchers to trace and follow the paths to understanding a body of work. By mapping the links between research datasets and related resources, the graph dataset improves both their discovery and visibility, while avoiding duplicate efforts in data creation. Ultimately, the linked datasets may spur novel ideas, facilitate reproducibility and re-use in new applications, stimulate combinatorial creativity, and foster collaborations across institutions.

  8. Incorporation of trace elements in Portland cement clinker: Thresholds limits for Cu, Ni, Sn or Zn

    International Nuclear Information System (INIS)

    Gineys, N.; Aouad, G.; Sorrentino, F.; Damidot, D.

    2011-01-01

    This paper aims at precisely defining the threshold limits for several trace elements (Cu, Ni, Sn or Zn), which correspond to the maximum amount that could be incorporated into a standard clinker whilst reaching the limit of solid solution of its four major phases (C3S, C2S, C3A and C4AF). These threshold limits were investigated through laboratory-synthesised clinkers that were mainly studied by X-ray diffraction and scanning electron microscopy. The reference clinker was close to a typical Portland clinker (65% C3S, 18% C2S, 8% C3A and 8% C4AF). The threshold limits for Cu, Ni, Zn and Sn are quite high with respect to the current contents in clinker and were respectively equal to 0.35, 0.5, 0.7 and 1 wt.%. It appeared that beyond the defined threshold limits, the trace elements behaved differently. Ni was associated with Mg as a magnesium nickel oxide (MgNiO2) and Sn reacted with lime to form a calcium stannate (Ca2SnO4). Cu changed the crystallisation process and therefore affected the formation of C3S; indeed, a high content of Cu in clinker led to the decomposition of C3S into C2S and free lime. Zn, in turn, affected the formation of C3A: Ca6Zn3Al4O15 was formed whilst a tremendous reduction of the C3A content was identified. The reactivity of cements made with the clinkers at the threshold limits was followed by calorimetry and compressive strength measurements on cement paste. The results revealed that the doped cements were at least as reactive as the reference cement.

  9. Hydrometeorological threshold conditions for debris flow initiation in Norway

    Directory of Open Access Journals (Sweden)

    N. K. Meyer

    2012-10-01

    Full Text Available Debris flows, triggered by extreme precipitation events and rapid snow melt, cause considerable damage to the Norwegian infrastructure every year. To define intensity-duration (ID) thresholds for debris flow initiation, critical water supply conditions arising from intensive rainfall or snow melt were assessed on the basis of daily hydro-meteorological information for 502 documented debris flow events. Two threshold types were computed: one based on absolute ID relationships and one using ID relationships normalized by the local precipitation day normal (PDN). For each threshold type, minimum, medium and maximum threshold values were defined by fitting power law curves along the 10th, 50th and 90th percentiles of the data population. Depending on the duration of the event, the absolute threshold intensities needed for debris flow initiation vary between 15 and 107 mm day⁻¹. Since the PDN changes locally, the normalized thresholds show spatial variations. Depending on location, duration and threshold level, the normalized threshold intensities vary between 6 and 250 mm day⁻¹. The thresholds obtained were used for a frequency analysis of over-threshold events, giving an estimation of the exceedance probability and thus the potential for debris flow events in different parts of Norway. The absolute thresholds are most often exceeded along the west coast, while the normalized thresholds are most frequently exceeded on the west-facing slopes of the Norwegian mountain ranges. The minimum thresholds derived in this study are in the range of other thresholds obtained for regions with a climate comparable to Norway. Statistics reveal that the normalized threshold is more reliable than the absolute threshold, as the former shows no spatial clustering of debris flows related to water supply events captured by the threshold.
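
    The intensity-duration thresholds described above are power laws of the form I = a·D^b fitted along chosen percentiles of the triggering events. A minimal way to reproduce that construction, assuming one has a list of (duration, intensity) pairs for documented events, is sketched below; binning by duration and taking percentiles within bins is an assumption of this sketch, not necessarily the exact procedure of the paper.

        import numpy as np

        def id_threshold(durations, intensities, percentile=10, n_bins=8):
            """Fit I = a * D**b through a given percentile of event intensities.

            durations   : event durations (e.g. days)
            intensities : water supply intensities (e.g. mm/day)
            Returns (a, b).
            """
            durations = np.asarray(durations, float)
            intensities = np.asarray(intensities, float)

            # Bin events by duration (log-spaced bins) and take the requested
            # percentile of intensity within each bin.
            edges = np.logspace(np.log10(durations.min()),
                                np.log10(durations.max()), n_bins + 1)
            d_mid, i_pct = [], []
            for lo, hi in zip(edges[:-1], edges[1:]):
                mask = (durations >= lo) & (durations <= hi)
                if mask.sum() >= 5:
                    d_mid.append(np.sqrt(lo * hi))
                    i_pct.append(np.percentile(intensities[mask], percentile))

            # A power law is linear in log-log space: log I = log a + b * log D.
            b, log_a = np.polyfit(np.log(d_mid), np.log(i_pct), 1)
            return np.exp(log_a), b

        # Minimum (10th percentile), medium (50th) and maximum (90th) curves:
        # a_min, b_min = id_threshold(D, I, percentile=10)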

  10. Error threshold inference from Global Precipitation Measurement (GPM) satellite rainfall data and interpolated ground-based rainfall measurements in Metro Manila

    Science.gov (United States)

    Ampil, L. J. Y.; Yao, J. G.; Lagrosas, N.; Lorenzo, G. R. H.; Simpas, J.

    2017-12-01

    The Global Precipitation Measurement (GPM) mission is a group of satellites that provides global observations of precipitation. Satellite-based observations act as an alternative if ground-based measurements are inadequate or unavailable. Data provided by satellites, however, must be validated for these data to be reliable and used effectively. In this study, the Integrated Multisatellite Retrievals for GPM (IMERG) Final Run v3 half-hourly product is validated by comparing against interpolated ground measurements derived from sixteen ground stations in Metro Manila. The area considered in this study is the region 14.4° - 14.8° latitude and 120.9° - 121.2° longitude, subdivided into twelve 0.1° x 0.1° grid squares. Satellite data from June 1 - August 31, 2014, aggregated to 1-day temporal resolution, are used in this study. The satellite data are also compared directly to measurements from individual ground stations, to determine the effect of the interpolation by contrast with the comparison of satellite data and interpolated measurements. The comparisons are quantified by taking a fractional root-mean-square error (F-RMSE) between two datasets. The results show that interpolation reduces errors compared to using raw station data, except during days with very small amounts of rainfall. F-RMSE reaches extreme values of up to 654 without a rainfall threshold. A rainfall threshold is inferred to remove extreme error values and make the distribution of F-RMSE more consistent. Results show that the rainfall threshold varies slightly per month. The threshold for June is inferred to be 0.5 mm, reducing the maximum F-RMSE to 9.78, while the threshold for July and August is inferred to be 0.1 mm, reducing the maximum F-RMSE to 4.8 and 10.7, respectively. The maximum F-RMSE is reduced further as the threshold is increased. Maximum F-RMSE is reduced to 3.06 when a rainfall threshold of 10 mm is applied over the entire duration of JJA. These results indicate that
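
    The comparison statistic used above, a fractional root-mean-square error between satellite and (interpolated) gauge rainfall combined with a rainfall threshold that discards low-rainfall reference days, can be written compactly. The exact normalisation of the F-RMSE is not given in the abstract, so the sketch below assumes the RMSE divided by the mean of the reference series.

        import numpy as np

        def f_rmse(satellite, reference, threshold_mm=0.0):
            """Fractional RMSE between two daily rainfall series (sketch).

            Days where the reference rainfall is below `threshold_mm` are excluded,
            mirroring the rainfall threshold inferred in the study.  The F-RMSE is
            assumed here to be the RMSE normalised by the mean reference rainfall.
            """
            sat = np.asarray(satellite, float)
            ref = np.asarray(reference, float)
            keep = ref >= threshold_mm
            if keep.sum() == 0:
                return np.nan
            rmse = np.sqrt(np.mean((sat[keep] - ref[keep]) ** 2))
            return rmse / ref[keep].mean()

        # Effect of raising the threshold from 0 mm to 0.5 mm:
        # print(f_rmse(gpm_daily, gauge_daily, 0.0), f_rmse(gpm_daily, gauge_daily, 0.5))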

  11. Process mining in oncology using the MIMIC-III dataset

    Science.gov (United States)

    Prima Kurniati, Angelina; Hall, Geoff; Hogg, David; Johnson, Owen

    2018-03-01

    Process mining is a data analytics approach to discover and analyse process models based on the real activities captured in information systems. There is a growing body of literature on process mining in healthcare, including oncology, the study of cancer. In earlier work we found 37 peer-reviewed papers describing process mining research in oncology with a regular complaint being the limited availability and accessibility of datasets with suitable information for process mining. Publicly available datasets are one option and this paper describes the potential to use MIMIC-III, for process mining in oncology. MIMIC-III is a large open access dataset of de-identified patient records. There are 134 publications listed as using the MIMIC dataset, but none of them have used process mining. The MIMIC-III dataset has 16 event tables which are potentially useful for process mining and this paper demonstrates the opportunities to use MIMIC-III for process mining in oncology. Our research applied the L* lifecycle method to provide a worked example showing how process mining can be used to analyse cancer pathways. The results and data quality limitations are discussed along with opportunities for further work and reflection on the value of MIMIC-III for reproducible process mining research.

  12. 11 CFR 9036.1 - Threshold submission.

    Science.gov (United States)

    2010-01-01

    ... credit or debit card, including one made over the Internet, the candidate shall provide sufficient... section shall not count toward the threshold amount. (c) Threshold certification by Commission. (1) After...

  13. Veterans Affairs Suicide Prevention Synthetic Dataset

    Data.gov (United States)

    Department of Veterans Affairs — The VA's Veteran Health Administration, in support of the Open Data Initiative, is providing the Veterans Affairs Suicide Prevention Synthetic Dataset (VASPSD). The...

  14. SAR image classification based on CNN in real and simulation datasets

    Science.gov (United States)

    Peng, Lijiang; Liu, Ming; Liu, Xiaohua; Dong, Liquan; Hui, Mei; Zhao, Yuejin

    2018-04-01

    Convolutional neural networks (CNN) have achieved great success in image classification tasks. Even in the field of synthetic aperture radar automatic target recognition (SAR-ATR), state-of-the-art results have been obtained by learning deep representations of features on the MSTAR benchmark. However, the raw MSTAR data have shortcomings for training a SAR-ATR model because of the high similarity of the backgrounds among the SAR images of each class. This indicates that the CNN would learn the hierarchies of features of the backgrounds as well as of the targets. To validate the influence of the background, additional SAR image datasets were created containing simulated SAR images of 10 manufactured targets such as tanks and fighter aircraft, with the backgrounds of the simulated SAR images sampled from the original MSTAR data. The simulated datasets include one in which the backgrounds of each image class correspond to a single class of MSTAR target or clutter backgrounds, and one in which each image is given a random background drawn from all MSTAR targets or clutter. In addition, mixed datasets combining MSTAR and simulated images were used in the experiments. The CNN architecture proposed in this paper is trained on all datasets mentioned above. The experimental results show that the architecture achieves high performance on all datasets even when the backgrounds of the images are miscellaneous, which indicates that the architecture can learn a good representation of the targets despite drastic changes in background.

  15. On sample size and different interpretations of snow stability datasets

    Science.gov (United States)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test or combinations of various tests in order to detect differences in aspect and elevation. The question arose: 'How capable are such stability interpretations in drawing conclusions?' There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale, and (iii) that the stability interpretation might not be directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional scale stability variations will be quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements were needed to obtain similar results (mainly stability differences in aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined with a given test, significance level and power, and by calculating the mean and standard deviation of the complete dataset. With this method it can also be determined whether the complete dataset consists of an appropriate sample size. (ii) Smaller subsets were created with similar
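
    Approach (i) above, i.e. asking how many observations are needed for a given test, significance level and power using the mean and standard deviation of the complete dataset, is a standard power calculation. A generic version is sketched below with statsmodels; the two-sample t-test and the illustrative effect size are assumptions of this sketch, since the abstract does not specify which test was used.

        from statsmodels.stats.power import TTestIndPower

        def required_sample_size(mean_diff, pooled_sd, alpha=0.05, power=0.8):
            """Per-group sample size for detecting `mean_diff` with a two-sample t-test."""
            effect_size = mean_diff / pooled_sd     # Cohen's d from the full dataset
            analysis = TTestIndPower()
            return analysis.solve_power(effect_size=effect_size, alpha=alpha,
                                        power=power, alternative="two-sided")

        # Hypothetical example: stability differs by 0.5 classes between aspects,
        # with a standard deviation of 1.2 classes:
        # print(required_sample_size(0.5, 1.2))   # roughly 90 profiles per aspect class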

  16. Really big data: Processing and analysis of large datasets

    Science.gov (United States)

    Modern animal breeding datasets are large and getting larger, due in part to the recent availability of DNA data for many animals. Computational methods for efficiently storing and analyzing those data are under development. The amount of storage space required for such datasets is increasing rapidl...

  17. High-frequency (8 to 16 kHz) reference thresholds and intrasubject threshold variability relative to ototoxicity criteria using a Sennheiser HDA 200 earphone.

    Science.gov (United States)

    Frank, T

    2001-04-01

    The first purpose of this study was to determine high-frequency (8 to 16 kHz) thresholds for standardizing reference equivalent threshold sound pressure levels (RETSPLs) for a Sennheiser HDA 200 earphone. The second and perhaps more important purpose of this study was to determine whether repeated high-frequency thresholds using a Sennheiser HDA 200 earphone had a lower intrasubject threshold variability than the ASHA 1994 significant threshold shift criteria for ototoxicity. High-frequency thresholds (8 to 16 kHz) were obtained for 100 (50 male, 50 female) normally hearing (0.25 to 8 kHz) young adults (mean age of 21.2 yr) in four separate test sessions using a Sennheiser HDA 200 earphone. The mean and median high-frequency thresholds were similar for each test session and increased as frequency increased. At each frequency, the high-frequency thresholds were not significantly (p > 0.05) different for gender, test ear, or test session. The median thresholds at each frequency were similar to the 1998 interim ISO RETSPLs; however, large standard deviations and wide threshold distributions indicated very high intersubject threshold variability, especially at 14 and 16 kHz. Threshold repeatability was determined by finding the threshold differences between each possible test session comparison (N = 6). About 98% of all of the threshold differences were within a clinically acceptable range of +/-10 dB from 8 to 14 kHz. The threshold differences between each subject's second, third, and fourth minus their first test session were also found to determine whether intrasubject threshold variability was less than the ASHA 1994 criteria for determining a significant threshold shift due to ototoxicity. The results indicated a false-positive rate of 0% for a threshold shift > or = 20 dB at any frequency and a false-positive rate of 2% for a threshold shift >10 dB at two consecutive frequencies. This study verified that the output of high-frequency audiometers at 0 dB HL using

  18. A robust dataset-agnostic heart disease classifier from Phonocardiogram.

    Science.gov (United States)

    Banerjee, Rohan; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan; Mandana, K M

    2017-07-01

    Automatic classification of normal and abnormal heart sounds is a popular area of research. However, building a robust algorithm unaffected by signal quality and patient demography is a challenge. In this paper we have analysed a wide list of Phonocardiogram (PCG) features in time and frequency domain along with morphological and statistical features to construct a robust and discriminative feature set for dataset-agnostic classification of normal and cardiac patients. The large and open access database, made available in Physionet 2016 challenge was used for feature selection, internal validation and creation of training models. A second dataset of 41 PCG segments, collected using our in-house smart phone based digital stethoscope from an Indian hospital was used for performance evaluation. Our proposed methodology yielded sensitivity and specificity scores of 0.76 and 0.75 respectively on the test dataset in classifying cardiovascular diseases. The methodology also outperformed three popular prior art approaches, when applied on the same dataset.

  19. A Comparative Analysis of Classification Algorithms on Diverse Datasets

    Directory of Open Access Journals (Sweden)

    M. Alghobiri

    2018-04-01

    Full Text Available Data mining involves the computational process of finding patterns in large data sets. Classification, one of the main domains of data mining, involves generalizing known structure to apply it to a new dataset and predict its class. There are various classification algorithms being used to classify various data sets. They are based on different methods such as probability, decision trees, neural networks, nearest neighbours, boolean and fuzzy logic, kernel-based methods, etc. In this paper, we apply three diverse classification algorithms to ten datasets. The datasets have been selected based on their size and/or the number and nature of their attributes. Results are discussed using performance evaluation measures such as precision, accuracy, F-measure, Kappa statistics, mean absolute error, relative absolute error, ROC area, etc. A comparative analysis has been carried out using the performance evaluation measures of accuracy, precision, and F-measure. We specify features and limitations of the classification algorithms for the diverse nature of the datasets.
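
    A minimal version of the comparison described above, with several classifiers of different families evaluated on the same data using accuracy, precision and F-measure, can be set up with scikit-learn as below. The three algorithms and the single example dataset are placeholders; the paper's ten datasets and exact algorithm choices are not reproduced here.

        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_validate
        from sklearn.naive_bayes import GaussianNB
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.neighbors import KNeighborsClassifier

        X, y = load_breast_cancer(return_X_y=True)      # stand-in for one of the datasets

        classifiers = {
            "naive_bayes": GaussianNB(),                # probability-based
            "decision_tree": DecisionTreeClassifier(random_state=0),
            "knn": KNeighborsClassifier(n_neighbors=5), # nearest-neighbour
        }

        scoring = ["accuracy", "precision_macro", "f1_macro"]

        for name, clf in classifiers.items():
            scores = cross_validate(clf, X, y, cv=10, scoring=scoring)
            print(name,
                  "acc=%.3f" % scores["test_accuracy"].mean(),
                  "prec=%.3f" % scores["test_precision_macro"].mean(),
                  "F=%.3f" % scores["test_f1_macro"].mean())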

  20. AUTHENTICATION ARCHITECTURE USING THRESHOLD CRYPTOGRAPHY IN KERBEROS FOR MOBILE AD HOC NETWORKS

    Directory of Open Access Journals (Sweden)

    Hadj Gharib

    2014-06-01

    Full Text Available The use of wireless technologies is gradually increasing and the risks related to the use of these technologies are considerable. Due to their dynamically changing topology and open environment without the centralized policy control of a traditional network, mobile ad hoc networks (MANETs) are vulnerable to the presence of malicious nodes and to attacks. The ideal solution to overcome a myriad of security concerns in MANETs is the use of a reliable authentication architecture. In this paper we propose a new key management scheme based on threshold cryptography in Kerberos for MANETs. The proposed scheme uses elliptic curve cryptography, which consumes fewer resources and is well adapted to the wireless environment. Our approach shows strength and effectiveness against attacks.
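
    The abstract gives no implementation detail, but the core idea of threshold cryptography, i.e. a secret (for example a Kerberos service key) split so that any t of n nodes can reconstruct it while fewer than t learn nothing, is commonly realised with Shamir secret sharing. The sketch below shows only that threshold mechanism over a prime field; it does not implement the paper's elliptic-curve construction or the Kerberos integration.

        import random

        PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the secrets shared here

        def split_secret(secret, t, n, prime=PRIME):
            """Split `secret` into n shares; any t of them reconstruct it."""
            coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
            def poly(x):
                acc = 0
                for c in reversed(coeffs):          # Horner evaluation mod prime
                    acc = (acc * x + c) % prime
                return acc
            return [(x, poly(x)) for x in range(1, n + 1)]

        def recover_secret(shares, prime=PRIME):
            """Lagrange interpolation at x = 0 recovers the secret from t shares."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = (num * -xj) % prime
                        den = (den * (xi - xj)) % prime
                secret = (secret + yi * num * pow(den, -1, prime)) % prime
            return secret

        # Example: a 3-of-5 sharing of a numeric key
        shares = split_secret(123456789, t=3, n=5)
        assert recover_secret(shares[:3]) == 123456789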

  1. Thermotactile perception thresholds measurement conditions.

    Science.gov (United States)

    Maeda, Setsuo; Sakakibara, Hisataka

    2002-10-01

    The purpose of this paper is to investigate the effects of posture, push force and rate of temperature change on thermotactile thresholds and to clarify suitable measuring conditions for Japanese people. Thermotactile (warm and cold) thresholds on the right middle finger were measured with an HVLab thermal aesthesiometer. Subjects were eight healthy male Japanese students. The effects of posture were examined with a straight hand and forearm placed on a support, the same posture without a support, and the fingers and hand flexed at the wrist with the elbow placed on a desk. The finger push force applied to the applicator of the thermal aesthesiometer was controlled at 0.5, 1.0, 2.0 and 3.0 N. The applicator temperature was changed at rates of 0.5, 1.0, 1.5, 2.0 and 2.5 degrees C/s. After each measurement, subjects were asked about comfort under the measuring conditions. Three series of experiments were conducted on different days to evaluate repeatability. Repeated measures ANOVA showed that warm thresholds were affected by the push force and the rate of temperature change and that cold thresholds were influenced by posture and push force. The comfort assessment indicated that the measurement posture with the straight hand and forearm laid on a support was the most comfortable for the subjects. Relatively high repeatability was obtained under measurement conditions of a 1 degree C/s temperature change rate and a 0.5 N push force. Measurement posture, push force and rate of temperature change can affect the thermal threshold. Judging from the repeatability, a push force of 0.5 N and a temperature change rate of 1.0 degree C/s in the posture with the straight hand and forearm laid on a support are recommended for warm and cold threshold measurements.

  2. DOE approach to threshold quantities

    International Nuclear Information System (INIS)

    Wickham, L.E.; Kluk, A.F.; Department of Energy, Washington, DC)

    1985-01-01

    The Department of Energy (DOE) is developing the concept of threshold quantities for use in determining which waste materials must be handled as radioactive waste and which may be disposed of as nonradioactive waste at its sites. Waste above this concentration level would be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. Ideally, the threshold must be set high enough to significantly reduce the amount of waste requiring special handling. It must also be low enough so that waste at the threshold quantity poses a very small health risk and multiple exposures to such waste would still constitute a small health risk. It should also be practical to segregate waste above or below the threshold quantity using available instrumentation. Guidance is being prepared to aid DOE sites in establishing threshold quantity values based on pathways analysis using site-specific parameters (waste stream characteristics, maximum exposed individual, population considerations, and site-specific parameters such as rainfall, etc.). A guidance dose of between 0.001 and 1.0 mSv/y (0.1 to 100 mrem/y) was recommended, with 0.3 mSv/y (30 mrem/y) selected as the guidance dose upon which to base calculations. Several tasks were identified, beginning with the selection of a suitable pathway model for relating dose to the concentration of radioactivity in the waste. Threshold concentrations corresponding to the guidance dose were determined for waste disposal sites at a selected humid and arid site. Finally, cost-benefit considerations at the example sites were addressed. The results of the various tasks are summarized and the relationship of this effort with related developments at other agencies discussed

  3. An assessment of differences in gridded precipitation datasets in complex terrain

    Science.gov (United States)

    Henn, Brian; Newman, Andrew J.; Livneh, Ben; Daly, Christopher; Lundquist, Jessica D.

    2018-01-01

    Hydrologic modeling and other geophysical applications are sensitive to precipitation forcing data quality, and there are known challenges in spatially distributing gauge-based precipitation over complex terrain. We conduct a comparison of six high-resolution, daily and monthly gridded precipitation datasets over the Western United States. We compare the long-term average spatial patterns, and interannual variability of water-year total precipitation, as well as multi-year trends in precipitation across the datasets. We find that the greatest absolute differences among datasets occur in high-elevation areas and in the maritime mountain ranges of the Western United States, while the greatest percent differences among datasets relative to annual total precipitation occur in arid and rain-shadowed areas. Differences between datasets in some high-elevation areas exceed 200 mm yr⁻¹ on average, and relative differences range from 5 to 60% across the Western United States. In areas of high topographic relief, true uncertainties and biases are likely higher than the differences among the datasets; we present evidence of this based on streamflow observations. Precipitation trends in the datasets differ in magnitude and sign at smaller scales, and are sensitive to how temporal inhomogeneities in the underlying precipitation gauge data are handled.
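
    The basic comparison above (long-term mean water-year totals per dataset, absolute differences, and differences expressed as a percentage of annual precipitation) reduces to a few array operations once the products share a common grid. The sketch below assumes the datasets have already been regridded and stacked as arrays of shape (n_years, ny, nx); that preprocessing step is not shown and is an assumption of the sketch.

        import numpy as np

        def compare_precip(datasets):
            """Pairwise comparison of gridded water-year precipitation totals (sketch).

            datasets : dict name -> array of shape (n_years, ny, nx), mm per water year,
                       all on the same grid (regridding assumed done beforehand).
            Returns dict of (name_a, name_b) -> (mean absolute diff, mean percent diff).
            """
            means = {k: np.nanmean(v, axis=0) for k, v in datasets.items()}
            out = {}
            names = sorted(means)
            for i, a in enumerate(names):
                for b in names[i + 1:]:
                    diff = means[a] - means[b]                       # mm/yr per grid cell
                    denom = np.maximum(0.5 * (means[a] + means[b]), 1e-9)
                    pct = 100.0 * diff / denom                       # percent of annual total
                    out[(a, b)] = (np.nanmean(np.abs(diff)), np.nanmean(np.abs(pct)))
            return out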

  4. Strontium removal jar test dataset for all figures and tables.

    Data.gov (United States)

    U.S. Environmental Protection Agency — The datasets were used to generate data to demonstrate strontium removal under various water quality and treatment conditions. This dataset is associated with the...

  5. Development of a SPARK Training Dataset

    Energy Technology Data Exchange (ETDEWEB)

    Sayre, Amanda M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Olson, Jarrod R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-03-01

    In its first five years, the National Nuclear Security Administration’s (NNSA) Next Generation Safeguards Initiative (NGSI) sponsored more than 400 undergraduate, graduate, and post-doctoral students in internships and research positions (Wyse 2012). In the past seven years, the NGSI program has produced, and continues to produce, a large body of scientific, technical, and policy work in targeted core safeguards capabilities and human capital development activities. Not only does the NGSI program carry out activities across multiple disciplines, but also across all U.S. Department of Energy (DOE)/NNSA locations in the United States. However, products are not readily shared among disciplines and across locations, nor are they archived in a comprehensive library. Rather, knowledge of NGSI-produced literature is localized to the researchers, clients, and internal laboratory/facility publication systems such as the Electronic Records and Information Capture Architecture (ERICA) at the Pacific Northwest National Laboratory (PNNL). There is also no incorporated way of analyzing existing NGSI literature to determine whether the larger NGSI program is achieving its core safeguards capabilities and activities. A complete library of NGSI literature could prove beneficial to a cohesive, sustainable, and more economical NGSI program. The Safeguards Platform for Automated Retrieval of Knowledge (SPARK) has been developed as a knowledge storage, retrieval, and analysis capability that captures safeguards knowledge so that it exists beyond the lifespan of NGSI. During the development process, it was necessary to build a SPARK training dataset (a corpus of documents) for initial entry into the system and for demonstration purposes. We manipulated these data to gain new information about the breadth of NGSI publications and evaluated the science-policy interface at PNNL as a practical demonstration of SPARK’s intended analysis capability. The analysis demonstration sought to answer the

  6. Benchmarking of Typical Meteorological Year datasets dedicated to Concentrated-PV systems

    Science.gov (United States)

    Realpe, Ana Maria; Vernay, Christophe; Pitaval, Sébastien; Blanc, Philippe; Wald, Lucien; Lenoir, Camille

    2016-04-01

    Accurate analysis of meteorological and pyranometric data for long-term analysis is the basis of decision-making for banks and investors regarding solar energy conversion systems. This has led to the development of methodologies for the generation of Typical Meteorological Year (TMY) datasets. The most used method for solar energy conversion systems was proposed in 1978 by the Sandia Laboratory (Hall et al., 1978), considering a specific weighted combination of different meteorological variables, notably global, diffuse horizontal and direct normal irradiances, air temperature, wind speed and relative humidity. In 2012, a new approach was proposed in the framework of the European project FP7 ENDORSE. It introduced the concept of a "driver", defined by the user as an explicit function of the relevant pyranometric and meteorological variables, to improve the representativeness of the TMY datasets with respect to the specific solar energy conversion system of interest. The present study aims at comparing and benchmarking different TMY datasets considering a specific Concentrated-PV (CPV) system as the solar energy conversion system of interest. Using long-term (15+ years) time series of high quality meteorological and pyranometric ground measurements, three types of TMY datasets were generated by the following methods: the Sandia method, a simplified driver with DNI as the only representative variable, and a more sophisticated driver. The latter takes into account the sensitivities of the CPV system with respect to the spectral distribution of the solar irradiance and the wind speed. Different TMY datasets from the three methods have been generated considering different numbers of years in the historical dataset, ranging from 5 to 15 years. The comparisons and benchmarking of these TMY datasets are conducted considering the long-term time series of simulated CPV electric production as a reference. The results of this benchmarking clearly show that the Sandia method is not

  7. Doubler system quench detection threshold

    International Nuclear Information System (INIS)

    Kuepke, K.; Kuchnir, M.; Martin, P.

    1983-01-01

    The experimental study leading to the determination of the sensitivity needed for protecting the Fermilab Doubler from damage during quenches is presented. The quench voltage thresholds involved were obtained from measurements made on Doubler cable of resistance versus temperature and voltage versus time during quenches under several currents, and from data collected during operation of the Doubler Quench Protection System as implemented in the B-12 string of 20 magnets. At 4 kA, a quench voltage threshold in excess of 5.0 V will limit the peak Doubler cable temperature to 452 K for quenches originating in the magnet coils, whereas a threshold of 0.5 V is required for quenches originating outside of coils

  8. MGH-USC Human Connectome Project datasets with ultra-high b-value diffusion MRI.

    Science.gov (United States)

    Fan, Qiuyun; Witzel, Thomas; Nummenmaa, Aapo; Van Dijk, Koene R A; Van Horn, John D; Drews, Michelle K; Somerville, Leah H; Sheridan, Margaret A; Santillana, Rosario M; Snyder, Jenna; Hedden, Trey; Shaw, Emily E; Hollinshead, Marisa O; Renvall, Ville; Zanzonico, Roberta; Keil, Boris; Cauley, Stephen; Polimeni, Jonathan R; Tisdall, Dylan; Buckner, Randy L; Wedeen, Van J; Wald, Lawrence L; Toga, Arthur W; Rosen, Bruce R

    2016-01-01

    The MGH-USC CONNECTOM MRI scanner housed at the Massachusetts General Hospital (MGH) is a major hardware innovation of the Human Connectome Project (HCP). The 3T CONNECTOM scanner is capable of producing a magnetic field gradient of up to 300 mT/m strength for in vivo human brain imaging, which greatly shortens the time spent on diffusion encoding, and decreases the signal loss due to T2 decay. To demonstrate the capability of the novel gradient system, data of healthy adult participants were acquired for this MGH-USC Adult Diffusion Dataset (N=35), minimally preprocessed, and shared through the Laboratory of Neuro Imaging Image Data Archive (LONI IDA) and the WU-Minn Connectome Database (ConnectomeDB). Another purpose of sharing the data is to facilitate methodological studies of diffusion MRI (dMRI) analyses utilizing high diffusion contrast, which perhaps is not easily feasible with standard MR gradient system. In addition, acquisition of the MGH-Harvard-USC Lifespan Dataset is currently underway to include 120 healthy participants ranging from 8 to 90 years old, which will also be shared through LONI IDA and ConnectomeDB. Here we describe the efforts of the MGH-USC HCP consortium in acquiring and sharing the ultra-high b-value diffusion MRI data and provide a report on data preprocessing and access. We conclude with a demonstration of the example data, along with results of standard diffusion analyses, including q-ball Orientation Distribution Function (ODF) reconstruction and tractography. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. SIAM 2007 Text Mining Competition dataset

    Data.gov (United States)

    National Aeronautics and Space Administration — Subject Area: Text Mining Description: This is the dataset used for the SIAM 2007 Text Mining competition. This competition focused on developing text mining...

  10. Environmental Dataset Gateway (EDG) REST Interface

    Data.gov (United States)

    U.S. Environmental Protection Agency — Use the Environmental Dataset Gateway (EDG) to find and access EPA's environmental resources. Many options are available for easily reusing EDG content in other...

  11. High-resolution modeling of thermal thresholds and environmental influences on coral bleaching for local and regional reef management.

    Science.gov (United States)

    Kumagai, Naoki H; Yamano, Hiroya

    2018-01-01

    Coral reefs are one of the world's most threatened ecosystems, with global and local stressors contributing to their decline. Excessive sea-surface temperatures (SSTs) can cause coral bleaching, resulting in coral death and decreases in coral cover. An SST threshold of 1 °C above the climatological maximum is widely used to predict coral bleaching. In this study, we refined thermal indices predicting coral bleaching at high spatial resolution (1 km) by statistically optimizing thermal thresholds, as well as considering other environmental influences on bleaching such as ultraviolet (UV) radiation, water turbidity, and cooling effects. We used a coral bleaching dataset derived from the web-based monitoring system Sango Map Project, at scales appropriate for the local and regional conservation of Japanese coral reefs. We recorded coral bleaching events in the years 2004-2016 in Japan. We revealed the influence of multiple factors on the ability to predict coral bleaching, including the selection of thermal indices, statistical optimization of thermal thresholds, quantification of multiple environmental influences, and the use of multiple modeling methods (generalized linear models and random forests). After optimization, differences in predictive ability among thermal indices were negligible. Thermal index, UV radiation, water turbidity, and cooling effects were important predictors of the occurrence of coral bleaching. Predictions based on the best model revealed that coral reefs in Japan have experienced recent and widespread bleaching. A practical method to reduce bleaching frequency by screening UV radiation was also demonstrated in this paper.
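
    Statistically optimising the bleaching threshold, as described above, amounts to choosing the SST-anomaly cut-off that best separates observed bleaching from non-bleaching records. A minimal grid-search version of that idea is sketched below using balanced accuracy as the skill score; the published work used generalized linear models and random forests with additional predictors (UV, turbidity, cooling), which are not reproduced here.

        import numpy as np

        def optimise_threshold(sst_anomaly, bleached, candidates=None):
            """Pick the SST-anomaly threshold that maximises balanced accuracy (sketch).

            sst_anomaly : degrees C above the climatological maximum, one per record
            bleached    : True if bleaching was observed at that record, else False
            """
            sst_anomaly = np.asarray(sst_anomaly, float)
            bleached = np.asarray(bleached, bool)
            if candidates is None:
                candidates = np.arange(0.0, 3.01, 0.05)

            best_t, best_score = None, -np.inf
            for t in candidates:
                predicted = sst_anomaly >= t
                tpr = (predicted & bleached).sum() / max(bleached.sum(), 1)
                tnr = (~predicted & ~bleached).sum() / max((~bleached).sum(), 1)
                score = 0.5 * (tpr + tnr)        # balanced accuracy
                if score > best_score:
                    best_t, best_score = t, score
            return best_t, best_score

        # The widely used fixed 1 degree C threshold corresponds to candidates=[1.0].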

  12. Modeling the residual effects and threshold saturation of training: a case study of Olympic swimmers

    Science.gov (United States)

    Hellard, Philippe; Avalos, Marta; Millet, Grégoire; Lacoste, Lucien; Barale, Frédéric; Chatard, Jean-Claude

    2005-01-01

    The aim of this study was to model the residual effects of training on the swimming performance and to compare a model including threshold saturation (MM) to the Banister model (BM). Seven Olympic swimmers were studied over a period of 4 ± 2 years. For three training loads (low-intensity wLIT, high-intensity wHIT and strength training wST), three residual training effects were determined: short-term (STE) during the taper phase, i.e. three weeks before the performance (weeks 0, −1, −2), intermediate-term (ITE) during the intensity phase (weeks −3, −4 and −5) and long-term (LTE) during the volume phase (weeks −6, −7, −8). ITE and LTE were positive for wHIT and wLIT, respectively (P < 0.05). wLIT during taper was related to performances by a parabolic relationship (P < 0.05). Different quality measures indicated that MM compares favorably with BM. Identifying individual training thresholds may help individualizing the distribution of training loads. PMID:15705048
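
    The comparison above is between the classical Banister impulse-response model and a variant with threshold saturation. For orientation, a plain Banister model, with performance expressed as a fitness/fatigue response to past training loads, can be written as below; the weekly time base, time constants and gains are illustrative values, not those fitted to the Olympic swimmers.

        import numpy as np

        def banister_performance(loads, p0=100.0, k1=1.0, k2=2.0, tau1=6.0, tau2=2.0):
            """Banister fitness-fatigue model (sketch).

            loads : training load per week (arbitrary units)
            p(t)  = p0 + k1 * fitness(t) - k2 * fatigue(t), where fitness and fatigue
            are exponentially weighted sums of past loads with time constants tau1, tau2
            (here in weeks).
            """
            loads = np.asarray(loads, float)
            n = len(loads)
            perf = np.zeros(n)
            for t in range(n):
                past = np.arange(t)                       # weeks strictly before t
                decay1 = np.exp(-(t - past) / tau1)
                decay2 = np.exp(-(t - past) / tau2)
                perf[t] = (p0 + k1 * np.dot(loads[past], decay1)
                              - k2 * np.dot(loads[past], decay2))
            return perf

        # A threshold-saturation variant would replace each raw load w by a saturating
        # function, e.g. min(w, w_sat), before the exponential weighting above.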

  13. Reaction thresholds in doubly special relativity

    International Nuclear Information System (INIS)

    Heyman, Daniel; Major, Seth; Hinteleitner, Franz

    2004-01-01

    Two theories of special relativity with an additional invariant scale, 'doubly special relativity', are tested with calculations of particle process kinematics. Using the Judes-Visser modified conservation laws, thresholds are studied in both theories. In contrast with some linear approximations, which allow for particle processes forbidden in special relativity, both the Amelino-Camelia and Magueijo-Smolin frameworks allow no additional processes. To first order, the Amelino-Camelia framework thresholds are lowered and the Magueijo-Smolin framework thresholds may be raised or lowered

  14. STRENGTH OF NANOMODIFIED HIGH-STRENGTH LIGHTWEIGHT CONCRETES

    Directory of Open Access Journals (Sweden)

    NOZEMTСEV Alexandr Sergeevich

    2013-02-01

    Full Text Available The paper presents the results of research aimed at the development of nanomodified high-strength lightweight concrete for construction. The developed concretes are of low average density and high ultimate compressive strength. It is shown that to produce this type of concrete one needs to use hollow glass and aluminosilicate microspheres. To increase the durability of adhesion between the cement stone and the fine filler, the authors propose a complex nanodimensional modifier based on iron hydroxide sol and silica sol as a surface nanomodifier for the hollow microspheres. It is hypothesized that the proposed modifier has a complex effect on the activity of cement hydration and, at the same time, increases the bond strength between the filler and the cement-mineral matrix. Compositions for energy-efficient nanomodified high-strength lightweight concrete with a density of 1300…1500 kg/m³ and a compressive strength of 40…65 MPa have been developed. Approaches to the design of high-strength lightweight concrete with a density of less than 2000 kg/m³ are formulated. It is noted that the proposed concretes possess a dense homogeneous structure and moderate mobility; thus, they allow processing by vibration during production. The economic and practical implications of realizing high-strength lightweight concrete in industrial production have been justified.

  15. Proton threshold states in 26Al and their role in astrophysics

    International Nuclear Information System (INIS)

    Wijekumar, V.

    1985-01-01

    The energy levels of 26Al between Ex = 6.3 and 6.5 MeV, corresponding to proton threshold energies in the 25Mg + p reaction from Ep = 0 to 200 keV, have been investigated using the reactions 27Al(3He,α)26Al and 24Mg(3He,p)26Al. Despite early work reporting a doublet at Ex = 6346 keV and Ex = 6362 keV, most subsequent work reported a single state with conflicting spin and parity assignments. By measuring the spectroscopic factors of these states from the 25Mg(3He,d)26Al reaction, resonance strengths, ωγ, of the proton threshold states in 26Al corresponding to the 25Mg(p,γ)26Al reaction have been deduced. The limits on the branching ratios from these states to the ground state of 26Al have been obtained by using the reaction 27Al(3He,αγ)26Al. Finally, by combining the results of the above experiments, stellar reaction rates of the 25Mg(p,γ)26Al reaction have been calculated. The results conclude that the production rate of 26Al in stellar environments from the reaction 25Mg(p,γ)26Al is substantially higher than what was calculated from other work

  16. Geoseq: a tool for dissecting deep-sequencing datasets

    Directory of Open Access Journals (Sweden)

    Homann Robert

    2010-10-01

    Full Text Available Abstract Background Datasets generated on deep-sequencing platforms have been deposited in various public repositories such as the Gene Expression Omnibus (GEO), the Sequence Read Archive (SRA) hosted by the NCBI, or the DNA Data Bank of Japan (ddbj). Despite being rich data sources, they have not been used much due to the difficulty in locating and analyzing datasets of interest. Results Geoseq http://geoseq.mssm.edu provides a new method of analyzing short reads from deep sequencing experiments. Instead of mapping the reads to reference genomes or sequences, Geoseq maps a reference sequence against the sequencing data. It is web-based, and holds pre-computed data from public libraries. The analysis reduces the input sequence to tiles and measures the coverage of each tile in a sequence library through the use of suffix arrays. The user can upload custom target sequences or use gene/miRNA names for the search and get back results as plots and spreadsheet files. Geoseq organizes the public sequencing data using a controlled vocabulary, allowing identification of relevant libraries by organism, tissue and type of experiment. Conclusions Analysis of small sets of sequences against deep-sequencing datasets, as well as identification of public datasets of interest, is simplified by Geoseq. We applied Geoseq to (a) identify differential isoform expression in mRNA-seq datasets, (b) identify miRNAs (microRNAs) in libraries and identify their mature and star sequences, and (c) identify potentially mis-annotated miRNAs. The ease of using Geoseq for these analyses suggests its utility and uniqueness as an analysis tool.
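
    The core Geoseq operation described above, reducing a reference sequence to tiles and measuring how often each tile occurs in a read library, can be imitated without suffix arrays for small data. The sketch below uses a plain dictionary of read k-mers instead of the suffix-array index Geoseq actually uses, so it illustrates the idea rather than the implementation.

        from collections import Counter

        def tile_coverage(reference, reads, tile=20):
            """Count occurrences of each reference tile across a read library (sketch)."""
            # Index every k-mer of every read (Geoseq uses suffix arrays instead).
            kmer_counts = Counter()
            for read in reads:
                for i in range(len(read) - tile + 1):
                    kmer_counts[read[i:i + tile]] += 1

            coverage = []
            for i in range(len(reference) - tile + 1):
                coverage.append(kmer_counts[reference[i:i + tile]])
            return coverage   # one count per tile start position

        # Example (hypothetical inputs):
        # cov = tile_coverage(mirna_sequence, sequencing_reads, tile=18)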

  17. Microstructure vs. Near-threshold Fatigue Crack Growth Behavior of an Heat-treated Ductile Iron

    Directory of Open Access Journals (Sweden)

    Radomila KONEČNÁ

    2012-03-01

    Full Text Available Perferritic isothermal ductile iron (IDI®) is an intermediate grade between the low-strength grades of austempered ductile iron (ADI) and pearlitic ductile iron (DI), recently developed by Zanardi Fonderie, Italy. IDI is produced by heat-treating an unalloyed nodular cast iron. The specific matrix microstructure is called “Perferritic” and consists predominantly of ferrite and pearlite. Compared to the pearlitic grades of nodular ductile iron, IDI combines similar strength with higher toughness as a result of the isothermal heat treatment. In this contribution the fatigue crack growth resistance and the near-threshold behavior of IDI are investigated and correlated to mechanical properties and microstructural features. The threshold stress intensity range was determined using the load shedding technique as per ASTM Standard E-647, using CT specimens extracted from a cast block. Tensile specimens were extracted from the broken CT halves and used to determine the static mechanical properties. A metallographic investigation was carried out to correlate structural features and mechanical properties. DOI: http://dx.doi.org/10.5755/j01.ms.18.1.1336

  18. Thresholds in chemical respiratory sensitisation.

    Science.gov (United States)

    Cochrane, Stella A; Arts, Josje H E; Ehnes, Colin; Hindle, Stuart; Hollnagel, Heli M; Poole, Alan; Suto, Hidenori; Kimber, Ian

    2015-07-03

    There is a continuing interest in determining whether it is possible to identify thresholds for chemical allergy. Here allergic sensitisation of the respiratory tract by chemicals is considered in this context. This is an important occupational health problem, being associated with rhinitis and asthma, and in addition provides toxicologists and risk assessors with a number of challenges. In common with all forms of allergic disease chemical respiratory allergy develops in two phases. In the first (induction) phase exposure to a chemical allergen (by an appropriate route of exposure) causes immunological priming and sensitisation of the respiratory tract. The second (elicitation) phase is triggered if a sensitised subject is exposed subsequently to the same chemical allergen via inhalation. A secondary immune response will be provoked in the respiratory tract resulting in inflammation and the signs and symptoms of a respiratory hypersensitivity reaction. In this article attention has focused on the identification of threshold values during the acquisition of sensitisation. Current mechanistic understanding of allergy is such that it can be assumed that the development of sensitisation (and also the elicitation of an allergic reaction) is a threshold phenomenon; there will be levels of exposure below which sensitisation will not be acquired. That is, all immune responses, including allergic sensitisation, have threshold requirement for the availability of antigen/allergen, below which a response will fail to develop. The issue addressed here is whether there are methods available or clinical/epidemiological data that permit the identification of such thresholds. This document reviews briefly relevant human studies of occupational asthma, and experimental models that have been developed (or are being developed) for the identification and characterisation of chemical respiratory allergens. The main conclusion drawn is that although there is evidence that the

  19. Depotentiation from potentiated synaptic strength in a tristable system of coupled phosphatase and kinase

    Directory of Open Access Journals (Sweden)

    Mengjiao Chen

    2016-10-01

    Full Text Available Long-term potentiation (LTP) of synaptic strength is strongly implicated in learning and memory. On the other hand, depotentiation, the reversal of synaptic strength from the potentiated LTP state to the pre-LTP level, is required for the extinction of obsolete memories. A generic tristable system, which couples the phosphatase and kinase switches, exclusively explains how moderate and high elevations of intracellular calcium concentration trigger long-term depression (LTD) and LTP, respectively. The present study, introducing calcium influx and calcium release from the internal store into the tristable system, further shows that a significant elevation of cytoplasmic calcium concentration switches the activation of both kinase and phosphatase to their basal states, thereby depotentiating the synaptic strength. A phase-plane analysis of the combined model was employed to explain the previously reported depotentiation in experiments and to predict a threshold-like effect with calcium concentration. The results not only reveal a mechanism of NMDAR- and mGluR-dependent depotentiation, but also suggest further experiments on the role of the internal calcium store in the induction of depotentiation and the extinction of established memories.

  20. Compositional threshold for Nuclear Waste Glass Durability

    International Nuclear Information System (INIS)

    Kruger, Albert A.; Farooqi, Rahmatullah; Hrma, Pavel R.

    2013-01-01

    Within the composition space of glasses, a distinct threshold appears to exist that separates 'good' glasses, i.e., those which are sufficiently durable, from 'bad' glasses of a low durability. The objective of our research is to clarify the origin of this threshold by exploring the relationship between glass composition, glass structure and chemical durability around the threshold region

  1. Optimizing Systems of Threshold Detection Sensors

    National Research Council Canada - National Science Library

    Banschbach, David C

    2008-01-01

    .... Below the threshold all signals are ignored. We develop a mathematical model for setting individual sensor thresholds to obtain optimal probability of detecting a significant event, given a limit on the total number of false positives allowed...
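
    Without access to the full report, the optimisation it describes can be illustrated in a simplified form: set per-sensor thresholds so that the probability of detecting a significant event is high while the expected total number of false positives stays under a budget. The sketch below assumes unit-variance Gaussian noise, a known per-sensor signal shift, and an equal allocation of the false-positive budget across sensors; all of these are assumptions of this illustration only.

        import numpy as np
        from scipy.stats import norm

        def detection_probability(thresholds, signal_shift):
            """P(at least one sensor exceeds its threshold when the event occurs)."""
            miss = norm.cdf(np.asarray(thresholds) - np.asarray(signal_shift))
            return 1.0 - np.prod(miss)

        def equal_budget_thresholds(signal_shift, samples_per_sensor, fp_budget):
            """Give each sensor an equal slice of the false-positive budget (sketch).

            signal_shift       : mean signal level at each sensor when the event occurs
            samples_per_sensor : number of noise-only samples each sensor evaluates
            fp_budget          : allowed expected number of false positives in total
            """
            signal_shift = np.asarray(signal_shift, float)
            samples = np.asarray(samples_per_sensor, float)
            per_sensor_fp = fp_budget / len(signal_shift)
            thresholds = norm.ppf(1.0 - per_sensor_fp / samples)   # unit-variance noise
            return thresholds, detection_probability(thresholds, signal_shift)

        # Example: three sensors of unequal sensitivity, one expected false alarm allowed
        # thr, pd = equal_budget_thresholds([2.0, 1.5, 3.0], [1000, 1000, 1000], 1.0)
        # A full optimisation would reallocate the budget toward the more sensitive sensors.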

  2. Harvard Aging Brain Study: Dataset and accessibility.

    Science.gov (United States)

    Dagley, Alexander; LaPoint, Molly; Huijbers, Willem; Hedden, Trey; McLaren, Donald G; Chatwal, Jasmeer P; Papp, Kathryn V; Amariglio, Rebecca E; Blacker, Deborah; Rentz, Dorene M; Johnson, Keith A; Sperling, Reisa A; Schultz, Aaron P

    2017-01-01

    The Harvard Aging Brain Study is sharing its data with the global research community. The longitudinal dataset consists of a 284-subject cohort with the following modalities acquired: demographics, clinical assessment, comprehensive neuropsychological testing, clinical biomarkers, and neuroimaging. To promote more extensive analyses, imaging data was designed to be compatible with other publicly available datasets. A cloud-based system enables access to interested researchers with blinded data available contingent upon completion of a data usage agreement and administrative approval. Data collection is ongoing and currently in its fifth year. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Sensitivity of a numerical wave model on wind re-analysis datasets

    Science.gov (United States)

    Lavidas, George; Venugopal, Vengatesan; Friedrich, Daniel

    2017-03-01

    Wind is the dominant process for wave generation. Detailed evaluation of metocean conditions strengthens our understanding of issues concerning potential offshore applications. However, the scarcity of buoys and the high cost of monitoring systems pose a barrier to properly defining offshore conditions. Through the use of numerical wave models, metocean conditions can be hindcasted and forecasted, providing reliable characterisations. This study reports the sensitivity of a numerical wave model for the Scottish region to its wind inputs. Two re-analysis wind datasets with different spatio-temporal characteristics are used, the ERA-Interim Re-Analysis and the CFSR-NCEP Re-Analysis dataset. Different wind products alter results, affecting the accuracy obtained. The scope of this study is to assess the different available wind databases and provide information concerning the most appropriate wind dataset for the specific region, based on temporal, spatial and geographic terms, for wave modelling and offshore applications. Both wind input datasets delivered results from the numerical wave model with good correlation. Wave results from the 1-h dataset have higher peaks and lower biases, at the expense of a higher scatter index. On the other hand, the 6-h dataset has lower scatter but higher biases. The study shows how the wind dataset affects numerical wave modelling performance, and that depending on location and study needs, different wind inputs should be considered.

  4. Identifying Threshold Concepts for Information Literacy: A Delphi Study

    Directory of Open Access Journals (Sweden)

    Lori Townsend

    2016-06-01

    Full Text Available This study used the Delphi method to engage expert practitioners on the topic of threshold concepts for information literacy. A panel of experts considered two questions. First, is the threshold concept approach useful for information literacy instruction? The panel unanimously agreed that the threshold concept approach holds potential for information literacy instruction. Second, what are the threshold concepts for information literacy instruction? The panel proposed and discussed over fifty potential threshold concepts, finally settling on six information literacy threshold concepts.

  5. Multiuser switched diversity scheduling systems with per-user threshold

    KAUST Repository

    Nam, Haewoon

    2010-05-01

    A multiuser switched diversity scheduling scheme with per-user feedback threshold is proposed and analyzed in this paper. The conventional multiuser switched diversity scheduling scheme uses a single feedback threshold for every user, where the threshold is a function of the average signal-to-noise ratios (SNRs) of the users as well as the number of users involved in the scheduling process. The proposed scheme, however, constructs a sequence of feedback thresholds instead of a single feedback threshold such that each user compares its channel quality with the corresponding feedback threshold in the sequence. Numerical and simulation results show that thanks to the flexibility of threshold selection, where a potentially different threshold can be used for each user, the proposed scheme provides a higher system capacity than that for the conventional scheme. © 2006 IEEE.
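
    A minimal sketch of the scheduling rule described above, under assumed Rayleigh-fading average SNRs and an assumed per-user threshold sequence: users are probed in turn and the first one whose instantaneous SNR meets its own threshold is served; if none qualifies, the strongest user is served.

        import random

        def schedule(avg_snrs, thresholds):
            snrs = [random.expovariate(1.0 / g) for g in avg_snrs]  # exponential SNRs (Rayleigh fading)
            for user, (snr, th) in enumerate(zip(snrs, thresholds)):
                if snr >= th:
                    return user, snr                                # first acceptable user wins
            best = max(range(len(snrs)), key=lambda u: snrs[u])     # fallback: strongest user
            return best, snrs[best]

        user, snr = schedule(avg_snrs=[10.0, 8.0, 12.0], thresholds=[9.0, 7.5, 11.0])
        print(f"scheduled user {user} with instantaneous SNR {snr:.2f}")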

  6. Scaling of compression strength in disordered solids: metallic foams

    Directory of Open Access Journals (Sweden)

    J. Kováčik

    2016-03-01

    Full Text Available The scaling of compression strength with porosity for aluminium foams was investigated. Al 99.96, AlMg1Si0.6 and AlSi11Mg0.6 foams of various porosities and sample sizes, with and without surface skin, were tested in compression. It was observed that the compression strength of aluminium foams scales near the percolation threshold with Tf ≈ 1.9 - 2.0, almost independently of the matrix alloy, sample size and presence of surface skin. The difference between the obtained values of Tf and the theoretical estimate of Tf = 2.64 ± 0.3 by Arbabi and Sahimi, and the Ashby estimate of 1.5, was explained using an analogy with the Daoud and Coniglio approach to the scaling of the free energy of the sol-gel transition. This leads to the finding that there are two different universality classes for the critical exponent Tf: when stretching forces dominate, Tf ≈ 2.1 appears to hold, whereas when bending forces prevail, Tf ≈ 2.64 seems to be valid. Another possibility is a relation in which Tf varies only according to the universality class of the modulus of elasticity of the foam.
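
    The scaling discussed above has the power-law form sigma = C(Pc - P)^Tf near the percolation threshold. The snippet below shows how such an exponent could be estimated by a log-log fit; the porosities, strengths and critical porosity are synthetic placeholders, not the paper's measurements.

        import numpy as np

        Pc = 0.94                                         # assumed critical porosity
        P = np.array([0.80, 0.84, 0.87, 0.90, 0.92])      # synthetic porosities
        sigma = np.array([12.0, 7.5, 4.8, 2.6, 1.1])      # synthetic strengths [MPa]

        Tf, log_C = np.polyfit(np.log(Pc - P), np.log(sigma), 1)
        print(f"fitted exponent Tf = {Tf:.2f}, prefactor C = {np.exp(log_C):.1f} MPa")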

  7. Strength distribution of γ-transitions deexciting superdeformed rotational bands

    International Nuclear Information System (INIS)

    Lopez-Martens, A.P.; Doesing, T.; Khoo, T.L.; Korichi, A.; Hannachi, F.; Calderin, I.J.; Lauritsen, T.; Ahmad, I.; Carpenter, M.P.; Fischer, S.M.; Hackman, G.; Janssens, R.V.F.; Nisius, D.; Reiter, P.; Amro, H.; Moore, E.F.

    1999-01-01

    The strength distribution of the γ rays in the decay-out from superdeformed (SD) states is investigated by applying the maximum likelihood method, with special emphasis on the influence of the lower threshold given by experimental conditions. Clear graphical solutions are found, and a careful estimation of the dispersion in the values of the number of degrees of freedom and of the average strength of the most likely χ² distribution is carried out. For the 194Hg nucleus, 41 primary transitions from the decay-out of SD states are identified above 2600 keV. It is concluded that they represent the strongest 10% of the transitions selected stochastically from a Porter-Thomas distribution. This would support the scenario of a statistical decay of SD states via coupling to a compound state at normal deformation. However, the occurrence of several very strong direct one-step transitions as previously observed in 194Hg has a very small probability of the order of 10⁻⁴. This may indicate special selection rules governing the decay. However, based on the absence of strong primary transitions from SD states in adjacent nuclei, the situation in 194Hg is viewed as a very lucky incidence.

  8. Querying Large Biological Network Datasets

    Science.gov (United States)

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in an increasing amount of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  9. BanglaLekha-Isolated: A multi-purpose comprehensive dataset of Handwritten Bangla Isolated characters

    Directory of Open Access Journals (Sweden)

    Mithun Biswas

    2017-06-01

    Full Text Available BanglaLekha-Isolated, a Bangla handwritten isolated character dataset, is presented in this article. This dataset contains 84 different characters comprising 50 Bangla basic characters, 10 Bangla numerals and 24 selected compound characters. 2000 handwriting samples for each of the 84 characters were collected, digitized and pre-processed. After discarding mistakes and scribbles, 166,105 handwritten character images were included in the final dataset. The dataset also includes labels indicating the age and the gender of the subjects from whom the samples were collected. This dataset could be used not only for optical handwriting recognition research but also to explore the influence of gender and age on handwriting. The dataset is publicly available at https://data.mendeley.com/datasets/hf6sf8zrkc/2.

  10. A dataset of human decision-making in teamwork management

    Science.gov (United States)

    Yu, Han; Shen, Zhiqi; Miao, Chunyan; Leung, Cyril; Chen, Yiqiang; Fauvel, Simon; Lin, Jun; Cui, Lizhen; Pan, Zhengxiang; Yang, Qiang

    2017-01-01

    Today, most endeavours require teamwork by people with diverse skills and characteristics. In managing teamwork, decisions are often made under uncertainty and resource constraints. The strategies and the effectiveness of the strategies different people adopt to manage teamwork under different situations have not yet been fully explored, partially due to a lack of detailed large-scale data. In this paper, we describe a multi-faceted large-scale dataset to bridge this gap. It is derived from a game simulating complex project management processes. It presents the participants with different conditions in terms of team members' capabilities and task characteristics for them to exhibit their decision-making strategies. The dataset contains detailed data reflecting the decision situations, decision strategies, decision outcomes, and the emotional responses of 1,144 participants from diverse backgrounds. To our knowledge, this is the first dataset simultaneously covering these four facets of decision-making. With repeated measurements, the dataset may help establish baseline variability of decision-making in teamwork management, leading to more realistic decision theoretic models and more effective decision support approaches.

  11. EVALUATION OF LAND USE/LAND COVER DATASETS FOR URBAN WATERSHED MODELING

    International Nuclear Information System (INIS)

    S.J. BURIAN; M.J. BROWN; T.N. MCPHERSON

    2001-01-01

    Land use/land cover (LULC) data are a vital component for nonpoint source pollution modeling. Most watershed hydrology and pollutant loading models use, in some capacity, LULC information to generate runoff and pollutant loading estimates. Simple equation methods predict runoff and pollutant loads using runoff coefficients or pollutant export coefficients that are often correlated to LULC type. Complex models use input variables and parameters to represent watershed characteristics and pollutant buildup and washoff rates as a function of LULC type. Whether using simple or complex models an accurate LULC dataset with an appropriate spatial resolution and level of detail is paramount for reliable predictions. The study presented in this paper compared and evaluated several LULC dataset sources for application in urban environmental modeling. The commonly used USGS LULC datasets have coarser spatial resolution and lower levels of classification than other LULC datasets. In addition, the USGS datasets do not accurately represent the land use in areas that have undergone significant land use change during the past two decades. We performed a watershed modeling analysis of three urban catchments in Los Angeles, California, USA to investigate the relative difference in average annual runoff volumes and total suspended solids (TSS) loads when using the USGS LULC dataset versus using a more detailed and current LULC dataset. When the two LULC datasets were aggregated to the same land use categories, the relative differences in predicted average annual runoff volumes and TSS loads from the three catchments were 8 to 14% and 13 to 40%, respectively. The relative differences did not have a predictable relationship with catchment size

  12. Sharing Video Datasets in Design Research

    DEFF Research Database (Denmark)

    Christensen, Bo; Abildgaard, Sille Julie Jøhnk

    2017-01-01

    This paper examines how design researchers, design practitioners and design education can benefit from sharing a dataset. We present the Design Thinking Research Symposium 11 (DTRS11) as an exemplary project that implied sharing video data of design processes and design activity in natural settings...... with a large group of fellow academics from the international community of Design Thinking Research, for the purpose of facilitating research collaboration and communication within the field of Design and Design Thinking. This approach emphasizes the social and collaborative aspects of design research, where...... a multitude of appropriate perspectives and methods may be utilized in analyzing and discussing the singular dataset. The shared data is, from this perspective, understood as a design object in itself, which facilitates new ways of working, collaborating, studying, learning and educating within the expanding...

  13. Interpolation of diffusion weighted imaging datasets

    DEFF Research Database (Denmark)

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W

    2014-01-01

    anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal......Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer...... interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. As for validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical...

  14. Pain thresholds, supra-threshold pain and lidocaine sensitivity in patients with erythromelalgia, including the I848T mutation in NaV1.7.

    Science.gov (United States)

    Helås, T; Sagafos, D; Kleggetveit, I P; Quiding, H; Jönsson, B; Segerdahl, M; Zhang, Z; Salter, H; Schmelz, M; Jørum, E

    2017-09-01

    Nociceptive thresholds and supra-threshold pain ratings as well as their reduction upon local injection with lidocaine were compared between healthy subjects and patients with erythromelalgia (EM). Lidocaine (0.25, 0.50, 1.0 or 10 mg/mL) or placebo (saline) was injected intradermally in non-painful areas of the lower arm, in a randomized, double-blind manner, to test the effect on dynamic and static mechanical sensitivity, mechanical pain sensitivity, thermal thresholds and supra-threshold heat pain sensitivity. Heat pain thresholds and pain ratings to supra-threshold heat stimulation did not differ between EM-patients (n = 27) and controls (n = 25), neither did the dose-response curves for lidocaine. Only the subgroup of EM-patients with mutations in sodium channel subunits NaV1.7, 1.8 or 1.9 (n = 8) had increased lidocaine sensitivity for supra-threshold heat stimuli, contrasting lower sensitivity to strong mechanical stimuli. This pattern was particularly clear in the two patients carrying the NaV1.7 I848T mutation, in whom lidocaine's hyperalgesic effect on mechanical pain sensitivity contrasted more effective heat analgesia. Heat pain thresholds are not sensitized in EM patients, even in those with gain-of-function mutations in NaV1.7. Differential lidocaine sensitivity was overt only for noxious stimuli in the supra-threshold range suggesting that sensitized supra-threshold encoding is important for the clinical pain phenotype in EM in addition to lower activation threshold. Intracutaneous lidocaine dose-dependently blocked nociceptive sensations, but we did not identify EM patients with particularly high lidocaine sensitivity that could have provided valuable therapeutic guidance. Acute pain thresholds and supra-threshold heat pain in controls and patients with erythromelalgia do not differ and have the same lidocaine sensitivity. Acute heat pain thresholds even in EM patients with the NaV1.7 I848T mutation are normal and only nociceptor

  15. Development of a SPARK Training Dataset

    International Nuclear Information System (INIS)

    Sayre, Amanda M.; Olson, Jarrod R.

    2015-01-01

    In its first five years, the National Nuclear Security Administration's (NNSA) Next Generation Safeguards Initiative (NGSI) sponsored more than 400 undergraduate, graduate, and post-doctoral students in internships and research positions (Wyse 2012). In the past seven years, the NGSI program has produced, and continues to produce, a large body of scientific, technical, and policy work in targeted core safeguards capabilities and human capital development activities. The NGSI program carries out activities not only across multiple disciplines, but also across all U.S. Department of Energy (DOE)/NNSA locations in the United States. However, products are not readily shared among disciplines and across locations, nor are they archived in a comprehensive library. Rather, knowledge of NGSI-produced literature is localized to the researchers, clients, and internal laboratory/facility publication systems such as the Electronic Records and Information Capture Architecture (ERICA) at the Pacific Northwest National Laboratory (PNNL). There is also no incorporated way of analyzing existing NGSI literature to determine whether the larger NGSI program is achieving its core safeguards capabilities and activities. A complete library of NGSI literature could prove beneficial to a cohesive, sustainable, and more economical NGSI program. The Safeguards Platform for Automated Retrieval of Knowledge (SPARK) has been developed to be a knowledge storage, retrieval, and analysis capability to capture safeguards knowledge to exist beyond the lifespan of NGSI. During the development process, it was necessary to build a SPARK training dataset (a corpus of documents) for initial entry into the system and for demonstration purposes. We manipulated these data to gain new information about the breadth of NGSI publications and evaluated the science-policy interface at PNNL as a practical demonstration of SPARK's intended analysis capability. The analysis demonstration sought to answer

  16. ClimateNet: A Machine Learning dataset for Climate Science Research

    Science.gov (United States)

    Prabhat, M.; Biard, J.; Ganguly, S.; Ames, S.; Kashinath, K.; Kim, S. K.; Kahou, S.; Maharaj, T.; Beckham, C.; O'Brien, T. A.; Wehner, M. F.; Williams, D. N.; Kunkel, K.; Collins, W. D.

    2017-12-01

    Deep Learning techniques have revolutionized commercial applications in Computer vision, speech recognition and control systems. The key for all of these developments was the creation of a curated, labeled dataset ImageNet, for enabling multiple research groups around the world to develop methods, benchmark performance and compete with each other. The success of Deep Learning can be largely attributed to the broad availability of this dataset. Our empirical investigations have revealed that Deep Learning is similarly poised to benefit the task of pattern detection in climate science. Unfortunately, labeled datasets, a key pre-requisite for training, are hard to find. Individual research groups are typically interested in specialized weather patterns, making it hard to unify, and share datasets across groups and institutions. In this work, we are proposing ClimateNet: a labeled dataset that provides labeled instances of extreme weather patterns, as well as associated raw fields in model and observational output. We develop a schema in NetCDF to enumerate weather pattern classes/types, store bounding boxes, and pixel-masks. We are also working on a TensorFlow implementation to natively import such NetCDF datasets, and are providing a reference convolutional architecture for binary classification tasks. Our hope is that researchers in Climate Science, as well as ML/DL, will be able to use (and extend) ClimateNet to make rapid progress in the application of Deep Learning for Climate Science research.
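
    To make the proposed labelling schema concrete, the fragment below writes a tiny NetCDF file with a class id, a bounding box and a pixel mask. The variable and dimension names are invented for illustration and are not ClimateNet's actual schema.

        import numpy as np
        from netCDF4 import Dataset

        with Dataset("labels_example.nc", "w") as nc:
            nc.createDimension("event", None)
            nc.createDimension("corner", 4)               # ymin, xmin, ymax, xmax
            nc.createDimension("lat", 180)
            nc.createDimension("lon", 360)

            cls = nc.createVariable("event_class", "i4", ("event",))
            box = nc.createVariable("bounding_box", "f4", ("event", "corner"))
            mask = nc.createVariable("pixel_mask", "i1", ("lat", "lon"))

            cls[0] = 1                                    # e.g. 1 = tropical cyclone (made-up code)
            box[0, :] = [10.0, 120.0, 25.0, 150.0]        # made-up lat/lon box
            mask[:, :] = np.zeros((180, 360), dtype="int8")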

  17. When do price thresholds matter in retail categories?

    OpenAIRE

    Pauwels, Koen; Srinivasan, Shuba; Franses, Philip Hans

    2007-01-01

    Marketing literature has long recognized that brand price elasticity need not be monotonic and symmetric, but has yet to provide generalizable market-level insights on threshold-based price elasticity, asymmetric thresholds, and the sign and magnitude of elasticity transitions. This paper introduces smooth transition regression models to study threshold-based price elasticity of the top 4 brands across 20 fast-moving consumer good categories. Threshold-based price elasticity is fo...

  18. Resampling Methods Improve the Predictive Power of Modeling in Class-Imbalanced Datasets

    Directory of Open Access Journals (Sweden)

    Paul H. Lee

    2014-09-01

    Full Text Available In the medical field, many outcome variables are dichotomized, and the two possible values of a dichotomized variable are referred to as classes. A dichotomized dataset is class-imbalanced if it consists mostly of one class, and performance of common classification models on this type of dataset tends to be suboptimal. To tackle such a problem, resampling methods, including oversampling and undersampling, can be used. This paper aims at illustrating the effect of resampling methods using the National Health and Nutrition Examination Survey (NHANES) wave 2009–2010 dataset. A total of 4677 participants aged ≥20 without self-reported diabetes and with valid blood test results were analyzed. The Classification and Regression Tree (CART) procedure was used to build a classification model on undiagnosed diabetes. A participant demonstrated evidence of diabetes according to WHO diabetes criteria. Exposure variables included demographics and socio-economic status. CART models were fitted using a randomly selected 70% of the data (training dataset), and area under the receiver operating characteristic curve (AUC) was computed using the remaining 30% of the sample for evaluation (testing dataset). CART models were fitted using the training dataset, the oversampled training dataset, the weighted training dataset, and the undersampled training dataset. In addition, resampling case-to-control ratios of 1:1, 1:2, and 1:4 were examined. Resampling methods on the performance of other extensions of CART (random forests and generalized boosted trees) were also examined. CARTs fitted on the oversampled (AUC = 0.70) and undersampled training data (AUC = 0.74) yielded a better classification power than that on the training data (AUC = 0.65). Resampling could also improve the classification power of random forests and generalized boosted trees. To conclude, applying resampling methods in a class-imbalanced dataset improved the classification power of CART, random forests
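
    The fragment below reproduces the gist of the oversampling experiment on a synthetic imbalanced dataset rather than NHANES: minority-class rows in the training split are duplicated to a 1:1 ratio, a CART-style tree is fitted, and test AUC is compared against the unresampled baseline.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import roc_auc_score

        X, y = make_classification(n_samples=4000, weights=[0.95, 0.05], random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

        def fit_auc(X_train, y_train):
            clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
            return roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])

        # Random oversampling of the minority class to a 1:1 case-to-control ratio.
        rng = np.random.default_rng(0)
        minority = np.where(ytr == 1)[0]
        extra = rng.choice(minority, size=(ytr == 0).sum() - minority.size, replace=True)
        Xos, yos = np.vstack([Xtr, Xtr[extra]]), np.concatenate([ytr, ytr[extra]])

        print("AUC original   :", round(fit_auc(Xtr, ytr), 3))
        print("AUC oversampled:", round(fit_auc(Xos, yos), 3))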

  19. BASE MAP DATASET, INYO COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  20. BASE MAP DATASET, JACKSON COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  1. BASE MAP DATASET, KINGFISHER COUNTY, OKLAHOMA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  2. Cocaine Promotes Coincidence Detection and Lowers Induction Threshold during Hebbian Associative Synaptic Potentiation in Prefrontal Cortex.

    Science.gov (United States)

    Ruan, Hongyu; Yao, Wei-Dong

    2017-01-25

    Addictive drugs usurp neural plasticity mechanisms that normally serve reward-related learning and memory, primarily by evoking changes in glutamatergic synaptic strength in the mesocorticolimbic dopamine circuitry. Here, we show that repeated cocaine exposure in vivo does not alter synaptic strength in the mouse prefrontal cortex during an early period of withdrawal, but instead modifies a Hebbian quantitative synaptic learning rule by broadening the temporal window and lowering the induction threshold for spike-timing-dependent LTP (t-LTP). After repeated, but not single, daily cocaine injections, t-LTP in layer V pyramidal neurons is induced at +30 ms, a normally ineffective timing interval for t-LTP induction in saline-exposed mice. This cocaine-induced, extended-timing t-LTP lasts for ∼1 week after terminating cocaine and is accompanied by an increased susceptibility to potentiation by fewer pre-post spike pairs, indicating a reduced t-LTP induction threshold. Basal synaptic strength and the maximal attainable t-LTP magnitude remain unchanged after cocaine exposure. We further show that the cocaine facilitation of t-LTP induction is caused by sensitized D1-cAMP/protein kinase A dopamine signaling in pyramidal neurons, which then pathologically recruits voltage-gated L-type Ca2+ channels that synergize with GluN2A-containing NMDA receptors to drive t-LTP at extended timing. Our results illustrate a mechanism by which cocaine, acting on a key neuromodulation pathway, modifies the coincidence detection window during Hebbian plasticity to facilitate associative synaptic potentiation in prefrontal excitatory circuits. By modifying rules that govern activity-dependent synaptic plasticity, addictive drugs can derail the experience-driven neural circuit remodeling process important for executive control of reward and addiction. It is believed that addictive drugs often render an addict's brain reward system hypersensitive, leaving the individual more susceptible to

  3. High-order above-threshold dissociation of molecules

    Science.gov (United States)

    Lu, Peifen; Wang, Junping; Li, Hui; Lin, Kang; Gong, Xiaochun; Song, Qiying; Ji, Qinying; Zhang, Wenbin; Ma, Junyang; Li, Hanxiao; Zeng, Heping; He, Feng; Wu, Jian

    2018-03-01

    Electrons bound to atoms or molecules can simultaneously absorb multiple photons via above-threshold ionization, featuring discrete peaks in the photoelectron spectrum on account of the quantized nature of the light energy. Analogously, the above-threshold dissociation of molecules has been proposed to address the multiple-photon energy deposition in the nuclei of molecules. In this case, nuclear energy spectra consisting of photon-energy spaced peaks exceeding the binding energy of the molecular bond are predicted. Although the observation of such phenomena is difficult, this scenario is nevertheless logical and is based on the fundamental laws. Here, we report conclusive experimental observation of high-order above-threshold dissociation of H2 in strong laser fields, where the tunneling-ionized electron transfers the absorbed multiphoton energy, which is above the ionization threshold, to the nuclei via the field-driven inelastic rescattering. Our results provide unambiguous evidence that the electron and nuclei of a molecule as a whole absorb multiple photons, and thus above-threshold ionization and above-threshold dissociation must appear simultaneously, which is a cornerstone of present-day strong-field molecular physics.

  4. Thresholds in Xeric Hydrology and Biogeochemistry

    Science.gov (United States)

    Meixner, T.; Brooks, P. D.; Simpson, S. C.; Soto, C. D.; Yuan, F.; Turner, D.; Richter, H.

    2011-12-01

    Due to water limitation, thresholds in hydrologic and biogeochemical processes are common in arid and semi-arid systems. Some of these thresholds, such as those focused on rainfall-runoff relationships, have been well studied. However, to gain a full picture of the role that thresholds play in driving the hydrology and biogeochemistry of xeric systems, a full view of the entire array of processes at work is needed. Here, a walk through the landscape of xeric systems is conducted, illustrating the powerful role of hydrologic thresholds in xeric system biogeochemistry. To understand xeric hydro-biogeochemistry, two key ideas need to be kept in focus. First, it is important to start from a framework of reaction and transport. Second, an understanding of the temporal and spatial components of thresholds that have a large impact on hydrologic and biogeochemical fluxes needs to be offered. In the uplands themselves, episodic rewetting and drying of soils permits accelerated biogeochemical processing but also more gradual drainage of water through the subsurface than expected in simple conceptions of biogeochemical processes. Hydrologic thresholds (water content above hygroscopic) result in a stop-start nutrient spiral of material across the landscape, since runoff connecting uplands to xeric perennial riparian zones is episodic and often only transports materials a short distance (100s of m). This episodic movement results in important and counter-intuitive nutrient inputs to riparian zones but also significant processing and uptake of nutrients. The floods that transport these biogeochemicals also result in significant input to riparian groundwater and may be key to sustaining these critical ecosystems. Importantly, the flood-driven recharge process itself is a threshold process dependent on flood characteristics (floods greater than 100 cubic meters per second) and antecedent conditions (losing to near neutral gradients). Floods also appear to influence where arid and semi

  5. Thresholds and interactive effects of soil moisture on the temperature response of soil respiration

    DEFF Research Database (Denmark)

    Lellei-Kovács, Eszter; Kovács-Láng, Edit; Botta-Dukát, Zoltán

    2011-01-01

    efflux is soil temperature, while soil moisture has less, although significant effect on soil respiration. Clear thresholds for moisture effects on temperature sensitivity were identified at 0.6, 4.0 and 7.0vol% by almost each model, which relate well to other known limits for biological activity......Ecosystem carbon exchange is poorly understood in low-productivity, semiarid habitats. Here we studied the controls of soil temperature and moisture on soil respiration in climate change field experiment in a sandy forest-steppe. Soil CO2 efflux was measured monthly from April to November in 2003......–2008 on plots receiving either rain exclusion or nocturnal warming, or serving as ambient control. Based on this dataset, we developed and compared empirical models of temperature and moisture effects on soil respiration. Results suggest that in this semiarid ecosystem the main controlling factor for soil CO2...

  6. Determinants of Change in the Cost-effectiveness Threshold.

    Science.gov (United States)

    Paulden, Mike; O'Mahony, James; McCabe, Christopher

    2017-02-01

    The cost-effectiveness threshold in health care systems with a constrained budget should be determined by the cost-effectiveness of displacing health care services to fund new interventions. Using comparative statics, we review some potential determinants of the threshold, including the budget for health care, the demand for existing health care interventions, the technical efficiency of existing interventions, and the development of new health technologies. We consider the anticipated direction of impact that would affect the threshold following a change in each of these determinants. Where the health care system is technically efficient, an increase in the health care budget unambiguously raises the threshold, whereas an increase in the demand for existing, non-marginal health interventions unambiguously lowers the threshold. Improvements in the technical efficiency of existing interventions may raise or lower the threshold, depending on the cause of the improvement in efficiency, whether the intervention is already funded, and, if so, whether it is marginal. New technologies may also raise or lower the threshold, depending on whether the new technology is a substitute for an existing technology and, again, whether the existing technology is marginal. Our analysis permits health economists and decision makers to assess if and in what direction the threshold may change over time. This matters, as threshold changes impact the cost-effectiveness of interventions that require decisions now but have costs and effects that fall in future periods.

  7. Low heat pain thresholds in migraineurs between attacks.

    Science.gov (United States)

    Schwedt, Todd J; Zuniga, Leslie; Chong, Catherine D

    2015-06-01

    Between attacks, migraine is associated with hypersensitivities to sensory stimuli. The objective of this study was to investigate hypersensitivity to pain in migraineurs between attacks. Cutaneous heat pain thresholds were measured in 112 migraineurs, migraine free for ≥ 48 hours, and 75 healthy controls. Pain thresholds at the head and at the arm were compared between migraineurs and controls using two-tailed t-tests. Among migraineurs, correlations between heat pain thresholds and headache frequency, allodynia symptom severity, and time interval until next headache were calculated. Migraineurs had lower pain thresholds than controls at the head (43.9 ℃ ± 3.2 ℃ vs. 45.1 ℃ ± 3.0 ℃, p = 0.015) and arm (43.2 ℃ ± 3.4 ℃ vs. 44.8 ℃ ± 3.3 ℃). There were no significant correlations between pain thresholds and headache frequency or allodynia symptom severity. For the 41 migraineurs for whom time to next headache was known, there were positive correlations between time to next headache and pain thresholds at the head (r = 0.352, p = 0.024) and arm (r = 0.312, p = 0.047). This study provides evidence that migraineurs have low heat pain thresholds between migraine attacks. Mechanisms underlying these lower pain thresholds could also predispose migraineurs to their next migraine attack, a hypothesis supported by finding positive correlations between pain thresholds and time to next migraine attack. © International Headache Society 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  8. Some considerations regarding the creep crack growth threshold

    International Nuclear Information System (INIS)

    Thouless, M.D.; Evans, A.G.

    1984-01-01

    The preceding analysis reveals that the existence of a threshold determined by the sintering stress does not influence the post threshold crack velocity. Considerations of the sintering stress can thus be conveniently excluded from analysis of the post threshold crack velocity. The presence of a crack growth threshold has been predicted, based on the existence of cavity nucleation controlled crack growth. A preliminary analysis of cavity nucleation rates within the damage zone reveals that this threshold is relatively abrupt, in accord with experimental observations. Consequently, at stress intensities below K_th growth becomes nucleation limited and crack blunting occurs in preference to crack growth.

  9. Image segmentation evaluation for very-large datasets

    Science.gov (United States)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes are achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  10. The Strength Compass

    DEFF Research Database (Denmark)

    Ledertoug, Mette Marie

    In the Ph.D. project 'Strengths-based Learning - Children's character strengths as a means to their learning potential' 750 Danish children have assessed 'The Strength Compass' in order to identify their strengths and to create awareness of strengths. This was followed by a strengths......-based intervention program in order to explore the strengths. Finally different methods to apply the strength in everyday life at school were applied. The paper presentation will show the results for strengths display for children aged 6-16 in different categories: Different age groups: Are the same strengths...... present in both small children and youths? Gender: Do the results show differences between the two genders? Danish as a mother-tongue language: Do the results show any differences in the strengths display when considering different language and cultural backgrounds? Children with Special Needs: Do

  11. When Do Price Thresholds Matter in Retail Categories?

    OpenAIRE

    Koen Pauwels; Shuba Srinivasan; Philip Hans Franses

    2007-01-01

    Marketing literature has long recognized that brand price elasticity need not be monotonic and symmetric, but has yet to provide generalizable market-level insights on threshold-based price elasticity, asymmetric thresholds, and the sign and magnitude of elasticity transitions. This paper introduces smooth transition regression models to study threshold-based price elasticity of the top 4 brands across 20 fast-moving consumer good categories. Threshold-based price elasticity is found for 76% ...

  12. A New Dataset Size Reduction Approach for PCA-Based Classification in OCR Application

    Directory of Open Access Journals (Sweden)

    Mohammad Amin Shayegan

    2014-01-01

    Full Text Available A major problem of pattern recognition systems is due to the large volume of training datasets including duplicate and similar training samples. In order to overcome this problem, some dataset size reduction and also dimensionality reduction techniques have been introduced. The algorithms presently used for dataset size reduction usually remove samples near to the centers of classes or support vector samples between different classes. However, the samples near to a class center include valuable information about the class characteristics and the support vector is important for evaluating system efficiency. This paper reports on the use of Modified Frequency Diagram technique for dataset size reduction. In this new proposed technique, a training dataset is rearranged and then sieved. The sieved training dataset along with automatic feature extraction/selection operation using Principal Component Analysis is used in an OCR application. The experimental results obtained when using the proposed system on one of the biggest handwritten Farsi/Arabic numeral standard OCR datasets, Hoda, show about 97% accuracy in the recognition rate. The recognition speed increased by 2.28 times, while the accuracy decreased only by 0.7%, when a sieved version of the dataset, which is only as half as the size of the initial training dataset, was used.
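
    The PCA step of such a pipeline is easy to sketch. The example below uses scikit-learn's small built-in digits set as a stand-in for the Hoda dataset and a k-nearest-neighbour classifier instead of the paper's full system; it only illustrates projecting the images onto principal components before classification.

        from sklearn.datasets import load_digits
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline

        X, y = load_digits(return_X_y=True)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

        # project onto 20 principal components, then classify with 3-NN
        model = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=3))
        model.fit(Xtr, ytr)
        print("test accuracy: %.3f" % model.score(Xte, yte))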

  13. Time-efficient multidimensional threshold tracking method

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Kowalewski, Borys; Dau, Torsten

    2015-01-01

    Traditionally, adaptive methods have been used to reduce the time it takes to estimate psychoacoustic thresholds. However, even with adaptive methods, there are many cases where the testing time is too long to be clinically feasible, particularly when estimating thresholds as a function of anothe...

  14. The CMS dataset bookkeeping service

    Science.gov (United States)

    Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.

    2008-07-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, Command Line, and a Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  15. The CMS dataset bookkeeping service

    Energy Technology Data Exchange (ETDEWEB)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V [Fermilab, Batavia, Illinois 60510 (United States); Dolgert, A; Jones, C; Kuznetsov, V; Riley, D [Cornell University, Ithaca, New York 14850 (United States)

    2008-07-15

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, Command Line, and a Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  16. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Dolgert, A; Jones, C; Kuznetsov, V; Riley, D

    2008-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, Command Line, and a Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems

  17. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, Anzar; Dolgert, Andrew; Guo, Yuyi; Jones, Chris; Kosyakov, Sergey; Kuznetsov, Valentin; Lueking, Lee; Riley, Dan; Sekhri, Vijay

    2007-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, Command Line, and a Discovery web page interfaces. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems

  18. A light-powered sub-threshold microprocessor

    Energy Technology Data Exchange (ETDEWEB)

    Liu Ming; Chen Hong; Zhang Chun; Li Changmeng; Wang Zhihua, E-mail: lium02@mails.tsinghua.edu.cn [Institute of Microelectronics, Tsinghua University, Beijing 100084 (China)

    2010-11-15

    This paper presents an 8-bit sub-threshold microprocessor that can be powered by an integrated photosensitive diode. With a custom-designed sub-threshold standard cell library and a 1 kbit sub-threshold SRAM design, a leakage power of 58 nW, a dynamic power of 385 nW at 165 kHz, an EDP of 13 pJ/inst and an operating voltage of 350 mV are achieved. Under illumination of about 150 klux, the microprocessor can run at up to 500 kHz. The microprocessor can be used for wireless-sensor-network nodes.

  19. Threshold concepts in finance: student perspectives

    Science.gov (United States)

    Hoadley, Susan; Kyng, Tim; Tickle, Leonie; Wood, Leigh N.

    2015-10-01

    Finance threshold concepts are the essential conceptual knowledge that underpin well-developed financial capabilities and are central to the mastery of finance. In this paper we investigate threshold concepts in finance from the point of view of students, by establishing the extent to which students are aware of threshold concepts identified by finance academics. In addition, we investigate the potential of a framework of different types of knowledge to differentiate the delivery of the finance curriculum and the role of modelling in finance. Our purpose is to identify ways to improve curriculum design and delivery, leading to better student outcomes. Whilst we find that there is significant overlap between what students identify as important in finance and the threshold concepts identified by academics, much of this overlap is expressed by indirect reference to the concepts. Further, whilst different types of knowledge are apparent in the student data, there is evidence that students do not necessarily distinguish conceptual from other types of knowledge. As well as investigating the finance curriculum, the research demonstrates the use of threshold concepts to compare and contrast student and academic perceptions of a discipline and, as such, is of interest to researchers in education and other disciplines.

  20. A cross-country Exchange Market Pressure (EMP dataset

    Directory of Open Access Journals (Sweden)

    Mohit Desai

    2017-06-01

    Full Text Available The data presented in this article are related to the research article titled - “An exchange market pressure measure for cross country analysis” (Patnaik et al. [1]). In this article, we present the dataset for Exchange Market Pressure values (EMP) for 139 countries along with their conversion factors, ρ (rho). Exchange Market Pressure, expressed in percentage change in exchange rate, measures the change in exchange rate that would have taken place had the central bank not intervened. The conversion factor ρ can be interpreted as the change in exchange rate associated with $1 billion of intervention. Estimates of conversion factor ρ allow us to calculate a monthly time series of EMP for 139 countries. Additionally, the dataset contains the 68% confidence interval (high and low values) for the point estimates of ρ’s. Using the standard errors of estimates of ρ’s, we obtain one sigma intervals around mean estimates of EMP values. These values are also reported in the dataset.

  1. A cross-country Exchange Market Pressure (EMP) dataset.

    Science.gov (United States)

    Desai, Mohit; Patnaik, Ila; Felman, Joshua; Shah, Ajay

    2017-06-01

    The data presented in this article are related to the research article titled - "An exchange market pressure measure for cross country analysis" (Patnaik et al. [1]). In this article, we present the dataset for Exchange Market Pressure values (EMP) for 139 countries along with their conversion factors, ρ (rho). Exchange Market Pressure, expressed in percentage change in exchange rate, measures the change in exchange rate that would have taken place had the central bank not intervened. The conversion factor ρ can be interpreted as the change in exchange rate associated with $1 billion of intervention. Estimates of conversion factor ρ allow us to calculate a monthly time series of EMP for 139 countries. Additionally, the dataset contains the 68% confidence interval (high and low values) for the point estimates of ρ's. Using the standard errors of estimates of ρ's, we obtain one sigma intervals around mean estimates of EMP values. These values are also reported in the dataset.
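
    Based on the description above (and on the cited Patnaik et al. article), EMP combines the observed percentage change in the exchange rate with the change attributed to intervention through the conversion factor ρ. The sketch below shows that arithmetic with made-up numbers; the exact sign conventions of the published dataset may differ.

        def emp(pct_change_exchange_rate, intervention_bn_usd, rho):
            """Exchange market pressure, in per cent (assumed convention)."""
            return pct_change_exchange_rate + rho * intervention_bn_usd

        # hypothetical month: the currency moved 1.2% while the central bank
        # intervened with $2bn, with rho = 0.8% per $1bn of intervention
        print(emp(1.2, 2.0, 0.8))   # -> 2.8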

  2. The Acute Effect of Cryotherapy on Muscle Strength and Shoulder Proprioception.

    Science.gov (United States)

    Torres, Rui; Silva, Filipa; Pedrosa, Vera; Ferreira, João; Lopes, Alexandre

    2017-11-01

    Cryotherapy, a common intervention used by clinicians, offers several benefits in managing acute injuries. However, cooling muscle tissue can interfere with muscular properties and the sensory-motor system. The aim of this study was to analyze the influence of cryotherapy with a crushed-ice pack on shoulder proprioception concerning joint position sense, force sense, the threshold for detecting passive movement, and maximal force production. A randomized, double-blind controlled trial. 48 healthy women aged 22.6 ± 0.4 y with a mean body mass index of 22.8 ± 0.37 kg/m² and a percentage of body fat of 15.4 ± 1.5%. In the experimental group, a crushed-ice pack was applied to the shoulder for 15 min, whereas participants in the control group applied a sandbag at skin temperature, also for 15 min. An isokinetic dynamometer was used to assess maximal voluntary contraction, force sense, joint position sense, and the threshold for detecting passive movement. Paired sample t tests revealed that maximal voluntary isometric contraction decreased significantly after cryotherapy (P ≤ .001), with a reduction of approximately 10% in both muscular groups assessed. Shoulder position sense (P < .001) and the threshold for detecting passive movement (P = .01 and P = .01 for lateral and medial shoulder rotator muscles, respectively) also suffered significant impairment. Nevertheless, no significant differences emerged in force sense at 20% and 50% of maximal force reproduction (P = .41 and P = .10 for lateral rotator muscles at 20% and 50%, respectively; and P = .20 and P = .09 for medial rotator muscles at 20% and 50%, respectively). Applying a crushed-ice pack to the shoulder for 15 min negatively affected muscle strength and impaired shoulder proprioception by decreasing joint position sense and the threshold for detecting passive movement.

  3. Bedding material affects mechanical thresholds, heat thresholds and texture preference

    Science.gov (United States)

    Moehring, Francie; O’Hara, Crystal L.; Stucky, Cheryl L.

    2015-01-01

    It has long been known that the bedding type animals are housed on can affect breeding behavior and cage environment. Yet little is known about its effects on evoked behavior responses or non-reflexive behaviors. C57BL/6 mice were housed for two weeks on one of five bedding types: Aspen Sani Chips® (standard bedding for our institute), ALPHA-Dri®, Cellu-Dri™, Pure-o’Cel™ or TEK-Fresh. Mice housed on Aspen exhibited the lowest (most sensitive) mechanical thresholds while those on TEK-Fresh exhibited 3-fold higher thresholds. While bedding type had no effect on responses to punctate or dynamic light touch stimuli, TEK-Fresh housed animals exhibited greater responsiveness in a noxious needle assay, than those housed on the other bedding types. Heat sensitivity was also affected by bedding as animals housed on Aspen exhibited the shortest (most sensitive) latencies to withdrawal whereas those housed on TEK-Fresh had the longest (least sensitive) latencies to response. Slight differences between bedding types were also seen in a moderate cold temperature preference assay. A modified tactile conditioned place preference chamber assay revealed that animals preferred TEK-Fresh to Aspen bedding. Bedding type had no effect in a non-reflexive wheel running assay. In both acute (two day) and chronic (5 week) inflammation induced by injection of Complete Freund’s Adjuvant in the hindpaw, mechanical thresholds were reduced in all groups regardless of bedding type, but TEK-Fresh and Pure-o’Cel™ groups exhibited a greater dynamic range between controls and inflamed cohorts than Aspen housed mice. PMID:26456764

  4. 40 CFR 68.115 - Threshold determination.

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) CHEMICAL ACCIDENT PREVENTION PROVISIONS Regulated Substances for Accidental Release Prevention... process exceeds the threshold. (b) For the purposes of determining whether more than a threshold quantity... portion of the process is less than 10 millimeters of mercury (mm Hg), the amount of the substance in the...

  5. Approach to DOE threshold guidance limits

    International Nuclear Information System (INIS)

    Shuman, R.D.; Wickham, L.E.

    1984-01-01

    The need for less restrictive criteria governing disposal of extremely low-level radioactive waste has long been recognized. The Low-Level Waste Management Program has been directed by the Department of Energy (DOE) to aid in the development of a threshold guidance limit for DOE low-level waste facilities. Project objectives are concerned with the definition of a threshold limit dose and pathway analysis of radionuclide transport within selected exposure scenarios at DOE sites. Results of the pathway analysis will be used to determine waste radionuclide concentration guidelines that meet the defined threshold limit dose. Methods of measurement and verification of concentration limits round out the project's goals. Work on defining a threshold limit dose is nearing completion. Pathway analysis of sanitary landfill operations at the Savannah River Plant and the Idaho National Engineering Laboratory is in progress using the DOSTOMAN computer code. Concentration limit calculations and determination of implementation procedures shall follow completion of the pathways work. 4 references

  6. Towards a unifying basis of auditory thresholds: binaural summation.

    Science.gov (United States)

    Heil, Peter

    2014-04-01

    Absolute auditory threshold decreases with increasing sound duration, a phenomenon explainable by the assumptions that the sound evokes neural events whose probabilities of occurrence are proportional to the sound's amplitude raised to an exponent of about 3 and that a constant number of events are required for threshold (Heil and Neubauer, Proc Natl Acad Sci USA 100:6151-6156, 2003). Based on this probabilistic model and on the assumption of perfect binaural summation, an equation is derived here that provides an explicit expression of the binaural threshold as a function of the two monaural thresholds, irrespective of whether they are equal or unequal, and of the exponent in the model. For exponents >0, the predicted binaural advantage is largest when the two monaural thresholds are equal and decreases towards zero as the monaural threshold difference increases. This equation is tested and the exponent derived by comparing binaural thresholds with those predicted on the basis of the two monaural thresholds for different values of the exponent. The thresholds, measured in a large sample of human subjects with equal and unequal monaural thresholds and for stimuli with different temporal envelopes, are compatible only with an exponent close to 3. An exponent of 3 predicts a binaural advantage of 2 dB when the two ears are equally sensitive. Thus, listening with two (equally sensitive) ears rather than one has the same effect on absolute threshold as doubling duration. The data suggest that perfect binaural summation occurs at threshold and that peripheral neural signals are governed by an exponent close to 3. They might also shed new light on mechanisms underlying binaural summation of loudness.
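
    A numerical reconstruction of this prediction, under the abstract's assumptions (event probabilities proportional to pressure raised to an exponent k, perfect summation across ears, and a fixed event count at threshold), is sketched below. The published equation may be written differently; with k = 3 and equal monaural thresholds the sketch reproduces the stated 2 dB binaural advantage.

        import math

        def binaural_threshold_db(t_left_db, t_right_db, k=3.0):
            # events per ear scale as pressure**k; summing both ears at threshold
            # gives p_b = (p_L**-k + p_R**-k)**(-1/k), expressed here in dB
            terms = 10 ** (-k * t_left_db / 20) + 10 ** (-k * t_right_db / 20)
            return -20.0 / k * math.log10(terms)

        print(binaural_threshold_db(10.0, 10.0))   # ~8.0 dB: a ~2 dB advantage for equal ears
        print(binaural_threshold_db(10.0, 20.0))   # advantage shrinks for unequal ears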

  7. Gamow-Teller strength distributions in 76Ge, 76,82Se, and 90,92Zr by the deformed proton-neutron QRPA

    Science.gov (United States)

    Ha, Eunja; Cheoun, Myung-Ki

    2015-02-01

    The deformed proton-neutron quasiparticle random phase approximation (QRPA) has been developed and applied to evaluate Gamow-Teller (GT) transition strength distributions, including high-lying excited states. Data on high-lying excited states have recently become available beyond the one- or two-nucleon threshold from charge exchange reactions using hundreds of MeV projectiles. Our calculations started with single-particle states calculated using a deformed, axially symmetric Woods-Saxon potential. The neutron-neutron and proton-proton pairing correlations are explicitly taken into account within the deformed Bardeen-Cooper-Schrieffer theory. Additionally, the ground state correlations and two-particle and two-hole mixing states were included in the deformed QRPA. In this work, we used a realistic two-body interaction, given by the Brueckner G-matrix based on the CD Bonn potential, to reduce the ambiguity in the nucleon-nucleon interactions inside nuclei. We applied our formalism to the GT transition strengths for 76Ge, 76,82Se, and 90,92Zr, and compared the results with the available experimental data. The GT strength distributions were sensitive to the deformation parameter as well as its sign, i.e., oblate or prolate. The Ikeda sum rule, which is usually thought to be satisfied under the one-body current approximation, regardless of nucleon models, was used to test our numerical calculations and shown to be satisfied without introducing the quenching factor, if high-lying GT excited states were properly taken into account. Most of the GT strength distributions of the nuclei considered in this work have high-lying GT excited states beyond the one-nucleon threshold, which are shown to be consistent with the available experimental data.

  8. The NASA Subsonic Jet Particle Image Velocimetry (PIV) Dataset

    Science.gov (United States)

    Bridges, James; Wernet, Mark P.

    2011-01-01

    Many tasks in fluids engineering require prediction of turbulence in jet flows. The present report documents the single-point statistics of velocity (mean and variance) of cold and hot jet flows. The jet velocities ranged from 0.5 to 1.4 times the ambient speed of sound, and temperatures ranged from unheated to a static temperature ratio of 2.7. The report also assesses the accuracy of the data, e.g., establishes uncertainties for the data. This paper covers the following five tasks: (1) Document the acquisition and processing procedures used to create the particle image velocimetry (PIV) datasets. (2) Compare PIV data with hotwire and laser Doppler velocimetry (LDV) data published in the open literature. (3) Compare different datasets acquired at the same flow conditions in multiple tests to establish uncertainties. (4) Create a consensus dataset for a range of hot jet flows, including uncertainty bands. (5) Analyze this consensus dataset for self-consistency and compare jet characteristics to those of the open literature. The final objective was fulfilled by using the potential core length and the spread rate of the half-velocity radius to collapse the mean and turbulent velocity fields over the first 20 jet diameters.

  9. Spike-threshold adaptation predicted by membrane potential dynamics in vivo.

    Directory of Open Access Journals (Sweden)

    Bertrand Fontaine

    2014-04-01

    Full Text Available Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in responses of auditory neurons recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo.
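
    A minimal sketch of this kind of adaptive-threshold model is given below: the threshold relaxes toward a linear function of the recent membrane potential with a short time constant, so slow depolarizations are filtered out while fast transients still cross threshold. The time constants, slopes and voltages are illustrative assumptions, not the fitted parameters of the study.

      import numpy as np

      def adaptive_threshold(v, dt=0.1, tau=5.0, theta0=-50.0, alpha=0.5, v_ref=-65.0):
          """Threshold tracking the membrane potential on a short timescale.

          d(theta)/dt = (theta_inf(V) - theta) / tau,
          theta_inf(V) = theta0 + alpha * (V - v_ref).
          Units are ms and mV; all parameter values are illustrative.
          """
          theta = np.empty_like(v)
          theta[0] = theta0
          for i in range(1, len(v)):
              theta_inf = theta0 + alpha * (v[i - 1] - v_ref)
              theta[i] = theta[i - 1] + dt * (theta_inf - theta[i - 1]) / tau
          return theta

      t = np.arange(0.0, 100.0, 0.1)
      v = -65.0 + 5.0 * np.sin(2 * np.pi * t / 80.0)   # slow fluctuation: filtered out
      v[400:405] += 25.0                               # brief, fast depolarization
      theta = adaptive_threshold(v)
      print("threshold crossings at t (ms):", t[v > theta])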

  10. Statistical Algorithm for the Adaptation of Detection Thresholds

    DEFF Research Database (Denmark)

    Stotsky, Alexander A.

    2008-01-01

    Many event detection mechanisms in spark ignition automotive engines are based on the comparison of the engine signals to the detection threshold values. Different signal qualities for new and aged engines necessitate the development of an adaptation algorithm for the detection thresholds ... remains constant regardless of engine age and changing detection threshold values. This, in turn, guarantees the same event detection performance for new and aged engines/sensors. Adaptation of the engine knock detection threshold is given as an example. Publication date: 2008.

  11. Perspective: Uses and misuses of thresholds in diagnostic decision making.

    Science.gov (United States)

    Warner, Jeremy L; Najarian, Robert M; Tierney, Lawrence M

    2010-03-01

    The concept of thresholds plays a vital role in decisions involving the initiation, continuation, and completion of diagnostic testing. Much research has focused on the development of explicit thresholds, in the form of practice guidelines and decision analyses. However, these tools are used infrequently; most medical decisions are made at the bedside, using implicit thresholds. Study of these thresholds can lead to a deeper understanding of clinical decision making. The authors examine some factors constituting individual clinicians' implicit thresholds. They propose a model for static thresholds using the concept of situational gravity to explain why some thresholds are high, and some low. Next, they consider the hypothetical effects of incorrect placement of thresholds (miscalibration) and changes to thresholds during diagnosis (manipulation). They demonstrate these concepts using common clinical scenarios. Through analysis of miscalibration of thresholds, the authors demonstrate some common maladaptive clinical behaviors, which are nevertheless internally consistent. They then explain how manipulation of thresholds gives rise to common cognitive heuristics including premature closure and anchoring. They also discuss the case where no threshold has been exceeded despite exhaustive collection of data, which commonly leads to application of the availability or representativeness heuristics. Awareness of implicit thresholds allows for a more effective understanding of the processes of medical decision making and, possibly, to the avoidance of detrimental heuristics and their associated medical errors. Research toward accurately defining these thresholds for individual physicians and toward determining their dynamic properties during the diagnostic process may yield valuable insights.

  12. Knowledge Mining from Clinical Datasets Using Rough Sets and Backpropagation Neural Network

    Directory of Open Access Journals (Sweden)

    Kindie Biredagn Nahato

    2015-01-01

    Full Text Available The availability of clinical datasets and knowledge mining methodologies encourages researchers to pursue research in extracting knowledge from clinical datasets. Different data mining techniques have been used for mining rules, and mathematical models have been developed to assist the clinician in decision making. The objective of this research is to build a classifier that will predict the presence or absence of a disease by learning from the minimal set of attributes that has been extracted from the clinical dataset. In this work a rough set indiscernibility relation method with a backpropagation neural network (RS-BPNN) is used. This work has two stages. The first stage is handling of missing values to obtain a smooth data set and selection of appropriate attributes from the clinical dataset by the indiscernibility relation method. The second stage is classification using a backpropagation neural network on the selected reducts of the dataset. The classifier has been tested with hepatitis, Wisconsin breast cancer, and Statlog heart disease datasets obtained from the University of California at Irvine (UCI) machine learning repository. The accuracy obtained from the proposed method is 97.3%, 98.6%, and 90.4% for hepatitis, breast cancer, and heart disease, respectively. The proposed system provides an effective classification model for clinical datasets.
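
    The rough-set reduct computation itself is not reproduced here; as a simplified stand-in, the sketch below uses a generic univariate feature-selection step followed by a backpropagation (multilayer perceptron) classifier in scikit-learn on the UCI Wisconsin breast cancer data that ships with the library. The chosen number of attributes, hidden units and other settings are placeholders, not the paper's configuration.

      from sklearn.datasets import load_breast_cancer
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      X, y = load_breast_cancer(return_X_y=True)

      # Stage 1 (stand-in for the rough-set reduct): keep a reduced attribute set.
      # Stage 2: backpropagation neural network on the selected attributes.
      pipeline = make_pipeline(
          SelectKBest(mutual_info_classif, k=10),
          StandardScaler(),
          MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
      )
      print("mean CV accuracy: %.3f" % cross_val_score(pipeline, X, y, cv=5).mean())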

  13. Is action potential threshold lowest in the axon?

    NARCIS (Netherlands)

    Kole, Maarten H. P.; Stuart, Greg J.

    2008-01-01

    Action potential threshold is thought to be lowest in the axon, but when measured using conventional techniques, we found that action potential voltage threshold of rat cortical pyramidal neurons was higher in the axon than at other neuronal locations. In contrast, both current threshold and voltage

  14. A Critical Analysis of Grain-Size and Yield-Strength Dependence of Near-Threshold Fatigue-Crack Growth in Steels.

    Science.gov (United States)

    1981-07-15

    Only fragments of the scanned report documentation page are legible in this record. They indicate that the report covers high-strength steels and a range of microstructural types (ferritic, martensitic, pearlitic, bainitic, austenitic), list L.A. Cooley and T.W. Crooker among the authors, and give the keywords: fatigue-crack growth, steels, microstructure, ferrous alloys, structure-sensitive crack growth. The abstract itself is not recoverable from the source record.

  15. Determination of muscle fatigue index for strength training in patients with Duchenne dystrophy

    Directory of Open Access Journals (Sweden)

    Adriano Rodrigues Oliveira

    Full Text Available INTRODUCTION: Muscle weakness is the most prominent impairment in Duchenne muscular dystrophy (DMD) and often involves the loss of functional ability as well as other limitations related to daily living. Thus, there is a need to maintain muscle strength in large muscle groups, such as the femoral quadriceps, which is responsible for diverse functional abilities. However, the load and duration of training for such rehabilitation has proven to be a great unknown, mainly due to the undesired appearance of muscle fatigue, which is a severe factor for the injury of muscle fibers. OBJECTIVES: The aim of the present study was to determine a fatigue index by means of surface electromyography (EMG) for the parameterization of muscle strengthening physiotherapy training. METHODS: A cross-sectional study (case series) was carried out involving four patients with DMD. Three pairs of surface electrodes were placed on the motor point of the Rectus femoris, Vastus lateralis and Vastus medialis of the dominant limb, maintaining the knee at 60º of flexion. The participants were instructed to perform the extension movement of this joint at four strength levels (100%, 80%, 60% and 40% of maximal voluntary isometric contraction). RESULTS: The slope of the linear regression line was used for the determination of the fatigue index, performed by Pearson's test on the median frequency of each strength level. CONCLUSION: Electromyographic measurement of the fatigue index for muscle training proved to be a simple, accessible assessment method, as well as an extremely valuable tool, allowing the design of a muscle strength training program with an individualized load threshold.
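
    A sketch of the analysis described, under simplifying assumptions: the median frequency of the EMG power spectrum is estimated at each contraction level, and the slope of a linear fit of median frequency against level (together with Pearson's correlation) serves as the index. The synthetic signals and sampling settings below are placeholders for real recordings.

      import numpy as np
      from scipy.signal import welch
      from scipy.stats import linregress, pearsonr

      def median_frequency(emg, fs=1000.0):
          """Median frequency of the EMG power spectrum (Welch estimate)."""
          f, pxx = welch(emg, fs=fs, nperseg=256)
          cumulative = np.cumsum(pxx)
          return f[np.searchsorted(cumulative, 0.5 * cumulative[-1])]

      rng = np.random.default_rng(0)
      levels = np.array([40.0, 60.0, 80.0, 100.0])   # % of maximal voluntary contraction
      t = np.arange(0.0, 5.0, 1.0 / 1000.0)

      # Synthetic surrogates whose spectral content shifts with contraction level
      mdf = []
      for level in levels:
          emg = np.sin(2 * np.pi * (60.0 + 0.5 * level) * t) + 0.1 * rng.normal(size=t.size)
          mdf.append(median_frequency(emg))

      slope = linregress(levels, mdf).slope          # index: slope of MDF vs. level
      r, p_value = pearsonr(levels, mdf)
      print("median frequencies (Hz):", np.round(mdf, 1))
      print("slope = %.3f Hz/%%, Pearson r = %.2f" % (slope, r))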

  16. Standard test method for determining a threshold stress intensity factor for environment-assisted cracking of metallic materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2003-01-01

    1.1 This test method covers the determination of the environment-assisted cracking threshold stress intensity factor parameters, KIEAC and KEAC, for metallic materials from constant-force testing of fatigue precracked beam or compact fracture specimens and from constant-displacement testing of fatigue precracked bolt-load compact fracture specimens. 1.2 This test method is applicable to environment-assisted cracking in aqueous or other aggressive environments. 1.3 Materials that can be tested by this test method are not limited by thickness or by strength as long as specimens are of sufficient thickness and planar size to meet the size requirements of this test method. 1.4 A range of specimen sizes with proportional planar dimensions is provided, but size may be variable and adjusted for yield strength and applied force. Specimen thickness is a variable independent of planar size. 1.5 Specimen configurations other than those contained in this test method may be used, provided that well-established stress ...

  17. Benefits and limitations of using the weather radar for the definition of rainfall thresholds for debris flows. Case study from Catalonia (Spain).

    Science.gov (United States)

    Abancó, C.; Hürlimann, M.; Sempere, D.; Berenguer, M.

    2012-04-01

    Torrential processes such as debris flows or hyperconcentrated flows are fast movements formed by a mix of water and varying amounts of unsorted solid material. They occur in steep torrents and pose a high risk to human settlements. Rainfall is the most common triggering factor for debris flows. The rainfall threshold defines the rainfall conditions that, when reached or exceeded, are likely to provoke one or more events. Many different types of empirical rainfall thresholds for landslide triggering have been defined. Direct measurements of rainfall data are normally not available from a point next to or in the surroundings of the initiation area of the landslide. For this reason, most of the thresholds published for debris flows have been established from data measured at the nearest rain gauges (often located several km from the landslide). Only in very few cases have the rainfall data used to analyse the triggering conditions of debris flows been obtained by weather (Doppler) radar. Radar devices present certain limitations in mountainous regions due to undesired rebounds, but their main advantage is that radar data can be obtained for any point of the territory. The objective of this work was to test the use of weather radar data for the definition of rainfall thresholds for debris-flow triggering. Thus, rainfall data obtained from 3 to 5 rain gauges and from radar were compared for a dataset of events that occurred in Catalonia (Spain). The goal was to determine in which cases the description of the rainfall episode (in particular the maximum intensity) had been more accurate. The analysed dataset consists of: 1) three events that occurred at the Rebaixader debris-flow monitoring station (Axial Pyrenees), including two hyperconcentrated flows and one debris flow; 2) one debris-flow event that occurred at the Port Ainé ski resort (Axial Pyrenees); 3) one debris-flow event in Montserrat (Mediterranean Coastal Range). The comparison of the hyetographs from the

  18. Strengths only or strengths and relative weaknesses? A preliminary study.

    Science.gov (United States)

    Rust, Teri; Diessner, Rhett; Reade, Lindsay

    2009-10-01

    Does working on developing character strengths and relative character weaknesses cause lower life satisfaction than working on developing character strengths only? The present study provides a preliminary answer. After 76 college students completed the Values in Action Inventory of Strengths (C. Peterson & M. E. P. Seligman, 2004), the authors randomly assigned them to work on 2 character strengths or on 1 character strength and 1 relative weakness. Combined, these groups showed significant gains on the Satisfaction With Life Scale (E. Diener, R. A. Emmons, R. J. Larsen, & S. Griffin, 1985), compared with a 32-student no-treatment group. However, there was no significant difference in gain scores between the 2-strengths group and the 1-character-strength-and-1-relative-character-weakness group. The authors discuss how focusing on relative character weaknesses (along with strengths) does not diminish-and may assist in increasing-life satisfaction.

  19. Characteristics of structural loess strength and preliminary framework for joint strength formula

    OpenAIRE

    Rong-jian Li; Jun-ding Liu; Rui Yan; Wen Zheng; Sheng-jun Shao

    2014-01-01

    The strength of structural loess consists of the shear strength and tensile strength. In this study, the stress path, the failure envelope of principal stress (Kf line), and the strength failure envelope of structurally intact loess and remolded loess were analyzed through three kinds of tests: the tensile strength test, the uniaxial compressive strength test, and the conventional triaxial shear strength test. Then, in order to describe the tensile strength and shear strength of structural lo...

  20. Spatially-explicit estimation of geographical representation in large-scale species distribution datasets.

    Science.gov (United States)

    Kalwij, Jesse M; Robertson, Mark P; Ronk, Argo; Zobel, Martin; Pärtel, Meelis

    2014-01-01

    Much ecological research relies on existing multispecies distribution datasets. Such datasets, however, can vary considerably in quality, extent, resolution or taxonomic coverage. We provide a framework for a spatially-explicit evaluation of geographical representation within large-scale species distribution datasets, using the comparison of an occurrence atlas with a range atlas dataset as a working example. Specifically, we compared occurrence maps for 3773 taxa from the widely-used Atlas Florae Europaeae (AFE) with digitised range maps for 2049 taxa of the lesser-known Atlas of North European Vascular Plants. We calculated the level of agreement at a 50-km spatial resolution using average latitudinal and longitudinal species range, and area of occupancy. Agreement in species distribution was calculated and mapped using Jaccard similarity index and a reduced major axis (RMA) regression analysis of species richness between the entire atlases (5221 taxa in total) and between co-occurring species (601 taxa). We found no difference in distribution ranges or in the area of occupancy frequency distribution, indicating that atlases were sufficiently overlapping for a valid comparison. The similarity index map showed high levels of agreement for central, western, and northern Europe. The RMA regression confirmed that geographical representation of AFE was low in areas with a sparse data recording history (e.g., Russia, Belarus and the Ukraine). For co-occurring species in south-eastern Europe, however, the Atlas of North European Vascular Plants showed remarkably higher richness estimations. Geographical representation of atlas data can be much more heterogeneous than often assumed. Level of agreement between datasets can be used to evaluate geographical representation within datasets. Merging atlases into a single dataset is worthwhile in spite of methodological differences, and helps to fill gaps in our knowledge of species distribution ranges. Species distribution
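
    A minimal sketch of the per-cell agreement measure described (the Jaccard similarity of species composition between two presence/absence atlases); the toy arrays stand in for the 50-km gridded atlas data.

      import numpy as np

      def jaccard_per_cell(atlas_a, atlas_b):
          """Jaccard similarity of species composition in each grid cell.

          Inputs are boolean arrays of shape (n_species, n_cells), True where a
          species is recorded as present in a cell.
          """
          intersection = np.logical_and(atlas_a, atlas_b).sum(axis=0)
          union = np.logical_or(atlas_a, atlas_b).sum(axis=0)
          return np.where(union > 0, intersection / np.maximum(union, 1), np.nan)

      # Toy example: 4 co-occurring species recorded in 3 grid cells by two atlases
      occurrence_atlas = np.array([[1, 1, 0],
                                   [1, 0, 0],
                                   [0, 1, 1],
                                   [1, 1, 0]], dtype=bool)
      range_atlas = np.array([[1, 1, 1],
                              [1, 0, 0],
                              [0, 0, 1],
                              [1, 1, 1]], dtype=bool)
      print(jaccard_per_cell(occurrence_atlas, range_atlas))   # ~[1.0, 0.667, 0.333]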

  1. Applying Threshold Concepts to Finance Education

    Science.gov (United States)

    Hoadley, Susan; Wood, Leigh N.; Tickle, Leonie; Kyng, Tim

    2016-01-01

    Purpose: The purpose of this paper is to investigate and identify threshold concepts that are the essential conceptual content of finance programmes. Design/Methodology/Approach: Conducted in three stages with finance academics and students, the study uses threshold concepts as both a theoretical framework and a research methodology. Findings: The…

  2. The Global Precipitation Climatology Project (GPCP) Combined Precipitation Dataset

    Science.gov (United States)

    Huffman, George J.; Adler, Robert F.; Arkin, Philip; Chang, Alfred; Ferraro, Ralph; Gruber, Arnold; Janowiak, John; McNab, Alan; Rudolf, Bruno; Schneider, Udo

    1997-01-01

    The Global Precipitation Climatology Project (GPCP) has released the GPCP Version 1 Combined Precipitation Data Set, a global, monthly precipitation dataset covering the period July 1987 through December 1995. The primary product in the dataset is a merged analysis incorporating precipitation estimates from low-orbit-satellite microwave data, geosynchronous-orbit-satellite infrared data, and rain gauge observations. The dataset also contains the individual input fields, a combination of the microwave and infrared satellite estimates, and error estimates for each field. The data are provided on 2.5 deg x 2.5 deg latitude-longitude global grids. Preliminary analyses show general agreement with prior studies of global precipitation and extend prior studies of El Nino-Southern Oscillation precipitation patterns. At the regional scale there are systematic differences with standard climatologies.

  3. A new dataset and algorithm evaluation for mood estimation in music

    OpenAIRE

    Godec, Primož

    2014-01-01

    This thesis presents a new dataset of perceived and induced emotions for 200 audio clips. The gathered dataset provides users' perceived and induced emotions for each clip, the association of color, along with demographic and personal data, such as user's emotion state and emotion ratings, genre preference, music experience, among others. With an online survey we collected more than 7000 responses for a dataset of 200 audio excerpts, thus providing about 37 user responses per clip. The foc...

  4. Minimum energy requirements for desalination of brackish groundwater in the United States with comparison to international datasets

    Science.gov (United States)

    Ahdab, Yvana D.; Thiel, Gregory P.; Böhlke, John Karl; Stanton, Jennifer S.; Lienhard, John H.

    2018-01-01

    This paper uses chemical and physical data from a large 2017 U.S. Geological Survey groundwater dataset with wells in the U.S. and three smaller international groundwater datasets with wells primarily in Australia and Spain to carry out a comprehensive investigation of brackish groundwater composition in relation to minimum desalination energy costs. First, we compute the site-specific least work required for groundwater desalination. Least work of separation represents a baseline for the specific energy consumption of desalination systems. We develop simplified equations based on the U.S. data for least work as a function of water recovery ratio and a proxy variable for composition, either total dissolved solids, specific conductance, molality or ionic strength. We show that the U.S. correlations for total dissolved solids and molality may be applied to the international datasets. We find that total molality can be used to calculate the least work of dilute solutions with very high accuracy. Then, we examine the effects of groundwater solute composition on minimum energy requirements, showing that separation requirements increase from calcium to sodium for cations and from sulfate to bicarbonate to chloride for anions, for any given TDS concentration. We study the geographic distribution of least work, total dissolved solids, and major ion concentrations across the U.S. We determine areas with both low least work and high water stress in order to highlight regions holding potential for desalination to decrease the disparity between high water demand and low water supply. Finally, we discuss the implications of the USGS results for water resource planning, by comparing least work to the specific energy consumption of brackish water reverse osmosis plants and showing the scaling propensity of major electrolytes and silica in the U.S. groundwater samples.
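
    A back-of-the-envelope version of the dependence discussed, least work as a function of total dissolved molar content and recovery ratio, can be written with the ideal dilute-solution (van't Hoff) approximation. This is a textbook simplification for illustration only, not the property-model calculation used in the paper, and the feed composition below is invented.

      import numpy as np

      R_GAS = 8.314      # J/(mol K)
      T = 298.15         # K

      def least_work_ideal(c_total_mol_m3, recovery):
          """Least work of separation per m^3 of product water, in J/m^3.

          Ideal dilute-solution (van't Hoff) approximation: integrating the osmotic
          pressure pi = c*R*T while the feed concentrates from c to c/(1 - r) gives
          w_least = (c*R*T / r) * ln(1 / (1 - r)).
          """
          r = float(recovery)
          return c_total_mol_m3 * R_GAS * T * np.log(1.0 / (1.0 - r)) / r

      # Invented brackish feed: ~2 g/L NaCl, i.e. about 68 mol of dissolved ions per m^3
      c_feed = 68.0
      for r in (0.25, 0.50, 0.75):
          print("recovery %.2f -> least work %.3f kWh/m^3"
                % (r, least_work_ideal(c_feed, r) / 3.6e6))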

  5. A Large-Scale 3D Object Recognition dataset

    DEFF Research Database (Denmark)

    Sølund, Thomas; Glent Buch, Anders; Krüger, Norbert

    2016-01-01

    geometric groups; concave, convex, cylindrical and flat 3D object models. The object models have varying amount of local geometric features to challenge existing local shape feature descriptors in terms of descriptiveness and robustness. The dataset is validated in a benchmark which evaluates the matching...... performance of 7 different state-of-the-art local shape descriptors. Further, we validate the dataset in a 3D object recognition pipeline. Our benchmark shows as expected that local shape feature descriptors without any global point relation across the surface have a poor matching performance with flat...

  6. The Wind Integration National Dataset (WIND) toolkit (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Caroline Draxl: NREL

    2014-01-01

    Regional wind integration studies require detailed wind power output data at many locations to perform simulations of how the power system will operate under high penetration scenarios. The wind datasets that serve as inputs into the study must realistically reflect the ramping characteristics, spatial and temporal correlations, and capacity factors of the simulated wind plants, as well as being time synchronized with available load profiles. As described in this presentation, the WIND Toolkit fulfills these requirements by providing a state-of-the-art national (US) wind resource, power production and forecast dataset.

  7. The asymmetry of U.S. monetary policy: Evidence from a threshold Taylor rule with time-varying threshold values

    Science.gov (United States)

    Zhu, Yanli; Chen, Haiqiang

    2017-05-01

    In this paper, we revisit the question of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. In order to capture the potential heterogeneity of the regime-shift mechanism under different economic conditions, we modify the threshold model by treating the threshold value as a latent variable following an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two periods: expansion periods and recession periods. Our findings support that U.S. monetary policy operations are asymmetric in these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate accordingly to reflect their attitude on the health of the general economy.
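
    A stripped-down sketch of a two-regime (threshold) Taylor rule is given below; for simplicity the threshold is a fixed number rather than the latent autoregressive process estimated in the paper, and the regime-specific coefficients are invented to mimic the qualitative finding (weaker inflation response and stronger output response in recessions).

      def threshold_taylor_rate(inflation, target, output_gap, unemployment,
                                u_threshold, r_star=2.0):
          """Policy rate from a two-regime Taylor rule; coefficients are illustrative."""
          coefficients = {"expansion": (1.5, 0.5),   # (inflation-gap, output-gap) response
                          "recession": (0.8, 1.2)}
          regime = "recession" if unemployment > u_threshold else "expansion"
          a_pi, a_y = coefficients[regime]
          rate = r_star + inflation + a_pi * (inflation - target) + a_y * output_gap
          return regime, rate

      print(threshold_taylor_rate(3.0, 2.0, 1.0, unemployment=4.5, u_threshold=6.0))
      print(threshold_taylor_rate(1.0, 2.0, -3.0, unemployment=8.0, u_threshold=6.0))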

  8. An integrated pan-tropical biomass map using multiple reference datasets

    NARCIS (Netherlands)

    Avitabile, V.; Herold, M.; Heuvelink, G.B.M.; Lewis, S.L.; Phillips, O.L.; Asner, G.P.; Armston, J.; Asthon, P.; Banin, L.F.; Bayol, N.; Berry, N.; Boeckx, P.; Jong, De B.; Devries, B.; Girardin, C.; Kearsley, E.; Lindsell, J.A.; Lopez-gonzalez, G.; Lucas, R.; Malhi, Y.; Morel, A.; Mitchard, E.; Nagy, L.; Qie, L.; Quinones, M.; Ryan, C.M.; Slik, F.; Sunderland, T.; Vaglio Laurin, G.; Valentini, R.; Verbeeck, H.; Wijaya, A.; Willcock, S.

    2016-01-01

    We combined two existing datasets of vegetation aboveground biomass (AGB) (Proceedings of the National Academy of Sciences of the United States of America, 108, 2011, 9899; Nature Climate Change, 2, 2012, 182) into a pan-tropical AGB map at 1-km resolution using an independent reference dataset of

  9. Threshold Model for Learning to Produce Pantun in Grade XI [Model Threshold untuk Pembelajaran Memproduksi Pantun Kelas XI]

    Directory of Open Access Journals (Sweden)

    Fitri Nura Murti

    2017-03-01

    Full Text Available Abstract: The methods used to teach pantun in schools have given students little opportunity to develop their creativity in producing pantun. This was supported by observations of eleventh graders at SMAN 2 Bondowoso, which showed that students tend to plagiarize their pantun. The general objective of this research and development study was to develop the Threshold Pantun model for learning to produce pantun in grade XI. The product is presented as a guidance book for teachers entitled “Pembelajaran Memproduksi Pantun Menggunakan Model Threshold Pantun untuk Kelas XI” (Learning to Produce Pantun Using the Threshold Pantun Model for Grade XI). The study adapted the research and development procedure of Borg and Gall. Based on the validation results, the Threshold Pantun model is appropriate for implementation in learning to produce pantun. Key Words: Threshold Pantun model, producing pantun.

  10. The H-mode power threshold in JET

    Energy Technology Data Exchange (ETDEWEB)

    Start, D F.H.; Bhatnagar, V P; Campbell, D J; Cordey, J G; Esch, H P.L. de; Gormezano, C; Hawkes, N; Horton, L; Jones, T T.C.; Lomas, P J; Lowry, C; Righi, E; Rimini, F G; Saibene, G; Sartori, R; Sips, G; Stork, D; Thomas, P; Thomsen, K; Tubbing, B J.D.; Von Hellermann, M; Ward, D J [Commission of the European Communities, Abingdon (United Kingdom). JET Joint Undertaking

    1994-07-01

    New H-mode threshold data over a range of toroidal field and density values have been obtained from the present campaign. The scaling with n_e B_t is almost identical with that of the 91/92 period for the same discharge conditions. The scaling with toroidal field alone gives somewhat higher thresholds than the older data. The 1991/2 database shows a scaling of P_th (power threshold) with n_e B_t which is approximately linear and agrees well with that observed on other tokamaks. For NBI and carbon target tiles the threshold power is a factor of two higher with the ion grad-B drift away from the target compared with the value found with the drift towards the target. The combination of ICRH and beryllium tiles appears to be beneficial for reducing P_th. The power threshold is largely insensitive to plasma current, X-point height and distance between the last closed flux surface and the limiter, at least for values greater than 2 cm. (authors). 3 refs., 6 figs.

  11. QRS Detection Based on Improved Adaptive Threshold

    Directory of Open Access Journals (Sweden)

    Xuanyu Lu

    2018-01-01

    Full Text Available Cardiovascular disease is the leading cause of death around the world. In accomplishing quick and accurate diagnosis, automatic electrocardiogram (ECG) analysis algorithms play an important role, and their first step is QRS detection. The threshold algorithm for QRS complex detection is known for its high-speed computation and minimal memory storage. In this mobile era, a threshold algorithm can easily be transported into portable, wearable, and wireless ECG systems. However, the detection rate of the threshold algorithm still calls for improvement. An improved adaptive threshold algorithm for QRS detection is reported in this paper. The main steps of this algorithm are preprocessing, peak finding, and adaptive-threshold QRS detection. The detection rate is 99.41%, the sensitivity (Se) is 99.72%, and the specificity (Sp) is 99.69% on the MIT-BIH Arrhythmia database. A comparison is also made with two other algorithms to demonstrate its superiority. The suspicious abnormal areas are flagged at the end of the algorithm, and an RR-Lorenz plot is drawn for doctors and cardiologists to use as an aid for diagnosis.
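
    The published algorithm is not reproduced here; the sketch below is a very simplified detector in the same spirit (band-pass preprocessing, peak finding with a refractory period, then an adaptive threshold updated from running signal- and noise-peak estimates, much like Pan-Tompkins-style detectors). Filter bands, window lengths and update weights are assumptions, and the test signal is synthetic.

      import numpy as np
      from scipy.signal import butter, filtfilt, find_peaks

      def detect_qrs(ecg, fs=360):
          """Toy QRS detector: band-pass -> smoothed squared derivative -> adaptive threshold."""
          b, a = butter(2, [5.0 / (fs / 2), 15.0 / (fs / 2)], btype="band")
          filtered = filtfilt(b, a, ecg)
          window = int(0.12 * fs)
          energy = np.convolve(np.diff(filtered) ** 2, np.ones(window) / window, mode="same")
          peaks, _ = find_peaks(energy, distance=int(0.2 * fs))   # ~200 ms refractory period
          spk = energy[peaks[0]] if len(peaks) else 0.0            # running signal-peak level
          npk, beats = 0.0, []                                     # running noise-peak level
          for p in peaks:
              threshold = npk + 0.25 * (spk - npk)
              if energy[p] >= threshold:
                  beats.append(p)
                  spk = 0.125 * energy[p] + 0.875 * spk
              else:
                  npk = 0.125 * energy[p] + 0.875 * npk
          return np.array(beats)

      fs = 360
      t = np.arange(0.0, 10.0, 1.0 / fs)
      ecg = 0.05 * np.random.default_rng(1).normal(size=t.size)   # noise floor
      ecg[::fs] += 1.0                                            # one spike per second
      print("detected beats:", len(detect_qrs(ecg, fs)), "(10 spikes in the test signal)")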

  12. [Effect of amount of silane coupling agent on flexural strength of dental composite resins reinforced with aluminium borate whisker].

    Science.gov (United States)

    Zhu, Ming-yi; Zhang, Xiu-yin

    2015-06-01

    To evaluate the effect of the amount of silane coupling agent on the flexural strength of dental composite resins reinforced with aluminium borate whisker (ABW). ABW was surface-treated with 0%, 1%, 2%, 3% and 4% silane coupling agent (γ-MPS), and mixed with resin matrix to synthesize 5 groups of composite resins. After heat-curing at 120 degrees centigrade for 1 h, specimens were tested in three-point flexure to measure strength according to ISO-4049. One specimen was selected randomly from each group and observed under a scanning electron microscope (SEM). The data were analyzed with the SAS 9.2 software package. The flexural strength (117.93±11.9 MPa) of the group treated with 2% silane coupling agent was the highest, and significantly different from that of the other 4 groups (α=0.01). The amount of silane coupling agent has an impact on the flexural strength of dental composite resins reinforced with whiskers; the flexural strength is reduced whenever the amount is higher or lower than this threshold. Supported by Research Fund of Science and Technology Committee of Shanghai Municipality (08DZ2271100).

  13. Comparison of global 3-D aviation emissions datasets

    Directory of Open Access Journals (Sweden)

    S. C. Olsen

    2013-01-01

    Full Text Available Aviation emissions are unique from other transportation emissions, e.g., from road transportation and shipping, in that they occur at higher altitudes as well as at the surface. Aviation emissions of carbon dioxide, soot, and water vapor have direct radiative impacts on the Earth's climate system while emissions of nitrogen oxides (NOx), sulfur oxides, carbon monoxide (CO), and hydrocarbons (HC) impact air quality and climate through their effects on ozone, methane, and clouds. The most accurate estimates of the impact of aviation on air quality and climate utilize three-dimensional chemistry-climate models and gridded four-dimensional (space and time) aviation emissions datasets. We compare five available aviation emissions datasets currently and historically used to evaluate the impact of aviation on climate and air quality: NASA-Boeing 1992, NASA-Boeing 1999, QUANTIFY 2000, Aero2k 2002, and AEDT 2006 and aviation fuel usage estimates from the International Energy Agency. Roughly 90% of all aviation emissions are in the Northern Hemisphere and nearly 60% of all fuelburn and NOx emissions occur at cruise altitudes in the Northern Hemisphere. While these datasets were created by independent methods and are thus not strictly suitable for analyzing trends they suggest that commercial aviation fuelburn and NOx emissions increased over the last two decades while HC emissions likely decreased and CO emissions did not change significantly. The bottom-up estimates compared here are consistently lower than International Energy Agency fuelburn statistics although the gap is significantly smaller in the more recent datasets. Overall the emissions distributions are quite similar for fuelburn and NOx with regional peaks over the populated land masses of North America, Europe, and East Asia. For CO and HC there are relatively larger differences. There are however some distinct differences in the altitude distribution

  14. The strength compass

    DEFF Research Database (Denmark)

    Ledertoug, Mette Marie

    Individual paper presentation: The 'Strength Compass'. The results of a PhD research project among schoolchildren (age 6-16) identifying VIA strengths concerning age, gender, mother-tongue language and possible child psychiatric diagnosis. Strengths-based interventions in schools have a theoretical ... Psychological Publishing Company. 'The Strength Compass' is a computer/iPad-based qualitative tool to identify the strengths of a child by a self-survey or a teacher's survey. It is designed as a visual analogue scale with a statement of the strength in which the child/teacher may declare the degree of agreement/disagreement. Also, the child/teacher is asked whether the actual strength is important and if he or she has the possibility to apply the strength in the school. In a PhD project 'Strengths-based Learning - Children's Character Strengths as Means to their Learning Potential' 750 Danish children ...

  15. Deterministic Approach for Estimating Critical Rainfall Threshold of Rainfall-induced Landslide in Taiwan

    Science.gov (United States)

    Chung, Ming-Chien; Tan, Chih-Hao; Chen, Mien-Min; Su, Tai-Wei

    2013-04-01

    Taiwan is an active mountain belt created by the oblique collision between the northern Luzon arc and the Asian continental margin. The inherent complexities of geological nature create numerous discontinuities through rock masses and relatively steep hillside on the island. In recent years, the increase in the frequency and intensity of extreme natural events due to global warming or climate change brought significant landslides. The causes of landslides in these slopes are attributed to a number of factors. As is well known, rainfall is one of the most significant triggering factors for landslide occurrence. In general, the rainfall infiltration results in changing the suction and the moisture of soil, raising the unit weight of soil, and reducing the shear strength of soil in the colluvium of landslide. The stability of landslide is closely related to the groundwater pressure in response to rainfall infiltration, the geological and topographical conditions, and the physical and mechanical parameters. To assess the potential susceptibility to landslide, an effective modeling of rainfall-induced landslide is essential. In this paper, a deterministic approach is adopted to estimate the critical rainfall threshold of the rainfall-induced landslide. The critical rainfall threshold is defined as the accumulated rainfall while the safety factor of the slope is equal to 1.0. First, the process of deterministic approach establishes the hydrogeological conceptual model of the slope based on a series of in-situ investigations, including geological drilling, surface geological investigation, geophysical investigation, and borehole explorations. The material strength and hydraulic properties of the model were given by the field and laboratory tests. Second, the hydraulic and mechanical parameters of the model are calibrated with the long-term monitoring data. Furthermore, a two-dimensional numerical program, GeoStudio, was employed to perform the modelling practice. Finally
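
    As a much-simplified stand-in for the deterministic approach described (the actual study uses a calibrated two-dimensional GeoStudio model), the sketch below evaluates an infinite-slope factor of safety while the pore-water pressure, used here as a crude proxy for accumulated rainfall infiltration, is increased until the critical condition FS = 1.0 is reached. All soil parameters are invented.

      import numpy as np

      def factor_of_safety(cohesion_kpa, phi_deg, slope_deg, depth_m,
                           unit_weight_kn_m3=19.0, pore_pressure_kpa=0.0):
          """Infinite-slope factor of safety.

          FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')] / [gamma*z*sin(beta)*cos(beta)]
          """
          beta, phi = np.radians(slope_deg), np.radians(phi_deg)
          normal = unit_weight_kn_m3 * depth_m * np.cos(beta) ** 2 - pore_pressure_kpa
          driving = unit_weight_kn_m3 * depth_m * np.sin(beta) * np.cos(beta)
          return (cohesion_kpa + normal * np.tan(phi)) / driving

      # Raise pore pressure until the slope reaches the critical state (FS = 1.0)
      for u in np.arange(0.0, 25.1, 5.0):
          fs = factor_of_safety(cohesion_kpa=8.0, phi_deg=30.0, slope_deg=35.0,
                                depth_m=2.0, pore_pressure_kpa=u)
          flag = "  <- critical" if fs <= 1.0 else ""
          print("u = %4.1f kPa -> FS = %.2f%s" % (u, fs, flag))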

  16. Grip strength in mice with joint inflammation: A rheumatology function test sensitive to pain and analgesia.

    Science.gov (United States)

    Montilla-García, Ángeles; Tejada, Miguel Á; Perazzoli, Gloria; Entrena, José M; Portillo-Salido, Enrique; Fernández-Segura, Eduardo; Cañizares, Francisco J; Cobos, Enrique J

    2017-10-01

    Grip strength deficit is a measure of pain-induced functional disability in rheumatic disease. We tested whether this parameter and tactile allodynia, the standard pain measure in preclinical studies, show parallels in their response to analgesics and basic mechanisms. Mice with periarticular injections of complete Freund's adjuvant (CFA) in the ankles showed periarticular immune infiltration and synovial membrane alterations, together with pronounced grip strength deficits and tactile allodynia measured with von Frey hairs. However, inflammation-induced tactile allodynia lasted longer than grip strength alterations, and therefore did not drive the functional deficits. Oral administration of the opioid drugs oxycodone (1-8 mg/kg) and tramadol (10-80 mg/kg) induced a better recovery of grip strength than acetaminophen (40-320 mg/kg) or the nonsteroidal antiinflammatory drugs ibuprofen (10-80 mg/kg) or celecoxib (40-160 mg/kg); these results are consistent with their analgesic efficacy in humans. Functional impairment was generally a more sensitive indicator of drug-induced analgesia than tactile allodynia, as drug doses that attenuated grip strength deficits showed little or no effect on von Frey thresholds. Finally, ruthenium red (a nonselective TRP antagonist) or the in vivo ablation of TRPV1-expressing neurons with resiniferatoxin abolished tactile allodynia without altering grip strength deficits, indicating that the neurobiology of tactile allodynia and grip strength deficits differ. In conclusion, grip strength deficits are due to a distinct type of pain that reflects an important aspect of the human pain experience, and therefore merits further exploration in preclinical studies to improve the translation of new analgesics from bench to bedside. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Global Human Built-up And Settlement Extent (HBASE) Dataset From Landsat

    Data.gov (United States)

    National Aeronautics and Space Administration — The Global Human Built-up And Settlement Extent (HBASE) Dataset from Landsat is a global map of HBASE derived from the Global Land Survey (GLS) Landsat dataset for...

  18. Passive Containment DataSet

    Science.gov (United States)

    This data is for Figures 6 and 7 in the journal article. The data also includes the two EPANET input files used for the analysis described in the paper, one for the looped system and one for the block system. This dataset is associated with the following publication: Grayman, W., R. Murray, and D. Savic. Redesign of Water Distribution Systems for Passive Containment of Contamination. JOURNAL OF THE AMERICAN WATER WORKS ASSOCIATION. American Water Works Association, Denver, CO, USA, 108(7): 381-391, (2016).

  19. Short-term Periodization Models: Effects on Strength and Speed-strength Performance.

    Science.gov (United States)

    Hartmann, Hagen; Wirth, Klaus; Keiner, Michael; Mickel, Christoph; Sander, Andre; Szilvas, Elena

    2015-10-01

    Dividing training objectives into consecutive phases to gain morphological adaptations (hypertrophy phase) and neural adaptations (strength and power phases) is called strength-power periodization (SPP). These phases differ in program variables (volume, intensity, and exercise choice or type) and use stepwise intensity progression and concomitant decreasing volume, converging to peak intensity (peaking phase). Undulating periodization strategies rotate these program variables in a bi-weekly, weekly, or daily fashion. The following review addresses the effects of different short-term periodization models on strength and speed-strength both with subjects of different performance levels and with competitive athletes from different sports who use a particular periodization model during off-season, pre-season, and in-season conditioning. In most periodization studies, it is obvious that the strength endurance sessions are characterized by repetition zones (12-15 repetitions) that induce muscle hypertrophy in persons with a low performance level. Strictly speaking, when examining subjects with a low training level, many periodization studies include mainly hypertrophy sessions interspersed with heavy strength/power sessions. Studies have demonstrated equal or statistically significant higher gains in maximal strength for daily undulating periodization compared with SPP in subjects with a low to moderate performance level. The relatively short intervention period and the lack of concomitant sports conditioning call into question the practical value of these findings for competitive athletes. Possibly owing to differences in mesocycle length, conditioning programs, and program variables, competitive athletes either maintained or improved strength and/or speed-strength performance by integrating daily undulating periodization and SPP during off-season, pre-season and in-season conditioning. In high-performance sports, high-repetition strength training (>15) should be

  20. Threshold enhancement of diphoton resonances

    Directory of Open Access Journals (Sweden)

    Aoife Bharucha

    2016-10-01

    Full Text Available We revisit a mechanism to enhance the decay width of (pseudo-)scalar resonances to photon pairs when the process is mediated by loops of charged fermions produced near threshold. Motivated by the recent LHC data, indicating the presence of an excess in the diphoton spectrum at approximately 750 GeV, we illustrate this threshold enhancement mechanism in the case of a 750 GeV pseudoscalar boson A with a two-photon decay mediated by a charged and uncolored fermion having a mass at the MA/2 threshold and a small decay width, <1 MeV. The implications of such a threshold enhancement are discussed in two explicit scenarios: (i) the Minimal Supersymmetric Standard Model, in which the A state is produced via the top quark mediated gluon fusion process and decays into photons predominantly through loops of charginos with masses close to MA/2, and (ii) a two-Higgs-doublet model in which A is again produced by gluon fusion but decays into photons through loops of vector-like charged heavy leptons. In both these scenarios, while the mass of the charged fermion has to be adjusted to be extremely close to half of the A resonance mass, the small total widths are naturally obtained if only suppressed three-body decay channels occur. Finally, the implications of some of these scenarios for dark matter are discussed.

  1. Bayesian and Classical Estimation of Stress-Strength Reliability for Inverse Weibull Lifetime Models

    Directory of Open Access Journals (Sweden)

    Qixuan Bi

    2017-06-01

    Full Text Available In this paper, we consider the problem of estimating stress-strength reliability for inverse Weibull lifetime models having the same shape parameters but different scale parameters. We obtain the maximum likelihood estimator and its asymptotic distribution. Since the classical estimator does not have an explicit form, we propose an approximate maximum likelihood estimator. The asymptotic confidence interval and two bootstrap intervals are obtained. Using the Gibbs sampling technique, the Bayesian estimator and the corresponding credible interval are obtained. The Metropolis-Hastings algorithm is used to generate random variates. Monte Carlo simulations are conducted to compare the proposed methods. Analysis of a real dataset is performed.
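
    For inverse Weibull stress and strength variables with a common shape parameter, the reliability R = P(X > Y) has a simple closed form under the parameterization F(x) = exp(-theta * x^(-beta)), namely R = theta_X / (theta_X + theta_Y). The sketch below checks this by Monte Carlo with invented parameter values; it illustrates the quantity being estimated, not the estimators developed in the paper.

      import numpy as np

      def sample_inverse_weibull(n, shape, theta, rng):
          """Draws from the inverse Weibull with CDF F(x) = exp(-theta * x**(-shape))."""
          u = rng.uniform(size=n)
          return (theta / (-np.log(u))) ** (1.0 / shape)

      rng = np.random.default_rng(42)
      shape, theta_x, theta_y = 2.5, 3.0, 1.5        # common shape, different scales
      strength = sample_inverse_weibull(200_000, shape, theta_x, rng)
      stress = sample_inverse_weibull(200_000, shape, theta_y, rng)

      r_monte_carlo = np.mean(strength > stress)
      r_closed_form = theta_x / (theta_x + theta_y)  # holds when the shapes are equal
      print("R (Monte Carlo) = %.4f, R (closed form) = %.4f" % (r_monte_carlo, r_closed_form))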

  2. ENTRIA workshop. Determine threshold values in radiation protection

    International Nuclear Information System (INIS)

    Diener, Lisa

    2015-01-01

    Threshold values affect our daily lives. Whether it concerns traffic or noise regulations, we all experience thresholds on a regular basis. But how are such values generated? The conference ''Determine Threshold Values in Radiation Protection'', which took place on January 27th 2015 in Braunschweig, focused on this question. The conference was undertaken in the context of the BMBF-funded interdisciplinary research project ''ENTRIA - Disposal Options for Radioactive Residues''. It aimed to stimulate a cross-disciplinary discussion. Speakers from different disciplinary backgrounds talked about topics such as procedures for setting threshold values, standards for evaluating dosages, and public participation in the standardization of threshold values. Two major theses emerged: First, setting threshold values always requires considering contexts and protection targets. Second, existing uncertainties must be communicated in and with the public. Altogether, the conference offered lots of input and issues for discussion. In addition, it raised interesting and important questions for further and ongoing work in the research project ENTRIA.

  3. Thresholds of ion turbulence in tokamaks

    International Nuclear Information System (INIS)

    Garbet, X.; Laurent, L.; Mourgues, F.; Roubin, J.P.; Samain, A.; Zou, X.L.

    1991-01-01

    The linear thresholds of ion turbulence are numerically calculated for the tokamaks JET and TORE SUPRA. It is proved that the stability domain at η_i > 0 is determined by trapped ion modes and is characterized by η_i ≥ 1 and a threshold L_Ti/R of order (0.2-0.3)/(1 + T_i/T_e). The latter value is significantly smaller than what has been previously predicted. Experimental temperature profiles in heated discharges are usually marginal with respect to this criterion. It is also shown that the eigenmodes are low-frequency, low-wavenumber ballooned modes, which may produce a very large transport once the threshold ion temperature gradient is reached

  4. Gridded 5km GHCN-Daily Temperature and Precipitation Dataset, Version 1

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Gridded 5km GHCN-Daily Temperature and Precipitation Dataset (nClimGrid) consists of four climate variables derived from the GHCN-D dataset: maximum temperature,...

  5. Effect of dissipation on dynamical fusion thresholds

    International Nuclear Information System (INIS)

    Sierk, A.J.

    1986-01-01

    The existence of dynamical thresholds to fusion in heavy nuclei (A greater than or equal to 200) due to the nature of the potential-energy surface is shown. These thresholds exist even in the absence of dissipative forces, due to the coupling between the various collective deformation degrees of freedom. Using a macroscopic model of nuclear shape dynamics, it is shown how three different suggested dissipation mechanisms increase, by varying amounts, the excitation energy over the one-dimensional barrier required to cause compound-nucleus formation. The recently introduced surface-plus-window dissipation may give a reasonable representation of experimental data on fusion thresholds, in addition to properly describing fission-fragment kinetic energies and isoscalar giant multipole widths. Scaling of threshold results to asymmetric systems is discussed. 48 refs., 10 figs

  6. ENHANCED DATA DISCOVERABILITY FOR IN SITU HYPERSPECTRAL DATASETS

    Directory of Open Access Journals (Sweden)

    B. Rasaiah

    2016-06-01

    Full Text Available Field spectroscopic metadata is a central component in the quality assurance, reliability, and discoverability of hyperspectral data and the products derived from it. Cataloguing, mining, and interoperability of these datasets rely upon the robustness of metadata protocols for field spectroscopy, and on the software architecture to support the exchange of these datasets. Currently no standard for in situ spectroscopy data or metadata protocols exists. This inhibits the effective sharing of growing volumes of in situ spectroscopy datasets and prevents them from exploiting the benefits of integration with the evolving range of data sharing platforms. A core metadataset for field spectroscopy was introduced by Rasaiah et al. (2011-2015), with extended support for specific applications. This paper presents a prototype model for an OGC and ISO compliant, platform-independent metadata discovery service aligned to the specific requirements of field spectroscopy. In this study, a proof-of-concept metadata catalogue has been described and deployed in a cloud-based architecture as a demonstration of an operationalized field spectroscopy metadata standard and web-based discovery service.

  7. Smartphone threshold audiometry in underserved primary health-care contexts.

    Science.gov (United States)

    Sandström, Josefin; Swanepoel, De Wet; Carel Myburgh, Hermanus; Laurent, Claude

    2016-01-01

    To validate a calibrated smartphone-based hearing test in a sound booth environment and in primary health-care clinics. A repeated-measure within-subject study design was employed whereby air-conduction hearing thresholds determined by smartphone-based audiometry were compared to those from conventional audiometry in a sound booth and in a primary health-care clinic environment. A total of 94 subjects (mean age 41 years ± 17.6 SD and range 18-88; 64% female) were assessed, of whom 64 were tested in the sound booth and 30 within primary health-care clinics without a booth. In the sound booth 63.4% of conventional and smartphone thresholds indicated normal hearing (≤15 dBHL). Conventional thresholds exceeding 15 dB HL corresponded to smartphone thresholds within ≤10 dB in 80.6% of cases, with an average threshold difference of -1.6 dB ± 9.9 SD. In primary health-care clinics 13.7% of conventional and smartphone thresholds indicated normal hearing (≤15 dBHL). Conventional thresholds exceeding 15 dBHL corresponded to smartphone thresholds within ≤10 dB in 92.9% of cases, with an average threshold difference of -1.0 dB ± 7.1 SD. Accurate air-conduction audiometry can be conducted in a sound booth and without a sound booth in an underserved community health-care clinic using a smartphone.
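
    The agreement summary reported above (the mean threshold difference with its standard deviation, and the share of thresholds within 10 dB) can be reproduced for any set of paired measurements along the lines of the sketch below; the paired thresholds shown are invented.

      import numpy as np

      def agreement_summary(conventional_db, smartphone_db, window_db=10.0):
          """Mean difference +/- SD and percentage of pairs within +/- window_db."""
          diff = np.asarray(smartphone_db, float) - np.asarray(conventional_db, float)
          within = 100.0 * np.mean(np.abs(diff) <= window_db)
          return diff.mean(), diff.std(ddof=1), within

      # Invented paired air-conduction thresholds (dB HL) at one frequency
      conventional = [20, 25, 35, 40, 50, 55, 65, 30]
      smartphone = [20, 30, 30, 45, 45, 55, 75, 35]
      mean_diff, sd, pct = agreement_summary(conventional, smartphone)
      print("mean difference %.1f dB (SD %.1f); %.0f%% within 10 dB" % (mean_diff, sd, pct))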

  8. Environmental Dataset Gateway (EDG) CS-W Interface

    Data.gov (United States)

    U.S. Environmental Protection Agency — Use the Environmental Dataset Gateway (EDG) to find and access EPA's environmental resources. Many options are available for easily reusing EDG content in other...

  9. Is the diagnostic threshold for bulimia nervosa clinically meaningful?

    Science.gov (United States)

    Chapa, Danielle A N; Bohrer, Brittany K; Forbush, Kelsie T

    2018-01-01

    The DSM-5 differentiates full- and sub-threshold bulimia nervosa (BN) according to average weekly frequencies of binge eating and inappropriate compensatory behaviors. This study was the first to evaluate the modified frequency criterion for BN published in the DSM-5. The purpose of this study was to test whether community-recruited adults (N=125; 83.2% women) with current full-threshold (n=77) or sub-threshold BN (n=48) differed in comorbid psychopathology and eating disorder (ED) illness duration, symptom severity, and clinical impairment. Participants completed the Clinical Impairment Assessment and participated in semi-structured clinical interviews of ED- and non-ED psychopathology. Differences between the sub- and full-threshold BN groups were assessed using MANOVA and Chi-square analyses. ED illness duration, age-of-onset, body mass index (BMI), alcohol and drug misuse, and the presence of current and lifetime mood or anxiety disorders did not differ between participants with sub- and full-threshold BN. Participants with full-threshold BN had higher levels of clinical impairment and weight concern than those with sub-threshold BN. However, minimal clinically important difference analyses suggested that statistically significant differences between participants with sub- and full-threshold BN on clinical impairment and weight concern were not clinically significant. In conclusion, sub-threshold BN did not differ from full-threshold BN in clinically meaningful ways. Future studies are needed to identify an improved frequency criterion for BN that better distinguishes individuals in ways that will more validly inform prognosis and effective treatment planning for BN. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Plant responses to precipitation in desert ecosystems: integrating functional types, pulses, thresholds, and delays.

    Science.gov (United States)

    Ogle, Kiona; Reynolds, James F

    2004-10-01

    The 'two-layer' and 'pulse-reserve' hypotheses were developed 30 years ago and continue to serve as the standard for many experiments and modeling studies that examine relationships between primary productivity and rainfall variability in aridlands. The two-layer hypothesis considers two important plant functional types (FTs) and predicts that woody and herbaceous plants are able to co-exist in savannas because they utilize water from different soil layers (or depths). The pulse-reserve model addresses the response of individual plants to precipitation and predicts that there are 'biologically important' rain events that stimulate plant growth and reproduction. These pulses of precipitation may play a key role in long-term plant function and survival (as compared to seasonal or annual rainfall totals as per the two-layer model). In this paper, we re-evaluate these paradigms in terms of their generality, strengths, and limitations. We suggest that while seasonality and resource partitioning (key to the two-layer model) and biologically important precipitation events (key to the pulse-reserve model) are critical to understanding plant responses to precipitation in aridlands, both paradigms have significant limitations. Neither account for plasticity in rooting habits of woody plants, potential delayed responses of plants to rainfall, explicit precipitation thresholds, or vagaries in plant phenology. To address these limitations, we integrate the ideas of precipitation thresholds and plant delays, resource partitioning, and plant FT strategies into a simple 'threshold-delay' model. The model contains six basic parameters that capture the nonlinear nature of plant responses to pulse precipitation. We review the literature within the context of our threshold-delay model to: (i) develop testable hypotheses about how different plant FTs respond to pulses; (ii) identify weaknesses in the current state-of-knowledge; and (iii) suggest future research directions that will
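
    A toy version of a threshold-delay pulse response is sketched below for two hypothetical functional types: a rain pulse contributes only if it exceeds a type-specific threshold, the response appears after a lag, and it decays back toward a baseline between pulses. The parameter names and values are illustrative and do not reproduce the authors' exact six-parameter formulation.

      import numpy as np

      def threshold_delay_response(rain_mm, threshold_mm, lag_days, tau_days,
                                   gain, y_low, y_max):
          """Toy threshold-delay pulse response for one plant functional type."""
          n_days = len(rain_mm)
          response = np.full(n_days, y_low, dtype=float)
          for day in range(1, n_days):
              # decay toward the baseline between effective pulses
              response[day] = y_low + (response[day - 1] - y_low) * np.exp(-1.0 / tau_days)
              trigger_day = day - lag_days
              if trigger_day >= 0 and rain_mm[trigger_day] > threshold_mm:
                  boost = gain * (rain_mm[trigger_day] - threshold_mm)
                  response[day] = min(y_max, response[day] + boost)
          return response

      rain = np.zeros(60)
      rain[[5, 20, 40]] = [4.0, 12.0, 30.0]   # small, medium and large pulses (mm)
      grass = threshold_delay_response(rain, threshold_mm=5.0, lag_days=1, tau_days=4.0,
                                       gain=0.05, y_low=0.1, y_max=1.0)
      shrub = threshold_delay_response(rain, threshold_mm=15.0, lag_days=3, tau_days=10.0,
                                       gain=0.03, y_low=0.2, y_max=0.8)
      print("grass responds to 2 of 3 pulses, peak %.2f" % grass.max())
      print("shrub responds to 1 of 3 pulses, peak %.2f" % shrub.max())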

  11. NEUTRON SPECTRUM MEASUREMENTS USING MULTIPLE THRESHOLD DETECTORS

    Energy Technology Data Exchange (ETDEWEB)

    Gerken, William W.; Duffey, Dick

    1963-11-15

    From American Nuclear Society Meeting, New York, Nov. 1963. The use of threshold detectors, which simultaneously undergo reactions with thermal neutrons and two or more fast neutron threshold reactions, was applied to measurements of the neutron spectrum in a reactor. A number of different materials were irradiated to determine the most practical ones for use as multiple threshold detectors. These results, as well as counting techniques and corrections, are presented. Some materials used include aluminum, alloys of Al-Ni, aluminum-nickel oxides, and magnesium orthophosphates. (auth)

  12. Cost-effectiveness thresholds: pros and cons.

    Science.gov (United States)

    Bertram, Melanie Y; Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-12-01

    Cost-effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost-effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost-effectiveness thresholds allow cost-effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization's Commission on Macroeconomics in Health suggested cost-effectiveness thresholds based on multiples of a country's per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this - in addition to uncertainty in the modelled cost-effectiveness ratios - can lead to the wrong decision on how to spend health-care resources. Cost-effectiveness information should be used alongside other considerations - e.g. budget impact and feasibility considerations - in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost-effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair.
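
    As a minimal numerical illustration of the GDP-based decision rule discussed above (the practice the authors caution against using in isolation), the sketch below compares an incremental cost-effectiveness ratio with one and three times per-capita GDP; all figures are hypothetical.

        def icer(delta_cost, delta_dalys_averted):
            """Incremental cost-effectiveness ratio: extra cost per extra DALY averted."""
            return delta_cost / delta_dalys_averted

        # Hypothetical new intervention compared with current practice
        delta_cost = 1_200_000.0      # additional cost (USD)
        delta_dalys = 800.0           # additional DALYs averted
        gdp_per_capita = 2_000.0      # hypothetical country (USD)

        ratio = icer(delta_cost, delta_dalys)             # 1500 USD per DALY averted
        if ratio < gdp_per_capita:
            verdict = "highly cost-effective (< 1x GDP per capita)"
        elif ratio < 3 * gdp_per_capita:
            verdict = "cost-effective (< 3x GDP per capita)"
        else:
            verdict = "not cost-effective under the GDP-based rule"
        print(f"ICER = {ratio:.0f} USD/DALY averted: {verdict}")

    As the abstract stresses, such a single-threshold verdict should be only one input to a broader, transparent decision-making process.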

  14. Annotating spatio-temporal datasets for meaningful analysis in the Web

    Science.gov (United States)

    Stasch, Christoph; Pebesma, Edzer; Scheider, Simon

    2014-05-01

    More and more environmental datasets that vary in space and time are available on the Web. This offers the advantage that the data can be used for purposes other than those originally foreseen, but it also carries the danger that users apply inappropriate analysis procedures because they are unaware of important assumptions made during the data collection process. In order to guide users towards a meaningful (statistical) analysis of spatio-temporal datasets available on the Web, we have developed a Higher-Order-Logic formalism that captures some of these relevant assumptions in our previous work [1]. It allows proofs about meaningful spatial prediction and aggregation to be carried out in a semi-automated fashion. In this poster presentation, we will present a concept for annotating spatio-temporal datasets available on the Web with concepts defined in our formalism. To this end, we have defined a subset of the formalism as a Web Ontology Language (OWL) pattern. It allows capturing the distinction between the different spatio-temporal variable types, i.e. point patterns, fields, lattices and trajectories, which in turn determine whether a particular dataset can be interpolated or aggregated in a meaningful way using a certain procedure. The actual annotations that link spatio-temporal datasets with the concepts in the ontology pattern are provided as Linked Data. In order to allow data producers to add the annotations to their datasets, we have implemented a Web portal that uses a triple store at the backend to store the annotations and to make them available in the Linked Data cloud. Furthermore, we have implemented functions in the statistical environment R to retrieve the RDF annotations and, based on these annotations, to support a stronger typing of spatio-temporal datatypes, guiding towards a meaningful analysis in R. [1] Stasch, C., Scheider, S., Pebesma, E., Kuhn, W. (2014): "Meaningful spatial prediction and aggregation", Environmental Modelling & Software, 51, 149-165.
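
    The abstract does not spell out the annotation machinery; purely as a sketch of the general idea, the Python snippet below uses rdflib to state, as Linked Data, that a dataset belongs to one of the spatio-temporal variable types. The vocabulary IRI, class name and dataset URI are hypothetical placeholders, not the authors' actual ontology pattern.

        from rdflib import Graph, Namespace, URIRef, RDF

        # Hypothetical vocabulary for spatio-temporal variable types
        MEAN = Namespace("http://example.org/meaningful#")

        g = Graph()
        g.bind("mean", MEAN)

        dataset = URIRef("http://example.org/datasets/bird-nest-sites-2013")
        # Assert that the dataset is a point pattern: the locations themselves are the
        # phenomenon, so estimating an intensity surface is meaningful while
        # interpolating attribute values between the points is not.
        g.add((dataset, RDF.type, MEAN.PointPattern))

        print(g.serialize(format="turtle"))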

  15. Evolving hard problems: Generating human genetics datasets with a complex etiology

    Directory of Open Access Journals (Sweden)

    Himmelstein Daniel S

    2011-07-01

    Full Text Available Abstract Background A goal of human genetics is to discover genetic factors that influence individuals' susceptibility to common diseases. Most common diseases are thought to result from the joint failure of two or more interacting components instead of single component failures. This greatly complicates both the task of selecting informative genetic variants and the task of modeling interactions between them. We and others have previously developed algorithms to detect and model the relationships between these genetic factors and disease. Previously these methods have been evaluated with datasets simulated according to pre-defined genetic models. Results Here we develop and evaluate a model free evolution strategy to generate datasets which display a complex relationship between individual genotype and disease susceptibility. We show that this model free approach is capable of generating a diverse array of datasets with distinct gene-disease relationships for an arbitrary interaction order and sample size. We specifically generate eight-hundred Pareto fronts; one for each independent run of our algorithm. In each run the predictiveness of single genetic variation and pairs of genetic variants have been minimized, while the predictiveness of third, fourth, or fifth-order combinations is maximized. Two hundred runs of the algorithm are further dedicated to creating datasets with predictive four or five order interactions and minimized lower-level effects. Conclusions This method and the resulting datasets will allow the capabilities of novel methods to be tested without pre-specified genetic models. This allows researchers to evaluate which methods will succeed on human genetics problems where the model is not known in advance. We further make freely available to the community the entire Pareto-optimal front of datasets from each run so that novel methods may be rigorously evaluated. These 76,600 datasets are available from http://discovery.dartmouth.edu/model_free_data/.

  16. A Dataset from TIMSS to Examine the Relationship between Computer Use and Mathematics Achievement

    Science.gov (United States)

    Kadijevich, Djordje M.

    2015-01-01

    Because the relationship between computer use and achievement is still puzzling, there is a need to prepare and analyze good quality datasets on computer use and achievement. Such a dataset can be derived from TIMSS data. This paper describes how this dataset can be prepared. It also gives an example of how the dataset may be analyzed. The…

  17. Summary report of a workshop on establishing cumulative effects thresholds : a suggested approach for establishing cumulative effects thresholds in a Yukon context

    International Nuclear Information System (INIS)

    2003-01-01

    Increasingly, thresholds are being used as a land and cumulative effects assessment and management tool. To assist in the management of wildlife species such as woodland caribou, the Department of Indian and Northern Affairs (DIAND) Environment Directorate, Yukon sponsored a workshop to develop and use cumulative thresholds in the Yukon. The approximately 30 participants reviewed recent initiatives in the Yukon and other jurisdictions. The workshop is expected to help formulate a strategic vision for implementing cumulative effects thresholds in the Yukon. The key to success resides in building relationships with Umbrella Final Agreement (UFA) Boards, the Development Assessment Process (DAP), and the Yukon Environmental and Socio-Economic Assessment Act (YESAA). Broad support is required within an integrated resource management framework. The workshop featured discussions on current science and theory of cumulative effects thresholds. Potential data and implementation issues were also discussed. It was concluded that thresholds are useful and scientifically defensible. The threshold research results obtained in Alberta, British Columbia and the Northwest Territories are applicable to the Yukon. One of the best tools for establishing and tracking thresholds is habitat effectiveness. Effects must be monitored and tracked. Biologists must share their information with decision makers. Interagency coordination and assistance should be facilitated through the establishment of working groups. Regional land use plans should include thresholds. 7 refs.

  18. Tensile strength and failure load of sutures for robotic surgery.

    Science.gov (United States)

    Abiri, Ahmad; Paydar, Omeed; Tao, Anna; LaRocca, Megan; Liu, Kang; Genovese, Bradley; Candler, Robert; Grundfest, Warren S; Dutson, Erik P

    2017-08-01

    Robotic surgical platforms have seen increased use among minimally invasive gastrointestinal surgeons (von Fraunhofer et al. in J Biomed Mater Res 19(5):595-600, 1985. doi: 10.1002/jbm.820190511). However, these systems still suffer from lack of haptic feedback, which results in exertion of excessive force, often leading to suture failures (Barbash et al. in Ann Surg 259(1):1-6, 2014. doi: 10.1097/SLA.0b013e3182a5c8b8). This work catalogs tensile strength and failure load among commonly used sutures in an effort to prevent robotic surgical consoles from exceeding identified thresholds. Trials were thus conducted on common sutures varying in material type, gauge size, rate of pulling force, and method of applied force. Polydioxanone, Silk, Vicryl, and Prolene, gauges 5-0 to 1-0, were pulled till failure using a commercial mechanical testing system. 2-0 and 3-0 sutures were further tested for the effect of pull rate on failure load at rates of 50, 200, and 400 mm/min. 3-0 sutures were also pulled till failure using a da Vinci robotic surgical system in unlooped, looped, and at the needle body arrangements. Generally, Vicryl and PDS sutures had the highest mechanical strength (47-179 kN/cm²), while Silk had the lowest (40-106 kN/cm²). Larger diameter sutures withstand higher total force, but finer gauges consistently show higher force per unit area. The difference between material types becomes increasingly significant as the diameters decrease. Comparisons of identical suture materials and gauges show 27-50% improvement in the tensile strength over data obtained in 1985 (Ballantyne in Surg Endosc Other Interv Tech 16(10):1389-1402, 2002. doi: 10.1007/s00464-001-8283-7). No significant differences were observed when sutures were pulled at different rates. Reduction in suture strength appeared to be strongly affected by the technique used to manipulate the suture. Availability of suture tensile strength and failure load data will help define software safety
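
    The strengths quoted above are failure loads normalized by suture cross-sectional area. As a hedged back-of-the-envelope illustration (the load and the nominal 3-0 diameter below are assumed values, not figures from the study), the conversion to kN/cm² looks like this:

        import math

        def tensile_strength_kn_per_cm2(failure_load_n, diameter_mm):
            """Failure load (N) divided by cross-sectional area, reported in kN/cm^2."""
            area_cm2 = math.pi * (diameter_mm / 10.0 / 2.0) ** 2   # mm -> cm, then radius
            return failure_load_n / 1000.0 / area_cm2              # N -> kN

        # Hypothetical example: a 30 N failure load on a nominal 0.2 mm (3-0) suture
        print(round(tensile_strength_kn_per_cm2(30.0, 0.2), 1))    # ~95.5 kN/cm^2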

  19. ‘Soglitude’- introducing a method of thinking thresholds

    Directory of Open Access Journals (Sweden)

    Tatjana Barazon

    2010-04-01

    Full Text Available ‘Soglitude’ is an invitation to acknowledge the existence of thresholds in thought. A threshold in thought designates the indetermination, the passage, the evolution of every state the world is in. The creation we add to it, and the objectivity we suppose, on the border of those two ideas lies our perceptive threshold. No state will ever be permanent, and in order to stress the temporary, fluent character of the world and our perception of it, we want to introduce a new suitable method to think change and transformation, when we acknowledge our own threshold nature. The contributions gathered in this special issue come from various disciplines: anthropology, philosophy, critical theory, film studies, political science, literature and history. The variety of these insights shows the resonance of the idea of threshold in every category of thought. We hope to enlarge the notion in further issues on physics and chemistry, as well as mathematics. The articles in this issue introduce the method of threshold thinking by showing the importance of the in-between, of the changing of perspective in their respective domain. The ‘Documents’ section named INTERSTICES, includes a selection of poems, two essays, a philosophical-artistic project called ‘infraphysique’, a performance on thresholds in the soul, and a dialogue with Israel Rosenfield. This issue presents a kaleidoscope of possible threshold thinking and hopes to initiate new ways of looking at things.For every change that occurs in reality there is a subjective counterpart in our perception and this needs to be acknowledged as such. What we name objective is reflected in our own personal perception in its own personal manner, in such a way that the objectivity of an event might altogether be questioned. The absolute point of view, the view from “nowhere”, could well be the projection that causes dogmatism. By introducing the method of thinking thresholds into a system, be it

  20. Identifying Threshold Concepts for Information Literacy: A Delphi Study

    OpenAIRE

    Lori Townsend; Amy R. Hofer; Silvia Lin Hanick; Korey Brunetti

    2016-01-01

    This study used the Delphi method to engage expert practitioners on the topic of threshold concepts for information literacy. A panel of experts considered two questions. First, is the threshold concept approach useful for information literacy instruction? The panel unanimously agreed that the threshold concept approach holds potential for information literacy instruction. Second, what are the threshold concepts for information literacy instruction? The panel proposed and discussed over fift...

  1. Melanin microcavitation threshold in the near infrared

    Science.gov (United States)

    Schmidt, Morgan S.; Kennedy, Paul K.; Vincelette, Rebecca L.; Schuster, Kurt J.; Noojin, Gary D.; Wharmby, Andrew W.; Thomas, Robert J.; Rockwell, Benjamin A.

    2014-02-01

    Thresholds for microcavitation of isolated bovine and porcine melanosomes were determined using single nanosecond (ns) laser pulses in the NIR (1000 - 1319 nm) wavelength regime. Average fluence thresholds for microcavitation increased non-linearly with increasing wavelength. Average fluence thresholds were also measured for 10-ns pulses at 532 nm, and found to be comparable to visible ns pulse values published in previous reports. Fluence thresholds were used to calculate melanosome absorption coefficients, which decreased with increasing wavelength. This trend was found to be comparable to the decrease in retinal pigmented epithelial (RPE) layer absorption coefficients reported over the same wavelength region. Estimated corneal total intraocular energy (TIE) values were determined and compared to the current and proposed maximum permissible exposure (MPE) safe exposure levels. Results from this study support the proposed changes to the MPE levels.

  2. The relationship between compressive strength and flexural strength of pavement geopolymer grouting material

    Science.gov (United States)

    Zhang, L.; Han, X. X.; Ge, J.; Wang, C. H.

    2018-01-01

    To determine the relationship between the compressive strength and flexural strength of pavement geopolymer grouting material, 20 groups of geopolymer grouting materials were prepared and their compressive and flexural strengths were determined by mechanical property tests. After excluding abnormal values using boxplots, the results show that the compressive strength results were normal, but there were two mild outliers in the 7-day flexural strength test. Compressive and flexural strength were then fitted in SPSS, yielding six regression models. The relationship between compressive strength and flexural strength is best expressed by the cubic curve model, with a correlation coefficient of 0.842.
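
    A minimal sketch of this kind of fit (done here with NumPy rather than SPSS, and on made-up strength pairs rather than the paper's measurements): a cubic polynomial of flexural strength against compressive strength is fitted and the correlation between fitted and measured values is reported.

        import numpy as np

        # Hypothetical (compressive, flexural) strength pairs in MPa -- not the paper's data
        fc = np.array([22.0, 28.0, 35.0, 41.0, 48.0, 55.0, 63.0, 70.0])
        ff = np.array([ 4.1,  4.8,  5.6,  6.1,  6.9,  7.4,  8.2,  8.7])

        coeffs = np.polyfit(fc, ff, deg=3)      # cubic curve model
        ff_hat = np.polyval(coeffs, fc)
        r = np.corrcoef(ff, ff_hat)[0, 1]       # correlation coefficient of the fit

        print("cubic coefficients:", np.round(coeffs, 5))
        print("correlation coefficient:", round(r, 3))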

  3. Creatine supplementation prevents acute strength loss induced by concurrent exercise.

    Science.gov (United States)

    de Salles Painelli, Vítor; Alves, Victor Tavares; Ugrinowitsch, Carlos; Benatti, Fabiana Braga; Artioli, Guilherme Giannini; Lancha, Antonio Herbert; Gualano, Bruno; Roschel, Hamilton

    2014-08-01

    To investigate the effect of creatine (CR) supplementation on the acute interference induced by aerobic exercise on subsequent maximum dynamic strength (1RM) and strength endurance (SE, total number of repetitions) performance. Thirty-two recreationally strength-trained men were submitted to a graded exercise test to determine maximal oxygen consumption (VO2max: 41.56 ± 5.24 ml kg(-1) min(-1)), anaerobic threshold velocity (ATv: 8.3 ± 1.18 km h(-1)), and baseline performance (control) on the 1RM and SE (4 × 80 % 1RM to failure) tests. After the control tests, participants were randomly assigned to either a CR (20 g day(-1) for 7 days followed by 5 g day(-1) throughout the study) or a placebo (PL-dextrose) group, and then completed 4 experimental sessions, consisting of a 5-km run on a treadmill either continuously (90 % ATv) or intermittently (1:1 min at vVO2max) followed by either a leg- or bench-press SE/1RM test. CR was able to maintain the leg-press SE performance after the intermittent aerobic exercise when compared with C (p > 0.05). On the other hand, the PL group showed a significant decrease in leg-press SE (p ≤ 0.05). CR supplementation significantly increased bench-press SE after both aerobic exercise modes, while the bench-press SE was not affected by either mode of aerobic exercise in the PL group. Although small increases in 1RM were observed after either continuous (bench press and leg press) or intermittent (bench press) aerobic exercise in the CR group, they were within the range of variability of the measurement. The PL group only maintained their 1RM. In conclusion, the acute interference effect on strength performance observed in concurrent exercise may be counteracted by CR supplementation.

  4. A new dataset validation system for the Planetary Science Archive

    Science.gov (United States)

    Manaud, N.; Zender, J.; Heather, D.; Martinez, S.

    2007-08-01

    The Planetary Science Archive is the official archive for the Mars Express mission. It received its first data by the end of 2004. These data are delivered by the PI teams to the PSA team as datasets, which are formatted in conformance with the Planetary Data System (PDS). The PI teams are responsible for analyzing and calibrating the instrument data as well as the production of reduced and calibrated data. They are also responsible for the scientific validation of these data. ESA is responsible for the long-term data archiving and distribution to the scientific community and must ensure, in this regard, that all archived products meet quality standards. To do so, an archive peer review is used to control the quality of the Mars Express science data archiving process. However, a full validation of its content is missing. An independent review board recently recommended that the completeness of the archive as well as the consistency of the delivered data should be validated following well-defined procedures. A new validation software tool is being developed to complete the overall data quality control system functionality. This new tool aims to improve the quality of data and services provided to the scientific community through the PSA, and shall allow anomalies in datasets to be tracked and their completeness to be controlled. It shall ensure that the PSA end-users: (1) can rely on the result of their queries, (2) will get data products that are suitable for scientific analysis, (3) can find all science data acquired during a mission. We define dataset validation as the verification and assessment process that checks the dataset content against pre-defined top-level criteria, which represent the general characteristics of good quality datasets. The dataset content that is checked includes the data and all types of information that are essential in the process of deriving scientific results and those interfacing with the PSA database. The validation software tool is a multi-mission tool that

  5. Data Recommender: An Alternative Way to Discover Open Scientific Datasets

    Science.gov (United States)

    Klump, J. F.; Devaraju, A.; Williams, G.; Hogan, D.; Davy, R.; Page, J.; Singh, D.; Peterson, N.

    2017-12-01

    Over the past few years, institutions and government agencies have adopted policies to openly release their data, which has resulted in huge amounts of open data becoming available on the web. When trying to discover the data, users face two challenges: an overload of choice and the limitations of the existing data search tools. On the one hand, there are too many datasets to choose from, and therefore, users need to spend considerable effort to find the datasets most relevant to their research. On the other hand, data portals commonly offer keyword and faceted search, which depend fully on the user queries to search and rank relevant datasets. Consequently, keyword and faceted search may return loosely related or irrelevant results, even though the results contain the query terms. They may also return highly specific results that depend more on how well metadata was authored. They do not account well for variance in metadata due to variance in author styles and preferences. The top-ranked results may also come from the same data collection, and users are unlikely to discover new and interesting datasets. These search modes mainly suit users who can express their information needs in terms of the structure and terminology of the data portals, but may pose a challenge otherwise. The above challenges reflect that we need a solution that delivers the most relevant (i.e., similar and serendipitous) datasets to users, beyond the existing search functionalities on the portals. A recommender system is an information filtering system that presents users with relevant and interesting content based on users' context and preferences. Delivering data recommendations to users can make data discovery easier, and as a result may enhance user engagement with the portal. We developed a hybrid data recommendation approach for the CSIRO Data Access Portal. The approach leverages existing recommendation techniques (e.g., content-based filtering and item co-occurrence) to produce
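
    The abstract names item co-occurrence as one ingredient of the hybrid approach. A generic sketch of that ingredient only (toy access logs and dataset names, not the CSIRO portal's implementation): count how often pairs of datasets are accessed by the same user, then recommend the strongest co-occurring items.

        from collections import Counter
        from itertools import combinations

        # Toy access log: the set of datasets each (hypothetical) user has viewed
        sessions = [
            {"soil-moisture", "rainfall", "ndvi"},
            {"soil-moisture", "rainfall"},
            {"rainfall", "streamflow"},
            {"soil-moisture", "ndvi"},
        ]

        cooc = Counter()
        for items in sessions:
            for a, b in combinations(sorted(items), 2):
                cooc[(a, b)] += 1

        def recommend(dataset, k=2):
            """Return up to k datasets most often co-accessed with `dataset`."""
            scores = Counter()
            for (a, b), n in cooc.items():
                if a == dataset:
                    scores[b] += n
                elif b == dataset:
                    scores[a] += n
            return [d for d, _ in scores.most_common(k)]

        print(recommend("soil-moisture"))   # e.g. ['ndvi', 'rainfall']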

  6. Resonances, cusp effects and a virtual state in e⁻-He scattering near the n = 3 thresholds. [Variational methods, resonance, threshold structures]

    Energy Technology Data Exchange (ETDEWEB)

    Nesbet, R K [International Business Machines Corp., San Jose, Calif. (USA). Research Lab.

    1978-01-14

    Variational calculations locate and identify resonances and new threshold structures in electron impact excitation of He metastable states, in the region of the 3³S and 3¹S excitation thresholds. A virtual state is found at the 3³S threshold.

  7. Data assimilation and model evaluation experiment datasets

    Science.gov (United States)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need of data for the four phases of experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usages include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.

  8. Artificial intelligence (AI) systems for interpreting complex medical datasets.

    Science.gov (United States)

    Altman, R B

    2017-05-01

    Advances in machine intelligence have created powerful capabilities in algorithms that find hidden patterns in data, classify objects based on their measured characteristics, and associate similar patients/diseases/drugs based on common features. However, artificial intelligence (AI) applications in medical data have several technical challenges: complex and heterogeneous datasets, noisy medical datasets, and explaining their output to users. There are also social challenges related to intellectual property, data provenance, regulatory issues, economics, and liability. © 2017 ASCPT.

  9. Graph Analysis and Modularity of Brain Functional Connectivity Networks: Searching for the Optimal Threshold

    Directory of Open Access Journals (Sweden)

    Cécile Bordier

    2017-08-01

    Full Text Available Neuroimaging data can be represented as networks of nodes and edges that capture the topological organization of the brain connectivity. Graph theory provides a general and powerful framework to study these networks and their structure at various scales. By way of example, community detection methods have been widely applied to investigate the modular structure of many natural networks, including brain functional connectivity networks. Sparsification procedures are often applied to remove the weakest edges, which are the most affected by experimental noise, and to reduce the density of the graph, thus making it theoretically and computationally more tractable. However, weak links may also contain significant structural information, and procedures to identify the optimal tradeoff are the subject of active research. Here, we explore the use of percolation analysis, a method grounded in statistical physics, to identify the optimal sparsification threshold for community detection in brain connectivity networks. By using synthetic networks endowed with a ground-truth modular structure and realistic topological features typical of human brain functional connectivity networks, we show that percolation analysis can be applied to identify the optimal sparsification threshold that maximizes information on the networks' community structure. We validate this approach using three different community detection methods widely applied to the analysis of brain connectivity networks: Newman's modularity, InfoMap and Asymptotical Surprise. Importantly, we test the effects of noise and data variability, which are critical factors to determine the optimal threshold. This data-driven method should prove particularly useful in the analysis of the community structure of brain networks in populations characterized by different connectivity strengths, such as patients and controls.
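
    A schematic of the percolation idea only (random toy matrix, not the authors' implementation or their synthetic benchmark networks): progressively raise the sparsification threshold on a weighted connectivity matrix and track the size of the giant connected component; the optimal threshold is taken just before the network fragments.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        n = 60
        W = rng.random((n, n))      # toy weighted connectivity matrix
        W = (W + W.T) / 2.0         # symmetrize
        np.fill_diagonal(W, 0.0)    # no self-connections

        def giant_component_fraction(W, thr):
            """Fraction of nodes in the largest connected component after thresholding."""
            G = nx.from_numpy_array(np.where(W >= thr, W, 0.0))
            return max(len(c) for c in nx.connected_components(G)) / W.shape[0]

        # Keep the largest threshold at which the giant component still spans
        # (almost) the whole network -- the percolation point.
        thresholds = np.linspace(0.0, 1.0, 101)
        fractions = [giant_component_fraction(W, t) for t in thresholds]
        optimal = max(t for t, f in zip(thresholds, fractions) if f >= 0.99)
        print(f"percolation threshold ~ {optimal:.2f}")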

  10. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.

    2014-01-01

    of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov

  11. PERFORMANCE COMPARISON FOR INTRUSION DETECTION SYSTEM USING NEURAL NETWORK WITH KDD DATASET

    Directory of Open Access Journals (Sweden)

    S. Devaraju

    2014-04-01

    Full Text Available Intrusion detection systems face the challenging task of deciding whether a user of an organizational information system or IT industry network is a normal user or an attacker. An intrusion detection system is an effective method to deal with these kinds of problems in networks. Different classifiers are used to detect the different kinds of attacks in networks. In this paper, the performance of intrusion detection is compared across various neural network classifiers. In the proposed research the four types of classifiers used are Feed Forward Neural Network (FFNN), Generalized Regression Neural Network (GRNN), Probabilistic Neural Network (PNN) and Radial Basis Neural Network (RBNN). The performance of the full-featured KDD Cup 1999 dataset is compared with that of the reduced-featured KDD Cup 1999 dataset. The MATLAB software is used to train and test the dataset, and the efficiency and False Alarm Rate are measured. It is shown that the reduced dataset performs better than the full-featured dataset.

  12. Review of ATLAS Open Data 8 TeV datasets, tools and activities

    CERN Document Server

    The ATLAS collaboration

    2018-01-01

    The ATLAS Collaboration has released two 8 TeV datasets and relevant simulated samples to the public for educational use. A number of groups within ATLAS have used these ATLAS Open Data 8 TeV datasets, developing tools and educational material to promote particle physics. The general aim of these activities is to provide simple and user-friendly interactive interfaces to simulate the procedures used by high-energy physics researchers. International Masterclasses introduce particle physics to high school students and have been studying 8 TeV ATLAS Open Data since 2015. Inspired by this success, a new ATLAS Open Data initiative was launched in 2016 for university students. A comprehensive educational platform was thus developed featuring a second 8 TeV dataset and a new set of educational tools. The 8 TeV datasets and associated tools are presented and discussed here, as well as a selection of activities studying the ATLAS Open Data 8 TeV datasets.

  13. Recent Development on the NOAA's Global Surface Temperature Dataset

    Science.gov (United States)

    Zhang, H. M.; Huang, B.; Boyer, T.; Lawrimore, J. H.; Menne, M. J.; Rennie, J.

    2016-12-01

    Global Surface Temperature (GST) is one of the most widely used indicators for climate trend and extreme analyses. A widely used GST dataset is the NOAA merged land-ocean surface temperature dataset known as NOAAGlobalTemp (formerly MLOST). NOAAGlobalTemp was recently updated from version 3.5.4 to version 4. The update includes a significant improvement in the ocean surface component (Extended Reconstructed Sea Surface Temperature or ERSST, from version 3b to version 4), which resulted in increased temperature trends in recent decades. Since then, advancements in both the ocean component (ERSST) and land component (GHCN-Monthly) have been made, including the inclusion of Argo float SSTs and expanded EOT modes in ERSST, and the use of the ISTI databank in GHCN-Monthly. In this presentation, we describe the impact of those improvements on the merged global temperature dataset, in terms of global trends and other aspects.

  14. The OXL format for the exchange of integrated datasets

    Directory of Open Access Journals (Sweden)

    Taubert Jan

    2007-12-01

    Full Text Available A prerequisite for systems biology is the integration and analysis of heterogeneous experimental data stored in hundreds of life-science databases and millions of scientific publications. Several standardised formats for the exchange of specific kinds of biological information exist. Such exchange languages facilitate the integration process; however they are not designed to transport integrated datasets. A format for exchanging integrated datasets needs to (i) cover data from a broad range of application domains, (ii) be flexible and extensible to combine many different complex data structures, (iii) include metadata and semantic definitions, (iv) include inferred information, (v) identify the original data source for integrated entities and (vi) transport large integrated datasets. Unfortunately, none of the exchange formats from the biological domain (e.g. BioPAX, MAGE-ML, PSI-MI, SBML) or the generic approaches (RDF, OWL) fulfil these requirements in a systematic way.

  15. Effects of pulse duration on magnetostimulation thresholds

    Energy Technology Data Exchange (ETDEWEB)

    Saritas, Emine U., E-mail: saritas@ee.bilkent.edu.tr [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Department of Electrical and Electronics Engineering, Bilkent University, Bilkent, Ankara 06800 (Turkey); National Magnetic Resonance Research Center (UMRAM), Bilkent University, Bilkent, Ankara 06800 (Turkey); Goodwill, Patrick W. [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Conolly, Steven M. [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Department of EECS, University of California, Berkeley, California 94720-1762 (United States)

    2015-06-15

    Purpose: Medical imaging techniques such as magnetic resonance imaging and magnetic particle imaging (MPI) utilize time-varying magnetic fields that are subject to magnetostimulation limits, which often limit the speed of the imaging process. Various human-subject experiments have studied the amplitude and frequency dependence of these thresholds for gradient or homogeneous magnetic fields. Another contributing factor was shown to be number of cycles in a magnetic pulse, where the thresholds decreased with longer pulses. The latter result was demonstrated on two subjects only, at a single frequency of 1.27 kHz. Hence, whether the observed effect was due to the number of cycles or due to the pulse duration was not specified. In addition, a gradient-type field was utilized; hence, whether the same phenomenon applies to homogeneous magnetic fields remained unknown. Here, the authors investigate the pulse duration dependence of magnetostimulation limits for a 20-fold range of frequencies using homogeneous magnetic fields, such as the ones used for the drive field in MPI. Methods: Magnetostimulation thresholds were measured in the arms of six healthy subjects (age: 27 ± 5 yr). Each experiment comprised testing the thresholds at eight different pulse durations between 2 and 125 ms at a single frequency, which took approximately 30–40 min/subject. A total of 34 experiments were performed at three different frequencies: 1.2, 5.7, and 25.5 kHz. A solenoid coil providing homogeneous magnetic field was used to induce stimulation, and the field amplitude was measured in real time. A pre-emphasis based pulse shaping method was employed to accurately control the pulse durations. Subjects reported stimulation via a mouse click whenever they felt a twitching/tingling sensation. A sigmoid function was fitted to the subject responses to find the threshold at a specific frequency and duration, and the whole procedure was repeated at all relevant frequencies and pulse durations

  17. Developing a Data-Set for Stereopsis

    Directory of Open Access Journals (Sweden)

    D.W Hunter

    2014-08-01

    Full Text Available Current research on binocular stereopsis in humans and non-human primates has been limited by a lack of available data-sets. Current data-sets fall into two categories: stereo-image sets with vergence but no ranging information (Hibbard, 2008, Vision Research, 48(12), 1427-1439) or combinations of depth information with binocular images and video taken from cameras in fixed fronto-parallel configurations exhibiting neither vergence nor focus effects (Hirschmuller & Scharstein, 2007, IEEE Conf. Computer Vision and Pattern Recognition). The techniques for generating depth information are also imperfect. Depth information is normally inaccurate or simply missing near edges and on partially occluded surfaces. For many areas of vision research these are the most interesting parts of the image (Goutcher, Hunter, Hibbard, 2013, i-Perception, 4(7), 484; Scarfe & Hibbard, 2013, Vision Research). Using state-of-the-art open-source ray-tracing software (PBRT) as a back-end, our intention is to release a set of tools that will allow researchers in this field to generate artificial binocular stereoscopic data-sets. Although not as realistic as photographs, computer generated images have significant advantages in terms of control over the final output, and ground-truth information about scene depth is easily calculated at all points in the scene, even partially occluded areas. While individual researchers have been developing similar stimuli by hand for many decades, we hope that our software will greatly reduce the time and difficulty of creating naturalistic binocular stimuli. Our intention in making this presentation is to elicit feedback from the vision community about what sort of features would be desirable in such software.

  18. BASE MAP DATASET, MAYES COUNTY, OKLAHOMA, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications: cadastral, geodetic control,...

  19. PENERAPAN TEKNIK BAGGING PADA ALGORITMA KLASIFIKASI UNTUK MENGATASI KETIDAKSEIMBANGAN KELAS DATASET MEDIS [Applying the bagging technique to classification algorithms to overcome class imbalance in medical datasets]

    Directory of Open Access Journals (Sweden)

    Rizki Tri Prasetio

    2016-03-01

    Full Text Available ABSTRACT – Class imbalance problems have been reported to severely hinder the classification performance of many standard learning algorithms, and have attracted a great deal of attention from researchers in different fields. Therefore, a number of methods, such as sampling methods, cost-sensitive learning methods, and bagging- and boosting-based ensemble methods, have been proposed to solve these problems. Some medical datasets have two (binominal) classes and exhibit an imbalance that reduces classification accuracy. This research proposes combining bagging with classification algorithms to improve accuracy on medical datasets, with the bagging technique used to address the imbalanced-class problem. The proposed method is applied to three classifier algorithms, i.e., naïve Bayes, decision tree and k-nearest neighbor. The research uses five medical datasets obtained from the UCI Machine Learning repository, i.e., breast-cancer, liver-disorder, heart-disease, pima-diabetes and vertebral-column. The results indicate that the proposed method yields a significant improvement for two classification algorithms, i.e., decision tree (t-test p value 0.0184) and k-nearest neighbor (t-test p value 0.0292), but not for naïve Bayes (t-test p value 0.9236). After the bagging technique is applied to the five medical datasets, naïve Bayes has the highest accuracy for the breast-cancer dataset at 96.14% with an AUC of 0.984, heart-disease at 84.44% with an AUC of 0.911 and pima-diabetes at 74.73% with an AUC of 0.806, while k-nearest neighbor has the best accuracy for the liver-disorder dataset at 62.03% with an AUC of 0.632 and vertebral-column at 82.26% with an AUC of 0.867. Keywords: ensemble technique, bagging, imbalanced class, medical dataset. ABSTRAKSI – The class imbalance problem has been reported to severely hinder the classification performance of many classification algorithms and has attracted a great deal of attention from
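
    As a generic sketch of the bagging step described above (using scikit-learn on a synthetic imbalanced dataset rather than the UCI medical datasets, and reporting AUC rather than the paper's exact evaluation), a base classifier is wrapped in a bagging ensemble and both are compared by cross-validation:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import BaggingClassifier
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        # Synthetic two-class dataset with a 9:1 class imbalance (a stand-in for a medical dataset)
        X, y = make_classification(n_samples=1000, n_features=10, weights=[0.9, 0.1],
                                   random_state=42)

        single = DecisionTreeClassifier(random_state=42)
        bagged = BaggingClassifier(DecisionTreeClassifier(random_state=42),
                                   n_estimators=50, random_state=42)

        for name, clf in [("single tree", single), ("bagged trees", bagged)]:
            scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
            print(f"{name}: mean AUC = {scores.mean():.3f}")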

  20. Regional rainfall thresholds for landslide occurrence using a centenary database

    Science.gov (United States)

    Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia

    2018-04-01

    This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes the rainfall return period analysis that was used to identify the critical rainfall combination (cumulated rainfall duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using the receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and potential regression perform well in ROC metrics. However, the intermediate thresholds based on the probability of landslide events established in the zone between the lower-limit threshold and the upper-limit threshold are much more informative as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.
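
    A bare-bones sketch of threshold calibration with ROC-style metrics (synthetic daily rainfall and landslide-day labels, not the Lisbon data, and ignoring the cumulated rainfall-duration combinations used in the study): candidate rainfall thresholds are scored by true and false positive rates, and the one maximizing the true skill statistic is kept.

        import numpy as np

        rng = np.random.default_rng(1)
        rain = rng.gamma(shape=2.0, scale=10.0, size=3650)          # toy daily rainfall (mm)
        landslide_day = rain + rng.normal(0, 10, rain.size) > 60    # toy event labels

        def roc_point(threshold):
            pred = rain >= threshold
            tpr = (pred & landslide_day).sum() / max(landslide_day.sum(), 1)
            fpr = (pred & ~landslide_day).sum() / max((~landslide_day).sum(), 1)
            return tpr, fpr

        candidates = np.arange(10, 120, 5)
        scored = [(t, *roc_point(t)) for t in candidates]
        best = max(scored, key=lambda x: x[1] - x[2])   # maximize TPR - FPR (true skill statistic)
        print(f"best threshold = {best[0]} mm, TPR = {best[1]:.2f}, FPR = {best[2]:.2f}")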

  1. CERC Dataset (Full Hadza Data)

    DEFF Research Database (Denmark)

    2016-01-01

    The dataset includes demographic, behavioral, and religiosity data from eight different populations from around the world. The samples were drawn from: (1) Coastal and (2) Inland Tanna, Vanuatu; (3) Hadzaland, Tanzania; (4) Lovu, Fiji; (5) Pointe aux Piment, Mauritius; (6) Pesqueiro, Brazil; (7) Kyzyl, Tyva Republic; and (8) Yasawa, Fiji. Related publication: Purzycki, et al. (2016). Moralistic Gods, Supernatural Punishment and the Expansion of Human Sociality. Nature, 530(7590): 327-330.

  2. In-season monitoring of hip and groin strength, health and function in elite youth soccer: Implementing an early detection and management strategy over two consecutive seasons.

    Science.gov (United States)

    Wollin, Martin; Thorborg, Kristian; Welvaert, Marijke; Pizzari, Tania

    2018-03-14

    The primary purpose of this study was to describe an early detection and management strategy when monitoring in-season hip and groin strength, health and function in soccer. Secondly, to compare pre-season to in-season test results. Longitudinal cohort study. Twenty-seven elite male youth soccer players (age: 15.07 ± 0.73 years) volunteered to participate in the study. Monitoring tests included: adductor strength, adductor/abductor strength ratio and hip and groin outcome scores (HAGOS). Data were recorded at pre-season and at 22 monthly intervals in-season. Thresholds for alerts to initiate further investigation were defined as any of the following: adductor strength reductions >15%, adductor/abductor strength ratio < […], […] compared with time-point one (p < […]). Problems resulting in time-loss were classified as minimal or mild. In-season monitoring aimed at early detection and management of hip and groin strength, health and function appears promising. Hip and groin strength, health and function improved quickly from pre-season to in-season in a high-risk population for ongoing hip and groin problems. Copyright © 2018 Sports Medicine Australia. All rights reserved.

  3. The gradual nature of threshold switching

    International Nuclear Information System (INIS)

    Wimmer, M; Salinga, M

    2014-01-01

    The recent commercialization of electronic memories based on phase change materials proved the usability of this peculiar family of materials for application purposes. More advanced data storage and computing concepts, however, demand a deeper understanding especially of the electrical properties of the amorphous phase and the switching behaviour. In this work, we investigate the temporal evolution of the current through the amorphous state of the prototypical phase change material, Ge2Sb2Te5, under constant voltage. A custom-made electrical tester allows the measurement of delay times over five orders of magnitude, as well as the transient states of electrical excitation prior to the actual threshold switching. We recognize a continuous current increase over time prior to the actual threshold-switching event to be a good measure for the electrical excitation. A clear correlation between a significant rise in pre-switching-current and the later occurrence of threshold switching can be observed. This way, we found experimental evidence for the existence of an absolute minimum for the threshold voltage (or electric field respectively) holding also for time scales far beyond the measurement range. (paper)

  4. Error characterisation of global active and passive microwave soil moisture datasets

    Directory of Open Access Journals (Sweden)

    W. A. Dorigo

    2010-12-01

    Full Text Available Understanding the error structures of remotely sensed soil moisture observations is essential for correctly interpreting observed variations and trends in the data or assimilating them in hydrological or numerical weather prediction models. Nevertheless, a spatially coherent assessment of the quality of the various globally available datasets is often hampered by the limited availability over space and time of reliable in-situ measurements. As an alternative, this study explores the triple collocation error estimation technique for assessing the relative quality of several globally available soil moisture products from active (ASCAT) and passive (AMSR-E and SSM/I) microwave sensors. Triple collocation is a powerful statistical tool to estimate the root mean square error while simultaneously solving for systematic differences in the climatologies of a set of three linearly related data sources with independent error structures. A prerequisite for this technique is the availability of a sufficiently large number of timely corresponding observations. In addition to the active and passive satellite-based datasets, we used the ERA-Interim and GLDAS-NOAH reanalysis soil moisture datasets as a third, independent reference. The prime objective is to reveal trends in uncertainty related to different observation principles (passive versus active), the use of different frequencies (C-, X-, and Ku-band) for passive microwave observations, and the choice of the independent reference dataset (ERA-Interim versus GLDAS-NOAH). The results suggest that the triple collocation method provides realistic error estimates. Observed spatial trends agree well with the existing theory and studies on the performance of different observation principles and frequencies with respect to land cover and vegetation density. In addition, if all theoretical prerequisites are fulfilled (e.g. a sufficiently large number of common observations is available and errors of the different
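
    For readers unfamiliar with the technique, here is a compact sketch of the classical covariance-based triple collocation estimator on synthetic data (rescaling and climatology matching, which the study also requires, are omitted): with three collocated series x, y and z carrying independent zero-mean errors, the error variance of x can be estimated as the mean of (x - y)(x - z).

        import numpy as np

        rng = np.random.default_rng(0)
        truth = rng.normal(0.25, 0.08, 5000)            # synthetic "true" soil moisture

        # Three collocated products with independent errors (already in a common climatology)
        x = truth + rng.normal(0, 0.03, truth.size)     # e.g. an active (ASCAT-like) product
        y = truth + rng.normal(0, 0.05, truth.size)     # e.g. a passive (AMSR-E-like) product
        z = truth + rng.normal(0, 0.02, truth.size)     # e.g. a reanalysis reference

        def tc_error_std(a, b, c):
            """Triple-collocation error estimate for product `a`, given partners `b` and `c`."""
            return np.sqrt(np.mean((a - b) * (a - c)))

        for name, series, true_sd in [("x", x, 0.03), ("y", y, 0.05), ("z", z, 0.02)]:
            others = [s for s in (x, y, z) if s is not series]
            print(f"{name}: estimated error ~ {tc_error_std(series, *others):.3f} (true {true_sd})")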

  5. Optimization Problems on Threshold Graphs

    Directory of Open Access Journals (Sweden)

    Elena Nechita

    2010-06-01

    Full Text Available During the last three decades, different types of decompositions have been studied in the field of graph theory. Among these we mention: decompositions based on the additivity of some characteristics of the graph, decompositions where the adjacency law between the subsets of the partition is known, decompositions where the subgraph induced by every subset of the partition must have predetermined properties, as well as combinations of such decompositions. In this paper we characterize threshold graphs using the weakly decomposition, and determine the density, the stability number, and the Wiener index and Wiener polynomial for threshold graphs.

  6. Bond strength of masonry

    NARCIS (Netherlands)

    Pluijm, van der R.; Vermeltfoort, A.Th.

    1992-01-01

    Bond strength is not a well defined property of masonry. Normally three types of bond strength can be distinguished: - tensile bond strength, - shear (and torsional) bond strength, - flexural bond strength. In this contribution the behaviour and strength of masonry in deformation controlled uniaxial

  7. Cumulative Damage in Strength-Dominated Collisions of Rocky Asteroids: Rubble Piles and Brick Piles

    Science.gov (United States)

    Housen, Kevin

    2009-01-01

    Laboratory impact experiments were performed to investigate the conditions that produce large-scale damage in rock targets. Aluminum cylinders (6.3 mm diameter) impacted basalt cylinders (69 mm diameter) at speeds ranging from 0.7 to 2.0 km/s. Diagnostics included measurements of the largest fragment mass, velocities of the largest remnant and large fragments ejected from the periphery of the target, and X-ray computed tomography imaging to inspect some of the impacted targets for internal damage. Significant damage to the target occurred when the kinetic energy per unit target mass exceeded roughly 1/4 of the energy required for catastrophic shattering (where the target is reduced to one-half its original mass). Scaling laws based on a rate-dependent strength were developed that provide a basis for extrapolating the results to larger strength-dominated collisions. The threshold specific energy for widespread damage was found to scale with event size in the same manner as that for catastrophic shattering. Therefore, the factor of four difference between the two thresholds observed in the lab also applies to larger collisions. The scaling laws showed that for a sequence of collisions that are similar in that they produce the same ratio of largest fragment mass to original target mass, the fragment velocities decrease with increasing event size. As a result, rocky asteroids a couple hundred meters in diameter should retain their large ejecta fragments in a jumbled rubble-pile state. For somewhat larger bodies, the ejection velocities are sufficiently low that large fragments are essentially retained in place, possibly forming ordered "brick-pile" structures.

  8. Identifying thresholds for ecosystem-based management.

    Directory of Open Access Journals (Sweden)

    Jameal F Samhouri

    Full Text Available BACKGROUND: One of the greatest obstacles to moving ecosystem-based management (EBM) from concept to practice is the lack of a systematic approach to defining ecosystem-level decision criteria, or reference points that trigger management action. METHODOLOGY/PRINCIPAL FINDINGS: To assist resource managers and policymakers in developing EBM decision criteria, we introduce a quantitative, transferable method for identifying utility thresholds. A utility threshold is the level of human-induced pressure (e.g., pollution) at which small changes produce substantial improvements toward the EBM goal of protecting an ecosystem's structural (e.g., diversity) and functional (e.g., resilience) attributes. The analytical approach is based on the detection of nonlinearities in relationships between ecosystem attributes and pressures. We illustrate the method with a hypothetical case study of (1) fishing and (2) nearshore habitat pressure using an empirically-validated marine ecosystem model for British Columbia, Canada, and derive numerical threshold values in terms of the density of two empirically-tractable indicator groups, sablefish and jellyfish. We also describe how to incorporate uncertainty into the estimation of utility thresholds and highlight their value in the context of understanding EBM trade-offs. CONCLUSIONS/SIGNIFICANCE: For any policy scenario, an understanding of utility thresholds provides insight into the amount and type of management intervention required to make significant progress toward improved ecosystem structure and function. The approach outlined in this paper can be applied in the context of single or multiple human-induced pressures, to any marine, freshwater, or terrestrial ecosystem, and should facilitate more effective management.

  9. Robust Adaptive Thresholder For Document Scanning Applications

    Science.gov (United States)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to: (1) a wide range of different color backgrounds; (2) density variations of printed text information; and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desired. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which can dynamically update the black and white reference levels to optimize a local adaptive threshold function. High image quality can be obtained with this algorithm for different types of simulated test patterns. The software algorithm is described and experimental results are presented to illustrate the procedures. Results also show that the techniques described here can be used for real-time signal processing in varied applications.
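
    The paper's exact memory-type algorithm is not reproduced in the abstract; the sketch below only illustrates the general flavour of such a scheme: running black and white reference levels updated sample by sample, with the binarization threshold placed midway between them (all constants are arbitrary, and this is not the cited algorithm itself).

        def adaptive_binarize(scanline, alpha=0.05, init_white=200.0, init_black=50.0):
            """Toy memory-type adaptive thresholder for one scanline of grey values (0-255).

            The white and black reference levels are updated exponentially from the samples
            classified to each side, and the threshold sits midway between them.
            Illustrative only -- not the algorithm of the cited paper.
            """
            white, black = init_white, init_black
            bits = []
            for g in scanline:
                threshold = (white + black) / 2.0
                if g >= threshold:                      # classified as background (white)
                    bits.append(0)
                    white = (1 - alpha) * white + alpha * g
                else:                                   # classified as text (black)
                    bits.append(1)
                    black = (1 - alpha) * black + alpha * g
            return bits

        line = [210, 205, 90, 60, 200, 195, 40, 55, 190]   # toy scanline with dark text strokes
        print(adaptive_binarize(line))                      # -> [0, 0, 1, 1, 0, 0, 1, 1, 0]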

  10. Single event upset threshold estimation based on local laser irradiation

    International Nuclear Information System (INIS)

    Chumakov, A.I.; Egorov, A.N.; Mavritsky, O.B.; Yanenko, A.V.

    1999-01-01

    An approach for estimating the ion-induced SEU threshold from local laser irradiation is presented. Comparative experiments and software simulations were performed for various pulse durations and spot sizes, and a correlation between the single-event-upset threshold LET and the laser threshold energy under local irradiation was found. A computer analysis of local laser irradiation of IC structures was developed for estimating the SEU threshold LET. Two estimation techniques are suggested. The first is based on determining the local laser threshold dose, taking into account the ratio of the sensitive area to the locally irradiated area. The second uses the peak photocurrent value instead of this ratio. The agreement between predicted and experimental results demonstrates the applicability of the approach. (authors)
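    A minimal sketch of the first technique as described above, assuming a simple proportional scaling of the measured laser threshold energy by the sensitive-to-irradiated area ratio and a hypothetical calibration factor converting the result to a threshold LET; all names and numbers are illustrative, not from the paper.

    ```python
    # Sketch: scale a locally measured laser threshold energy by the area ratio,
    # then convert to an equivalent threshold LET with a calibration factor.
    # The calibration factor and all numbers are hypothetical placeholders.
    def estimate_threshold_let(laser_threshold_energy_pj,
                               sensitive_area_um2,
                               irradiated_area_um2,
                               pj_to_let_factor):
        """Return an estimated threshold LET (illustrative units, MeV*cm^2/mg)."""
        effective_dose = laser_threshold_energy_pj * (sensitive_area_um2 / irradiated_area_um2)
        return effective_dose * pj_to_let_factor

    let_th = estimate_threshold_let(laser_threshold_energy_pj=120.0,
                                    sensitive_area_um2=4.0,
                                    irradiated_area_um2=20.0,
                                    pj_to_let_factor=0.5)
    print(f"estimated threshold LET ~ {let_th:.1f} MeV*cm^2/mg (illustrative)")
    ```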

  11. Synthetic ALSPAC longitudinal datasets for the Big Data VR project.

    Science.gov (United States)

    Avraam, Demetris; Wilson, Rebecca C; Burton, Paul

    2017-01-01

    Three synthetic datasets - of observation size 15,000, 155,000 and 1,555,000 participants, respectively - were created by simulating eleven cardiac and anthropometric variables from nine collection ages of the ALSAPC birth cohort study. The synthetic datasets retain similar data properties to the ALSPAC study data they are simulated from (co-variance matrices, as well as the mean and variance values of the variables) without including the original data itself or disclosing participant information.  In this instance, the three synthetic datasets have been utilised in an academia-industry collaboration to build a prototype virtual reality data analysis software, but they could have a broader use in method and software development projects where sensitive data cannot be freely shared.
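    A minimal sketch of the general idea of simulating synthetic records that preserve the means and covariance matrix of a source dataset; the "source" statistics below are random stand-ins, since the ALSPAC summary statistics are not reproduced here.

    ```python
    # Sketch: draw synthetic records that reproduce a source dataset's mean vector
    # and covariance matrix. The "source" statistics are random stand-ins,
    # not ALSPAC values.
    import numpy as np

    rng = np.random.default_rng(42)
    source = rng.normal(size=(500, 11)) @ rng.normal(size=(11, 11))  # stand-in source data
    mu, cov = source.mean(axis=0), np.cov(source, rowvar=False)

    synthetic = rng.multivariate_normal(mu, cov, size=15_000)        # synthetic participants

    print("max |mean difference|:", np.abs(mu - synthetic.mean(axis=0)).max())
    print("max |cov difference| :", np.abs(cov - np.cov(synthetic, rowvar=False)).max())
    ```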

  12. Correction of elevation offsets in multiple co-located lidar datasets

    Science.gov (United States)

    Thompson, David M.; Dalyander, P. Soupy; Long, Joseph W.; Plant, Nathaniel G.

    2017-04-07

    Topographic elevation data collected with airborne light detection and ranging (lidar) can be used to analyze short- and long-term changes to beach and dune systems. Analysis of multiple lidar datasets at Dauphin Island, Alabama, revealed systematic, island-wide elevation differences on the order of tens of centimeters (cm) that were not attributable to real-world change and, therefore, were likely to represent systematic sampling offsets. These offsets vary between the datasets, but appear spatially consistent within a given survey. This report describes a method that was developed to identify and correct offsets between lidar datasets collected over the same site at different times so that true elevation changes over time, associated with sediment accumulation or erosion, can be analyzed.
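    A minimal sketch of one common way to estimate and remove such an inter-survey offset, assuming the analyst can designate stable (unchanging) areas; this illustrates the general idea rather than the report's specific procedure: compute the median elevation difference over stable ground and subtract it from the later survey.

    ```python
    # Sketch: estimate a systematic vertical offset between two co-located lidar
    # grids over user-designated stable areas, then remove it. Synthetic arrays;
    # a generic approach, not the report's exact method.
    import numpy as np

    rng = np.random.default_rng(1)
    truth = rng.uniform(0.0, 5.0, size=(100, 100))               # "true" elevations (m)
    survey_a = truth + rng.normal(0, 0.02, truth.shape)          # first survey
    survey_b = truth + 0.25 + rng.normal(0, 0.02, truth.shape)   # later survey, +25 cm offset

    stable = np.zeros(truth.shape, dtype=bool)
    stable[:20, :20] = True                                      # analyst-chosen stable area

    offset = np.nanmedian((survey_b - survey_a)[stable])         # robust offset estimate
    survey_b_corrected = survey_b - offset

    print(f"estimated offset: {offset * 100:.1f} cm")
    ```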

  13. BASE MAP DATASET, HONOLULU COUNTY, HAWAII, USA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  14. BASE MAP DATASET, LOS ANGELES COUNTY, CALIFORNIA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  15. BASE MAP DATASET, CHEROKEE COUNTY, SOUTH CAROLINA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  16. BASE MAP DATASET, EDGEFIELD COUNTY, SOUTH CAROLINA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  17. BASE MAP DATASET, SANTA CRUZ COUNTY, CALIFORNIA

    Data.gov (United States)

    Federal Emergency Management Agency, Department of Homeland Security — FEMA Framework Basemap datasets comprise six of the seven FGDC themes of geospatial data that are used by most GIS applications (Note: the seventh framework theme,...

  18. Image thresholding in the high resolution target movement monitor

    Science.gov (United States)

    Moss, Randy H.; Watkins, Steve E.; Jones, Tristan H.; Apel, Derek B.; Bairineni, Deepti

    2009-03-01

    Image thresholding in the High Resolution Target Movement Monitor (HRTMM) is examined. The HRTMM was developed at the Missouri University of Science and Technology to detect and measure wall movements in underground mines to help reduce fatality and injury rates. The system detects the movement of a target with sub-millimeter accuracy based on the images of one or more laser dots projected on the target and viewed by a high-resolution camera. The relative position of the centroid of the laser dot (determined by software using thresholding concepts) in the images is the key factor in detecting the target movement. Prior versions of the HRTMM set the image threshold based on a manual, visual examination of the images. This work systematically examines the effect of varying threshold on the calculated centroid position and describes an algorithm for determining a threshold setting. First, the thresholding effects on the centroid position are determined for a stationary target. Plots of the centroid positions as a function of varying thresholds are obtained to identify clusters of thresholds for which the centroid position does not change for stationary targets. Second, the target is moved away from the camera in sub-millimeter increments and several images are obtained at each position and analyzed as a function of centroid position, target movement and varying threshold values. With this approach, the HRTMM can accommodate images in batch mode without the need for manual intervention. The capability for the HRTMM to provide automated, continuous monitoring of wall movement is enhanced.
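    A minimal sketch of the centroid calculation the abstract describes (threshold the image, then take the intensity-weighted centroid of the above-threshold pixels), using a synthetic image rather than the HRTMM code:

    ```python
    # Sketch: locate a bright laser-dot centroid after thresholding.
    # Synthetic image; illustrates the idea, not the HRTMM implementation.
    import numpy as np

    yy, xx = np.mgrid[0:50, 0:50]
    img = 255 * np.exp(-(((xx - 31.3) ** 2 + (yy - 18.7) ** 2) / 8.0))  # synthetic dot

    def dot_centroid(image, threshold):
        """Intensity-weighted centroid (x, y) of pixels above the threshold."""
        rows, cols = np.mgrid[0:image.shape[0], 0:image.shape[1]]
        weights = np.where(image > threshold, image, 0.0)
        total = weights.sum()
        return (cols * weights).sum() / total, (rows * weights).sum() / total

    for t in (50, 100, 150):   # examine sensitivity of the centroid to the threshold
        print(t, dot_centroid(img, t))
    ```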

  19. Reference Values of Grip Strength, Prevalence of Low Grip Strength, and Factors Affecting Grip Strength Values in Chinese Adults.

    Science.gov (United States)

    Yu, Ruby; Ong, Sherlin; Cheung, Osbert; Leung, Jason; Woo, Jean

    2017-06-01

    The objectives of this study were to update the reference values of grip strength, to estimate the prevalence of low grip strength, and to examine the impact of different aspects of the measurement protocol on grip strength values in Chinese adults. A cross-sectional survey of Chinese men (n = 714) and women (n = 4014) aged 18-102 years was undertaken in different community settings in Hong Kong. Grip strength was measured with a digital dynamometer (TKK 5401 Grip-D; Takei, Niigata, Japan). Low grip strength was defined as grip strength 2 standard deviations or more below the mean for young adults. The effects of measurement protocol on grip strength values were examined in a subsample of 45 men and women with repeated measures of grip strength taken with a hydraulic dynamometer (Baseline; Fabrication Enterprises Inc, Irvington, NY), using paired t-tests, the intraclass correlation coefficient, and Bland-Altman plots. Grip strength was greater among men than among women. The digital dynamometer gave higher grip strength values than the Baseline hydraulic dynamometer, and higher values were also observed when the measurement was performed with the elbow extended in a standing position, compared with the elbow flexed at 90° in a sitting position, using the same dynamometer. The study updated the reference values of grip strength and estimated the prevalence of low grip strength among Chinese adults spanning a wide age range. These findings might be useful for risk estimation and evaluation of interventions. However, grip strength measurements should be interpreted with caution, as grip strength values can be affected by the type of dynamometer used, assessment posture, and elbow position. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
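    A minimal sketch of the low-grip-strength definition quoted above (a value at least 2 standard deviations below the young-adult mean), using hypothetical reference numbers rather than the study's data:

    ```python
    # Sketch: flag low grip strength as >= 2 SD below a young-adult reference mean.
    # The reference mean/SD below are hypothetical placeholders, not study values.
    def low_grip_cutoff(young_adult_mean_kg, young_adult_sd_kg):
        return young_adult_mean_kg - 2.0 * young_adult_sd_kg

    cutoff = low_grip_cutoff(young_adult_mean_kg=40.0, young_adult_sd_kg=6.0)
    for grip in (25.0, 30.0, 35.0):
        print(grip, "low" if grip <= cutoff else "normal")
    ```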

  20. A Robust Threshold for Iterative Channel Estimation in OFDM Systems

    Directory of Open Access Journals (Sweden)

    A. Kalaycioglu

    2010-04-01

    Full Text Available A novel threshold computation method for pilot-symbol-assisted iterative channel estimation in OFDM systems is considered. As the bits are transmitted in packets, the proposed technique calculates a particular threshold for each data packet in order to select the reliable decoder output symbols and improve the channel estimation performance. Iteratively, additional pilot symbols are established according to the threshold, and the channel is re-estimated with the new pilots inserted into the known channel estimation pilot set. In simulations of poor HF channels, the proposed threshold calculation method for selecting additional pilots performs better than non-iterative channel estimation and than no-threshold and fixed-threshold techniques.
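    A minimal sketch of the general idea (select decoder outputs whose reliability exceeds a per-packet threshold and treat them as additional pilots); the reliability metric and threshold rule below are illustrative placeholders, not the paper's formula.

    ```python
    # Sketch: pick reliable decoded symbols as extra "virtual pilots" using a
    # per-packet threshold. Reliability metric and threshold rule are illustrative.
    import numpy as np

    rng = np.random.default_rng(7)
    llr_magnitudes = np.abs(rng.normal(0, 2.0, size=256))   # stand-in decoder reliabilities

    # Per-packet threshold: a fixed fraction of this packet's mean reliability.
    threshold = 1.2 * llr_magnitudes.mean()
    virtual_pilot_idx = np.flatnonzero(llr_magnitudes > threshold)

    print(f"threshold = {threshold:.2f}, extra pilots selected: {virtual_pilot_idx.size}")
    # These indices would be appended to the known pilot set, the channel
    # re-estimated, and the loop repeated.
    ```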

  1. Threshold-adaptive canny operator based on cross-zero points

    Science.gov (United States)

    Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu

    2018-03-01

    Canny edge detection [1] is a technique to extract useful structural information from vision data while dramatically reducing the amount of data to be processed, and it has been widely applied in computer vision systems. Two thresholds have to be set before edges are separated from the background; usually they are fixed values chosen from the developers' experience [2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
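    For context, a widely used heuristic for automating the Canny thresholds is shown below; it is not the paper's cross-zero-point method, only an illustration of replacing fixed thresholds with image-derived ones (the input path is hypothetical).

    ```python
    # Sketch: median-based heuristic for picking Canny thresholds automatically.
    # NOT the paper's cross-zero-point method; illustrates threshold automation only.
    import cv2
    import numpy as np

    def auto_canny(image_gray, sigma=0.33):
        """Run Canny with thresholds set around the image's median intensity."""
        median = np.median(image_gray)
        lower = int(max(0, (1.0 - sigma) * median))
        upper = int(min(255, (1.0 + sigma) * median))
        return cv2.Canny(image_gray, lower, upper)

    # Example usage (file path is a hypothetical placeholder):
    # edges = auto_canny(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
    ```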

  2. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    Science.gov (United States)

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1/2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering.
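    A minimal sketch of the overall structure (a SART-style update alternated with a thresholding step). Soft-thresholding is used here as a stand-in because the half-threshold operator is not reproduced in this record, and the tiny synthetic system is only for illustration, not the paper's algorithm.

    ```python
    # Sketch: SART-type update alternated with a thresholding step.
    # Soft-thresholding stands in for the half-threshold operator; toy system.
    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.random((40, 25))                     # toy projection matrix
    x_true = np.zeros(25); x_true[[3, 8, 17]] = 1.0
    b = A @ x_true                               # noiseless toy projections

    def soft_threshold(v, tau):
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    x = np.zeros(25)
    row_sums, col_sums = A.sum(axis=1), A.sum(axis=0)
    for _ in range(200):
        x = x + (A.T @ ((b - A @ x) / row_sums)) / col_sums   # SART-type update
        x = soft_threshold(x, 0.001)                           # sparsity-promoting step

    print(np.round(x, 2))
    ```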

  3. Simplified Threshold RSA with Adaptive and Proactive Security

    DEFF Research Database (Denmark)

    Almansa Guerra, Jesus Fernando; Damgård, Ivan Bjerre; Nielsen, Jesper Buus

    2006-01-01

    We present the currently simplest, most efficient, optimally resilient, adaptively secure, and proactive threshold RSA scheme. A main technical contribution is a new rewinding strategy for analysing threshold signature schemes. This new rewinding strategy allows us to prove adaptive security of a proactive threshold signature scheme which was previously assumed to be only statically secure. As a separate contribution, we prove that our protocol is secure in the UC framework.

  4. Alternative method for determining anaerobic threshold in rowers

    Directory of Open Access Journals (Sweden)

    Giovani Dos Santos Cunha

    2008-01-01

    Full Text Available http://dx.doi.org/10.5007/1980-0037.2008v10n4p367 In rowing, the breathing pattern that athletes are trained to use makes it difficult, or even impossible, to detect ventilatory limits, because breathing is coupled to the technical movement. For this reason, some authors have proposed determining the anaerobic threshold from the respiratory exchange ratio (RER), but there is not yet consensus on what value of RER should be used. The objective of this study was to test which value of RER corresponds to the anaerobic threshold and whether this value can be used as an independent parameter for determining the anaerobic threshold of rowers. The sample comprised 23 male rowers, who underwent a maximal cardiorespiratory test on a rowing ergometer with concurrent ergospirometry in order to determine VO2max and the physiological variables corresponding to their anaerobic threshold. The anaerobic threshold was determined using the Dmax (maximal distance) method, and the physiological variables were classified into maximal values and anaerobic threshold values. At maximal effort the rowers reached VO2 of 58.2±4.4 ml.kg-1.min-1, lactate of 8.2±2.1 mmol.L-1, power of 384±54.3 W, and RER of 1.26±0.1. At the anaerobic threshold they reached VO2 of 46.9±7.5 ml.kg-1.min-1, lactate of 4.6±1.3 mmol.L-1, power of 300±37.8 W, and RER of 0.99±0.1. Conclusions: the RER can be used as an independent method for determining the anaerobic threshold of rowers, adopting a value of 0.99; however, the RER should exhibit a non-linear increase above this figure.
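    A minimal sketch of the Dmax method mentioned above (the threshold is taken at the point on a fitted lactate curve farthest from the straight line joining the first and last data points), using synthetic lactate values rather than the study's data:

    ```python
    # Sketch: Dmax estimate of the anaerobic threshold from a lactate-power curve.
    # Synthetic data; illustrates the method, not the study's measurements.
    import numpy as np

    power = np.array([100, 150, 200, 250, 300, 350, 400], dtype=float)   # W
    lactate = np.array([1.1, 1.3, 1.7, 2.4, 3.8, 6.0, 9.0])              # mmol/L

    coeffs = np.polyfit(power, lactate, 3)                 # smooth 3rd-order fit
    p_fine = np.linspace(power[0], power[-1], 500)
    l_fine = np.polyval(coeffs, p_fine)

    # Perpendicular distance from each curve point to the chord (first -> last point).
    p0, l0, p1, l1 = power[0], lactate[0], power[-1], lactate[-1]
    dist = np.abs((p1 - p0) * (l0 - l_fine) - (p0 - p_fine) * (l1 - l0)) / np.hypot(p1 - p0, l1 - l0)

    threshold_power = p_fine[np.argmax(dist)]
    print(f"Dmax anaerobic threshold ~ {threshold_power:.0f} W")
    ```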

  5. Satellite-Based Precipitation Datasets

    Science.gov (United States)

    Munchak, S. J.; Huffman, G. J.

    2017-12-01

    Of the possible sources of precipitation data, those based on satellites provide the greatest spatial coverage. There is a wide selection of datasets, algorithms, and versions from which to choose, which can be confusing to non-specialists wishing to use the data. The International Precipitation Working Group (IPWG) maintains tables of the major publicly available, long-term, quasi-global precipitation data sets (http://www.isac.cnr.it/ipwg/data/datasets.html), and this talk briefly reviews the various categories. As examples, NASA provides two sets of quasi-global precipitation data sets: the older Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) and current Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) mission (IMERG). Both provide near-real-time and post-real-time products that are uniformly gridded in space and time. The TMPA products are 3-hourly 0.25°x0.25° on the latitude band 50°N-S for about 16 years, while the IMERG products are half-hourly 0.1°x0.1° on 60°N-S for over 3 years (with plans to go to 16+ years in Spring 2018). In addition to the precipitation estimates, each data set provides fields of other variables, such as the satellite sensor providing estimates and estimated random error. The discussion concludes with advice about determining suitability for use, the necessity of being clear about product names and versions, and the need for continued support for satellite- and surface-based observation.

  6. FASTQSim: platform-independent data characterization and in silico read generation for NGS datasets.

    Science.gov (United States)

    Shcherbina, Anna

    2014-08-15

    High-throughput next generation sequencing technologies have enabled rapid characterization of clinical and environmental samples. Consequently, the largest bottleneck to actionable data has become sample processing and bioinformatics analysis, creating a need for accurate and rapid algorithms to process genetic data. Perfectly characterized in silico datasets are a useful tool for evaluating the performance of such algorithms: background contaminating organisms are observed in sequenced mixtures of organisms, whereas in silico samples provide exact ground truth. To create the best value for evaluating algorithms, in silico data should mimic actual sequencer data as closely as possible. FASTQSim is a tool that provides the dual functionality of NGS dataset characterization and metagenomic data generation. FASTQSim is sequencing platform-independent, and computes distributions of read length, quality scores, indel rates, single point mutation rates, indel size, and similar statistics for any sequencing platform. To create training or testing datasets, FASTQSim has the ability to convert target sequences into in silico reads with specific error profiles obtained in the characterization step. FASTQSim enables users to assess the quality of NGS datasets. The tool provides information about read length, read quality, repetitive and non-repetitive indel profiles, and single base pair substitutions. FASTQSim allows the user to simulate individual read datasets that can be used as standardized test scenarios for planning sequencing projects or for benchmarking metagenomic software. In this regard, in silico datasets generated with the FASTQSim tool hold several advantages over natural datasets: they are sequencing platform independent, extremely well characterized, and less expensive to generate. Such datasets are valuable in a number of applications, including the training of assemblers for multiple platforms, benchmarking bioinformatics algorithm performance, and creating challenge datasets.
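    A minimal sketch of the kind of dataset characterization described above (per-read length and mean Phred quality from a FASTQ file); this is a generic illustration, not FASTQSim's implementation, and the input path is a hypothetical placeholder.

    ```python
    # Sketch: summarize read lengths and mean Phred qualities from a FASTQ file.
    # Generic illustration of dataset characterization; not FASTQSim code.
    import statistics

    def characterize_fastq(path):
        lengths, mean_quals = [], []
        with open(path) as fh:
            while True:
                header = fh.readline()
                if not header:
                    break
                seq = fh.readline().strip()
                fh.readline()                      # '+' separator line
                qual = fh.readline().strip()
                lengths.append(len(seq))
                mean_quals.append(statistics.mean(ord(c) - 33 for c in qual))  # Phred+33
        return {"reads": len(lengths),
                "mean_length": statistics.mean(lengths),
                "mean_quality": statistics.mean(mean_quals)}

    # print(characterize_fastq("sample.fastq"))   # hypothetical input file
    ```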

  7. The body fades away: investigating the effects of transparency of an embodied virtual body on pain threshold and body ownership

    Science.gov (United States)

    Martini, Matteo; Kilteni, Konstantina; Maselli, Antonella; Sanchez-Vives, Maria V.

    2015-01-01

    The feeling of “ownership” over an external dummy/virtual body (or body part) has been proven to have both physiological and behavioural consequences. For instance, the vision of an “embodied” dummy or virtual body can modulate pain perception. However, the impact of partial or total invisibility of the body on physiology and behaviour has hardly been explored, since it presents obvious difficulties in the real world. In this study we explored how body transparency affects both body ownership and pain threshold. By means of virtual reality, we presented healthy participants with a virtual co-located body at four different levels of transparency, while pain threshold was tested with increasing ramps of heat stimulation. We found that the strength of the body ownership illusion decreases as the body becomes more transparent. Nevertheless, in the conditions where the body was semi-transparent, higher levels of ownership over a see-through body resulted in increased pain sensitivity. Virtual body ownership can be used for the development of pain management interventions. However, we demonstrate that making the body invisible does not increase the pain threshold. Therefore, body transparency is not a good strategy to decrease pain in clinical contexts, although this remains to be tested. PMID:26415748

  8. Fatigue threshold studies in Fe, Fe-Si, and HSLA steel: Part II. Thermally activated behavior of the effective stress intensity at threshold

    International Nuclear Information System (INIS)

    Yu, W.; Esaklul, K.; Gerberich, W.W.

    1984-01-01

    It is shown that closure mechanisms alone cannot fully explain increasing fatigue thresholds with decreasing test temperature. The implication is that fatigue crack propagation near threshold is a thermally activated process, and the effective threshold stress intensity correlates with the thermal component of the flow stress. A fractographic study of the fatigue surfaces was performed. Water vapor in room air promotes the formation of oxide and intergranular crack growth. At lower temperatures, a brittle-type cyclic cleavage fatigue surface was observed, but the ductile process persisted even at 123 K. Arrest marks found in all three modes of fatigue crack growth suggest that fatigue crack growth near threshold is controlled by the subcell structure. The effective fatigue threshold may be related to the square root of one plus the strain rate sensitivity.
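    The final relation, written out explicitly under an assumed notation (with m denoting the strain-rate sensitivity and proportionality assumed for "related to"):

    ```latex
    % Proposed scaling of the effective fatigue threshold (notation assumed):
    \Delta K_{\mathrm{th,eff}} \;\propto\; \sqrt{1 + m}, \qquad m = \text{strain-rate sensitivity}
    ```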

  9. Se-SAD serial femtosecond crystallography datasets from selenobiotinyl-streptavidin

    Science.gov (United States)

    Yoon, Chun Hong; Demirci, Hasan; Sierra, Raymond G.; Dao, E. Han; Ahmadi, Radman; Aksit, Fulya; Aquila, Andrew L.; Batyuk, Alexander; Ciftci, Halilibrahim; Guillet, Serge; Hayes, Matt J.; Hayes, Brandon; Lane, Thomas J.; Liang, Meng; Lundström, Ulf; Koglin, Jason E.; Mgbam, Paul; Rao, Yashas; Rendahl, Theodore; Rodriguez, Evan; Zhang, Lindsey; Wakatsuki, Soichi; Boutet, Sébastien; Holton, James M.; Hunter, Mark S.

    2017-04-01

    We provide a detailed description of selenobiotinyl-streptavidin (Se-B SA) co-crystal datasets recorded using the Coherent X-ray Imaging (CXI) instrument at the Linac Coherent Light Source (LCLS) for selenium single-wavelength anomalous diffraction (Se-SAD) structure determination. Se-B SA was chosen as the model system because of the high affinity between biotin and streptavidin; the sulfur atom in the biotin molecule (C10H16N2O3S) is substituted with selenium. The dataset was collected at three different transmissions (100, 50, and 10%) using a serial sample chamber setup that allows two sample chambers, a front chamber and a back chamber, to operate simultaneously. Diffraction patterns from Se-B SA were recorded to a resolution of 1.9 Å. The dataset is publicly available through the Coherent X-ray Imaging Data Bank (CXIDB) and also on LCLS compute nodes as a resource for research and algorithm development.

  10. Dataset of transcriptional landscape of B cell early activation

    Directory of Open Access Journals (Sweden)

    Alexander S. Garruss

    2015-09-01

    Full Text Available Signaling via B cell receptors (BCR) and Toll-like receptors (TLRs) results in activation of B cells with distinct physiological outcomes, but the transcriptional regulatory mechanisms that drive activation and distinguish these pathways remain unknown. At early time points after BCR and TLR ligand exposure, 0.5 and 2 h, RNA-seq was performed, allowing observations on rapid transcriptional changes. At 2 h, ChIP-seq was performed to allow observations on important regulatory mechanisms potentially driving transcriptional change. The dataset includes RNA-seq, ChIP-seq of control (input), RNA Pol II, H3K4me3, and H3K27me3, and a separate RNA-seq for miRNA expression, which can be found at Gene Expression Omnibus Dataset GSE61608. Here, we provide details on the experimental and analysis methods used to obtain and analyze this dataset and to examine the transcriptional landscape of B cell early activation.

  11. Taste perception with age: pleasantness and its relationships with threshold sensitivity and supra-threshold intensity of five taste qualities

    NARCIS (Netherlands)

    Mojet, J.; Christ-Hazelhof, E.; Heidema, J.

    2005-01-01

    The relationships between threshold sensitivity, supra-threshold intensity of NaCl, KCl, sucrose, aspartame, acetic acid, citric acid, caffeine, quinine HCl, monosodium glutamate (MSG) and inosine 5′-monophosphate (IMP), and the pleasantness of these stimuli in products, were studied in 21 young

  12. U.S. Climate Divisional Dataset (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data has been superseded by a newer version of the dataset. Please refer to NOAA's Climate Divisional Database for more information. The U.S. Climate Divisional...

  13. Schubert calculus and threshold polynomials of affine fusion

    International Nuclear Information System (INIS)

    Irvine, S.E.; Walton, M.A.

    2000-01-01

    We show how the threshold level of affine fusion, the fusion of Wess-Zumino-Witten (WZW) conformal field theories, fits into the Schubert calculus introduced by Gepner. The Pieri rule can be modified in a simple way to include the threshold level, so that calculations may be done for all (non-negative integer) levels at once. With the usual Giambelli formula, the modified Pieri formula deforms the tensor product coefficients (and the fusion coefficients) into what we call threshold polynomials. We compare them with the q-deformed tensor product coefficients and fusion coefficients that are related to q-deformed weight multiplicities. We also discuss the meaning of the threshold level in the context of paths on graphs

  14. UK surveillance: provision of quality assured information from combined datasets.

    Science.gov (United States)

    Paiba, G A; Roberts, S R; Houston, C W; Williams, E C; Smith, L H; Gibbens, J C; Holdship, S; Lysons, R

    2007-09-14

    Surveillance information is most useful when provided within a risk framework, which is achieved by presenting results against an appropriate denominator. Often the datasets are captured separately and for different purposes, and will have inherent errors and biases that can be further confounded by the act of merging. The United Kingdom Rapid Analysis and Detection of Animal-related Risks (RADAR) system contains data from several sources and provides both data extracts for research purposes and reports for wider stakeholders. Considerable efforts are made to optimise the data in RADAR during the Extraction, Transformation and Loading (ETL) process. Despite efforts to ensure data quality, the final dataset inevitably contains some data errors and biases, most of which cannot be rectified during subsequent analysis. So, in order for users to establish the 'fitness for purpose' of data merged from more than one data source, Quality Statements are produced as defined within the overarching surveillance Quality Framework. These documents detail identified data errors and biases following ETL and report construction as well as relevant aspects of the datasets from which the data originated. This paper illustrates these issues using RADAR datasets, and describes how they can be minimised.

  15. Climate Prediction Center IR 4km Dataset

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — CPC IR 4km dataset was created from all available individual geostationary satellite data which have been merged to form nearly seamless global (60N-60S) IR...

  16. Multivariate Analysis of Multiple Datasets: a Practical Guide for Chemical Ecology.

    Science.gov (United States)

    Hervé, Maxime R; Nicolè, Florence; Lê Cao, Kim-Anh

    2018-03-01

    Chemical ecology has strong links with metabolomics, the large-scale study of all metabolites detectable in a biological sample. Consequently, chemical ecologists are often challenged by the statistical analyses of such large datasets. This holds especially true when the purpose is to integrate multiple datasets to obtain a holistic view and a better understanding of a biological system under study. The present article provides a comprehensive resource to analyze such complex datasets using multivariate methods. It starts from the necessary pre-treatment of data including data transformations and distance calculations, to the application of both gold standard and novel multivariate methods for the integration of different omics data. We illustrate the process of analysis along with detailed results interpretations for six issues representative of the different types of biological questions encountered by chemical ecologists. We provide the necessary knowledge and tools with reproducible R codes and chemical-ecological datasets to practice and teach multivariate methods.

  17. Biomechanical Strength of Retrograde Fixation in Proximal Third Scaphoid Fractures.

    Science.gov (United States)

    Daly, Charles A; Boden, Allison L; Hutton, William C; Gottschalk, Michael B

    2018-04-01

    Current techniques for fixation of proximal pole scaphoid fractures utilize antegrade fixation via a dorsal approach, endangering the delicate vascular supply of the dorsal scaphoid. Volar and dorsal approaches demonstrate equivalent clinical outcomes in scaphoid wrist fractures, but no study has evaluated the biomechanical strength for fractures of the proximal pole. This study compares the biomechanical strength of antegrade and retrograde fixation for fractures of the proximal pole of the scaphoid. A simulated proximal pole scaphoid fracture was produced in 22 matched cadaveric scaphoids, which were then assigned randomly to either antegrade or retrograde fixation with a cannulated headless compression screw. Cyclic loading and load-to-failure testing were performed, and screw length, number of cycles, and maximum load sustained were recorded. There were no significant differences in average screw length (25.5 mm vs 25.6 mm, P = .934), average number of cyclic loading cycles (3738 vs 3847, P = .552), average load to failure (348 N vs 371 N, P = .357), and number of catastrophic failures observed between the antegrade and retrograde fixation groups (3 in each). Practical equivalence between the 2 groups was calculated, and the 2 groups were demonstrated to be practically equivalent (upper threshold P = .010). For this model of proximal pole scaphoid wrist fractures, antegrade and retrograde screw configurations have been shown to be equivalent in terms of biomechanical strength. With further clinical study, we hope surgeons will be able to base their choice of fixation technique on approaches to bone grafting, concern for the tenuous blood supply, and surgeon preference, without fear of poor biomechanical properties.

  18. Development of K-Basin High-Strength Homogeneous Sludge Simulants and Correlations Between Unconfined Compressive Strength and Shear Strength

    Energy Technology Data Exchange (ETDEWEB)

    Onishi, Yasuo; Baer, Ellen BK; Chun, Jaehun; Yokuda, Satoru T.; Schmidt, Andrew J.; Sande, Susan; Buchmiller, William C.

    2011-02-20

    K-Basin sludge will be stored in the Sludge Transport and Storage Containers (STSCs) at an interim storage location on Central Plateau before being treated and packaged for disposal. During the storage period, sludge in the STSCs may consolidate/agglomerate, potentially resulting in high-shear-strength material. The Sludge Treatment Project (STP) plans to use water jets to retrieve K-Basin sludge after the interim storage. STP has identified shear strength to be a key parameter that should be bounded to verify the operability and performance of sludge retrieval systems. Determining the range of sludge shear strength is important to gain high confidence that a water-jet retrieval system can mobilize stored K-Basin sludge from the STSCs. The shear strength measurements will provide a basis for bounding sludge properties for mobilization and erosion. Thus, it is also important to develop potential simulants to investigate these phenomena. Long-term sludge storage tests conducted by Pacific Northwest National Laboratory (PNNL) show that high-uranium-content K-Basin sludge can self-cement and form a strong sludge with a bulk shear strength of up to 65 kPa. Some of this sludge has 'paste' and 'chunks' with shear strengths of approximately 3-5 kPa and 380-770 kPa, respectively. High-uranium-content sludge samples subjected to hydrothermal testing (e.g., 185 C, 10 hours) have been observed to form agglomerates with a shear strength up to 170 kPa. These high values were estimated by measured unconfined compressive strength (UCS) obtained with a pocket penetrometer. Due to its ease of use, it is anticipated that a pocket penetrometer will be used to acquire additional shear strength data from archived K-Basin sludge samples stored at the PNNL Radiochemical Processing Laboratory (RPL) hot cells. It is uncertain whether the pocket penetrometer provides accurate shear strength measurements of the material. To assess the bounding material strength and

  19. Cumulative t-link threshold models for the genetic analysis of calving ease scores

    Directory of Open Access Journals (Sweden)

    Tempelman Robert J

    2003-09-01

    Full Text Available Abstract In this study, a hierarchical threshold mixed model based on a cumulative t-link specification for the analysis of ordinal data, or, more specifically, calving ease scores, was developed. The validation of this model and of the Markov chain Monte Carlo (MCMC) algorithm was carried out on data simulated from normally and t4 (i.e., a t-distribution with four degrees of freedom) distributed populations, using the deviance information criterion (DIC) and a pseudo Bayes factor (PBF) measure to validate recently proposed model choice criteria. The simulation study indicated that although inference on the degrees-of-freedom parameter is possible, MCMC mixing was problematic. Nevertheless, the DIC and PBF were validated to be satisfactory measures of model fit to data. A sire and maternal grandsire cumulative t-link model was applied to a calving ease dataset from 8847 Italian Piemontese first-parity dams. The cumulative t-link model was shown to lead to posterior means of direct and maternal heritabilities (0.40 ± 0.06, 0.11 ± 0.04) and a direct-maternal genetic correlation (-0.58 ± 0.15) that were not different from the corresponding posterior means of the heritabilities (0.42 ± 0.07, 0.14 ± 0.04) and the genetic correlation (-0.55 ± 0.14) inferred under the conventional cumulative probit link threshold model. Furthermore, the correlation (> 0.99) between posterior means of sire progeny merit from the two models suggested no meaningful re-rankings. Nevertheless, the cumulative t-link model was decisively chosen as the better-fitting model for this calving ease data using DIC and PBF.

  20. Effects of programming threshold and maplaw settings on acoustic thresholds and speech discrimination with the MED-EL COMBI 40+ cochlear implant.

    Science.gov (United States)

    Boyd, Paul J

    2006-12-01

    The principal task in the programming of a cochlear implant (CI) speech processor is the setting of the electrical dynamic range (output) for each electrode, to ensure that a comfortable loudness percept is obtained for a range of input levels. This typically involves separate psychophysical measurement of electrical threshold (θe) and upper tolerance levels using short current bursts generated by the fitting software. Anecdotal clinical experience and some experimental studies suggest that the measurement of θe is relatively unimportant and that the setting of upper tolerance limits is more critical for processor programming. The present study aims to test this hypothesis and examines in detail how acoustic thresholds and speech recognition are affected by setting of the lower limit of the output ("Programming threshold" or "PT") to understand better the influence of this parameter and how it interacts with certain other programming parameters. Test programs (maps) were generated with PT set to artificially high and low values and tested on users of the MED-EL COMBI 40+ CI system. Acoustic thresholds and speech recognition scores (sentence tests) were measured for each of the test maps. Acoustic thresholds were also measured using maps with a range of output compression functions ("maplaws"). In addition, subjective reports were recorded regarding the presence of "background threshold stimulation" which is occasionally reported by CI users if PT is set to relatively high values when using the CIS strategy. Manipulation of PT was found to have very little effect. Setting PT to minimum produced a mean 5 dB (S.D. = 6.25) increase in acoustic thresholds, relative to thresholds with PT set normally, and had no statistically significant effect on speech recognition scores on a sentence test. On the other hand, maplaw setting was found to have a significant effect on acoustic thresholds (raised as maplaw is made more linear), which provides some theoretical