WorldWideScience

Sample records for minimum feature sizes

  1. Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2015-01-01

    In this work we consider the problem of feature enhancement for noise-robust automatic speech recognition (ASR). We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features, which is based on a minimum number of well-established, theoretically consistent assumptions. ... State-of-the-art MFCC feature enhancement algorithms within this class of algorithms, while theoretically suboptimal or based on theoretically inconsistent assumptions, perform close to optimally in the MMSE sense. ...

  2. LDPC Codes with Minimum Distance Proportional to Block Size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low

  3. A minimum-size tokamak concept for conditions near ignition

    International Nuclear Information System (INIS)

    Lehnert, B.

    1983-01-01

    Based on a combination of Alcator scaling and a recent theory on the Murakami density limit, a minimum-size tokamak concept (Minitor) is proposed. Even if this concept does not aim at alpha particle containment, it has the important goal of reaching plasma core temperatures and Lawson parameter values required for ignition, by ohmic heating alone and under macroscopically stable conditions. The minimized size, and the associated enhancement of the plasma current density, are found to favour high plasma temperatures, average densities, and beta values. The goal of this concept appears to be realizable by relatively modest technical means. (author)

  4. Estimation of minimum sample size for identification of the most important features: a case study providing a qualitative B2B sales data set

    Directory of Open Access Journals (Sweden)

    Marko Bohanec

    2017-01-01

    An important task in machine learning is to reduce data set dimensionality, which in turn contributes to reducing computational load and data collection costs, while improving human understanding and interpretation of models. We introduce an operational guideline for determining the minimum number of instances sufficient to identify correct ranks of features with the highest impact. We conduct tests based on qualitative B2B sales forecasting data. The results show that a relatively small instance subset is sufficient for identifying the most important features when rank is not important.
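
    The sketch below illustrates the kind of procedure this abstract describes: repeatedly subsample the data at increasing sizes, rank features on each subsample, and take the smallest size at which the top-ranked set stabilizes. The synthetic data, the random-forest importance ranker, and the top-5 overlap criterion are assumptions for illustration, not the authors' exact rankers or evaluation protocol.

```python
"""Sketch: estimate the minimum number of instances needed for a stable
feature ranking. Illustrative only -- the dataset, the random-forest
importance ranker, and the top-k overlap criterion are assumptions."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)

def top_k_features(X, y, k=5):
    """Rank features by random-forest importance and return the top-k set."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    return set(np.argsort(rf.feature_importances_)[::-1][:k])

reference = top_k_features(X, y)              # ranking on the full data set

for n in (50, 100, 200, 400, 800, 1600):      # candidate sample sizes
    overlaps = []
    for _ in range(10):                       # repeated random subsamples
        idx = rng.choice(len(X), size=n, replace=False)
        overlaps.append(len(top_k_features(X[idx], y[idx]) & reference) / 5)
    print(f"n={n:5d}  mean top-5 overlap with full-data ranking: "
          f"{np.mean(overlaps):.2f}")
# The smallest n whose overlap stays near 1.0 is a pragmatic minimum sample size.
```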

  5. A comparative analysis of DNA barcode microarray feature size

    Directory of Open Access Journals (Sweden)

    Smith Andrew M

    2009-10-01

    Background: Microarrays are an invaluable tool in many modern genomic studies. It is generally perceived that decreasing the size of microarray features leads to arrays with higher resolution (due to greater feature density), but this increase in resolution can compromise sensitivity. Results: We demonstrate that barcode microarrays with smaller features are equally capable of detecting variation in DNA barcode intensity when compared to larger feature sizes within a specific microarray platform. The barcodes used in this study are the well-characterized set derived from the Yeast KnockOut (YKO) collection used for screens of pooled yeast (Saccharomyces cerevisiae) deletion mutants. We treated these pools with the glycosylation inhibitor tunicamycin as a test compound. Three generations of barcode microarrays at 30, 8 and 5 μm feature sizes independently identified the primary target of tunicamycin to be ALG7. Conclusion: We show that the data obtained with the 5 μm feature size are of comparable quality to the 30 μm size and propose that further shrinking of features could yield barcode microarrays with equal or greater resolving power and, more importantly, higher density.

  6. Estimating minimum polycrystalline aggregate size for macroscopic material homogeneity

    International Nuclear Information System (INIS)

    Kovac, M.; Simonovski, I.; Cizelj, L.

    2002-01-01

    During severe accidents the pressure boundary of the reactor coolant system can be subjected to extreme loadings, which might cause failure. Reliable estimation of the extreme deformations can be crucial for determining the consequences of severe accidents. An important drawback of classical continuum mechanics is its idealization of the inhomogeneous microstructure of materials. Classical continuum mechanics therefore cannot accurately predict the differences between the measured responses of specimens that are different in size but geometrically similar (the size effect). A numerical approach, which models elastic-plastic behavior on the mesoscopic level, is proposed to estimate the minimum size of a polycrystalline aggregate above which it can be considered macroscopically homogeneous. The main idea is to divide the continuum into a set of sub-continua. Analysis of a macroscopic element is divided into modeling the random grain structure (using Voronoi tessellation and random orientation of the crystal lattice) and calculation of the strain/stress field. The finite element method is used to obtain numerical solutions of the strain and stress fields. The analysis is limited to 2D models. (author)

  7. Feature-size dependent selective edge enhancement of x-ray images

    International Nuclear Information System (INIS)

    Herman, S.

    1988-01-01

    Morphological filters are nonlinear signal transformations that operate on a picture directly in the space domain. Such filters are based on the theory of mathematical morphology previously formulated. The filter presented here features a "mask" operator (called a "structuring element" in some of the literature) which is a function of the two spatial coordinates x and y. The two basic mathematical operations are called "masked erosion" and "masked dilation". In the case of masked erosion the mask is passed over the input image in a raster pattern. At each position of the mask, the pixel values under the mask are multiplied by the mask pixel values. Then the output pixel value, located at the center position of the mask, is set equal to the minimum of the product of the mask and input values. Similarly, for masked dilation, the output pixel value is the maximum of the product of the input and the mask pixel values. The two basic processes of dilation and erosion can be used to construct the next level of operations: the "positive sieve" (also called "opening") and the "negative sieve" ("closing"). The positive sieve modifies the peaks in the image whereas the negative sieve works on image valleys. The positive sieve is implemented by passing the output of the masked erosion step through the masked dilation function. The negative sieve reverses this procedure, using a dilation followed by an erosion. Each such sifting operator is characterized by a "hole size". It will be shown that the choice of hole size selects the range of pixel detail sizes which are to be enhanced. The shape of the mask governs the shape of the enhancement. Finally, positive sifting is used to enhance positive-going (peak) features, whereas negative sifting enhances the negative-going (valley) landmarks.
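
    A minimal numerical sketch of the masked operators as described above (multiply the neighbourhood by the mask, then take the minimum or maximum) and of the two sieves built from them; the image, mask values, and "hole size" are illustrative assumptions.

```python
"""Sketch of the 'masked erosion/dilation' and sieve operators described in
the abstract, using multiply-then-min / multiply-then-max over a sliding mask."""
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _windows(image, mask):
    """All mask-sized neighbourhoods of the (edge-padded) image."""
    ph, pw = mask.shape[0] // 2, mask.shape[1] // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    return sliding_window_view(padded, mask.shape)

def masked_erosion(image, mask):
    # Output pixel = minimum of (mask * neighbourhood), mask centred on it.
    return (_windows(image, mask) * mask).min(axis=(-2, -1))

def masked_dilation(image, mask):
    # Output pixel = maximum of (mask * neighbourhood).
    return (_windows(image, mask) * mask).max(axis=(-2, -1))

def positive_sieve(image, mask):   # "opening": erosion followed by dilation
    return masked_dilation(masked_erosion(image, mask), mask)

def negative_sieve(image, mask):   # "closing": dilation followed by erosion
    return masked_erosion(masked_dilation(image, mask), mask)

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[30:33, 30:33] = 1.0                  # a small peak-like detail
    hole = np.ones((5, 5))                   # mask whose size sets the "hole size"
    peaks = img - positive_sieve(img, hole)  # details smaller than the hole survive
    print("enhanced peak energy:", peaks.sum())
```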

  8. Estimation of minimum sample size for identification of the most important features: a case study providing a qualitative B2B sales data set

    OpenAIRE

    Marko Bohanec; Mirjana Kljajić Borštnar; Marko Robnik-Šikonja

    2017-01-01

    An important task in machine learning is to reduce data set dimensionality, which in turn contributes to reducing computational load and data collection costs, while improving human understanding and interpretation of models. We introduce an operational guideline for determining the minimum number of instances sufficient to identify correct ranks of features with the highest impact. We conduct tests based on qualitative B2B sales forecasting data. The results show that a relatively small inst...

  9. The Minimum Binding Energy and Size of Doubly Muonic D3 Molecule

    Science.gov (United States)

    Eskandari, M. R.; Faghihi, F.; Mahdavi, M.

    The minimum energy and size of the doubly muonic D3 molecule, in which two of the electrons are replaced by the much heavier muons, are calculated by the well-known variational method. The calculations show that the system possesses two minimum positions, one at the typical muonic distance and the second at the atomic distance. It is shown that at the muonic distance the effective charge, z_eff, is 2.9. We assume a symmetric planar vibrational model between the two minima, and the oscillation potential energy is approximated in this region.

  10. Protograph based LDPC codes with minimum distance linearly growing with block size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  11. On the sizes of expander graphs and minimum distances of graph codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2014-01-01

    We give lower bounds for the minimum distances of graph codes based on expander graphs. The bounds depend only on the second eigenvalue of the graph and the parameters of the component codes. We also give an upper bound on the size of a degree regular graph with given second eigenvalue....

  12. Metagenomic analysis of size-fractionated picoplankton in a marine oxygen minimum zone

    OpenAIRE

    Ganesh, Sangita; Parris, Darren J; DeLong, Edward F; Stewart, Frank J

    2013-01-01

    Marine oxygen minimum zones (OMZs) support diverse microbial communities with roles in major elemental cycles. It is unclear how the taxonomic composition and metabolism of OMZ microorganisms vary between particle-associated and free-living size fractions. We used amplicon (16S rRNA gene) and shotgun metagenome sequencing to compare microbial communities from large (>1.6 μm) and small (0.2–1.6 μm) filter size fractions along a depth gradient in the OMZ off Chile. Despite steep vertical redox ...

  13. Chromospheric rotation. II. Dependence on the size of chromospheric features

    Energy Technology Data Exchange (ETDEWEB)

    Azzarelli, L; Casalini, P; Cerri, S; Denoth, F [Consiglio Nazionale delle Ricerche, Pisa (Italy). Ist. di Elaborazione della Informazione

    1979-08-01

    The dependence of solar rotation on the size of the chromospheric tracers is considered. On the basis of an analysis of Ca II K₃ daily filtergrams taken in the period 8 May-14 August 1972, chromospheric features can be divided into two classes according to their size. Features with sizes falling into the range 24 000-110 000 km can be identified with network elements, while those falling into the range 120 000-300 000 km correspond to active regions, or brightness features of comparable size present at high latitudes. The rotation rate is determined separately for the two families of chromospheric features by means of a cross-correlation technique which directly yields the average daily displacement of tracers due to rotation. Before computing the cross-correlation functions, the chromospheric brightness data were filtered with appropriate bandpass and highpass filters to separate spatial periodicities whose wavelengths fall into the two ranges of size characteristic of the network pattern and of the activity centers. A difference of less than 1% in the rotation rate of the two families of chromospheric features has been found. This is an indication of substantial corotation at chromospheric levels of different short-lived features, both related to solar activity and controlled by the convective supergranular motions.

  14. A Theoretically Consistent Method for Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2014-01-01

    We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise-robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others. ... The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCCs), cepstral mean-subtracted MFCCs (CMS-MFCCs), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account other compressive non-linearities than the logarithmic one usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non-...
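
    For orientation only, the snippet below shows the generic closed-form MMSE estimator (the conditional mean) for a toy jointly Gaussian feature model; the covariances and dimensions are invented, and it does not reproduce the paper's MFCC-domain derivation.

```python
"""Toy illustration of MMSE estimation of a clean feature vector x from a
noisy observation y = x + n under a jointly Gaussian model, i.e. the
conditional mean E[x | y]. Purely illustrative assumptions throughout."""
import numpy as np

rng = np.random.default_rng(1)
d = 13                                        # e.g. number of cepstral coefficients
Sigma_x = np.diag(np.linspace(2.0, 0.5, d))   # assumed clean-feature covariance
Sigma_n = 0.4 * np.eye(d)                     # assumed noise covariance
mu_x = np.zeros(d)

x = rng.multivariate_normal(mu_x, Sigma_x)                 # "clean" feature vector
y = x + rng.multivariate_normal(np.zeros(d), Sigma_n)      # noisy observation

# MMSE estimate under the Gaussian model:
#   x_hat = mu_x + Sigma_x (Sigma_x + Sigma_n)^{-1} (y - mu_x)
gain = Sigma_x @ np.linalg.inv(Sigma_x + Sigma_n)
x_hat = mu_x + gain @ (y - mu_x)

print("noisy MSE:", np.mean((y - x) ** 2))
print("MMSE  MSE:", np.mean((x_hat - x) ** 2))   # smaller on average
```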

  15. Minimum Lens Size Supporting the Leaky-Wave Nature of Slit Dipole Antenna at Terahertz Frequency

    Directory of Open Access Journals (Sweden)

    Niamat Hussain

    2016-01-01

    We designed a slit dipole antenna backed by an extended hemispherical silicon lens and investigated the minimum lens size in which the slit dipole antenna works as a leaky-wave antenna. The slit dipole antenna consists of a planar feeding structure, which is a center-fed and open-ended slot line. A slit dipole antenna backed by an extended hemispherical silicon lens is investigated over a frequency range from 0.2 to 0.4 THz with the center frequency at 0.3 THz. The numerical results show that the antenna gain responses exhibited an increased level of sensitivity to the lens size and increased linearly with increasing lens radius. The lens with the radius of 1.2λo is found to be the best possible minimum lens size for a slit dipole antenna on an extended hemispherical silicon lens.

  16. Spatial, socio-economic, and ecological implications of incorporating minimum size constraints in marine protected area network design.

    Science.gov (United States)

    Metcalfe, Kristian; Vaughan, Gregory; Vaz, Sandrine; Smith, Robert J

    2015-12-01

    Marine protected areas (MPAs) are the cornerstone of most marine conservation strategies, but the effectiveness of each one partly depends on its size and distance to other MPAs in a network. Despite this, current recommendations on ideal MPA size and spacing vary widely, and data are lacking on how these constraints might influence the overall spatial characteristics, socio-economic impacts, and connectivity of the resultant MPA networks. To address this problem, we tested the impact of applying different MPA size constraints in English waters. We used the Marxan spatial prioritization software to identify a network of MPAs that met conservation feature targets, whilst minimizing impacts on fisheries; modified the Marxan outputs with the MinPatch software to ensure each MPA met a minimum size; and used existing data on the dispersal distances of a range of species found in English waters to investigate the likely impacts of such spatial constraints on the region's biodiversity. Increasing MPA size had little effect on total network area or the location of priority areas, but as MPA size increased, fishing opportunity cost to stakeholders increased. In addition, as MPA size increased, the number of closely connected sets of MPAs in networks and the average distance between neighboring MPAs decreased, which consequently increased the proportion of the planning region that was isolated from all MPAs. These results suggest networks containing large MPAs would be more viable for the majority of the region's species that have small dispersal distances, but dispersal between MPA sets and spill-over of individuals into unprotected areas would be reduced. These findings highlight the importance of testing the impact of applying different MPA size constraints because there are clear trade-offs that result from the interaction of size, number, and distribution of MPAs in a network. © 2015 Society for Conservation Biology.

  17. Pupil size reflects the focus of feature-based attention.

    Science.gov (United States)

    Binda, Paola; Pereverzeva, Maria; Murray, Scott O

    2014-12-15

    We measured pupil size in adult human subjects while they selectively attended to one of two surfaces, bright and dark, defined by coherently moving dots. The two surfaces were presented at the same location; therefore, subjects could select the cued surface only on the basis of its features. With no luminance change in the stimulus, we find that pupil size was smaller when the bright surface was attended and larger when the dark surface was attended: an effect of feature-based (or surface-based) attention. With the same surfaces at nonoverlapping locations, we find a similar effect of spatial attention. The pupil size modulation cannot be accounted for by differences in eye position and by other variables known to affect pupil size such as task difficulty, accommodation, or the mere anticipation (imagery) of bright/dark stimuli. We conclude that pupil size reflects not just luminance or cognitive state, but the interaction between the two: it reflects which luminance level in the visual scene is relevant for the task at hand. Copyright © 2014 the American Physiological Society.

  18. Nanopatterned surface with adjustable area coverage and feature size fabricated by photocatalysis

    Energy Technology Data Exchange (ETDEWEB)

    Bai Yang; Zhang Yan; Li Wei; Zhou Xuefeng; Wang Changsong; Feng Xin [State Key Laboratory of Materials-oriented Chemical Engineering, Nanjing University of Technology, Nanjing, Jiangsu 210009 (China); Zhang Luzheng [Petroleum Research Recovery Center, New Mexico Institute of Mining and Technology, Socorro, NM 87801 (United States); Lu Xiaohua, E-mail: xhlu@njut.edu.cn [State Key Laboratory of Materials-oriented Chemical Engineering, Nanjing University of Technology, Nanjing, Jiangsu 210009 (China)

    2009-08-30

    We report an effective approach to fabricate nanopatterns of alkylsilane self-assembly monolayers (SAMs) with desirable coverage and feature size by gradient photocatalysis in TiO₂ aqueous suspension. Growth and photocatalytic degradation of octadecyltrichlorosilane (OTS) were combined to fabricate adjustable monolayered nanopatterns on mica sheet in this work. Systematic atomic force microscopy (AFM) analysis showed that OTS-SAMs that have similar area coverage with different feature sizes and similar feature size with different area coverages can be fabricated by this approach. Contact angle measurement was applied to confirm the gradually varied nanopatterns contributed to the gradient of UV light illumination. Since this approach is feasible for various organic SAMs and substrates, a versatile method was presented to prepare tunable nanopatterns with desirable area coverage and feature size in many applications, such as molecular and biomolecular recognition, sensor and electrode modification.

  19. Nanopatterned surface with adjustable area coverage and feature size fabricated by photocatalysis

    International Nuclear Information System (INIS)

    Bai Yang; Zhang Yan; Li Wei; Zhou Xuefeng; Wang Changsong; Feng Xin; Zhang Luzheng; Lu Xiaohua

    2009-01-01

    We report an effective approach to fabricate nanopatterns of alkylsilane self-assembly monolayers (SAMs) with desirable coverage and feature size by gradient photocatalysis in TiO₂ aqueous suspension. Growth and photocatalytic degradation of octadecyltrichlorosilane (OTS) were combined to fabricate adjustable monolayered nanopatterns on mica sheet in this work. Systematic atomic force microscopy (AFM) analysis showed that OTS-SAMs that have similar area coverage with different feature sizes and similar feature size with different area coverages can be fabricated by this approach. Contact angle measurement was applied to confirm the gradually varied nanopatterns contributed to the gradient of UV light illumination. Since this approach is feasible for various organic SAMs and substrates, a versatile method was presented to prepare tunable nanopatterns with desirable area coverage and feature size in many applications, such as molecular and biomolecular recognition, sensor and electrode modification.

  20. Minimum bar size for flexure testing of irradiated SiC/SiC composite

    International Nuclear Information System (INIS)

    Youngblood, G.E.; Jones, R.H.

    1998-01-01

    This report covers material presented at the IEA/Jupiter Joint International Workshop on SiC/SiC Composites for Fusion Structural Applications held in conjunction with ICFRM-8, Sendai, Japan, Oct. 23-24, 1997. The minimum bar size for 4-point flexure testing of SiC/SiC composite recommended by PNNL for irradiation effects studies is 30 × 6 × 2 mm³ with a span-to-depth ratio of 10/1.

  1. Impact of minimum catch size on the population viability of Strombus gigas (Mesogastropoda: Strombidae) in Quintana Roo, Mexico.

    Science.gov (United States)

    Peel, Joanne R; Mandujano, María del Carmen

    2014-12-01

    The queen conch Strombus gigas represents one of the most important fishery resources of the Caribbean but heavy fishing pressure has led to the depletion of stocks throughout the region, causing the inclusion of this species into CITES Appendix II and IUCN's Red-List. In Mexico, the queen conch is managed through a minimum fishing size of 200 mm shell length and a fishing quota which usually represents 50% of the adult biomass. The objectives of this study were to determine the intrinsic population growth rate of the queen conch population of Xel-Ha, Quintana Roo, Mexico, and to assess the effects of a regulated fishing impact, simulating the extraction of 50% adult biomass on the population density. We used three different minimum size criteria to demonstrate the effects of minimum catch size on the population density and discuss biological implications. Demographic data was obtained through capture-mark-recapture sampling, collecting all animals encountered during three hours, by three divers, at four different sampling sites of the Xel-Ha inlet. The conch population was sampled each month between 2005 and 2006, and bimonthly between 2006 and 2011, tagging a total of 8,292 animals. Shell length and lip thickness were determined for each individual. The average shell length for conch with formed lip in Xel-Ha was 209.39 ± 14.18 mm and the median 210 mm. Half of the sampled conch with lip ranged between 200 mm and 219 mm shell length. Assuming that the presence of the lip is an indicator for sexual maturity, it can be concluded that many animals may form their lip at greater shell lengths than 200 mm and ought to be considered immature. Estimation of relative adult abundance and densities varied greatly depending on the criteria employed for adult classification. When using a minimum fishing size of 200 mm shell length, between 26.2% and up to 54.8% of the population qualified as adults, which represented a simulated fishing impact of almost one third of the

  2. The minimum or natural rate of flow and droplet size ejected by Taylor cone–jets: physical symmetries and scaling laws

    International Nuclear Information System (INIS)

    Gañán-Calvo, A M; Rebollo-Muñoz, N; Montanero, J M

    2013-01-01

    We aim to establish the scaling laws for both the minimum rate of flow attainable in the steady cone–jet mode of electrospray, and the size of the resulting droplets in that limit. Use is made of a small body of literature on Taylor cone–jets reporting precise measurements of the transported electric current and droplet size as a function of the liquid properties and flow rate. The projection of the data onto an appropriate non-dimensional parameter space maps a region bounded by the minimum rate of flow attainable in the steady state. To explain these experimental results, we propose a theoretical model based on the generalized concept of physical symmetry, stemming from the system time invariance (steadiness). A group of symmetries rising at the cone-to-jet geometrical transition determines the scaling for the minimum flow rate and related variables. If the flow rate is decreased below that minimum value, those symmetries break down, which leads to dripping. We find that the system exhibits two instability mechanisms depending on the nature of the forces arising against the flow: one dominated by viscosity and the other by the liquid polarity. In the former case, full charge relaxation is guaranteed down to the minimum flow rate, while in the latter the instability condition becomes equivalent to the symmetry breakdown by charge relaxation or separation. When cone–jets are formed without artificially imposing a flow rate, a microjet is issued quasi-steadily. The flow rate naturally ejected this way coincides with the minimum flow rate studied here. This natural flow rate determines the minimum droplet size that can be steadily produced by any electrohydrodynamic means for a given set of liquid properties. (paper)

  3. The minimum or natural rate of flow and droplet size ejected by Taylor cone-jets: physical symmetries and scaling laws

    Science.gov (United States)

    Gañán-Calvo, A. M.; Rebollo-Muñoz, N.; Montanero, J. M.

    2013-03-01

    We aim to establish the scaling laws for both the minimum rate of flow attainable in the steady cone-jet mode of electrospray, and the size of the resulting droplets in that limit. Use is made of a small body of literature on Taylor cone-jets reporting precise measurements of the transported electric current and droplet size as a function of the liquid properties and flow rate. The projection of the data onto an appropriate non-dimensional parameter space maps a region bounded by the minimum rate of flow attainable in the steady state. To explain these experimental results, we propose a theoretical model based on the generalized concept of physical symmetry, stemming from the system time invariance (steadiness). A group of symmetries rising at the cone-to-jet geometrical transition determines the scaling for the minimum flow rate and related variables. If the flow rate is decreased below that minimum value, those symmetries break down, which leads to dripping. We find that the system exhibits two instability mechanisms depending on the nature of the forces arising against the flow: one dominated by viscosity and the other by the liquid polarity. In the former case, full charge relaxation is guaranteed down to the minimum flow rate, while in the latter the instability condition becomes equivalent to the symmetry breakdown by charge relaxation or separation. When cone-jets are formed without artificially imposing a flow rate, a microjet is issued quasi-steadily. The flow rate naturally ejected this way coincides with the minimum flow rate studied here. This natural flow rate determines the minimum droplet size that can be steadily produced by any electrohydrodynamic means for a given set of liquid properties.

  4. Study on Relation between Hydrodynamic Feature Size of HPAM and Pore Size of Reservoir Rock in Daqing Oilfield

    Directory of Open Access Journals (Sweden)

    Qing Fang

    2015-01-01

    The flow mechanism of the injected fluid was studied by constant-pressure core displacement experiments. A constant pressure gradient in the deep formation is assumed, based on the characteristic pressure-gradient distribution between the injection and production wells and on the mobility of different polymer systems in the deep reservoir. Moreover, the flow rate of the steady stream was quantitatively analyzed, and the critical flow pressure gradient of polymer solutions with different injection parameters was measured in cores of different permeability. The results showed that the polymer hydrodynamic feature size increases with increasing molecular weight. If the concentration of the polymer solution exceeds the critical overlap concentration, molecular chain entanglement occurs and enlarges the hydrodynamic feature size. The polymer hydrodynamic feature size decreased as the salinity of the dilution water increased. When the median radius of the core pores and throats was 5-10 times the hydrodynamic feature size of the polymer system, the polymer solution had better compatibility with the microscopic pore structure of the reservoir. Estimates of polymer-solution mobility in porous media can be used to guide the polymer displacement plan and to select optimum injection parameters.

  5. Biomass size-spectra of macrobenthic communities in the oxygen minimum zone off Chile

    Science.gov (United States)

    Quiroga, Eduardo; Quiñones, Renato; Palma, Maritza; Sellanes, Javier; Gallardo, Víctor A.; Gerdes, Dieter; Rowe, Gilbert

    2005-01-01

    Estimates of macrofaunal secondary production and normalized biomass size-spectra (NBSS) were constructed for macrobenthic communities associated with the oxygen minimum zone (OMZ) in four areas of the continental margin off Chile. The presence of low oxygen conditions in the Humboldt Current System (HCS) off Chile was shown to have important effects on the size structure and secondary production of the benthic communities living in this ecosystem. The distribution of normalized biomass by size was linear (log2-log2 scale) at all stations. The slope of the NBSS ranged from -0.481 to -0.908. The slopes of the NBS-spectra differed significantly between stations located in the OMZ (slope = -0.837) and those located outside the OMZ (slope = -0.463). Estimated secondary production was higher off central Chile (6.8 g C m⁻² y⁻¹) than off northern Chile (2.02 g C m⁻² y⁻¹) and off southern Chile (0.83 g C m⁻² y⁻¹). A comparison with other studies suggests that secondary production in terms of carbon equivalents was higher than in other upwelling regions.
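
    A brief sketch of how an NBSS slope of this kind is typically obtained, assuming synthetic octave size classes and placeholder biomass values rather than the Chilean margin data.

```python
"""Sketch: fit the slope of a normalized biomass size spectrum (NBSS) by
linear regression on log2-log2 axes. Biomass values are synthetic placeholders."""
import numpy as np
from scipy.stats import linregress

# Geometric (octave) size classes: nominal body mass at the class midpoint (mg C)
size_class = np.array([2.0 ** k for k in range(-2, 8)])
biomass    = np.array([5.1, 7.8, 10.2, 12.0, 13.5, 12.8, 11.0, 9.1, 6.4, 4.2])  # mg C m^-2

# Normalize: divide the biomass in each class by the width of that class
class_width = size_class              # octave classes double in width each step
normalized  = biomass / class_width

fit = linregress(np.log2(size_class), np.log2(normalized))
print(f"NBSS slope = {fit.slope:.3f}  (R^2 = {fit.rvalue**2:.2f})")
# Steeper (more negative) slopes indicate proportionally less biomass in large size classes.
```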

  6. Metagenomic analysis of size-fractionated picoplankton in a marine oxygen minimum zone.

    Science.gov (United States)

    Ganesh, Sangita; Parris, Darren J; DeLong, Edward F; Stewart, Frank J

    2014-01-01

    Marine oxygen minimum zones (OMZs) support diverse microbial communities with roles in major elemental cycles. It is unclear how the taxonomic composition and metabolism of OMZ microorganisms vary between particle-associated and free-living size fractions. We used amplicon (16S rRNA gene) and shotgun metagenome sequencing to compare microbial communities from large (>1.6 μm) and small (0.2-1.6 μm) filter size fractions along a depth gradient in the OMZ off Chile. Despite steep vertical redox gradients, size fraction was a significantly stronger predictor of community composition compared to depth. Phylogenetic diversity showed contrasting patterns, decreasing towards the anoxic OMZ core in the small size fraction, but exhibiting maximal values at these depths within the larger size fraction. Fraction-specific distributions were evident for key OMZ taxa, including anammox planctomycetes, whose coding sequences were enriched up to threefold in the 0.2-1.6 μm community. Functional gene composition also differed between fractions, with the >1.6 μm community significantly enriched in genes mediating social interactions, including motility, adhesion, cell-to-cell transfer, antibiotic resistance and mobile element activity. Prokaryotic transposase genes were three to six fold more abundant in this fraction, comprising up to 2% of protein-coding sequences, suggesting that particle surfaces may act as hotbeds for transposition-based genome changes in marine microbes. Genes for nitric and nitrous oxide reduction were also more abundant (three to seven fold) in the larger size fraction, suggesting microniche partitioning of key denitrification steps. These results highlight an important role for surface attachment in shaping community metabolic potential and genome content in OMZ microorganisms.

  7. No Effect of Featural Attention on Body Size Aftereffects

    Directory of Open Access Journals (Sweden)

    Ian David Stephen

    2016-08-01

    Prolonged exposure to images of narrow bodies has been shown to induce a perceptual aftereffect, such that observers’ point of subjective normality (PSN) for bodies shifts towards narrower bodies. The converse effect is shown for adaptation to wide bodies. In low-level stimuli, object attention (attention directed to the object) and spatial attention (attention directed to the location of the object) have been shown to increase the magnitude of visual aftereffects, while object-based attention enhances the adaptation effect in faces. It is not known whether featural attention (attention directed to a specific aspect of the object) affects the magnitude of adaptation effects in body stimuli. Here, we manipulate the attention of Caucasian observers to different featural information in body images, by asking them to rate the fatness or sex typicality of male and female bodies manipulated to appear fatter or thinner than average. PSNs for body fatness were taken at baseline and after adaptation, and a change in PSN (ΔPSN) was calculated. A body size adaptation effect was found, with observers who viewed fat bodies showing an increased PSN, and those exposed to thin bodies showing a reduced PSN. However, manipulations of featural attention to body fatness or sex typicality produced equivalent results, suggesting that featural attention may not affect the strength of the body size aftereffect.

  8. No Effect of Featural Attention on Body Size Aftereffects.

    Science.gov (United States)

    Stephen, Ian D; Bickersteth, Chloe; Mond, Jonathan; Stevenson, Richard J; Brooks, Kevin R

    2016-01-01

    Prolonged exposure to images of narrow bodies has been shown to induce a perceptual aftereffect, such that observers' point of subjective normality (PSN) for bodies shifts toward narrower bodies. The converse effect is shown for adaptation to wide bodies. In low-level stimuli, object attention (attention directed to the object) and spatial attention (attention directed to the location of the object) have been shown to increase the magnitude of visual aftereffects, while object-based attention enhances the adaptation effect in faces. It is not known whether featural attention (attention directed to a specific aspect of the object) affects the magnitude of adaptation effects in body stimuli. Here, we manipulate the attention of Caucasian observers to different featural information in body images, by asking them to rate the fatness or sex typicality of male and female bodies manipulated to appear fatter or thinner than average. PSNs for body fatness were taken at baseline and after adaptation, and a change in PSN (ΔPSN) was calculated. A body size adaptation effect was found, with observers who viewed fat bodies showing an increased PSN, and those exposed to thin bodies showing a reduced PSN. However, manipulations of featural attention to body fatness or sex typicality produced equivalent results, suggesting that featural attention may not affect the strength of the body size aftereffect.

  9. Sequence-Based Prediction of RNA-Binding Proteins Using Random Forest with Minimum Redundancy Maximum Relevance Feature Selection

    Directory of Open Access Journals (Sweden)

    Xin Ma

    2015-01-01

    The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features have important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and a 0.737 Matthews correlation coefficient). The high prediction accuracy suggests that our method can be a useful approach to identify RNA-binding proteins from sequence information.
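
    The sketch below outlines an mRMR-style greedy ordering followed by incremental feature selection with a random forest. Relevance is taken as mutual information, redundancy is approximated by absolute Pearson correlation, and the data are synthetic; this is a generic illustration, not the authors' exact implementation or protein features.

```python
"""Sketch of mRMR-style greedy feature selection followed by incremental
feature selection (IFS) with a random forest. All data and choices below
are illustrative assumptions."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                           random_state=0)

relevance = mutual_info_classif(X, y, random_state=0)
corr = np.abs(np.corrcoef(X, rowvar=False))           # feature-feature redundancy proxy

selected = [int(np.argmax(relevance))]
remaining = [j for j in range(X.shape[1]) if j != selected[0]]
while remaining:                                      # greedy mRMR ordering
    scores = [relevance[j] - corr[j, selected].mean() for j in remaining]
    best = remaining[int(np.argmax(scores))]
    selected.append(best)
    remaining.remove(best)

# Incremental feature selection: grow the feature set in mRMR order and
# keep the size that maximizes cross-validated accuracy (first 20 sizes for brevity).
best_k, best_acc = 1, 0.0
for k in range(1, 21):
    acc = cross_val_score(RandomForestClassifier(n_estimators=60, random_state=0),
                          X[:, selected[:k]], y, cv=5).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc
print(f"best feature subset size: {best_k}, CV accuracy: {best_acc:.3f}")
```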

  10. Rocket photographs of fine structure and wave patterns in the solar temperature minimum

    Science.gov (United States)

    Bonnet, R. M.; Decaudin, M.; Foing, B.; Bruner, M.; Acton, L. W.; Brown, W. A.

    1982-01-01

    A new series of high resolution pictures of the sun has been obtained during the second flight of the Transition Region Camera which occurred on September 23, 1980. The qualitative analysis of the results indicates that a substantial portion of the solar surface at the temperature minimum radiates in non-magnetic regions and from features below 1 arcsec in size. Wave patterns are observed on the 160 nm temperature minimum pictures. They are absent on the Lyman alpha pictures. Their physical characteristics are compatible with those of gravitational and acoustic waves generated by exploding granules.

  11. Determination of minimum sample size for fault diagnosis of automobile hydraulic brake system using power analysis

    Directory of Open Access Journals (Sweden)

    V. Indira

    2015-03-01

    The hydraulic brake is considered one of the most important components in automobile engineering. Condition monitoring and fault diagnosis of such a component are essential for the safety of passengers and vehicles and to minimize unexpected maintenance time. Vibration-based machine learning approaches for condition monitoring of hydraulic brake systems are gaining momentum. Training and testing the classifier are two important activities in the process of feature classification. This study proposes a systematic statistical method called power analysis to find the minimum number of samples required to train the classifier with statistical stability so as to obtain good classification accuracy. Descriptive statistical features have been used, and the most contributing features have been selected using the C4.5 decision tree algorithm. The results of the power analysis have also been verified using a decision tree algorithm, namely C4.5.
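
    A minimal sketch of a power-analysis calculation of minimum sample size, assuming a two-sample t-test model as implemented in statsmodels; the effect size, significance level, and power are illustrative choices, not values from the study.

```python
"""Sketch: solve a statistical power analysis for the minimum number of
training samples per class. Effect size, alpha, and power are illustrative."""
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Cohen's d = 0.8 (large effect), 5% significance level, 90% desired power
n_per_class = analysis.solve_power(effect_size=0.8, alpha=0.05, power=0.9)
print(f"minimum samples per class: {math.ceil(n_per_class)}")
```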

  12. Prediction of minimum UO2 particle size based on thermal stress initiated fracture model

    International Nuclear Information System (INIS)

    Corradini, M.

    1976-08-01

    An analytic study was employed to determine the minimum UO₂ particle size that could survive fragmentation induced by thermal stresses in a UO₂-Na fuel coolant interaction (FCI). A brittle fracture mechanics approach was the basis of the study, whereby stress intensity factors K_I were compared to the fracture toughness K_IC to determine if the particle could fracture. Solid and liquid UO₂ droplets were considered, each with two possible interface contact conditions: perfect wetting by the sodium or a finite heat transfer coefficient. The analysis indicated that particles below the range of 50 microns in radius could survive a UO₂-Na fuel coolant interaction under the most severe temperature conditions without thermal stress fragmentation. Environmental conditions of the fuel-coolant interaction were varied to determine the effects upon K_I and possible fragmentation. The underlying assumptions of the analysis were investigated in light of the analytic results. It was concluded that the analytic study seemed to verify the experimental observations as to the range of the minimum particle size due to thermal stress fragmentation by FCI. However, the method used, when the results are viewed in light of the basic assumptions, indicates that the analysis is crude at best and can be viewed as only a rough order-of-magnitude analysis. The basic complexities in fracture mechanics make further investigation in this area interesting but not necessarily fruitful for the immediate future.

  13. System and method employing a minimum distance and a load feature database to identify electric load types of different electric loads

    Science.gov (United States)

    Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A

    2014-12-23

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.
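
    A toy illustration of the minimum-distance idea described above: each load type is stored as a reference vector of at least four features, and an unknown load is assigned to the nearest reference. The feature names, values, and scaling are invented for the example and are not the patented feature set.

```python
"""Sketch of a minimum-distance classifier over load feature vectors.
All features and reference values below are illustrative assumptions."""
import numpy as np

# Reference load-feature database: [active power, reactive power,
# current THD, displacement power factor] -- illustrative features only.
load_feature_db = {
    "resistive heater":   np.array([1500.0,  10.0, 0.02, 0.99]),
    "induction motor":    np.array([ 750.0, 450.0, 0.05, 0.86]),
    "switch-mode supply": np.array([ 120.0,  30.0, 0.80, 0.98]),
    "LED lighting":       np.array([  40.0,   8.0, 0.35, 0.95]),
}

def identify_load(feature_vector, db=load_feature_db):
    """Return the load type whose stored feature vector is nearest (Euclidean)."""
    scale = np.max(np.vstack(list(db.values())), axis=0)    # crude per-feature scaling
    distances = {name: np.linalg.norm((feature_vector - ref) / scale)
                 for name, ref in db.items()}
    return min(distances, key=distances.get)

measured = np.array([770.0, 430.0, 0.06, 0.87])   # derived from sensed voltage/current
print(identify_load(measured))                    # -> "induction motor"
```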

  14. Size-effect features on the magnetothermopower of bismuth nanowires

    International Nuclear Information System (INIS)

    Condrea, E.; Nicorici, A.

    2011-01-01

    Full text: In this work we have studied the magnetic field dependence of the thermopower (TEP) and resistance of glass-coated Bi wires with diameters (d) from 100 nm to 1.5 μm at temperatures below 80 K. Nanowires have anomalously large values of the thermopower (+100 μV K⁻¹) and relatively high effective resistivities, but their frequencies of SdH oscillations remain those of bulk Bi. The TEP stays positive in longitudinal magnetic fields up to 15 T, where the surface scattering of charge carriers is negligible. Our analysis shows that the anomalous thermopower has a diffusion origin and is a consequence of the microstructure rather than the result of the strong scattering of electrons by the wire walls. The field intensities at which the size-effect features appear on the magnetothermopower curves correspond to the value at which the diameter of the hole cyclotron orbit equals d. Size-effect features were observed only for the set of nanowires with d = 100-350 nm, where the diffusion TEP is dominant. The contribution of the phonon-drag effect was observed in a wire with diameter larger than 400 nm and becomes dominant at a diameter of 1 μm. (authors)

  15. Passive Rocket Diffuser Theory: A Re-Examination of Minimum Second Throat Size

    Science.gov (United States)

    Jones, Daniel R.

    2016-01-01

    Second-throat diffusers serve to isolate rocket engines from the effects of ambient back pressure during testing without using active control systems. Among the most critical design parameters is the relative area of the diffuser throat to that of the nozzle throat. A smaller second throat is generally desirable because it decreases the stagnation-to-ambient pressure ratio the diffuser requires for nominal operation. There is a limit, however. Below a certain size, the second throat can cause pressure buildup within the diffuser and prevent it from reaching the start condition that protects the nozzle from side-load damage. This paper presents a method for improved estimation of the minimum second throat area which enables diffuser start. The new 3-zone model uses traditional quasi-one-dimensional compressible flow theory to approximate the structure of two distinct diffuser flow fields observed in Computational Fluid Dynamics (CFD) simulations and combines them to provide a less-conservative estimate of the second throat size limit. It is unique among second throat sizing methods in that it accounts for all major conical nozzle and second throat diffuser design parameters within its limits of application. The performance of the 3-zone method is compared to the historical normal shock and force balance methods, and verified against a large number of CFD simulations at specific heat ratios of 1.4 and 1.25. Validation is left as future work, and the model is currently intended to function only as a first-order design tool.

  16. Point Counts of Birds in Bottomland Hardwood Forests of the Mississippi Alluvial Valley: Duration, Minimum Sample Size, and Points Versus Visits

    Science.gov (United States)

    Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper

    1993-01-01

    To compare efficacy of point count sampling in bottomland hardwood forests, duration of point count, number of point counts, number of visits to each point during a breeding season, and minimum sample size are examined.

  17. Haptic exploration of fingertip-sized geometric features using a multimodal tactile sensor

    Science.gov (United States)

    Ponce Wong, Ruben D.; Hellman, Randall B.; Santos, Veronica J.

    2014-06-01

    Haptic perception remains a grand challenge for artificial hands. Dexterous manipulators could be enhanced by "haptic intelligence" that enables identification of objects and their features via touch alone. Haptic perception of local shape would be useful when vision is obstructed or when proprioceptive feedback is inadequate, as observed in this study. In this work, a robot hand outfitted with a deformable, bladder-type, multimodal tactile sensor was used to replay four human-inspired haptic "exploratory procedures" on fingertip-sized geometric features. The geometric features varied by type (bump, pit), curvature (planar, conical, spherical), and footprint dimension (1.25 - 20 mm). Tactile signals generated by active fingertip motions were used to extract key parameters for use as inputs to supervised learning models. A support vector classifier estimated order of curvature while support vector regression models estimated footprint dimension once curvature had been estimated. A distal-proximal stroke (along the long axis of the finger) enabled estimation of order of curvature with an accuracy of 97%. Best-performing, curvature-specific, support vector regression models yielded R2 values of at least 0.95. While a radial-ulnar stroke (along the short axis of the finger) was most helpful for estimating feature type and size for planar features, a rolling motion was most helpful for conical and spherical features. The ability to haptically perceive local shape could be used to advance robot autonomy and provide haptic feedback to human teleoperators of devices ranging from bomb defusal robots to neuroprostheses.
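
    The two-stage scheme described above (a classifier for order of curvature, then curvature-specific regressors for footprint dimension) can be sketched as follows; the synthetic "tactile" features below are placeholders for the real sensor signals.

```python
"""Sketch of a two-stage estimator: SVC for curvature class, then a
curvature-specific SVR for footprint dimension. Data are synthetic stand-ins."""
import numpy as np
from sklearn.svm import SVC, SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 300
curvature = rng.integers(0, 3, n)                 # 0 planar, 1 conical, 2 spherical
size_mm   = rng.uniform(1.25, 20.0, n)            # footprint dimension (mm)
# Toy tactile features loosely coupled to curvature and size, plus noise
X = np.column_stack([size_mm + rng.normal(0, 1, n),
                     curvature * 2.0 + rng.normal(0, 0.3, n),
                     size_mm * (curvature + 1) + rng.normal(0, 2, n)])

clf = make_pipeline(StandardScaler(), SVC()).fit(X, curvature)

regressors = {}                                   # one SVR per curvature class
for c in (0, 1, 2):
    mask = curvature == c
    regressors[c] = make_pipeline(StandardScaler(), SVR()).fit(X[mask], size_mm[mask])

x_new = X[:1]
c_hat = int(clf.predict(x_new)[0])                # first estimate curvature order
d_hat = regressors[c_hat].predict(x_new)[0]       # then curvature-specific size
print(f"estimated curvature class {c_hat}, footprint ~ {d_hat:.1f} mm")
```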

  18. Feature Size Effect on Formability of Multilayer Metal Composite Sheets under Microscale Laser Flexible Forming

    Directory of Open Access Journals (Sweden)

    Huixia Liu

    2017-07-01

    Multilayer metal composite sheets possess superior properties to monolithic metal sheets, and their formability differs from that of monolithic sheets. In this research, the feature size effect on the formability of multilayer metal composite sheets under microscale laser flexible forming was studied experimentally. Two-layer copper/nickel composite sheets were selected as the experimental material. Five types of micro molds with different diameters were utilized. The formability of the materials was evaluated by forming depth, thickness thinning, surface quality, and micro-hardness distribution. The results showed that the formability of two-layer copper/nickel composite sheets was strongly influenced by feature size. As feature size increased, the effect of layer stacking sequence on forming depth, thickness thinning ratio, and surface roughness became increasingly large. However, the normalized forming depth, thickness thinning ratio, surface roughness, and micro-hardness of the formed components under the same layer stacking sequence first increased and then decreased with increasing feature size. The deformation behavior of the copper/nickel composite sheets was determined by the external layer; the deformation extent was larger when the copper layer was set as the external layer.

  19. Spatial reorientation in rats (Rattus norvegicus): Use of geometric and featural information as a function of arena size and feature location

    NARCIS (Netherlands)

    Maes, J.H.R.; Fontanari, L.; Regolin, L.

    2009-01-01

    Rats were used in a spatial reorientation task to assess their ability to use geometric and non-geometric, featural, information. Experimental conditions differed in the size of the arena (small, medium, or large) and whether the food-baited corner was near or far from a visual feature. The main

  20. Design features to achieve defence-in-depth in small and medium sized reactors

    International Nuclear Information System (INIS)

    Kuznetsov, Vladimir

    2009-01-01

    Broader incorporation of inherent and passive safety design features has become a 'trademark' of many advanced reactor concepts, including several evolutionary designs and nearly all innovative small and medium sized design concepts. Ensuring adequate defence-in-depth is important for reactors of smaller output because many of them are being designed to allow more proximity to the user, specifically, when non-electrical energy products are targeted. Based on the activities recently performed by the International Atomic Energy Agency, the paper provides a summary description of the design features used to achieve defence in depth in the eleven representative concepts of small and medium sized reactors. (author)

  1. Beam-transport study of an isocentric rotating ion gantry with minimum number of quadrupoles

    International Nuclear Information System (INIS)

    Pavlovic, Marius; Griesmayer, Erich; Seemann, Rolf

    2005-01-01

    A beam-transport study of an isocentric gantry for ion therapy is presented. The gantry is designed with the number of quadrupoles reduced to the theoretical minimum, a feature published for the first time in this paper. This feature has been achieved without compromising the ion-optical functions of the beam-transport system, which is capable of handling non-symmetric beams (beams with different emittances in the vertical and horizontal planes), pencil-beam scanning, double-achromatic optics and beam-size control. The ion-optical properties of the beam-transport system are described, discussed and illustrated by computer simulations performed with the TRANSPORT code.

  2. Control of minimum member size in parameter-free structural shape optimization by a medial axis approximation

    Science.gov (United States)

    Schmitt, Oliver; Steinmann, Paul

    2017-09-01

    We introduce a manufacturing constraint for controlling the minimum member size in structural shape optimization problems, which is of interest, for example, for components fabricated in a molding process. In a parameter-free approach, whereby the coordinates of the FE boundary nodes are used as design variables, the challenging task is to find a generally valid definition for the thickness of non-parametric geometries in terms of their boundary nodes. Therefore we use the medial axis, which is the union of all points with at least two closest points on the boundary of the domain. Since the effort for the exact computation of the medial axis of geometries given by their FE discretization increases sharply with the number of surface elements, we instead use the distance function to approximate the medial axis by a cloud of points. The approximation is demonstrated on three 2D examples. Moreover, the formulation of a minimum thickness constraint is applied to a sensitivity-based shape optimization problem of one 2D and one 3D model.
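
    The following sketch approximates a medial axis on a pixelized 2D geometry using the Euclidean distance transform and reads an approximate minimum member thickness from it; the test geometry and ridge-detection heuristic are illustrative, whereas the paper works directly with FE boundary nodes.

```python
"""Sketch: approximate the medial axis of a 2D binary geometry via the
Euclidean distance transform (ridge points of the distance map) and estimate
the minimum member thickness. Geometry and heuristics are illustrative."""
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

# Binary image of a simple L-shaped member (True = material)
domain = np.zeros((120, 120), dtype=bool)
domain[10:110, 10:40] = True        # vertical limb, 30 px wide
domain[90:110, 10:110] = True       # horizontal limb, 20 px wide

dist = distance_transform_edt(domain)                # distance to the boundary

# Medial-axis approximation: interior points that are local maxima of the distance map
ridge = (dist == maximum_filter(dist, size=3)) & domain & (dist > 1.0)

# Local thickness at a medial point is roughly twice its boundary distance
min_thickness = 2.0 * dist[ridge].min()
print(f"approximate minimum member thickness: {min_thickness:.1f} px")
```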

  3. Communication target object recognition for D2D connection with feature size limit

    Science.gov (United States)

    Ok, Jiheon; Kim, Soochang; Kim, Young-hoon; Lee, Chulhee

    2015-03-01

    Recently, a new concept of device-to-device (D2D) communication called "point-and-link communication" has attracted great attention due to its intuitive and simple operation. This approach enables users to communicate with target devices without any pre-identification information such as SSIDs or MAC addresses, by selecting the target image displayed on the user's own device. In this paper, we present an efficient object matching algorithm that can be applied to look(point)-and-link communications for mobile services. Due to the limited channel bandwidth and low computational power of mobile terminals, the matching algorithm should satisfy low-complexity, low-memory and real-time requirements. To meet these requirements, we propose fast and robust feature extraction that considers descriptor size and processing time. The proposed algorithm utilizes an HSV color histogram, SIFT (Scale Invariant Feature Transform) features and object aspect ratios. To reduce the descriptor size below 300 bytes, a limited number of SIFT keypoints were chosen as feature points and histograms were binarized while maintaining the required performance. Experimental results show the robustness and efficiency of the proposed algorithm.
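
    One plausible way to assemble such a sub-300-byte descriptor is sketched below: a binarized HSV histogram, a handful of binarized SIFT descriptors, and the aspect ratio. It assumes OpenCV with SIFT available (cv2.SIFT_create, OpenCV 4.4 or later) and is not the authors' exact encoding.

```python
"""Sketch of a compact (< 300 byte) object descriptor: binarized HSV colour
histogram + a few binarized SIFT descriptors + aspect ratio. Illustrative only."""
import cv2
import numpy as np

def compact_descriptor(bgr_image, max_keypoints=16):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # 64-bin hue/saturation histogram, binarized against its mean -> 8 bytes
    hist = cv2.calcHist([hsv], [0, 1], None, [8, 8], [0, 180, 0, 256]).ravel()
    hist_bits = np.packbits(hist > hist.mean())

    # Up to 16 strongest SIFT keypoints; each 128-D descriptor binarized
    # against its own median -> 16 bytes per keypoint
    sift = cv2.SIFT_create(nfeatures=max_keypoints)
    _, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        desc = np.zeros((0, 128), dtype=np.float32)
    desc_bits = np.packbits(desc > np.median(desc, axis=1, keepdims=True), axis=1)

    h, w = gray.shape
    aspect = np.array([min(255, int(100 * w / h))], dtype=np.uint8)  # 1 byte

    payload = np.concatenate([hist_bits, desc_bits.ravel(), aspect])
    return payload.tobytes()

img = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)  # stand-in image
d = compact_descriptor(img)
print(len(d), "bytes")   # at most 8 + 16*16 + 1 = 265 bytes
```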

  4. Calculation of Appropriate Minimum Size of Isolation Rooms based on Questionnaire Survey of Experts and Analysis on Conditions of Isolation Room Use

    Science.gov (United States)

    Won, An-Na; Song, Hae-Eun; Yang, Young-Kwon; Park, Jin-Chul; Hwang, Jung-Ha

    2017-07-01

    After the outbreak of the MERS (Middle East Respiratory Syndrome) epidemic, issues were raised regarding response capabilities of medical institutions, including the lack of isolation rooms at hospitals. Since then, the government of Korea has been revising regulations to enforce medical laws in order to expand the operation of isolation rooms and to strengthen standards regarding their mandatory installation at hospitals. Among general and tertiary hospitals in Korea, a total of 159 are estimated to be required to install isolation rooms to meet minimum standards. For the purpose of contributing to hospital construction plans in the future, this study conducted a questionnaire survey of experts and analysed the environment and devices necessary in isolation rooms, to determine their appropriate minimum size to treat patients. The result of the analysis is as follows: First, isolation rooms at hospitals are required to have a minimum 3,300 mm minor axis and a minimum 5,000 mm major axis for the isolation room itself, and a minimum 1,800 mm minor axis for the antechamber where personal protective equipment is donned and removed. Second, the 15 ㎡-or-larger standard for the floor area of isolation rooms will have to be reviewed and standards for the minimum width of isolation rooms will have to be established.

  5. Metastable Features of Economic Networks and Responses to Exogenous Shocks.

    Directory of Open Access Journals (Sweden)

    Ali Hosseiny

    Full Text Available It is well known that a network structure plays an important role in addressing a collective behavior. In this paper we study a network of firms and corporations for addressing metastable features in an Ising based model. In our model we observe that if in a recession the government imposes a demand shock to stimulate the network, metastable features shape its response. Actually we find that there exists a minimum bound where any demand shock with a size below it is unable to trigger the market out of recession. We then investigate the impact of network characteristics on this minimum bound. We surprisingly observe that in a Watts-Strogatz network, although the minimum bound depends on the average of the degrees, when translated into the language of economics, such a bound is independent of the average degrees. This bound is about 0.44ΔGDP, where ΔGDP is the gap of GDP between recession and expansion. We examine our suggestions for the cases of the United States and the European Union in the recent recession, and compare them with the imposed stimulations. While the stimulation in the US has been above our threshold, in the EU it has been far below our threshold. Beside providing a minimum bound for a successful stimulation, our study on the metastable features suggests that in the time of crisis there is a "golden time passage" in which the minimum bound for successful stimulation can be much lower. Hence, our study strongly suggests stimulations to arise within this time passage.

  6. Determination of the minimum size of a statistical representative volume element from a fibre-reinforced composite based on point pattern statistics

    DEFF Research Database (Denmark)

    Hansen, Jens Zangenberg; Brøndsted, Povl

    2013-01-01

    In a previous study, Trias et al. [1] determined the minimum size of a statistical representative volume element (SRVE) of a unidirectional fibre-reinforced composite primarily based on numerical analyses of the stress/strain field. In continuation of this, the present study determines the minimum size of an SRVE based on a statistical analysis of the spatial statistics of the fibre packing patterns found in genuine laminates, and those generated numerically using a microstructure generator. © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  7. Fabrication of Pt nanowires with a diffraction-unlimited feature size by high-threshold lithography

    International Nuclear Information System (INIS)

    Li, Li; Zhang, Ziang; Yu, Miao; Song, Zhengxun; Weng, Zhankun; Wang, Zuobin; Li, Wenjun; Wang, Dapeng; Zhao, Le; Peng, Kuiqing

    2015-01-01

    Although the nanoscale world can already be observed at a diffraction-unlimited resolution using far-field optical microscopy, to make the step from microscopy to lithography still requires a suitable photoresist material system. In this letter, we consider the threshold to be a region with a width characterized by the extreme feature size obtained using a Gaussian beam spot. By narrowing such a region through improvement of the threshold sensitization to intensity in a high-threshold material system, the minimal feature size becomes smaller. By using platinum as the negative photoresist, we demonstrate that high-threshold lithography can be used to fabricate nanowire arrays with a scalable resolution along the axial direction of the linewidth from the micro- to the nanoscale using a nanosecond-pulsed laser source with a wavelength λ0 = 1064 nm. The minimal feature size is only several nanometers (sub-λ0/100). Compared with conventional polymer resist lithography, the advantages of high-threshold lithography are sharper pinpoints of laser intensity triggering the threshold response and also higher robustness allowing for large area exposure by a less-expensive nanosecond-pulsed laser.

  8. Identifying the Minimum Model Features to Replicate Historic Morphodynamics of a Juvenile Delta

    Science.gov (United States)

    Czapiga, M. J.; Parker, G.

    2017-12-01

    We introduce a quasi-2D morphodynamic delta model that improves on past models that require many simplifying assumptions, e.g. a single channel representative of a channel network, fixed channel width, and spatially uniform deposition. Our model is useful for studying long-term progradation rates of any generic micro-tidal delta system with specification of: characteristic grain size, input water and sediment discharges and basin morphology. In particular, we relax the assumption of a single, implicit channel sweeping across the delta topset in favor of an implicit channel network. This network, coupled with recent research on channel-forming Shields number, quantitative assessments of the lateral depositional length of sand (corresponding loosely to levees) and length between bifurcations create a spatial web of deposition within the receiving basin. The depositional web includes spatial boundaries for areas infilling with sands carried as bed material load, as well as those filling via passive deposition of washload mud. Our main goal is to identify the minimum features necessary to accurately model the morphodynamics of channel number, width, depth, and overall delta progradation rate in a juvenile delta. We use the Wax Lake Delta in Louisiana as a test site due to its rapid growth in the last 40 years. Field data including topset/island bathymetry, channel bathymetry, topset/island width, channel width, number of channels, and radial topset length are compiled from US Army Corps of Engineers data for 1989, 1998, and 2006. Additional data is extracted from a DEM from 2015. These data are used as benchmarks for the hindcast model runs. The morphology of Wax Lake Delta is also strongly affected by a pre-delta substrate that acts as a lower "bedrock" boundary. Therefore, we also include closures for a bedrock-alluvial transition and an excess shear rate-law incision model to estimate bedrock incision. The model's framework is generic, but inclusion of individual

  9. Mid-level perceptual features distinguish objects of different real-world sizes.

    Science.gov (United States)

    Long, Bria; Konkle, Talia; Cohen, Michael A; Alvarez, George A

    2016-01-01

    Understanding how perceptual and conceptual representations are connected is a fundamental goal of cognitive science. Here, we focus on a broad conceptual distinction that constrains how we interact with objects--real-world size. Although there appear to be clear perceptual correlates for basic-level categories (apples look like other apples, oranges look like other oranges), the perceptual correlates of broader categorical distinctions are largely unexplored, i.e., do small objects look like other small objects? Because there are many kinds of small objects (e.g., cups, keys), there may be no reliable perceptual features that distinguish them from big objects (e.g., cars, tables). Contrary to this intuition, we demonstrated that big and small objects have reliable perceptual differences that can be extracted by early stages of visual processing. In a series of visual search studies, participants found target objects faster when the distractor objects differed in real-world size. These results held when we broadly sampled big and small objects, when we controlled for low-level features and image statistics, and when we reduced objects to texforms--unrecognizable textures that loosely preserve an object's form. However, this effect was absent when we used more basic textures. These results demonstrate that big and small objects have reliably different mid-level perceptual features, and suggest that early perceptual information about broad-category membership may influence downstream object perception, recognition, and categorization processes. (c) 2015 APA, all rights reserved.

  10. Well-known outstanding geoid and relief depressions as regular wave-woven features on Earth (Indian geoid minimum), Moon (SPA basin), Phobos (Stickney crater), and Miranda (an ovoid).

    Science.gov (United States)

    Kochemasov, Gennady G.

    2010-05-01

    The common interpretation of the deepest and largest depressions on the Moon and Phobos as impact features is unreliable and raises many questions. A real scientific understanding of their origin should take into consideration the fact that they occupy a tectonic position similar to that of a comparable depression on a heavenly body as different in size, composition, and density as Earth. On Earth, as on other celestial bodies, there is a fundamental division into two segments - hemispheres - produced by the interference of the standing warping wave 1 (wavelength 2πR) of four directions [1]. One hemisphere is uplifted (continental, highlands) and the opposite one subsided (oceanic, lowlands). Tectonic features made by wave 2 (sectors) adorn this fundamental structure. Thus, on the risen continental segment appear regularly disposed sectors, also uplifted and subsided. On the Earth's eastern continental hemisphere they are grouped around the Pamirs-Hindukush vertex of the structural octahedron made by the interfering waves 2. Two risen sectors (the highly uplifted African and the opposite uplifted Asian) are separated by two fallen sectors (the subsided Eurasian and the opposite deeply subsided Indoceanic). The Indoceanic sector, with the subsided Indian tectonic granule (πR/4-structure) superposed on it, produces the deepest geoid minimum of Earth (-112 m). The Moon demonstrates its own geoid minimum of the same relative size and in a similar sectoral tectonic position - the SPA basin [2, 3]. This basin represents a deeply subsided sector of the sectoral structure around Mare Orientale (one of the vertices of the lunar structural octahedron). Four sectors converge on this Mare: two subsided - the SPA basin and the opposite Procellarum Ocean - and two uplifted - we call them the "Africanda sector" and the opposite "Antiafricanda" sector to stress the structural similarity with Earth [2]. The highest "Africanda sector" is built of light anorthosites; enrichment with Na makes them even less dense that is required

  11. On the relationship of minimum detectable contrast to dose and lesion size in abdominal CT

    International Nuclear Information System (INIS)

    Zhou, Yifang; Scott, Alexander II; Allahverdian, Janet; Lee, Christina; Kightlinger, Blake; Azizyan, Avetis; Miller, Joseph

    2015-01-01

    CT dose optimization is typically guided by pixel noise or contrast-to-noise ratio that does not delineate low contrast details adequately. We utilized the statistically defined low contrast detectability to study its relationship to dose and lesion size in abdominal CT. A realistically shaped medium sized abdomen phantom was customized to contain a cylindrical void of 4 cm diameter. The void was filled with a low contrast (1% and 2%) insert containing six groups of cylindrical targets ranging from 1.2 mm to 7 mm in size. Helical CT scans were performed using a Siemens 64-slice mCT and a GE Discovery 750 HD at various doses. After the subtractions between adjacent slices, the uniform sections of the filtered backprojection reconstructed images were partitioned to matrices of square elements matching the sizes of the targets. It was verified that the mean values from all the elements in each matrix follow a Gaussian distribution. The minimum detectable contrast (MDC), quantified by the mean signal to background difference equal to the distribution’s standard deviation multiplied by 3.29, corresponding to a 95% confidence level, was found to be related to the phantom specific dose and the element size by a power law (R^2 > 0.990). Independent readings on the 5 mm and 7 mm targets were compared with the measured contrast-to-MDC ratios. The results showed that 93% of the cases were detectable when the measured contrast exceeded the MDC. The correlation of the MDC with the pixel noise and target size was also identified and the relationship was found to be the same for the scanners in the study. To quantify the impact of iterative reconstructions on the low contrast detectability, the noise structure was studied in a similar manner at different doses and with different ASIR blending fractions. The relationship of the dose to the blending fraction and low contrast detectability is presented. (paper)

  12. On the relationship of minimum detectable contrast to dose and lesion size in abdominal CT

    Science.gov (United States)

    Zhou, Yifang; Scott, Alexander, II; Allahverdian, Janet; Lee, Christina; Kightlinger, Blake; Azizyan, Avetis; Miller, Joseph

    2015-10-01

    CT dose optimization is typically guided by pixel noise or contrast-to-noise ratio that does not delineate low contrast details adequately. We utilized the statistically defined low contrast detectability to study its relationship to dose and lesion size in abdominal CT. A realistically shaped medium sized abdomen phantom was customized to contain a cylindrical void of 4 cm diameter. The void was filled with a low contrast (1% and 2%) insert containing six groups of cylindrical targets ranging from 1.2 mm to 7 mm in size. Helical CT scans were performed using a Siemens 64-slice mCT and a GE Discovery 750 HD at various doses. After the subtractions between adjacent slices, the uniform sections of the filtered backprojection reconstructed images were partitioned to matrices of square elements matching the sizes of the targets. It was verified that the mean values from all the elements in each matrix follow a Gaussian distribution. The minimum detectable contrast (MDC), quantified by the mean signal to background difference equal to the distribution’s standard deviation multiplied by 3.29, corresponding to a 95% confidence level, was found to be related to the phantom specific dose and the element size by a power law (R^2 > 0.990). Independent readings on the 5 mm and 7 mm targets were compared with the measured contrast-to-MDC ratios. The results showed that 93% of the cases were detectable when the measured contrast exceeded the MDC. The correlation of the MDC with the pixel noise and target size was also identified and the relationship was found to be the same for the scanners in the study. To quantify the impact of iterative reconstructions on the low contrast detectability, the noise structure was studied in a similar manner at different doses and with different ASIR blending fractions. The relationship of the dose to the blending fraction and low contrast detectability is presented.
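
    The statistical definition used in both records above lends itself to a short sketch: tile a uniform region of a slice-subtraction image into elements matching the target size, take the mean of each element, and set MDC = 3.29 × the standard deviation of those means. Whether the noise is referred back to a single slice (the √2 factor below) is an assumption the abstract does not state.

    ```python
    import numpy as np

    def minimum_detectable_contrast(slice_a, slice_b, element_px):
        """slice_a, slice_b: adjacent CT slices (2D arrays, HU) over a uniform
        region; element_px: element edge length in pixels (matched to target)."""
        diff = slice_a.astype(np.float64) - slice_b.astype(np.float64)
        diff /= np.sqrt(2.0)   # refer subtraction-image noise to one slice (assumed)
        ny = (diff.shape[0] // element_px) * element_px
        nx = (diff.shape[1] // element_px) * element_px
        tiles = diff[:ny, :nx].reshape(ny // element_px, element_px,
                                       nx // element_px, element_px)
        means = tiles.mean(axis=(1, 3)).ravel()   # mean HU of each square element
        return 3.29 * means.std(ddof=1)           # 95% confidence level
    ```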

  13. Interspecific geographic range size-body size relationship and the diversification dynamics of Neotropical furnariid birds.

    Science.gov (United States)

    Inostroza-Michael, Oscar; Hernández, Cristián E; Rodríguez-Serrano, Enrique; Avaria-Llautureo, Jorge; Rivadeneira, Marcelo M

    2018-05-01

    Among the earliest macroecological patterns documented is the relationship between geographic range size and body size, characterized by a minimum geographic range size imposed by the species' body size. This boundary for the geographic range size increases linearly with body size and has been proposed to have implications for lineage evolution and conservation. Nevertheless, the macroevolutionary processes involved in the origin of this boundary and its consequences for lineage diversification have been poorly explored. We evaluate the macroevolutionary consequences of the difference (hereafter the distance) between the observed and the minimum range size required by the species' body size, to untangle its role in the diversification of a species-rich Neotropical bird clade using trait-dependent diversification models. We show that the speciation rate is a hump-shaped function of the distance to the lower boundary: species with the highest and lowest distances to the minimum range size had lower speciation rates, while species at intermediate distances had the highest speciation rates. Further, our results suggest that the distance to the minimum range size is a macroevolutionary constraint that affects the diversification process responsible for the origin of this macroecological pattern in a more complex way than previously envisioned. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.

  14. Quantitative Research on the Minimum Wage

    Science.gov (United States)

    Goldfarb, Robert S.

    1975-01-01

    The article reviews recent research examining the impact of minimum wage requirements on the size and distribution of teenage employment and earnings. The studies measure income distribution, employment levels and effect on unemployment. (MW)

  15. Design of a minimum emittance nBA lattice

    Science.gov (United States)

    Lee, S. Y.

    1998-04-01

    An attempt to design a minimum-emittance n-bend achromat (nBA) lattice has been made. One distinct feature is that dipoles of two different lengths were used. As a multiple-bend achromat, a five-bend achromat lattice with six superperiods was designed. The obtained emittance is three times larger than the theoretical minimum. Tunes were chosen to avoid third-order resonances. In order to correct the first- and second-order chromaticities, eight families of sextupoles were placed. The obtained emittance of the five-bend achromat lattice is almost equal to the minimum emittance of a five-bend achromat lattice consisting of dipoles of equal length.

  16. The contribution of local features to familiarity judgments in music.

    Science.gov (United States)

    Bigand, Emmanuel; Gérard, Yannick; Molin, Paul

    2009-07-01

    The contributions of local and global features to object identification depend upon the context. For example, while local features play an essential role in identification of words and objects, the global features are more influential in face recognition. In order to evaluate the respective strengths of local and global features for face recognition, researchers usually ask participants to recognize human faces (famous or learned) in normal and scrambled pictures. In this paper, we address a similar issue in music. We present the results of an experiment in which musically untrained participants were asked to differentiate famous from unknown musical excerpts that were presented in normal or scrambled ways. Manipulating the size of the temporal window on which the scrambling procedure was applied allowed us to evaluate the minimal length of time necessary for participants to make a familiarity judgment. Quite surprisingly, the minimum duration for differentiation of famous from unknown pieces is extremely short. This finding highlights the contribution of very local features to music memory.

  17. Size and Ultrasound Features Affecting Results of Ultrasound-Guided Fine-Needle Aspiration of Thyroid Nodules.

    Science.gov (United States)

    Dong, YiJie; Mao, MinJing; Zhan, WeiWei; Zhou, JianQiao; Zhou, Wei; Yao, JieJie; Hu, YunYun; Wang, Yan; Ye, TingJun

    2017-11-09

    Our goal was to assess the diagnostic efficacy of ultrasound (US)-guided fine-needle aspiration (FNA) of thyroid nodules according to size and US features. A retrospective correlation was made with 1745 whole thyroidectomy and hemithyroidectomy specimens with preoperative US-guided FNA results. All cases were divided into 5 groups according to nodule size (≤5, 5.1-10, 10.1-15, 15.1-20, and >20 mm). For target nodules, static images and cine clips of conventional US and color Doppler were obtained. Ultrasound images were reviewed and evaluated by two radiologists, each with at least 5 years of US experience, blinded to the pathology results, and consensus was then reached. The Bethesda category I rate was higher in nodules larger than 15 mm. Large nodules (>20 mm) with several US features tended to yield false-negative FNA results. © 2017 by the American Institute of Ultrasound in Medicine.

  18. Tolerance-Based Feature Transforms

    NARCIS (Netherlands)

    Reniers, Dennie; Telea, Alexandru

    2007-01-01

    Tolerance-based feature transforms (TFTs) assign to each pixel in an image not only the nearest feature pixels on the boundary (origins), but all origins from the minimum distance up to a user-defined tolerance. In this paper, we compare four simple-to-implement methods for computing TFTs on binary

  19. What is the optimum minimum segment size used in step and shoot IMRT for prostate cancer?

    International Nuclear Information System (INIS)

    Takahashi, Yutaka; Sumida, Iori; Koizumi, Masahiko

    2010-01-01

    Although the use of small segments in step and shoot intensity modulated radiation therapy (IMRT) provides better dose distribution, extremely small segments decrease treatment accuracy. The purpose of this study was to determine the optimum minimum segment size (MSS) in two-step optimization in prostate step and shoot IMRT with regard to both planning quality and dosimetric accuracy. The XiO treatment planning system and Oncor Impression Plus were used. Results showed that the difference in homogeneity index (HI), defined as the ratio of the maximum to the minimum dose in the planning target volume (PTV), between the MSS 1.0 cm plan and the 1.5 cm and 2.0 cm plans was 0.1% and 9.6%, respectively. With regard to V107, the PTV volume receiving 107% of the prescribed dose, the difference between the MSS 1.0 cm and 1.5 cm plans was 2%. However, the value for plans with an MSS of 2.0 cm or greater was more than 2.5-fold that of the MSS 1.0 cm plan. With regard to maximum rectal dose, a significant difference was seen between the MSS 1.5 cm and 2.0 cm plans, whereas no significant difference was seen between the MSS 1.0 cm and 1.5 cm plans. Composite plan verification revealed a greater than 5% dose difference between planned and measured dose in many regions with the MSS 1.0 cm plan, but in only limited regions with the MSS 1.5 cm plan. Our data suggest that the MSS should be determined with regard to both planning quality and dosimetric accuracy. (author)
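
    The two plan metrics named in this abstract reduce to one-liners on a dose grid; the sketch below assumes a dose array and a boolean PTV mask as inputs and is not tied to the XiO system.

    ```python
    import numpy as np

    def homogeneity_index(dose, ptv_mask):
        """HI = maximum PTV dose / minimum PTV dose, as defined above."""
        ptv_dose = dose[ptv_mask]
        return ptv_dose.max() / ptv_dose.min()

    def v107(dose, ptv_mask, prescribed_dose):
        """Fraction of the PTV receiving at least 107% of the prescribed dose."""
        ptv_dose = dose[ptv_mask]
        return np.count_nonzero(ptv_dose >= 1.07 * prescribed_dose) / ptv_dose.size

    # synthetic example (numbers are made up)
    dose = np.random.normal(loc=78.0, scale=1.5, size=(50, 50, 50))   # Gy
    ptv = np.zeros_like(dose, dtype=bool)
    ptv[20:30, 20:30, 20:30] = True
    print(homogeneity_index(dose, ptv), v107(dose, ptv, prescribed_dose=76.0))
    ```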

  20. Computational intelligence models to predict porosity of tablets using minimum features

    Directory of Open Access Journals (Sweden)

    Khalid MH

    2017-01-01

    Full Text Available Mohammad Hassan Khalid,1 Pezhman Kazemi,1 Lucia Perez-Gandarillas,2 Abderrahim Michrafy,2 Jakub Szlęk,1 Renata Jachowicz,1 Aleksander Mendyk1 1Department of Pharmaceutical Technology and Biopharmaceutics, Faculty of Pharmacy, Jagiellonian University Medical College, Krakow, Poland; 2Centre National de la Recherche Scientifique, Centre RAPSODEE, Mines Albi, Université de Toulouse, Albi, France Abstract: The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have in-depth understanding of the material properties and governing parameters of its processes in response to different formulations. Understanding the mentioned aspects will allow tighter control of the process, leading to implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking deeper understanding of the system. This research endeavor presents CI models to predict the porosity of tablets created by roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data were composed of proportion of microcrystalline cellulose (MCC) (in percentage), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as an output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] =1%) and symbolic regression (NRMSE =4%) as the best-performing methods, also exhibiting reliable predictive
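
    A minimal sketch of one of the model families mentioned above: a small neural network mapping the three inputs (MCC proportion, granule size fraction, compaction force) to porosity, scored with a normalized RMSE. The data, network size and normalization choice are assumptions for illustration, not the study's settings.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # hypothetical data: MCC proportion (%), granule size (um), force (kN)
    X = np.column_stack([rng.uniform(0, 100, 200),
                         rng.uniform(100, 800, 200),
                         rng.uniform(5, 25, 200)])
    y = 0.35 + 0.0005 * X[:, 0] - 0.002 * X[:, 2] + rng.normal(0, 0.01, 200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                       random_state=0))
    model.fit(X_tr, y_tr)
    rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
    nrmse = rmse / (y_te.max() - y_te.min())   # one common normalization choice
    print(f"NRMSE = {100 * nrmse:.1f}%")
    ```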

  1. Optimal Feature Space Selection in Detecting Epileptic Seizure based on Recurrent Quantification Analysis and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Saleh LAshkari

    2016-06-01

    Full Text Available Selecting optimal features based on the nature of the phenomenon and on high discriminant ability is very important in data classification problems. Since Recurrent Quantification Analysis (RQA) does not require any assumption about stationarity or about the size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, with the optimal features selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of one hundred samples in each of the two categories were obtained with five distance norms in this study: Euclidean, Maximum, Minimum, Normalized and Fixed Norm. In order to choose the optimal threshold for each norm, ten thresholds of ε were generated, and then the best feature space was selected by the genetic algorithm in combination with a Bayesian classifier. The results show that the proposed method is capable of discriminating ictal EEG from normal EEG; for the Minimum norm and 0.1 < ε < 1, the accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance norm parameters was low. The optimal feature presented in this study is Trans, which was selected in most feature spaces with high accuracy.
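
    The selection step can be sketched as a wrapper: a simple genetic algorithm searches binary feature masks, and a Gaussian naive Bayes classifier's cross-validated accuracy serves as the fitness. The GA operators and parameters below are generic illustrations, not the settings used in the study.

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    def fitness(mask, X, y):
        if not mask.any():
            return 0.0
        return cross_val_score(GaussianNB(), X[:, mask], y, cv=5).mean()

    def ga_select(X, y, pop=20, gens=30, p_mut=0.05, seed=0):
        rng = np.random.default_rng(seed)
        n = X.shape[1]
        population = rng.random((pop, n)) < 0.5            # random binary masks
        for _ in range(gens):
            scores = np.array([fitness(m, X, y) for m in population])
            parents = population[np.argsort(scores)[::-1][: pop // 2]]
            children = []
            for _ in range(pop - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n)
                child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
                child ^= rng.random(n) < p_mut              # bit-flip mutation
                children.append(child)
            population = np.vstack([parents, children])
        scores = np.array([fitness(m, X, y) for m in population])
        return population[scores.argmax()]                  # best feature mask
    ```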

  2. The minimum wage in the Czech enterprises

    Directory of Open Access Journals (Sweden)

    Eva Lajtkepová

    2010-01-01

    Full Text Available Although the statutory minimum wage is not a new category, in the Czech Republic the definition and regulation of a minimum wage first appeared in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). The aim of this article is to present selected results of two surveys on the acceptance of the statutory minimum wage by Czech enterprises. The first survey uses data collected by questionnaire from 83 small and medium-sized enterprises in the South Moravia Region in 2005, the second uses data from 116 enterprises across the entire Czech Republic (in 2007). The data have been processed by means of standard methods of descriptive statistics and appropriate methods of statistical analysis (Spearman rank correlation coefficient, Kendall coefficient, χ² test of independence, Kruskal-Wallis test, and others).

  3. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    Science.gov (United States)

    Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.

    2018-04-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skills of the weighted, lagged, and burst ensembles are found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.

  4. Deformation and wear of pyramidal, silicon-nitride AFM tips scanning micrometre-size features in contact mode

    NARCIS (Netherlands)

    Bloo, M.; Haitjema, H.; Pril, W.O.

    1999-01-01

    An experimental study was carried out, in order to investigate the deformation and wear taking place on pyramidal silicon-nitride AFM tips. The study focuses on the contact mode scanning of silicon features of micrometre-size. First the deformation and the mechanisms of wear of the tip during

  5. Practical implementation of Channelized Hotelling Observers: Effect of ROI size.

    Science.gov (United States)

    Ferrero, Andrea; Favazza, Christopher P; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H

    2017-03-01

    Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO's performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO's performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies.
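
    The dependence on channel truncation can be made concrete with a small Gabor-channel CHO sketch: the Gaussian envelope of a low-spatial-frequency channel is wide, so it gets clipped when the ROI is small, which degrades the figure of merit. The channel parameterization and the detectability index below are generic choices, not the exact configuration of this study.

    ```python
    import numpy as np

    def gabor_channel(n, freq, theta, phase=0.0):
        """n x n Gabor channel; the envelope width scales with 1/freq, so
        low-frequency channels are truncated when the ROI size n is small."""
        half = n / 2.0
        y, x = np.mgrid[-half:half, -half:half] + 0.5
        width = 1.0 / freq     # assumed envelope-width rule
        envelope = np.exp(-4.0 * np.log(2.0) * (x**2 + y**2) / width**2)
        carrier = np.cos(2.0 * np.pi * freq * (x * np.cos(theta) +
                                               y * np.sin(theta)) + phase)
        return (envelope * carrier).ravel()

    def cho_detectability(signal_rois, noise_rois, freqs, n_theta=4):
        n = signal_rois.shape[-1]
        U = np.stack([gabor_channel(n, f, t)
                      for f in freqs
                      for t in np.linspace(0, np.pi, n_theta, endpoint=False)],
                     axis=1)
        vs = signal_rois.reshape(len(signal_rois), -1) @ U   # channel outputs
        vn = noise_rois.reshape(len(noise_rois), -1) @ U
        S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))              # intra-class covariance
        w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))      # Hotelling template
        ts, tn = vs @ w, vn @ w
        return abs(ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))
    ```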

  6. 7 CFR 51.344 - Size.

    Science.gov (United States)

    2010-01-01

    ... the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... Standards for Grades of Apples for Processing Size § 51.344 Size. (a) The minimum and maximum sizes or range...

  7. MINIMUM AREAS FOR ELEMENTARY SCHOOL BUILDING FACILITIES.

    Science.gov (United States)

    Pennsylvania State Dept. of Public Instruction, Harrisburg.

    MINIMUM AREA SPACE REQUIREMENTS IN SQUARE FOOTAGE FOR ELEMENTARY SCHOOL BUILDING FACILITIES ARE PRESENTED, INCLUDING FACILITIES FOR INSTRUCTIONAL USE, GENERAL USE, AND SERVICE USE. LIBRARY, CAFETERIA, KITCHEN, STORAGE, AND MULTIPURPOSE ROOMS SHOULD BE SIZED FOR THE PROJECTED ENROLLMENT OF THE BUILDING IN ACCORDANCE WITH THE PROJECTION UNDER THE…

  8. Prostate size and adverse pathologic features in men undergoing radical prostatectomy.

    Science.gov (United States)

    Hong, Sung Kyu; Poon, Bing Ying; Sjoberg, Daniel D; Scardino, Peter T; Eastham, James A

    2014-07-01

    To investigate the relationship between prostate volume measured from preoperative imaging and adverse pathologic features at the time of radical prostatectomy and evaluate the potential effect of clinical stage on this relationship. In 1756 men who underwent preoperative magnetic resonance imaging and radical prostatectomy from 2000 to 2010, we examined associations of magnetic resonance imaging-measured prostate volume with pathologic outcomes using univariate logistic regression and with postoperative biochemical recurrence using Cox proportional hazards models. We also analyzed the effects of clinical stage on the relationship between prostate volume and adverse pathologic features via interaction analyses. In univariate analyses, smaller prostate volume was significantly associated with high pathologic Gleason score and other adverse pathologic features, with no evidence that these associations differed by clinical stage (all interaction P > .05). The association between prostate volume and recurrence was significant in a multivariable analysis adjusting for postoperative variables (P=.031) but missed statistical significance in the preoperative model (P=.053). Addition of prostate volume did not change C-Indices (0.78 and 0.83) of either model. Although prostate size did not enhance the prediction of recurrence, it is associated with aggressiveness of prostate cancer. There is no evidence that this association differs depending on clinical stage. Prospective studies are warranted assessing the effect of initial method of detection on the relationship between volume and outcome. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2013-01-01

    Full Text Available Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the number of feature combinations escalates exponentially with the number of features. Unfortunately, in data mining, as well as in other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since exhaustively trying every possible combination of features by brute force would take practically forever, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing Swarm Search on several high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experimental results show that Swarm Search is able to attain relatively low classification error rates without shrinking the size of the feature subset to its minimum.
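
    The "plug in any classifier, plug in any metaheuristic" idea can be sketched with a binary particle swarm exploring feature subsets and a k-nearest-neighbour classifier supplying the fitness; this is an illustration of the wrapper scheme, not the authors' Swarm Search implementation, and all parameters are arbitrary.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def swarm_select(X, y, n_particles=15, iters=40, seed=1):
        rng = np.random.default_rng(seed)
        n = X.shape[1]

        def fitness(mask):
            if not mask.any():
                return 0.0
            return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

        pos = rng.random((n_particles, n)) < 0.5          # binary positions
        vel = np.zeros((n_particles, n))
        pbest = pos.copy()
        pbest_fit = np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_fit.argmax()]
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, n))
            vel = (0.7 * vel
                   + 1.5 * r1 * (pbest.astype(float) - pos.astype(float))
                   + 1.5 * r2 * (gbest.astype(float) - pos.astype(float)))
            pos = rng.random((n_particles, n)) < 1.0 / (1.0 + np.exp(-vel))
            fit = np.array([fitness(p) for p in pos])
            improved = fit > pbest_fit
            pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
            gbest = pbest[pbest_fit.argmax()]
        return gbest                                       # selected feature mask
    ```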

  10. Surface phase separation, dewetting feature size, and crystal morphology in thin films of polystyrene/poly(ε-caprolactone) blend.

    Science.gov (United States)

    Ma, Meng; He, Zhoukun; Li, Yuhan; Chen, Feng; Wang, Ke; Zhang, Qing; Deng, Hua; Fu, Qiang

    2012-12-01

    Thin films of polystyrene (PS)/poly(ε-caprolactone) (PCL) blends were prepared by spin-coating and characterized by tapping-mode atomic force microscopy (AFM). Effects of the relative concentration of PS in the polymer solution on the surface phase separation and dewetting feature size of the blend films were systematically studied. Due to the coupling of phase separation, dewetting, and crystallization of the blend films with the evaporation of solvent during spin-coating, PS islands of different sizes decorated with various PCL crystal structures, including spherulite-like crystals, flat-on individual lamellae, and flat-on dendritic crystals, were obtained in the blend films by changing the film composition. The average spacing of the PS islands was shown to increase with the relative concentration of PS in the casting solution. For a given PS/PCL ratio, the feature size of PS appeared to increase linearly with the square of the PS concentration, while the PCL concentration only determined the crystal morphology of the blend films, with no influence on the upper PS domain features. This is explained in terms of vertical phase separation and spinodal dewetting of the PS-rich layer from the underlying PCL-rich layer, which leaves the upper PS dewetting process and the underlying PCL crystallization process mutually independent. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Vascularity and grey-scale sonographic features of normal cervical lymph nodes: variations with nodal size

    International Nuclear Information System (INIS)

    Ying, Michael; Ahuja, Anil; Brook, Fiona; Metreweli, Constantine

    2001-01-01

    AIM: This study was undertaken to investigate variations in the vascularity and grey-scale sonographic features of cervical lymph nodes with their size. MATERIALS AND METHODS: High resolution grey-scale sonography and power Doppler sonography were performed in 1133 cervical nodes in 109 volunteers who had a sonographic examination of the neck. Standardized parameters were used in power Doppler sonography. RESULTS: About 90% of lymph nodes with a maximum transverse diameter greater than 5 mm showed vascularity and an echogenic hilus. Smaller nodes were less likely to show vascularity and an echogenic hilus. As the size of the lymph nodes increased, the intranodal blood flow velocity increased significantly (P < 0.05). CONCLUSIONS: The findings provide a baseline for grey-scale and power Doppler sonography of normal cervical lymph nodes. Sonologists will find varying vascularity and grey-scale appearances when encountering nodes of different sizes. Ying, M. et al. (2001)

  12. Size dependent compressibility of nano-ceria: Minimum near 33 nm

    International Nuclear Information System (INIS)

    Rodenbough, Philip P.; Song, Junhua; Chan, Siu-Wai; Walker, David; Clark, Simon M.; Kalkan, Bora

    2015-01-01

    We report the crystallite-size-dependency of the compressibility of nanoceria under hydrostatic pressure for a wide variety of crystallite diameters and comment on the size-based trends indicating an extremum near 33 nm. Uniform nano-crystals of ceria were synthesized by basic precipitation from cerium (III) nitrate. Size-control was achieved by adjusting mixing time and, for larger particles, a subsequent annealing temperature. The nano-crystals were characterized by transmission electron microscopy and standard ambient x-ray diffraction (XRD). Compressibility, or its reciprocal, bulk modulus, was measured with high-pressure XRD at LBL-ALS, using helium, neon, or argon as the pressure-transmitting medium for all samples. As crystallite size decreased below 100 nm, the bulk modulus first increased, and then decreased, achieving a maximum near a crystallite diameter of 33 nm. We review earlier work and examine several possible explanations for the peaking of bulk modulus at an intermediate crystallite size

  13. Size dependent compressibility of nano-ceria: Minimum near 33 nm

    Energy Technology Data Exchange (ETDEWEB)

    Rodenbough, Philip P. [Department of Applied Physics and Applied Mathematics, Materials Science and Engineering Program, Columbia University, New York, New York 10027 (United States); Chemistry Department, Columbia University, New York, New York 10027 (United States); Song, Junhua; Chan, Siu-Wai, E-mail: sc174@columbia.edu [Department of Applied Physics and Applied Mathematics, Materials Science and Engineering Program, Columbia University, New York, New York 10027 (United States); Walker, David [Department of Earth and Environmental Sciences, Lamont-Doherty Earth Observatory, Columbia University, Palisades, New York 10964 (United States); Clark, Simon M. [ARC Center of Excellence for Core to Crust Fluid Systems and Department of Earth and Planetary Sciences, Macquarie University, Sydney, New South Wales 2019, Australia and The Bragg Institute, Australian Nuclear Science and Technology Organisation, Kirrawee DC, New South Wales 2232 (Australia); Kalkan, Bora [Department of Physics Engineering, Hacettepe University, 06800 Beytepe, Ankara (Turkey)

    2015-04-20

    We report the crystallite-size-dependency of the compressibility of nanoceria under hydrostatic pressure for a wide variety of crystallite diameters and comment on the size-based trends indicating an extremum near 33 nm. Uniform nano-crystals of ceria were synthesized by basic precipitation from cerium (III) nitrate. Size-control was achieved by adjusting mixing time and, for larger particles, a subsequent annealing temperature. The nano-crystals were characterized by transmission electron microscopy and standard ambient x-ray diffraction (XRD). Compressibility, or its reciprocal, bulk modulus, was measured with high-pressure XRD at LBL-ALS, using helium, neon, or argon as the pressure-transmitting medium for all samples. As crystallite size decreased below 100 nm, the bulk modulus first increased, and then decreased, achieving a maximum near a crystallite diameter of 33 nm. We review earlier work and examine several possible explanations for the peaking of bulk modulus at an intermediate crystallite size.

  14. Minimum Bias Trigger in ATLAS

    International Nuclear Information System (INIS)

    Kwee, Regina

    2010-01-01

    Since the restart of the LHC in November 2009, ATLAS has collected inelastic pp collisions to perform first measurements of charged-particle densities. These measurements will help to constrain various models that phenomenologically describe soft parton interactions. Understanding the trigger efficiencies for different event types is therefore crucial to minimizing any possible bias in the event selection. ATLAS uses two main minimum bias triggers, featuring complementary detector components and trigger levels. While a hardware-based first trigger level situated in the forward regions with 2.2 < |η| < 3.8 has been proven to select pp collisions very efficiently, the Inner-Detector-based minimum bias trigger uses a random seed on filled bunches and the central tracking detectors for the event selection. Both triggers were essential for the analysis of the kinematic spectra of charged particles. Their performance and trigger efficiency measurements, as well as studies on possible bias sources, will be presented. We also highlight the advantage of these triggers for particle correlation analyses. (author)

  15. Allocation of optimal distributed generation using GA for minimum ...

    African Journals Online (AJOL)

    user

    quality of supply and reliability, in turn extending equipment maintenance intervals and ... The performance of the method is tested on a 33-bus test system and ... minimum real power losses of the system by calculating the DG size at different buses.

  16. Minimum size limits for yellow perch (Perca flavescens) in western Lake Erie

    Science.gov (United States)

    Hartman, Wilbur L.; Nepszy, Stephen J.; Scholl, Russell L.

    1980-01-01

    During the 1960's yellow perch (Perca flavescens) of Lake Erie supported a commercial fishery that produced an average annual catch of 23 million pounds, as well as a modest sport fishery. Since 1969, the resource has seriously deteriorated. Commercial landings amounted to only 6 million pounds in 1976, and included proportionally more immature perch than in the 1960's. Moreover, no strong year classes were produced between 1965 and 1975. An interagency technical committee was appointed in 1975 by the Lake Erie Committee of the Great Lakes Fishery Commission to develop an interim management strategy that would provide for greater protection of perch in western Lake Erie, where declines have been the most severe. The committee first determined the age structure, growth and mortality rates, maturation schedule, and length-fecundity relationship for the population, and then applied Ricker-type equilibrium yield models to determine the effects of various minimum length limits on yield, production, average stock weight, potential egg deposition, and the Abrosov spawning frequency indicator (average number of spawning opportunities per female). The committee recommended increasing the minimum length limit from 5.0 inches to at least 8.5 inches. Theoretically, this change would increase the average stock weight by 36% and potential egg deposition by 44%, without significantly decreasing yield. Abrosov's spawning frequency indicator would rise from the existing 0.6 to about 1.2.

  17. Distribution of the minimum path on percolation clusters: A renormalization group calculation

    International Nuclear Information System (INIS)

    Hipsh, Lior.

    1993-06-01

    This thesis uses the renormalization group to study the chemical distance, or minimal path, on percolation clusters on a two-dimensional square lattice. Our aims are to calculate analytically (by an iterative calculation) the fractal dimension of the minimal path, d_min, and the distributions of the minimum path, l_min, for different lattice sizes and for different starting densities (including the threshold value p_c). For the distributions, we seek an analytic form that describes them. The probability of obtaining a given minimum path for each linear size L is calculated by iterating the distribution of l_min for the basic 2×2 cell to the next scale sizes, using the H-cell renormalization group. For the threshold value of p and for values near p_c, we confirm a scaling form P(l, L) = (1/l) f(l / L^d_min), where L is the linear size and l the minimum path. The distribution can also be represented in Fourier space, so we also attempt to solve the renormalization group equations in that space. A numerical fit is produced and compared with existing numerical results. In order to improve the agreement between the renormalization group and numerical simulations, we also present attempts to generalize the renormalization group by adding more parameters, e.g. correlations between bonds in different directions or finite occupation densities for bonds and sites. (author) 17 refs
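
    A small numerical companion to the picture above: the chemical distance between opposite corners of an L×L bond-percolation lattice at the threshold p_c = 1/2 can be measured directly by breadth-first search, giving samples of l_min against which a scaling form can be fitted. Lattice size and sample count are arbitrary.

    ```python
    import numpy as np
    from collections import deque

    def chemical_distance(L, p, rng):
        # open bonds: horiz[y, x] links (x,y)-(x+1,y); vert[y, x] links (x,y)-(x,y+1)
        horiz = rng.random((L, L - 1)) < p
        vert = rng.random((L - 1, L)) < p
        dist = -np.ones((L, L), dtype=int)
        dist[0, 0] = 0
        queue = deque([(0, 0)])
        while queue:                                   # breadth-first search
            y, x = queue.popleft()
            steps = []
            if x + 1 < L and horiz[y, x]:      steps.append((y, x + 1))
            if x - 1 >= 0 and horiz[y, x - 1]: steps.append((y, x - 1))
            if y + 1 < L and vert[y, x]:       steps.append((y + 1, x))
            if y - 1 >= 0 and vert[y - 1, x]:  steps.append((y - 1, x))
            for ny, nx in steps:
                if dist[ny, nx] < 0:
                    dist[ny, nx] = dist[y, x] + 1
                    queue.append((ny, nx))
        return dist[L - 1, L - 1]                      # -1 if corners not connected

    rng = np.random.default_rng(0)
    samples = [chemical_distance(64, 0.5, rng) for _ in range(200)]
    l_min = [s for s in samples if s >= 0]             # connected realizations only
    print("connected fraction:", len(l_min) / 200, "mean l_min:", np.mean(l_min))
    ```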

  18. Oxygen minimum zones harbour novel viral communities with low diversity

    NARCIS (Netherlands)

    Cassman, N.; Prieto-Davo, A.; Walsh, K.; Silva, G.G.; Angly, F.; Akhter, S.; Barott, K.; Busch, J.; McDole, T.; Haggerty, J.M.; Willner, D.; Alarcon, G.; Ulloa, O.; DeLong, E.F.; Dutilh, B.E.; Rohwer, F.; Dinsdale, E.A.

    2012-01-01

    Oxygen minimum zones (OMZs) are oceanographic features that affect ocean productivity and biodiversity, and contribute to ocean nitrogen loss and greenhouse gas emissions. Here we describe the viral communities associated with the Eastern Tropical South Pacific (ETSP) OMZ off Iquique, Chile for the

  19. Minimum Financial Outlays for Purchasing Alcohol Brands in the U.S

    Science.gov (United States)

    Albers, Alison Burke; DeJong, William; Naimi, Timothy S.; Siegel, Michael; Shoaff, Jessica Ruhlman; Jernigan, David H.

    2012-01-01

    Background Low alcohol prices are a potent risk factor for excessive drinking, underage drinking, and adverse alcohol-attributable outcomes. Presently, there is little reported information on alcohol prices in the U.S., in particular as it relates to the costs of potentially beneficial amounts of alcohol. Purpose To determine the minimum financial outlay necessary to purchase individual brands of alcohol using online alcohol price data from January through March 2012. Methods The smallest container size and the minimum price at which that size beverage could be purchased in the U.S. in 2012 were determined for 898 brands of alcohol, across 17 different alcoholic beverage types. The analyses were conducted in March 2012. Results The majority of alcoholic beverage categories contain brands that can be purchased in the U.S. for very low minimum financial outlays. Conclusions In the U.S., a wide variety of alcohol brands, across many types of alcohol, are available at very low prices. Given that both alcohol use and abuse are responsive to price, particularly among adolescents, the prevalence of low alcohol prices is concerning. Surveillance of alcohol prices and minimum pricing policies should be considered in the U.S. as part of a public health strategy to reduce excessive alcohol consumption and related harms. PMID:23253652

  20. Digging deeper into platform game level design: session size and sequential features

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian

    2012-01-01

    A recent trend within computational intelligence and games research is to investigate how to affect video game players’ in-game experience by designing and/or modifying aspects of game content. Analysing the relationship between game content, player behaviour and self-reported affective states... constitutes an important step towards understanding game experience and constructing effective game adaptation mechanisms. This paper reports on further refinement of a method to understand this relationship by analysing data collected from players, building models that predict player experience... and analysing what features of game and player data predict player affect best. We analyse data from players playing 780 pairs of short game sessions of the platform game Super Mario Bros, investigate the impact of the session size and which part of the level has the greatest effect on player experience...

  1. Quantitative Comparison of Tolerance-Based Feature Transforms

    NARCIS (Netherlands)

    Reniers, Dennie; Telea, Alexandru

    2006-01-01

    Tolerance-based feature transforms (TFTs) assign to each pixel in an image not only the nearest feature pixels on the boundary (origins), but all origins from the minimum distance up to a user-defined tolerance. In this paper, we compare four simple-to-implement methods for computing TFTs for binary

  2. Construction of Protograph LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
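
    The check-splitting step can be illustrated on a toy base (proto) matrix, where rows are check nodes, columns are variable nodes and entries are edge multiplicities; the example matrix and the way the edges are divided between the two new checks are made up for illustration.

    ```python
    import numpy as np

    def split_check(B, row):
        """Split check `row` into two checks that share its original edges and
        are joined by a new degree-2 variable node (one extra column)."""
        edges = B[row].copy()
        first = np.zeros_like(edges)
        remaining = edges.sum() // 2          # give roughly half the edges away
        for j in np.argsort(edges)[::-1]:
            take = min(edges[j], remaining)
            first[j] = take
            remaining -= take
        second = edges - first
        connector = np.zeros((B.shape[0] + 1, 1), dtype=B.dtype)
        connector[row] = 1                    # degree-2 connector variable node
        connector[-1] = 1
        B_new = np.vstack([B, second])
        B_new[row] = first
        return np.hstack([B_new, connector])

    # toy high-rate protograph with variable-node degrees >= 3
    B = np.array([[1, 2, 1, 1],
                  [2, 1, 2, 2]])
    print(split_check(B, row=0))              # one more check, one more variable
    ```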

  3. Rate-Compatible LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first mentioned submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-blocksize submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation

  4. An integration of minimum local feature representation methods to recognize large variation of foods

    Science.gov (United States)

    Razali, Mohd Norhisham bin; Manshor, Noridayu; Halin, Alfian Abdul; Mustapha, Norwati; Yaakob, Razali

    2017-10-01

    Local invariant features have been shown to be successful in describing object appearances for image classification tasks. Such features are robust towards occlusion and clutter and are also invariant against scale and orientation changes. This makes them suitable for classification tasks with little inter-class similarity and large intra-class difference. In this paper, we propose an integrated representation of the Speeded-Up Robust Feature (SURF) and Scale Invariant Feature Transform (SIFT) descriptors, using a late fusion strategy. The proposed representation is used for food recognition from a dataset of food images with complex appearance variations. The Bag of Features (BOF) approach is employed to enhance the discriminative ability of the local features. First, the individual local features are extracted to construct two kinds of visual vocabularies, one representing SURF and the other SIFT. The visual vocabularies are then concatenated and fed into a linear Support Vector Machine (SVM) to classify the respective food categories. Experimental results demonstrate impressive overall recognition, with 82.38% classification accuracy on the challenging UEC-Food100 dataset.
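
    The vocabulary-and-histogram part of the pipeline can be sketched as follows with SIFT, k-means and a linear SVM; a SURF vocabulary would be built the same way (it requires the opencv-contrib "nonfree" build) and its histogram concatenated for the late fusion. The dataset loader, vocabulary size and labels are placeholders.

    ```python
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    def sift_descriptors(gray_images):
        sift = cv2.SIFT_create()
        per_image = []
        for img in gray_images:
            _, desc = sift.detectAndCompute(img, None)
            per_image.append(desc if desc is not None
                             else np.zeros((0, 128), np.float32))
        return per_image

    def bof_histograms(per_image_desc, n_words=200):
        vocab = KMeans(n_clusters=n_words, n_init=4, random_state=0)
        vocab.fit(np.vstack([d for d in per_image_desc if len(d)]))
        hists = np.zeros((len(per_image_desc), n_words))
        for i, desc in enumerate(per_image_desc):
            if len(desc):
                words, counts = np.unique(vocab.predict(desc), return_counts=True)
                hists[i, words] = counts / counts.sum()
        return hists

    # gray_images, labels = load_food_dataset(...)   # hypothetical loader
    # X = bof_histograms(sift_descriptors(gray_images))
    # clf = LinearSVC().fit(X, labels)               # linear SVM over BOF histograms
    ```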

  5. Quiescent and Eruptive Prominences at Solar Minimum: A Statistical Study via an Automated Tracking System

    Science.gov (United States)

    Loboda, I. P.; Bogachev, S. A.

    2015-07-01

    We employ an automated detection algorithm to perform a global study of solar prominence characteristics. We process four months of TESIS observations in the He II 304 Å line taken close to the solar minimum of 2008-2009 and mainly focus on quiescent and quiescent-eruptive prominences. We detect a total of 389 individual features ranging from 25×25 to 150×500 Mm^2 in size and obtain distributions of many of their spatial characteristics, such as latitudinal position, height, size, and shape. To study their dynamics, we classify prominences as either stable or eruptive and calculate their average centroid velocities, which are found to rarely exceed 3 km/s. In addition, we give rough estimates of mass and gravitational energy for every detected prominence and use these values to estimate the total mass and gravitational energy of all simultaneously existing prominences (10^12 - 10^14 kg and 10^29 - 10^31 erg). Finally, we investigate the form of the gravitational energy spectrum of prominences and derive it to be a power law of index -1.1 ± 0.2.

  6. Minimum Compliance Topology Optimization of Shell-Infill Composites for Additive Manufacturing

    DEFF Research Database (Denmark)

    Wu, Jun; Clausen, Anders; Sigmund, Ole

    2017-01-01

    Additively manufactured parts are often composed of two sub-structures, a solid shell forming their exterior and a porous infill occupying the interior. To account for this feature this paper presents a novel method for generating simultaneously optimized shell and infill in the context of minimum...... interpolation model into a physical density field, upon which the compliance is minimized. Enhanced by an adapted robust formulation for controlling the minimum length scale of the base, our method generates optimized shell-infill composites suitable for additive manufacturing. We demonstrate the effectiveness...

  7. Fine-tuning the feature size of nanoporous silver

    NARCIS (Netherlands)

    Detsi, Eric; Vukovic, Zorica; Punzhin, Sergey; Bronsveld, Paul M.; Onck, Patrick R.; De Hosson, Jeff Th M.

    2012-01-01

    We show that the characteristic ligament size of nanoporous Ag synthesized by chemical dissolution of Al from Ag-Al alloys can be tuned from the current submicrometer size (similar to 100-500 nm) down to a much smaller length scale (similar to 30-60 nm). This is achieved by suppressing the formation

  8. Efficient Minimum-Phase Prefilter Computation Using Fast QL-Factorization

    DEFF Research Database (Denmark)

    Hansen, Morten; Christensen, Lars P.B.

    2009-01-01

    This paper presents a novel approach for computing both the minimum-phase filter and the associated all-pass filter in a computationally efficient way using the fast QL-factorization. A desirable property of this approach is that the complexity is independent on the size of the matrix which is QL...

  9. Fabrication of ordered arrays of micro- and nanoscale features with control over their shape and size via templated solid-state dewetting.

    Science.gov (United States)

    Ye, Jongpil

    2015-05-08

    Templated solid-state dewetting of single-crystal films has been shown to produce regular patterns of various shapes. However, the materials for which this patterning method is applicable, and the size range of the patterns produced, are still limited. Here, it is shown that ordered arrays of micro- and nanoscale features can be produced with control over their shape and size via solid-state dewetting of patches patterned from single-crystal palladium and nickel films of different thicknesses and orientations. The shape and size characteristics of the patterns are found to be widely controllable by varying the shape, width, thickness, and orientation of the initial patches. The morphological evolution of the patches is also dependent on the film material, with different dewetting behaviors observed in palladium and nickel films. The mechanisms underlying the pattern formation are explained in terms of the influence on Rayleigh-like instability of the patch geometry and the surface energy anisotropy of the film material. This mechanistic understanding of pattern formation can be used to design patches for the precise fabrication of micro- and nanoscale structures with the desired shapes and feature sizes.

  10. Online feature selection with streaming features.

    Science.gov (United States)

    Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan

    2013-05-01

    We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
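The sketch below is a toy version of the streaming idea only, not the published OSFS/Fast-OSFS algorithms: features arrive one at a time, and each is kept if it appears relevant to the target and not redundant with the features already selected. The correlation thresholds are arbitrary illustrative choices.

```python
import numpy as np

def stream_select(X, y, rel_thresh=0.1, red_thresh=0.95):
    """Toy streaming feature selection: columns of X arrive one by one.
    Keep a feature if it is sufficiently correlated with the target and not
    nearly collinear with an already-selected feature. This mimics the
    relevance/redundancy structure of OSFS but is not the published algorithm."""
    selected = []
    for j in range(X.shape[1]):            # features "flow in" one by one
        f = X[:, j]
        relevance = abs(np.corrcoef(f, y)[0, 1])
        if np.isnan(relevance) or relevance < rel_thresh:
            continue                        # weakly relevant: discard
        redundant = any(abs(np.corrcoef(f, X[:, s])[0, 1]) > red_thresh
                        for s in selected)
        if not redundant:
            selected.append(j)
    return selected
```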

  11. Robust branch-cut-and-price for the Capacitated Minimum Spanning Tree problem over a large extended formulation

    DEFF Research Database (Denmark)

    Uchoa, Eduardo; Fukasawa, Ricardo; Lysgaard, Jens

    2008-01-01

    This paper presents a robust branch-cut-and-price algorithm for the Capacitated Minimum Spanning Tree Problem (CMST). The variables are associated to q-arbs, a structure that arises from a relaxation of the capacitated prize-collecting arborescence problem in order to make it solvable in pseudo-polynomial time. Traditional inequalities over the arc formulation, like Capacity Cuts, are also used. Moreover, a novel feature is introduced in such kind of algorithms: powerful new cuts expressed over a very large set of variables are added, without increasing the complexity of the pricing subproblem or the size of the LPs that are actually solved. Computational results on benchmark instances from the OR-Library show very significant improvements over previous algorithms. Several open instances could be solved to optimality.

  12. SU-F-R-32: Evaluation of MRI Acquisition Parameter Variations On Texture Feature Extraction Using ACR Phantom

    International Nuclear Information System (INIS)

    Xie, Y; Wang, J; Wang, C; Chang, Z

    2016-01-01

    Purpose: To investigate the sensitivity of classic texture features to variations in MRI acquisition parameters. Methods: This study was performed on the American College of Radiology (ACR) MRI Accreditation Program Phantom. MR imaging was acquired on a GE 750 3T scanner with an XRM gradient, employing T1-weighted images (TR/TE = 500/20 ms) with the following parameters as the reference standard: number of signal averages (NEX) = 1, matrix size = 256×256, flip angle = 90°, slice thickness = 5 mm. The effect of the acquisition parameters on texture features with and without non-uniformity correction was investigated, while all the other parameters were kept at the reference standard. Protocol parameters were set as follows: (a) NEX = 0.5, 2 and 4; (b) phase encoding steps = 128, 160 and 192; (c) matrix size = 128×128, 192×192 and 512×512. Thirty-two classic texture features were generated from each image data set using the classic gray level run length matrix (GLRLM) and gray level co-occurrence matrix (GLCOM). The normalized range ((maximum - minimum)/mean) was calculated to determine variation among the scans with different protocol parameters. Results: For different NEX, the ranges of 31 out of 32 texture features are within 10%. For different phase encoding steps, the ranges of 31 out of 32 texture features are within 10%. For different acquisition matrix sizes without non-uniformity correction, the ranges of 14 out of 32 texture features are within 10%; with non-uniformity correction, the ranges of 16 out of 32 texture features are within 10%. Conclusion: Initial results indicate that the texture features whose ranges stay within 10% are less sensitive to variations in T1-weighted MRI acquisition parameters. This suggests that such texture features might be more reliable for use as potential biomarkers in quantitative MR image analysis.
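The stability metric quoted in this record is simple to reproduce; the snippet below computes the normalized range (maximum minus minimum, divided by the mean) for one feature measured across several scans, with made-up numbers used purely for illustration.

```python
import numpy as np

def normalized_range(values):
    """(maximum - minimum) / mean, the stability metric used to compare a
    texture feature across scans acquired with different protocol settings."""
    values = np.asarray(values, dtype=float)
    return (values.max() - values.min()) / values.mean()

# e.g. one GLCM feature computed from scans with NEX = 0.5, 1, 2, 4 (made-up values)
print(normalized_range([12.1, 12.4, 12.0, 12.6]) < 0.10)   # stable within 10%?
```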

  13. Minimum spanning trees and random resistor networks in d dimensions.

    Science.gov (United States)

    Read, N

    2005-09-01

    We consider minimum-cost spanning trees, both in lattice and Euclidean models, in d dimensions. For the cost of the optimum tree in a box of size L, we show that there is a correction of order L^θ, where θ ≤ 1. The arguments all rely on the close relation of Kruskal's greedy algorithm for the minimum spanning tree, percolation, and (for some arguments) random resistor networks. The scaling of the entropy and free energy at small nonzero T, and hence of the number of near-optimal solutions, is also discussed. We suggest that the Steiner tree problem is in the same universality class as the minimum spanning tree in all dimensions, as is the traveling salesman problem in two dimensions. Hence all will have the same value of θ = -3/4 in two dimensions.
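Since the arguments in this record lean on Kruskal's greedy algorithm, a minimal reference implementation may be helpful. This is the textbook algorithm with a union-find structure, not anything specific to the scaling analysis above, and the example edge list is arbitrary.

```python
def kruskal(n, edges):
    """Kruskal's greedy algorithm: sort edges by cost and add each edge that
    does not close a cycle, tracked with a union-find structure."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree, cost = [], 0.0
    for w, u, v in sorted(edges):           # edges given as (weight, u, v)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
            cost += w
    return tree, cost

edges = [(1.0, 0, 1), (2.5, 1, 2), (0.4, 0, 2), (3.0, 2, 3)]
print(kruskal(4, edges))   # minimum total cost 0.4 + 1.0 + 3.0 = 4.4
```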

  14. Optimum workforce-size model using dynamic programming approach

    African Journals Online (AJOL)

    This paper presents an optimum workforce-size model which determines the minimum number of excess workers (overstaffing) as well as the minimum total recruitment cost during a specified planning horizon. The model is an extension of other existing dynamic programming models for manpower planning in the sense ...

  15. Optimized feature subsets for epileptic seizure prediction studies.

    Science.gov (United States)

    Direito, Bruno; Ventura, Francisco; Teixeira, César; Dourado, António

    2011-01-01

    Reducing the number of EEG features given as inputs to epileptic seizure predictors is a needed step towards the development of a transportable device for real-time warning. This paper presents a comparative study of three feature selection methods based on Support Vector Machines: Minimum-Redundancy Maximum-Relevance, Recursive Feature Elimination, and Genetic Algorithms. The results show that, for three patients of the European Database on Epilepsy, the most important univariate features are related to spectral information and statistical moments.
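For illustration, one of the three compared approaches, Recursive Feature Elimination wrapped around a linear SVM, can be sketched in a few lines with scikit-learn. The synthetic data, feature counts and elimination step below are placeholders, not the EEG features or settings of the study.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic stand-in for EEG-derived features (spectral power, moments, ...).
X, y = make_classification(n_samples=200, n_features=40, n_informative=6,
                           random_state=0)

# Recursive Feature Elimination around a linear SVM: repeatedly fit, drop the
# lowest-weighted features, and refit until 5 features remain.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=5, step=2)
selector.fit(X, y)
print("selected feature indices:",
      [i for i, keep in enumerate(selector.support_) if keep])
```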

  16. Study of the deposition features of the organic dye Rhodamine B on the porous surface of silicon with different pore sizes

    Energy Technology Data Exchange (ETDEWEB)

    Lenshin, A. S., E-mail: lenshinas@phys.vsu.ru; Seredin, P. V.; Kavetskaya, I. V.; Minakov, D. A.; Kashkarov, V. M. [Voronezh State University (Russian Federation)

    2017-02-15

    The deposition features of the organic dye Rhodamine B on the porous surface of silicon with average pore sizes of 50–100 and 100–250 nm are studied. Features of the composition and optical properties of the obtained systems are studied using infrared and photoluminescence spectroscopy. It is found that Rhodamine-B adsorption on the surface of por-Si with various porosities is preferentially physical. The optimal technological parameters of its deposition are determined.

  17. Featureous: A Tool for Feature-Centric Analysis of Java Software

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2010-01-01

    Feature-centric comprehension of source code is necessary for incorporating user-requested modifications during software evolution and maintenance. However, such comprehension is difficult to achieve in the case of large object-oriented programs due to the size, complexity, and implicit character...... of mappings between features and source code. To support programmers in overcoming these difficulties, we present a feature-centric analysis tool, Featureous. Our tool extends the NetBeans IDE with mechanisms for efficient location of feature implementations in legacy source code, and an extensive analysis...

  18. A unified definition of a vortex derived from vortical flow and the resulting pressure minimum

    Energy Technology Data Exchange (ETDEWEB)

    Nakayama, K [Department of Mechanical Engineering, Aichi Institute of Technology, Toyota, Aichi 470–0392 (Japan); Sugiyama, K; Takagi, S, E-mail: nakayama@aitech.ac.jp, E-mail: kazuyasu.sugiyama@riken.jp, E-mail: takagi@mech.t.u-tokyo.ac.jp [Department of Mechanical Engineering, School of Engineering, The University of Tokyo, Hongo, Tokyo 113–8656 (Japan)

    2014-10-01

    This paper presents a novel definition of a vortex that integrates the concepts of the invariant swirling motion, the pressure minimum characteristics induced by the swirling motion and the positive Laplacian of the pressure. The current definition specifies a vortex that has a swirling motion and a resulting pressure-minimum feature in the swirl plane, which is simply represented by the eigenvalues and eigenvectors of the velocity gradient tensor. (paper)
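As a purely illustrative companion to the eigenvalue-based viewpoint (not the unified definition proposed in this record), the snippet below evaluates the common swirling-strength criterion, i.e. the imaginary part of the complex eigenvalue pair of the velocity gradient tensor, for a hand-made gradient tensor.

```python
import numpy as np

def swirling_strength(grad_u):
    """Swirling-strength (lambda_ci) criterion: the imaginary part of the
    complex-conjugate eigenvalue pair of the velocity gradient tensor, one
    common eigenvalue-based local vortex indicator (illustrative only)."""
    eig = np.linalg.eigvals(grad_u)
    return float(np.max(np.abs(eig.imag)))

# simple solid-body-like rotation in the x-y plane plus weak strain (made up)
grad_u = np.array([[0.1, -2.0, 0.0],
                   [2.0,  0.1, 0.0],
                   [0.0,  0.0, -0.2]])
print(swirling_strength(grad_u) > 0)   # True -> locally swirling flow
```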

  19. Transverse micro-erosion meter measurements; determining minimum sample size

    Science.gov (United States)

    Trenhaile, Alan S.; Lakhan, V. Chris

    2011-11-01

    Two transverse micro-erosion meter (TMEM) stations were installed in each of four rock slabs, a slate/shale, basalt, phyllite/schist, and sandstone. One station was sprayed each day with fresh water and the other with a synthetic sea water solution (salt water). To record changes in surface elevation (usually downwearing but with some swelling), 100 measurements (the pilot survey), the maximum for the TMEM used in this study, were made at each station in February 2010, and then at two-monthly intervals until February 2011. The data were normalized using Box-Cox transformations and analyzed to determine the minimum number of measurements needed to obtain station means that fall within a range of confidence limits of the population means, and the means of the pilot survey. The effect on the confidence limits of reducing an already small number of measurements (say 15 or less) is much greater than that of reducing a much larger number of measurements (say more than 50) by the same amount. There was a tendency for the number of measurements, for the same confidence limits, to increase with the rate of downwearing, although it was also dependent on whether the surface was treated with fresh or salt water. About 10 measurements often provided fairly reasonable estimates of rates of surface change but with fairly high percentage confidence intervals in slowly eroding rocks; however, many more measurements were generally needed to derive means within 10% of the population means. The results were tabulated and graphed to provide an indication of the approximate number of measurements required for given confidence limits, and the confidence limits that might be attained for a given number of measurements.
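A standard way to estimate such a minimum number of measurements from a pilot survey is to invert the t-based confidence-interval half-width; the sketch below does this iteratively (because the t quantile depends on n) and is a generic statistical illustration with made-up numbers, not the authors' exact procedure.

```python
from scipy import stats

def min_measurements(sd, half_width, confidence=0.95, n_max=100):
    """Smallest n (capped at the 100-point pilot maximum) whose t-based
    confidence interval for the mean has the requested half-width, given a
    pilot-survey standard deviation; iterative because t depends on n."""
    for n in range(2, n_max + 1):
        t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
        if n >= (t * sd / half_width) ** 2:
            return n
    return n_max

# e.g. pilot SD of 0.05 mm of downwearing, target half-width 0.02 mm at 95 %
print(min_measurements(sd=0.05, half_width=0.02))
```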

  20. Parameters Tuning of Model Free Adaptive Control Based on Minimum Entropy

    Institute of Scientific and Technical Information of China (English)

    Chao Ji; Jing Wang; Liulin Cao; Qibing Jin

    2014-01-01

    The dynamic linearization based model-free adaptive control (MFAC) algorithm has been widely used in practical systems, in which some parameters should be tuned before it is successfully applied to process industries. Considering the random noise existing in real processes, a parameter tuning method based on minimum entropy optimization is proposed, and the feature of entropy is used to accurately describe the system uncertainty. For cases of Gaussian and non-Gaussian stochastic noise, an entropy recursive optimization algorithm is derived based on an approximate or identified model. Extensive simulation results show the effectiveness of the minimum entropy optimization for the partial-form dynamic linearization based MFAC. The parameters tuned by the minimum entropy optimization index show stronger stability and more robustness than those tuned by other traditional indices, such as the integral of the squared error (ISE) or the integral of time-weighted absolute error (ITAE), when system stochastic noise exists.

  1. Comparison of advanced mid-sized reactors regarding passive features, core damage frequencies and core melt retention features

    International Nuclear Information System (INIS)

    Wider, H.

    2005-01-01

    New Light Water Reactors, whose regular safety systems are complemented by passive safety systems, are ready for the market. The special aspect of passive safety features is that they are actuated and function independently of the operator. They contribute significantly to reducing the core damage frequency (CDF), since the operator continues to play an independent role in actuating the regular safety devices based on modern instrumentation and control (I and C). The latter also has passive features regarding the prevention of accidents. Two reactors with significant passive features presently offered on the market are the AP1000 PWR and the SWR 1000 BWR. Their passive features are compared, as are their core damage frequencies (CDF); the latter are also compared with those of a VVER-1000. A further discussion of the two passive plants concerns their mitigating features for severe accidents. Regarding core-melt retention, both rely on in-vessel cooling of the melt. The new VVER-1000 reactor, on the other hand, features a validated ex-vessel concept. (author)

  2. Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation

    Directory of Open Access Journals (Sweden)

    Namyong Kim

    2016-06-01

    The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on an analysis of the behavior of the optimum weight and of the robustness properties against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized by the power of the input entropy, which is estimated recursively to reduce its computational complexity. In equalization simulations, the proposed algorithm simultaneously yields a lower minimum MSE (mean squared error) and a faster convergence speed than the original MEE algorithm. For the same convergence speed, its steady-state MSE improvement is above 3 dB.
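For orientation, a plain (un-normalized) minimum-error-entropy update in the Erdogmus-Principe style can be sketched as below: the filter weights ascend the gradient of the Gaussian-kernel information potential of the errors over a sliding window, which is equivalent to minimizing Renyi's quadratic error entropy. The window length, kernel width and step size are arbitrary illustrative values, and the normalization step that is the contribution of this record is not included.

```python
import numpy as np

def mee_equalizer(x, d, taps=5, mu=0.5, sigma=1.0, window=20):
    """Stochastic MEE adaptation of an FIR filter. x, d: 1-D numpy arrays with
    the equalizer input and the desired signal."""
    w = np.zeros(taps)
    errors = []
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]             # current input vector
        e = d[n] - w @ u
        errors.append((e, u))
        errors = errors[-window:]           # keep a finite error window
        grad = np.zeros(taps)
        for e_j, u_j in errors[:-1]:
            diff = e - e_j
            k = np.exp(-diff**2 / (2 * sigma**2))   # Gaussian kernel
            grad += k * diff * (u - u_j)
        # gradient ascent on the information potential of the errors
        w += mu * grad / (len(errors) * sigma**2)
    return w
```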

  3. Selective axonal growth of embryonic hippocampal neurons according to topographic features of various sizes and shapes

    Directory of Open Access Journals (Sweden)

    Christine E Schmidt

    2010-12-01

    David Y Fozdar, Jae Y Lee, Christine E Schmidt, Shaochen Chen (The University of Texas at Austin; the first two authors contributed equally). Purpose: Understanding how surface features influence the establishment and outgrowth of the axon of developing neurons at the single-cell level may aid in designing implantable scaffolds for the regeneration of damaged nerves. Past studies have shown that micropatterned ridge-groove structures not only instigate axon polarization, alignment, and extension, but are also preferred over smooth surfaces and even neurotrophic ligands. Methods: Here, we performed axonal-outgrowth competition assays using a proprietary four-quadrant topography grid to determine the capacity of various micropatterned topographies to act as stimuli sequestering axon extension. Each topography in the grid consisted of an array of microscale (approximately 2 µm) or submicroscale (approximately 300 nm) holes or lines with variable dimensions. Individual rat embryonic hippocampal cells were positioned either between two juxtaposing topographies or at the borders of individual topographies juxtaposing an unpatterned smooth surface, cultured for 24 hours, and analyzed with respect to axonal selection using conventional imaging techniques. Results: Topography was found to influence axon formation and extension relative to the smooth surface, and the distance of neurons relative to topography was found to impact whether the topography could serve as an effective cue. Neurons were also found to prefer submicroscale over microscale features, and holes over lines, for a given feature size. Conclusion: The results suggest that implementing physical cues of various shapes and sizes on nerve guidance conduits

  4. Specimen size effect considerations for irradiation studies of SiC/SiC

    Energy Technology Data Exchange (ETDEWEB)

    Youngblood, G.E.; Henager, C.H. Jr.; Jones, R.H. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-10-01

    For characterization of the irradiation performance of SiC/SiC, limited available irradiation volume generally dictates that tests be conducted on a small number of relatively small specimens. Flexure testing of two groups of bars with different sizes cut from the same SiC/SiC plate suggested the following lower limits for flexure specimen number and size: six samples at a minimum for each condition, and a minimum bar size of 30 × 6.0 × 2.0 mm³.

  5. Predictive Feature Selection for Genetic Policy Search

    Science.gov (United States)

    2014-05-22

    limited manual intervention are becoming increasingly desirable as more complex tasks in dynamic and high-tempo environments are explored. Reinforcement...states in many domains causes features relevant to the reward variations to be overlooked, which hinders the policy search. 3.4 Parameter Selection PFS...the current feature subset. This local minimum may be “deceptive,” meaning that it does not clearly lead to the global optimal policy (Goldberg and

  6. Evaluating Stability and Comparing Output of Feature Selectors that Optimize Feature Subset Cardinality

    Czech Academy of Sciences Publication Activity Database

    Somol, Petr; Novovičová, Jana

    2010-01-01

    Vol. 32, No. 11 (2010), pp. 1921-1939. ISSN 0162-8828. R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593; GA ČR GA102/07/1594. Grant - others: GA MŠk(CZ) 2C06019. Institutional research plan: CEZ:AV0Z10750506. Keywords: feature selection * feature stability * stability measures * similarity measures * sequential search * individual ranking * feature subset-size optimization * high dimensionality * small sample size. Subject RIV: BD - Theory of Information. Impact factor: 5.027, year: 2010. http://library.utia.cas.cz/separaty/2010/RO/somol-0348726.pdf

  7. The minimum sit-to-stand height test: reliability, responsiveness and relationship to leg muscle strength.

    Science.gov (United States)

    Schurr, Karl; Sherrington, Catherine; Wallbank, Geraldine; Pamphlett, Patricia; Olivetti, Lynette

    2012-07-01

    To determine the reliability of the minimum sit-to-stand height test, its responsiveness and its relationship to leg muscle strength among rehabilitation unit inpatients and outpatients. Reliability study using two measurers and two test occasions. Secondary analysis of data from two clinical trials. Inpatient and outpatient rehabilitation services in three public hospitals. Eighteen hospital patients and five others participated in the reliability study. Seventy-two rehabilitation unit inpatients and 80 outpatients participated in the clinical trials. The minimum sit-to-stand height test was assessed using a standard procedure. For the reliability study, a second tester repeated the minimum sit-to-stand height test on the same day. In the inpatient clinical trial the measures were repeated two weeks later. In the outpatient trial the measures were repeated five weeks later. Knee extensor muscle strength was assessed in the clinical trials using a hand-held dynamometer. The reliability for the minimum sit-to-stand height test was excellent (intraclass correlation coefficient (ICC) 0.91, 95% confidence interval (CI) 0.81-0.96). The standard error of measurement was 34 mm. Responsiveness was moderate in the inpatient trial (effect size: 0.53) but small in the outpatient trial (effect size: 0.16). A small proportion (8-17%) of variability in minimum sit-to-stand height test was explained by knee extensor muscle strength. The minimum sit-to-stand height test has excellent reliability and moderate responsiveness in an inpatient rehabilitation setting. Responsiveness in an outpatient rehabilitation setting requires further investigation. Performance is influenced by factors other than knee extensor muscle strength.

  8. Centered Differential Waveform Inversion with Minimum Support Regularization

    KAUST Repository

    Kazei, Vladimir

    2017-05-26

    Time-lapse full-waveform inversion has two major challenges. The first is the reconstruction of a reference model (the baseline model for most approaches). The second is inversion for the time-lapse changes in the parameters. The common-model approach utilizes the information contained in all available data sets to build a better reference model for time-lapse inversion. Differential (double-difference) waveform inversion reduces the artifacts introduced into estimates of time-lapse parameter changes by imperfect inversion for the baseline-reference model. We propose centered differential waveform inversion (CDWI), which combines these two approaches in order to benefit from both of their features. We apply minimum support regularization, commonly used with electromagnetic methods of geophysical exploration. We test the CDWI method on a synthetic dataset with random noise and show that, with minimum support regularization, it provides better resolution of velocity changes than total variation and Tikhonov regularizations in time-lapse full-waveform inversion.
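For reference, the minimum-support stabilizer commonly used in electromagnetic-style inversions (after Portniaguine and Zhdanov) penalizes the volume over which the model update is nonzero; with m the model (or time-lapse model change) and e a small focusing parameter, a common form is

$$ s_{\mathrm{MS}}(m) \;=\; \int_V \frac{m^2}{m^2 + e^2}\, dV , $$

which approaches the support of m as e tends to zero. The exact functional used in the CDWI method of this record may differ in detail.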

  9. Zone-size nonuniformity of 18F-FDG PET regional textural features predicts survival in patients with oropharyngeal cancer.

    Science.gov (United States)

    Cheng, Nai-Ming; Fang, Yu-Hua Dean; Lee, Li-yu; Chang, Joseph Tung-Chieh; Tsan, Din-Li; Ng, Shu-Hang; Wang, Hung-Ming; Liao, Chun-Ta; Yang, Lan-Yan; Hsu, Ching-Han; Yen, Tzu-Chen

    2015-03-01

    The question as to whether the regional textural features extracted from PET images predict prognosis in oropharyngeal squamous cell carcinoma (OPSCC) remains open. In this study, we investigated the prognostic impact of regional heterogeneity in patients with T3/T4 OPSCC. We retrospectively reviewed the records of 88 patients with T3 or T4 OPSCC who had completed primary therapy. Progression-free survival (PFS) and disease-specific survival (DSS) were the main outcome measures. In an exploratory analysis, a standardized uptake value of 2.5 (SUV 2.5) was taken as the cut-off value for the detection of tumour boundaries. A fixed threshold at 42 % of the maximum SUV (SUVmax 42 %) and an adaptive threshold method were then used for validation. Regional textural features were extracted from pretreatment (18)F-FDG PET/CT images using the grey-level run length encoding method and grey-level size zone matrix. The prognostic significance of PET textural features was examined using receiver operating characteristic (ROC) curves and Cox regression analysis. Zone-size nonuniformity (ZSNU) was identified as an independent predictor of PFS and DSS. Its prognostic impact was confirmed using both the SUVmax 42 % and the adaptive threshold segmentation methods. Based on (1) total lesion glycolysis, (2) uniformity (a local scale texture parameter), and (3) ZSNU, we devised a prognostic stratification system that allowed the identification of four distinct risk groups. The model combining the three prognostic parameters showed a higher predictive value than each variable alone. ZSNU is an independent predictor of outcome in patients with advanced T-stage OPSCC, and may improve their prognostic stratification.

  10. Do Minimum Wages Fight Poverty?

    OpenAIRE

    David Neumark; William Wascher

    1997-01-01

    The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of the employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...

  11. Relevant test set using feature selection algorithm for early detection ...

    African Journals Online (AJOL)

    The objective of feature selection is to find the most relevant features for classification. Thus, the dimensionality of the information is reduced, which may improve classification accuracy. This paper proposes a minimum set of relevant questions that can be used for early detection of dyslexia. In this research, we ...

  12. Dissociation between Features and Feature Relations in Infant Memory: Effects of Memory Load.

    Science.gov (United States)

    Bhatt, Ramesh S.; Rovee-Collier, Carolyn

    1997-01-01

    Four experiments examined the effects of the number of features and feature relations on learning and long-term memory in 3-month-olds. Findings indicated that memory load size selectively constrained infants' long-term memory for relational information, suggesting that, in infants, features and relations are psychologically distinct and that memory…

  13. Prediction technique for minimum-heat-flux (MHF)- point condition of saturated pool boiling

    International Nuclear Information System (INIS)

    Nishio, Shigefumi

    1987-01-01

    The temperature-controlled hypothesis for the minimum-heat-flux (MHF)-point condition, in which the MHF-point temperature is regarded as the controlling factor and is expected to be independent of surface configuration and dimensions, is inductively investigated for saturated pool-boiling. In this paper such features of the MHF-point condition are experimentally proved first. Secondly, a correlation of the MHF-point temperature is developed for the effect of system pressure. Finally, a simple technique based on this correlation is presented to estimate the effects of surface configuration, dimensions and system pressure on the minimum heat flux. (author)

  14. Quantitative Comparison of Tolerance-Based Feature Transforms

    OpenAIRE

    Reniers, Dennie; Telea, Alexandru

    2006-01-01

    Tolerance-based feature transforms (TFTs) assign to each pixel in an image not only the nearest feature pixels on the boundary (origins), but all origins from the minimum distance up to a user-defined tolerance. In this paper, we compare four simple-to-implement methods for computing TFTs for binary images. Of these, two are novel methods and two extend existing distance transform algorithms. We quantitatively and qualitatively compare all algorithms on speed and accuracy of both distance and...

  15. Addressing the minimum fleet problem in on-demand urban mobility.

    Science.gov (United States)

    Vazifeh, M M; Santi, P; Resta, G; Strogatz, S H; Ratti, C

    2018-05-01

    Information and communication technologies have opened the way to new solutions for urban mobility that provide better ways to match individuals with on-demand vehicles. However, a fundamental unsolved problem is how best to size and operate a fleet of vehicles, given a certain demand for personal mobility. Previous studies [1-5] either do not provide a scalable solution or require changes in human attitudes towards mobility. Here we provide a network-based solution to the following 'minimum fleet problem': given a collection of trips (specified by origin, destination and start time), determine the minimum number of vehicles needed to serve all the trips without incurring any delay to the passengers. By introducing the notion of a 'vehicle-sharing network', we present an optimal computationally efficient solution to the problem, as well as a nearly optimal solution amenable to real-time implementation. We test both solutions on a dataset of 150 million taxi trips taken in the city of New York over one year [6]. The real-time implementation of the method with near-optimal service levels allows a 30 per cent reduction in fleet size compared to current taxi operation. Although constraints on driver availability and the existence of abnormal trip demands may lead to a relatively larger optimal value for the fleet size than that predicted here, the fleet size remains robust for a wide range of variations in historical trip demand. These predicted reductions in fleet size follow directly from a reorganization of taxi dispatching that could be implemented with a simple urban app; they do not assume ride sharing [7-9], nor require changes to regulations, business models, or human attitudes towards mobility to become effective. Our results could become even more relevant in the years ahead as fleets of networked, self-driving cars become commonplace [10-14].
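The core reduction behind the 'vehicle-sharing network' idea (minimum vehicles = number of trips minus a maximum matching on the shareability graph, i.e. a minimum path cover of a DAG) can be sketched as below. This is a simplified illustration with a user-supplied reachability test, not the authors' optimized or real-time implementation.

```python
def minimum_fleet(trips, reachable):
    """Fleet size for a set of trips: build the shareability relation (a vehicle
    can serve trip j right after trip i), compute a maximum bipartite matching
    with augmenting paths, and use  min #vehicles = #trips - |matching|.
    `reachable(i, j)` must return True iff trip j's start can be reached in
    time after finishing trip i (this makes the graph a DAG)."""
    n = len(trips)
    adj = [[j for j in range(n) if j != i and reachable(i, j)] for i in range(n)]
    match_to = [-1] * n                     # which trip precedes j in the cover

    def augment(i, seen):
        for j in adj[i]:
            if j in seen:
                continue
            seen.add(j)
            if match_to[j] == -1 or augment(match_to[j], seen):
                match_to[j] = i
                return True
        return False

    matched = sum(augment(i, set()) for i in range(n))
    return n - matched
```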

  16. An Improved Minimum Error Interpolator of CNC for General Curves Based on FPGA

    Directory of Open Access Journals (Sweden)

    Jiye HUANG

    2014-05-01

    This paper presents an improved minimum-error interpolation algorithm for general curve generation in computer numerical control (CNC). Compared with conventional interpolation algorithms such as the By-Point Comparison method, the Minimum-Error method and the Digital Differential Analyzer (DDA) method, the proposed improved Minimum-Error interpolation algorithm strikes a balance between accuracy and efficiency. The new algorithm is applicable to linear, circular, elliptical and parabolic curves. The proposed algorithm is realized on a field programmable gate array (FPGA) in Verilog HDL, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe. The algorithm has the following advantages: first, the maximum interpolation error is only half of the minimum step size; second, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have demonstrated the high accuracy and efficiency of the algorithm, showing that it is highly suited to real-time applications.
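As a loosely related software illustration only (not the FPGA algorithm of this record), the classic integer midpoint circle stepper shows the flavor of minimum-error-style interpolation: at each step it picks the next pixel that deviates least from the ideal curve, using only integer updates.

```python
def midpoint_circle(r):
    """Classic integer circle rasterization (midpoint algorithm) for the first
    octant: choose, at each step, the pixel closer to the ideal circle."""
    x, y, d = 0, r, 1 - r
    points = []
    while x <= y:
        points.append((x, y))
        if d < 0:
            d += 2 * x + 3          # stay on the same row
        else:
            d += 2 * (x - y) + 5    # step down one row
            y -= 1
        x += 1
    return points

print(midpoint_circle(5))  # [(0, 5), (1, 5), (2, 5), (3, 4)]
```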

  17. A Computable Plug-In Estimator of Minimum Volume Sets for Novelty Detection

    KAUST Repository

    Park, Chiwoo; Huang, Jianhua Z.; Ding, Yu

    2010-01-01

    A minimum volume set of a probability density is a region of minimum size among the regions covering a given probability mass of the density. Effective methods for finding the minimum volume sets are very useful for detecting failures or anomalies in commercial and security applications-a problem known as novelty detection. One theoretical approach of estimating the minimum volume set is to use a density level set where a kernel density estimator is plugged into the optimization problem that yields the appropriate level. Such a plug-in estimator is not of practical use because solving the corresponding minimization problem is usually intractable. A modified plug-in estimator was proposed by Hyndman in 1996 to overcome the computation difficulty of the theoretical approach but is not well studied in the literature. In this paper, we provide theoretical support to this estimator by showing its asymptotic consistency. We also show that this estimator is very competitive to other existing novelty detection methods through an extensive empirical study. ©2010 INFORMS.
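A Hyndman-style plug-in estimate of a minimum volume (highest-density) region is easy to sketch with a kernel density estimator: evaluate the fitted density at the sample points and use the (1 - mass) quantile of those values as the level. The code below is a one-dimensional toy example with synthetic data, not the estimator variant analyzed in this record.

```python
import numpy as np
from scipy.stats import gaussian_kde

def hdr_level(sample, mass=0.95):
    """Fit a KDE and return it with the density level whose super-level set
    covers roughly `mass` of the probability (Hyndman-style plug-in)."""
    kde = gaussian_kde(sample)
    dens = kde(sample)
    return kde, np.quantile(dens, 1 - mass)

rng = np.random.default_rng(1)
train = rng.normal(size=(1, 500))            # 1-D training sample (row vector)
kde, level = hdr_level(train, mass=0.95)

new = np.array([[0.1, 4.0]])                 # novelty test: 4.0 is an outlier
print(kde(new) < level)                      # [False  True] -> flag the outlier
```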

  18. Finding the Energy Efficient Curve: Gate Sizing for Minimum Power under Delay Constraints

    Directory of Open Access Journals (Sweden)

    Yoni Aizik

    2011-01-01

    The design scenario examined in this paper assumes that a circuit has been designed initially for high speed and is then redesigned for low power by downsizing the gates. In recent years, as power consumption has become a dominant issue, new circuit optimizations are required to save energy. This is done by trading off some speed in exchange for reduced power. For each feasible speed, an optimization problem is solved in this paper, finding new sizes for the gates such that the circuit satisfies the speed goal while dissipating minimal power. Energy/delay gain (EDG) is defined as a metric to quantify the most efficient tradeoff. The EDG of the circuit is evaluated for a range of reduced circuit speeds, and the power-optimal gate sizes are compared with the initial sizes. Most of the energy savings occur at the final stages of the circuits, while the largest relative downsizing occurs in the middle stages. Typical tapering factors for power-efficient circuits are larger than those for speed-optimal circuits. Signal activity and signal probability affect the optimal gate sizes in the combined optimization of speed and power.

  19. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

    Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine and the subsequent calculation is complicated. To tackle these problems, we propose a minimum blocking method driven by result needs to accomplish a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements that are necessary for the result matrix. In return, the subsequent calculation becomes simple and the I/O transmission cost is cut down. We run experiments on several matrices of different sizes and degrees of sparsity. The results show that the proposed method has better computational efficiency than traditional blocking methods.
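For context, a plain (unblocked) PageRank power iteration on a sparse matrix looks as follows; this baseline is what blocking schemes such as the one in this record aim to parallelize, and the tiny example graph and damping factor are illustrative choices (dangling nodes are not handled).

```python
import numpy as np
from scipy.sparse import csr_matrix

def pagerank(links, n, d=0.85, tol=1e-10):
    """Baseline PageRank by power iteration on a sparse column-stochastic
    matrix; `links` is a list of (source, target) edges."""
    rows, cols = zip(*[(t, s) for s, t in links])      # M[t, s] = 1 / outdeg(s)
    out_deg = np.bincount([s for s, _ in links], minlength=n)
    vals = [1.0 / out_deg[s] for s, _ in links]
    M = csr_matrix((vals, (rows, cols)), shape=(n, n))
    r = np.full(n, 1.0 / n)
    while True:
        r_new = d * M.dot(r) + (1 - d) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

print(pagerank([(0, 1), (1, 2), (2, 0), (2, 1)], n=3))
```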

  1. Zone-size nonuniformity of 18F-FDG PET regional textural features predicts survival in patients with oropharyngeal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Nai-Ming [Chang Gung Memorial Hospital and Chang Gung University, Departments of Nuclear Medicine, Taiyuan (China); Chang Gung Memorial Hospital, Department of Nuclear Medicine, Keelung (China); National Tsing Hua University, Department of Biomedical Engineering and Environmental Sciences, Hsinchu (China); Fang, Yu-Hua Dean [Chang Gung University, Department of Electrical Engineering, Taiyuan (China); Lee, Li-yu [Chang Gung University College of Medicine, Department of Pathology, Chang Gung Memorial Hospital, Taoyuan (China); Chang, Joseph Tung-Chieh; Tsan, Din-Li [Chang Gung University College of Medicine, Department of Radiation Oncology, Chang Gung Memorial Hospital, Taoyuan (China); Ng, Shu-Hang [Chang Gung University College of Medicine, Department of Diagnostic Radiology, Chang Gung Memorial Hospital, Taoyuan (China); Wang, Hung-Ming [Chang Gung University College of Medicine, Division of Hematology/Oncology, Department of Internal Medicine, Chang Gung Memorial Hospital, Taoyuan (China); Liao, Chun-Ta [Chang Gung University College of Medicine, Department of Otolaryngology-Head and Neck Surgery, Chang Gung Memorial Hospital, Taoyuan (China); Yang, Lan-Yan [Chang Gung Memorial Hospital, Biostatistics Unit, Clinical Trial Center, Taoyuan (China); Hsu, Ching-Han [National Tsing Hua University, Department of Biomedical Engineering and Environmental Sciences, Hsinchu (China); Yen, Tzu-Chen [Chang Gung Memorial Hospital and Chang Gung University, Departments of Nuclear Medicine, Taiyuan (China); Chang Gung University College of Medicine, Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital, Taipei (China)

    2014-10-23

    The question as to whether the regional textural features extracted from PET images predict prognosis in oropharyngeal squamous cell carcinoma (OPSCC) remains open. In this study, we investigated the prognostic impact of regional heterogeneity in patients with T3/T4 OPSCC. We retrospectively reviewed the records of 88 patients with T3 or T4 OPSCC who had completed primary therapy. Progression-free survival (PFS) and disease-specific survival (DSS) were the main outcome measures. In an exploratory analysis, a standardized uptake value of 2.5 (SUV 2.5) was taken as the cut-off value for the detection of tumour boundaries. A fixed threshold at 42 % of the maximum SUV (SUVmax 42 %) and an adaptive threshold method were then used for validation. Regional textural features were extracted from pretreatment 18F-FDG PET/CT images using the grey-level run length encoding method and grey-level size zone matrix. The prognostic significance of PET textural features was examined using receiver operating characteristic (ROC) curves and Cox regression analysis. Zone-size nonuniformity (ZSNU) was identified as an independent predictor of PFS and DSS. Its prognostic impact was confirmed using both the SUVmax 42 % and the adaptive threshold segmentation methods. Based on (1) total lesion glycolysis, (2) uniformity (a local scale texture parameter), and (3) ZSNU, we devised a prognostic stratification system that allowed the identification of four distinct risk groups. The model combining the three prognostic parameters showed a higher predictive value than each variable alone. ZSNU is an independent predictor of outcome in patients with advanced T-stage OPSCC, and may improve their prognostic stratification. (orig.)

  2. Rising above the Minimum Wage.

    Science.gov (United States)

    Even, William; Macpherson, David

    An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…

  3. Feature-Based Retinal Image Registration Using D-Saddle Feature

    Directory of Open Access Journals (Sweden)

    Roziana Ramli

    2017-01-01

    Retinal image registration is important to assist diagnosis and monitor retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various registration applications requires the detection and distribution of feature points on the low-quality region that consists of vessels of varying contrast and sizes. A recent feature detector known as Saddle detects feature points that are poorly distributed and densely positioned on strong-contrast vessels. Therefore, we propose a multiresolution difference of Gaussian pyramid with Saddle detector (D-Saddle) to detect feature points on the low-quality region that consists of vessels with varying contrast and sizes. D-Saddle is tested on the Fundus Image Registration (FIRE) Dataset, which consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of retinal image pairs with an average registration accuracy of 2.329 pixels, while a lower success rate is observed in the other four state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest correlation (Spearman) with the intensity uniformity metric among all methods. Finally, the paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle.

  4. Do detailed simulations with size-resolved microphysics reproduce basic features of observed cirrus ice size distributions?

    Science.gov (United States)

    Fridlind, A. M.; Atlas, R.; van Diedenhoven, B.; Ackerman, A. S.; Rind, D. H.; Harrington, J. Y.; McFarquhar, G. M.; Um, J.; Jackson, R.; Lawson, P.

    2017-12-01

    It has recently been suggested that seeding synoptic cirrus could have desirable characteristics as a geoengineering approach, but surprisingly large uncertainties remain in the fundamental parameters that govern cirrus properties, such as mass accommodation coefficient, ice crystal physical properties, aggregation efficiency, and ice nucleation rate from typical upper tropospheric aerosol. Only one synoptic cirrus model intercomparison study has been published to date, and studies that compare the shapes of observed and simulated ice size distributions remain sparse. Here we amend a recent model intercomparison setup using observations during two 2010 SPARTICUS campaign flights. We take a quasi-Lagrangian column approach and introduce an ensemble of gravity wave scenarios derived from collocated Doppler cloud radar retrievals of vertical wind speed. We use ice crystal properties derived from in situ cloud particle images, for the first time allowing smoothly varying and internally consistent treatments of nonspherical ice capacitance, fall speed, gravitational collection, and optical properties over all particle sizes in our model. We test two new parameterizations for mass accommodation coefficient as a function of size, temperature and water vapor supersaturation, and several ice nucleation scenarios. Comparison of results with in situ ice particle size distribution data, corrected using state-of-the-art algorithms to remove shattering artifacts, indicate that poorly constrained uncertainties in the number concentration of crystals smaller than 100 µm in maximum dimension still prohibit distinguishing which parameter combinations are more realistic. When projected area is concentrated at such sizes, the only parameter combination that reproduces observed size distribution properties uses a fixed mass accommodation coefficient of 0.01, on the low end of recently reported values. No simulations reproduce the observed abundance of such small crystals when the

  5. Assessment of the facial features and chin development of fetuses with use of serial three-dimensional sonography and the mandibular size monogram in a Chinese population.

    Science.gov (United States)

    Tsai, Meng-Yin; Lan, Kuo-Chung; Ou, Chia-Yo; Chen, Jen-Huang; Chang, Shiuh-Young; Hsu, Te-Yao

    2004-02-01

    Our purpose was to evaluate whether the application of serial three-dimensional (3D) sonography and the mandibular size monogram can allow observation of dynamic changes in facial features, as well as chin development, in utero. The mandibular size monogram was established through a cross-sectional study involving 183 fetal images. The serial changes in facial features and chin development were assessed in a cohort study involving 40 patients. The monogram reveals that the biparietal distance (BPD)/mandibular body length (MBL) ratio gradually decreases with advancing gestational age. The cohort study conducted with serial 3D sonography shows the same tendency. Both the images and the results of the paired-samples t test (P < …) and the monogram display disproportionate growth of the fetal head and chin that leads to changes in facial features in late gestation. This fact must be considered when we evaluate fetuses at risk for development of micrognathia.

  6. [Specific features in realization of the principle of minimum energy dissipation during individual development].

    Science.gov (United States)

    Zotin, A A

    2012-01-01

    Realization of the principle of minimum energy dissipation (Prigogine's theorem) during individual development has been analyzed. This analysis has suggested the following reformulation of this principle for living objects: when environmental conditions are constant, the living system evolves to a current steady state in such a way that the difference between entropy production and entropy flow (psi(u) function) is positive and constantly decreases near the steady state, approaching zero. In turn, the current steady state tends to a final steady state in such a way that the difference between the specific entropy productions in an organism and its environment tends to be minimal. In general, individual development completely agrees with the law of entropy increase (second law of thermodynamics).

  7. Employment effects of minimum wages

    OpenAIRE

    Neumark, David

    2014-01-01

    The potential benefits of higher minimum wages come from the higher wages for affected workers, some of whom are in low-income families. The potential downside is that a higher minimum wage may discourage employers from using the low-wage, low-skill workers that minimum wages are intended to help. Research findings are not unanimous, but evidence from many countries suggests that minimum wages reduce the jobs available to low-skill workers.

  8. Feature Selection, Flaring Size and Time-to-Flare Prediction Using Support Vector Regression, and Automated Prediction of Flaring Behavior Based on Spatio-Temporal Measures Using Hidden Markov Models

    Science.gov (United States)

    Al-Ghraibah, Amani

    Solar flares release stored magnetic energy in the form of radiation and can have significant detrimental effects on earth, including damage to technological infrastructure. Recent work has considered methods to predict future flare activity on the basis of quantitative measures of the solar magnetic field. Accurate advanced warning of solar flare occurrence is an area of increasing concern and much research is ongoing in this area. Our previous work [111] utilized standard pattern recognition and classification techniques to determine (classify) whether a region is expected to flare within a predictive time window, using a Relevance Vector Machine (RVM) classification method. We extracted 38 features describing the complexity of the photospheric magnetic field; the resulting classification metrics provide the baseline against which we compare our new work. We find a true positive rate (TPR) of 0.8, true negative rate (TNR) of 0.7, and true skill score (TSS) of 0.49. This dissertation addresses three basic topics. The first topic is an extension of our previous work [111], where we consider a feature selection method to determine an appropriate feature subset with cross-validation classification based on a histogram analysis of selected features. Classification using the top five features resulting from this analysis yields better classification accuracies across a large unbalanced dataset. In particular, the feature subsets provide better discrimination of the many regions that flare, where we find a TPR of 0.85, a TNR of 0.65 (slightly lower than our previous work), and a TSS of 0.5, which is an improvement over our previous work. In the second topic, we study the prediction of solar flare size and time-to-flare using support vector regression (SVR). When we consider flaring regions only, we find an average error in estimating flare size of approximately half a GOES class. When we additionally consider non-flaring regions, we find an increased average

  9. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)

  10. Allowable minimum upper shelf toughness for nuclear reactor pressure vessels

    Energy Technology Data Exchange (ETDEWEB)

    Zahoor, A.

    1988-05-01

    The paper develops a methodology and procedure for determining the allowable minimum upper shelf toughness for continued safe operation of nuclear reactor pressure vessels. An elastic-plastic fracture mechanics analysis method based on the J-integral tearing modulus (J/T) approach is used. Closed-form expressions for the applied J and tearing modulus are presented for a finite-length, part-throughwall axial flaw with an aspect ratio of 1/6. Solutions are then presented for the Section III, Appendix G flaw. A simple flaw evaluation procedure that can be applied quickly by utility engineers is presented. An attractive feature of the simple procedure is that tearing modulus calculations are not required by the user, and a solution for the slope of the applied J/T line is provided. Results for the allowable minimum upper shelf toughness are presented for a range of reactor pressure vessel thicknesses and heatup/cooldown rates.

  11. Allowable minimum upper shelf toughness for nuclear reactor pressure vessels

    International Nuclear Information System (INIS)

    Zahoor, A.

    1988-01-01

    The paper develops a methodology and procedure for determining the allowable minimum upper shelf toughness for continued safe operation of nuclear reactor pressure vessels. An elastic-plastic fracture mechanics analysis method based on the J-integral tearing modulus (J/T) approach is used. Closed-form expressions for the applied J and tearing modulus are presented for a finite-length, part-throughwall axial flaw with an aspect ratio of 1/6. Solutions are then presented for the Section III, Appendix G flaw. A simple flaw evaluation procedure that can be applied quickly by utility engineers is presented. An attractive feature of the simple procedure is that tearing modulus calculations are not required by the user, and a solution for the slope of the applied J/T line is provided. Results for the allowable minimum upper shelf toughness are presented for a range of reactor pressure vessel thicknesses and heatup/cooldown rates. (orig.)

  12. Influence of temperature and grain size on the tensile ductility of AISI 316 stainless steel

    International Nuclear Information System (INIS)

    Mannan, S.L.; Samuel, K.G.; Rodriguez, P.

    1985-01-01

    The influence of temperature and grain size on the tensile ductility of AISI 316 stainless steel has been examined in the temperature range 300-1223 K for specimens with grain sizes varying from 0.025 to 0.650 mm at a nominal strain rate of 3 × 10⁻⁴ s⁻¹. The percentage total elongation and reduction in area at fracture show minimum ductility at an intermediate temperature, and the temperature corresponding to this ductility minimum has been found to increase with increase in grain size. The total elongation is found to decrease with increase in grain size at high temperatures where failures are essentially intergranular in nature. At 300 K, both uniform and total elongation increase with increase in grain size and then show a small decrease for a very coarse grain size. The high ductility observed at low temperatures (300 K) is consistent with the observation of characteristic dimples associated with transgranular ductile fracture. The ductility minimum with respect to temperature is associated with the occurrence of intergranular fracture, as evidenced by optical and scanning electron microscopy. The present results support the suggestion that the ductility minimum coincides with the maximum amount of grain boundary sliding; at temperatures beyond the ductility minimum, grain boundary separation by cavitation is retarded by the occurrence of grain boundary migration, as evidenced by the grain boundary cusps. In tests conducted at various strain rates in the range 10⁻³-10⁻⁶ s⁻¹ at 873 K, the ductility was found to decrease with decreasing strain rate, emphasizing the increased importance of grain boundary sliding at lower strain rates. (Auth.)

  13. Minimum Wages and Poverty

    OpenAIRE

    Fields, Gary S.; Kanbur, Ravi

    2005-01-01

    Textbook analysis tells us that in a competitive labor market, the introduction of a minimum wage above the competitive equilibrium wage will cause unemployment. This paper makes two contributions to the basic theory of the minimum wage. First, we analyze the effects of a higher minimum wage in terms of poverty rather than in terms of unemployment. Second, we extend the standard textbook model to allow for income sharing between the employed and the unemployed. We find that there are situation...

  14. Minimum triplet covers of binary phylogenetic X-trees.

    Science.gov (United States)

    Huber, K T; Moulton, V; Steel, M

    2017-12-01

    Trees with labelled leaves and with all other vertices of degree three play an important role in systematic biology and other areas of classification. A classical combinatorial result ensures that such trees can be uniquely reconstructed from the distances between the leaves (when the edges are given any strictly positive lengths). Moreover, a linear number of these pairwise distance values suffices to determine both the tree and its edge lengths. A natural set of pairs of leaves is provided by any 'triplet cover' of the tree (based on the fact that each non-leaf vertex is the median vertex of three leaves). In this paper we describe a number of new results concerning triplet covers of minimum size. In particular, we characterize such covers in terms of an associated graph being a 2-tree. Also, we show that minimum triplet covers are 'shellable' and thereby provide a set of pairs for which the inter-leaf distance values will uniquely determine the underlying tree and its associated branch lengths.

  15. Nonlinear dimension reduction and clustering by Minimum Curvilinearity unfold neuropathic pain and tissue embryological classes.

    Science.gov (United States)

    Cannistraci, Carlo Vittorio; Ravasi, Timothy; Montevecchi, Franco Maria; Ideker, Trey; Alessio, Massimo

    2010-09-15

    Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid-two-phase (H2P) procedures-specifically dimension reduction (DR), coupled with clustering-provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. 'Minimum Curvilinearity' (MC) is a principle that-for small datasets-suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. https://sites.google.com/site/carlovittoriocannistraci/home.
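
    A minimal sketch of the MC distance estimate described above, assuming standard SciPy routines; the function name mc_distances and the toy data are ours, not the authors' software.

    ```python
    # Minimum Curvilinearity (MC) distance sketch: approximate curvilinear
    # distances by pairwise path lengths over the minimum spanning tree (MST)
    # of the sample-sample Euclidean distance graph (parameter-free).
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

    def mc_distances(X):
        """X: (n_samples, n_features). Returns the (n, n) MC distance matrix."""
        D = squareform(pdist(X))                    # Euclidean distance matrix
        mst = minimum_spanning_tree(D)              # sparse MST of the distance graph
        # Path lengths over the undirected MST approximate curvilinear distances.
        return shortest_path(mst, method="D", directed=False)

    # Example: 50 noisy samples on a curved 1-D manifold embedded in 10 dimensions.
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 3 * np.pi, 50))
    X = np.c_[np.cos(t), np.sin(t), rng.normal(scale=0.05, size=(50, 8))]
    D_mc = mc_distances(X)
    print(D_mc.shape)   # (50, 50); feed into an embedding (MCE) or clustering (MCAP) step
    ```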

  16. 75 FR 6151 - Minimum Capital

    Science.gov (United States)

    2010-02-08

    ... capital and reserve requirements to be issued by order or regulation with respect to a product or activity... minimum capital requirements. Section 1362(a) establishes a minimum capital level for the Enterprises... entities required under this section.\\6\\ \\3\\ The Bank Act's current minimum capital requirements apply to...

  17. A Pareto-Improving Minimum Wage

    OpenAIRE

    Eliav Danziger; Leif Danziger

    2014-01-01

    This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...

  18. Urban-rural migration: uncertainty and the effect of a change in the minimum wage.

    Science.gov (United States)

    Ingene, C A; Yu, E S

    1989-01-01

    "This paper extends the neoclassical, Harris-Todaro model of urban-rural migration to the case of production uncertainty in the agricultural sector. A unique feature of the Harris-Todaro model is an exogenously determined minimum wage in the urban sector that exceeds the rural wage. Migration occurs until the rural wage equals the expected urban wage ('expected' due to employment uncertainty). The effects of a change in the minimum wage upon regional outputs, resource allocation, factor rewards, expected profits, and expected national income are explored, and the influence of production uncertainty upon the obtained results are delineated." The geographical focus is on developing countries. excerpt

  19. Feedback brake distribution control for minimum pitch

    Science.gov (United States)

    Tavernini, Davide; Velenis, Efstathios; Longo, Stefano

    2017-06-01

    The distribution of brake forces between front and rear axles of a vehicle is typically specified such that the same level of brake force coefficient is imposed at both front and rear wheels. This condition is known as 'ideal' distribution and it is required to deliver the maximum vehicle deceleration and minimum braking distance. For subcritical braking conditions, the deceleration demand may be delivered by different distributions between front and rear braking forces. In this research we show how to obtain the optimal distribution which minimises the pitch angle of a vehicle and hence enhances driver subjective feel during braking. A vehicle model including suspension geometry features is adopted. The problem of the minimum pitch brake distribution for a varying deceleration level demand is solved by means of a model predictive control (MPC) technique. To address the problem of the undesirable pitch rebound caused by a full-stop of the vehicle, a second controller is designed and implemented independently from the braking distribution in use. An extended Kalman filter is designed for state estimation and implemented in a high fidelity environment together with the MPC strategy. The proposed solution is compared with the reference 'ideal' distribution as well as another previous feed-forward solution.
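
    As background for the 'ideal' distribution mentioned above, a standard vehicle-dynamics statement of it (generic symbols, not the paper's MPC formulation) is:

    ```latex
    % 'Ideal' brake force distribution: front/rear brake forces proportional to the
    % dynamic axle loads at deceleration a, so both axles see the same brake force
    % coefficient. l_f, l_r: CG-to-axle distances; h: CG height; g: gravity.
    \frac{F_{b,f}}{F_{b,r}} \;=\; \frac{l_r + (a/g)\,h}{\,l_f - (a/g)\,h\,}
    ```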

  20. The Italian primary school-size distribution and the city-size: a complex nexus

    Science.gov (United States)

    Belmonte, Alessandro; di Clemente, Riccardo; Buldyrev, Sergey V.

    2014-06-01

    We characterize the statistical law according to which the Italian primary school size is distributed. We find that the school size can be approximated by a log-normal distribution, with a fat lower tail that collects a large number of very small schools. The upper tail of the school-size distribution decreases exponentially and the growth rates are distributed with a Laplace PDF. These distributions are similar to those observed for firms and are consistent with a Bose-Einstein preferential attachment process. The body of the distribution features a bimodal shape suggesting some source of heterogeneity in the school organization that we uncover by an in-depth analysis of the relation between school size and city size. We propose a novel cluster methodology and a new spatial interaction approach among schools which outline the variety of policies implemented in Italy. Different regional policies are also discussed, shedding light on the relation between policy and geographical features.
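
    A minimal sketch of the two distributional fits mentioned above (log-normal body, Laplace growth rates), on synthetic data and with standard SciPy estimators rather than the paper's methodology:

    ```python
    # Fit a log-normal to (synthetic) school sizes and a Laplace to their annual
    # log-growth rates, illustrating the distributional checks described above.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sizes = rng.lognormal(mean=5.0, sigma=0.7, size=5000)        # pupils per school
    growth = rng.laplace(loc=0.0, scale=0.05, size=5000)         # log-growth rates

    shape, loc, scale = stats.lognorm.fit(sizes, floc=0)         # log-normal body
    loc_g, scale_g = stats.laplace.fit(growth)                   # Laplace growth rates

    print(f"log-normal: sigma ~ {shape:.2f}, median size ~ {scale:.0f}")
    print(f"Laplace growth rates: scale ~ {scale_g:.3f}")
    print(stats.kstest(sizes, "lognorm", args=(shape, loc, scale)))  # goodness of fit
    ```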

  1. Can interface features affect aggression resulting from violent video game play? An examination of realistic controller and large screen size.

    Science.gov (United States)

    Kim, Ki Joon; Sundar, S Shyam

    2013-05-01

    Aggressiveness attributed to violent video game play is typically studied as a function of the content features of the game. However, can interface features of the game also affect aggression? Guided by the General Aggression Model (GAM), we examine the controller type (gun replica vs. mouse) and screen size (large vs. small) as key technological aspects that may affect the state aggression of gamers, with spatial presence and arousal as potential mediators. Results from a between-subjects experiment showed that a realistic controller and a large screen display induced greater aggression, presence, and arousal than a conventional mouse and a small screen display, respectively, and confirmed that trait aggression was a significant predictor of gamers' state aggression. Contrary to GAM, however, arousal showed no effects on aggression; instead, presence emerged as a significant mediator.

  2. Minimum critical mass systems

    International Nuclear Information System (INIS)

    Dam, H. van; Leege, P.F.A. de

    1987-01-01

    An analysis is presented of thermal systems with minimum critical mass, based on the use of materials with optimum neutron moderating and reflecting properties. The optimum fissile material distributions in the systems are obtained by calculations with standard computer codes, extended with a routine for flat fuel importance search. It is shown that in the minimum critical mass configuration a considerable part of the fuel is positioned in the reflector region. For ²³⁹Pu a minimum critical mass of 87 g is found, which is the lowest value reported hitherto. (author)

  3. Effect of footwear on minimum foot clearance, heel slippage and spatiotemporal measures of gait in older women.

    Science.gov (United States)

    Davis, Annette M; Galna, Brook; Murphy, Anna T; Williams, Cylie M; Haines, Terry P

    2016-02-01

    Footwear has been implicated as a factor in falls, which is a major issue affecting the health of older adults. This study investigated the effect of footwear with dorsal fixation, slippers and bare feet on minimum foot clearance, heel slippage and spatiotemporal variables of gait in community dwelling older women. Thirty women participated (mean age (SD) 69.1 (5.1) years) in a gait assessment using the GaitRITE and Vicon 612 motion analysis system. Conditions included footwear with dorsal fixation, slippers or bare feet. Footwear with dorsal fixation resulted in improved minimum foot clearance compared to the slippers and bare feet conditions and less heel slippage than slippers and an increase in double support. These features lend weight to the argument that older women should be supported to make footwear choices with optimal fitting features including dorsal fixation. Recommendations of particular styles and features of footwear may assist during falls prevention education to reduce the incidence of foot trips and falls. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Unit size limitations in smaller power systems

    International Nuclear Information System (INIS)

    McConnach, J.S.

    1975-01-01

    The developing nations have generally found it an economic necessity to accept the minimum commercial size limit of 600 MWe. Smaller reactor sizes tendered as 'one off' specials carry high specific cost penalties which considerably weaken the competitiveness of nuclear versus conventional thermal plants. The revised IAEA market survey for nuclear power in developing countries (1974 edition) which takes account of the recent heavy escalation in oil prices, indicates a reasonable market for smaller size reactors in the range 150 MWe to 400 MWe, but until this market is approached seriously by manufacturers, the commercial availability and economic viability of smaller size reactors remains uncertain. (orig.) [de]

  5. Optimal Sizing and Placement of Battery Energy Storage in Distribution System Based on Solar Size for Voltage Regulation

    Energy Technology Data Exchange (ETDEWEB)

    Nazaripouya, Hamidreza [Univ. of California, Los Angeles, CA (United States); Wang, Yubo [Univ. of California, Los Angeles, CA (United States); Chu, Peter [Univ. of California, Los Angeles, CA (United States); Pota, Hemanshu R. [Univ. of California, Los Angeles, CA (United States); Gadh, Rajit [Univ. of California, Los Angeles, CA (United States)

    2016-07-26

    This paper proposes a new strategy to achieve voltage regulation in distributed power systems in the presence of solar energy sources and battery storage systems. The goal is to find the minimum size of battery storage and its corresponding location in the network based on the size and place of the integrated solar generation. The proposed method formulates the problem by employing the network impedance matrix to obtain an analytical solution instead of using a recursive algorithm such as power flow. The required modifications for modeling the slack and PV buses (generator buses) are utilized to increase the accuracy of the approach. The use of reactive power control alone for voltage regulation is not always an optimal solution, since the R/X ratio is large in distribution systems. In this paper the minimum size and the best place of battery storage are achieved by optimizing the amount of both active and reactive power exchanged by the battery storage and its grid-tie inverter (GTI), based on the network topology and R/X ratios in the distribution system. Simulation results for the IEEE 14-bus system verify the effectiveness of the proposed approach.

  6. Local Times of Galactic Cosmic Ray Intensity Maximum and Minimum in the Diurnal Variation

    Directory of Open Access Journals (Sweden)

    Su Yeon Oh

    2006-06-01

    The diurnal variation of galactic cosmic ray (GCR) flux intensity observed by the ground Neutron Monitor (NM) shows a sinusoidal pattern with an amplitude of 1-2% of the daily mean. We carried out a statistical study on tendencies of the local times of GCR intensity daily maximum and minimum. To test the influences of the solar activity and the location (cut-off rigidity) on the distribution of the local times of maximum and minimum GCR intensity, we have examined the data of 1996 (solar minimum) and 2000 (solar maximum) at the low-latitude Haleakala (latitude: 20.72 N, cut-off rigidity: 12.91 GeV) and the high-latitude Oulu (latitude: 65.05 N, cut-off rigidity: 0.81 GeV) NM stations. The most frequent local times of the GCR intensity daily maximum and minimum come later by about 2-3 hours in the solar activity maximum year 2000 than in the solar activity minimum year 1996. Oulu NM station, whose cut-off rigidity is smaller, has the most frequent local times of the GCR intensity maximum and minimum later by 2-3 hours than those of Haleakala station. This feature is more evident at the solar maximum. The phase of the daily variation in GCR is dependent upon the interplanetary magnetic field varying with the solar activity and the cut-off rigidity varying with the geographic latitude.
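
    A minimal sketch of how the local time of the diurnal maximum can be extracted by fitting a 24-hour harmonic to hourly NM count rates; the data are synthetic and the function names are ours, not the stations' actual processing.

    ```python
    # Fit a first harmonic to hourly neutron-monitor count rates and report the
    # fractional amplitude and the local time of the diurnal maximum.
    import numpy as np
    from scipy.optimize import curve_fit

    def diurnal(t_hours, mean, amp, phase_hours):
        # 24-hour harmonic: amp is the fractional amplitude, phase the local time of maximum
        return mean * (1.0 + amp * np.cos(2 * np.pi * (t_hours - phase_hours) / 24.0))

    hours = np.arange(24)
    rng = np.random.default_rng(2)
    counts = diurnal(hours, 100.0, 0.015, 15.0) + rng.normal(0, 0.2, 24)  # ~1.5% amplitude

    popt, _ = curve_fit(diurnal, hours, counts, p0=[100.0, 0.01, 12.0])
    mean_fit, amp_fit, phase_fit = popt
    print(f"amplitude ~ {100 * amp_fit:.2f} %, local time of maximum ~ {phase_fit % 24:.1f} h")
    # The local time of minimum then lies roughly 12 h away from the fitted maximum.
    ```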

  7. 5 CFR 551.301 - Minimum wage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...

  8. Layout Optimization of Structures with Finite-size Features using Multiresolution Analysis

    DEFF Research Database (Denmark)

    Chellappa, S.; Diaz, A. R.; Bendsøe, Martin P.

    2004-01-01

    A scheme for layout optimization in structures with multiple finite-sized heterogeneities is presented. Multiresolution analysis is used to compute reduced operators (stiffness matrices) representing the elastic behavior of material distributions with heterogeneities of sizes that are comparable...

  9. Characteristic size and mass of galaxies in the Bose–Einstein condensate dark matter model

    Directory of Open Access Journals (Sweden)

    Jae-Weon Lee

    2016-05-01

    We study the characteristic length scale of galactic halos in the Bose–Einstein condensate (or scalar field) dark matter model. Considering the evolution of the density perturbation we show that the average background matter density determines the quantum Jeans mass and hence the spatial size of galaxies at a given epoch. In this model the minimum size of galaxies increases while the minimum mass of the galaxies decreases as the universe expands. The observed values of the mass and the size of the dwarf galaxies are successfully reproduced with the dark matter particle mass m ≃ 5×10⁻²² eV. The minimum size is about 6×10⁻³√(m/H) λc and the typical rotation velocity of the dwarf galaxies is O(√(H/m) c), where H is the Hubble parameter and λc is the Compton wavelength of the particle. We also suggest that ultra compact dwarf galaxies are the remnants of the dwarf galaxies formed in the early universe.
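
    The scalings quoted above follow, roughly, from the quantum Jeans length; the restatement below is ours (natural units, ħ = c = 1) and should be checked against the paper for factors of order unity.

    ```latex
    % Rough origin of the quoted minimum size (our restatement, \hbar = c = 1):
    % the quantum Jeans length for particle mass m in a background of density
    % \rho \sim H^2/G, with \lambda_c = 1/m the Compton wavelength.
    \lambda_J \;\sim\; \left(\frac{1}{G\,\rho\,m^{2}}\right)^{1/4}
              \;\sim\; \frac{1}{\sqrt{H\,m}}
              \;=\; \sqrt{\frac{m}{H}}\;\lambda_c ,
    \qquad
    v \;\sim\; H\,\lambda_J \;\sim\; \sqrt{\frac{H}{m}} .
    ```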

  10. 50 CFR 635.20 - Size limits.

    Science.gov (United States)

    2010-10-01

    ... review of landings, the period of time remaining in the current fishing year, current and historical..., 2011. For the convenience of the user, the added and revised text is set forth as follows: § 635.20..., current and historical landing trends, and any other relevant factors. NMFS will adjust the minimum size...

  11. HedgeHOGS: A Rapid Nuclear Hedge Sizing and Analysis Tool

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, Adam F. [United States Military Academy, West Point, NY (United States); Steinfeldt, Bradley Alexander [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lafleur, Jarret Marshall [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Hawley, Marilyn F. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Shannon, Lisa M. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-07-01

    The U.S. nuclear stockpile hedge is an inventory of non-deployed nuclear warheads and a force structure capable of deploying those warheads. Current guidance is to retain this hedge to mitigate the risk associated with the technical failure of any single warhead type or adverse geopolitical developments that could require augmentation of the force. The necessary size of the hedge depends on the composition of the nuclear stockpile and assumed constraints. Knowing the theoretical minimum hedge given certain constraints is useful when considering future weapons policy. HedgeHOGS, an Excel-based tool, was developed to enable rapid calculation of the minimum hedge size associated with varying active stockpile composition and hedging strategies.

  12. Nonlinear dimension reduction and clustering by Minimum Curvilinearity unfold neuropathic pain and tissue embryological classes

    KAUST Repository

    Cannistraci, Carlo

    2010-09-01

    Motivation: Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid-two-phase (H2P) procedures-specifically dimension reduction (DR), coupled with clustering-provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. Methods: 'Minimum Curvilinearity' (MC) is a principle that-for small datasets-suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Results: Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. Conclusion: MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. © The Author(s) 2010. Published by Oxford University Press.

  13. Features of Random Metal Nanowire Networks with Application in Transparent Conducting Electrodes

    KAUST Repository

    Maloth, Thirupathi

    2017-05-01

    Among the alternatives to conventional Indium Tin Oxide (ITO) used in making transparent conducting electrodes, random metal nanowire (NW) networks are considered to be superior, offering performance on par with ITO. The performance is measured in terms of sheet resistance and optical transmittance. However, as the electrical properties of such random networks are achieved thanks to a percolation network, a minimum size of the electrodes is needed so that it actually exceeds the representative volume element (RVE) of the material and the macroscopic electrical properties are achieved. There is not much information about the compatibility of this minimum RVE size with the resolution actually needed in electronic devices. Furthermore, the efficiency of NWs in terms of electrical conduction is overlooked. In this work, we address the above industrially relevant questions - 1) The minimum size of electrodes that can be made, based on the dimensions of NWs and the material coverage. For this, we propose a morphology-based classification for defining the RVE size and we also compare it with one based on the stabilization of macroscopic electrical properties. 2) The amount of NWs that do not participate in electrical conduction, and hence are of no practical use. The results presented in this thesis are a design guide for experimentalists to design transparent electrodes with more optimal usage of the material.

  14. Ice Shell Thickness and Endogenic Processes on Europa from Mapping and Topographic Analyses of Pits, Uplifts and Small Chaos Features (Invited)

    Science.gov (United States)

    Singer, K. N.; McKinnon, W. B.; Schenk, P.

    2013-12-01

    Constraining the thickness of the ice shell on Europa and the geological processes occurring in it are keys to understanding this icy world and its potential habitability. We focus on circular-to-subcircular features generally agreed to have been created by endogenic processes in Europa's ice shell or ocean: pits, uplifts, and subcircular chaos. Pits and uplifts are defined by their negative or positive topographic expression, respectively. Pits and uplifts generally retain pre-existing surface structures such as ridges, while chaos specifically refers to areas where the surface is broken up, in some cases to the point of destroying all original surface topography. We have mapped all features plausibly created by upwellings or other endogenic processes in the size range of 1 to 50 km in diameter, and incorporated previously unavailable topographic data as an aid to mapping and characterization of features. Topography was derived from albedo-controlled photoclinometry and crosschecked with stereo data where possible. Mapping was carried out over the medium-resolution Galileo regional maps (RegMaps) covering approximately 9% of Europa's surface, as well as over available high-resolution regions. While limited in extent, the latter are extremely valuable for detecting smaller features and for overall geomorphological analysis. Results of this new mapping show decreasing numbers of small features, and a peak in the size distribution for all features at approximately 5-6 km in diameter. No pits smaller than 3.3 km in diameter were found in high resolution imagery. Topography was used to find the depths and heights of pits and uplifts in the mapped regions. A general trend of increasing pit depth with increasing pit size was found, a correlation more easily understood in the context of a diapiric hypothesis for feature formation (as opposed to purely non-diapiric, melt-through models). Based on isostasy, maximum pit depths of ~0.3-to-0.48 km imply a minimum shell

  15. SU-F-T-78: Minimum Data Set of Measurements for TG 71 Based Electron Monitor-Unit Calculations

    International Nuclear Information System (INIS)

    Xu, H; Guerrero, M; Prado, K; Yi, B

    2016-01-01

    Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves a large set of measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For 6, 9, 12, 16, and 20 MeV of our Varian Clinac-Series linear accelerators, the complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs that were then converted to air-gap factors for SSD 99–110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The “missing” data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MU using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy, and more PDDs and fewer point measurements are generally needed as energy increases. Using only <50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set. The PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: Data set measurement for TG-71 electron MU calculations can be minimized based on the knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.

  16. SU-F-T-78: Minimum Data Set of Measurements for TG 71 Based Electron Monitor-Unit Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xu, H; Guerrero, M; Prado, K; Yi, B [University of Maryland School of Medicine, Baltimore, MD (United States)

    2016-06-15

    Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves a large set of measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For 6, 9, 12, 16, and 20 MeV of our Varian Clinac-Series linear accelerators, the complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs that were then converted to air-gap factors for SSD 99–110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The “missing” data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MU using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy, and more PDDs and fewer point measurements are generally needed as energy increases. Using only <50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set. The PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: Data set measurement for TG-71 electron MU calculations can be minimized based on the knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
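
    An illustrative sketch (with made-up numbers, not the commissioning data above) of the fitting step described: approximate "missing" cutout factors by a low-order polynomial in cutout size fitted to the measured subset.

    ```python
    # Approximate unmeasured cutout factors by a quadratic fit in cutout size,
    # using only the subset of cutout sizes that were actually measured.
    # All values below are hypothetical placeholders.
    import numpy as np

    measured_cutouts = np.array([2.0, 4.0, 10.0, 15.0])     # cm, measured subset
    measured_factors = np.array([0.92, 0.97, 1.00, 1.01])   # hypothetical cutout factors

    coeffs = np.polyfit(measured_cutouts, measured_factors, deg=2)  # quadratic fit
    fit = np.poly1d(coeffs)

    for cutout in (3.0, 6.0):                               # "missing" sizes, interpolated
        print(f"cutout {cutout:4.1f} cm -> estimated factor {fit(cutout):.3f}")
    ```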

  17. Variation in clutch size in relation to nest size in birds.

    Science.gov (United States)

    Møller, Anders P; Adriaensen, Frank; Artemyev, Alexandr; Bańbura, Jerzy; Barba, Emilio; Biard, Clotilde; Blondel, Jacques; Bouslama, Zihad; Bouvier, Jean-Charles; Camprodon, Jordi; Cecere, Francesco; Charmantier, Anne; Charter, Motti; Cichoń, Mariusz; Cusimano, Camillo; Czeszczewik, Dorota; Demeyrier, Virginie; Doligez, Blandine; Doutrelant, Claire; Dubiec, Anna; Eens, Marcel; Eeva, Tapio; Faivre, Bruno; Ferns, Peter N; Forsman, Jukka T; García-Del-Rey, Eduardo; Goldshtein, Aya; Goodenough, Anne E; Gosler, Andrew G; Góźdź, Iga; Grégoire, Arnaud; Gustafsson, Lars; Hartley, Ian R; Heeb, Philipp; Hinsley, Shelley A; Isenmann, Paul; Jacob, Staffan; Järvinen, Antero; Juškaitis, Rimvydas; Korpimäki, Erkki; Krams, Indrikis; Laaksonen, Toni; Leclercq, Bernard; Lehikoinen, Esa; Loukola, Olli; Lundberg, Arne; Mainwaring, Mark C; Mänd, Raivo; Massa, Bruno; Mazgajski, Tomasz D; Merino, Santiago; Mitrus, Cezary; Mönkkönen, Mikko; Morales-Fernaz, Judith; Morin, Xavier; Nager, Ruedi G; Nilsson, Jan-Åke; Nilsson, Sven G; Norte, Ana C; Orell, Markku; Perret, Philippe; Pimentel, Carla S; Pinxten, Rianne; Priedniece, Ilze; Quidoz, Marie-Claude; Remeš, Vladimir; Richner, Heinz; Robles, Hugo; Rytkönen, Seppo; Senar, Juan Carlos; Seppänen, Janne T; da Silva, Luís P; Slagsvold, Tore; Solonen, Tapio; Sorace, Alberto; Stenning, Martyn J; Török, János; Tryjanowski, Piotr; van Noordwijk, Arie J; von Numers, Mikael; Walankiewicz, Wiesław; Lambrechts, Marcel M

    2014-09-01

    Nests are structures built to support and protect eggs and/or offspring from predators, parasites, and adverse weather conditions. Nests are mainly constructed prior to egg laying, meaning that parent birds must make decisions about nest site choice and nest building behavior before the start of egg-laying. Parent birds should be selected to choose nest sites and to build optimally sized nests, yet our current understanding of clutch size-nest size relationships is limited to small-scale studies performed over short time periods. Here, we quantified the relationship between clutch size and nest size, using an exhaustive database of 116 slope estimates based on 17,472 nests of 21 species of hole and non-hole-nesting birds. There was a significant, positive relationship between clutch size and the base area of the nest box or the nest, and this relationship did not differ significantly between open nesting and hole-nesting species. The slope of the relationship showed significant intraspecific and interspecific heterogeneity among four species of secondary hole-nesting species, but also among all 116 slope estimates. The estimated relationship between clutch size and nest box base area in study sites with more than a single size of nest box was not significantly different from the relationship using studies with only a single size of nest box. The slope of the relationship between clutch size and nest base area in different species of birds was significantly negatively related to minimum base area, and less so to maximum base area in a given study. These findings are consistent with the hypothesis that bird species have a general reaction norm reflecting the relationship between nest size and clutch size. Further, they suggest that scientists may influence the clutch size decisions of hole-nesting birds through the provisioning of nest boxes of varying sizes.

  18. An Analysis of Audio Features to Develop a Human Activity Recognition Model Using Genetic Algorithms, Random Forests, and Neural Networks

    Directory of Open Access Journals (Sweden)

    Carlos E. Galván-Tejada

    2016-01-01

    This work presents a human activity recognition (HAR) model based on audio features. The use of sound as an information source for HAR models represents a challenge because sound wave analyses generate very large amounts of data. However, feature selection techniques may reduce the amount of data required to represent an audio signal sample. Some of the audio features that were analyzed include Mel-frequency cepstral coefficients (MFCC). Although MFCC are commonly used in voice and instrument recognition, their utility within HAR models is yet to be confirmed, and this work validates their usefulness. Additionally, statistical features were extracted from the audio samples to generate the proposed HAR model. The amount of information needed to build a HAR model directly impacts the accuracy of the model. This problem was also tackled in the present work; our results indicate that we are capable of recognizing a human activity with an accuracy of 85% using the proposed HAR model. This means that minimum computational costs are needed, thus allowing portable devices to identify human activities using audio as an information source.
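
    A minimal sketch of extracting MFCC-type audio features for such a HAR pipeline, assuming the librosa library is available; the paper's exact feature set, windowing and statistics may differ, and the file name below is hypothetical.

    ```python
    # Extract MFCCs from an audio clip and collapse them into a compact
    # per-coefficient summary vector suitable for a classifier.
    import numpy as np
    import librosa

    def mfcc_summary(path, n_mfcc=13):
        y, sr = librosa.load(path, sr=None)                      # keep native sample rate
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape (n_mfcc, n_frames)
        # Per-coefficient mean and standard deviation keep the feature vector small.
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # features = mfcc_summary("walking_clip.wav")   # hypothetical file name
    # print(features.shape)                          # (26,) for 13 MFCCs
    ```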

  19. Detection of minimum-ionizing particles in hydrogenated amorphous silicon

    International Nuclear Information System (INIS)

    Kaplan, S.N.; Fujieda, I.; Perez-Mendez, V.; Qureshi, S.; Ward, W.; Street, R.A.

    1987-09-01

    Based on previously reported results of the successful detection of alpha particles and 1- and 2-MeV protons with hydrogenated amorphous silicon (a-Si:H) diodes, detection of a single minimum-ionizing particle will require a total sensitive thickness of approximately 100 to 150 μm, either in the form of a single thick diode or as a stack of several thinner diodes. Signal saturation at high dE/dx makes it necessary to simulate minimum ionization in order to evaluate present detectors. Two techniques, using pulsed infrared light and pulsed x-rays, give single-pulse signals large enough for direct measurements. A third, using beta rays, requires multiple-transit signal averaging to produce signals measurable above noise. Signal amplitudes from the a-Si:H are limited to 60% of the signal size from Si crystals extrapolated to the same thickness. This is consistent with an a-Si:H radiation ionization energy, W = 6 eV/electron-hole pair. Beta-ray signals are observed at the expected amplitude.
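
    A back-of-envelope estimate (ours, with rounded textbook numbers rather than values from the paper) of why a sensitive thickness of order 100 μm is needed:

    ```latex
    % Expected signal from a minimum-ionizing particle in ~100 \mu m of silicon,
    % assuming dE/dx of roughly 4 MeV/cm and W = 6 eV per electron-hole pair:
    N_{e\text{-}h} \;\approx\; \frac{(dE/dx)\,\Delta x}{W}
      \;\approx\; \frac{(\sim 4\ \mathrm{MeV/cm}) \times 10^{-2}\ \mathrm{cm}}{6\ \mathrm{eV}}
      \;\approx\; 7\times 10^{3}\ \text{electron--hole pairs},
    ```

    a small but measurable signal, which is why a sensitive thickness of order 100-150 μm (or a stack of thinner diodes) is required.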

  20. Automatic feature-based grouping during multiple object tracking.

    Science.gov (United States)

    Erlikhman, Gennady; Keane, Brian P; Mettler, Everett; Horowitz, Todd S; Kellman, Philip J

    2013-12-01

    Contour interpolation automatically binds targets with distractors to impair multiple object tracking (Keane, Mettler, Tsoi, & Kellman, 2011). Is interpolation special in this regard or can other features produce the same effect? To address this question, we examined the influence of eight features on tracking: color, contrast polarity, orientation, size, shape, depth, interpolation, and a combination (shape, color, size). In each case, subjects tracked 4 of 8 objects that began as undifferentiated shapes, changed features as motion began (to enable grouping), and returned to their undifferentiated states before halting. We found that intertarget grouping improved performance for all feature types except orientation and interpolation (Experiment 1 and Experiment 2). Most importantly, target-distractor grouping impaired performance for color, size, shape, combination, and interpolation. The impairments were, at times, large (>15% decrement in accuracy) and occurred relative to a homogeneous condition in which all objects had the same features at each moment of a trial (Experiment 2), and relative to a "diversity" condition in which targets and distractors had different features at each moment (Experiment 3). We conclude that feature-based grouping occurs for a variety of features besides interpolation, even when irrelevant to task instructions and contrary to the task demands, suggesting that interpolation is not unique in promoting automatic grouping in tracking tasks. Our results also imply that various kinds of features are encoded automatically and in parallel during tracking.

  1. Efficient perovskite light-emitting diodes featuring nanometre-sized crystallites

    Science.gov (United States)

    Xiao, Zhengguo; Kerner, Ross A.; Zhao, Lianfeng; Tran, Nhu L.; Lee, Kyung Min; Koh, Tae-Wook; Scholes, Gregory D.; Rand, Barry P.

    2017-01-01

    Organic-inorganic hybrid perovskite materials are emerging as highly attractive semiconductors for use in optoelectronics. In addition to their use in photovoltaics, perovskites are promising for realizing light-emitting diodes (LEDs) due to their high colour purity, low non-radiative recombination rates and tunable bandgap. Here, we report highly efficient perovskite LEDs enabled through the formation of self-assembled, nanometre-sized crystallites. Large-group ammonium halides added to the perovskite precursor solution act as a surfactant that dramatically constrains the growth of 3D perovskite grains during film forming, producing crystallites with dimensions as small as 10 nm and film roughness of less than 1 nm. Coating these nanometre-sized perovskite grains with longer-chain organic cations yields highly efficient emitters, resulting in LEDs that operate with external quantum efficiencies of 10.4% for the methylammonium lead iodide system and 9.3% for the methylammonium lead bromide system, with significantly improved shelf and operational stability.

  2. Minimum income protection in the Netherlands

    NARCIS (Netherlands)

    van Peijpe, T.

    2009-01-01

    This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its

  3. Hedge math: Theoretical limits on minimum stockpile size across nuclear hedging strategies

    Energy Technology Data Exchange (ETDEWEB)

    Lafleur, Jarret Marshall [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Roesler, Alexander W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-01

    In June 2013, the Department of Defense published a congressionally mandated, unclassified update on the U.S. Nuclear Employment Strategy. Among the many updates in this document are three key ground rules for guiding the sizing of the non-deployed U.S. nuclear stockpile. Furthermore, these ground rules form an important and objective set of criteria against which potential future stockpile hedging strategies can be evaluated.

  4. Working memory for visual features and conjunctions in schizophrenia.

    Science.gov (United States)

    Gold, James M; Wilk, Christopher M; McMahon, Robert P; Buchanan, Robert W; Luck, Steven J

    2003-02-01

    The visual working memory (WM) storage capacity of patients with schizophrenia was investigated using a change detection paradigm. Participants were presented with 2, 3, 4, or 6 colored bars with testing of both single feature (color, orientation) and feature conjunction conditions. Patients performed significantly worse than controls at all set sizes but demonstrated normal feature binding. Unlike controls, patient WM capacity declined at set size 6 relative to set size 4. Impairments with subcapacity arrays suggest a deficit in task set maintenance: Greater impairment for supercapacity set sizes suggests a deficit in the ability to selectively encode information for WM storage. Thus, the WM impairment in schizophrenia appears to be a consequence of attentional deficits rather than a reduction in storage capacity.
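
    Capacity in change-detection tasks of this kind is commonly summarized with Cowan's K, shown below for reference; the study's exact estimator may differ.

    ```latex
    % Cowan's K, the standard capacity estimate for whole-display change detection:
    % N is the set size, H the hit rate, F the false-alarm rate.
    K \;=\; N \times (H - F)
    ```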

  5. MRI features of peripheral traumatic neuromas

    Energy Technology Data Exchange (ETDEWEB)

    Ahlawat, Shivani [Johns Hopkins University School of Medicine, Musculoskeletal Radiology Section, The Russell H. Morgan Department of Radiology and Radiological Science, Baltimore, MD (United States); Belzberg, Allan J. [The Johns Hopkins Hospital, Department of Neurosurgery, Baltimore, MD (United States); Montgomery, Elizabeth A. [The Johns Hopkins Hospital, Pathology, Oncology and Orthopedic Surgery, Baltimore, MD (United States); Fayad, Laura M. [Department of Orthopedic Surgery, Department of Radiology and Radiological Science, Musculoskeletal Imaging Section Chief, The Johns Hopkins Medical Institutions, Baltimore, MD (United States); The Johns Hopkins Medical Institutions, Department of Orthopedic Surgery, Baltimore, MD (United States)

    2016-04-15

    To describe the MRI appearance of traumatic neuromas on non-contrast and contrast-enhanced MRI sequences. This IRB-approved, HIPAA-compliant study retrospectively reviewed 13 subjects with 20 neuromas. Two observers reviewed pre-operative MRIs for imaging features of neuroma (size, margin, capsule, signal intensity, heterogeneity, enhancement, neurogenic features and denervation) and the nerve segment distal to the traumatic neuroma. Descriptive statistics were reported. Pearson's correlation was used to examine the relationship between size of neuroma and parent nerve. Of 20 neuromas, 13 were neuromas-in-continuity and seven were end-bulb neuromas. Neuromas had a mean size of 1.5 cm (range 0.6-4.8 cm), 100 % (20/20) had indistinct margins and 0 % (0/20) had a capsule. Eighty-eight percent (7/8) showed enhancement. All 100 % (20/20) had tail sign; 35 % (7/20) demonstrated discontinuity from the parent nerve. None showed a target sign. There was moderate positive correlation (r = 0.68, p = 0.001) with larger neuromas arising from larger parent nerves. MRI evaluation of the nerve segment distal to the neuroma showed increased size (mean size 0.5 cm ± 0.4 cm) compared to the parent nerve (mean size 0.3 cm ± 0.2 cm). Since MRI features of neuromas include enhancement, intravenous contrast medium cannot be used to distinguish neuromas from peripheral nerve sheath tumours. The clinical history of trauma with the lack of a target sign are likely the most useful clues. (orig.)

  6. Optimum allocation of imaging time and minimum detectable activity in dual isotope blood pool subtraction indium-111 platelet imaging

    International Nuclear Information System (INIS)

    Machac, J.; Horowitz, S.F.; Goldsmith, S.J.; Fuster, V.

    1984-01-01

    Indium-111 labeled platelet imaging is a tool for detection of thrombus formation in vascular spaces. Dual isotope blood pool subtraction may help differentiate focal platelet accumulation from blood pool activity. This study used a computer model to calculate the minimum excess-to-blood pool platelet ratio (EX/BP) and the optimum dual isotope imaging times under varied conditions of lesion size. The model simulated usual human imaging doses of 500 μCi of In-111 platelets and 5 mCi of Tc-99m labeled RBCs, giving a reference cardiac blood pool region (100 cc) of 10000 cpm for Tc-99m and 500 cpm for In-111. The total imaging time was fixed at 20 minutes, while the two isotope imaging times (T_In/T_Tc) were varied, as were the simulated lesion size (cc) and EX/BP. The relative error of the excess counts was calculated using propagation of error theory. At the critical level of detection, where the excess lesion counts equal 3 times the standard deviation, the optimum T_In/T_Tc and minimum EX/BP were determined for each lesion size. For the smallest lesion size (0.1 cc), the minimum detectable EX/BP ratio was 1.6, with the best T_In/T_Tc ratio of 18/2 minutes, and for large lesions, an EX/BP of 0.1, with a T_In/T_Tc of 16/4. This model provides an estimate of the sensitivity and optimizes imaging times in dual isotope subtraction platelet imaging. The model is adaptable to varying isotope doses, total imaging times and lesion sizes. This information will be helpful in future in vivo imaging studies of intravascular thrombi in humans.
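
    A generic sketch (not the paper's full model) of the propagation-of-error criterion described above: after scaling the Tc-99m blood-pool image through a reference region, the In-111 excess is called detectable when it exceeds 3 standard deviations. Counts and ROI names below are illustrative.

    ```python
    # Blood-pool subtraction with Poisson error propagation and a 3-sigma
    # detectability criterion (illustrative counts only).
    import numpy as np

    def excess_and_sigma(in111_lesion, tc99m_lesion, in111_ref, tc99m_ref):
        """Estimate the In-111 excess in a lesion ROI and its standard deviation.

        The Tc-99m blood-pool image scales the expected In-111 blood-pool
        contribution in the lesion ROI via a reference (e.g. cardiac) ROI.
        """
        scale = in111_ref / tc99m_ref               # In-111 per Tc-99m count in pure blood pool
        blood_pool_est = scale * tc99m_lesion
        excess = in111_lesion - blood_pool_est
        # Poisson variances propagated through the scaling and the subtraction
        var = (in111_lesion
               + blood_pool_est**2 * (1 / tc99m_lesion + 1 / in111_ref + 1 / tc99m_ref))
        return excess, np.sqrt(var)

    excess, sigma = excess_and_sigma(in111_lesion=900, tc99m_lesion=12000,
                                     in111_ref=500, tc99m_ref=10000)
    print(f"excess = {excess:.0f} counts, sigma = {sigma:.0f}, detectable: {excess > 3 * sigma}")
    ```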

  7. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  8. A database of linear codes over F_13 with minimum distance bounds and new quasi-twisted codes from a heuristic search algorithm

    Directory of Open Access Journals (Sweden)

    Eric Z. Chen

    2015-01-01

    Error control codes have been widely used in data communications and storage systems. One central problem in coding theory is to optimize the parameters of a linear code and construct codes with the best possible parameters. There are tables of best-known linear codes over finite fields of sizes up to 9. Recently, there has been a growing interest in codes over $\mathbb{F}_{13}$ and other fields of size greater than 9. The main purpose of this work is to present a database of best-known linear codes over the field $\mathbb{F}_{13}$ together with upper bounds on the minimum distances. To find good linear codes to establish lower bounds on minimum distances, an iterative heuristic computer search algorithm is employed to construct quasi-twisted (QT) codes over the field $\mathbb{F}_{13}$ with high minimum distances. A large number of new linear codes have been found, improving previously best-known results. Tables of $[pm, m]$ QT codes over $\mathbb{F}_{13}$ with best-known minimum distances as well as a table of lower and upper bounds on the minimum distances for linear codes of length up to 150 and dimension up to 6 are presented.
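
    To make the tabulated quantity concrete, the sketch below brute-forces the minimum distance of a small linear code over F_13; the generator matrix is invented for illustration and is not one of the database codes.

    ```python
    # Brute-force minimum-distance computation for a small linear code over F_13.
    # For a linear code, the minimum distance equals the minimum Hamming weight
    # of a nonzero codeword.
    from itertools import product

    Q = 13                                   # field size
    G = [[1, 0, 1, 2, 3],                    # a [5, 2] generator matrix (example only)
         [0, 1, 4, 5, 6]]
    k, n = len(G), len(G[0])

    def encode(msg):
        # Codeword = message vector times G, componentwise modulo Q.
        return [sum(m * g for m, g in zip(msg, col)) % Q for col in zip(*G)]

    d_min = min(sum(c != 0 for c in encode(msg))
                for msg in product(range(Q), repeat=k) if any(msg))
    print(f"[{n}, {k}] code over F_{Q}: minimum distance d = {d_min}")
    ```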

  9. The Czech Wage Distribution and the Minimum Wage Impacts: the Empirical Analysis

    Directory of Open Access Journals (Sweden)

    Kateřina Duspivová

    2013-06-01

    A well-fitting wage distribution is a crucial precondition for economic modeling of labour market processes. In the first part, this paper provides the evidence that – as for wages in the Czech Republic – the most often used log-normal distribution failed and the best-fitting one is the Dagum distribution. Then we investigate the role of the wage distribution in the process of economic modeling. By way of an example of the minimum wage impacts on the Czech labour market, we examine the response of Meyer and Wise's (1983) model to the Dagum and log-normal distributions. The results suggest that the wage distribution has important implications for the effects of the minimum wage on the shape of the lower tail of the measured wage distribution and is thus an important feature for interpreting the effects of minimum wages.
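
    A minimal sketch of comparing Dagum and log-normal fits to a wage sample. It assumes the Dagum distribution can be fitted via SciPy's Burr Type III family (scipy.stats.burr); that parameterization should be checked against the paper's before interpreting fitted values, and the data here are synthetic.

    ```python
    # Compare Dagum (Burr Type III) and log-normal maximum-likelihood fits on a
    # synthetic wage sample by their log-likelihoods.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    wages = rng.lognormal(mean=10.0, sigma=0.5, size=20000)      # stand-in wage sample

    for name, dist in [("Dagum (Burr III)", stats.burr), ("log-normal", stats.lognorm)]:
        params = dist.fit(wages, floc=0)                          # fix location at 0
        loglik = np.sum(dist.logpdf(wages, *params))
        print(f"{name:18s} log-likelihood = {loglik:,.0f}")
    ```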

  10. Understanding the Minimum Wage: Issues and Answers.

    Science.gov (United States)

    Employment Policies Inst. Foundation, Washington, DC.

    This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…

  11. Youth minimum wages and youth employment

    NARCIS (Netherlands)

    Marimpi, Maria; Koning, Pierre

    2018-01-01

    This paper performs a cross-country level analysis on the impact of the level of specific youth minimum wages on the labor market performance of young individuals. We use information on the use and level of youth minimum wages, as compared to the level of adult minimum wages as well as to the median

  12. Pediatric Program Director Minimum Milestone Expectations before Allowing Supervision of Others and Unsupervised Practice.

    Science.gov (United States)

    Li, Su-Ting T; Tancredi, Daniel J; Schwartz, Alan; Guillot, Ann; Burke, Ann E; Trimm, R Franklin; Guralnick, Susan; Mahan, John D; Gifford, Kimberly

    2018-04-25

    The Accreditation Council for Graduate Medical Education requires semiannual Milestone reporting on all residents. Milestone expectations of performance are unknown. Determine pediatric program director (PD) minimum Milestone expectations for residents prior to being ready to supervise and prior to being ready to graduate. Mixed methods survey of pediatric PDs on their programs' Milestone expectations before residents are ready to supervise and before they are ready to graduate, and in what ways PDs use Milestones to make supervision and graduation decisions. If programs had no established Milestone expectations, PDs indicated expectations they considered for use in their program. Mean minimum Milestone level expectations adjusted for program size, region, and clustering of Milestone expectations by program were calculated for prior to supervise and prior to graduate. Free-text questions were analyzed using thematic analysis. The response rate was 56.8% (113/199). Most programs had no required minimum Milestone level before residents are ready to supervise (80%; 76/95) or ready to graduate (84%; 80/95). For readiness to supervise, minimum Milestone expectations PDs considered establishing for their program were highest for humanism (2.46, 95% CI: 2.21-2.71) and professionalization (2.37, 2.15-2.60). Minimum Milestone expectations for graduates were highest for help-seeking (3.14, 2.83-3.46). Main themes included the use of Milestones in combination with other information to assess learner performance and Milestones are not equally weighted when making advancement decisions. Most PDs have not established program minimum Milestones, but would vary such expectations by competency. Copyright © 2018. Published by Elsevier Inc.

  13. Effect of grain size on the high temperature mechanical properties of type 316LN stainless steel

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. W.; Lee, Y. S.; Ryu, W. S.; Jang, J. S.; Kim, S. H.; Kim, W. G.; Cho, H. D.; Han, C. H

    2001-02-01

    Nitrogen improves the high temperature mechanical properties and decreases the grain size. The effect of nitrogen on the high temperature mechanical properties was investigated from the viewpoint of grain size. Tensile strength increases with decreasing grain size and agrees with the Hall-Petch relationship. The effect of grain size on the low cycle fatigue properties was investigated by measuring the fatigue life from results obtained at a constant strain rate and various strain ranges. Grain size had no effect on the low cycle fatigue properties. The time to rupture decreased with increasing grain size. The steady state creep rate decreased to a minimum and then increased as the grain size increased. This result agrees with the result predicted from the Garofalo equation. The rupture elongation at the intermediate grain size showed a minimum due to cavities formed easily by carbide precipitates in the grain boundaries.
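
    The Hall-Petch relationship referred to above has the standard form shown below, where σ₀ and k_y are material constants and d is the mean grain diameter.

    ```latex
    % Standard Hall-Petch relation: yield strength increases as grain size decreases.
    \sigma_y \;=\; \sigma_0 + k_y\, d^{-1/2}
    ```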

  14. Discretization of space and time: determining the values of minimum length and minimum time

    OpenAIRE

    Roatta , Luca

    2017-01-01

    Assuming that space and time can only have discrete values, we obtain the expression of the minimum length and the minimum time interval. These values are found to coincide exactly with the Planck length and the Planck time, except for the presence of h instead of ħ.
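
    For reference, the Planck length and time mentioned above are (standard definitions; the paper's result reportedly carries h in place of ħ):

    ```latex
    % Standard definitions of the Planck length and Planck time.
    \ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6\times10^{-35}\ \mathrm{m},
    \qquad
    t_P = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.4\times10^{-44}\ \mathrm{s}.
    ```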

  15. Minimum wage development in the Russian Federation

    OpenAIRE

    Bolsheva, Anna

    2012-01-01

    The aim of this paper is to analyze the effectiveness of the minimum wage policy at the national level in Russia and its impact on living standards in the country. The analysis showed that the national minimum wage in Russia does not serve its original purpose of protecting the lowest wage earners and has no substantial effect on poverty reduction. The national subsistence minimum is too low and cannot be considered an adequate criterion for the setting of the minimum wage. The minimum wage d...

  16. Feature selection in classification of eye movements using electrooculography for activity recognition.

    Science.gov (United States)

    Mala, S; Latha, K

    2014-01-01

    Activity recognition is needed in many applications, for example, surveillance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. For selecting a subset of features, Differential Evolution (DE), a very efficient evolutionary optimizer, is used to find informative features from eye movements recorded using electrooculography (EOG). Many researchers use EOG signals in human-computer interactions with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness based features, minimum redundancy maximum relevance features, and Differential Evolution based features. This work concentrates more on the feature selection algorithm based on DE in order to improve the classification for faultless activity recognition.

  17. Functional size of vacuolar H+ pumps: Estimates from radiation inactivation studies

    International Nuclear Information System (INIS)

    Sarafian, V.; Poole, R.J.

    1991-01-01

    The PPase and the ATPase from red beet (Beta vulgaris) vacuolar membranes were subjected to radiation inactivation by a ⁶⁰Co source in both the native tonoplast and detergent-solubilized states, in order to determine their target molecular sizes. Analysis of the residual phosphohydrolytic and proton transport activities, after exposure to varying doses of radiation, yielded exponential relationships between the activities and radiation doses. The deduced target molecular sizes for PPase activity in native and solubilized membranes were 125 kD and 259 kD respectively, and 327 kD for H⁺-transport. This suggests that the minimum number of subunits of 67 kD for PPi hydrolysis is two in the native state and four after Triton X-100 solubilization. At least four subunits would be required for H⁺-translocation. Analysis of the ATPase inactivation patterns revealed target sizes of 384 kD and 495 kD for ATP hydrolysis in native and solubilized tonoplast respectively, and 430 kD for H⁺-transport. These results suggest that the minimum size for hydrolytic or transport functions is relatively constant for the ATPase.

  18. Minimum emittance of three-bend achromats

    International Nuclear Information System (INIS)

    Li Xiaoyu; Xu Gang

    2012-01-01

    The minimum emittance of three-bend achromats (TBAs) can be calculated with mathematical software while ignoring the actual magnet lattice in the matching condition of the dispersion function in phase space. The minimum scaling factors of two kinds of widely used TBA lattices are obtained. Then the relationship between the lengths and the radii of the three dipoles in the TBA is obtained, and so is the minimum scaling factor, when the TBA lattice achieves its minimum emittance. The procedure of analysis and the results can be widely used in achromat lattices, because the calculation is not restricted by the actual lattice. (authors)
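
    For context, the single-cell theoretical minimum emittance against which such TBA scaling factors are normally quoted is the standard result below (not specific to this paper); J_x is the horizontal damping partition number and θ the dipole bending angle.

    ```latex
    % Theoretical minimum emittance (TME) of a single cell.
    \varepsilon_{\mathrm{TME}} \;=\; \frac{C_q\,\gamma^{2}\,\theta^{3}}{12\sqrt{15}\,J_x},
    \qquad C_q \approx 3.84\times10^{-13}\ \mathrm{m}.
    ```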

  19. X-ray tube focal spot sizes: comprehensive studies of their measurement and effect of measured size in angiography

    International Nuclear Information System (INIS)

    Doi, K.; Loo, L.N.; Chan, H.P.

    1982-01-01

    Thirty-two focal spot sizes of four x-ray tubes were measured by the pinhole, star pattern, slit, and root-mean-square (RMS) methods under various exposure conditions. The modulation transfer functions (MTFs) and line spread functions (LSFs) were also determined. The star pattern focal spot sizes agreed with the effective sizes calculated from the frequencies at the first minimum of the MTF within 0.04 mm for large focal spots and within 0.01 mm for small focal spots. The focal spot size determined by the slit method was approximately equal to the width of the LSF at the cutoff level of 0.15 ± 0.06 of the peak value. The RMS method provided the best correlation between the measured focal spot sizes and the corresponding image distributions of blood vessels. The pinhole and slit methods tended to overestimate the focal spot size, but the star pattern method tended to underestimate it. For approximately 90% of the focal spots, the average of the star and slit (or pinhole) focal spot sizes agreed with the RMS focal spot size within ± 0.1 mm.

  20. Design and expected performance of the new SLS beam size monitor

    CERN Document Server

    Milas, N.; Saa Hernandez, A.; Schlott, V.; Streun, A.; Andersson, A.; Breunlin, J.

    2012-01-01

    The vertical emittance minimization campaign at SLS, realized in the context of the TIARA WP6, has already achieved the world's smallest vertical beam size of 3.6 μm, corresponding to a vertical emittance of 0.9 pm, in a synchrotron light source. The minimum value reached for the vertical emittance is only about five times larger than the quantum limit of 0.2 pm. However, the resolution limit of the present SLS emittance monitor has also been reached during this campaign, thus, to further continue the emittance minimization program the construction of an improved second monitor is necessary. In this paper we present the design and studies on the performance of this new monitor based on the image formation method using vertically polarized synchrotron radiation in the visible and UV spectral ranges. This new monitor includes an additional feature, providing the possibility of performing full interferometric measurement by the use of a set of vertical obstacles that can be driven on the light path. Simulations...

  1. Effects of habitat features on size-biased predation on salmon by bears.

    Science.gov (United States)

    Andersson, Luke C; Reynolds, John D

    2017-05-01

    Predators can drive trait divergence among populations of prey by imposing differential selection on prey traits. Habitat characteristics can mediate predator selectivity by providing refuge for prey. We quantified the effects of stream characteristics on biases in the sizes of spawning salmon caught by bears (Ursus arctos and U. americanus) on the central coast of British Columbia, Canada by measuring size-biased predation on spawning chum (Oncorhynchus keta) and pink (O. gorbuscha) salmon in 12 streams with varying habitat characteristics. We tested the hypotheses that bears would catch larger than average salmon (size-biased predation) and that this bias toward larger fish would be higher in streams that provide less protection to spawning salmon from predation (e.g., fewer pools, less wood, fewer undercut banks). We then tested how such size biases in turn translate into differences among populations in the sizes of the fish. Bears caught larger-than-average salmon as the spawning season progressed and, as predicted, this was most pronounced in streams with fewer refugia for the fish (i.e., wood and undercut banks). Salmon were marginally smaller in streams with more pronounced size-biased predation, but this predictor was less reliable than physical characteristics of streams, with larger fish in wider, deeper streams. These results support the hypothesis that selective forces imposed by predators can be mediated by habitat characteristics, with potential consequences for physical traits of prey.

  2. 30 CFR 57.19021 - Minimum rope strength.

    Science.gov (United States)

    2010-07-01

    ... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0. (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0. (c) Tail...
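
    The quoted formulas translate directly into code. In the sketch below, the truncated paragraph (a) is assumed to cover (winding) drum ropes, with paragraph (b) covering friction drum ropes as in the excerpt; tail ropes (c) are omitted because their rule is cut off.

    ```python
    # Direct transcription of the piecewise formulas quoted above (30 CFR 57.19021);
    # rope length L in feet, minimum value in the same force unit as the static load.
    def minimum_rope_value(static_load, rope_length_ft, drum_type="winding"):
        if drum_type == "winding":
            factor = 7.0 - 0.001 * rope_length_ft if rope_length_ft < 3000 else 4.0
        elif drum_type == "friction":
            factor = 7.0 - 0.0005 * rope_length_ft if rope_length_ft < 4000 else 5.0
        else:
            raise ValueError("drum_type must be 'winding' or 'friction'")
        return static_load * factor

    # Example: a 2,000 ft winding drum rope with a 10-ton static load
    print(minimum_rope_value(static_load=10.0, rope_length_ft=2000))  # 10 * (7.0 - 2.0) = 50 tons
    ```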

  3. 30 CFR 56.19021 - Minimum rope strength.

    Science.gov (United States)

    2010-07-01

    ... feet: Minimum Value=Static Load×(7.0-0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0-0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0 (c) Tail ropes...

  4. Minimum dimension of an ITER like Tokamak with a given Q

    Energy Technology Data Exchange (ETDEWEB)

    Johner, J

    2004-07-01

    The minimum dimension of an ITER like tokamak with a given amplification factor Q is calculated for two values of the maximum magnetic field in the superconducting toroidal field coils. For ITERH-98P(y,2) scaling of the energy confinement time, it is shown that for a sufficiently large tokamak, the maximum Q is obtained for the operating point situated both at the maximum density and at the minimum margin with respect to the H-L transition. We have shown that increasing the maximum magnetic field in the toroidal field coils from the present 11.8 T to 16 T would result in a strong reduction of the machine size but would have practically no effect on the fusion power. Values obtained for {beta}{sub N} are found to be below 2. Peak fluxes on the divertor plates, computed with an ITER like divertor and a multi-machine expression for the power radiated in the plasma mantle, are below 10 MW/m{sup 2}.

  5. Patch-based image segmentation of satellite imagery using minimum spanning tree construction

    Energy Technology Data Exchange (ETDEWEB)

    Skurikhin, Alexei N [Los Alamos National Laboratory

    2010-01-01

    We present a method for hierarchical image segmentation and feature extraction. This method builds upon the combination of the detection of image spectral discontinuities using Canny edge detection and the image Laplacian, followed by the construction of a hierarchy of segmented images of successively reduced levels of detail. These images are represented as sets of polygonized pixel patches (polygons) attributed with spectral and structural characteristics. This hierarchy forms the basis for object-oriented image analysis. To build a fine level-of-detail representation of the original image, seed partitions (polygons) are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of the detected spectral discontinuities, which form a network of constraints for the Delaunay triangulation. A polygonized image is represented as a spatial network in the form of a graph with vertices which correspond to the polygonal partitions and graph edges reflecting pairwise partition relations. Image graph partitioning is based on iterative graph contraction using Boruvka's Minimum Spanning Tree algorithm. An important characteristic of the approach is that the agglomeration of partitions is constrained by the detected spectral discontinuities; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects.
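
    For reference, a generic Borůvka minimum-spanning-tree routine over a weighted edge list is sketched below; the paper's pipeline additionally constrains each contraction by the detected spectral discontinuities and operates on polygon adjacency graphs, which this illustration omits.

    ```python
    # Generic Boruvka minimum-spanning-tree construction (illustrative only).
    def boruvka_mst(n_vertices, edges):
        """edges: list of (weight, u, v) tuples. Returns the list of MST edges."""
        parent = list(range(n_vertices))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]        # path halving
                x = parent[x]
            return x

        mst, n_components = [], n_vertices
        while n_components > 1:
            cheapest = {}                            # component root -> best outgoing edge
            for w, u, v in edges:
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue
                for r in (ru, rv):
                    if r not in cheapest or w < cheapest[r][0]:
                        cheapest[r] = (w, u, v)
            if not cheapest:                         # graph is disconnected
                break
            for w, u, v in cheapest.values():
                ru, rv = find(u), find(v)
                if ru != rv:                         # contract the two components
                    parent[ru] = rv
                    mst.append((w, u, v))
                    n_components -= 1
        return mst
    ```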

  6. RR Tel: Determination of Dust Properties During Minimum Obscuration

    Directory of Open Access Journals (Sweden)

    Jurkić T.

    2012-06-01

    Full Text Available The ISO infrared spectra and the SAAO long-term JHKL photometry of RR Tel in epochs during minimum obscuration are studied in order to construct a circumstellar dust model. The spectral energy distribution in the near- and mid-IR spectral range (1–15 μm) was obtained for an epoch without pronounced dust obscuration. The DUSTY code was used to solve the radiative transfer through the dust and to determine the circumstellar dust properties of the inner dust regions around the Mira component. Dust temperature, maximum grain size, dust density distribution, mass-loss rate, terminal wind velocity and optical depth are determined. The spectral energy distribution and the long-term JHKL photometry during an epoch of minimum obscuration show an almost unattenuated stellar source and strong dust emission which cannot be explained by a single dust shell model. We propose a two-component model consisting of an optically thin circumstellar dust shell and optically thick dust outside the line of sight in some kind of flattened geometry, which is responsible for most of the observed dust thermal emission.

  7. Statistical-techniques-based computer-aided diagnosis (CAD) using texture feature analysis: application in computed tomography (CT) imaging to fatty liver disease

    Science.gov (United States)

    Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae

    2012-09-01

    This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 × 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver, and the recognition rate for a fatty liver had the same scale as that for the F-value, showing 100% (average gray level) at the maximum and 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for automatic detection and computer-aided diagnosis (CAD) using the texture feature value. Nevertheless, further study on various diseases and singular diseases will be needed in the future.
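
    The six parameters named above correspond to standard first-order (histogram-based) texture descriptors; the sketch below uses the common textbook definitions, which may differ in detail from the paper's implementation within the 50 × 50 pixel region of analysis.

    ```python
    # First-order (histogram-based) texture descriptors matching the six parameter
    # names in the abstract; textbook definitions, not the authors' exact code.
    import numpy as np

    def texture_features(roa, levels=256):
        """roa: 2-D array of integer gray levels for the region of analysis."""
        hist, _ = np.histogram(roa, bins=levels, range=(0, levels))
        p = hist / hist.sum()
        g = np.arange(levels)
        mean = np.sum(g * p)                                        # average gray level
        var = np.sum((g - mean) ** 2 * p)
        contrast = np.sqrt(var)                                     # average contrast (std dev)
        smoothness = 1.0 - 1.0 / (1.0 + var / (levels - 1) ** 2)    # relative smoothness
        skewness = np.sum((g - mean) ** 3 * p) / (contrast ** 3 + 1e-12)
        uniformity = np.sum(p ** 2)
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return dict(mean=mean, contrast=contrast, smoothness=smoothness,
                    skewness=skewness, uniformity=uniformity, entropy=entropy)
    ```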

  8. 30 CFR 77.1431 - Minimum rope strength.

    Science.gov (United States)

    2010-07-01

    ... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0 (c) Tail ropes...

  9. Feature singletons attract spatial attention independently of feature priming.

    Science.gov (United States)

    Yashar, Amit; White, Alex L; Fang, Wanghaoming; Carrasco, Marisa

    2017-08-01

    People perform better in visual search when the target feature repeats across trials (intertrial feature priming [IFP]). Here, we investigated whether repetition of a feature singleton's color modulates stimulus-driven shifts of spatial attention by presenting a probe stimulus immediately after each singleton display. The task alternated every two trials between a probe discrimination task and a singleton search task. We measured both stimulus-driven spatial attention (via the distance between the probe and singleton) and IFP (via repetition of the singleton's color). Color repetition facilitated search performance (IFP effect) when the set size was small. When the probe appeared at the singleton's location, performance was better than at the opposite location (stimulus-driven attention effect). The magnitude of this attention effect increased with the singleton's set size (which increases its saliency) but did not depend on whether the singleton's color repeated across trials, even when the previous singleton had been attended as a search target. Thus, our findings show that repetition of a salient singleton's color affects performance when the singleton is task relevant and voluntarily attended (as in search trials). However, color repetition does not affect performance when the singleton becomes irrelevant to the current task, even though the singleton does capture attention (as in probe trials). Therefore, color repetition per se does not make a singleton more salient for stimulus-driven attention. Rather, we suggest that IFP requires voluntary selection of color singletons in each consecutive trial.

  10. A Phosphate Minimum in the Oxygen Minimum Zone (OMZ) off Peru

    Science.gov (United States)

    Paulmier, A.; Giraud, M.; Sudre, J.; Jonca, J.; Leon, V.; Moron, O.; Dewitte, B.; Lavik, G.; Grasse, P.; Frank, M.; Stramma, L.; Garcon, V.

    2016-02-01

    The Oxygen Minimum Zone (OMZ) off Peru is known to be associated with the advection of Equatorial SubSurface Waters (ESSW), rich in nutrients and poor in oxygen, through the Peru-Chile UnderCurrent (PCUC), but this circulation remains to be refined within the OMZ. During the Pelágico cruise in November-December 2010, measurements of phosphate revealed the presence of a phosphate minimum (Pmin) at various hydrographic stations, which could not be explained so far and could be associated with a specific water mass. This Pmin, localized in a relatively constant layer within the oxygen minimum, corresponds to a mean vertical phosphate decrease of 0.6 µM, although this decrease is highly variable, between 0.1 and 2.2 µM. On average, these Pmin are associated with a predominant mixing of SubTropical Under- and Surface Waters (STUW and STSW: ~20 and ~40%, respectively) within ESSW (~25%), complemented evenly by overlying (ESW, TSW: ~8%) and underlying waters (AAIW, SPDW: ~7%). The hypotheses and mechanisms leading to the Pmin formation in the OMZ are further explored and discussed, considering the regional physical contribution associated with various circulation pathways ventilating the OMZ and the local biogeochemical contribution, including potential diazotrophic activity.

  11. Photovoltaic system sizing and performance by the comparison of demand and expected radiations

    Energy Technology Data Exchange (ETDEWEB)

    Lasnier, France; Sivoththaman, S [Asian Inst. of Tech., Bangkok (TH). Div. of Energy Technology

    1990-01-01

    Two models have been developed and proposed for the calculation of required radiation and for the prediction of available radiation (with a certain probability). These can be used for the sizing and performance prediction of stand-alone photovoltaic (PV) systems in a specified region. The first model computes the minimum daily average radiation required for the system to survive without failure, given the load and the number of consecutive days-of-run. The component ratings of the system, PV panel size and battery size, are observed to have a great influence on the required radiation. The second model calculates the probable minimum radiation in the future, given the number of consecutive run-days and the percentage probability with which the values are to be minimized. The five-year radiation data for Bangkok (1983-1987) were statistically processed for use as input data to the model. The output of the two models, when superimposed on each other, gives a clear idea about the system performance and about the optimum sizing. (author).
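
    As a rough illustration of the first model's idea (the minimum daily radiation needed for the system to ride through a run of low-sun days), the sketch below performs a simple energy balance. The functional form, parameter names and loss factor are assumptions for illustration, not the authors' equations.

    ```python
    # Simplified energy-balance sketch (assumed form, not the paper's model): the
    # minimum daily average irradiation that keeps the battery from emptying over
    # a run of consecutive low-sun days.
    def required_daily_radiation(load_wh_per_day, panel_area_m2, panel_efficiency,
                                 battery_capacity_wh, run_days, system_losses=0.8):
        """Return minimum daily irradiation (kWh/m2/day) to survive `run_days`."""
        # Energy the array must supply over the run, after drawing the battery down once.
        deficit_wh = max(load_wh_per_day * run_days - battery_capacity_wh, 0.0)
        wh_per_unit_irradiation = panel_area_m2 * panel_efficiency * system_losses * 1000.0
        return deficit_wh / (run_days * wh_per_unit_irradiation)

    # Hypothetical system: 1 kWh/day load, 2 m2 of 15%-efficient panels, 3 kWh battery.
    print(required_daily_radiation(1000.0, 2.0, 0.15, 3000.0, run_days=5))
    ```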

  12. Minimum area requirements for an at-risk butterfly based on movement and demography.

    Science.gov (United States)

    Brown, Leone M; Crone, Elizabeth E

    2016-02-01

    Determining the minimum area required to sustain populations has a long history in theoretical and conservation biology. Correlative approaches are often used to estimate minimum area requirements (MARs) based on relationships between area and the population size required for persistence or between species' traits and distribution patterns across landscapes. Mechanistic approaches to estimating MAR facilitate prediction across space and time but are few. We used a mechanistic MAR model to determine the critical minimum patch size (CMP) for the Baltimore checkerspot butterfly (Euphydryas phaeton), a locally abundant species in decline along its southern range, and sister to several federally listed species. Our CMP is based on principles of diffusion, where individuals in smaller patches encounter edges and leave with higher probability than those in larger patches, potentially before reproducing. We estimated a CMP for the Baltimore checkerspot of 0.7-1.5 ha, in accordance with trait-based MAR estimates. The diffusion rate on which we based this CMP was broadly similar when estimated at the landscape scale (comparing flight path vs. capture-mark-recapture data), and the estimated population growth rate was consistent with observed site trends. Our mechanistic approach to estimating MAR is appropriate for species whose movement follows a correlated random walk and may be useful where landscape-scale distributions are difficult to assess, but demographic and movement data are obtainable from a single site or the literature. Just as simple estimates of lambda are often used to assess population viability, the principles of diffusion and CMP could provide a starting place for estimating MAR for conservation. © 2015 Society for Conservation Biology.
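
    The diffusion argument in the abstract is closely related to the classic critical-patch-size ("KISS") result, in which a one-dimensional patch shorter than π√(D/r) loses individuals across its edges faster than they reproduce. The sketch below evaluates that textbook expression with hypothetical values; it is background only, not the authors' checkerspot model.

    ```python
    # Textbook critical patch size for diffusion with exponential growth (the classic
    # 1-D "KISS" result); background only, not the authors' model or parameter values.
    import math

    def critical_patch_length(diffusion_coefficient, growth_rate):
        """L* = pi * sqrt(D / r): below this length, edge losses outpace reproduction."""
        return math.pi * math.sqrt(diffusion_coefficient / growth_rate)

    # Hypothetical values: D in km^2 per generation, r per generation.
    print(critical_patch_length(diffusion_coefficient=0.02, growth_rate=1.1))  # ~0.42 km
    ```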

  13. Microbial eukaryote diversity in the marine oxygen minimum zone off northern Chile

    OpenAIRE

    Parris, Darren J.; Ganesh, Sangita; Edgcomb, Virginia P.; Stewart, Frank J.; DeLong, Edward

    2014-01-01

    Molecular surveys are revealing diverse eukaryotic assemblages in oxygen-limited ocean waters. These communities may play pivotal ecological roles through autotrophy, feeding, and a wide range of symbiotic associations with prokaryotes. We used 18S rRNA gene sequencing to provide the first snapshot of pelagic microeukaryotic community structure in two cellular size fractions (0.2-1.6 µm, >1.6 µm) from seven depths through the anoxic oxygen minimum zone (OMZ) off northern Chile. Sequencing ...

  14. UNTANGLING THE NEAR-IR SPECTRAL FEATURES IN THE PROTOPLANETARY ENVIRONMENT OF KH 15D

    Energy Technology Data Exchange (ETDEWEB)

    Arulanantham, Nicole A.; Herbst, William; Gilmore, Martha S.; Cauley, P. Wilson [Astronomy Department, Wesleyan University, Middletown, CT 06459 (United States); Leggett, S. K., E-mail: nicole.arulanantham@colorado.edu [Gemini Observatory (North), Hilo, HI 96720 (United States)

    2017-01-10

    We report on Gemini/GNIRS observations of the binary T Tauri system V582 Mon (KH 15D) at three orbital phases. These spectra allow us to untangle five components of the system: the photosphere and magnetosphere of star B, the jet, scattering properties of the ring material, and excess near-infrared (near-IR) radiation previously attributed to a possible self-luminous planet. We confirm an early-K subgiant classification for star B and show that the magnetospheric He i emission line is variable, possibly indicating increased mass accretion at certain times. As expected, the H{sub 2} emission features associated with the inner part of the jet show no variation with orbital phase. We show that the reflectance spectrum for the scattered light has a distinctive blue slope and spectral features consistent with scattering and absorption by a mixture of water and methane ice grains in the 1–50 μm size range. This suggests that the methane frost line is closer than ∼5 au in this system, requiring that the grains be shielded from direct radiation. After correcting for features from the scattered light, jet, magnetosphere, and photosphere, we confirm the presence of leftover near-IR light from an additional source, detectable near minimum brightness. A spectral emission feature matching the model spectrum of a 10 M{sub J}, 1 Myr old planet is found in the excess flux, but other expected features from this model are not seen. Our observations, therefore, tentatively support the picture that a luminous planet is present within the system, although they cannot yet be considered definitive.

  15. Minimum BER Receiver Filters with Block Memory for Uplink DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Debbah Mérouane

    2008-01-01

    Full Text Available The problem of synchronous multiuser receiver design in the case of direct-sequence single-antenna code division multiple access (DS-CDMA) uplink networks is studied over frequency selective fading channels. An exact expression for the bit error rate (BER) is derived in the case of BPSK signaling. Moreover, an algorithm is proposed for finding the finite impulse response (FIR) receiver filters with block memory such that the exact BER of the active users is minimized. Several properties of the minimum BER FIR filters with block memory are identified. The algorithm performance is found for scenarios with different channel qualities, spreading code lengths, receiver block memory size, near-far effects, and channel mismatch. For the BPSK constellation, the proposed FIR receiver structure with block memory has significantly better BER with respect to Eb/N0 and near-far resistance than the corresponding minimum mean square error (MMSE) filters with block memory.

  16. Binary cluster collision dynamics and minimum energy conformations

    Energy Technology Data Exchange (ETDEWEB)

    Muñoz, Francisco [Max Planck Institute of Microstructure Physics, Weinberg 2, 06120 Halle (Germany); Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago (Chile); Centro para el Desarrollo de la Nanociencia y Nanotecnología, CEDENNA, Avenida Ecuador 3493, Santiago (Chile); Rogan, José; Valdivia, J.A. [Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago (Chile); Centro para el Desarrollo de la Nanociencia y Nanotecnología, CEDENNA, Avenida Ecuador 3493, Santiago (Chile); Varas, A. [Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago (Chile); Nano-Bio Spectroscopy Group, ETSF Scientific Development Centre, Departamento de Física de Materiales, Universidad del País Vasco UPV/EHU, Av. Tolosa 72, E-20018 San Sebastián (Spain); Kiwi, Miguel, E-mail: m.kiwi.t@gmail.com [Departamento de Física, Facultad de Ciencias, Universidad de Chile, Santiago (Chile); Centro para el Desarrollo de la Nanociencia y Nanotecnología, CEDENNA, Avenida Ecuador 3493, Santiago (Chile)

    2013-10-15

    The collision dynamics of one Ag or Cu atom impinging on a Au{sub 12} cluster is investigated by means of DFT molecular dynamics. Our results show that the experimentally confirmed 2D to 3D transition of Au{sub 12}→Au{sub 13} is mostly preserved by the resulting planar Au{sub 12}Ag and Au{sub 12}Cu minimum energy clusters, which is quite remarkable in view of the excess energy, well above the 2D–3D potential barrier height. The process is accompanied by a large s−d hybridization and charge transfer from Au to Ag or Cu. The dynamics of the collision process mainly yields fusion of projectile and target; however, scattering and cluster fragmentation also occur for large energies and large impact parameters. While Ag projectiles favor fragmentation, Cu favors scattering due to its smaller mass. The projectile size does not play a major role in favoring the fragmentation or scattering channels. By comparing our collision results with those obtained by an unbiased minimum energy search of 4483 Au{sub 12}Ag and 4483 Au{sub 12}Cu configurations obtained phenomenologically, we find that there is an extra bonus: without an increase in computer time, collisions yield the planar lower-energy structures that are not feasible to obtain using semi-classical potentials. In fact, we conclude that phenomenological potentials do not even provide adequate seeds for the search of global energy minima for planar structures. Since the fabrication of nanoclusters is mainly achieved by synthesis or laser ablation, the set of local minima configurations we provide here, and their distribution as a function of energy, are more relevant than the global minimum for analyzing experimental results obtained at finite temperatures, and is consistent with the dynamical coexistence of 2D and 3D liquid Au cluster conformations obtained previously.

  17. Minimum deterrence and regional security. Section 1. Europe

    International Nuclear Information System (INIS)

    Gnesotto, N.

    1993-01-01

    The impact of regional security in Europe on minimum nuclear deterrence is analyzed. There are four factors that define the specific features of European security. Europe is the only theatre in which four of the five nuclear Powers coexist, and the one where three states, Ukraine, Belarus and Kazakhstan, represent a new type of proliferation. It is therefore the strategic region with the heaviest concentration of nuclear weapons in the world. Finally, it is a theatre in which regional wars are again a possibility. In other words, the end of the cold war meant, on the one hand, the return of real wars in Europe and, on the other, a combination of an absolutely massive and essential nuclear capability with ever-increasing economic, political and diplomatic instability. In spite of these circumstances, nuclear deterrence in Europe is inevitable and desirable.

  18. 75 FR 77561 - Regulations Issued Under the Export Grape and Plum Act; Revision to the Minimum Requirements

    Science.gov (United States)

    2010-12-13

    ... amount of product sold and an increase in returns to producers, shippers, exporters, and carriers... additional 2 percent tolerance for sealed berry cracks on the Exotic grape variety. This action was... minimum size and quality requirements for export shipments of any variety of vinifera species table grapes...

  19. Application of the minimum fuel neural network to music signals

    DEFF Research Database (Denmark)

    Harbo, Anders La-Cour

    2004-01-01

    Finding an optimal representation of a signal in an over-complete dictionary is often quite difficult. Since general results in this field are not very application friendly it truly helps to specify the framework as much as possible. We investigate the method Minimum Fuel Neural Network (MFNN) for finding sparse representations of music signals. This method is a set of two ordinary differential equations. We argue that the most important parameter for optimal use of this method is the discretization step size, and we demonstrate that this can be a priori determined. This significantly speeds up...

  20. 19 mm sized bileaflet valve prostheses' flow field investigated by bidimensional laser Doppler anemometry (part I: velocity profiles).

    Science.gov (United States)

    Barbaro, V; Grigioni, M; Daniele, C; D'Avenio, G; Boccanera, G

    1997-11-01

    The investigation of the flow field downstream of a cardiac valve prosthesis is a well-established task. In particular, turbulence generation is of interest if damage to blood constituents is to be assessed. Several prosthetic valve flow studies are available in the literature, but they generally concern large-sized prostheses. The FDA draft guidance requires the study of maximum Reynolds number conditions for a cardiac valve model to assess the worst case in turbulence, by choosing both the minimum valve diameter and a high cardiac output value as the protocol set-up. Within the framework of a national research project regarding the characterization of cardiovascular endoprostheses, the Laboratory of Biomedical Engineering is currently conducting an in-depth study of turbulence generated downstream of bileaflet cardiac valves. Four models of 19 mm sized bileaflet valve prostheses, namely St Jude Medical HP, Edwards Tekna, Sorin Bicarbon, and CarboMedics, were studied in the aortic position. The prostheses were selected for the nominal annulus diameter reported by the manufacturers, without any assessment of the valve sizing method. The hemodynamic function was investigated using a bidimensional LDA system. Results concern velocity profiles during the peak flow systolic phase, at a high cardiac output regime, highlighting the different flow field features downstream of the four small-sized cardiac valves.

  1. Automated identification and tracking of polar-cap plasma patches at solar minimum

    Directory of Open Access Journals (Sweden)

    R. Burston

    2014-03-01

    Full Text Available A method of automatically identifying and tracking polar-cap plasma patches, utilising data inversion and feature-tracking methods, is presented. A well-established and widely used 4-D ionospheric imaging algorithm, the Multi-Instrument Data Assimilation System (MIDAS), inverts slant total electron content (TEC) data from ground-based Global Navigation Satellite System (GNSS) receivers to produce images of the free electron distribution in the polar-cap ionosphere. These are integrated to form vertical TEC maps. A flexible feature-tracking algorithm, TRACK, previously used extensively in meteorological storm-tracking studies, is used to identify and track maxima in the resulting 2-D data fields. Various criteria are used to discriminate between genuine patches and "false-positive" maxima such as the continuously moving day-side maximum, which results from the Earth's rotation rather than plasma motion. Results for a 12-month period at solar minimum, when extensive validation data are available, are presented. The method identifies 71 separate structures consistent with patch motion during this time. The limitations of solar minimum and the consequent small number of patches make climatological inferences difficult, but the feasibility of the method for patches larger than approximately 500 km in scale is demonstrated, and a larger study incorporating other parts of the solar cycle is warranted. Possible further optimisation of discrimination criteria, particularly regarding the definition of a patch in terms of its plasma concentration enhancement over the surrounding background, may improve results.
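
    The identification step (finding TEC maxima before discriminating genuine patches) can be illustrated with a much-simplified local-maximum detector. The real study uses MIDAS imaging plus the TRACK tracker and additional discrimination criteria, so the function below, its threshold and its neighborhood size are illustrative assumptions only.

    ```python
    # Much-simplified stand-in for the maxima-identification step (not MIDAS/TRACK).
    import numpy as np
    from scipy.ndimage import maximum_filter

    def find_tec_maxima(tec_map, neighborhood=5, min_enhancement=2.0):
        """Return (row, col) indices of local maxima exceeding the background by a factor."""
        local_max = maximum_filter(tec_map, size=neighborhood) == tec_map
        background = np.median(tec_map)
        candidates = local_max & (tec_map > min_enhancement * background)
        return np.argwhere(candidates)
    ```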

  2. New aids for the non-invasive prenatal diagnosis of achondroplasia: dysmorphic features, charts of fetal size and molecular confirmation using cell-free fetal DNA in maternal plasma

    NARCIS (Netherlands)

    Chitty, L. S.; Griffin, D. R.; Meaney, C.; Barrett, A.; Khalil, A.; Pajkrt, E.; Cole, T. J.

    2011-01-01

    To improve the prenatal diagnosis of achondroplasia by constructing charts of fetal size, defining frequency of sonographic features and exploring the role of non-invasive molecular diagnosis based on cell-free fetal deoxyribonucleic acid (DNA) in maternal plasma. Data on fetuses with a confirmed

  3. Refinement of the deletion in 8q22.2-q22.3: the minimum deletion size at 8q22.3 related to intellectual disability and epilepsy.

    Science.gov (United States)

    Kuroda, Yukiko; Ohashi, Ikuko; Saito, Toshiyuki; Nagai, Jun-ichi; Ida, Kazumi; Naruto, Takuya; Iai, Mizue; Kurosawa, Kenji

    2014-08-01

    Kuechler et al. [2011] reported five patients with interstitial deletions in 8q22.2-q22.3 who had intellectual disability, epilepsy, and dysmorphic features. We report on a new patient with the smallest overlapping de novo deletion in 8q22.3 and refined the phenotype. The proposita was an 8-year-old girl, who developed seizures at 10 months, and her epileptic seizure became severe and difficult to control with antiepileptic drugs. She also exhibited developmental delay and walked alone at 24 months. She was referred to us for evaluation for developmental delay and epilepsy at the age of 8 years. She had intellectual disability (IQ 37 at 7 years) and autistic behavior, and spoke two word sentences at 8 years. She had mild dysmorphic features, including telecanthus and thick vermilion of the lips. Array comparative genomic hybridization detected a 1.36 Mb deletion in 8q22.3 that encompassed RRM2B and NCALD, which encode the small subunit of p53-inducible ribonucleotide reductase and neurocalcin delta in the neuronal calcium sensor family of calcium-binding proteins, respectively. The minimum overlapping region between the present and previously reported patients is considered to be a critical region for the phenotype of the deletion in 8q22.3. We suggest that the deletion in 8q22.3 may represent a clinically recognizable condition, which is characterized by intellectual disability and epilepsy. © 2014 Wiley Periodicals, Inc.

  4. 12 CFR 564.4 - Minimum appraisal standards.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Minimum appraisal standards. 564.4 Section 564.4 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY APPRAISALS § 564.4 Minimum appraisal standards. For federally related transactions, all appraisals shall, at a minimum: (a...

  5. The minimum wage in the Czech enterprises

    OpenAIRE

    Eva Lajtkepová

    2010-01-01

    Although the statutory minimum wage is not a new category, in the Czech Republic we encounter the definition and regulation of a minimum wage for the first time in the 1990 amendment to Act No. 65/1965 Coll., the Labour Code. The specific amount of the minimum wage and the conditions of its operation were then subsequently determined by government regulation in February 1991. Since that time, the value of the minimum wage has been adjusted fifteen times (the last increase was in January 2007). ...

  6. Minimum Wages and Regional Disparity: An analysis on the evolution of price-adjusted minimum wages and their effects on firm profitability (Japanese)

    OpenAIRE

    MORIKAWA Masayuki

    2013-01-01

    This paper, using prefecture level panel data, empirically analyzes 1) the recent evolution of price-adjusted regional minimum wages and 2) the effects of minimum wages on firm profitability. As a result of rapid increases in minimum wages in the metropolitan areas since 2007, the regional disparity of nominal minimum wages has been widening. However, the disparity of price-adjusted minimum wages has been shrinking. According to the analysis of the effects of minimum wages on profitability us...

  7. Improved superficial brain hemorrhage visualization in susceptibility weighted images by constrained minimum intensity projection

    Science.gov (United States)

    Castro, Marcelo A.; Pham, Dzung L.; Butman, John

    2016-03-01

    Minimum intensity projection is a technique commonly used to display magnetic resonance susceptibility weighted images, allowing the observer to better visualize hemorrhages and vasculature. The technique displays the minimum intensity in a given projection within a thick slab, allowing different connectivity patterns to be easily revealed. Unfortunately, the low signal intensity of the skull within the thick slab can mask superficial tissues near the skull base and other regions. Because superficial microhemorrhages are a common feature of traumatic brain injury, this effect limits the ability to properly diagnose and follow up patients. In order to overcome this limitation, we developed a method to allow minimum intensity projection to properly display superficial tissues adjacent to the skull. Our approach is based on two brain masks, the largest of which includes extracerebral voxels. The analysis of the rind within both masks containing the actual brain boundary allows reclassification of those voxels initially missed in the smaller mask. Morphological operations are applied to guarantee accuracy and topological correctness, and the mean intensity within the mask is assigned to all outer voxels. This prevents bone from dominating superficial regions in the projection, enabling superior visualization of cortical hemorrhages and vessels.
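
    A minimal sketch of the constrained-projection idea, assuming NumPy arrays for the slab and a refined brain mask: out-of-mask voxels are replaced by the mean in-mask intensity so that low-signal bone cannot dominate the minimum projection. The interface is hypothetical, not the authors' implementation.

    ```python
    # Sketch of a constrained minimum-intensity projection (assumed interface).
    import numpy as np

    def constrained_min_ip(slab, brain_mask):
        """slab, brain_mask: 3-D arrays of identical shape; project along the last axis."""
        filled = slab.astype(float)
        mean_in_mask = slab[brain_mask > 0].mean()
        filled[brain_mask == 0] = mean_in_mask   # suppress low-intensity bone/air outside the brain
        return filled.min(axis=-1)               # 2-D minimum-intensity projection
    ```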

  8. SU-F-T-574: MLC Based SRS Beam Commissioning - Minimum Target Size Investigation

    Energy Technology Data Exchange (ETDEWEB)

    Zakikhani, R [Florida Cancer Specialists - Largo, Largo, FL (United States); Able, C [Florida Cancer Specialists - New Port Richey, New Port Richey, FL (United States)

    2016-06-15

    Purpose: To implement a MLC accelerator based SRS program using small fields down to 1 cm × 1 cm and to determine the smallest target size safe for clinical treatment. Methods: Computerized beam scanning was performed in water using a diode detector and a linac-head attached transmission ion chamber to characterize the small field dosimetric aspects of a 6 MV photon beam (Trilogy-Varian Medical Systems, Inc.). The output factors, PDD and profiles of field sizes 1, 2, 3, 4, and 10 cm{sup 2} were measured and utilized to create a new treatment planning system (TPS) model (AAA ver 11021). Static MLC SRS treatment plans were created and delivered to a homogeneous phantom (Cube 20, CIRS, Inc.) for a 1.0 cm and 1.5 cm “PTV” target. A 12 field DMLC plan was created for a 2.1 cm target. Radiochromic film (EBT3, Ashland Inc.) was used to measure the planar dose in the axial, coronal and sagittal planes. A micro ion chamber (0.007 cc) was used to measure the dose at isocenter for each treatment delivery. Results: The new TPS model was validated by using a tolerance criteria of 2% dose and 2 mm distance to agreement. For fields ≤ 3 cm{sup 2}, the max PDD, Profile and OF difference was 0.9%, 2%/2mm and 1.4% respectively. The measured radiochromic film planar dose distributions had gamma scores of 95.3% or higher using a 3%/2mm criteria. Ion chamber measurements for all 3 test plans effectively met our goal of delivering the dose accurately to within 5% when compared to the expected dose reported by the TPS (1 cm plan Δ= −5.2%, 1.5 cm plan Δ= −2.0%, 2 cm plan Δ= 1.5%). Conclusion: End to end testing confirmed that MLC defined SRS for target sizes ≥ 1.0 cm can be safely planned and delivered.

  9. Asteroid size distributions for the main belt and for asteroid families

    Science.gov (United States)

    Kazantzev, A.; Kazantzeva, L.

    2017-12-01

    The asteroid-size distribution for the Eos family was constructed. The WISE database containing the albedo p and the size D of over 80,000 asteroids was used. The b parameter of the power-law dependence has a minimum at some average value of the asteroid size within the family. A similar dependence b(D) exists for the whole asteroid belt. An assumption on the possible similarity of the formation mechanisms of the asteroid belt as a whole and of separate families is made.
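
    One plausible way to estimate the b parameter of a cumulative power-law size distribution N(>D) ∝ D^(−b) is a log-log least-squares fit, as sketched below; the exact binning and fitting procedure used by the authors may differ.

    ```python
    # Illustrative estimate of the power-law slope b of a cumulative size distribution
    # N(>D) ~ D^(-b) via a log-log least-squares fit (assumed procedure).
    import numpy as np

    def power_law_slope(diameters_km):
        d = np.sort(np.asarray(diameters_km, dtype=float))
        n_cumulative = np.arange(len(d), 0, -1)            # N(>D) for each sorted D
        slope, _ = np.polyfit(np.log10(d), np.log10(n_cumulative), 1)
        return -slope                                       # the b parameter
    ```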

  10. 41 CFR 50-201.1101 - Minimum wages.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Minimum wages. 50-201... Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 201-GENERAL REGULATIONS § 50-201.1101 Minimum wages. Determinations of prevailing minimum wages or changes therein will be published in the Federal Register by the...

  11. Minimum Wage Laws and the Distribution of Employment.

    Science.gov (United States)

    Lang, Kevin

    The desirability of raising the minimum wage long revolved around just one question: the effect of higher minimum wages on the overall level of employment. An even more critical effect of the minimum wage rests on the composition of employment--who gets the minimum wage job. An examination of employment in eating and drinking establishments…

  12. 29 CFR 505.3 - Prevailing minimum compensation.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Prevailing minimum compensation. 505.3 Section 505.3 Labor... HUMANITIES § 505.3 Prevailing minimum compensation. (a)(1) In the absence of an alternative determination...)(2) of this section, the prevailing minimum compensation required to be paid under the Act to the...

  13. The Consequences of Indexing the Minimum Wage to Average Wages in the U.S. Economy.

    Science.gov (United States)

    Macpherson, David A.; Even, William E.

    The consequences of indexing the minimum wage to average wages in the U.S. economy were analyzed. The study data were drawn from the 1974-1978 May Current Population Survey (CPS) and the 180 monthly CPS Outgoing Rotation Group files for 1979-1993 (approximate annual sample sizes of 40,000 and 180,000, respectively). The effects of indexing on the…

  14. The minimum area requirements (MAR) for giant panda: an empirical study.

    Science.gov (United States)

    Qing, Jing; Yang, Zhisong; He, Ke; Zhang, Zejun; Gu, Xiaodong; Yang, Xuyu; Zhang, Wen; Yang, Biao; Qi, Dunwu; Dai, Qiang

    2016-12-08

    Habitat fragmentation can reduce population viability, especially for area-sensitive species. The Minimum Area Requirements (MAR) of a population is the area required for the population's long-term persistence. In this study, the response of occupancy probability of giant pandas against habitat patch size was studied in five of the six mountain ranges inhabited by giant panda, which cover over 78% of the global distribution of giant panda habitat. The probability of giant panda occurrence was positively associated with habitat patch area, and the observed increase in occupancy probability with patch size was higher than that due to passive sampling alone. These results suggest that the giant panda is an area-sensitive species. The MAR for giant panda was estimated to be 114.7 km² based on analysis of its occupancy probability. Giant panda habitats appear more fragmented in the three southern mountain ranges, while they are large and more continuous in the other two. Establishing corridors among habitat patches can mitigate habitat fragmentation, but expanding habitat patch sizes is necessary in mountain ranges where fragmentation is most intensive.
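
    A minimal sketch of how an occupancy-area relationship can be inverted into a minimum-area estimate, assuming scikit-learn: fit occupancy against log patch area and solve for the area at a chosen occupancy threshold. The model form and the 0.5 threshold are illustrative assumptions, not the authors' analysis.

    ```python
    # Sketch of an occupancy-area based minimum-area estimate (assumed model form).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def minimum_area(areas_km2, occupied, threshold=0.5):
        """areas_km2: patch areas; occupied: 0/1 occupancy; returns area at the threshold."""
        X = np.log(np.asarray(areas_km2, dtype=float)).reshape(-1, 1)
        model = LogisticRegression().fit(X, occupied)
        b0, b1 = model.intercept_[0], model.coef_[0][0]
        logit = np.log(threshold / (1.0 - threshold))
        return float(np.exp((logit - b0) / b1))            # area where P(occupancy) = threshold
    ```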

  15. Attention has memory: priming for the size of the attentional focus.

    Science.gov (United States)

    Fuggetta, Giorgio; Lanfranchi, Silvia; Campana, Gianluca

    2009-01-01

    Repeating the same target's features or spatial position, as well as repeating the same context (e.g. distractor sets) in visual search leads to a decrease of reaction times. This modulation can occur on a trial by trial basis (the previous trial primes the following one), but can also occur across multiple trials (i.e. performance in the current trial can benefit from features, position or context seen several trials earlier), and includes inhibition of different features, position or contexts besides facilitation of the same ones. Here we asked whether a similar implicit memory mechanism exists for the size of the attentional focus. By manipulating the size of the attentional focus with the repetition of search arrays with the same vs. different size, we found both facilitation for the same array size and inhibition for a different array size, as well as a progressive improvement in performance with increasing the number of repetition of search arrays with the same size. These results show that implicit memory for the size of the attentional focus can guide visual search even in the absence of feature or position priming, or distractor's contextual effects.

  16. Experimental verification of nanoparticle jet minimum quantity lubrication effectiveness in grinding

    International Nuclear Information System (INIS)

    Jia, Dongzhou; Li, Changhe; Zhang, Dongkun; Zhang, Yanbin; Zhang, Xiaowei

    2014-01-01

    In our experiment, a K-P36 precision numerical control surface grinder was used for dry grinding, minimum quantity lubrication (MQL) grinding, nanoparticle jet MQL grinding, and traditional flood grinding of hardened 45 steel. A three-dimensional dynamometer was used to measure the grinding force in the experiment. In this research, experiments were conducted to measure and calculate the specific tangential grinding force, frictional coefficient, and specific grinding energy, thus verifying the lubrication performance of nanoparticles in surface grinding. The findings show that, compared with dry grinding, the specific tangential grinding force of MQL grinding, nanoparticle jet MQL grinding, and flood grinding decreased by 45.88, 62.34, and 69.33 %, respectively. Their frictional coefficient was reduced by 11.22, 29.21, and 32.18 %, and the specific grinding energy declined by 45.89, 62.34, and 69.45 %, respectively. Nanoparticle jet MQL presented ideal lubrication effectiveness, which was attributed to the friction oil film with strong antifriction and anti-wear features formed by nanoparticles on the grinding wheel/workpiece interface. Moreover, the lubricating properties of nanoparticles of the same size (50 nm) but different types were verified through experimentation. In our experiment, ZrO{sub 2} nanoparticles, polycrystal diamond (PCD) nanoparticles, and MoS{sub 2} nanoparticles were used in the comparison of nanoparticle jet MQL grinding. The experimental results show that MoS{sub 2} nanoparticles exhibited the optimal lubricating effectiveness, followed by PCD nanoparticles. Our research also integrated the properties of the different nanoparticles to analyze their lubrication mechanisms. The experiment further verified the impact of nanoparticle concentration on the effectiveness of nanoparticle jet MQL in grinding. The experimental results demonstrate that when the nanoparticle mass fraction was 6 %, the minimum specific tangential grinding force

  17. Minimum BER Receiver Filters with Block Memory for Uplink DS-CDMA Systems

    Directory of Open Access Journals (Sweden)

    Mérouane Debbah

    2008-05-01

    Full Text Available The problem of synchronous multiuser receiver design in the case of direct-sequence single-antenna code division multiple access (DS-CDMA) uplink networks is studied over frequency selective fading channels. An exact expression for the bit error rate (BER) is derived in the case of BPSK signaling. Moreover, an algorithm is proposed for finding the finite impulse response (FIR) receiver filters with block memory such that the exact BER of the active users is minimized. Several properties of the minimum BER FIR filters with block memory are identified. The algorithm performance is found for scenarios with different channel qualities, spreading code lengths, receiver block memory size, near-far effects, and channel mismatch. For the BPSK constellation, the proposed FIR receiver structure with block memory has significantly better BER with respect to Eb/N0 and near-far resistance than the corresponding minimum mean square error (MMSE) filters with block memory.

  18. High variation in manufacturer-declared serving size of packaged discretionary foods in Australia.

    Science.gov (United States)

    Haskelberg, Hila; Neal, Bruce; Dunford, Elizabeth; Flood, Victoria; Rangan, Anna; Thomas, Beth; Cleanthous, Xenia; Trevena, Helen; Zheng, Jazzmin Miaobing; Louie, Jimmy Chun Yu; Gill, Timothy; Wu, Jason H Y

    2016-05-28

    Despite the potential of declared serving size to encourage appropriate portion size consumption, most countries including Australia have not developed clear reference guidelines for serving size. The present study evaluated variability in manufacturer-declared serving size of discretionary food and beverage products in Australia, and how declared serving size compared with the 2013 Australian Dietary Guideline (ADG) standard serve (600 kJ). Serving sizes were obtained from the Nutrition Information Panel for 4466 packaged, discretionary products in 2013 at four large supermarkets in Sydney, Australia, and categorised into fifteen categories in line with the 2013 ADG. For unique products that were sold in multiple package sizes, the percentage difference between the minimum and the maximum serving size across different package sizes was calculated. A high variation in serving size was found within the majority of food and beverage categories - for example, among 347 non-alcoholic beverages (e.g. soft drinks), the median for serving size was 250 (interquartile range (IQR) 250, 355) ml (range 100-750 ml). Declared serving size for unique products that are available in multiple package sizes also showed high variation, particularly for chocolate-based confectionery, with median percentage difference between minimum and maximum serving size of 183 (IQR 150) %. Categories with a high proportion of products that exceeded the 600 kJ ADG standard serve included cakes and muffins, pastries and desserts (≥74 % for each). High variability in declared serving size may confound interpretation and understanding of consumers interested in standardising and controlling their portion selection. Future research is needed to assess if and how standardising declared serving size might affect consumer behaviour.
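
    The package-size comparison described above (the percentage difference between the smallest and largest declared serving size of a product sold in several package sizes) can be computed with a few lines of pandas; the column names and the exact definition of the percentage difference below are assumptions for illustration.

    ```python
    # Sketch of the per-product serving-size spread calculation (assumed column names).
    import pandas as pd

    def serving_size_spread(df):
        """df needs columns 'product_id' and 'serving_size'; returns the % spread per product."""
        grouped = df.groupby("product_id")["serving_size"]
        spread = (grouped.max() - grouped.min()) / grouped.min() * 100.0
        return spread.describe()                   # median spread, IQR via quartiles, etc.
    ```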

  19. Do Some Workers Have Minimum Wage Careers?

    Science.gov (United States)

    Carrington, William J.; Fallick, Bruce C.

    2001-01-01

    Most workers who begin their careers in minimum-wage jobs eventually gain more experience and move on to higher paying jobs. However, more than 8% of workers spend at least half of their first 10 working years in minimum wage jobs. Those more likely to have minimum wage careers are less educated, minorities, women with young children, and those…

  20. Does the Minimum Wage Affect Welfare Caseloads?

    Science.gov (United States)

    Page, Marianne E.; Spetz, Joanne; Millar, Jane

    2005-01-01

    Although minimum wages are advocated as a policy that will help the poor, few studies have examined their effect on poor families. This paper uses variation in minimum wages across states and over time to estimate the impact of minimum wage legislation on welfare caseloads. We find that the elasticity of the welfare caseload with respect to the…

  1. 29 CFR 4.159 - General minimum wage.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true General minimum wage. 4.159 Section 4.159 Labor Office of... General minimum wage. The Act, in section 2(b)(1), provides generally that no contractor or subcontractor... a contract less than the minimum wage specified under section 6(a)(1) of the Fair Labor Standards...

  2. New aids for the non-invasive prenatal diagnosis of achondroplasia: dysmorphic features, charts of fetal size and molecular confirmation using cell-free fetal DNA in maternal plasma.

    Science.gov (United States)

    Chitty, L S; Griffin, D R; Meaney, C; Barrett, A; Khalil, A; Pajkrt, E; Cole, T J

    2011-03-01

    To improve the prenatal diagnosis of achondroplasia by constructing charts of fetal size, defining the frequency of sonographic features and exploring the role of non-invasive molecular diagnosis based on cell-free fetal deoxyribonucleic acid (DNA) in maternal plasma. Data on fetuses with a confirmed diagnosis of achondroplasia were obtained from our databases, records reviewed, sonographic features and measurements determined and charts of fetal size constructed using the LMS (lambda-mu-sigma) method and compared with charts used in normal pregnancies. Cases referred to our regional genetics laboratory for molecular diagnosis using cell-free fetal DNA were identified and results reviewed. Twenty-six cases were scanned in our unit. Fetal size charts showed that femur length was usually on or below the 3rd centile by 25 weeks' gestation, and always below the 3rd by 30 weeks. Head circumference was above the 50th centile, increasing to above the 95th when compared with normal for the majority of fetuses. The abdominal circumference was also increased but to a lesser extent. Commonly reported sonographic features were bowing of the femora, frontal bossing, short fingers, a small chest and polyhydramnios. Analysis of cell-free fetal DNA in six pregnancies confirmed the presence of the c.1138G>A mutation in the FGFR3 gene in four cases with achondroplasia, but not in the two subsequently found to be growth restricted. These data should improve the accuracy of diagnosis of achondroplasia based on sonographic findings, and have implications for targeted molecular confirmation that can reliably and safely be carried out using cell-free fetal DNA. Copyright © 2011 ISUOG. Published by John Wiley & Sons, Ltd.

  3. Prominent feature extraction for review analysis: an empirical study

    Science.gov (United States)

    Agarwal, Basant; Mittal, Namita

    2016-05-01

    Sentiment analysis (SA) research has increased tremendously in recent times. SA aims to determine the sentiment orientation of a given text as positive or negative polarity. The motivation for SA research is the need for industry to know users' opinions about their products from online portals, blogs, discussion boards, reviews and so on. Efficient features need to be extracted for machine-learning algorithms to achieve better sentiment classification. In this paper, various features are initially extracted from the text, such as unigrams, bi-grams and dependency features. In addition, new bi-tagged features are also extracted that conform to predefined part-of-speech patterns. Furthermore, various composite features are created using these features. Information gain (IG) and minimum redundancy maximum relevancy (mRMR) feature selection methods are used to eliminate noisy and irrelevant features from the feature vector. Finally, machine-learning algorithms are used to classify the review document into the positive or negative class. The effects of different categories of features are investigated on four standard data-sets, namely, movie review and product (book, DVD and electronics) review data-sets. Experimental results show that composite features created from prominent unigram and bi-tagged features perform better than other features for sentiment classification. mRMR is a better feature selection method than IG for sentiment classification. The Boolean Multinomial Naïve Bayes algorithm performs better than the support vector machine classifier for SA in terms of accuracy and execution time.
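
    A minimal scikit-learn sketch in the spirit of the pipeline above: Boolean unigram-plus-bigram features, an information-gain-style filter, and a binarized multinomial Naïve Bayes classifier. The feature budget and the use of mutual information in place of the paper's IG/mRMR implementations are assumptions.

    ```python
    # Minimal sentiment-classification pipeline (illustrative; not the authors' exact
    # feature set, mRMR implementation, or data-sets).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def sentiment_pipeline(k_features=2000):
        return make_pipeline(
            CountVectorizer(ngram_range=(1, 2), binary=True),   # Boolean unigram+bigram features
            SelectKBest(mutual_info_classif, k=k_features),      # IG-like relevance filter
            MultinomialNB(),                                      # "Boolean" multinomial NB
        )

    # Usage: sentiment_pipeline().fit(train_texts, train_labels).predict(test_texts)
    ```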

  4. Evolution of genome size and chromosome number in the carnivorous plant genus Genlisea (Lentibulariaceae), with a new estimate of the minimum genome size in angiosperms

    Science.gov (United States)

    Fleischmann, Andreas; Michael, Todd P.; Rivadavia, Fernando; Sousa, Aretuza; Wang, Wenqin; Temsch, Eva M.; Greilhuber, Johann; Müller, Kai F.; Heubl, Günther

    2014-01-01

    Background and Aims Some species of Genlisea possess ultrasmall nuclear genomes, the smallest known among angiosperms, and some have been found to have chromosomes of diminutive size, which may explain why chromosome numbers and karyotypes are not known for the majority of species of the genus. However, other members of the genus do not possess ultrasmall genomes, nor do most taxa studied in related genera of the family or order. This study therefore examined the evolution of genome sizes and chromosome numbers in Genlisea in a phylogenetic context. The correlations of genome size with chromosome number and size, with the phylogeny of the group and with growth forms and habitats were also examined. Methods Nuclear genome sizes were measured from cultivated plant material for a comprehensive sampling of taxa, including nearly half of all species of Genlisea and representing all major lineages. Flow cytometric measurements were conducted in parallel in two laboratories in order to compare the consistency of different methods and controls. Chromosome counts were performed for the majority of taxa, comparing different staining techniques for the ultrasmall chromosomes. Key Results Genome sizes of 15 taxa of Genlisea are presented and interpreted in a phylogenetic context. A high degree of congruence was found between genome size distribution and the major phylogenetic lineages. Ultrasmall genomes were found to be restricted to particular sections of the genus. The smallest known plant genomes were not found in G. margaretae, as previously reported, but in G. tuberosa (1C ≈ 61 Mbp) and some strains of G. aurea (1C ≈ 64 Mbp). Conclusions Genlisea is an ideal candidate model organism for the understanding of genome reduction as the genus includes species with both relatively large (∼1700 Mbp) and ultrasmall (∼61 Mbp) genomes. This comparative, phylogeny-based analysis of genome sizes and karyotypes in Genlisea provides essential data for selection of suitable species for comparative

  5. Experimental investigations of the minimum ignition energy and the minimum ignition temperature of inert and combustible dust cloud mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Addai, Emmanuel Kwasi, E-mail: emmanueladdai41@yahoo.com; Gabel, Dieter; Krause, Ulrich

    2016-04-15

    Highlights: • Ignition sensitivity of a highly flammable dust decreases upon addition of inert dust. • Minimum ignition temperature of a highly flammable dust increases as the inert concentration increases. • Minimum ignition energy of a highly flammable dust increases as the inert concentration increases. • The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%. - Abstract: The risks associated with dust explosions still exist in industries that either process or handle combustible dust. This explosion risk can be prevented or mitigated by applying the principle of inherent safety (moderation), i.e. by adding an inert material to a highly combustible material in order to decrease the ignition sensitivity of the combustible dust. The present paper deals with the experimental investigation of the influence of adding an inert dust on the minimum ignition energy and the minimum ignition temperature of combustible/inert dust mixtures. The investigation was carried out with two laboratory-scale apparatuses: the Hartmann apparatus for the minimum ignition energy and the Godbert-Greenwald furnace for the minimum ignition temperature. Various amounts of three inert materials (magnesium oxide, ammonium sulphate and sand) were mixed with six combustible dusts (brown coal, lycopodium, toner, niacin, corn starch and high density polyethylene). Generally, increasing the inert material concentration increases the minimum ignition energy as well as the minimum ignition temperature, until a threshold is reached beyond which no ignition is obtained. The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%.

  6. Experimental investigations of the minimum ignition energy and the minimum ignition temperature of inert and combustible dust cloud mixtures

    International Nuclear Information System (INIS)

    Addai, Emmanuel Kwasi; Gabel, Dieter; Krause, Ulrich

    2016-01-01

    Highlights: • Ignition sensitivity of a highly flammable dust decreases upon addition of inert dust. • Minimum ignition temperature of a highly flammable dust increases as the inert concentration increases. • Minimum ignition energy of a highly flammable dust increases as the inert concentration increases. • The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%. - Abstract: The risks associated with dust explosions still exist in industries that either process or handle combustible dust. This explosion risk can be prevented or mitigated by applying the principle of inherent safety (moderation), i.e. by adding an inert material to a highly combustible material in order to decrease the ignition sensitivity of the combustible dust. The present paper deals with the experimental investigation of the influence of adding an inert dust on the minimum ignition energy and the minimum ignition temperature of combustible/inert dust mixtures. The investigation was carried out with two laboratory-scale apparatuses: the Hartmann apparatus for the minimum ignition energy and the Godbert-Greenwald furnace for the minimum ignition temperature. Various amounts of three inert materials (magnesium oxide, ammonium sulphate and sand) were mixed with six combustible dusts (brown coal, lycopodium, toner, niacin, corn starch and high density polyethylene). Generally, increasing the inert material concentration increases the minimum ignition energy as well as the minimum ignition temperature, until a threshold is reached beyond which no ignition is obtained. The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%.

  7. Selection of morphological features of pollen grains for chosen tree taxa

    Directory of Open Access Journals (Sweden)

    Agnieszka Kubik-Komar

    2018-05-01

    The basis of aerobiological studies is to monitor airborne pollen concentrations and pollen season timing. This task is performed by appropriately trained staff and is difficult and time-consuming. The goal of this research is to select morphological characteristics of grains that are the most discriminative for distinguishing between birch, hazel and alder taxa and are easy to determine automatically from microscope images. This selection is based on the split attributes of the J4.8 classification trees built for different subsets of features. By determining the discriminative features with this method, we provide specific rules for distinguishing between individual taxa while obtaining a high percentage of correct classification. The most discriminative among the 13 morphological characteristics studied are the following: number of pores, maximum axis, minimum axis, axes difference, maximum oncus width, and number of lateral pores. The tree based on this subset classifies better than the one built on the whole feature set, reaching almost 94% correct classification. Therefore, selection of attributes before tree building is recommended. The classification results for the features easiest to obtain from the image, i.e. maximum axis, minimum axis, axes difference, and number of lateral pores, are only 2.09 pp lower than those obtained for the complete set, but 3.23 pp lower than the results obtained for the selected most discriminating attributes only.
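
    A rough illustration of the attribute-selection idea (synthetic measurements, and scikit-learn's CART tree standing in for WEKA's J4.8/C4.5): build a classification tree on the morphological attributes and read off which attributes its splits actually use.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    features = ["n_pores", "max_axis", "min_axis", "axes_diff",
                "max_oncus_width", "n_lateral_pores"]
    # Made-up per-taxon mean values, for demonstration purposes only.
    means = {"birch": [3, 24, 21, 3, 6, 0],
             "hazel": [3, 27, 24, 3, 5, 1],
             "alder": [5, 26, 22, 4, 4, 4]}
    X = np.vstack([rng.normal(m, 1.0, size=(50, len(features))) for m in means.values()])
    y = np.repeat(list(means), 50)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    used = {features[i] for i in tree.tree_.feature if i >= 0}  # negative entries mark leaves
    print("split attributes:", used)
    print("training accuracy:", tree.score(X, y))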

  8. Minimum Cycle Basis and All-Pairs Min Cut of a Planar Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2009-01-01

    equivalent to the minimum cycle basis problem for planar graphs. We also obtain O(n3/2 log n) time and O(n3/2) space algorithms for finding, respectively, the weight vector and a Gomory-Hu tree of G. The previous best time and space bound for these two problems was quadratic. From our Gomory-Hu tree...... show that this is optimal if an explicit represen- tation of the basis is required. We then present an O(n3/2 log n) time and O(n3/2) space algorithm that computes a minimum cycle basis implicitly. From this result, we obtain an output-sensitive algorithm that explicitly computes a minimum cycle basis...... in O(n3/2 log n + C) time and O(n3/2 + C) space, where C is the total size (number of edges and vertices) of the cycles in the basis. These bounds reduce to O(n3/2 log n) and O(n3/2), respectively, when G is unweighted. We get similar results for the all-pairs min cut problem since it is dual...

  9. Features of Random Metal Nanowire Networks with Application in Transparent Conducting Electrodes

    KAUST Repository

    Maloth, Thirupathi

    2017-01-01

    in terms of sheet resistance and optical transmittance. However, as the electrical properties of such random networks are achieved thanks to a percolation network, a minimum size of the electrodes is needed so it actually exceeds the representative volume

  10. New Minimum Wage Research: A Symposium.

    Science.gov (United States)

    Ehrenberg, Ronald G.; And Others

    1992-01-01

    Includes "Introduction" (Ehrenberg); "Effect of the Minimum Wage [MW] on the Fast-Food Industry" (Katz, Krueger); "Using Regional Variation in Wages to Measure Effects of the Federal MW" (Card); "Do MWs Reduce Employment?" (Card); "Employment Effects of Minimum and Subminimum Wages" (Neumark,…

  11. Teaching the Minimum Wage in Econ 101 in Light of the New Economics of the Minimum Wage.

    Science.gov (United States)

    Krueger, Alan B.

    2001-01-01

    Argues that the recent controversy over the effect of the minimum wage on employment offers an opportunity for teaching introductory economics. Examines eight textbooks to determine topic coverage but finds little consensus. Describes how minimum wage effects should be taught. (RLH)

  12. 30 CFR 75.1431 - Minimum rope strength.

    Science.gov (United States)

    2010-07-01

    ..., including rotation resistant). For rope lengths less than 3,000 feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet...

  13. Sizing procedures for sun-tracking PV system with batteries

    Directory of Open Access Journals (Sweden)

    Gerek Ömer Nezih

    2017-01-01

    Deciding optimum number of PV panels, wind turbines and batteries (i.e. a complete renewable energy system) for minimum cost and complete energy balance is a challenging and interesting problem. In the literature, some rough data models or limited recorded data together with low resolution hourly averaged meteorological values are used to test the sizing strategies. In this study, active sun tracking and fixed PV solar power generation values of ready-to-serve commercial products are recorded throughout 2015–2016. Simultaneously several outdoor parameters (solar radiation, temperature, humidity, wind speed/direction, pressure) are recorded with high resolution. The hourly energy consumption values of a standard 4-person household, which is constructed in our campus in Eskisehir, Turkey, are also recorded for the same period. During sizing, novel parametric random process models for wind speed, temperature, solar radiation, energy demand and electricity generation curves are achieved and it is observed that these models provide sizing results with lower LLP through Monte Carlo experiments that consider average and minimum performance cases. Furthermore, another novel cost optimization strategy is adopted to show that solar tracking PV panels provide lower costs by enabling reduced number of installed batteries. Results are verified over real recorded data.

  14. Sizing procedures for sun-tracking PV system with batteries

    Science.gov (United States)

    Nezih Gerek, Ömer; Başaran Filik, Ümmühan; Filik, Tansu

    2017-11-01

    Deciding optimum number of PV panels, wind turbines and batteries (i.e. a complete renewable energy system) for minimum cost and complete energy balance is a challenging and interesting problem. In the literature, some rough data models or limited recorded data together with low resolution hourly averaged meteorological values are used to test the sizing strategies. In this study, active sun tracking and fixed PV solar power generation values of ready-to-serve commercial products are recorded throughout 2015-2016. Simultaneously several outdoor parameters (solar radiation, temperature, humidity, wind speed/direction, pressure) are recorded with high resolution. The hourly energy consumption values of a standard 4-person household, which is constructed in our campus in Eskisehir, Turkey, are also recorded for the same period. During sizing, novel parametric random process models for wind speed, temperature, solar radiation, energy demand and electricity generation curves are achieved and it is observed that these models provide sizing results with lower LLP through Monte Carlo experiments that consider average and minimum performance cases. Furthermore, another novel cost optimization strategy is adopted to show that solar tracking PV panels provide lower costs by enabling reduced number of installed batteries. Results are verified over real recorded data.

  15. Judicial Interpretation of the Special Minimum Criminal Provisions in the Anti-Corruption Law

    Directory of Open Access Journals (Sweden)

    Ismail Rumadan

    2013-11-01

    provision in the formulation of the offence against perpetrators of corruption. This differs from the general criminal provisions in the draft Criminal Law (Penal Code), which are more familiar with maximum penal provisions. The results show that the special minimum criminal provisions in the anti-corruption law can be departed from, so long as the judge has proper legal reasoning (ratio decidendi) for the corruption case at hand, weighing the scale of the case from the perspectives of social justice, moral justice and community justice before deciding to impose a sentence below the minimum. In several court decisions, punishment below the special minimum provisions was justified by criteria the judges took into consideration when deviating from the minimum: the loss to state assets or the state economy caused by the act of corruption, and the role and position of the defendant in the act of corruption.

  16. 46 CFR 76.23-10 - Quantity, pipe sizes, and discharge rates.

    Science.gov (United States)

    2010-10-01

    ... to the number of heads served. Minimum pipe sizes shall be as given in table 76.23-10(c). Table 76.23... the same time to deliver water from the two highest fire hose outlets in a manner similar to that...

  17. Parabolic features and the erosion rate on Venus

    Science.gov (United States)

    Strom, Robert G.

    1993-01-01

    The impact cratering record on Venus consists of 919 craters covering 98 percent of the surface. These craters are remarkably well preserved, and most show pristine structures including fresh ejecta blankets. Only 35 craters (3.8 percent) have had their ejecta blankets embayed by lava and most of these occur in the Atla-Beta Regio region, an area thought to be recently active. Parabolic features are associated with 66 of the 919 craters. These craters range in size from 6 to 105 km in diameter. The parabolic features are thought to be the result of the deposition of fine-grained ejecta by winds in the dense venusian atmosphere. The deposits cover about 9 percent of the surface and none appear to be embayed by younger volcanic materials. However, there appears to be a paucity of these deposits in the Atla-Beta Regio region, and this may be due to the more recent volcanism in this area of Venus. Since parabolic features are probably fine-grained, wind-deposited ejecta, all impact craters on Venus probably had these deposits at some time in the past. The older deposits have probably been either eroded or buried by eolian processes. Therefore, the present population of these features is probably associated with the most recent impact craters on the planet. Furthermore, the size/frequency distribution of craters with parabolic features is virtually identical to that of the total crater population. This suggests that there has been little loss of small parabolic features compared to large ones, otherwise there should be a significant and systematic paucity of craters with parabolic features with decreasing size compared to the total crater population. Whatever is erasing the parabolic features apparently does so uniformly regardless of the areal extent of the deposit. The lifetime of parabolic features and the eolian erosion rate on Venus can be estimated from the average age of the surface and the present population of parabolic features.

  18. Radiographic features of periapical cysts and granulomas

    OpenAIRE

    Zain, R. B.; Roswati, N.; Ismail, K.

    1989-01-01

    Many studies have reported on the radiographic sizes of periapical lesions. However, no studies have reported on the prevalence of subjective radiographic features in these lesions, except for the early assumption that a periapical cyst usually exhibits a radiopaque cortex. This study was conducted to evaluate the prevalence of several subjective radiographic features of periapical cysts and granulomas, in the hope of identifying features that may be suggestive of either diagnosis. The resu...

  19. Cuticular features as indicators of environmental pollution

    Science.gov (United States)

    G. K. Sharma

    1976-01-01

    Several leaf cuticular features such as stomatal frequency, stomatal size, trichome length, type, and frequency, and subsidiary cell complex respond to environmental pollution in different ways and hence can be used as indicators of environmental pollution in an area. Several modifications in cuticular features under polluted environments seem to indicate ecotypic or...

  20. 30 CFR 281.30 - Minimum royalty.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Minimum royalty. 281.30 Section 281.30 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR OFFSHORE LEASING OF MINERALS OTHER THAN OIL, GAS, AND SULPHUR IN THE OUTER CONTINENTAL SHELF Financial Considerations § 281.30 Minimum royalty...

  1. State cigarette minimum price laws - United States, 2009.

    Science.gov (United States)

    2010-04-09

    Cigarette price increases reduce the demand for cigarettes and thereby reduce smoking prevalence, cigarette consumption, and youth initiation of smoking. Excise tax increases are the most effective government intervention to increase the price of cigarettes, but cigarette manufacturers use trade discounts, coupons, and other promotions to counteract the effects of these tax increases and appeal to price-sensitive smokers. State cigarette minimum price laws, initiated by states in the 1940s and 1950s to protect tobacco retailers from predatory business practices, typically require a minimum percentage markup to be added to the wholesale and/or retail price. If a statute prohibits trade discounts from the minimum price calculation, these laws have the potential to counteract discounting by cigarette manufacturers. To assess the status of cigarette minimum price laws in the United States, CDC surveyed state statutes and identified those states with minimum price laws in effect as of December 31, 2009. This report summarizes the results of that survey, which determined that 25 states had minimum price laws for cigarettes (median wholesale markup: 4.00%; median retail markup: 8.00%), and seven of those states also expressly prohibited the use of trade discounts in the minimum retail price calculation. Minimum price laws can help prevent trade discounting from eroding the positive effects of state excise tax increases and higher cigarette prices on public health.

  2. Engineering features of ISX

    International Nuclear Information System (INIS)

    Lousteau, D.C.; Jernigan, T.C.; Schaffer, M.J.; Hussung, R.O.

    1975-01-01

    ISX, an Impurity Study Experiment, is presently being designed at Oak Ridge National Laboratory as a joint scientific effort between ORNL and General Atomic Company. ISX is a moderate size tokamak dedicated to the study of impurity production, diffusion, and control. The significant engineering features of this device are discussed

  3. Quark bag coupling to finite size pions

    International Nuclear Information System (INIS)

    De Kam, J.; Pirner, H.J.

    1982-01-01

    A standard approximation in theories of quark bags coupled to a pion field is to treat the pion as an elementary field, ignoring its substructure and finite size. A difficulty associated with these treatments is the lack of stability of the quark bag, due to the rapid increase of the pion pressure on the bag as the bag size diminishes. We investigate the effects of the finite size of the q anti-q pion on the pion-quark bag coupling by means of a simple nonlocal pion-quark interaction. With this amendment the pion pressure on the bag vanishes as the bag size goes to zero. No stability problems are encountered in this description. Furthermore, for extended pions, a maximum is no longer set on the bag parameter B. Therefore 'little bag' solutions may be found provided that B is large enough. We also discuss the possibility of a second minimum in the bag energy function. (orig.)

  4. Methods for obtaining true particle size distributions from cross section measurements

    Energy Technology Data Exchange (ETDEWEB)

    Lord, Kristina Alyse [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    Sectioning methods are frequently used to measure grain sizes in materials. These methods do not provide accurate grain sizes for two reasons. First, the sizes of features observed on random sections are always smaller than the true sizes of solid spherical shaped objects, as noted by Wicksell [1]. This is the case because the section very rarely passes through the center of solid spherical shaped objects randomly dispersed throughout a material. The sizes of features observed on random sections are inversely related to the distance of the center of the solid object from the section [1]. Second, on a plane section through the solid material, larger sized features are more frequently observed than smaller ones due to the larger probability for a section to come into contact with the larger sized portion of the spheres than the smaller sized portion. As a result, it is necessary to find a method that takes into account these reasons for inaccurate particle size measurements, while providing a correction factor for accurately determining true particle size measurements. I present a method for deducing true grain size distributions from those determined from specimen cross sections, either by measurement of equivalent grain diameters or linear intercepts.
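
    A small Monte Carlo sketch of the Wicksell effect described above: random plane sections through equal-sized spheres yield circle radii that are biased low, so uncorrected section measurements understate the true particle size. (Illustrative only; the thesis treats full size distributions, not the monosized case.)

    import numpy as np

    rng = np.random.default_rng(1)
    R = 1.0                              # true radius of every sphere
    # Distance of the cutting plane from each intersected sphere centre,
    # uniform on [0, R) for the spheres the plane actually hits.
    d = rng.uniform(0.0, R, size=100_000)
    r_section = np.sqrt(R**2 - d**2)     # radius of the observed circular section

    print("true radius:          ", R)
    print("mean section radius:  ", r_section.mean())    # ~ pi/4 * R = 0.785 R
    print("median section radius:", np.median(r_section))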

  5. TreeBASIS Feature Descriptor and Its Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Spencer Fowers

    2014-01-01

    This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. This new descriptor uses a binary vocabulary tree that is computed using basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the effectively descriptive basis dictionary image at a node to determine the branch taken, and the path the feature region image takes through the tree is saved as the descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and the fact that descriptors can be created and features matched without the use of floating point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
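
    As a hedged sketch of two of the ingredients above, binary quantization of a feature-region patch and Hamming-distance comparison, the snippet below shows why matching needs no floating-point arithmetic. The vocabulary tree of basis dictionary images that gives TreeBASIS its descriptor is not reproduced here.

    import numpy as np

    def binary_quantize(patch: np.ndarray) -> np.ndarray:
        """Threshold a grayscale patch at its mean, giving a flat bit vector."""
        return (patch >= patch.mean()).astype(np.uint8).ravel()

    def hamming(a: np.ndarray, b: np.ndarray) -> int:
        """Number of differing bits between two equal-length bit vectors."""
        return int(np.count_nonzero(a != b))

    rng = np.random.default_rng(2)
    patch_a = rng.random((16, 16))
    patch_b = patch_a + rng.normal(0.0, 0.05, patch_a.shape)  # same region, mild noise
    patch_c = rng.random((16, 16))                            # unrelated region

    qa, qb, qc = map(binary_quantize, (patch_a, patch_b, patch_c))
    print("distance to matching patch: ", hamming(qa, qb))    # small
    print("distance to unrelated patch:", hamming(qa, qc))    # roughly half the bits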

  6. 9 CFR 147.51 - Authorized laboratory minimum requirements.

    Science.gov (United States)

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Authorized laboratory minimum requirements. 147.51 Section 147.51 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE... Authorized Laboratories and Approved Tests § 147.51 Authorized laboratory minimum requirements. These minimum...

  7. Experimental investigations of the minimum ignition energy and the minimum ignition temperature of inert and combustible dust cloud mixtures.

    Science.gov (United States)

    Addai, Emmanuel Kwasi; Gabel, Dieter; Krause, Ulrich

    2016-04-15

    The risks associated with dust explosions still exist in industries that either process or handle combustible dust. This explosion risk can be prevented or mitigated by applying the principle of inherent safety (moderation), i.e. by adding an inert material to a highly combustible material in order to decrease the ignition sensitivity of the combustible dust. The present paper deals with the experimental investigation of the influence of adding an inert dust on the minimum ignition energy and the minimum ignition temperature of combustible/inert dust mixtures. The investigation was carried out with two laboratory-scale apparatuses: the Hartmann apparatus for the minimum ignition energy and the Godbert-Greenwald furnace for the minimum ignition temperature. Various amounts of three inert materials (magnesium oxide, ammonium sulphate and sand) were mixed with six combustible dusts (brown coal, lycopodium, toner, niacin, corn starch and high density polyethylene). Generally, increasing the inert material concentration increases the minimum ignition energy as well as the minimum ignition temperature, until a threshold is reached beyond which no ignition is obtained. The permissible range for the inert mixture to minimize the ignition risk lies between 60 and 80%. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Hierarchical complexity and the size limits of life.

    Science.gov (United States)

    Heim, Noel A; Payne, Jonathan L; Finnegan, Seth; Knope, Matthew L; Kowalewski, Michał; Lyons, S Kathleen; McShea, Daniel W; Novack-Gottshall, Philip M; Smith, Felisa A; Wang, Steve C

    2017-06-28

    Over the past 3.8 billion years, the maximum size of life has increased by approximately 18 orders of magnitude. Much of this increase is associated with two major evolutionary innovations: the evolution of eukaryotes from prokaryotic cells approximately 1.9 billion years ago (Ga), and multicellular life diversifying from unicellular ancestors approximately 0.6 Ga. However, the quantitative relationship between organismal size and structural complexity remains poorly documented. We assessed this relationship using a comprehensive dataset that includes organismal size and level of biological complexity for 11 172 extant genera. We find that the distributions of sizes within complexity levels are unimodal, whereas the aggregate distribution is multimodal. Moreover, both the mean size and the range of size occupied increases with each additional level of complexity. Increases in size range are non-symmetric: the maximum organismal size increases more than the minimum. The majority of the observed increase in organismal size over the history of life on the Earth is accounted for by two discrete jumps in complexity rather than evolutionary trends within levels of complexity. Our results provide quantitative support for an evolutionary expansion away from a minimal size constraint and suggest a fundamental rescaling of the constraints on minimal and maximal size as biological complexity increases. © 2017 The Author(s).

  9. Generalised Brown Clustering and Roll-up Feature Generation

    DEFF Research Database (Denmark)

    Derczynski, Leon; Chester, Sean

    2016-01-01

    active set size. Moreover, the generalisation permits a novel approach to feature selection from Brown clusters: We show that the standard approach of shearing the Brown clustering output tree at arbitrary bitlengths is lossy and that features should be chosen instead by rolling up Generalised Brown...

  10. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition

    Science.gov (United States)

    Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan

    2017-01-01

    Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. The best combination of feature variables is decided automatically using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping in this real-world optimization problem, an effective minimum enclosing ball (MEB) algorithm combined with a support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition data that exhibit an inherently complex distribution. The proposed method is examined on UCI data sets and the caving dataset, and compared with several recent SVM classifiers. We conduct experiments comparing accuracy, together with a Friedman test, across multiple classifiers on the UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The results on the caving dataset show better performance, which makes this a promising approach to feature selection and multi-class recognition in coal-rock recognition. PMID:28937987

  11. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition.

    Science.gov (United States)

    Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan

    2017-01-01

    Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. The best combination of feature variables is decided automatically using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping in this real-world optimization problem, an effective minimum enclosing ball (MEB) algorithm combined with a support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition data that exhibit an inherently complex distribution. The proposed method is examined on UCI data sets and the caving dataset, and compared with several recent SVM classifiers. We conduct experiments comparing accuracy, together with a Friedman test, across multiple classifiers on the UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The results on the caving dataset show better performance, which makes this a promising approach to feature selection and multi-class recognition in coal-rock recognition.
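
    For the geometric building block named above, here is a minimal sketch of an approximate minimum enclosing ball using the simple Badoiu-Clarkson iteration (the centre repeatedly steps toward the farthest point). The 10-dimensional random points are a stand-in for the caving feature vectors; the paper's coupling of the MEB with an SVM is not shown.

    import numpy as np

    def minimum_enclosing_ball(points: np.ndarray, iterations: int = 500):
        """Approximate centre and radius of the smallest ball containing `points`."""
        center = points.mean(axis=0)
        for t in range(1, iterations + 1):
            distances = np.linalg.norm(points - center, axis=1)
            farthest = points[np.argmax(distances)]
            center = center + (farthest - center) / (t + 1)   # shrinking step size
        radius = np.linalg.norm(points - center, axis=1).max()
        return center, radius

    rng = np.random.default_rng(3)
    pts = rng.normal(size=(200, 10))
    c, r = minimum_enclosing_ball(pts)
    print("centre norm:", np.linalg.norm(c), " radius:", r)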

  12. Minimum Price Guarantees In a Consumer Search Model

    NARCIS (Netherlands)

    M.C.W. Janssen (Maarten); A. Parakhonyak (Alexei)

    2009-01-01

    This paper is the first to examine the effect of minimum price guarantees in a sequential search model. Minimum price guarantees are not advertised and only known to consumers when they come to the shop. We show that in such an environment, minimum price guarantees increase the value of

  13. Wage inequality, minimum wage effects and spillovers

    OpenAIRE

    Stewart, Mark B.

    2011-01-01

    This paper investigates possible spillover effects of the UK minimum wage. The halt in the growth in inequality in the lower half of the wage distribution (as measured by the 50:10 percentile ratio) since the mid-1990s, in contrast to the continued inequality growth in the upper half of the distribution, suggests the possibility of a minimum wage effect and spillover effects on wages above the minimum. This paper analyses individual wage changes, using both a difference-in-differences estimat...

  14. Minimum Variance Portfolios in the Brazilian Equity Market

    Directory of Open Access Journals (Sweden)

    Alexandre Rubesam

    2013-03-01

    We investigate minimum variance portfolios in the Brazilian equity market using different methods to estimate the covariance matrix, from the simple model of using the sample covariance to multivariate GARCH models. We compare the performance of the minimum variance portfolios to those of the following benchmarks: (i) the IBOVESPA equity index, (ii) an equally-weighted portfolio, (iii) the maximum Sharpe ratio portfolio and (iv) the maximum growth portfolio. Our results show that the minimum variance portfolio has higher returns with lower risk compared to the benchmarks. We also consider long-short 130/30 minimum variance portfolios and obtain similar results. The minimum variance portfolio invests in relatively few stocks with low βs measured with respect to the IBOVESPA index, being easily replicable by individual and institutional investors alike.
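
    As a worked illustration of the portfolio construction these results rest on, the sketch below computes the unconstrained minimum variance weights w = Σ⁻¹1 / (1ᵀΣ⁻¹1) from a sample covariance matrix. The returns are synthetic, not IBOVESPA data, and the paper's GARCH-based covariance estimates and 130/30 long-short constraints are not reproduced.

    import numpy as np

    rng = np.random.default_rng(4)
    returns = rng.normal(0.0005, 0.02, size=(750, 8))   # ~3 years of daily returns, 8 stocks

    cov = np.cov(returns, rowvar=False)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    w /= ones @ w                                        # normalize so the weights sum to one

    print("weights:", np.round(w, 3))
    print("portfolio variance:", w @ cov @ w)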

  15. Characteristics of low-latitude ionospheric depletions and enhancements during solar minimum

    Science.gov (United States)

    Haaser, R. A.; Earle, G. D.; Heelis, R. A.; Klenzing, J.; Stoneback, R.; Coley, W. R.; Burrell, A. G.

    2012-10-01

    Under the waning solar minimum conditions during 2009 and 2010, the Ion Velocity Meter, part of the Coupled Ion Neutral Dynamics Investigation aboard the Communication/Navigation Outage Forecasting System satellite, is used to measure in situ nighttime ion densities and drifts at altitudes between 400 and 550 km during the hours 21:00-03:00 solar local time. A new approach to detecting and classifying well-formed ionospheric plasma depletions and enhancements (bubbles and blobs) with scale sizes between 50 and 500 km is used to develop geophysical statistics for the summer, winter, and equinox seasons during the quiet solar conditions. Some diurnal and seasonal geomagnetic distribution characteristics confirm previous work on equatorial irregularities and scintillations, while other elements reveal new behaviors that will require further investigation before they may be fully understood. Events identified in the study reveal very different and often opposite behaviors of bubbles and blobs during solar minimum. In particular, more bubbles demonstrating deeper density fluctuations and faster perturbation plasma drifts typically occur earlier near the magnetic equator, while blobs of similar magnitude occur more often far away from the geomagnetic equator closer to midnight.

  16. Minimum Covers of Fixed Cardinality in Weighted Graphs.

    Science.gov (United States)

    White, Lee J.

    Reported is the result of research on combinatorial and algorithmic techniques for information processing. A method is discussed for obtaining minimum covers of specified cardinality from a given weighted graph. By the indicated method, it is shown that the family of minimum covers of varying cardinality is related to the minimum spanning tree of…

  17. Who Benefits from a Minimum Wage Increase?

    OpenAIRE

    John W. Lopresti; Kevin J. Mumford

    2015-01-01

    This paper addresses the question of how a minimum wage increase affects the wages of low-wage workers. Most studies assume that there is a simple mechanical increase in the wage for workers earning a wage between the old and the new minimum wage, with some studies allowing for spillovers to workers with wages just above this range. Rather than assume that the wages of these workers would have remained constant, this paper estimates how a minimum wage increase impacts a low-wage worker's wage...

  18. Representative volume element size of a polycrystalline aggregate with embedded short crack

    International Nuclear Information System (INIS)

    Simonovski, I.; Cizelj, L.

    2007-01-01

    A random polycrystalline aggregate model is proposed for evaluating the representative volume element (RVE) size of 316L stainless steel with an embedded surface crack. The RVE size is important since it defines the specimen size at which the influence of local microstructural features averages out, resulting in the same macroscopic response for geometrically similar specimens. Macroscopic responses of specimens smaller than the RVE will, due to the microstructural features, differ significantly. Different sizes and orientations of grains, inclusions, voids, etc. are examples of such microstructural features. If the specimen size is above the RVE size, classical continuum mechanics can be applied; for specimens below the RVE size, advanced material models should be used. This paper proposes one such model, in which the random size, shape and orientation of the grains are explicitly modeled. A crystal plasticity constitutive model is used to account for slip in the grains. The RVE size is estimated by calculating the crack tip opening displacements of aggregates with different numbers of grains. Progressively larger numbers of grains are included in the aggregates until the crack tip displacements of two consecutive aggregates of increasing size differ by less than 1%. At this point the model has reached the RVE size. (author)
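
    The convergence criterion used above reduces to a short loop. In this sketch, simulate_ctod is a hypothetical placeholder for the crystal-plasticity finite element run that returns the crack tip opening displacement for an aggregate with a given number of grains.

    def find_rve_size(grain_counts, simulate_ctod, tolerance=0.01):
        """Return the first grain count whose CTOD differs by <1% from the previous size."""
        previous = None
        for n in grain_counts:
            ctod = simulate_ctod(n)
            if previous is not None and abs(ctod - previous) / previous < tolerance:
                return n
            previous = ctod
        return None  # no convergence within the tested aggregate sizes

    # Made-up saturating response standing in for the finite element model.
    print(find_rve_size(range(50, 1001, 50), lambda n: 1.0 + 5.0 / n))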

  19. Clustering based gene expression feature selection method: A computational approach to enrich the classifier efficiency of differentially expressed genes

    KAUST Repository

    Abusamra, Heba

    2016-07-20

    The high-dimension, low-sample-size nature of gene expression data makes the classification task more challenging, so feature (gene) selection becomes an apparent need. Selecting meaningful and relevant genes for the classifier not only decreases the computational time and cost, but also improves the classification performance. However, most existing feature selection methods suffer from problems such as lack of robustness and validation issues. Here, we present a new feature selection technique that takes advantage of clustering both samples and genes. Materials and methods We used a leukemia gene expression dataset [1]. The effectiveness of the selected features was evaluated by four different classification methods: support vector machines, k-nearest neighbor, random forest, and linear discriminant analysis. The method evaluates the importance and relevance of each gene cluster by summing the expression levels of the genes belonging to that cluster. A gene cluster is considered important if it satisfies conditions depending on thresholds and percentages; otherwise it is eliminated. Results Initial analysis identified 7120 differentially expressed genes of leukemia (Fig. 15a); after applying our feature selection methodology we end up with 1117 specific genes discriminating the two classes of leukemia (Fig. 15b). Applying the same method with more stringent (higher positive and lower negative) threshold conditions reduced the number to 58 genes, which were tested to evaluate the effectiveness of the method (Fig. 15c). The results of the four classification methods are summarized in Table 11. Conclusions The feature selection method gave good results with minimum classification error. Our heat-map result shows a distinct pattern of the refined genes discriminating between the two classes of leukemia.
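
    A loose sketch of the cluster-then-threshold idea (the paper's exact thresholds, percentages and the leukemia data are not reproduced): cluster the genes, score each cluster by the gap in its summed expression between the two classes, and keep only genes from clusters whose score clears a hypothetical threshold.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    X = rng.normal(size=(72, 500))            # samples x genes, synthetic stand-in
    y = np.array([0] * 47 + [1] * 25)         # two leukemia classes

    clusters = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(X.T)

    # Summed expression of each gene cluster, per sample.
    cluster_expr = np.column_stack([X[:, clusters == c].sum(axis=1) for c in range(20)])
    # Score a cluster by the gap between the class means of its summed expression.
    score = np.abs(cluster_expr[y == 0].mean(axis=0) - cluster_expr[y == 1].mean(axis=0))
    keep_clusters = np.where(score > 1.0)[0]  # hypothetical threshold
    keep_genes = np.where(np.isin(clusters, keep_clusters))[0]
    print("clusters kept:", len(keep_clusters), "genes kept:", len(keep_genes))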

  20. How unprecedented a solar minimum was it?

    Science.gov (United States)

    Russell, C T; Jian, L K; Luhmann, J G

    2013-05-01

    The end of the last solar cycle was at least 3 years late, and to date, the new solar cycle has seen mainly weaker activity since the onset of the rising phase toward the new solar maximum. The newspapers now even report when auroras are seen in Norway. This paper is an update of our review paper written during the deepest part of the last solar minimum [1]. We update the records of solar activity and its consequent effects on the interplanetary fields and solar wind density. The arrival of solar minimum allows us to use two techniques that predict sunspot maximum from readings obtained at solar minimum. It is clear that the Sun is still behaving strangely compared to the last few solar minima even though we are well beyond the minimum phase of the cycle 23-24 transition.

  1. Topside measurements at Jicamarca during solar minimum

    Directory of Open Access Journals (Sweden)

    D. L. Hysell

    2009-01-01

    Long-pulse topside radar data acquired at Jicamarca and processed using full-profile analysis are compared to data processed using more conventional, range-gated approaches and with analytic and computational models. The salient features of the topside observations include a dramatic increase in the Te/Ti temperature ratio above the F peak at dawn and a local minimum in the topside plasma temperature in the afternoon. The hydrogen ion fraction was found to exhibit hyperbolic tangent-shaped profiles that become shallow (gradually changing) above the O+-H+ transition height during the day. The profile shapes are generally consistent with diffusive equilibrium, although shallowing to the point of changes in inflection can only be accounted for by taking the effects of E×B drifts and meridional winds into account. The SAMI2 model demonstrates this as well as the substantial effect that drifts and winds can have on topside temperatures. Significant quiet-time variability in the topside composition and temperatures may be due to variability in the mechanical forcing. Correlations between topside measurements and magnetometer data at Jicamarca support this hypothesis.

  2. THE CHROMOSPHERIC SOLAR MILLIMETER-WAVE CAVITY ORIGINATES IN THE TEMPERATURE MINIMUM REGION

    Energy Technology Data Exchange (ETDEWEB)

    De la Luz, Victor [Instituto Nacional de Astrofisica, Optica y Electronica, Tonantzintla, Puebla, Mexico, Apdo. Postal 51 y 216, 72000 (Mexico); Raulin, Jean-Pierre [CRAAM, Universidade Presbiteriana Mackenzie, Sao Paulo, SP 01302-907 (Brazil); Lara, Alejandro [Instituto de Geofisica, Universidad Nacional Autonoma de Mexico, Mexico 04510 (Mexico)

    2013-01-10

    We present a detailed theoretical analysis of the local radio emission at the lower part of the solar atmosphere. To accomplish this, we have used a numerical code to simulate the emission and transport of high-frequency electromagnetic waves from 2 GHz up to 10 THz. As initial conditions, we used VALC, SEL05, and C7 solar chromospheric models. In this way, the generated synthetic spectra allow us to study the local emission and absorption processes with high resolution in both altitude and frequency. Associated with the temperature minimum predicted by these models, we found that the local optical depth at millimeter wavelengths remains constant, producing an optically thin layer that is surrounded by two layers of high local emission. We call this structure the Chromospheric Solar Millimeter-wave Cavity (CSMC). The temperature profile, which features temperature minimum layers and a subsequent temperature rise, produces the CSMC phenomenon. The CSMC shows the complexity of the relation between the theoretical temperature profile and the observed brightness temperature and may help us to understand the dispersion of the observed brightness temperature in the millimeter wavelength range.

  3. Minimum-Cost Reachability for Priced Timed Automata

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Fehnker, Ansgar; Hune, Thomas Seidelin

    2001-01-01

    This paper introduces the model of linearly priced timed automata as an extension of timed automata, with prices on both transitions and locations. For this model we consider the minimum-cost reachability problem: i.e. given a linearly priced timed automaton and a target state, determine...... the minimum cost of executions from the initial state to the target state. This problem generalizes the minimum-time reachability problem for ordinary timed automata. We prove decidability of this problem by offering an algorithmic solution, which is based on a combination of branch-and-bound techniques...

  4. Minimum Q Electrically Small Antennas

    DEFF Research Database (Denmark)

    Kim, O. S.

    2012-01-01

    Theoretically, the minimum radiation quality factor Q of an isolated resonance can be achieved in a spherical electrically small antenna by combining TM1m and TE1m spherical modes, provided that the stored energy in the antenna spherical volume is totally suppressed. Using closed-form expressions...... for a multiarm spherical helix antenna confirm the theoretical predictions. For example, a 4-arm spherical helix antenna with a magnetic-coated perfectly electrically conducting core (ka=0.254) exhibits the Q of 0.66 times the Chu lower bound, or 1.25 times the minimum Q....

  5. Enhancing the Performance of LibSVM Classifier by Kernel F-Score Feature Selection

    Science.gov (United States)

    Sarojini, Balakrishnan; Ramaraj, Narayanasamy; Nickolas, Savarimuthu

    Medical data mining is the search for relationships and patterns within medical datasets that could provide useful knowledge for effective clinical decisions. The inclusion of irrelevant, redundant and noisy features in the process model results in poor predictive accuracy. Much research work in data mining has gone into improving the predictive accuracy of classifiers by applying feature selection techniques. Feature selection is valuable in medical data mining because the diagnosis of a disease can then be made with a minimum number of significant features. The objective of this work is to show that selecting the more significant features improves the performance of the classifier. We empirically evaluate the classification effectiveness of the LibSVM classifier on the reduced feature subset of a diabetes dataset. The evaluations suggest that the selected feature subset improves the predictive accuracy of the classifier and reduces false negatives and false positives.
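
    A compact sketch of F-score feature ranking followed by an SVM (scikit-learn's SVC, which wraps LibSVM). The formula is the standard per-feature F-score of Chen and Lin; the kernel-space variant used in the paper, and the real diabetes data, are not reproduced here.

    import numpy as np
    from sklearn.svm import SVC

    def f_score(X: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Discriminative score of each feature for a binary 0/1 label vector."""
        pos, neg = X[y == 1], X[y == 0]
        num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
        den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
        return num / (den + 1e-12)

    rng = np.random.default_rng(6)
    X = rng.normal(size=(300, 8))                  # synthetic stand-in for the 8 diabetes attributes
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 300) > 0).astype(int)

    top = np.argsort(f_score(X, y))[::-1][:4]      # keep the 4 highest-scoring features
    clf = SVC(kernel="rbf").fit(X[:, top], y)
    print("selected features:", top, " training accuracy:", clf.score(X[:, top], y))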

  6. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-seconds (˜ 1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10⁰ to 10⁶ km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
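
    A minimal sketch of the empirical modelling approach: a log-log (power-law) regression of mean annual flow on catchment area and mean annual precipitation, fitted by ordinary least squares. Both the predictor set and the data are simplified, synthetic stand-ins for the catchment characteristics and discharge observations the study uses.

    import numpy as np

    rng = np.random.default_rng(9)
    n = 4000
    area = 10 ** rng.uniform(0, 6, n)            # catchment area, km^2
    precip = rng.uniform(200, 3000, n)           # mean annual precipitation, mm
    # Synthetic "observed" mean annual flow with a power-law dependence plus noise.
    af = 1e-6 * area ** 0.95 * precip ** 1.4 * np.exp(rng.normal(0, 0.3, n))

    # Ordinary least squares in log space: log AF = b0 + b1*log(area) + b2*log(precip).
    X = np.column_stack([np.ones(n), np.log(area), np.log(precip)])
    coef, *_ = np.linalg.lstsq(X, np.log(af), rcond=None)
    pred = X @ coef
    r2 = 1 - np.sum((np.log(af) - pred) ** 2) / np.sum((np.log(af) - np.log(af).mean()) ** 2)
    print("coefficients:", np.round(coef, 3), " R^2 (log space):", round(r2, 3))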

  7. Stochastic variational approach to minimum uncertainty states

    Energy Technology Data Exchange (ETDEWEB)

    Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)

    1995-05-21

    We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)

  8. No support for Heincke's law in hagfish (Myxinidae): lack of an association between body size and the depth of species occurrence.

    Science.gov (United States)

    Schumacher, E L; Owens, B D; Uyeno, T A; Clark, A J; Reece, J S

    2017-08-01

    This study tests for interspecific evidence of Heincke's law among hagfishes and advances the field of research on body size and depth of occurrence in fishes by including a phylogenetic correction and by examining depth in four ways: maximum depth, minimum depth, mean depth of recorded specimens and the average of maximum and minimum depths of occurrence. Results yield no evidence for Heincke's law in hagfishes, no phylogenetic signal for the depth at which species occur, but moderate to weak phylogenetic signal for body size, suggesting that phylogeny may play a role in determining body size in this group. © 2017 The Fisheries Society of the British Isles.

  9. Minimum entropy production principle

    Czech Academy of Sciences Publication Activity Database

    Maes, C.; Netočný, Karel

    2013-01-01

    Roč. 8, č. 7 (2013), s. 9664-9677 ISSN 1941-6016 Institutional support: RVO:68378271 Keywords : MINEP Subject RIV: BE - Theoretical Physics http://www.scholarpedia.org/article/Minimum_entropy_production_principle

  10. Stability of deep features across CT scanners and field of view using a physical phantom

    Science.gov (United States)

    Paul, Rahul; Shafiq-ul-Hassan, Muhammad; Moros, Eduardo G.; Gillies, Robert J.; Hall, Lawrence O.; Goldgof, Dmitry B.

    2018-02-01

    Radiomics is the process of analyzing radiological images by extracting quantitative features for monitoring and diagnosis of various cancers. Analyzing images acquired from different medical centers is confounded by many choices in acquisition and reconstruction parameters and by differences among device manufacturers. Consequently, scanning the same patient or phantom using various acquisition/reconstruction parameters as well as different scanners may result in different feature values. To further evaluate this issue, in this study, CT images from a physical radiomic phantom were used. Recent studies showed that some quantitative features were dependent on voxel size and that this dependency could be reduced or removed by an appropriate normalization factor. Deep features extracted from a convolutional neural network may also provide additional features for image analysis. Using a transfer learning approach, we obtained deep features from three convolutional neural networks pre-trained on color camera images. We then examined the dependency of deep features on image pixel size. We found that some deep features were pixel size dependent, and to remove this dependency we proposed two effective normalization approaches. For analyzing the effects of normalization, a threshold was used, based on the standard deviation and the average distance of each feature from a best-fit horizontal line against the underlying pixel size, before and after normalization. The inter- and intra-scanner dependency of deep features was also evaluated.

  11. THE SPITZER INFRARED SPECTROGRAPH DEBRIS DISK CATALOG. II. SILICATE FEATURE ANALYSIS OF UNRESOLVED TARGETS

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Tushar [Department of Earth and Planetary Science, University of California Berkeley, Berkeley, CA 94720-4767 (United States); Chen, Christine H. [Space Telescope Science Institute, 3700 San Martin Drive Baltimore, MD 21218 (United States); Jang-Condell, Hannah [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Manoj, P. [Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005 (India); Sargent, Benjamin A. [Center for Imaging Science and Laboratory for Multiwavelength Astrophysics, Rochester Institute of Technology, 54 Lomb Memorial Drive, Rochester, NY 14623 (United States); Watson, Dan M. [Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627 (United States); Lisse, Carey M., E-mail: cchen@stsci.edu [Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States)

    2015-01-10

    During the Spitzer Space Telescope cryogenic mission, astronomers obtained Infrared Spectrograph (IRS) observations of hundreds of debris disk candidates that have been compiled in the Spitzer IRS Debris Disk Catalog. We have discovered 10 and/or 20 μm silicate emission features toward 120 targets in the catalog and modeled the IRS spectra of these sources, consistent with MIPS 70 μm observations, assuming that the grains are composed of silicates (olivine, pyroxene, forsterite, and enstatite) and are located either in a continuous disk with power-law size and surface density distributions or thin rings that are well-characterized using two separate dust grain temperatures. For systems better fit by the continuous disk model, we find that (1) the dust size distribution power-law index is consistent with that expected from a collisional cascade, q = 3.5-4.0, with a large number of values outside this range, and (2) the minimum grain size, a_min, increases with stellar luminosity, L_*, but the dependence of a_min on L_* is weaker than expected from radiation pressure alone. In addition, we also find that (3) the crystalline fraction of dust in debris disks evolves as a function of time with a large dispersion in crystalline fractions for stars of any particular stellar age or mass, (4) the disk inner edge is correlated with host star mass, and (5) there exists substantial variation in the properties of coeval disks in Sco-Cen, indicating that the observed variation is probably due to stochasticity and diversity in planet formation.

  12. Blade size and weight effects in shovel design.

    Science.gov (United States)

    Freivalds, A; Kim, Y J

    1990-03-01

    The shovel is a basic tool that has undergone only nominal systematic design changes. Although previous studies found shovel-weight and blade-size effects of shovelling, the exact trade-off between the two has not been quantified. Energy expenditure, heart rate, ratings of perceived exertion and shovelling performance were measured on five subjects using five shovels with varying blade sizes and weights to move sand. Energy expenditure, normalised to subject weight and load handled, varied quadratically with the blade-size/shovel-weight (B/W) ratio. Minimum energy cost was at B/W = 0.0676 m2/kg, which for an average subject and average load would require an acceptable 5.16 kcal/min of energy expenditure. Subjects, through the ratings of perceived exertion, also strongly preferred the lighter shovels without regard to blade size. Too large a blade or too heavy a shovel increased energy expenditure beyond acceptable levels, while too small a blade reduced efficiency of the shovelling.

  13. Improvement in minimum detectable activity for low energy gamma by optimization in counting geometry

    Directory of Open Access Journals (Sweden)

    Anil Gupta

    2017-01-01

    Gamma spectrometry of environmental samples with low specific activities demands low minimum detection levels of measurement. An attempt has been made to lower the gamma detection level by optimizing the sample geometry, without compromising on the sample size. The gamma energy range of 50–200 keV was chosen for the study, since low energy gamma photons suffer the most self-attenuation within the matrix. The simulation study was carried out using the MCNP-based software "EffCalcMC" for a silica matrix and cylindrical geometries. A 250 ml sample geometry of 9 cm diameter was found to be the most suitable for use, compared with the 7 cm diameter geometry of the same volume currently in practice. An increase in efficiency of 10%–23% was observed for the 50–200 keV gamma energy range, and a correspondingly lower minimum detectable activity, by 9%–20%, could be achieved.
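
    To illustrate why the efficiency gain translates into a lower detection limit, the sketch below evaluates the widely used Currie expression MDA = (2.71 + 4.65·sqrt(B)) / (ε·p·t) for a hypothetical background and the roughly 10-23% efficiency improvement reported above. All numbers are illustrative, not taken from the study.

    import math

    def mda_bq(background_counts: float, efficiency: float,
               gamma_yield: float, live_time_s: float) -> float:
        """Currie minimum detectable activity (Bq), with geometry folded into the efficiency."""
        return (2.71 + 4.65 * math.sqrt(background_counts)) / (efficiency * gamma_yield * live_time_s)

    B, p, t = 400.0, 0.85, 60_000.0           # hypothetical background counts, emission probability, live time
    for eff in (0.0200, 0.0220, 0.0246):      # base efficiency, +10%, +23% (illustrative)
        print(f"efficiency {eff:.4f} -> MDA {mda_bq(B, eff, p, t):.4f} Bq")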

  14. Improved pulmonary nodule classification utilizing quantitative lung parenchyma features.

    Science.gov (United States)

    Dilger, Samantha K N; Uthoff, Johanna; Judisch, Alexandra; Hammond, Emily; Mott, Sarah L; Smith, Brian J; Newell, John D; Hoffman, Eric A; Sieren, Jessica C

    2015-10-01

    Current computer-aided diagnosis (CAD) models for determining pulmonary nodule malignancy characterize nodule shape, density, and border in computed tomography (CT) data. Analyzing the lung parenchyma surrounding the nodule has been minimally explored. We hypothesize that improved nodule classification is achievable by including features quantified from the surrounding lung tissue. To explore this hypothesis, we have developed expanded quantitative CT feature extraction techniques, including volumetric Laws texture energy measures for the parenchyma and nodule, border descriptors using ray-casting and rubber-band straightening, histogram features characterizing densities, and global lung measurements. Using stepwise forward selection and leave-one-case-out cross-validation, a neural network was used for classification. When applied to 50 nodules (22 malignant and 28 benign) from high-resolution CT scans, 52 features (8 nodule, 39 parenchymal, and 5 global) were statistically significant. Nodule-only features yielded an area under the ROC curve of 0.918 (including nodule size) and 0.872 (excluding nodule size). Performance was improved through inclusion of parenchymal (0.938) and global features (0.932). These results show a trend toward increased performance when the parenchyma is included, coupled with the large number of significant parenchymal features that support our hypothesis: the pulmonary parenchyma is influenced differentially by malignant versus benign nodules, assisting CAD-based nodule characterizations.
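
    The "Laws texture energy" parenchymal features mentioned above are straightforward to compute; the sketch below builds classic 5x5 Laws masks from outer products of 1-D kernels and measures the mean absolute filter response over a synthetic patch. This is a generic 2-D illustration, not the study's volumetric implementation.

    import numpy as np
    from scipy.signal import convolve2d

    # Classic 1-D Laws kernels: Level, Edge, Spot, Ripple.
    kernels = {"L5": [1, 4, 6, 4, 1], "E5": [-1, -2, 0, 2, 1],
               "S5": [-1, 0, 2, 0, -1], "R5": [1, -4, 6, -4, 1]}

    rng = np.random.default_rng(10)
    patch = rng.random((64, 64))      # synthetic stand-in for a lung parenchyma ROI

    energies = {}
    for name_a, ka in kernels.items():
        for name_b, kb in kernels.items():
            mask = np.outer(ka, kb)                       # 5x5 Laws mask
            response = convolve2d(patch, mask, mode="same")
            energies[name_a + name_b] = np.mean(np.abs(response))
    print({k: round(v, 3) for k, v in energies.items()})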

  15. Improving Music Genre Classification by Short-Time Feature Integration

    DEFF Research Database (Denmark)

    Meng, Anders; Ahrendt, Peter; Larsen, Jan

    2005-01-01

    Many different short-time features, using time windows on the order of 10-30 ms, have been proposed for music segmentation, retrieval and genre classification. However, often the available time frame of the music to make the actual decision or comparison (the decision time horizon) is in the range...... of seconds instead of milliseconds. The problem of making new features on the larger time scale from the short-time features (feature integration) has received only little attention. This paper investigates different methods for feature integration and late information fusion for music genre classification...

  16. Minimum emittance in TBA and MBA lattices

    Science.gov (United States)

    Xu, Gang; Peng, Yue-Mei

    2015-03-01

    For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) lattices and even multiple bend achromats (MBA) have been considered. This paper theoretically derives the necessary condition for achieving minimum emittance in TBA and MBA lattices, in which the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. We also calculate, by purely analytical methods, the conditions for attaining the minimum emittance of a TBA as a function of phase advance in some special cases. These results may give some direction for lattice design.

  17. Minimum emittance in TBA and MBA lattices

    International Nuclear Information System (INIS)

    Xu Gang; Peng Yuemei

    2015-01-01

    For reaching a small emittance in a modern light source, triple bend achromats (TBA), theoretical minimum emittance (TME) lattices and even multiple bend achromats (MBA) have been considered. This paper theoretically derives the necessary condition for achieving minimum emittance in TBA and MBA lattices, in which the bending angle of the inner dipoles is a factor of 3^(1/3) larger than that of the outer dipoles. We also calculate, by purely analytical methods, the conditions for attaining the minimum emittance of a TBA as a function of phase advance in some special cases. These results may give some direction for lattice design. (authors)
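
    For context, the standard single-dipole theoretical-minimum-emittance expression underlying such analyses is sketched below, together with the angle condition quoted in the abstract. The paper's own notation and derivation may differ.

```latex
% Standard theoretical-minimum-emittance result for a single dipole of bending
% angle \theta (small-angle limit), which the TBA/MBA analysis builds on:
\varepsilon_{\mathrm{TME}} \;=\; \frac{C_q\,\gamma^{2}\,\theta^{3}}{12\sqrt{15}\,J_x},
\qquad C_q \simeq 3.83\times 10^{-13}\ \mathrm{m}.
% Necessary condition quoted in the abstract for the overall minimum of a
% TBA/MBA lattice: the inner dipoles bend more than the outer ones by
\theta_{\mathrm{inner}} \;=\; 3^{1/3}\,\theta_{\mathrm{outer}}.
```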

  18. Size effects in manufacturing of metallic components

    DEFF Research Database (Denmark)

    Vollertsen, F; Biermann, D; Hansen, Hans Nørgaard

    2009-01-01

    In manufacturing of metallic components, the size of the part plays an important role for the process behaviour. This is due to so-called size effects, which lead to changes in the process behaviour even if the relationship between the main geometrical features is kept constant. The aim...... of this paper is to give a systematic review on such effects and their potential use or remedy. First, the typology of size effects will be explained, followed by a description of size effects on strength and tribology. The last three sections describe size effects on formability, forming processes and cutting...... processes. (C) 2009 CIRP....

  19. Prostate cancer multi-feature analysis using trans-rectal ultrasound images

    International Nuclear Information System (INIS)

    Mohamed, S S; Salama, M M A; Kamel, M; El-Saadany, E F; Rizkalla, K; Chin, J

    2005-01-01

    This note focuses on extracting and analysing prostate texture features from trans-rectal ultrasound (TRUS) images for tissue characterization. One of the principal contributions of this investigation is the combined use of the images' frequency-domain and spatial-domain features to attain a more accurate diagnosis. Each image is divided into regions of interest (ROIs) by the Gabor multi-resolution analysis, a crucial stage in which segmentation is achieved according to the frequency response of the image pixels. The pixels with a similar response to the same filter are grouped to form one ROI. Next, from each ROI two different statistical feature sets are constructed; the first set includes four grey level dependence matrix (GLDM) features and the second set consists of five grey level difference vector (GLDV) features. These constructed feature sets are then ranked by the mutual information feature selection (MIFS) algorithm. Here, the features that provide the maximum mutual information between each feature and the class (cancerous and non-cancerous) and the minimum mutual information among the already selected features are chosen, yielding a reduced feature subset. The two constructed feature sets, GLDM and GLDV, as well as the reduced feature subset, are examined with three different classifiers: the condensed k-nearest neighbour (CNN), the decision tree (DT) and the support vector machine (SVM). The classification accuracies range from 87.5% to 93.75%, with the performance of the SVM and the DT significantly better than that of the CNN. (note)
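
    A generic greedy mutual-information selection of the Battiti (MIFS) type is sketched below on synthetic data; the paper's exact MIFS settings, estimator and feature values are not reproduced, and the redundancy penalty beta is an assumed parameter.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mifs(X, y, n_select=5, beta=0.5):
    """Greedy mutual-information feature selection (Battiti-style MIFS).
    A generic sketch, not the exact algorithm or settings of the paper."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=0)  # I(feature; class)
    selected, remaining = [], list(range(n_features))
    while remaining and len(selected) < n_select:
        scores = []
        for f in remaining:
            # Penalise redundancy with already-selected features.
            redundancy = sum(
                mutual_info_regression(X[:, [f]], X[:, s], random_state=0)[0]
                for s in selected)
            scores.append(relevance[f] - beta * redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage with synthetic GLDM/GLDV-like features (placeholder data).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 9))      # 4 GLDM + 5 GLDV features per ROI
y = rng.integers(0, 2, size=120)   # cancerous vs non-cancerous
print("selected feature indices:", mifs(X, y, n_select=4))
```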

  20. A multidimensional stability model for predicting shallow landslide size and shape across landscapes.

    Science.gov (United States)

    Milledge, David G; Bellugi, Dino; McKean, Jim A; Densmore, Alexander L; Dietrich, William E

    2014-11-01

    The size of a shallow landslide is a fundamental control on both its hazard and geomorphic importance. Existing models are either unable to predict landslide size or are computationally intensive such that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but simple enough to be applied over entire watersheds. It accounts for lateral resistance by representing the forces acting on each margin of potential landslides using earth pressure theory and by representing root reinforcement as an exponential function of soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are well constrained by field data. The model predicts failure for the observed scar geometry and finds that larger or smaller conformal shapes are more stable. Numerical experiments demonstrate that friction on the boundaries of a potential landslide increases considerably the magnitude of lateral reinforcement, relative to that due to root cohesion alone. We find that there is a critical depth in both cohesive and cohesionless soils, resulting in a minimum size for failure, which is consistent with observed size-frequency distributions. Furthermore, the differential resistance on the boundaries of a potential landslide is responsible for a critical landslide shape which is longer than it is wide, consistent with observed aspect ratios. Finally, our results show that minimum size increases as approximately the square of failure surface depth, consistent with observed landslide depth-area data.

  1. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition.

    Directory of Open Access Journals (Sweden)

    QingJun Song

    Full Text Available Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. The best combination of feature variables is selected automatically using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping arising in this real-world problem, an efficient minimum enclosing ball (MEB) algorithm combined with a support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition, where the data exhibit an inherently complex distribution. The proposed method is examined on UCI data sets and on the caving dataset, and compared with several recent SVM classifiers. We conduct experiments comparing accuracy, using the Friedman test to compare multiple classifiers over multiple UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The results on the caving dataset show better performance, which points to promising feature selection and multi-class recognition for coal-rock recognition.
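
    A common multi-class generalisation of the F-score (between-class scatter of the class means over the summed within-class variances) is sketched below on synthetic data; the paper's MF-Score may differ in detail, and the feature values shown are invented.

```python
import numpy as np

def multiclass_f_score(X, y):
    """Multi-class F-score per feature: between-class scatter of the class
    means divided by the summed within-class variances. A common
    generalisation of the binary F-score; the paper's MF-Score definition
    may differ in detail."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    overall_mean = X.mean(axis=0)
    numerator = np.zeros(X.shape[1])
    denominator = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        numerator += (Xc.mean(axis=0) - overall_mean) ** 2
        denominator += Xc.var(axis=0, ddof=1)
    return numerator / (denominator + 1e-12)

# Toy usage: 10 vibration/acoustic features, 3 caving classes (synthetic data).
rng = np.random.default_rng(2)
X = rng.normal(size=(90, 10))
y = rng.integers(0, 3, size=90)
ranking = np.argsort(multiclass_f_score(X, y))[::-1]
print("features ranked by MF-score:", ranking)
```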

  2. Observations of a potential size-effect in experimental determination of the hydraulic properties of fractures

    International Nuclear Information System (INIS)

    Witherspoon, P.A.; Amick, C.H.; Gale, J.E.; Iwai, K.

    1979-05-01

    In several recent investigations, experimental studies on the effect of normal stress on the hydraulic conductivity of a single fracture were made on three rock specimens ranging in cross-sectional area from 0.02 m² to over 1.0 m². At the maximum stress levels that could be attained (10 to 20 MPa), minimum values of the fracture hydraulic conductivity were not the same for each rock specimen. These minimum values increased with specimen size, indicating that the determination of fracture conductivity may be significantly influenced by a size effect. The implications of these results are important. Cores collected in the field are normally not larger than 0.15 m in diameter. However, the results of this work suggest that when this size core is used for laboratory investigations, the results may be nonconservative in that fracture permeabilities will be significantly lower than will be found in the field. 6 figures

  3. 41 CFR 50-202.2 - Minimum wage in all industries.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Minimum wage in all... Public Contracts PUBLIC CONTRACTS, DEPARTMENT OF LABOR 202-MINIMUM WAGE DETERMINATIONS Groups of Industries § 50-202.2 Minimum wage in all industries. In all industries, the minimum wage applicable to...

  4. A Variance Minimization Criterion to Feature Selection Using Laplacian Regularization.

    Science.gov (United States)

    He, Xiaofei; Ji, Ming; Zhang, Chiyuan; Bao, Hujun

    2011-10-01

    In many information processing tasks, one is often confronted with very high-dimensional data. Feature selection techniques are designed to find the meaningful feature subset of the original features which can facilitate clustering, classification, and retrieval. In this paper, we consider the feature selection problem in unsupervised learning scenarios, which is particularly difficult due to the absence of class labels that would guide the search for relevant information. Based on Laplacian regularized least squares, which finds a smooth function on the data manifold and minimizes the empirical loss, we propose two novel feature selection algorithms which aim to minimize the expected prediction error of the regularized regression model. Specifically, we select those features such that the size of the parameter covariance matrix of the regularized regression model is minimized. Motivated from experimental design, we use trace and determinant operators to measure the size of the covariance matrix. Efficient computational schemes are also introduced to solve the corresponding optimization problems. Extensive experimental results over various real-life data sets have demonstrated the superiority of the proposed algorithms.
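
    The sketch below illustrates the trace-based idea only, using a simple ridge-regularised surrogate rather than the paper's Laplacian-regularised regression: features are added greedily so that the trace of the resulting parameter covariance stays as small as possible. The regularisation weight and data are assumptions for illustration.

```python
import numpy as np

def greedy_trace_selection(X, n_select=5, lam=1.0):
    """Greedily select features that minimise trace((X_S^T X_S + lam*I)^-1),
    i.e. the size of the parameter covariance of a ridge-regularised model.
    A simplified surrogate for the paper's Laplacian-regularised criterion,
    shown only to illustrate the trace-based selection idea."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_select:
        best_f, best_score = None, np.inf
        for f in remaining:
            cols = selected + [f]
            Xs = X[:, cols]
            cov = np.linalg.inv(Xs.T @ Xs + lam * np.eye(len(cols)))
            score = np.trace(cov)
            if score < best_score:
                best_f, best_score = f, score
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Toy usage on unlabeled high-dimensional data (synthetic).
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))
print("selected features:", greedy_trace_selection(X, n_select=5))
```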

  5. Associations between spondyloarthritis features and magnetic resonance imaging findings

    DEFF Research Database (Denmark)

    Arnbak, Bodil; Jurik, Anne Grethe; Hørslev-Petersen, Kim

    2016-01-01

    were 1) to estimate the prevalence of magnetic resonance imaging (MRI) findings and clinical features included in the ASAS criteria for SpA and 2) to explore the associations between MRI findings and clinical features. METHODS: We included patients ages 18-40 years with persistent low back pain who had...... been referred to the Spine Centre of Southern Denmark. We collected information on clinical features (including HLA-B27 and high-sensitivity C-reactive protein) and MRI findings in the spine and sacroiliac (SI) joints. RESULTS: Of 1,020 included patients, 537 (53%) had at least 1 of the clinical...... according to the ASAS definition was present in 217 patients (21%). Of those 217 patients, 91 (42%) had the minimum amount of bone marrow edema required according to the ASAS definition (a low bone marrow edema score). The presence of HLA-B27, peripheral arthritis, a good response to NSAIDs, and preceding...

  6. Linear Model for Optimal Distributed Generation Size Predication

    Directory of Open Access Journals (Sweden)

    Ahmed Al Ameri

    2017-01-01

    Full Text Available This article presents a linear model for predicting the optimal size of distributed generation (DG) that minimizes power loss. The method is based on the strong coupling between active power and voltage angle, as well as between reactive power and voltage magnitude. The paper proposes a simplified method to calculate the total power losses in an electrical grid for different distributed generation sizes and locations. The method has been implemented and tested on several IEEE bus test systems. The results show that the proposed method is capable of predicting the approximate optimal size of DG when compared with precise calculations. The method, which linearizes a complex model, showed good results and can substantially reduce the required processing time. The acceptable accuracy with lower time and memory requirements can help the grid operator to assess power systems with large-scale integration of distributed generation.
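
    The sketch below only conveys the overall idea of scanning candidate DG sizes and locations for the loss minimum; the loss function here is a synthetic stand-in, not the paper's linearised power-flow model, and the bus numbers and sizes are invented.

```python
import numpy as np

def loss_mw(bus, p_dg_mw):
    """Toy stand-in for evaluating total network losses with a DG of size
    p_dg_mw at a given bus; in the paper this comes from the linearised model."""
    depth = {3: 14.0, 7: 18.0, 12: 22.0}[bus]   # synthetic loss-minimising sizes
    return 0.002 * (p_dg_mw - depth) ** 2 + 0.30 + 0.01 * bus

candidate_buses = [3, 7, 12]
candidate_sizes = np.linspace(0, 40, 81)        # MW

best = min((loss_mw(b, p), b, p) for b in candidate_buses for p in candidate_sizes)
print(f"minimum losses {best[0]:.3f} MW with {best[2]:.1f} MW of DG at bus {best[1]}")
```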

  7. 29 CFR 525.13 - Renewal of special minimum wage certificates.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Renewal of special minimum wage certificates. 525.13... minimum wage certificates. (a) Applications may be filed for renewal of special minimum wage certificates.... (c) Workers with disabilities may not continue to be paid special minimum wages after notice that an...

  8. Magnetotransport in granular LaMnO3+δ manganite with nano-sized particles

    International Nuclear Information System (INIS)

    Markovich, V; Jung, G; Wu, X; Gorodetsky, G; Fita, I; Wisniewski, A; Puzniak, R; Mogilyansky, D; Titelman, L; Vradman, L; Herskowitz, M; Froumin, N

    2008-01-01

    Transport and magnetic properties of compacted LaMnO3+δ manganite nanoparticles with an average size of 18 nm have been investigated in the temperature range 5-300 K. The nanoparticles exhibit a paramagnetic-to-ferromagnetic (FM) transition at the Curie temperature T_C ∼ 246 K. However, the spontaneous magnetization disappears at a higher temperature of about 270 K. It was found that at low temperatures the FM core occupies about 50% of the particle volume. The temperature dependence of the resistivity shows a metal-insulator transition and a low-temperature upturn below the resistivity minimum at T ∼ 50 K. The transport at low temperatures is controlled by the charging energy and spin-dependent tunnelling through grain boundaries. It has been found that the charging energy decreases monotonically with increasing magnetic field. The low-temperature I-V characteristics are well described by an indirect tunnelling model, while at higher temperatures both direct and resonant tunnelling dominate. The experimental features are discussed in the framework of a granular ferromagnet model

  9. An Empirical Analysis of the Relationship between Minimum Wage ...

    African Journals Online (AJOL)

    An Empirical Analysis of the Relationship between Minimum Wage, Investment and Economic Growth in Ghana. ... In addition, the ratio of public investment to tax revenue must increase as minimum wage increases since such complementary changes are more likely to lead to economic growth. Keywords: minimum wage ...

  10. 12 CFR 3.6 - Minimum capital ratios.

    Science.gov (United States)

    2010-01-01

    ... should have well-diversified risks, including no undue interest rate risk exposure; excellent control... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Minimum capital ratios. 3.6 Section 3.6 Banks and Banking COMPTROLLER OF THE CURRENCY, DEPARTMENT OF THE TREASURY MINIMUM CAPITAL RATIOS; ISSUANCE...

  11. 12 CFR 615.5330 - Minimum surplus ratios.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Minimum surplus ratios. 615.5330 Section 615.5330 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM FUNDING AND FISCAL AFFAIRS, LOAN POLICIES AND OPERATIONS, AND FUNDING OPERATIONS Surplus and Collateral Requirements § 615.5330 Minimum...

  12. CT features of renal epithelioid angiomyolipomas

    International Nuclear Information System (INIS)

    Hu Xiaoyun; Fang Xiangming; Hu Chunhong; Chen Hongwei; Cui Lei; Bao Jian; Yao Xuanjun

    2010-01-01

    Objective: To explore the CT and pathological features of renal epithelioid angiomyolipoma (EAML). Methods: Clinical data and CT images from ten cases of EAML proved by surgery and pathology were retrospectively analyzed. All cases underwent plain and contrast-enhanced CT scans. Results: CT features: higher pre-contrast density than the kidney, bulging from the kidney, absence of fat, markedly heterogeneous enhancement (quick wash-in and slow wash-out), large size without lobular sign, complete capsule with clear margin and mild necrotic area. Pathological features: diffuse sheets of epithelioid cells were found under microscopy, with immunohistochemical findings including positivity for HMB-45 and negativity for EMA. Conclusion: Some specific CT features, which correlate well with the pathological findings, provide helpful information in the primary diagnosis of EAML. (authors)

  13. Association between spondyloarthritis features and MRI findings in patients with persistent low back pain

    DEFF Research Database (Denmark)

    Arnbak, Bodil; Jurik, Anne Grethe; Hørslev-Petersen, Kim

    2014-01-01

    findings and the association between these two domains. Methods. The study sample included patients aged 18-40 years with persistent low back pain, referred to a public Spine Centre. The prevalence of and associations between clinical SpA features (incl. HLA-B27 and CRP) and MRI of the entire spine...... and sacroiliac joints (SIJ) were estimated and analysed. Results. Of the 1020 patients included in the study, 52% had ≥1 clinical SpA feature. The three most common SpA features were: inflammatory back pain, good response to NSAID and family disposition (15-17% each). SIJ bone marrow oedema (BMO) occurred in 21...... of the diagnostic utility of SpA features and the minimum requirements of BMO for defining sacroiliitis.

  14. Simple and cost-effective fabrication of size-tunable zinc oxide architectures by multiple size reduction technique

    Directory of Open Access Journals (Sweden)

    Hyeong-Ho Park, Xin Zhang, Seon-Yong Hwang, Sang Hyun Jung, Semin Kang, Hyun-Beom Shin, Ho Kwan Kang, Hyung-Ho Park, Ross H Hill and Chul Ki Ko

    2012-01-01

    Full Text Available We present a simple size reduction technique for fabricating 400 nm zinc oxide (ZnO architectures using a silicon master containing only microscale architectures. In this approach, the overall fabrication, from the master to the molds and the final ZnO architectures, features cost-effective UV photolithography, instead of electron beam lithography or deep-UV photolithography. A photosensitive Zn-containing sol–gel precursor was used to imprint architectures by direct UV-assisted nanoimprint lithography (UV-NIL. The resulting Zn-containing architectures were then converted to ZnO architectures with reduced feature sizes by thermal annealing at 400 °C for 1 h. The imprinted and annealed ZnO architectures were also used as new masters for the size reduction technique. ZnO pillars of 400 nm diameter were obtained from a silicon master with pillars of 1000 nm diameter by simply repeating the size reduction technique. The photosensitivity and contrast of the Zn-containing precursor were measured as 6.5 J cm−2 and 16.5, respectively. Interesting complex ZnO patterns, with both microscale pillars and nanoscale holes, were demonstrated by the combination of dose-controlled UV exposure and a two-step UV-NIL.

  15. Simple and cost-effective fabrication of size-tunable zinc oxide architectures by multiple size reduction technique

    International Nuclear Information System (INIS)

    Park, Hyeong-Ho; Hwang, Seon-Yong; Jung, Sang Hyun; Kang, Semin; Shin, Hyun-Beom; Kang, Ho Kwan; Ko, Chul Ki; Zhang Xin; Hill, Ross H; Park, Hyung-Ho

    2012-01-01

    We present a simple size reduction technique for fabricating 400 nm zinc oxide (ZnO) architectures using a silicon master containing only microscale architectures. In this approach, the overall fabrication, from the master to the molds and the final ZnO architectures, features cost-effective UV photolithography, instead of electron beam lithography or deep-UV photolithography. A photosensitive Zn-containing sol–gel precursor was used to imprint architectures by direct UV-assisted nanoimprint lithography (UV-NIL). The resulting Zn-containing architectures were then converted to ZnO architectures with reduced feature sizes by thermal annealing at 400 °C for 1 h. The imprinted and annealed ZnO architectures were also used as new masters for the size reduction technique. ZnO pillars of 400 nm diameter were obtained from a silicon master with pillars of 1000 nm diameter by simply repeating the size reduction technique. The photosensitivity and contrast of the Zn-containing precursor were measured as 6.5 J cm⁻² and 16.5, respectively. Interesting complex ZnO patterns, with both microscale pillars and nanoscale holes, were demonstrated by the combination of dose-controlled UV exposure and a two-step UV-NIL.

  16. Constraints on the adult-offspring size relationship in protists.

    Science.gov (United States)

    Caval-Holme, Franklin; Payne, Jonathan; Skotheim, Jan M

    2013-12-01

    The relationship between adult and offspring size is an important aspect of reproductive strategy. Although this filial relationship has been extensively examined in plants and animals, we currently lack comparable data for protists, whose strategies may differ due to the distinct ecological and physiological constraints on single-celled organisms. Here, we report measurements of adult and offspring sizes in 3888 species and subspecies of foraminifera, a class of large marine protists. Foraminifera exhibit a wide range of reproductive strategies; species of similar adult size may have offspring whose sizes vary 100-fold. Yet, a robust pattern emerges. The minimum (5th percentile), median, and maximum (95th percentile) offspring sizes exhibit a consistent pattern of increase with adult size independent of environmental change and taxonomic variation over the past 400 million years. The consistency of this pattern may arise from evolutionary optimization of the offspring size-fecundity trade-off and/or from cell-biological constraints that limit the range of reproductive strategies available to single-celled organisms. When compared with plants and animals, foraminifera extend the evidence that offspring size covaries with adult size across an additional five orders of magnitude in organism size. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  17. Design of intelligent comfort control system with human learning and minimum power control strategies

    International Nuclear Information System (INIS)

    Liang, J.; Du, R.

    2008-01-01

    This paper presents the design of an intelligent comfort control system by combining the human learning and minimum power control strategies for the heating, ventilating and air conditioning (HVAC) system. In the system, the predicted mean vote (PMV) is adopted as the control objective to improve indoor comfort level by considering six comfort related variables, whilst a direct neural network controller is designed to overcome the nonlinear feature of the PMV calculation for better performance. To achieve the highest comfort level for the specific user, a human learning strategy is designed to tune the user's comfort zone, and then, a VAV and minimum power control strategy is proposed to minimize the energy consumption further. In order to validate the system design, a series of computer simulations are performed based on a derived HVAC and thermal space model. The simulation results confirm the design of the intelligent comfort control system. In comparison to the conventional temperature controller, this system can provide a higher comfort level and better system performance, so it has great potential for HVAC applications in the future
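
    A very thin sketch of the control idea (driving a comfort index toward a possibly user-learned target by adjusting a setpoint) is given below. The pmv_estimate function is an invented placeholder, NOT Fanger's PMV equation, and the controller is a plain proportional loop rather than the paper's neural-network design.

```python
def pmv_estimate(t_air_c, rh_percent):
    """Placeholder comfort index: NOT Fanger's full PMV equation, just a
    crude linear stand-in so the control loop below can run."""
    return 0.3 * (t_air_c - 24.0) + 0.01 * (rh_percent - 50.0)

def comfort_setpoint_step(t_setpoint_c, rh_percent, pmv_target=0.0, gain=0.5):
    """One iteration of a simple comfort loop: nudge the temperature setpoint
    so the comfort index moves toward the (possibly user-learned) target.
    A sketch of the idea only; the paper uses a neural-network controller."""
    error = pmv_estimate(t_setpoint_c, rh_percent) - pmv_target
    return t_setpoint_c - gain * error

t = 26.0
for _ in range(10):
    t = comfort_setpoint_step(t, rh_percent=55.0)
print(f"converged setpoint ≈ {t:.2f} °C")
```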

  18. Autonomous observations of in vivo fluorescence and particle backscattering in an oceanic oxygen minimum zone.

    Science.gov (United States)

    Whitmire, A L; Letelier, R M; Villagrán, V; Ulloa, O

    2009-11-23

    The eastern South Pacific (ESP) oxygen minimum zone (OMZ) is a permanent hydrographic feature located directly off the coasts of northern Chile and Peru. The ESP OMZ reaches from coastal waters out to thousands of kilometers offshore, and can extend from the near surface to depths greater than 700 m. Oxygen minimum zones support unique microbial assemblages and play an important role in marine elemental cycles. We present results from two autonomous profiling floats that provide nine months of time-series data on temperature, salinity, dissolved oxygen, chlorophyll a, and particulate backscattering in the ESP OMZ. We observed consistently elevated backscattering signals within low-oxygen waters, which appear to be the result of enhanced microbial biomass in the OMZ intermediate waters. We also observed secondary chlorophyll a fluorescence maxima within low-oxygen waters when the upper limit of the OMZ penetrated the base of the photic zone. We suggest that autonomous profiling floats are useful tools for monitoring physical dynamics of OMZs and the microbial response to perturbations in these areas.

  19. Technological Aspects of Creating Large-size Optical Telescopes

    Directory of Open Access Journals (Sweden)

    V. V. Sychev

    2015-01-01

    Full Text Available The concept for creating a telescope depends, first of all, on the choice of the optical scheme, which must form optical radiation and images with minimum losses of energy and information, and on the choice of a design that meets requirements for strength, stiffness, and stabilization under real operating conditions. The concept of creating large-size telescopes therefore necessarily involves the use of adaptive optics methods and means. The level of technological capability to realize scientific and engineering ideas largely determines the success of large-size optical telescope development. All developers pursue the same aim: to increase the amount of information gathered by increasing the diameter of the telescope's main mirror. The article analyses the adaptive telescope designs developed in our country. Using the domestic ACT-25 telescope as an example, it considers the creation of large-size optical telescopes in terms of technological aspects and describes the features of the telescope creation concept that allow marginally attainable characteristics to be reached so as to maximize the amount of information. The article compares a wide range of large-size telescope projects and shows that the domestic project for the adaptive ACT-25 super-telescope surpasses its foreign counterparts, and that there is no sense in implementing the Euro50 (50 m) and OWL (100 m) projects. The material presented gives a clear understanding of the role of technological aspects in the development of such complicated optoelectronic complexes as large-size optical telescopes. The technological assessment criteria proposed in the article, namely the specific information content of the telescope, its specific mass, and its specific cost, allow weaknesses in a project to be revealed and reserves for further improvement of the telescope to be defined. The analysis of the results has shown that improvement of large-size optical telescopes in terms of their maximum

  20. 5 CFR 551.601 - Minimum age standards.

    Science.gov (United States)

    2010-01-01

    ... ADMINISTRATION UNDER THE FAIR LABOR STANDARDS ACT Child Labor § 551.601 Minimum age standards. (a) 16-year... subject to its child labor provisions, with certain exceptions not applicable here. (b) 18-year minimum... occupation found and declared by the Secretary of Labor to be particularly hazardous for the employment of...

  1. 12 CFR 932.8 - Minimum liquidity requirements.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Minimum liquidity requirements. 932.8 Section... CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL REQUIREMENTS § 932.8 Minimum liquidity requirements. In addition to meeting the deposit liquidity requirements contained in § 965.3 of this chapter, each Bank...

  2. Efficient Feature Selection and Classification of Protein Sequence Data in Bioinformatics

    Science.gov (United States)

    Faye, Ibrahima; Samir, Brahim Belhaouari; Md Said, Abas

    2014-01-01

    Bioinformatics has been an emerging area of research for the last three decades. The ultimate aims of bioinformatics are to store and manage biological data and to develop and analyze computational tools that enhance their understanding. The size of the data accumulated under various sequencing projects is increasing exponentially, which presents difficulties for experimental methods. To reduce the gap between newly sequenced proteins and proteins with known functions, many computational techniques involving classification and clustering algorithms have been proposed in the past. The classification of protein sequences into existing superfamilies is helpful in predicting the structure and function of large numbers of newly discovered proteins. Existing classification results are unsatisfactory due to the huge number of features obtained through various feature encoding methods. In this work, a statistical metric-based feature selection technique has been proposed in order to reduce the size of the extracted feature vector. The proposed method of protein classification shows significant improvement in terms of performance measures: accuracy, sensitivity, specificity, recall, F-measure, and so forth. PMID:25045727

  3. Sale effects of attention to feature advertisements : A Bayesian mediation analysis

    NARCIS (Netherlands)

    Zhang, J.; Wedel, M.; Pieters, R.

    2009-01-01

    There is much evidence that the presence of a feature advertisement can increase the sales and market share of the featured product. However, little is known about how feature ad characteristics (e.g., size, color, and location of the advertisement) affect the sales outcomes and how the effects take

  4. [Hospitals failing minimum volumes in 2004: reasons and consequences].

    Science.gov (United States)

    Geraedts, M; Kühnen, C; Cruppé, W de; Blum, K; Ohmann, C

    2008-02-01

    In 2004 Germany introduced annual minimum volumes nationwide on five surgical procedures: kidney, liver, stem cell transplantation, complex oesophageal, and pancreatic interventions. Hospitals that fail to reach the minimum volumes are no longer allowed to perform the respective procedures unless they raise one of eight legally accepted exceptions. The goal of our study was to investigate how many hospitals fell short of the minimum volumes in 2004, whether and how this was justified, and whether hospitals that failed the requirements experienced any consequences. We analysed data on meeting the minimum volume requirements in 2004 that all German hospitals were obliged to publish as part of their biannual structured quality reports. We performed telephone interviews: a) with all hospitals not achieving the minimum volumes for complex oesophageal, and pancreatic interventions, and b) with the national umbrella organisations of all German sickness funds. In 2004, one quarter of all German acute care hospitals (N=485) performed 23,128 procedures where minimum volumes applied. 197 hospitals (41%) did not meet at least one of the minimum volumes. These hospitals performed N=715 procedures (3.1%) where the minimum volumes were not met. In 43% of these cases the hospitals raised legally accepted exceptions. In 33% of the cases the hospitals argued using reasons that were not legally acknowledged. 69% of those hospitals that failed to achieve the minimum volumes for complex oesophageal and pancreatic interventions did not experience any consequences from the sickness funds. However, one third of those hospitals reported that the sickness funds addressed the issue and partially announced consequences for the future. The sickness funds' umbrella organisations stated that there were only sparse activities related to the minimum volumes and that neither uniform registrations nor uniform proceedings in case of infringements of the standards had been agreed upon. In spite of the

  5. Classification of resistance to passive motion using minimum probability of error criterion.

    Science.gov (United States)

    Chan, H C; Manry, M T; Kondraske, G V

    1987-01-01

    Neurologists diagnose many muscular and nerve disorders by classifying the resistance to passive motion of patients' limbs. Over the past several years, a computer-based instrument has been developed for automated measurement and parameterization of this resistance. In the device, a voluntarily relaxed lower extremity is moved at constant velocity by a motorized driver. The torque exerted on the extremity by the machine is sampled, along with the angle of the extremity. In this paper a computerized technique is described for classifying a patient's condition as 'Normal' or 'Parkinson disease' (rigidity), from the torque versus angle curve for the knee joint. A Legendre polynomial, fit to the curve, is used to calculate a set of eight normally distributed features of the curve. The minimum probability of error approach is used to classify the curve as being from a normal or Parkinson disease patient. Data collected from 44 different subjects was processed and the results were compared with an independent physician's subjective assessment of rigidity. There is agreement in better than 95% of the cases when all of the features are used.
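
    The sketch below mimics the pipeline on synthetic torque-angle curves: Legendre-polynomial coefficients as features, followed by a Gaussian Bayes classifier, which is a minimum-probability-of-error rule for normally distributed features. The curve model, stiffness values and polynomial degree are assumptions for illustration, not the paper's data or exact feature definitions.

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.naive_bayes import GaussianNB

def torque_curve_features(angle_rad, torque_nm, degree=7):
    """Fit a Legendre polynomial to a torque-versus-angle curve and use the
    eight coefficients (degree 7) as features. A sketch of the idea in the
    abstract; the paper's exact feature definitions may differ."""
    span = angle_rad.max() - angle_rad.min()
    x = 2.0 * (angle_rad - angle_rad.min()) / span - 1.0  # map to [-1, 1]
    return legendre.legfit(x, torque_nm, deg=degree)

# Synthetic stand-in curves: 20 per class (0 = normal, 1 = rigidity).
rng = np.random.default_rng(4)
angles = np.linspace(0.0, 1.5, 100)
features, labels = [], []
for label, stiffness in [(0, 2.0), (1, 6.0)]:
    for _ in range(20):
        torque = stiffness * angles + rng.normal(0.0, 0.3, angles.size)
        features.append(torque_curve_features(angles, torque))
        labels.append(label)
X, y = np.array(features), np.array(labels)

# With (approximately) normally distributed features, a Gaussian Bayes
# classifier implements a minimum-probability-of-error decision rule.
clf = GaussianNB().fit(X, y)
print("training accuracy:", clf.score(X, y))
```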

  6. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...
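
    For reference, the standard population characterisation of the minimum-variance frontier, whose sample analogue the paper studies, can be written as below; the notation here is the textbook one and may differ from the paper's.

```latex
% With mean vector \mu, covariance \Sigma and a vector of ones \mathbf{1}, define
A = \mathbf{1}^{\top}\Sigma^{-1}\mathbf{1},\quad
B = \mathbf{1}^{\top}\Sigma^{-1}\mu,\quad
C = \mu^{\top}\Sigma^{-1}\mu,\quad
D = AC - B^{2}.
% Frontier variance at target mean m, and the global minimum-variance portfolio:
\sigma^{2}(m) \;=\; \frac{A m^{2} - 2 B m + C}{D},
\qquad
w_{\mathrm{gmv}} \;=\; \frac{\Sigma^{-1}\mathbf{1}}{\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1}}.
```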

  7. The perceptual processing capacity of summary statistics between and within feature dimensions

    Science.gov (United States)

    Attarha, Mouna; Moore, Cathleen M.

    2015-01-01

    The simultaneous–sequential method was used to test the processing capacity of statistical summary representations both within and between feature dimensions. Sixteen gratings varied with respect to their size and orientation. In Experiment 1, the gratings were equally divided into four separate smaller sets, one of which had a mean size that was larger or smaller than the other three sets, and one of which had a mean orientation that was tilted more leftward or rightward. The task was to report the mean size and orientation of the oddball sets. This therefore required four summary representations for size and another four for orientation. The sets were presented at the same time in the simultaneous condition or across two temporal frames in the sequential condition. Experiment 1 showed evidence of a sequential advantage, suggesting that the system may be limited with respect to establishing multiple within-feature summaries. Experiment 2 eliminated the possibility that some aspect of the task, other than averaging, was contributing to this observed limitation. In Experiment 3, the same 16 gratings appeared as one large superset, and therefore the task only required one summary representation for size and another one for orientation. Equal simultaneous–sequential performance indicated that between-feature summaries are capacity free. These findings challenge the view that within-feature summaries drive a global sense of visual continuity across areas of the peripheral visual field, and suggest a shift in focus to seeking an understanding of how between-feature summaries in one area of the environment control behavior. PMID:26360153

  8. Effect of feature size on dielectric nonlinearity of patterned PbZr0.52Ti0.48O3 films

    International Nuclear Information System (INIS)

    Yang, J. I.; Trolier-McKinstry, S.; Polcawich, R. G.; Sanchez, L. M.

    2015-01-01

    Lead zirconate titanate, PZT (52/48), thin films with a PbTiO3 seed layer were patterned into features of different widths, including various sizes of squares and 100 μm, 50 μm, and 10 μm serpentine designs, using argon ion beam milling. Patterns with different surface area/perimeter ratios were used to study the relative importance of damage produced by the patterning. It was found that as the pattern dimensions decreased, the remanent polarization increased, presumably because the dipoles near the feature perimeter are not as severely clamped to the substrate. This investigation is in agreement with a model in which clamping produces deep wells, which do not allow some fraction of the spontaneous polarization to switch at high field. The domain wall mobility at modest electric fields was investigated using the Rayleigh law. Both the reversible (ε_init) and irreversible (α) Rayleigh coefficients increased with decreasing serpentine line width for de-aged samples. For measurements made immediately after annealing, ε_init of the 500 μm square patterns was 1510 ± 13; with decreasing serpentine line width, ε_init rose from 1520 ± 10 for the 100 μm serpentine to 1568 ± 23 for the 10 μm serpentine. The irreversible parameter α for the square patterns was 39.4 ± 3.2 cm/kV, and it increased to 44.1 ± 3.2 cm/kV as the lateral dimension was reduced. However, it was found that as the width of the serpentine features decreased, the aging rate rose. These observations are consistent with a model in which sidewall damage produces shallow wells that lower the Rayleigh constants of aged samples at small fields. These shallow wells can be overcome by the large fields used to measure the remanent polarization and the large unipolar electric fields typically used to drive thin film piezoelectric actuators
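
    For context, the standard Rayleigh relations behind the reversible and irreversible coefficients quoted above are sketched below; the paper's fitting conventions may differ.

```latex
% Rayleigh relations at sub-switching field amplitudes E_0 (standard form):
\varepsilon_r(E_0) \;=\; \varepsilon_{\mathrm{init}} + \alpha\,E_0,
\qquad
P(E) \;=\; \varepsilon_0\!\left[(\varepsilon_{\mathrm{init}} + \alpha E_0)\,E
      \;\pm\; \frac{\alpha}{2}\bigl(E_0^{2} - E^{2}\bigr)\right],
% where \varepsilon_{\mathrm{init}} is the reversible and \alpha the
% irreversible Rayleigh coefficient.
```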

  9. 24 CFR 891.145 - Owner deposit (Minimum Capital Investment).

    Science.gov (United States)

    2010-04-01

    ... General Program Requirements § 891.145 Owner deposit (Minimum Capital Investment). As a Minimum Capital... Investment shall be one-half of one percent (0.5%) of the HUD-approved capital advance, not to exceed $25,000. ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Owner deposit (Minimum Capital...

  10. Statistical-techniques-based computer-aided diagnosis (CAD) using texture feature analysis: application in computed tomography (CT) imaging to fatty liver disease

    International Nuclear Information System (INIS)

    Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae

    2012-01-01

    This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 x 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver (p < 0.05). Furthermore, the F-value, which was used as a scale for the difference in recognition rates, was highest for the average gray level, relatively high for the skewness and the entropy, and relatively low for the uniformity, the relative smoothness and the average contrast. The recognition rate for a fatty liver followed the same ordering as the F-value, ranging from 100% (average gray level) at the maximum to 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for automatic detection and computer-aided diagnosis (CAD) using texture feature values. Nevertheless, further study on various diseases and singular diseases will be needed in the future.
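
    The six parameters listed above are standard first-order (histogram-based) texture statistics; a minimal sketch of how they can be computed from a region of analysis is given below. The exact definitions used in the paper (for example the normalisation of smoothness) may differ slightly, and the input data here are synthetic.

```python
import numpy as np

def first_order_texture_features(roi, bins=256):
    """Six histogram-based texture descriptors of a grayscale ROI, in the
    spirit of the parameters listed in the abstract (standard first-order
    statistics; the paper's exact definitions may differ slightly)."""
    roi = np.asarray(roi, dtype=float).ravel()
    hist, edges = np.histogram(roi, bins=bins)
    p = hist / hist.sum()                    # normalised histogram
    levels = 0.5 * (edges[:-1] + edges[1:])  # bin centres

    mean = np.sum(levels * p)                # average gray level
    variance = np.sum((levels - mean) ** 2 * p)
    contrast = np.sqrt(variance)             # average contrast (std. dev.)
    smoothness = 1.0 - 1.0 / (1.0 + variance)
    skewness = np.sum((levels - mean) ** 3 * p) / (contrast ** 3 + 1e-12)
    uniformity = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return dict(mean=mean, contrast=contrast, smoothness=smoothness,
                skewness=skewness, uniformity=uniformity, entropy=entropy)

# Toy usage on a synthetic 50 x 50 region of analysis with CT-number-like values.
rng = np.random.default_rng(5)
roa = rng.normal(60.0, 10.0, size=(50, 50))
print(first_order_texture_features(roa))
```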

  11. Minimum Wages and the Distribution of Family Incomes

    OpenAIRE

    Dube, Arindrajit

    2017-01-01

    Using the March Current Population Survey data from 1984 to 2013, I provide a comprehensive evaluation of how minimum wage policies influence the distribution of family incomes. I find robust evidence that higher minimum wages shift down the cumulative distribution of family incomes at the bottom, reducing the share of non-elderly individuals with incomes below 50, 75, 100, and 125 percent of the federal poverty threshold. The long run (3 or more years) minimum wage elasticity of the non-elde...

  12. 7 CFR 1610.5 - Minimum Bank loan.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Minimum Bank loan. 1610.5 Section 1610.5 Agriculture Regulations of the Department of Agriculture (Continued) RURAL TELEPHONE BANK, DEPARTMENT OF AGRICULTURE LOAN POLICIES § 1610.5 Minimum Bank loan. A Bank loan will not be made unless the applicant qualifies for a Bank...

  13. Minimum Wage Effects in the Longer Run

    Science.gov (United States)

    Neumark, David; Nizalova, Olena

    2007-01-01

    Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…

  14. 29 CFR 783.43 - Computation of seaman's minimum wage.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Computation of seaman's minimum wage. 783.43 Section 783.43...'s minimum wage. Section 6(b) requires, under paragraph (2) of the subsection, that an employee...'s minimum wage requirements by reason of the 1961 Amendments (see §§ 783.23 and 783.26). Although...

  15. 12 CFR 931.3 - Minimum investment in capital stock.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Minimum investment in capital stock. 931.3... CAPITAL STANDARDS FEDERAL HOME LOAN BANK CAPITAL STOCK § 931.3 Minimum investment in capital stock. (a) A Bank shall require each member to maintain a minimum investment in the capital stock of the Bank, both...

  16. Minimum-Cost Reachability for Priced Timed Automata

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Fehnker, Ansgar; Hune, Thomas Seidelin

    2001-01-01

    This paper introduces the model of linearly priced timed automata as an extension of timed automata, with prices on both transitions and locations. For this model we consider the minimum-cost reachability problem: i.e. given a linearly priced timed automaton and a target state, determine...... the minimum cost of executions from the initial state to the target state. This problem generalizes the minimum-time reachability problem for ordinary timed automata. We prove decidability of this problem by offering an algorithmic solution, which is based on a combination of branch-and-bound techniques...... and a new notion of priced regions. The latter allows symbolic representation and manipulation of reachable states together with the cost of reaching them....

  17. Is the minimum enough? Affordability of a nutritious diet for minimum wage earners in Nova Scotia (2002-2012).

    Science.gov (United States)

    Newell, Felicia D; Williams, Patricia L; Watt, Cynthia G

    2014-05-09

    This paper aims to assess the affordability of a nutritious diet for households earning minimum wage in Nova Scotia (NS) from 2002 to 2012 using an economic simulation that includes food costing and secondary data. The cost of the National Nutritious Food Basket (NNFB) was assessed with a stratified, random sample of grocery stores in NS during six time periods: 2002, 2004/2005, 2007, 2008, 2010 and 2012. The NNFB's cost was factored into affordability scenarios for three different household types relying on minimum wage earnings: a household of four; a lone mother with three children; and a lone man. Essential monthly living expenses were deducted from monthly net incomes using methods that were standardized from 2002 to 2012 to determine whether adequate funds remained to purchase a basic nutritious diet across the six time periods. A 79% increase to the minimum wage in NS has resulted in a decrease in the potential deficit faced by each household scenario in the period examined. However, the household of four and the lone mother with three children would still face monthly deficits ($44.89 and $496.77, respectively, in 2012) if they were to purchase a nutritiously sufficient diet. As a social determinant of health, risk of food insecurity is a critical public health issue for low wage earners. While it is essential to increase the minimum wage in the short term, adequately addressing income adequacy in NS and elsewhere requires a shift in thinking from a focus on minimum wage towards more comprehensive policies ensuring an adequate livable income for everyone.

  18. Employment Effects of Minimum and Subminimum Wages. Recent Evidence.

    Science.gov (United States)

    Neumark, David

    Using a specially constructed panel data set on state minimum wage laws and labor market conditions, Neumark and Wascher (1992) presented evidence that countered the claim that minimum wages could be raised with no cost to employment. They concluded that estimates indicating that minimum wages reduced employment on the order of 1-2 percent for a…

  19. Minimum Wage Effects on Educational Enrollments in New Zealand

    Science.gov (United States)

    Pacheco, Gail A.; Cruickshank, Amy A.

    2007-01-01

    This paper empirically examines the impact of minimum wages on educational enrollments in New Zealand. A significant reform to the youth minimum wage since 2000 has resulted in some age groups undergoing a 91% rise in their real minimum wage over the last 10 years. Three panel least squares multivariate models are estimated from a national sample…

  20. Zero forcing parameters and minimum rank problems

    NARCIS (Netherlands)

    Barioli, F.; Barrett, W.; Fallat, S.M.; Hall, H.T.; Hogben, L.; Shader, B.L.; Driessche, van den P.; Holst, van der H.

    2010-01-01

    The zero forcing number Z(G), which is the minimum number of vertices in a zero forcing set of a graph G, is used to study the maximum nullity/minimum rank of the family of symmetric matrices described by G. It is shown that for a connected graph of order at least two, no vertex is in every zero

  1. Minimum bias measurement at 13 TeV

    CERN Document Server

    Orlando, Nicola; The ATLAS collaboration

    2017-01-01

    The modelling of minimum bias (MB) events is a crucial ingredient for learning about the description of soft QCD processes and for simulating the environment at the LHC with many concurrent pp interactions (pile-up). We summarise the ATLAS minimum bias measurements with proton-proton collisions at a centre-of-mass energy of 13 TeV at the Large Hadron Collider.

  2. An Improved Algorithm Based on Minimum Spanning Tree for Multi-scale Segmentation of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    LI Hui

    2015-07-01

    Full Text Available As the basis of object-oriented information extraction from remote sensing imagery, image segmentation using multiple image features, exploiting spatial context information, and following a multi-scale approach is currently a research focus. Using an optimization approach from graph theory, an improved multi-scale image segmentation method is proposed. In this method, the image is first filtered with a coherence-enhancing anisotropic diffusion filter and then segmented with a minimum spanning tree approach, and the resulting segments are merged with reference to a minimum heterogeneity criterion. The heterogeneity criterion is defined as a function of the spectral characteristics and shape parameters of the segments. The purpose of the merging step is to realize multi-scale image segmentation. Tested on two images, the proposed method was compared visually and quantitatively with the segmentation method employed in the eCognition software. The results show that the proposed method is effective and outperforms the latter in areas with subtle spectral differences.
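
    A minimal sketch of the core minimum-spanning-tree segmentation step is given below for a single-band image; it omits the diffusion filtering and the heterogeneity-based merging described above, and the threshold and test image are invented for illustration.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree

def mst_segment(image, cut_threshold):
    """MST-based segmentation sketch: build a 4-connected grid graph weighted
    by intensity difference, take its minimum spanning tree, drop MST edges
    heavier than the threshold and label the remaining connected components."""
    h, w = image.shape
    flat = image.ravel()
    idx = np.arange(h * w).reshape(h, w)
    rows, cols = [], []
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        rows.append(a.ravel())
        cols.append(b.ravel())
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    weights = np.abs(flat[rows] - flat[cols]) + 1e-3  # keep zero diffs as edges
    graph = coo_matrix((weights, (rows, cols)), shape=(h * w, h * w))

    mst = minimum_spanning_tree(graph).tocoo()
    keep = mst.data <= cut_threshold
    forest = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=mst.shape)
    n_segments, labels = connected_components(forest, directed=False)
    return labels.reshape(h, w), n_segments

# Toy usage: two flat regions separated by a step edge give two segments.
img = np.zeros((20, 20))
img[:, 10:] = 100.0
labels, n = mst_segment(img, cut_threshold=10.0)
print("number of segments:", n)
```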

  3. Minimum airflow reset of single-duct VAV terminal boxes

    Science.gov (United States)

    Cho, Young-Hum

    Single-duct variable air volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve thermal comfort and indoor air quality (IAQ) while at the same time lowering overall energy costs. In principle, this minimum rate should be calculated from the minimum ventilation requirement based on ASHRAE Standard 62.1 and the maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum airflow rate is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the control algorithm of advanced VAV terminal box controllers without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort, and reducing overall energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and
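
    The sketch below illustrates the basic trade-off named above: a candidate minimum-airflow set point taken as the larger of the ventilation requirement and the airflow needed to meet the heating load. It is a simplified assumption-laden stand-in (SI units, fixed air properties, illustrative numbers), not the dissertation's reset algorithm.

```python
def terminal_box_min_airflow(vent_flow_m3s, heating_load_w,
                             t_discharge_c, t_zone_c,
                             rho=1.2, cp=1006.0):
    """Candidate minimum-airflow set point for a single-duct VAV box: the
    larger of the outdoor-air ventilation requirement and the airflow needed
    to meet the zone heating load at the given discharge temperature.
    A simplified sketch; real reset logic also handles ventilation
    effectiveness, discharge-temperature limits, sensor error, etc."""
    dt = max(t_discharge_c - t_zone_c, 0.1)           # avoid division by zero
    heating_flow = heating_load_w / (rho * cp * dt)   # m^3/s
    return max(vent_flow_m3s, heating_flow)

# Example (illustrative numbers): 60 L/s ventilation requirement, 1.5 kW design
# heating load, 32 degC discharge air into a 21 degC zone.
print(f"{terminal_box_min_airflow(0.060, 1500.0, 32.0, 21.0):.3f} m^3/s")
```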

  4. A model-based approach to crack sizing with ultrasonic arrays.

    Science.gov (United States)

    Tant, Katherine M M; Mulholland, Anthony J; Gachagan, Anthony

    2015-05-01

    Ultrasonic phased array systems have become increasingly popular in the last 10 years as tools for flaw detection and characterization within the nondestructive testing industry. The existence and location of flaws can often be deduced via images generated from the data captured by these arrays. A factor common to these imaging techniques is the subjective thresholding required to estimate the size of the flaw. This paper puts forward an objective approach which employs a mathematical model. By exploiting the relationship between the width of the central lobe of the scattering matrix and the crack size, an analytical expression for the crack length is reached via the Born approximation. Conclusions are then drawn on the minimum resolvable crack length of the method and it is thus shown that the formula holds for subwavelength defects. An analytical expression for the error that arises from the discrete nature of the array is then derived and it is observed that the method becomes less sensitive to the discretization of the array as the distance between the flaw and array increases. The methodology is then extended and tested on experimental data collected from welded austenitic plates containing a lack-of-fusion crack of 6 mm length. An objective sizing matrix (OSM) is produced by assessing the similarity between the scattering matrices arising from experimentally collected data with those arising from the Born approximation over a range of crack lengths and frequencies. Initially, the global minimum of the OSM is taken as the objective estimation of the crack size, giving a measurement of 7 mm. This is improved upon by the adoption of a multifrequency averaging approach, with which an improved crack size estimation of 6.4 mm is obtained.

  5. Minimum wall pressure coefficient of orifice plate energy dissipater

    Directory of Open Access Journals (Sweden)

    Wan-zheng Ai

    2015-01-01

    Full Text Available Orifice plate energy dissipaters have been successfully used in large-scale hydropower projects due to their simple structure, convenient construction procedure, and high energy dissipation ratio. The minimum wall pressure coefficient of an orifice plate can indirectly reflect its cavitation characteristics: the lower the minimum wall pressure coefficient is, the better the ability of the orifice plate to resist cavitation damage is. Thus, it is important to study the minimum wall pressure coefficient of the orifice plate. In this study, this coefficient and related parameters, such as the contraction ratio, defined as the ratio of the orifice plate diameter to the flood-discharging tunnel diameter; the relative thickness, defined as the ratio of the orifice plate thickness to the tunnel diameter; and the Reynolds number of the flow through the orifice plate, were theoretically analyzed, and their relationships were obtained through physical model experiments. It can be concluded that the minimum wall pressure coefficient is mainly dominated by the contraction ratio and relative thickness. The lower the contraction ratio and relative thickness are, the larger the minimum wall pressure coefficient is. The effects of the Reynolds number on the minimum wall pressure coefficient can be neglected when it is larger than 10^5. An empirical expression was presented to calculate the minimum wall pressure coefficient in this study.

  6. An Algorithm Based on the Self-Organized Maps for the Classification of Facial Features

    Directory of Open Access Journals (Sweden)

    Gheorghe Gîlcă

    2015-12-01

    Full Text Available This paper deals with an algorithm based on self-organizing map (SOM) networks for classifying facial features. The proposed algorithm can categorize the facial features defined by the input variables eyebrow, mouth and eyelids into a map of their groupings. The group map is based on calculating the distance between each input vector and each neuron in the output layer, with the neuron at the minimum distance declared the winner. The network structure consists of two levels: the first level contains three input vectors, each having forty-one values, while the second level contains the SOM competitive network, which consists of 100 neurons. The proposed system can classify facial features quickly and easily using the proposed SOM-based algorithm.
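
    A tiny generic SOM sketch is given below to illustrate the minimum-distance winner rule mentioned above; the neuron layout, learning rate, neighbourhood and training data are assumptions for illustration, not the paper's 100-neuron network or schedule.

```python
import numpy as np

class MiniSOM1D:
    """Minimal self-organising map (flat row of neurons) for grouping
    41-value facial-feature vectors; a generic illustration only."""
    def __init__(self, n_neurons=100, n_inputs=41, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(n_neurons, n_inputs))

    def winner(self, x):
        # The winning neuron is the one at minimum Euclidean distance.
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def train(self, data, epochs=20, lr=0.3, radius=3):
        for _ in range(epochs):
            for x in data:
                w = self.winner(x)
                lo, hi = max(0, w - radius), min(len(self.weights), w + radius + 1)
                for j in range(lo, hi):
                    h = np.exp(-((j - w) ** 2) / (2 * radius ** 2))
                    self.weights[j] += lr * h * (x - self.weights[j])

# Toy usage: cluster synthetic 41-value eyebrow/mouth/eyelid descriptors.
rng = np.random.default_rng(6)
data = rng.normal(size=(300, 41))
som = MiniSOM1D()
som.train(data)
print("winner neuron of first sample:", som.winner(data[0]))
```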

  7. Quantitative radiomics: impact of stochastic effects on textural feature analysis implies the need for standards.

    Science.gov (United States)

    Nyflot, Matthew J; Yang, Fei; Byrd, Darrin; Bowen, Stephen R; Sandison, George A; Kinahan, Paul E

    2015-10-01

    Image heterogeneity metrics such as textural features are an active area of research for evaluating clinical outcomes with positron emission tomography (PET) imaging and other modalities. However, the effects of stochastic image acquisition noise on these metrics are poorly understood. We performed a simulation study by generating 50 statistically independent PET images of the NEMA IQ phantom with realistic noise and resolution properties. Heterogeneity metrics based on gray-level intensity histograms, co-occurrence matrices, neighborhood difference matrices, and zone size matrices were evaluated within regions of interest surrounding the lesions. The impact of stochastic variability was evaluated with percent difference from the mean of the 50 realizations, coefficient of variation and estimated sample size for clinical trials. Additionally, sensitivity studies were performed to simulate the effects of patient size and image reconstruction method on the quantitative performance of these metrics. Complex trends in variability were revealed as a function of textural feature, lesion size, patient size, and reconstruction parameters. In conclusion, the sensitivity of PET textural features to normal stochastic image variation and imaging parameters can be large and is feature-dependent. Standards are needed to ensure that prospective studies that incorporate textural features are properly designed to measure true effects that may impact clinical outcomes.
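
    A minimal sketch of the variability metrics mentioned above (percent difference from the mean of the realizations and coefficient of variation), assuming the textural feature has already been computed on each independent noise realization; the clinical-trial sample-size estimate additionally depends on the targeted effect size and is omitted here.

```python
import numpy as np

def stochastic_variability(feature_values):
    """Summarize run-to-run variability of a textural feature computed on
    statistically independent image realizations (e.g. 50 PET noise
    realizations of the same phantom lesion)."""
    values = np.asarray(feature_values, dtype=float)
    mean = values.mean()
    percent_diff = 100.0 * (values - mean) / mean   # per-realization deviation
    cov = values.std(ddof=1) / mean                 # coefficient of variation
    return percent_diff, cov
```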

  8. Size Matters: Penis Size and Sexual Position in Gay Porn Profiles.

    Science.gov (United States)

    Brennan, Joseph

    2018-01-01

    This article combines qualitative and quantitative textual approaches to the representation of penis size and sexual position of performers in 10 of the most visited gay pornography Web sites currently in operation. Specifically, in excess of 6,900 performer profiles sourced from 10 commercial Web sites are analyzed. Textual analysis of the profile descriptions is combined with a quantitative representation of disclosed penis size and sexual position, which is presented visually by two figures. The figures confirm that these sites generally market themselves as featuring penises that are extraordinarily large and find a sample-wide correlation between smaller penis sizes (5-6.5 inches) and receptive sexual acts (bottoming), and larger (8.5-13 inches) with penetrative acts (topping). These observations are supported through the qualitative textual readings of how the performers are described on these popular sites, revealing the narratives and marketing strategies that shape the construction of popular porn brands, performers, and profitable fantasies.

  9. 76 FR 15368 - Minimum Security Devices and Procedures

    Science.gov (United States)

    2011-03-21

    ... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Minimum Security Devices and Procedures... concerning the following information collection. Title of Proposal: Minimum Security Devices and Procedures... security devices and procedures to discourage robberies, burglaries, and larcenies, and to assist in the...

  10. An application of the 'end-point' method to the minimum critical mass problem in two group transport theory

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2003-01-01

    A two group integral equation derived using transport theory, which describes the fuel distribution necessary for a flat thermal flux and minimum critical mass, is solved by the classical end-point method. This method has a number of advantages and in particular highlights the changing behaviour of the fissile mass distribution function in the neighbourhood of the core-reflector interface. We also show how the reflector thermal flux behaves and explain the origin of the maximum which arises when the critical size is less than that corresponding to minimum critical mass. A comparison is made with diffusion theory and the necessary and somewhat artificial presence of surface delta functions in the fuel distribution is shown to be analogous to the edge transients that arise naturally in transport theory

  11. Fleet Sizing of Automated Material Handling Using Simulation Approach

    Science.gov (United States)

    Wibisono, Radinal; Ai, The Jin; Ratna Yuniartha, Deny

    2018-03-01

    Automated material handling tends to be chosen over human power for material handling on the production floor of manufacturing companies. One critical issue in implementing automated material handling is the design phase, which must ensure that material handling is efficient in terms of cost. Fleet sizing is one of the topics in this design phase. In this research, a simulation approach is used to solve the fleet sizing problem in flow shop production and to ensure an optimum situation, meaning minimum flow time and maximum capacity on the production floor. A simulation approach is used because the flow shop can be modelled as a queuing network and the inter-arrival times do not follow an exponential distribution. The contribution of this research is therefore the solution of a multi-objective fleet sizing problem in flow shop production using a simulation approach with ARENA software.
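
    A rough, simulation-style sketch of the fleet-sizing idea, not the authors' ARENA model: transport jobs with non-exponential (here lognormal, an assumed choice) inter-arrival times are served first-come-first-served by the first free vehicle, and the mean flow time is evaluated for increasing fleet sizes.

```python
import numpy as np

def mean_flow_time(n_vehicles, inter_arrival, service, rng, n_jobs=2000):
    """Mean flow time (completion minus arrival) for a FCFS vehicle fleet."""
    arrivals = np.cumsum(rng.lognormal(*inter_arrival, size=n_jobs))
    services = rng.lognormal(*service, size=n_jobs)
    free_at = np.zeros(n_vehicles)            # when each vehicle becomes idle
    flow = []
    for t, s in zip(arrivals, services):
        k = int(np.argmin(free_at))           # earliest available vehicle
        start = max(t, free_at[k])
        free_at[k] = start + s
        flow.append(free_at[k] - t)
    return float(np.mean(flow))

# pick the smallest fleet whose mean flow time meets a target (illustrative numbers)
rng = np.random.default_rng(1)
for n in range(1, 8):
    print(n, round(mean_flow_time(n, inter_arrival=(1.0, 0.5), service=(1.2, 0.4), rng=rng), 2))
```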

  12. 76 FR 30243 - Minimum Security Devices and Procedures

    Science.gov (United States)

    2011-05-24

    ... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Minimum Security Devices and Procedures.... Title of Proposal: Minimum Security Devices and Procedures. OMB Number: 1550-0062. Form Number: N/A... respect to the installation, maintenance, and operation of security devices and procedures to discourage...

  13. Optimal capacity and buffer size estimation under Generalized Markov Fluids Models and QoS parameters

    International Nuclear Information System (INIS)

    Bavio, José; Marrón, Beatriz

    2014-01-01

    Quality of service (QoS) for internet traffic management requires good traffic models and good estimation of shared network resources. A network link processes all traffic and is designed with a certain capacity C and buffer size B. A Generalized Markov Fluid model (GMFM), introduced by Marrón (2011), is assumed for the sources because it describes the traffic in a versatile way, allows estimation based on traffic traces, and also permits consistent effective bandwidth estimation. QoS, interpreted as the buffer overflow probability, can be estimated for a GMFM through the effective bandwidth estimate and by solving the optimization problem presented in Courcoubetis (2002), the so-called inf-sup formulas. In this work we implement a code to solve the inf-sup problem and related optimizations, which allows us to do traffic engineering on links of data networks and to calculate both the minimum capacity required when the QoS and buffer size are given and the minimum buffer size required when the QoS and capacity are given.
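
    The inf-sup optimization can be approximated by a grid search once an effective-bandwidth estimate α(s, t) is available. The sketch below assumes the standard many-sources form log P(overflow) ≈ −inf_t sup_s [s(B + Ct) − s·t·α(s, t)]; the exact formulation used in the cited work may differ in detail.

```python
import numpy as np

def overflow_decay_rate(alpha, C, B, s_grid, t_grid):
    """Evaluate I = inf_t sup_s [ s*(B + C*t) - s*t*alpha(s, t) ] on a grid
    (assumed form of the inf-sup formula; grid search replaces the analytic
    optimization)."""
    best_over_t = []
    for t in t_grid:
        vals = [s * (B + C * t) - s * t * alpha(s, t) for s in s_grid]
        best_over_t.append(max(vals))
    return min(best_over_t)

def minimum_capacity(alpha, B, qos, s_grid, t_grid, c_grid):
    """Smallest capacity C on c_grid with P(overflow) <= qos,
    i.e. decay rate >= -log(qos)."""
    target = -np.log(qos)
    for C in c_grid:
        if overflow_decay_rate(alpha, C, B, s_grid, t_grid) >= target:
            return C
    return None
```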

  14. Does increasing the minimum wage reduce poverty in developing countries?

    OpenAIRE

    Gindling, T. H.

    2014-01-01

    Do minimum wage policies reduce poverty in developing countries? It depends. Raising the minimum wage could increase or decrease poverty, depending on labor market characteristics. Minimum wages target formal sector workers—a minority of workers in most developing countries—many of whom do not live in poor households. Whether raising minimum wages reduces poverty depends not only on whether formal sector workers lose jobs as a result, but also on whether low-wage workers live in poor househol...

  15. Confocal pore size measurement based on super-resolution image restoration.

    Science.gov (United States)

    Liu, Dali; Wang, Yun; Qiu, Lirong; Mao, Xinyue; Zhao, Weiqian

    2014-09-01

    A confocal pore size measurement based on super-resolution image restoration is proposed to obtain a fast and accurate measurement for submicrometer pore size of nuclear track-etched membranes (NTEMs). This method facilitates the online inspection of the pore size evolution during etching. Combining confocal microscopy with super-resolution image restoration significantly improves the lateral resolution of the NTEM image, yields a reasonable circle edge-setting criterion of 0.2408, and achieves precise pore edge detection. Theoretical analysis shows that the minimum measuring diameter can reach 0.19 μm, and the root mean square of the residuals is only 1.4 nm. Edge response simulation and experiment reveal that the edge response of the proposed method is better than 80 nm. The NTEM pore size measurement results obtained by the proposed method agree well with that obtained by scanning electron microscopy.

  16. A stochastic simulation model for reliable PV system sizing providing for solar radiation fluctuations

    International Nuclear Information System (INIS)

    Kaplani, E.; Kaplanis, S.

    2012-01-01

    Highlights: ► Solar radiation data for European cities follow the Extreme Value or Weibull distribution. ► Simulation model for the sizing of SAPV systems based on energy balance and stochastic analysis. ► Simulation of PV Generator-Loads-Battery Storage System performance for all months. ► Minimum peak power and battery capacity required for reliable SAPV sizing for various European cities. ► Peak power and battery capacity reduced by more than 30% for operation at a 95% success rate. -- Abstract: The large fluctuations observed in the daily solar radiation profiles strongly affect the reliability of PV system sizing. Increasing the reliability of the PV system requires higher installed peak power (P_m) and larger battery storage capacity (C_L). This leads to increased costs and makes PV technology less competitive. This research paper presents a new stochastic simulation model for stand-alone PV systems, developed to determine the minimum installed P_m and C_L for the PV system to be energy independent. The stochastic simulation model developed makes use of knowledge acquired from an in-depth statistical analysis of the solar radiation data for the site, and simulates the energy delivered, the excess energy burnt, the load profiles and the state of charge of the battery system for the month the sizing is applied to, as well as the PV system performance for the entire year. The simulation model provides the user with values for the autonomy factor d, simulating PV performance in order to determine the minimum P_m and C_L depending on the requirements of the application, i.e., operation with critical or non-critical loads. The model makes use of NASA’s Surface meteorology and Solar Energy database for the years 1990–2004 for various cities in Europe with different climates. The results obtained with this new methodology indicate a substantial reduction in installed peak power and battery capacity, both for critical and non-critical operation, when compared to
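
    A stripped-down energy-balance sketch of the sizing loop described above: for candidate values of the peak power P_m and battery capacity C_L, daily battery state-of-charge bookkeeping gives the fraction of days on which the load is not met, from which the minimum pair satisfying, e.g., a 95% success rate can be picked. The efficiency factor, the units, and the absence of depth-of-discharge limits are simplifying assumptions, not the paper's model.

```python
def loss_of_load_fraction(daily_irradiation, daily_load, p_m, c_l, eta=0.8):
    """Fraction of days on which the load cannot be fully met, for a given
    installed peak power p_m (kWp) and usable battery capacity c_l (kWh).
    daily_irradiation is in equivalent sun hours (kWh/kWp per day)."""
    soc, failures = c_l, 0
    for h, load in zip(daily_irradiation, daily_load):
        produced = eta * p_m * h              # kWh produced that day
        soc = min(c_l, soc + produced) - load # charge, then serve the load
        if soc < 0:
            failures += 1                     # load not met on this day
            soc = 0.0
    return failures / len(daily_load)

# sizing: scan a (p_m, c_l) grid and keep the smallest pair with
# loss_of_load_fraction(...) <= 0.05 for a 95% daily success rate
```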

  17. The SME gauge sector with minimum length

    Science.gov (United States)

    Belich, H.; Louzada, H. L. C.

    2017-12-01

    We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory.

  18. The SME gauge sector with minimum length

    Energy Technology Data Exchange (ETDEWEB)

    Belich, H.; Louzada, H.L.C. [Universidade Federal do Espirito Santo, Departamento de Fisica e Quimica, Vitoria, ES (Brazil)

    2017-12-15

    We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory. (orig.)

  19. Akibat Hukum Bagi Bank Bila Kewajiban Modal Inti Minimum Tidak Terpenuhi

    Directory of Open Access Journals (Sweden)

    Indira Retno Aryatie

    2012-02-01

    Full Text Available As an implementation of the Indonesian Banking Architecture policy, the government issued Bank Indonesia Regulation No. 9/16/PBI/2007 on Minimum Tier One Capital, which increases the minimum capital to 100 billion rupiah. This writing discusses the legal complications that a bank will face should it fail to fulfil the minimum ratio. As a follow-up to the Indonesian Banking Architecture policy, the government issued Bank Indonesia Regulation 9/16/PBI/2007 on the Minimum Tier One Capital Requirement for Banks, which raises the minimum tier one capital of commercial banks to 100 billion rupiah. This paper discusses the legal consequences a bank will face if that minimum capital requirement is not met.

  20. The impact of minimum wage adjustments on Vietnamese wage inequality

    DEFF Research Database (Denmark)

    Hansen, Henrik; Rand, John; Torm, Nina

    Using Vietnamese Labour Force Survey data we analyse the impact of minimum wage changes on wage inequality. Minimum wages serve to reduce local wage inequality in the formal sectors by decreasing the gap between the median wages and the lower tail of the local wage distributions. In contrast, local wage inequality is increased in the informal sectors. Overall, the minimum wages decrease national wage inequality. Our estimates indicate a decrease in the wage distribution Gini coefficient of about 2 percentage points and an increase in the 10/50 wage ratio of 5-7 percentage points caused by the adjustment of the minimum wages from 2011 to 2012 that levelled the minimum wage across economic sectors.

  1. Risk control and the minimum significant risk

    International Nuclear Information System (INIS)

    Seiler, F.A.; Alvarez, J.L.

    1996-01-01

    Risk management implies that the risk manager can, by his actions, exercise at least a modicum of control over the risk in question. In the terminology of control theory, a management action is a control signal imposed as feedback on the system to bring about a desired change in the state of the system. In the terminology of risk management, an action is taken to bring a predicted risk to lower values. Even if it is assumed that the management action taken is 100% effective and that the projected risk reduction is infinitely well known, there is a lower limit to the desired effects that can be achieved. It is based on the fact that all risks, such as the incidence of cancer, exhibit a degree of variability due to a number of extraneous factors such as age at exposure, sex, location, and some lifestyle parameters such as smoking or the consumption of alcohol. If the control signal is much smaller than the variability of the risk, the signal is lost in the noise and control is lost. This defines a minimum controllable risk based on the variability of the risk over the population considered. This quantity is the counterpart of the minimum significant risk which is defined by the uncertainties of the risk model. Both the minimum controllable risk and the minimum significant risk are evaluated for radiation carcinogenesis and are shown to be of the same order of magnitude. For a realistic management action, the assumptions of perfectly effective action and perfect model prediction made above have to be dropped, resulting in an effective minimum controllable risk which is determined by both risk limits. Any action below that effective limit is futile, but it is also unethical due to the ethical requirement of doing more good than harm. Finally, some implications of the effective minimum controllable risk on the use of the ALARA principle and on the evaluation of remedial action goals are presented

  2. Energetic tradeoffs control the size distribution of aquatic mammals

    Science.gov (United States)

    Gearty, William; McClain, Craig R.; Payne, Jonathan L.

    2018-04-01

    Four extant lineages of mammals have invaded and diversified in the water: Sirenia, Cetacea, Pinnipedia, and Lutrinae. Most of these aquatic clades are larger bodied, on average, than their closest land-dwelling relatives, but the extent to which potential ecological, biomechanical, and physiological controls contributed to this pattern remains untested quantitatively. Here, we use previously published data on the body masses of 3,859 living and 2,999 fossil mammal species to examine the evolutionary trajectories of body size in aquatic mammals through both comparative phylogenetic analysis and examination of the fossil record. Both methods indicate that the evolution of an aquatic lifestyle is driving three of the four extant aquatic mammal clades toward a size attractor at ˜500 kg. The existence of this body size attractor and the relatively rapid selection toward, and limited deviation from, this attractor rule out most hypothesized drivers of size increase. These three independent body size increases and a shared aquatic optimum size are consistent with control by differences in the scaling of energetic intake and cost functions with body size between the terrestrial and aquatic realms. Under this energetic model, thermoregulatory costs constrain minimum size, whereas limitations on feeding efficiency constrain maximum size. The optimum size occurs at an intermediate value where thermoregulatory costs are low but feeding efficiency remains high. Rather than being released from size pressures, water-dwelling mammals are driven and confined to larger body sizes by the strict energetic demands of the aquatic medium.

  3. 42 CFR 84.197 - Respirator containers; minimum requirements.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84.197... Cartridge Respirators § 84.197 Respirator containers; minimum requirements. Respirators shall be equipped with a substantial, durable container bearing markings which show the applicant's name, the type and...

  4. 42 CFR 84.174 - Respirator containers; minimum requirements.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84.174... Air-Purifying Particulate Respirators § 84.174 Respirator containers; minimum requirements. (a) Except..., durable container bearing markings which show the applicant's name, the type of respirator it contains...

  5. Quantum mechanics the theoretical minimum

    CERN Document Server

    Susskind, Leonard

    2014-01-01

    From the bestselling author of The Theoretical Minimum, an accessible introduction to the math and science of quantum mechanics. Quantum Mechanics is a (second) book for anyone who wants to learn how to think like a physicist. In this follow-up to the bestselling The Theoretical Minimum, physicist Leonard Susskind and data engineer Art Friedman offer a first course in the theory and associated mathematics of the strange world of quantum mechanics. Quantum Mechanics presents Susskind and Friedman’s crystal-clear explanations of the principles of quantum states, uncertainty and time dependence, entanglement, and particle and wave states, among other topics. An accessible but rigorous introduction to a famously difficult topic, Quantum Mechanics provides a tool kit for amateur scientists to learn physics at their own pace.

  6. Minimum resolvable power contrast model

    Science.gov (United States)

    Qian, Shuai; Wang, Xia; Zhou, Jingjing

    2018-01-01

    Signal-to-noise ratio and MTF are important indices for evaluating the performance of optical systems. However, neither used alone nor assessed jointly can they intuitively describe the overall performance of the system. Therefore, an index that reflects comprehensive system performance is proposed: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not depend on the human eye. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and considers attenuation by the atmosphere, the optical imaging system, and the detector. Combined with the signal-to-noise ratio and the MTF, the Minimum Resolvable Radiation Performance Contrast is obtained. Finally, the detection probability model of MRP is given.

  7. ARGO-YBJ OBSERVATION OF THE LARGE-SCALE COSMIC RAY ANISOTROPY DURING THE SOLAR MINIMUM BETWEEN CYCLES 23 AND 24

    Energy Technology Data Exchange (ETDEWEB)

    Bartoli, B.; Catalanotti, S.; Piazzoli, B. D’Ettorre; Girolamo, T. Di [Dipartimento di Fisica dell’Università di Napoli “Federico II”, Complesso Universitario di Monte Sant’Angelo, via Cinthia, I-80126 Napoli (Italy); Bernardini, P.; D’Amone, A.; Mitri, I. De [Dipartimento Matematica e Fisica ”Ennio De Giorgi”, Università del Salento, via per Arnesano, I-73100 Lecce (Italy); Bi, X. J.; Cao, Z.; Chen, S. Z.; Feng, Zhaoyang; Gou, Q. B. [Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, P.O. Box 918, 100049 Beijing (China); Chen, T. L.; Danzengluobu [Tibet University, 850000 Lhasa, Xizang (China); Cui, S. W.; Gao, W. [Hebei Normal University, 050024, Shijiazhuang Hebei (China); Dai, B. Z. [Yunnan University, 2 North Cuihu Road, 650091 Kunming, Yunnan (China); Sciascio, G. Di [Istituto Nazionale di Fisica Nucleare, Sezione di Roma Tor Vergata, via della Ricerca Scientifica 1, I-00133 Roma (Italy); Feng, C. F. [Shandong University, 250100 Jinan, Shandong (China); Feng, Zhenyong, E-mail: cuisw@ihep.ac.cn [Southwest Jiaotong University, 610031 Chengdu, Sichuan (China); Collaboration: ARGO-YBJ Collaboration; and others

    2015-08-10

    This paper reports on the measurement of the large-scale anisotropy in the distribution of cosmic-ray arrival directions using the data collected by the air shower detector ARGO-YBJ from 2008 January to 2009 December, during the minimum of solar activity between cycles 23 and 24. In this period, more than 2 × 10^11 showers were recorded with energies between ∼1 and 30 TeV. The observed two-dimensional distribution of cosmic rays is characterized by two wide regions of excess and deficit, respectively, both of relative intensity ∼10^-3 with respect to a uniform flux, superimposed on smaller-size structures. The harmonic analysis shows that the large-scale cosmic-ray relative intensity as a function of R.A. can be described by the first and second terms of a Fourier series. The high event statistics allow the study of the energy dependence of the anisotropy, showing that the amplitude increases with energy, with a maximum intensity at ∼10 TeV, and then decreases while the phase slowly shifts toward lower values of R.A. with increasing energy. The ARGO-YBJ data provide accurate observations over more than a decade of energy around this feature of the anisotropy spectrum.

  8. Analysis and Identification of Aptamer-Compound Interactions with a Maximum Relevance Minimum Redundancy and Nearest Neighbor Algorithm.

    Science.gov (United States)

    Wang, ShaoPeng; Zhang, Yu-Hang; Lu, Jing; Cui, Weiren; Hu, Jerry; Cai, Yu-Dong

    2016-01-01

    The development of biochemistry and molecular biology has revealed an increasingly important role of compounds in several biological processes. Like aptamer-protein interactions, aptamer-compound interactions attract increasing attention. However, it is time-consuming to select proper aptamers against compounds using traditional methods, such as exponential enrichment. Thus, there is an urgent need to design effective computational methods for searching for effective aptamers against compounds. This study attempted to extract important features for aptamer-compound interactions using feature selection methods, such as Maximum Relevance Minimum Redundancy, as well as incremental feature selection. Each aptamer-compound pair was represented by properties derived from the aptamer and the compound, including frequencies of single nucleotides and dinucleotides for the aptamer, as well as the constitutional, electrostatic, quantum-chemical, and space conformational descriptors of the compound. As a result, some important features were obtained. To confirm the importance of the obtained features, we further discussed the associations between them and aptamer-compound interactions. Simultaneously, an optimal prediction model based on the nearest neighbor algorithm was built to identify aptamer-compound interactions, which has the potential to be a useful tool for the identification of novel aptamer-compound interactions. The program is available upon request.
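
    A hedged sketch of the general recipe described above, built on scikit-learn rather than the authors' code: greedy maximum-relevance minimum-redundancy ranking with mutual information, followed by incremental feature selection evaluated with a nearest-neighbour classifier.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def mrmr_ranking(X, y, n_select):
    """Greedy maximum-relevance minimum-redundancy feature ranking."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_select:
        scores = []
        for j in remaining:
            # redundancy: mean MI between the candidate and already-selected features
            redundancy = (np.mean([mutual_info_regression(X[:, [j]], X[:, i],
                                                          random_state=0)[0]
                                   for i in selected]) if selected else 0.0)
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

def incremental_selection(X, y, ranking):
    """Grow the ranked list one feature at a time; keep the prefix that
    maximizes nearest-neighbour cross-validated accuracy."""
    best_k, best_acc = 1, 0.0
    for k in range(1, len(ranking) + 1):
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                              X[:, ranking[:k]], y, cv=5).mean()
        if acc > best_acc:
            best_k, best_acc = k, acc
    return ranking[:best_k], best_acc
```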

  9. Classification of Textures Using Filter Based Local Feature Extraction

    Directory of Open Access Journals (Sweden)

    Bocekci Veysel Gokhan

    2016-01-01

    Full Text Available In this work, local features are used in the feature extraction process for texture images. The local binary pattern (LBP) feature extraction method for textures is introduced. Filtering is also used during the feature extraction process to obtain discriminative features. To show the effectiveness of the algorithm, three different types of noise are added to both the training and test images before extraction. Wiener and median filters are used to remove the noise from the images. We evaluate the performance of the method with a Naïve Bayesian classifier and conduct a comparative analysis on a benchmark dataset with different filters and image sizes. Our experiments demonstrate that the feature extraction process combined with filtering gives promising results on noisy images.
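
    A minimal sketch of the pipeline described above using scikit-image and scikit-learn: median-filter the noisy patches, extract uniform LBP histograms, and feed them to a Naïve Bayes classifier. The patch arrays and labels are assumed to be supplied by the caller.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.feature import local_binary_pattern
from sklearn.naive_bayes import GaussianNB

def lbp_histogram(image, radius=1, n_points=8):
    """Uniform LBP histogram of a grayscale texture patch."""
    codes = local_binary_pattern(image, n_points, radius, method="uniform")
    # uniform LBP with P points yields codes in 0..P+1, i.e. P+2 bins
    hist, _ = np.histogram(codes, bins=np.arange(n_points + 3), density=True)
    return hist

def features_from_noisy_patches(patches):
    """Median-filter each noisy patch before extracting LBP features."""
    return np.array([lbp_histogram(median_filter(p, size=3)) for p in patches])

# classification sketch:
# X_train = features_from_noisy_patches(train_patches)
# clf = GaussianNB().fit(X_train, y_train)
# accuracy = clf.score(features_from_noisy_patches(test_patches), y_test)
```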

  10. Predictive features of CT for risk stratifications in patients with primary gastrointestinal stromal tumour

    International Nuclear Information System (INIS)

    Zhou, Cuiping; Zhang, Xiang; Duan, Xiaohui; Hu, Huijun; Wang, Dongye; Shen, Jun

    2016-01-01

    To determine the predictive CT imaging features for risk stratifications in patients with primary gastrointestinal stromal tumours (GISTs). One hundred and twenty-nine patients with histologically confirmed primary GISTs (diameter >2 cm) were enrolled. CT imaging features were reviewed. Tumour risk stratifications were determined according to the 2008 NIH criteria where GISTs were classified into four categories according to the tumour size, location, mitosis count, and tumour rupture. The association between risk stratifications and CT features was analyzed using univariate analysis, followed by multinomial logistic regression and receiver operating characteristic (ROC) curve analysis. CT imaging features including tumour margin, size, shape, tumour growth pattern, direct organ invasion, necrosis, enlarged vessels feeding or draining the mass (EVFDM), lymphadenopathy, and contrast enhancement pattern were associated with the risk stratifications, as determined by univariate analysis (P < 0.05). Only lesion size, growth pattern and EVFDM remained independent risk factors in multinomial logistic regression analysis (OR = 3.480-100.384). ROC curve analysis showed that the area under curve of the obtained multinomial logistic regression model was 0.806 (95 % CI: 0.727-0.885). CT features including lesion size, tumour growth pattern, and EVFDM were predictors of the risk stratifications for GIST. (orig.)

  11. Calculation of the minimum critical mass of fissile nuclides

    International Nuclear Information System (INIS)

    Wright, R.Q.; Hopper, Calvin Mitchell

    2008-01-01

    The OB-1 method for the calculation of the minimum critical mass of fissile actinides in metal/water systems was described in a previous paper. A fit to the calculated minimum critical mass data using the extended criticality parameter is the basis of the revised method. The solution density (grams/liter) for the minimum critical mass is also obtained by a fit to calculated values. Input to the calculation consists of the Maxwellian averaged fission and absorption cross sections and the thermal values of nubar. The revised method gives more accurate values than the original method does for both the minimum critical mass and the solution densities. The OB-1 method has been extended to calculate the uncertainties in the minimum critical mass for 12 different fissile nuclides. The uncertainties for the fission and capture cross sections and the estimated nubar uncertainties are used to determine the uncertainties in the minimum critical mass, either in percent or grams. Results have been obtained for U-233, U-235, Pu-236, Pu-239, Pu-241, Am-242m, Cm-243, Cm-245, Cf-249, Cf-251, Cf-253, and Es-254. Eight of these 12 nuclides are included in the ANS-8.15 standard.

  12. 42 CFR 84.134 - Respirator containers; minimum requirements.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84.134... Respirators § 84.134 Respirator containers; minimum requirements. Supplied-air respirators shall be equipped with a substantial, durable container bearing markings which show the applicant's name, the type and...

  13. 42 CFR 84.1134 - Respirator containers; minimum requirements.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Respirator containers; minimum requirements. 84... Combination Gas Masks § 84.1134 Respirator containers; minimum requirements. (a) Except as provided in paragraph (b) of this section each respirator shall be equipped with a substantial, durable container...

  14. 42 CFR 84.74 - Apparatus containers; minimum requirements.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Apparatus containers; minimum requirements. 84.74...-Contained Breathing Apparatus § 84.74 Apparatus containers; minimum requirements. (a) Apparatus may be equipped with a substantial, durable container bearing markings which show the applicant's name, the type...

  15. Experimental tests on winter cereal: Sod seeding compared to minimum tillage and traditional plowing

    Directory of Open Access Journals (Sweden)

    Antoniotto Guidobono Cavalchini

    2013-09-01

    Full Text Available The sod seeding technique was tested against traditional plowing and minimum tillage in order to evaluate the differences in energy consumption, labor and machinery requirements, and CO2 emission reduction. The experiments were conducted on winter cereal seeding on a Po valley farm in October 2011. The tests consisted of seeding a wheat variety over previous corn and alfalfa crops, in large plots with three repetitions for each treatment. The treatments included: sod seeding preceded by Roundup weeding in the case of the plots over alfalfa; traditional plowing at 35 cm followed by rotary tillage and combined seeding (seeder plus rotary tiller); and minimum tillage based on ripping at the same depth (35 cm) with a combined seeder (seeder plus rotary tiller). The subsequent farm operations - fertilizer and other agrochemical distributions - were the same in all the treatments considered. The results, statistically significant (P<0.001) in terms of yields, highlighted slight differences: the best results were obtained with traditional plowing for the wheat crop over both corn and alfalfa (84.43 and 6.75 t/ha); yields were slightly lower for sod seeding (6.23 and 79.9 t/ha for corn and alfalfa, respectively) and lower still for minimum tillage (5.87 and 79.77 t/ha in the two situations). Huge differences in energy and oil consumption were recorded: in the succession to corn, 61.47, 35.31, and 4.27 kg oil/ha respectively for traditional plowing, minimum tillage, and sod seeding; in the succession to alfalfa, 61.2, 50.96, and 5.14 kg oil/ha respectively for traditional plowing, minimum tillage, and sod seeding. The innovative technique highlighted huge energy savings, with oil consumption reduced by 92% and 89% (P<0.001) with respect to traditional plowing and minimum tillage. Large differences concern labor and machine productivity. These parameters, together with oil consumption and machine size [power (kW) and weight (t)], lead to even greater differences in

  16. Currency features for visually impaired people

    Science.gov (United States)

    Hyland, Sandra L.; Legge, Gordon E.; Shannon, Robert R.; Baer, Norbert S.

    1996-03-01

    The estimated 3.7 million Americans with low vision experience a uniquely difficult task in identifying the denominations of U.S. banknotes because the notes are remarkably uniform in size, color, and general design. The National Research Council's Committee on Currency Features Usable by the Visually Impaired assessed features that could be used by people who are visually disabled to distinguish currency from other documents and to denominate and authenticate banknotes using available technology. Variation of length and height, introduction of large numerals on a uniform, high-contrast background, use of different colors for each of the six denominations printed, and the introduction of overt denomination codes that could lead to development of effective, low-cost devices for examining banknotes were all deemed features available now. Issues affecting performance, including the science of visual and tactile perception, were addressed for these features, as well as for those features requiring additional research and development. In this group the committee included durable tactile features such as those printed with transparent ink, and the production of currency with holes to indicate denomination. Among long-range approaches considered were the development of technologically advanced devices and smart money.

  17. Characterization of additive manufacturing processes for polymer micro parts productions using direct light processing (DLP) method

    DEFF Research Database (Denmark)

    Davoudinejad, Ali; Pedersen, David Bue; Tosello, Guido

    The process capability of additive manufacturing (AM) for the direct production of miniaturized polymer components with micro features is analyzed in this work. The consideration of the minimum printable feature size and the obtainable tolerances of the AM process is a critical step in establishing process chains for the production of parts with micro-scale features. A specifically designed direct light processing (DLP) AM machine suitable for precision printing has been used. A test part is designed having features with different sizes and aspect ratios in order to evaluate the DLP AM machine capability...

  18. Minimum K-S estimator using PH-transform technique

    Directory of Open Access Journals (Sweden)

    Somchit Boonthiem

    2016-07-01

    Full Text Available In this paper, we propose an improvement of the minimum Kolmogorov-Smirnov (K-S) estimator using the proportional hazards transform (PH-transform) technique. The experimental data are 47 fire accident records from an insurance company in Thailand. The experiment has two operations: in the first, we minimize the K-S statistic using a grid search technique for nine distributions (Rayleigh, gamma, Pareto, log-logistic, logistic, normal, Weibull, lognormal, and exponential); in the second, we improve the K-S statistic using the PH-transform. The results show that the PH-transform technique can improve the minimum K-S estimator. The algorithm gives a better minimum K-S estimator for seven distributions (Rayleigh, gamma, Pareto, log-logistic, Weibull, lognormal, and exponential), while the minimum K-S estimators of the normal and logistic distributions are unchanged.
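
    The grid-search step can be sketched directly with SciPy; the example below fits a Weibull model by minimizing the K-S statistic over a (shape, scale) grid. The PH-transform refinement applied in the paper is not reproduced here.

```python
import numpy as np
from scipy import stats

def minimum_ks_estimate(data, shape_grid, scale_grid):
    """Grid-search minimum K-S estimator, illustrated for a Weibull model.
    Returns the smallest K-S statistic and the (shape, scale) pair achieving it."""
    best_stat, best_params = np.inf, None
    for c in shape_grid:
        for scale in scale_grid:
            d = stats.kstest(data, "weibull_min", args=(c, 0, scale)).statistic
            if d < best_stat:
                best_stat, best_params = d, (c, scale)
    return best_stat, best_params

# example use: losses = array of fire-claim amounts;
# minimum_ks_estimate(losses, np.linspace(0.5, 3, 26), np.linspace(0.5, 5, 46) * losses.mean())
```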

  19. The Einstein-Hilbert gravitation with minimum length

    Science.gov (United States)

    Louzada, H. L. C.

    2018-05-01

    We study the Einstein-Hilbert gravitation with the deformed Heisenberg algebra leading to the minimum length, with the intention to find and estimate the corrections in this theory, clarifying whether or not it is possible to obtain, by means of the minimum length, a theory, in D=4, which is causal, unitary and provides a massive graviton. Therefore, we will calculate and analyze the dispersion relationships of the considered theory.

  20. Reducing tobacco use and access through strengthened minimum price laws.

    Science.gov (United States)

    McLaughlin, Ian; Pearson, Anne; Laird-Metke, Elisa; Ribisl, Kurt

    2014-10-01

    Higher prices reduce consumption and initiation of tobacco products. A minimum price law that establishes a high statutory minimum price and prohibits the industry's discounting tactics for tobacco products is a promising pricing strategy as an alternative to excise tax increases. Although some states have adopted minimum price laws on the basis of statutorily defined price "markups" over the invoice price, existing state laws have been largely ineffective at increasing the retail price. We analyzed 3 new variations of minimum price laws that hold great potential for raising tobacco prices and reducing consumption: (1) a flat rate minimum price law similar to a recent enactment in New York City, (2) an enhanced markup law, and (3) a law that incorporates both elements.

  1. Designing from minimum to optimum functionality

    Science.gov (United States)

    Bannova, Olga; Bell, Larry

    2011-04-01

    This paper discusses a multifaceted strategy to link NASA Minimal Functionality Habitable Element (MFHE) requirements to a compatible growth plan, leading forward to evolutionary, deployable habitats, including outpost development stages. The discussion begins by reviewing fundamental geometric features inherent in small-scale, vertical and horizontal, pressurized module configuration options to characterize their applicability to meeting stringent MFHE constraints. A scenario is then proposed to incorporate a vertical core MFHE concept into an expanded architecture, providing continuity of structural form and a logical path from "minimum" to "optimum" design of a habitable module. The paper describes how habitation and logistics accommodations could be pre-integrated into a common Hab/Log Module that serves both habitation and logistics functions. This is offered as a means to reduce unnecessary redundant development costs and to avoid EVA-intensive on-site adaptation and retrofitting requirements for augmented crew capacity. An evolutionary version of the hard-shell Hab/Log design would have an expandable middle section to afford larger living and working accommodations. In conclusion, the paper illustrates that a number of cargo missions referenced for NASA's 4.0.0 Lunar Campaign Scenario could be eliminated altogether to expedite progress and reduce budgets. The plan concludes with a vertical growth geometry that provides versatile and efficient site development opportunities using a combination of hard Hab/Log modules and a hybrid expandable "CLAM" (Crew Lunar Accommodations Module) element.

  2. 12 CFR 567.2 - Minimum regulatory capital requirement.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Minimum regulatory capital requirement. 567.2... Regulatory Capital Requirements § 567.2 Minimum regulatory capital requirement. (a) To meet its regulatory capital requirement a savings association must satisfy each of the following capital standards: (1) Risk...

  3. 29 CFR 525.24 - Advisory Committee on Special Minimum Wages.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Advisory Committee on Special Minimum Wages. 525.24 Section 525.24 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR... Special Minimum Wages. The Advisory Committee on Special Minimum Wages, the members of which are appointed...

  4. Adaptive Mean Queue Size and Its Rate of Change: Queue Management with Random Dropping

    OpenAIRE

    Karmeshu; Patel, Sanjeev; Bhatnagar, Shalabh

    2016-01-01

    The Random early detection (RED) active queue management (AQM) scheme uses the average queue size to calculate the dropping probability in terms of minimum and maximum thresholds. The effect of heavy load enhances the frequency of crossing the maximum threshold value resulting in frequent dropping of the packets. An adaptive queue management with random dropping (AQMRD) algorithm is proposed which incorporates information not just about the average queue size but also the rate of change of th...

  5. Comparison of different methods for determining the size of a focal spot of microfocus X-ray tubes

    International Nuclear Information System (INIS)

    Salamon, M.; Hanke, R.; Krueger, P.; Sukowski, F.; Uhlmann, N.; Voland, V.

    2008-01-01

    The EN 12543-5 describes a method for determining the focal spot size of microfocus X-ray tubes up to a minimum spot size of 5 μm. The wide application of X-ray tubes with even smaller focal spot sizes in computed tomography and radioscopy applications requires the evaluation of existing methods for focal spot sizes below 5 μm. In addition, new methods and conditions for determining submicron focal spot sizes have to be developed. For the evaluation and extension of the present methods to smaller focal spot sizes, different procedures in comparison with the existing EN 12543-5 were analyzed and applied, and the results are presented

  6. Automated Diatom Classification (Part A): Handcrafted Feature Approaches

    Directory of Open Access Journals (Sweden)

    Gloria Bueno

    2017-07-01

    Full Text Available This paper deals with automatic taxa identification based on machine learning methods. The aim is therefore to automatically classify diatoms, in terms of pattern recognition terminology. Diatoms are a kind of algae microorganism with high biodiversity at the species level, which are useful for water quality assessment. The most relevant features for diatom description and classification have been selected using an extensive dataset of 80 taxa with a minimum of 100 samples/taxon augmented to 300 samples/taxon. In addition to published morphological, statistical and textural descriptors, a new textural descriptor, Local Binary Patterns (LBP), to characterize the diatom’s valves, and a log Gabor implementation not tested before for this purpose are introduced in this paper. Results show an overall accuracy of 98.11% using bagging decision trees and combinations of descriptors. Finally, some phycological features of diatoms that are still difficult to integrate in computer systems are discussed for future work.

  7. Deep-learning derived features for lung nodule classification with limited datasets

    Science.gov (United States)

    Thammasorn, P.; Wu, W.; Pierce, L. A.; Pipavath, S. N.; Lampe, P. D.; Houghton, A. M.; Haynor, D. R.; Chaovalitwongse, W. A.; Kinahan, P. E.

    2018-02-01

    Only a few percent of indeterminate nodules found in lung CT images are cancer. However, enabling earlier diagnosis is important to avoid invasive procedures or long-term surveillance for those benign nodules. We are evaluating a classification framework using radiomics features derived with a machine learning approach from a small data set of indeterminate CT lung nodule images. We used a retrospective analysis of 194 cases with pulmonary nodules in the CT images, with or without contrast enhancement, from lung cancer screening clinics. The nodules were contoured by a radiologist and texture features of the lesion were calculated. In addition, semantic features describing shape were categorized. We also explored a Multiband network, a feature derivation path that uses a modified convolutional neural network (CNN) with a Triplet Network. This was trained to create discriminative feature representations useful for variable-sized nodule classification. The diagnostic accuracy was evaluated for multiple machine learning algorithms using texture, shape, and CNN features. In the CT contrast-enhanced group, the texture or semantic shape features yielded an overall diagnostic accuracy of 80%. Use of a standard deep learning network in the framework for feature derivation yielded features that substantially underperformed compared to texture and/or semantic features. However, the proposed Multiband approach to feature derivation produced results similar in diagnostic accuracy to the texture and semantic features. While the Multiband feature derivation approach did not outperform the texture and/or semantic features, its equivalent performance indicates promise for future improvements to increase diagnostic accuracy. Importantly, the Multiband approach adapts readily to different size lesions without interpolation, and performed well with a relatively small amount of training data.

  8. Statistical physics when the minimum temperature is not absolute zero

    Science.gov (United States)

    Chung, Won Sang; Hassanabadi, Hassan

    2018-04-01

    In this paper, a nonzero minimum temperature is considered, based on the third law of thermodynamics and the existence of a minimal momentum. From the assumption of a nonzero positive minimum temperature in nature, we deform the definitions of some thermodynamical quantities and investigate the corresponding corrections to well-known thermodynamical problems.

  9. Dysphonic Voice Pattern Analysis of Patients in Parkinson’s Disease Using Minimum Interclass Probability Risk Feature Selection and Bagging Ensemble Learning Methods

    Directory of Open Access Journals (Sweden)

    Yunfeng Wu

    2017-01-01

    Full Text Available Analysis of quantified voice patterns is useful in the detection and assessment of dysphonia and related phonation disorders. In this paper, we first study the linear correlations between 22 voice parameters of fundamental frequency variability, amplitude variations, and nonlinear measures. The highly correlated vocal parameters are combined by using the linear discriminant analysis method. Based on the probability density functions estimated by the Parzen-window technique, we propose an interclass probability risk (ICPR) method to select the vocal parameters with small ICPR values as dominant features, and compare it with the modified Kullback-Leibler divergence (MKLD) feature selection approach. The experimental results show that the generalized logistic regression analysis (GLRA), support vector machine (SVM), and Bagging ensemble algorithm input with the ICPR features can provide better classification results than the same classifiers with the MKLD-selected features. The SVM is much better at distinguishing normal vocal patterns, with a specificity of 0.8542. Among the three classification methods, the Bagging ensemble algorithm with ICPR features can identify 90.77% of vocal patterns, with the highest sensitivity of 0.9796 and the largest area under the receiver operating characteristic curve of 0.9558. The classification results demonstrate the effectiveness of our feature selection and pattern analysis methods for dysphonic voice detection and measurement.

  10. 42 CFR 422.382 - Minimum net worth amount.

    Science.gov (United States)

    2010-10-01

    ... that CMS considers appropriate to reduce, control or eliminate start-up administrative costs. (b) After... section. (c) Calculation of the minimum net worth amount—(1) Cash requirement. (i) At the time of application, the organization must maintain at least $750,000 of the minimum net worth amount in cash or cash...

  11. Minimum Competencies in Undergraduate Motor Development. Guidance Document

    Science.gov (United States)

    National Association for Sport and Physical Education, 2004

    2004-01-01

    The minimum competency guidelines in Motor Development described herein at the undergraduate level may be gained in one or more motor development course(s) or through other courses provided in an undergraduate curriculum. The minimum guidelines include: (1) Formulation of a developmental perspective; (2) Knowledge of changes in motor behavior…

  12. Comparison of Machine Learning Techniques in Inferring Phytoplankton Size Classes

    Directory of Open Access Journals (Sweden)

    Shuibo Hu

    2018-03-01

    Full Text Available The size of phytoplankton not only influences its physiology, metabolic rates and marine food web, but also serves as an indicator of phytoplankton functional roles in ecological and biogeochemical processes. Therefore, some algorithms have been developed to infer the synoptic distribution of phytoplankton cell size, denoted as phytoplankton size classes (PSCs), in surface ocean waters, by means of remotely sensed variables. This study, using the NASA bio-Optical Marine Algorithm Data set (NOMAD) high performance liquid chromatography (HPLC) database, and satellite match-ups, aimed to compare the effectiveness of modeling techniques, including partial least squares (PLS), artificial neural networks (ANN), support vector machines (SVM) and random forests (RF), and feature selection techniques, including the genetic algorithm (GA), the successive projection algorithm (SPA) and recursive feature elimination based on support vector machines (SVM-RFE), for inferring PSCs from remote sensing data. Results showed that: (1) SVM-RFE worked better in selecting sensitive features; (2) RF performed better than PLS, ANN and SVM in calibrating PSC retrieval models; (3) machine learning techniques produced better performance than the chlorophyll-a based three-component method; (4) sea surface temperature, wind stress, and spectral curvature derived from the remote sensing reflectance at 490, 510, and 555 nm were among the most sensitive features for PSCs; and (5) the combination of SVM-RFE feature selection and random forest regression was recommended for inferring PSCs. This study demonstrated the effectiveness of machine learning techniques in selecting sensitive features and calibrating models for PSC estimations with remote sensing.
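
    The recommended combination (SVM-RFE feature selection followed by random-forest regression) maps directly onto scikit-learn components; the sketch below is a generic illustration with assumed inputs, not the calibrated models of the study.

```python
from sklearn.svm import SVR
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def psc_pipeline(X, y, n_features=5):
    """Select features with SVM-RFE, then fit a random-forest regressor on
    the reduced set. X could hold SST, wind stress and reflectance-derived
    quantities, y a phytoplankton size-class fraction (hypothetical names)."""
    selector = RFE(SVR(kernel="linear"), n_features_to_select=n_features).fit(X, y)
    X_sel = X[:, selector.support_]
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    score = cross_val_score(model, X_sel, y, cv=5, scoring="r2").mean()
    return selector.support_, model.fit(X_sel, y), score
```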

  13. Efficiency calibration and minimum detectable activity concentration of a real-time UAV airborne sensor system with two gamma spectrometers.

    Science.gov (United States)

    Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2016-04-01

    A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements of NH-UAV. Copyright © 2016 Elsevier Ltd. All rights reserved.
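
    For orientation, a common analytical reference point for a minimum detectable activity is Currie's detection limit, sketched below; this is an assumption for illustration, since the cited work derives its MDAC from Monte Carlo simulation of the UAV measurement geometry rather than from this formula.

```python
import math

def currie_mda(background_counts, efficiency, emission_prob, live_time):
    """Currie's detection-limit estimate of the minimum detectable activity (Bq)
    for a single gamma line; shown as a generic reference, not the paper's method."""
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)   # detection limit in counts
    return l_d / (efficiency * emission_prob * live_time)

# converting to an activity *concentration* additionally requires the source-volume
# or air-volume term appropriate to the measurement geometry and flight altitude
```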

  14. The Unusual Minimum of Cycle 23: Observations and Interpretation

    Science.gov (United States)

    Martens, Petrus C.; Nandy, D.; Munoz-Jaramillo, A.

    2009-05-01

    The current minimum of cycle 23 is unusual in its long duration, the very low level to which Total Solar Irradiance (TSI) has fallen, and the small flux of the open polar fields. The deep minimum of TSI seems to be related to an unprecedented dearth of polar faculae, and hence to the small amount of open flux. Based upon surface flux transport models it has been suggested that the causes of these phenomena may be an unusually vigorous meridional flow, or even a deviation from Joy's law resulting in smaller Joy angles than usual for emerging flux in cycle 23. There is also the possibility of a connection with the recently inferred emergence in polar regions of bipoles that systematically defy Hale's law. Much speculation has been going on as to the consequences of this exceptional minimum: are we entering another global minimum, is this the end of the 80 year period of exceptionally high solar activity, or is this just a statistical hiccup? Dynamo simulations are underway that may help answer this question. As an aside it must be mentioned that the current minimum of TSI puts an upper limit on the TSI input for global climate simulations during the Maunder minimum, and that a possible decrease in future solar activity will result in a very small but not insignificant reduction in the pace of global warming.

  15. Minimum Wages and Skill Acquisition: Another Look at Schooling Effects.

    Science.gov (United States)

    Neumark, David; Wascher, William

    2003-01-01

    Examines the effects of minimum wage on schooling, seeking to reconcile some of the contradictory results in recent research using Current Population Survey data from the late 1970s through the 1980s. Findings point to negative effects of minimum wages on school enrollment, bolstering the findings of negative effects of minimum wages on enrollment…

  16. A new concept of feature-based gauge for coordinate measuring arm evaluation

    Science.gov (United States)

    Cuesta, E.; González-Madruga, D.; Alvarez, B. J.; Barreiro, J.

    2014-06-01

    Articulated arm coordinate measuring machines (AACMM or CMA) have conquered a market share in the current dimensional metrology field, above all when their role involves the inspection of geometrical and dimensional tolerances in an accurate 3D environment for medium-size parts. However, the unavoidable fact of manual AACMM operation limits their reliability to a great extent, hindering rigorous evaluation and casting doubt on the usefulness of external calibration. In this research, a dimensional gauge especially aimed at AACMM evaluation has been developed. Furthermore, the operator skill will be revealed through the use of this gauge. A set of geometrical features, some of them oriented to evaluating the operator and others the equipment, have been collected for the gauge. The proposed evaluation methodology clearly distinguishes between dimensional and geometrical tolerances (with or without datum references), whereas current verification standards only consider the former. Next, quality indicators deduced from the measurement results are proposed in order to compare AACMM versus coordinate measuring machine (CMM) performance, assuming that the CMM possesses the maximum accuracy that an AACMM could reach, because the CMM combines maximum contact accuracy with minimum operator influence. As a result, AACMM evaluation time could be significantly reduced since this gauge allows us to perform a customized evaluation of only those specific tolerances of interest to the user.

  17. A new concept of feature-based gauge for coordinate measuring arm evaluation

    International Nuclear Information System (INIS)

    Cuesta, E; Alvarez, B J; González-Madruga, D; Barreiro, J

    2014-01-01

    Articulated arm coordinate measuring machines (AACMM or CMA) have conquered a market share in the current dimensional metrology field, above all when their role involves the inspection of geometrical and dimensional tolerances in an accurate 3D environment for medium-size parts. However, the unavoidable fact of manual AACMM operation limits their reliability to a great extent, hindering rigorous evaluation and casting doubt on the usefulness of external calibration. In this research, a dimensional gauge especially aimed at AACMM evaluation has been developed. Furthermore, the operator skill will be revealed through the use of this gauge. A set of geometrical features, some of them oriented to evaluating the operator and others the equipment, have been collected for the gauge. The proposed evaluation methodology clearly distinguishes between dimensional and geometrical tolerances (with or without datum references), whereas current verification standards only consider the former. Next, quality indicators deduced from the measurement results are proposed in order to compare AACMM versus coordinate measuring machine (CMM) performance, assuming that the CMM possesses the maximum accuracy that an AACMM could reach, because the CMM combines maximum contact accuracy with minimum operator influence. As a result, AACMM evaluation time could be significantly reduced since this gauge allows us to perform a customized evaluation of only those specific tolerances of interest to the user. (paper)

  18. 14 CFR 91.155 - Basic VFR weather minimums.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Basic VFR weather minimums. 91.155 Section...) AIR TRAFFIC AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Flight Rules Visual Flight Rules § 91.155 Basic VFR weather minimums. (a) Except as provided in paragraph (b) of this section and...

  19. Physical model for the 2175 A interstellar extinction feature

    International Nuclear Information System (INIS)

    Hecht, J.H.

    1986-01-01

    Recent IUE observations have shown that the 2175 A interstellar extinction feature is constant in wavelength but varies in width. A model has been constructed to explain these results. It is proposed that the 2175 A feature will only be seen when there is extinction due to carbon grains which have lost their hydrogen. In particular, the feature is caused by a separate population of small (less than 50 A radius), hydrogen-free carbon grains. The variations in width would be due to differences in either their temperature, size distribution, or impurity content. All other carbon grains retain hydrogen, which causes the feature to be suppressed. If this model is correct, then it implies that the grains responsible for the unidentified IR emission features would not generally cause the 2175 A feature. 53 references

  20. Relative merits of size, field, and current on ignited tokamak performance

    International Nuclear Information System (INIS)

    Uckan, N.A.

    1988-01-01

    A simple global analysis is developed to examine the relative merits of size (L = a or R0), field (B0), and current (I) on ignition regimes of tokamaks under various confinement scaling laws. Scalings of key parameters with L, B0, and I are presented at several operating points, including (a) optimal path to ignition (saddle point), (b) ignition at minimum beta, (c) ignition at 10 keV, and (d) maximum performance at the limits of density and beta. Expressions for the saddle point and the minimum conditions needed for ohmic ignition are derived analytically for any confinement model of the form τ_E ∼ n^x T^y. For a wide range of confinement models, the "figure of merit" parameters and I are found to give a good indication of the relative performance of the devices, where q* is the cylindrical safety factor. As an illustration, the results are applied to representative "CIT" (as a class of compact, high-field ignition tokamaks) and "Super-JETs" [a class of large-size (few x JET), low-field, high-current (≥20-MA) devices].

  1. Minimum Wages and Teen Employment: A Spatial Panel Approach

    OpenAIRE

    Charlene Kalenkoski; Donald Lacombe

    2011-01-01

    The authors employ spatial econometrics techniques and Annual Averages data from the U.S. Bureau of Labor Statistics for 1990-2004 to examine how changes in the minimum wage affect teen employment. Spatial econometrics techniques account for the fact that employment is correlated across states. Such correlation may exist if a change in the minimum wage in a state affects employment not only in its own state but also in other, neighboring states. The authors show that state minimum wages negat...

  2. Preeminence and prerequisites of sample size calculations in clinical trials

    OpenAIRE

    Richa Singhal; Rakesh Rana

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary out...
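
    As a hedged illustration of the kind of calculation such an article walks through, the sketch below uses the textbook per-group sample size for comparing two proportions in a parallel-group trial; the proportions, significance level and power are invented for the example and are not taken from the article.

```python
# Illustrative sketch (not from the article): textbook per-group sample size
# for a two-sided comparison of two proportions in a parallel-group RCT.
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for detecting a difference between p1 and p2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = norm.ppf(power)            # critical value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Example: detect an improvement in response rate from 40% to 55%
print(n_per_group(0.40, 0.55))  # about 171 participants per group
```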

  3. Minimum Viable Product and the Importance of Experimentation in Technology Startups

    Directory of Open Access Journals (Sweden)

    Dobrila Rancic Moogk

    2012-03-01

    Full Text Available Entrepreneurs are often faced with limited resources in their quest to commercialize new technology. This article presents the model of a lean startup, which can be applied to an organization regardless of its size or environment. It also emphasizes the conditions of extreme uncertainty under which the commercialization of new technology is carried out. The lean startup philosophy advocates efficient use of resources by introducing a minimum viable product to the market as soon as possible in order to test its value and the entrepreneur’s growth projections. This testing is done by running experiments that examine the metrics relevant to three distinct types of growth. These experiments bring about accelerated learning to help reduce the uncertainty that accompanies commercialization projects, thereby bringing the resulting new technology to market faster.

  4. Nowcasting daily minimum air and grass temperature

    Science.gov (United States)

    Savage, M. J.

    2016-02-01

    Site-specific and accurate prediction of daily minimum air and grass temperatures, made available online several hours before their occurrence, would be of significant benefit to several economic sectors and for planning human activities. Site-specific and reasonably accurate nowcasts of the daily minimum temperature several hours before its occurrence, using measured sub-hourly temperatures from earlier in the morning as model inputs, were investigated. Various temperature models were tested for their ability to accurately nowcast daily minimum temperatures 2 or 4 h before sunrise. Temperature datasets used for the model nowcasts included sub-hourly grass and grass-surface (infrared) temperatures from one location in South Africa and air temperature from four subtropical sites varying in altitude (USA and South Africa) and from one site in central sub-Saharan Africa. The nowcast models employed either exponential or square root functions to describe the rate of nighttime temperature decrease, but inverted so as to determine the minimum temperature. The models were also applied in near real-time using an open web-based system to display the nowcasts. Extrapolation algorithms for the site-specific nowcasts were also implemented in a datalogger in an innovative and mathematically consistent manner. Comparison of model 1 (exponential) nowcasts vs measured daily minimum air temperatures yielded root mean square errors (RMSEs) <1 °C for the 2-h ahead nowcasts. Model 2 (also exponential), for which a constant model coefficient (b = 2.2) was used, was usually slightly less accurate but still with RMSEs <1 °C. Use of model 3 (square root) yielded increased RMSEs for the 2-h ahead comparisons between nowcasted and measured daily minimum air temperature, increasing to 1.4 °C for some sites. For all sites and all models, the comparisons for the 4-h ahead air temperature nowcasts generally yielded increased RMSEs, <2.1 °C. Comparisons for all model nowcasts of the daily grass
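
    As an illustrative sketch of how an exponential nighttime-cooling curve of this general form can be inverted to nowcast the minimum, the code below fits a decay toward an asymptotic minimum and extrapolates it to sunrise; the functional form, coefficients and temperature values are assumptions for demonstration, not the paper's calibrated models.

```python
# Hypothetical exponential-cooling nowcast: fit early-morning observations,
# then read off the asymptotic minimum and the value expected at sunrise.
import numpy as np
from scipy.optimize import curve_fit

def cooling(t_hours, T_min, T0, b):
    """Exponential decay from T0 at t = 0 toward an asymptotic minimum T_min."""
    return T_min + (T0 - T_min) * np.exp(-b * t_hours)

# Sub-hourly temperatures measured earlier in the night (hours after sunset)
t_obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
T_obs = np.array([14.2, 12.1, 10.6, 9.5, 8.7, 8.1])

params, _ = curve_fit(cooling, t_obs, T_obs, p0=(7.0, 14.0, 0.4))
t_sunrise = 9.0  # hours after sunset, site specific
print(f"nowcast asymptotic minimum: {params[0]:.1f} C")
print(f"nowcast temperature at sunrise: {cooling(t_sunrise, *params):.1f} C")
```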

  5. Topside measurements at Jicamarca during solar minimum

    Directory of Open Access Journals (Sweden)

    D. L. Hysell

    2009-01-01

    Full Text Available Long-pulse topside radar data acquired at Jicamarca and processed using full-profile analysis are compared to data processed using more conventional, range-gated approaches and with analytic and computational models. The salient features of the topside observations include a dramatic increase in the Te/Ti temperature ratio above the F peak at dawn and a local minimum in the topside plasma temperature in the afternoon. The hydrogen ion fraction was found to exhibit hyperbolic tangent-shaped profiles that become shallow (gradually changing) above the O+-H+ transition height during the day. The profile shapes are generally consistent with diffusive equilibrium, although shallowing to the point of changes in inflection can only be accounted for by taking the effects of E×B drifts and meridional winds into account. The SAMI2 model demonstrates this as well as the substantial effect that drifts and winds can have on topside temperatures. Significant quiet-time variability in the topside composition and temperatures may be due to variability in the mechanical forcing. Correlations between topside measurements and magnetometer data at Jicamarca support this hypothesis.

  6. A note on minimum-variance theory and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, Sussex University, Brighton, BN1 9QH (United Kingdom)]; Tartaglia, Giangaetano [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]; Tirozzi, Brunello [Physics Department, Rome University 'La Sapienza', Rome 00185 (Italy)]

    2004-04-30

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory on modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory ranging from input control signals, to model outputs, and to its implications on modelling firing patterns of single neurons.

  7. A note on minimum-variance theory and beyond

    International Nuclear Information System (INIS)

    Feng Jianfeng; Tartaglia, Giangaetano; Tirozzi, Brunello

    2004-01-01

    We revisit the minimum-variance theory proposed by Harris and Wolpert (1998 Nature 394 780-4), discuss the implications of the theory on modelling the firing patterns of single neurons and analytically find the optimal control signals, trajectories and velocities. Under the rate coding assumption, input control signals employed in the minimum-variance theory should be Fitts processes rather than Poisson processes. Only if information is coded by interspike intervals are Poisson processes in agreement with the inputs employed in the minimum-variance theory. For the integrate-and-fire model with Fitts process inputs, interspike intervals of efferent spike trains are very irregular. We introduce diffusion approximations to approximate neural models with renewal process inputs and present theoretical results on calculating moments of interspike intervals of the integrate-and-fire model. Results in Feng et al (2002 J. Phys. A: Math. Gen. 35 7287-304) are generalized. In conclusion, we present a complete picture of the minimum-variance theory ranging from input control signals, to model outputs, and to its implications on modelling firing patterns of single neurons

  8. Split-plot fractional designs: Is minimum aberration enough?

    DEFF Research Database (Denmark)

    Kulahci, Murat; Ramirez, Jose; Tobias, Randy

    2006-01-01

    Split-plot experiments are commonly used in industry for product and process improvement. Recent articles on designing split-plot experiments concentrate on minimum aberration as the design criterion. Minimum aberration has been criticized as a design criterion for completely randomized fractional factorial designs, and alternative criteria, such as the maximum number of clear two-factor interactions, are suggested (Wu and Hamada (2000)). The need for alternatives to minimum aberration is even more acute for split-plot designs. In a standard split-plot design, there are several types of two-factor interactions … for completely randomized designs. Consequently, we provide a modified version of the maximum number of clear two-factor interactions design criterion to be used for split-plot designs.

  9. Applicability of the minimum entropy generation method for optimizing thermodynamic cycles

    Institute of Scientific and Technical Information of China (English)

    Cheng Xue-Tao; Liang Xin-Gang

    2013-01-01

    Entropy generation is often used as a figure of merit in thermodynamic cycle optimizations. In this paper, it is shown that the applicability of the minimum entropy generation method to optimizing output power is conditional. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power when the total heat into the system of interest is not prescribed. For the cycles whose working medium is heated or cooled by streams with prescribed inlet temperatures and prescribed heat capacity flow rates, it is theoretically proved that both the minimum entropy generation rate and the minimum entropy generation number correspond to the maximum output power when the virtual entropy generation induced by dumping the used streams into the environment is considered. However, the minimum principle of entropy generation is not tenable in the case that the virtual entropy generation is not included, because the total heat into the system of interest is not fixed. An irreversible Carnot cycle and an irreversible Brayton cycle are analysed. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power if the heat into the system of interest is not prescribed.

  10. Applicability of the minimum entropy generation method for optimizing thermodynamic cycles

    International Nuclear Information System (INIS)

    Cheng Xue-Tao; Liang Xin-Gang

    2013-01-01

    Entropy generation is often used as a figure of merit in thermodynamic cycle optimizations. In this paper, it is shown that the applicability of the minimum entropy generation method to optimizing output power is conditional. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power when the total heat into the system of interest is not prescribed. For the cycles whose working medium is heated or cooled by streams with prescribed inlet temperatures and prescribed heat capacity flow rates, it is theoretically proved that both the minimum entropy generation rate and the minimum entropy generation number correspond to the maximum output power when the virtual entropy generation induced by dumping the used streams into the environment is considered. However, the minimum principle of entropy generation is not tenable in the case that the virtual entropy generation is not included, because the total heat into the system of interest is not fixed. An irreversible Carnot cycle and an irreversible Brayton cycle are analysed. The minimum entropy generation rate and the minimum entropy generation number do not correspond to the maximum output power if the heat into the system of interest is not prescribed. (general)
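
    A compact way to see why the conclusion hinges on whether the heat input is prescribed (a standard textbook relation added here for illustration, not quoted from the paper): combining the energy balance W = Q_H - Q_L with the entropy balance S_gen = Q_L/T_L - Q_H/T_H >= 0 for a cycle operating between reservoirs at T_H and T_L gives

```latex
\[
  W \;=\; Q_H\!\left(1-\frac{T_L}{T_H}\right) \;-\; T_L\,S_{\mathrm{gen}} ,
\]
```

    so minimizing S_gen and maximizing W coincide only when Q_H is fixed; when Q_H is free to vary, a cycle that generates more entropy can still deliver more work, which is the situation analysed in the paper.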

  11. Minimum Moduli in Von Neumann Algebras | Gopalraj | Quaestiones ...

    African Journals Online (AJOL)

    In this paper we answer a question raised in [12] in the affirmative, namely that the essential minimum modulus of an element in a von Neumann algebra, relative to any norm closed two-sided ideal, is equal to the minimum modulus of the element perturbed by an element from the ideal. As a corollary of this result, we ...

  12. Robust Branch-Cut-and-Price for the Capacitated Minimum Spanning Tree Problem over a Large Extended Formulation

    DEFF Research Database (Denmark)

    Uchoa, Eduardo; Fukasawa, Ricardo; Lysgaard, Jens

    This paper presents a robust branch-cut-and-price algorithm for the Capacitated Minimum Spanning Tree Problem (CMST). The variables are associated with q-arbs, a structure that arises from a relaxation of the capacitated prize-collecting arborescence problem in order to make it solvable in pseudo-polynomial time. Traditional inequalities over the arc formulation, like Capacity Cuts, are also used. Moreover, a novel feature is introduced in such kind of algorithms. Powerful new cuts expressed over a very large set of variables could be added, without increasing the complexity of the pricing subproblem...

  13. Subjective well-being and minimum wages: Evidence from U.S. states.

    Science.gov (United States)

    Kuroki, Masanori

    2018-02-01

    This paper investigates whether increases in minimum wages are associated with higher life satisfaction by using monthly-level state minimum wages and individual-level data from the 2005-2010 Behavioral Risk Factor Surveillance System. The magnitude I find suggests that a 10% increase in the minimum wage is associated with a 0.03-point increase in life satisfaction for workers without a high school diploma, on a 4-point scale. Contrary to popular belief that higher minimum wages hurt business owners, I find little evidence that higher minimum wages lead to the loss of well-being among self-employed people. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Degree of contribution (DoC) feature selection algorithm for structural brain MRI volumetric features in depression detection.

    Science.gov (United States)

    Kipli, Kuryati; Kouzani, Abbas Z

    2015-07-01

    Accurate detection of depression at an individual level using structural magnetic resonance imaging (sMRI) remains a challenge. Brain volumetric changes at a structural level appear to have importance in depression biomarkers studies. An automated algorithm is developed to select brain sMRI volumetric features for the detection of depression. A feature selection (FS) algorithm called degree of contribution (DoC) is developed for selection of sMRI volumetric features. This algorithm uses an ensemble approach to determine the degree of contribution in detection of major depressive disorder. The DoC is the score of feature importance used for feature ranking. The algorithm involves four stages: feature ranking, subset generation, subset evaluation, and DoC analysis. The performance of DoC is evaluated on the Duke University Multi-site Imaging Research in the Analysis of Depression sMRI dataset. The dataset consists of 115 brain sMRI scans of 88 healthy controls and 27 depressed subjects. Forty-four sMRI volumetric features are used in the evaluation. The DoC score of forty-four features was determined as the accuracy threshold (Acc_Thresh) was varied. The DoC performance was compared with that of four existing FS algorithms. At all defined Acc_Threshs, DoC outperformed the four examined FS algorithms for the average classification score and the maximum classification score. DoC has a good ability to generate reduced-size subsets of important features that could yield high classification accuracy. Based on the DoC score, the most discriminant volumetric features are those from the left-brain region.
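
    A minimal sketch of the general idea of scoring each feature with an ensemble of learners and retaining those whose score clears an accuracy threshold is given below; the scoring rule is illustrative and is not the published DoC formula, and the synthetic data merely mimic the 44-feature setting.

```python
# Illustrative ensemble feature-ranking sketch (not the published DoC scoring):
# each feature is scored by the mean cross-validated accuracy of single-feature
# classifiers, and features above a chosen threshold form the reduced subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in for 115 subjects described by 44 volumetric features
X, y = make_classification(n_samples=115, n_features=44, n_informative=8,
                           random_state=0)

def feature_scores(X, y):
    """Average cross-validated accuracy of each single feature across learners."""
    learners = [RandomForestClassifier(n_estimators=50, random_state=0),
                LogisticRegression(max_iter=1000)]
    scores = []
    for j in range(X.shape[1]):
        col = X[:, [j]]
        acc = np.mean([cross_val_score(m, col, y, cv=5).mean() for m in learners])
        scores.append(acc)
    return np.array(scores)

scores = feature_scores(X, y)
acc_thresh = 0.60                       # analogous to Acc_Thresh in the abstract
subset = np.where(scores >= acc_thresh)[0]
print(f"{len(subset)} features retained:", subset)
```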

  15. Prediction of BRAF mutation status of craniopharyngioma using magnetic resonance imaging features.

    Science.gov (United States)

    Yue, Qi; Yu, Yang; Shi, Zhifeng; Wang, Yongfei; Zhu, Wei; Du, Zunguo; Yao, Zhenwei; Chen, Liang; Mao, Ying

    2017-10-06

    OBJECTIVE Treatment with a BRAF mutation inhibitor might shrink otherwise refractory craniopharyngiomas and is a promising preoperative treatment to facilitate tumor resection. The aim of this study was to investigate the noninvasive diagnosis of BRAF-mutated craniopharyngiomas based on MRI characteristics. METHODS Fifty-two patients with pathologically diagnosed craniopharyngioma were included in this study. Polymerase chain reaction was performed on tumor tissue specimens to detect BRAF and CTNNB1 mutations. MRI manifestations-including tumor location, size, shape, and composition; signal intensity of cysts; enhancement pattern; pituitary stalk morphology; and encasement of the internal carotid artery-were analyzed by 2 neuroradiologists blinded to patient identity and clinical characteristics, including BRAF mutation status. Results were compared between the BRAF-mutated and wild-type (WT) groups. Characteristics that were significantly more prevalent (p < 0.05) in the BRAF-mutated craniopharyngiomas were defined as diagnostic features. The minimum number of diagnostic features needed to make a diagnosis was determined by analyzing the receiver operating characteristic (ROC) curve. RESULTS Eight of the 52 patients had BRAF-mutated craniopharyngiomas, and the remaining 44 had BRAF WT tumors. The clinical characteristics did not differ significantly between the 2 groups. Interobserver agreement for MRI data analysis was relatively reliable, with values of Cohen κ ranging from 0.65 to 0.97 (p < 0.001). A comparison of findings in the 2 patient groups showed that BRAF-mutated craniopharyngiomas tended to be suprasellar (p < 0.001), spherical (p = 0.005), predominantly solid (p = 0.003), and homogeneously enhancing (p < 0.001), and that patients with these tumors tended to have a thickened pituitary stalk (p = 0.014). When at least 3 of these 5 features were present, a tumor might be identified as BRAF mutated with a sensitivity of 1.00 and a specificity of 0
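
    For illustration, the sketch below shows how an 'at least 3 of 5 features' rule translates into sensitivity and specificity; the synthetic feature table is invented and only mirrors the 8-versus-44 group sizes, so the printed values are not the study's results.

```python
# Illustrative scoring of a ">= 3 of 5 imaging features" diagnostic rule on
# synthetic data (not the study's patients).
import numpy as np

rng = np.random.default_rng(0)
n_mutated, n_wild_type = 8, 44
# Rows = patients, columns = the five diagnostic features (True = present)
features_mut = rng.random((n_mutated, 5)) < 0.85
features_wt = rng.random((n_wild_type, 5)) < 0.25

def call_mutated(features, k=3):
    """Call a tumour BRAF-mutated if at least k of the features are present."""
    return features.sum(axis=1) >= k

sensitivity = call_mutated(features_mut).mean()       # true-positive rate
specificity = 1.0 - call_mutated(features_wt).mean()  # true-negative rate
print(f"sensitivity ~ {sensitivity:.2f}, specificity ~ {specificity:.2f}")
```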

  16. Six months into Myanmar's minimum wage: Reflecting on progress ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2016-04-25

    Apr 25, 2016 ... Participants examined recent results from an IDRC-funded enterprise survey, ... of a minimum wage, and how they have coped with the new situation.” ... Debate on the impact of minimum wages on employment continues ...

  17. Do minimum wages reduce poverty? Evidence from Central America ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2010-12-16

    Dec 16, 2010 ... Raising minimum wages has traditionally been considered a way to protect poor ... However, the effect of raising minimum wages remains an empirical question ...

  18. Feature Selection using Multi-objective Genetic Algorithm: A Hybrid Approach

    OpenAIRE

    Ahuja, Jyoti; GJUST - Guru Jambheshwar University of Science and Technology; Ratnoo, Saroj Dahiya; GJUST - Guru Jambheshwar University of Science and Technology

    2015-01-01

    Feature selection is an important pre-processing task for building accurate and comprehensible classification models. Several researchers have applied filter, wrapper or hybrid approaches using genetic algorithms which are good candidates for optimization problems that involve large search spaces like in the case of feature selection. Moreover, feature selection is an inherently multi-objective problem with many competing objectives involving size, predictive power and redundancy of the featu...

  19. Decision trees with minimum average depth for sorting eight elements

    KAUST Repository

    AbouEisha, Hassan M.

    2015-11-19

    We prove that the minimum average depth of a decision tree for sorting 8 pairwise different elements is equal to 620160/8!. We show also that each decision tree for sorting 8 elements, which has minimum average depth (the number of such trees is approximately equal to 8.548×10^326365), has also minimum depth. Both problems were considered by Knuth (1998). To obtain these results, we use tools based on extensions of dynamic programming which allow us to make sequential optimization of decision trees relative to depth and average depth, and to count the number of decision trees with minimum average depth.
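
    Written out numerically (simple arithmetic on the quoted result, not an additional claim from the paper):

```latex
\[
  \frac{620160}{8!} \;=\; \frac{620160}{40320} \;\approx\; 15.38
  \quad\text{comparisons on average over the } 8! \text{ input permutations.}
\]
```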

  20. Coupling between minimum scattering antennas

    DEFF Research Database (Denmark)

    Andersen, J.; Lessow, H; Schjær-Jacobsen, Hans

    1974-01-01

    Coupling between minimum scattering antennas (MSA's) is investigated by the coupling theory developed by Wasylkiwskyj and Kahn. Only rotationally symmetric power patterns are considered, and graphs of relative mutual impedance are presented as a function of distance and pattern parameters. Crossed...

  1. Regional climate model sensitivity to domain size

    Science.gov (United States)

    Leduc, Martin; Laprise, René

    2009-05-01

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the “perfect model” approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent “spatial spin-up” corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere.

  2. Regional climate model sensitivity to domain size

    Energy Technology Data Exchange (ETDEWEB)

    Leduc, Martin [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada); UQAM/Ouranos, Montreal, QC (Canada); Laprise, Rene [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada)

    2009-05-15

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the "perfect model" approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent "spatial spin-up" corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere. (orig.)

  3. 29 CFR 510.23 - Agricultural activities eligible for minimum wage phase-in.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Agricultural activities eligible for minimum wage phase-in..., DEPARTMENT OF LABOR REGULATIONS IMPLEMENTATION OF THE MINIMUM WAGE PROVISIONS OF THE 1989 AMENDMENTS TO THE... eligible for minimum wage phase-in. Agriculture activities eligible for an extended phase-in of the minimum...

  4. Is a Minimum Wage an Appropriate Instrument for Redistribution?

    NARCIS (Netherlands)

    A.A.F. Gerritsen (Aart); B. Jacobs (Bas)

    2016-01-01

    textabstractWe analyze the redistributional (dis)advantages of a minimum wage over income taxation in competitive labor markets, without imposing assumptions on the (in)efficiency of labor rationing. Compared to a distributionally equivalent tax change, a minimum-wage increase raises involuntary

  5. Economies of scale and trends in the size of southern forest industries

    Science.gov (United States)

    James E. Granskog

    1978-01-01

    In each of the major southern forest industries, the trend has been toward achieving economies of scale, that is, to build larger production units to reduce unit costs. Current minimum efficient plant size estimated by survivor analysis is 1,000 tons per day capacity for sulfate pulping, 100 million square feet (3/8- inch basis) annual capacity for softwood plywood,...

  6. The impact of the minimum wage on health.

    Science.gov (United States)

    Andreyeva, Elena; Ukert, Benjamin

    2018-03-07

    This study evaluates the effect of minimum wage on risky health behaviors, healthcare access, and self-reported health. We use data from the 1993-2015 Behavioral Risk Factor Surveillance System, and employ a difference-in-differences strategy that utilizes time variation in new minimum wage laws across U.S. states. Results suggest that the minimum wage increases the probability of being obese and decreases daily fruit and vegetable intake, but also decreases days with functional limitations while having no impact on healthcare access. Subsample analyses reveal that the increase in weight and decrease in fruit and vegetable intake are driven by the older population, married, and whites. The improvement in self-reported health is especially strong among non-whites, females, and married.
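
    A minimal sketch of the two-way fixed-effects regression that a state-by-time difference-in-differences design of this kind typically reduces to is shown below; the dataframe, column names, wage values and outcomes are hypothetical placeholders, not the BRFSS data.

```python
# Hypothetical two-way fixed-effects sketch of a state/year difference-in-
# differences design; all data below are invented placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "state":    ["CA", "CA", "TX", "TX", "NY", "NY"] * 50,
    "year":     [2005, 2010] * 150,
    "min_wage": [6.75, 8.00, 5.15, 7.25, 6.00, 7.25] * 50,
    "obese":    [0, 1, 0, 0, 1, 0] * 50,   # placeholder health outcome
})

# Outcome on the minimum wage with state and year fixed effects; standard
# errors clustered by state, as is conventional for state-level policy shocks.
model = smf.ols("obese ~ min_wage + C(state) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
print(result.params["min_wage"])
```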

  7. Setting a minimum age for juvenile justice jurisdiction in California.

    Science.gov (United States)

    S Barnert, Elizabeth; S Abrams, Laura; Maxson, Cheryl; Gase, Lauren; Soung, Patricia; Carroll, Paul; Bath, Eraka

    2017-03-13

    Purpose Despite the existence of minimum age laws for juvenile justice jurisdiction in 18 US states, California has no explicit law that protects children (i.e. youth less than 12 years old) from being processed in the juvenile justice system. In the absence of a minimum age law, California lags behind other states and international practice and standards. The paper aims to discuss these issues. Design/methodology/approach In this policy brief, academics across the University of California campuses examine current evidence, theory, and policy related to the minimum age of juvenile justice jurisdiction. Findings Existing evidence suggests that children lack the cognitive maturity to comprehend or benefit from formal juvenile justice processing, and diverting children from the system altogether is likely to be more beneficial for the child and for public safety. Research limitations/implications Based on current evidence and theory, the authors argue that minimum age legislation that protects children from contact with the juvenile justice system and treats them as children in need of services and support, rather than as delinquents or criminals, is an important policy goal for California and for other national and international jurisdictions lacking a minimum age law. Originality/value California has no law specifying a minimum age for juvenile justice jurisdiction, meaning that young children of any age can be processed in the juvenile justice system. This policy brief provides a rationale for a minimum age law in California and other states and jurisdictions without one.

  8. Optimising a Model of Minimum Stock Level Control and a Model of Standing Order Cycle in Selected Foundry Plant

    Directory of Open Access Journals (Sweden)

    Szymszal J.

    2013-09-01

    Full Text Available It has been found that rational management of raw material stocks is an area of procurement logistics where significant reserves can be sought. Currently, the main purpose of projects that increase the efficiency of inventory management is to rationalise all the activities in this area while taking into account, and minimising, the total inventory costs. The paper presents a method for optimising the inventory level of raw materials under foundry plant conditions using two different control models. The first model is based on the estimate of an optimal level of the minimum emergency stock of raw materials, giving information about the need for an order to be placed immediately and about the optimal size of consignments ordered once the minimum emergency level has been reached. The second model is based on the estimate of a maximum inventory level of raw materials and an optimal order cycle. Optimisation of the presented models has been based on the prior selection and use of rational methods for forecasting the time series of deliveries of a chosen auxiliary material (ceramic filters) to a casting plant, including forecasting the mean size of the delivered batch of products and its standard deviation.
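
    The two control models map onto textbook policies: a continuous-review reorder point with an emergency (safety) stock, and a periodic-review order-up-to level tied to the standing order cycle. The sketch below computes both under assumed demand statistics; every parameter value is invented for illustration.

```python
# Textbook reorder-point and order-up-to calculations; all values invented.
import math
from scipy.stats import norm

d_mean, d_std = 120.0, 25.0   # daily demand for ceramic filters: mean, std dev
lead_time = 5.0               # replenishment lead time, days
review_period = 7.0           # standing order cycle, days
z = norm.ppf(0.95)            # safety factor for a 95% cycle service level

# Model 1: continuous review -- reorder when stock falls to this level
reorder_point = d_mean * lead_time + z * d_std * math.sqrt(lead_time)

# Model 2: periodic review -- every review_period days, order up to S
order_up_to = (d_mean * (review_period + lead_time)
               + z * d_std * math.sqrt(review_period + lead_time))

print(f"reorder point ~ {reorder_point:.0f} units")
print(f"order-up-to level S ~ {order_up_to:.0f} units")
```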

  9. Quality control methods in accelerometer data processing: defining minimum wear time.

    Directory of Open Access Journals (Sweden)

    Carly Rich

    Full Text Available BACKGROUND: When using accelerometers to measure physical activity, researchers need to determine whether subjects have worn their device for a sufficient period to be included in analyses. We propose a minimum wear criterion using population-based accelerometer data, and explore the influence of gender and the purposeful inclusion of children with weekend data on reliability. METHODS: Accelerometer data obtained during the age seven sweep of the UK Millennium Cohort Study were analysed. Children were asked to wear an ActiGraph GT1M accelerometer for seven days. Reliability coefficients (r) of mean daily counts/minute were calculated using the Spearman-Brown formula based on the intraclass correlation coefficient. An r of 1.0 indicates that all the variation is between- rather than within-children and that measurement is 100% reliable. An r of 0.8 is often regarded as acceptable reliability. Analyses were repeated on data from children who met different minimum daily wear times (one to 10 hours) and wear days (one to seven days). Analyses were conducted for all children, separately for boys and girls, and separately for children with and without weekend data. RESULTS: At least one hour of wear time data was obtained from 7,704 singletons. Reliability increased as the minimum number of days and the daily wear time increased. A high reliability (r = 0.86) and a large sample size (n = 6,528) were achieved when children with ≥ two days lasting ≥10 hours/day were included in analyses. Reliability coefficients were similar for both genders. Purposeful sampling of children with weekend data resulted in reliabilities comparable to those calculated independently of weekend wear. CONCLUSION: Quality control procedures should be undertaken before analysing accelerometer data in large-scale studies. Using data from children with ≥ two days lasting ≥10 hours/day should provide reliable estimates of physical activity. It's unnecessary to include only children
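
    A minimal sketch of the Spearman-Brown step described above, using a made-up single-day intraclass correlation:

```python
# Spearman-Brown prophecy formula: reliability of a k-day average given the
# single-day ICC. The ICC value here is hypothetical, not the study's estimate.
def spearman_brown(icc_single_day, k_days):
    """Reliability of the mean over k_days repeated measurements."""
    return k_days * icc_single_day / (1 + (k_days - 1) * icc_single_day)

icc = 0.60  # hypothetical single-day ICC of mean counts/minute
for k in (1, 2, 3, 7):
    print(f"{k} wear day(s): r = {spearman_brown(icc, k):.2f}")
```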

  10. CFD-DEM Simulation of Minimum Fluidisation Velocity in Two Phase Medium

    Directory of Open Access Journals (Sweden)

    H Khawaja

    2016-09-01

    Full Text Available In this work, CFD-DEM (computational fluid dynamics - discrete element method) has been used to model the two-phase flow of solid particles and gas in a fluidised bed. This technique uses the Eulerian and the Lagrangian methods to solve the fluid and the particles, respectively. Each particle is treated as a discrete entity whose motion is governed by Newton's laws of motion. Particle-particle and particle-wall interactions are modelled using classical contact mechanics. The particle motion is coupled with the volume-averaged equations of fluid dynamics using a drag law. In a fluidised bed, particles start experiencing drag once fluid passes through the bed. The particles respond once the drag experienced is just equal to the weight of the particles; at this moment the pressure drop across the bed is equal to the weight of the particles divided by the cross-sectional area. This is the first regime of fluidisation, also referred to as ‘the regime of minimum fluidization’. In this study, the phenomenon of minimum fluidization is studied using CFD-DEM simulations with four different particle sizes: 0.15 mm, 0.3 mm, 0.6 mm, and 1.2 mm in diameter. The results are presented in the form of pressure drop across the bed versus fluid superficial velocity. The results are found to be in good agreement with experimental and theoretical data available in the literature.
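
    As a quick worked illustration of the balance that defines this regime, the pressure drop at minimum fluidisation equals the buoyant weight of the bed per unit cross-sectional area; the property values below are assumptions for illustration, not the simulated cases.

```python
# Pressure drop at minimum fluidisation: buoyant bed weight per unit area.
# All property values are illustrative assumptions.
g = 9.81            # gravitational acceleration, m/s^2
rho_p = 2500.0      # particle density, kg/m^3
rho_f = 1.2         # fluidising gas density, kg/m^3
voidage = 0.45      # bed voidage at minimum fluidisation
bed_height = 0.30   # m

dp_mf = (1 - voidage) * (rho_p - rho_f) * g * bed_height  # Pa
print(f"pressure drop at minimum fluidisation ~ {dp_mf:.0f} Pa")
```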

  11. 47 CFR 25.205 - Minimum angle of antenna elevation.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Minimum angle of antenna elevation. 25.205 Section 25.205 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Technical Standards § 25.205 Minimum angle of antenna elevation. (a) Earth station...

  12. An Experimental study on a Method of Computing Minimum flow rate

    International Nuclear Information System (INIS)

    Cho, Yeon Sik; Kim, Tae Hyun; Kim, Chang Hyun

    2009-01-01

    Many pump reliability problems in Nuclear Power Plants (NPPs) are attributed to operation of the pump at flow rates well below its best efficiency point (BEP). Generally, the manufacturer and the user try to avert such problems by specifying a minimum flow below which the pump should not be operated. Pump minimum flow usually involves two considerations. The first is normally termed the 'thermal minimum flow', which is the flow required to prevent the fluid inside the pump from reaching saturation conditions. The other is often referred to as the 'mechanical minimum flow', which is the flow required to prevent mechanical damage. However, the criteria for specifying such a minimum flow are not clearly understood by all parties concerned, and the various factors and information needed for computing minimum flow are not easily available, as they are considered proprietary by pump manufacturers. The objective of this study is to obtain experimental data for computing the minimum flow rate and to understand pump performance under low-flow operation. A test loop consisting of a pump of the type used in NPPs, a water tank, flow rate measurements and a piping system with flow control devices was established for this study.

  13. The memory characteristics of submicron feature-size PZT capacitors with PtOx top electrode by using dry-etching

    International Nuclear Information System (INIS)

    Huang, C.-K.; Wang, C.-C.; Wu, T.-B.

    2007-01-01

    Dry etching and its effect on the characteristics of submicron feature-size PbZr1-xTixO3 (PZT) capacitors with PtOx top electrode were investigated. The photoresist (PR)-masked PtOx films were etched by an Ar/(20%)Cl2/O2 helicon wave plasma. A fence-free pattern with a significantly high etch rate and sidewall slope was obtained by the addition of O2 into the etching gas mixture, due to the chemical instability of PtOx and the formation of a PtO2 passivation layer that suppresses redeposition of the etch by-products on the etched surface. The patterned PtOx electrode can subsequently be used as a hard mask for etching the PZT film with a gas mixture of Ar, CF4 and O2. A high PZT etching rate and a good etching selectivity to PtOx can be obtained at 30% O2 addition into the Ar/(50%)CF4 plasma. The etched capacitors have a steep, 72°, sidewall angle with a clean surface. Moreover, the addition of O2 into the etching gas preserves the properties and the fatigue endurance of PtOx/PZT capacitors well.

  14. Parameterization of ion channeling half-angles and minimum yields

    Energy Technology Data Exchange (ETDEWEB)

    Doyle, Barney L.

    2016-03-15

    A MS Excel program has been written that calculates ion channeling half-angles and minimum yields in cubic bcc, fcc and diamond lattice crystals. All of the tables and graphs in the three Ion Beam Analysis Handbooks that previously had to be manually looked up and read from were programed into Excel in handy lookup tables, or parameterized, for the case of the graphs, using rather simple exponential functions with different power functions of the arguments. The program then offers an extremely convenient way to calculate axial and planar half-angles, minimum yields, effects on half-angles and minimum yields of amorphous overlayers. The program can calculate these half-angles and minimum yields for 〈u v w〉 axes and [h k l] planes up to (5 5 5). The program is open source and available at (http://www.sandia.gov/pcnsc/departments/iba/ibatable.html).

  15. Innovative small and medium sized reactors: Design features, safety approaches and R and D trends. Final report of a technical meeting

    International Nuclear Information System (INIS)

    2005-05-01

    In order to beat the economy of scale, small and medium sized reactors (SMRs) have to incorporate specific design features that result in simplification of the overall plant design, modularization and mass production. Several approaches are under development and consideration, including the increased use of passive features for reactivity control and reactor shut down, decay heat removal and core cooling, and reliance on the increased margin to fuel failure achieved through the use of advanced high-temperature fuel forms and structural materials. Some SMRs also offer the possibility of very long core lifetimes with burnable absorbers or high conversion ratio in the core. These reactors incorporate increased proliferation resistance and may offer a very attractive solution for the implementation of adequate safeguards in a scenario of global deployment of nuclear power. About 50 concepts and designs of innovative SMRs are under development in more than 15 IAEA Member States representing both industrialized and developing countries. SMRs are under development for all principal reactor lines, i.e., water cooled, liquid metal cooled, gas cooled, and molten salt cooled reactors, as well as for some non-conventional combinations thereof. Given the diversity of conceptual and design approaches to SMRs, it may be useful to identify the so-called enabling technologies that are common to certain reactor types or lines. An enabling technology is a technology that needs to be developed and demonstrated to make a certain reactor concept viable. When a certain technology is common to several SMR concepts or designs, it could benefit from being developed on a common or shared basis. The identification of common enabling technologies could speed up the development and deployment of many SMRs by merging the efforts of their designers through increased international cooperation. This publication has been prepared through the collaboration of all participants of this

  16. 26 CFR 5c.168(f)(8)-4 - Minimum investment of lessor.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 14 2010-04-01 2010-04-01 false Minimum investment of lessor. 5c.168(f)(8)-4....168(f)(8)-4 Minimum investment of lessor. (a) Minimum investment. Under section 168(f)(8)(B)(ii), an... has a minimum at risk investment which, at the time the property is placed in service under the lease...

  17. Towards nanopatterning by femtosecond laser ablation of pre-stretched elastomers

    Energy Technology Data Exchange (ETDEWEB)

    Surdo, Salvatore; Piazza, Simonluca; Ceseracciu, Luca; Diaspro, Alberto; Duocastella, Martí, E-mail: marti.duocastella@iit.it

    2016-06-30

    Graphical abstract: - Highlights: • We present a new approach to increase the focusing capabilities of optical systems. • Laser patterning is performed over a stretched elastomeric membrane. • After releasing stress, patterns shrink according to the applied strain. • Minimum feature size is controlled by strain, enabling sub-diffraction patterning. - Abstract: Diffraction limits the focusing capabilities of an optical system seriously constraining the use of lasers for nanopatterning. In this work, we present a novel and simple approach to reduce the minimum feature size of a laser-direct write system by ablating a pre-stretched material. In particular, by focusing and scanning a femtosecond laser beam on the surface of a uniaxially pre-stretched elastomeric membrane we are able to obtain microstructures according to a desired pattern. After removing the stress applied to the elastomer, the membrane relaxes to its original size and the ablated patterns shrink while preserving their shape. In this way, the minimum feature size that is typically determined by the optical properties of the focusing system can be now controlled by the strain applied to the elastomer during the ablation process. We demonstrate this approach by ablating lines on a stretchable polymeric membrane at different strain conditions. Experimental results are in good agreement with theoretical predictions. The proposed method opens up new interesting possibilities for the rapid prototyping of micro- and nano-structures suitable for a wide range of applications such as soft-lithography, micro-/nano-fluidics and lab-on-chip.

  18. Towards nanopatterning by femtosecond laser ablation of pre-stretched elastomers

    International Nuclear Information System (INIS)

    Surdo, Salvatore; Piazza, Simonluca; Ceseracciu, Luca; Diaspro, Alberto; Duocastella, Martí

    2016-01-01

    Graphical abstract: - Highlights: • We present a new approach to increase the focusing capabilities of optical systems. • Laser patterning is performed over a stretched elastomeric membrane. • After releasing stress, patterns shrink according to the applied strain. • Minimum feature size is controlled by strain, enabling sub-diffraction patterning. - Abstract: Diffraction limits the focusing capabilities of an optical system seriously constraining the use of lasers for nanopatterning. In this work, we present a novel and simple approach to reduce the minimum feature size of a laser-direct write system by ablating a pre-stretched material. In particular, by focusing and scanning a femtosecond laser beam on the surface of a uniaxially pre-stretched elastomeric membrane we are able to obtain microstructures according to a desired pattern. After removing the stress applied to the elastomer, the membrane relaxes to its original size and the ablated patterns shrink while preserving their shape. In this way, the minimum feature size that is typically determined by the optical properties of the focusing system can be now controlled by the strain applied to the elastomer during the ablation process. We demonstrate this approach by ablating lines on a stretchable polymeric membrane at different strain conditions. Experimental results are in good agreement with theoretical predictions. The proposed method opens up new interesting possibilities for the rapid prototyping of micro- and nano-structures suitable for a wide range of applications such as soft-lithography, micro-/nano-fluidics and lab-on-chip.
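
    As a back-of-the-envelope sketch of the scaling exploited here (ignoring transverse Poisson effects and assuming uniform relaxation; the numbers are illustrative only), a line ablated along the stretch axis of a membrane held at engineering strain epsilon relaxes to roughly its ablated width divided by (1 + epsilon) once the strain is released.

```python
# Back-of-the-envelope shrinkage estimate for a feature ablated along the
# pre-stretch axis; Poisson contraction in the transverse direction is ignored.
def relaxed_width(ablated_width_um, strain):
    """Feature width along the stretch axis after releasing an engineering
    strain epsilon = (L - L0) / L0 that was applied during ablation."""
    return ablated_width_um / (1.0 + strain)

spot_limited_width = 1.0  # um, an assumed diffraction-limited ablated line width
for strain in (0.5, 1.0, 2.0):
    print(f"strain {strain:.0%}: ~{relaxed_width(spot_limited_width, strain):.2f} um")
```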

  19. (I Can’t Get No) Saturation: A Simulation and Guidelines for Minimum Sample Sizes in Qualitative Research

    NARCIS (Netherlands)

    van Rijnsoever, F.J.

    2015-01-01

    This paper explores the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the

  20. Normal Indian pituitary gland size on MR imaging

    International Nuclear Information System (INIS)

    Gupta, A.K.; Jena, A.N.; Gulati, P.K.; Marwah, R.K.; Tripathi, R.P.; Sharma, R.K.; Khanna, C.M.

    1994-01-01

    The size of the pituitary gland was measured in 294 subjects who had no known pituitary or hypothalamic disorders. Mid-sagittal T1W images showing the maximum dimensions of the pituitary gland were used for measurement of the height in each age and sex group. The mean pituitary height in men was 5.3 mm (SD = 0.9 mm), whereas in women the mean height was 5.9 mm (SD = 1.2 mm). Beyond 10 years of age, the pituitary height measured was greater in women than in men. The gland height showed a gradual decrease with increasing age after the age of 30 years in both men and women, except in the age group of 51-60 years, which showed a paradoxical increase in size. The minimum gland height found in this study was 2.5 mm and the maximum, 8.8 mm. The study presents a demographic profile of pituitary gland size in north Indian subjects as measured on MR images. (author). 6 refs., 2 tabs., 1 fig

  1. Pore Size Distribution in Chicken Eggs as Determined by Mercury Porosimetry

    Directory of Open Access Journals (Sweden)

    La Scala Jr N

    2000-01-01

    Full Text Available In this study we investigated the application of the mercury porosimetry technique to the determination of porosity features in 28-week-old hen eggshells. Our results show that the majority of the pores in the eggshells studied have sizes between 1 and 10 μm. By applying the mercury porosimetry technique we were able to describe the porosity features better, by determining a pore size distribution in the eggshells. We therefore introduce mercury porosimetry as a new routine technique for the study of eggshells.

  2. 30 CFR 18.97 - Inspection of machines; minimum requirements.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Inspection of machines; minimum requirements... TESTING, EVALUATION, AND APPROVAL OF MINING PRODUCTS ELECTRIC MOTOR-DRIVEN MINE EQUIPMENT AND ACCESSORIES Field Approval of Electrically Operated Mining Equipment § 18.97 Inspection of machines; minimum...

  3. Elemental GCR Observations during the 2009-2010 Solar Minimum Period

    Science.gov (United States)

    Lave, K. A.; Israel, M. H.; Binns, W. R.; Christian, E. R.; Cummings, A. C.; Davis, A. J.; deNolfo, G. A.; Leske, R. A.; Mewaldt, R. A.; Stone, E. C.; hide

    2013-01-01

    Using observations from the Cosmic Ray Isotope Spectrometer (CRIS) onboard the Advanced Composition Explorer (ACE), we present new measurements of the galactic cosmic ray (GCR) elemental composition and energy spectra for the species B through Ni in the energy range approx. 50-550 MeV/nucleon during the record setting 2009-2010 solar minimum period. These data are compared with our observations from the 1997-1998 solar minimum period, when solar modulation in the heliosphere was somewhat higher. For these species, we find that the intensities during the 2009-2010 solar minimum were approx. 20% higher than those in the previous solar minimum, and in fact were the highest GCR intensities recorded during the space age. Relative abundances for these species during the two solar minimum periods differed by small but statistically significant amounts, which are attributed to the combination of spectral shape differences between primary and secondary GCRs in the interstellar medium and differences between the levels of solar modulation in the two solar minima. We also present the secondary-to-primary ratios B/C and (Sc+Ti+V)/Fe for both solar minimum periods, and demonstrate that these ratios are reasonably well fit by a simple "leaky-box" galactic transport model that is combined with a spherically symmetric solar modulation model.

  4. Newly Discovered Silicate Features in the Spectra of Young Warm Debris Disks: Probing Terrestrial Regions of Planetary Systems

    Science.gov (United States)

    Ballering, N.; Rieke, G.

    2014-03-01

    Terrestrial planets form by the collisional accretion of planetesimals during the first 100 Myr of a system’s lifetime. For most systems, the terrestrial regions are too near their host star to be directly seen with high-contrast imaging (e.g. with HST, MagAO, or LBTI) and too warm to be imaged with submillimeter interferometers (e.g. ALMA). Mid-infrared excess spectra—originating from the thermal emission of the circumstellar dust leftover from these collisions—remain the best data to constrain the properties of the debris in these regions. The spectra of most debris disks are featureless, taking the shape of (modified) blackbodies. Determining the properties of debris disks with featureless spectra is complicated by a degeneracy between the grain size and location (large grains near the star and small grains farther from the star may be indistinguishable). Debris disk spectra that exhibit solid state emission features allow for a more accurate determination of the dust size and location (e.g. Chen et al. 2006; Olofsson et al. 2012). Such features probe small, warm dust grains in the inner regions of these systems where terrestrial planet formation may be proceeding (Lisse et al. 2009). We report here a successful search for such features. We identified our targets with a preliminary search for signs of emission features in the Spitzer IRS spectra of a number of young early type stars known to harbor warm debris disks. We fit to each target a physically-motivated model spectrum consisting of the sum of the stellar photosphere (modeled as a blackbody) and thermal emission from two dust belts. Each belt was defined by 6 parameters: the inner and outer orbital radii (rin and rout), the index of the radial surface density power law (rexp), the minimum and maximum grain sizes (amin and amax), and the index of the grain size distribution power law (aexp). aexp was fixed to -3.65 and amax was fixed to 1000 μm for all models; all other parameters were allowed to

  5. Automated x-ray television complex for inspecting standard-size dynamic objects

    International Nuclear Information System (INIS)

    Gusev, E.A.; Luk'yanenko, E.A.; Chelnokov, V.B.; Kuleshov, V.K.; Alkhimov, Yu.V.

    1993-01-01

    An automated x-ray television complex based on a matrix gas-discharge converter having a large area (2.1 x 1.0 m) for inspecting standard-size freight and containers and for diagnosing industrial articles is presented. The pulsed operating mode of the complex with a 512K digital television storage makes it possible to inspect dynamic objects with a minimum dose load (20-100 μR). 6 refs., 5 figs.

  6. Episodic retrieval and feature facilitation in intertrial priming of visual search

    DEFF Research Database (Denmark)

    Asgeirsson, Arni Gunnar; Kristjánsson, Árni

    2011-01-01

    Abstract Huang, Holcombe, and Pashler (Memory & Cognition, 32, 12–20, 2004) found that priming from repetition of different features of a target in a visual search task resulted in significant response time (RT) reductions when both target brightness and size were repeated. But when only one feature was repeated and the other changed, RTs were longer than when neither feature was repeated. From this, they argued that priming in visual search reflected episodic retrieval of memory traces, rather than facilitation of repeated features. We tested different variations of the search task...

  7. Pulmonary nodule characterization, including computer analysis and quantitative features.

    Science.gov (United States)

    Bartholmai, Brian J; Koo, Chi Wan; Johnson, Geoffrey B; White, Darin B; Raghunath, Sushravya M; Rajagopalan, Srinivasan; Moynagh, Michael R; Lindell, Rebecca M; Hartman, Thomas E

    2015-03-01

    Pulmonary nodules are commonly detected in computed tomography (CT) chest screening of a high-risk population. The specific visual or quantitative features on CT or other modalities can be used to characterize the likelihood that a nodule is benign or malignant. Visual features on CT such as size, attenuation, location, morphology, edge characteristics, and other distinctive "signs" can be highly suggestive of a specific diagnosis and, in general, be used to determine the probability that a specific nodule is benign or malignant. Change in size, attenuation, and morphology on serial follow-up CT, or features on other modalities such as nuclear medicine studies or MRI, can also contribute to the characterization of lung nodules. Imaging analytics can objectively and reproducibly quantify nodule features on CT, nuclear medicine, and magnetic resonance imaging. Some quantitative techniques show great promise in helping to differentiate benign from malignant lesions or to stratify the risk of aggressive versus indolent neoplasm. In this article, we (1) summarize the visual characteristics, descriptors, and signs that may be helpful in management of nodules identified on screening CT, (2) discuss current quantitative and multimodality techniques that aid in the differentiation of nodules, and (3) highlight the power, pitfalls, and limitations of these various techniques.

  8. Minimum qualifications for nuclear criticality safety professionals

    International Nuclear Information System (INIS)

    Ketzlach, N.

    1990-01-01

    A Nuclear Criticality Technology and Safety Training Committee has been established within the U.S. Department of Energy (DOE) Nuclear Criticality Safety and Technology Project to review and, if necessary, develop standards for the training of personnel involved in nuclear criticality safety (NCS). The committee is exploring the need for developing a standard or other mechanism for establishing minimum qualifications for NCS professionals. The development of standards and regulatory guides for nuclear power plant personnel may serve as a guide in developing the minimum qualifications for NCS professionals

  9. A minimum achievable PV electrical generating cost

    International Nuclear Information System (INIS)

    Sabisky, E.S.

    1996-01-01

    The role and share of photovoltaic (PV) generated electricity in our nation's future energy arsenal is primarily dependent on its future production cost. This paper provides a framework for obtaining a minimum achievable electrical generating cost (a lower bound) for fixed, flat-plate photovoltaic systems. A cost of 2.8 ¢/kWh (1990$) was derived for a plant located in Southwestern USA sunshine using a cost of money of 8%. In addition, a value of 22 ¢/Wp (1990$) was estimated as a minimum module manufacturing cost/price

  10. A low-cost, high-magnification imaging system for particle sizing applications

    International Nuclear Information System (INIS)

    Tipnis, Tanmay J; Lawson, Nicholas J; Tatam, Ralph P

    2014-01-01

    A low-cost imaging system for high magnification and high resolution was developed as an alternative to long-working-distance microscope-based systems, primarily for particle sizing applications. The imaging optics, comprising an inverted fixed focus lens coupled to a microscope objective, were able to provide a working distance of approximately 50 mm. The system magnification could be changed by using an appropriate microscope objective. Particle sizing was achieved using shadow-based techniques with the backlight illumination provided by a pulsed light-emitting diode light source. The images were analysed using commercial sizing software which gave the particle sizes and their distribution. A range of particles, from 6–8 µm to over 100 µm, was successfully measured with a minimum spatial resolution of approximately 2.5 µm. This system allowed measurement of a wide range of particles at a lower cost and improved operator safety without disturbing the flow. (technical design note)

  11. Setting a minimum age for juvenile justice jurisdiction in California

    Science.gov (United States)

    Barnert, Elizabeth S.; Abrams, Laura S.; Maxson, Cheryl; Gase, Lauren; Soung, Patricia; Carroll, Paul; Bath, Eraka

    2018-01-01

    Purpose Despite the existence of minimum age laws for juvenile justice jurisdiction in 18 US states, California has no explicit law that protects children (i.e. youth less than 12 years old) from being processed in the juvenile justice system. In the absence of a minimum age law, California lags behind other states and international practice and standards. The paper aims to discuss these issues. Design/methodology/approach In this policy brief, academics across the University of California campuses examine current evidence, theory, and policy related to the minimum age of juvenile justice jurisdiction. Findings Existing evidence suggests that children lack the cognitive maturity to comprehend or benefit from formal juvenile justice processing, and diverting children from the system altogether is likely to be more beneficial for the child and for public safety. Research limitations/implications Based on current evidence and theory, the authors argue that minimum age legislation that protects children from contact with the juvenile justice system and treats them as children in need of services and support, rather than as delinquents or criminals, is an important policy goal for California and for other national and international jurisdictions lacking a minimum age law. Originality/value California has no law specifying a minimum age for juvenile justice jurisdiction, meaning that young children of any age can be processed in the juvenile justice system. This policy brief provides a rationale for a minimum age law in California and other states and jurisdictions without one. Paper type Conceptual paper PMID:28299968

  12. Applying Data Clustering Feature to Speed Up Ant Colony Optimization

    Directory of Open Access Journals (Sweden)

    Chao-Yang Pang

    2014-01-01

    Full Text Available Ant colony optimization (ACO) is often used to solve optimization problems such as the traveling salesman problem (TSP). When applied to the TSP, its runtime is proportional to the square of the problem size N, which makes it look less efficient. The following statistical feature was observed during the authors’ long-term gene data analysis using ACO: when the data size N becomes large, local clustering appears frequently. That is, some data cluster tightly in a small area and form a class, and the correlation between different classes is weak. This feature makes the idea of divide and rule feasible for estimating TSP solutions. In this paper an improved ACO algorithm is presented, which first divides all data into local clusters, calculates small TSP routes, and then assembles them into a complete TSP route. Simulation shows that the presented method improves the running speed of ACO by a factor of 200 when the data set exhibits the local clustering feature.
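
    The divide-and-rule idea can be sketched compactly: cluster the cities, route each cluster with a cheap heuristic standing in for the per-cluster ACO solver, and then splice the local routes together following an ordering of the cluster centroids. This is an illustrative sketch, not the authors' algorithm; KMeans, the nearest-neighbour sub-solver, and the cluster count are assumptions.

```python
# Sketch of the "divide and rule" idea: cluster points, route within each cluster,
# then concatenate the cluster routes. A nearest-neighbour heuristic stands in for
# the per-cluster ACO solver of the paper.
import numpy as np
from sklearn.cluster import KMeans

def nearest_neighbour_route(points, start=0):
    """Greedy tour over one set of points (placeholder for an ACO sub-solver)."""
    unvisited = list(range(len(points)))
    route = [unvisited.pop(start)]
    while unvisited:
        last = points[route[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        unvisited.remove(nxt)
        route.append(nxt)
    return route

def clustered_tsp(points, n_clusters=8, seed=0):
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(points)
    centroids = np.array([points[labels == k].mean(axis=0) for k in range(n_clusters)])
    cluster_order = nearest_neighbour_route(centroids)   # order the clusters themselves
    tour = []
    for k in cluster_order:
        idx = np.where(labels == k)[0]
        local = nearest_neighbour_route(points[idx])      # route inside the cluster
        tour.extend(idx[local])                           # splice into the global tour
    return tour

pts = np.random.default_rng(1).random((500, 2))
tour = clustered_tsp(pts)
```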

  13. Passive Safety Features for Small Modular Reactors

    International Nuclear Information System (INIS)

    Ingersoll, Daniel T.

    2010-01-01

    The rapid growth in the size and complexity of commercial nuclear power plants in the 1970s spawned an interest in smaller, simpler designs that are inherently or intrinsically safe through the use of passive design features. Several designs were developed, but none were ever built, although some of their passive safety features were incorporated into large commercial plant designs that are being planned or built today. In recent years, several reactor vendors have been actively redeveloping small modular reactor (SMR) designs with even greater use of passive features. Several designs incorporate the ultimate in passive safety: they completely eliminate specific accident initiators from the design. Other design features help to reduce the likelihood of an accident or help to mitigate an accident's consequences, should one occur. While some passive safety features are common to most SMR designs, irrespective of the coolant technology, other features are specific to water, gas, or liquid-metal cooled SMR designs. The extensive use of passive safety features in SMRs promises to make these plants highly robust, protecting both the general public and the owner/investor. Once demonstrated, these plants should allow nuclear power to be used confidently for a broader range of customers and applications than will be possible with large plants alone.

  14. A superconducting magnet mandrel with minimum symmetry laminations for proton therapy

    Science.gov (United States)

    Caspi, S.; Arbelaez, D.; Brouwer, L.; Dietderich, D. R.; Felice, H.; Hafalia, R.; Prestemon, S.; Robin, D.; Sun, C.; Wan, W.

    2013-08-01

    The size and weight of ion-beam cancer therapy gantries are frequently determined by a large aperture, curved, ninety degree, dipole magnet. The higher fields achievable with superconducting technology promise to greatly reduce the size and weight of this magnet and therefore also the gantry as a whole. This paper reports advances in the design of winding mandrels for curved, canted cosine-theta (CCT) magnets in the context of a preliminary magnet design for a proton gantry. The winding mandrel is integral to the CCT design and significantly affects the construction cost, stress management, winding feasibility, eddy current power losses, and field quality of the magnet. A laminated mandrel design using a minimum symmetry in the winding path is introduced and its feasibility demonstrated by a rapid prototype model. Piecewise construction of the mandrel using this laminated approach allows for increased manufacturing techniques and material choices. Sectioning the mandrel also reduces eddy currents produced during field changes accommodating the scan of beam energies during treatment. This symmetry concept can also greatly reduce the computational resources needed for 3D finite element calculations. It is shown that the small region of symmetry forming the laminations combined with periodic boundary conditions can model the entire magnet geometry disregarding the ends.

  15. Planetary tides during the Maunder sunspot minimum

    International Nuclear Information System (INIS)

    Smythe, C.M.; Eddy, J.A.

    1977-01-01

    Sun-centered planetary conjunctions and tidal potentials are here constructed for the AD1645 to 1715 period of sunspot absence, referred to as the 'Maunder Minimum'. These are found to be effectively indistinguishable from patterns of conjunctions and power spectra of tidal potential in the present era of a well established 11 year sunspot cycle. This places a new and difficult restraint on any tidal theory of sunspot formation. Problems arise in any direct gravitational theory due to the apparently insufficient forces and tidal heights involved. Proponents of the tidal hypothesis usually revert to trigger mechanisms, which are difficult to criticise or test by observation. Any tidal theory rests on the evidence of continued sunspot periodicity and the substantiation of a prolonged period of solar anomaly in the historical past. The 'Maunder Minimum' was the most drastic change in the behaviour of solar activity in the last 300 years; sunspots virtually disappeared for a 70 year period and the 11 year cycle was probably absent. During that time, however, the nine planets were all in their orbits, and planetary conjunctions and tidal potentials were indistinguishable from those of the present era, in which the 11 year cycle is well established. This provides good evidence against the tidal theory. The pattern of planetary tidal forces during the Maunder Minimum was reconstructed to investigate the possibility that the multiple planet forces somehow fortuitously cancelled at the time, that is that the positions of the slower moving planets in the 17th and early 18th centuries were such that conjunctions and tidal potentials were at the time reduced in number and force. There was no striking dissimilarity between the time of the Maunder Minimum and any period investigated. The failure of planetary conjunction patterns to reflect the drastic drop in sunspots during the Maunder Minimum casts doubt on the tidal theory of solar activity, but a more quantitative test

  16. 29 CFR 510.22 - Industries eligible for minimum wage phase-in.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Industries eligible for minimum wage phase-in. 510.22... REGULATIONS IMPLEMENTATION OF THE MINIMUM WAGE PROVISIONS OF THE 1989 AMENDMENTS TO THE FAIR LABOR STANDARDS ACT IN PUERTO RICO Classification of Industries § 510.22 Industries eligible for minimum wage phase-in...

  17. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    Science.gov (United States)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As a main part of this study, the Haralick texture descriptor has been applied with different spatial window sizes in the RGB and La*b* color spaces. In this way, spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared for various sample sizes by Support Vector Machines using the k-fold cross-validation method. According to the presented results, the separation accuracy for mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
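
    The kind of pipeline described (grey-level co-occurrence, i.e. Haralick-type, features computed over a square window around each pixel, then a Support Vector Machine evaluated with k-fold cross-validation) can be sketched as follows. The window size, grey-level quantization, toy image and labels are placeholders, not the study's data or exact feature set.

```python
# Sketch of a Haralick-style texture pipeline for pixel classification: GLCM features
# over a window around each candidate pixel, then an SVM with k-fold cross-validation.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

PROPS = ("contrast", "homogeneity", "energy", "correlation")

def window_features(gray_u8, centers, win=15, levels=64):
    """Haralick-type GLCM features from a win x win patch around each center pixel."""
    half, feats = win // 2, []
    img = (gray_u8 // (256 // levels)).astype(np.uint8)     # requantize grey levels
    for (r, c) in centers:
        patch = img[r - half:r + half + 1, c - half:c + half + 1]
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        feats.append([graycoprops(glcm, p).mean() for p in PROPS])
    return np.asarray(feats)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)       # stand-in image
centers = [(r, c) for r in range(20, 240, 20) for c in range(20, 240, 20)]
labels = rng.integers(0, 2, size=len(centers))                      # mitotic vs. normal (toy)

X = window_features(image, centers, win=15)
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)         # k-fold cross-validation
print(scores.mean())
```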

  18. Factors that affect micro-tooling features created by direct printing approach

    Science.gov (United States)

    Kumbhani, Mayur N.

    The current market requires faster-paced production of smaller, better, and improved products in shorter amounts of time. Traditional high-rate manufacturing processes such as hot embossing, injection molding, and compression molding use tooling to replicate features on products. Miniaturization of many products in the fields of biomedical, electronic, optical, and microfluidic devices is occurring on a daily basis. There is a constant need to produce cheaper and faster tooling that can be utilized by existing manufacturing processes. Traditionally, micron-sized tooling features are manufactured using processes such as micro-machining and electrical discharge machining (EDM). Because of the greater difficulty of producing smaller features and the longer production cycle times, various additive manufacturing approaches have been proposed, e.g. selective laser sintering (SLS), inkjet printing (3DP), and fused deposition modeling (FDM). Most of these approaches can produce net-shaped products from different materials such as metals, ceramics, or polymers. Several attempts have been made to produce tooling features using additive manufacturing approaches; most of the resulting tooling was not cost effective, and its life cycle was reported to be short. In this research, a method to produce tooling features using a direct printing approach, in which a highly filled feedstock is dispensed onto a substrate, was developed. Different natural binders, such as guar gum, xanthan gum, and sodium carboxymethyl cellulose (NaCMC), and their combinations were evaluated. The best binder combination was then used to evaluate the effect of different metal particle sizes (316L stainless steel (3 μm), 316 stainless steel (45 μm), and 304 stainless steel (45 μm)) on feature quality. Finally, the effect of direct printing process variables, such as dispensing tip internal diameter (500 μm and 333 μm) at different printing speeds, was evaluated.

  19. Classification of high resolution imagery based on fusion of multiscale texture features

    International Nuclear Information System (INIS)

    Liu, Jinxiu; Liu, Huiping; Lv, Ying; Xue, Xiaojuan

    2014-01-01

    In the classification of high-resolution data, combining texture features with spectral bands can effectively improve the classification accuracy. However, the window size, which is difficult to choose, is an important factor influencing overall accuracy in textural classification, and current approaches to image texture analysis depend on a single moving window, which ignores the different scale features of various land cover types. In this paper, we propose a new method based on the fusion of multiscale texture features to overcome these problems. The main steps of the new method are the classification of spectral/textural images with fixed window sizes from 3×3 to 15×15 and the comparison of all posterior probability values for every pixel; the largest probability value is assigned to the pixel, which is thereby automatically attributed to a certain land cover type. The proposed approach is tested on University of Pavia ROSIS data. The results indicate that the new method improves the classification accuracy compared to methods based on fixed-window-size textural classification
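
    The fusion rule amounts to a per-pixel argmax over the posterior probability maps produced at each window size, as sketched below; the random probability maps stand in for the per-window-size classification outputs.

```python
# Minimal sketch of the fusion rule: for each pixel, compare the class posteriors
# obtained with every window size (3x3 ... 15x15) and keep the class achieving the
# single highest posterior. The probability maps below are random placeholders.
import numpy as np

def fuse_multiscale(prob_maps):
    """prob_maps: array of shape (n_windows, n_classes, H, W) of posterior maps."""
    n_win, n_cls, h, w = prob_maps.shape
    flat = prob_maps.reshape(n_win * n_cls, h, w)
    best = flat.argmax(axis=0)            # index of the overall largest posterior
    return best % n_cls                   # recover the class label from that index

rng = np.random.default_rng(0)
window_sizes = [3, 5, 7, 9, 11, 13, 15]
maps = rng.random((len(window_sizes), 4, 64, 64))   # 4 land-cover classes, toy data
label_map = fuse_multiscale(maps)
```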

  20. 13 CFR 107.830 - Minimum duration/term of financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Minimum duration/term of financing... INVESTMENT COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.830 Minimum duration/term of financing. (a...

  1. 42 CFR 84.117 - Gas mask containers; minimum requirements.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Gas mask containers; minimum requirements. 84.117... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Gas Masks § 84.117 Gas mask containers; minimum requirements. (a) Gas masks shall be equipped with a substantial...

  2. Do Minimum Wages in Latin America and the Caribbean Matter? Evidence from 19 Countries

    DEFF Research Database (Denmark)

    Kristensen, Nicolai; Cunningham, Wendy

    of if and how minimum wages affect wage distributions in LAC countries. Although there is no single minimum wage institution in the LAC region, we find regional trends. Minimum wages affect the wage distribution in both the formal and, especially, the informal sector, both at the minimum wage and at multiples...... of the minimum. The minimum does not uniformly benefit low-wage workers: in countries where the minimum wage is relatively low compared to mean wages, the minimum wage affects the more disadvantaged segments of the labor force, namely informal sector workers, women, young and older workers, and the low skilled...

  3. Small-size pedestrian detection in large scene based on fast R-CNN

    Science.gov (United States)

    Wang, Shengke; Yang, Na; Duan, Lianghua; Liu, Lu; Dong, Junyu

    2018-04-01

    Pedestrian detection is a canonical sub-problem of object detection that has been in high demand in recent years. Although recent deep learning object detectors such as Fast/Faster R-CNN have shown excellent performance for general object detection, they have had limited success for small-size pedestrian detection in large-view scenes. We find that the insufficient resolution of feature maps leads to unsatisfactory accuracy when handling small instances. In this paper, we investigate issues involving Fast R-CNN for pedestrian detection. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection based on Fast R-CNN, employing the DPM detector to generate accurate proposals and training a Fast R-CNN style network that uses skip connections to concatenate features from different layers, thereby addressing the coarseness of the feature maps. Accuracy is improved for small-size pedestrian detection in real large scenes.
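
    The skip-connection idea can be illustrated in a few lines: upsample a coarse, deep feature map to the resolution of a finer, shallower one and concatenate them channel-wise before the detection head, so that small pedestrians are represented at adequate resolution. The shapes and layer choices below are assumptions, not the authors' network definition.

```python
# Minimal sketch of skip-connection feature concatenation for small-object detection.
import torch
import torch.nn.functional as F

def fuse_features(shallow, deep):
    """shallow: (N, C1, H, W) fine map; deep: (N, C2, H/2, W/2) coarse map."""
    deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="bilinear",
                            align_corners=False)      # bring the coarse map up to size
    return torch.cat([shallow, deep_up], dim=1)        # (N, C1 + C2, H, W)

shallow = torch.randn(1, 256, 64, 64)   # e.g. a conv3-level feature map (assumed)
deep = torch.randn(1, 512, 32, 32)      # e.g. a conv5-level feature map (assumed)
fused = fuse_features(shallow, deep)    # fed to RoI pooling / the detection head
```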

  4. 29 CFR 552.100 - Application of minimum wage and overtime provisions.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Application of minimum wage and overtime provisions. 552... § 552.100 Application of minimum wage and overtime provisions. (a)(1) Domestic service employees must receive for employment in any household a minimum wage of not less than that required by section 6(a) of...

  5. Significance of the impact of motion compensation on the variability of PET image features

    Science.gov (United States)

    Carles, M.; Bach, T.; Torres-Espallardo, I.; Baltas, D.; Nestle, U.; Martí-Bonmatí, L.

    2018-03-01

    In lung cancer, quantification by positron emission tomography/computed tomography (PET/CT) imaging presents challenges due to respiratory movement. Our primary aim was to study the impact of motion compensation implied by retrospectively gated (4D)-PET/CT on the variability of PET quantitative parameters. Its significance was evaluated by comparison with the variability due to (i) the voxel size in image reconstruction and (ii) the voxel size in image post-resampling. The method employed for feature extraction was chosen based on the analysis of (i) the effect of discretization of the standardized uptake value (SUV) on complementarity between texture features (TF) and conventional indices, (ii) the impact of the segmentation method on the variability of image features, and (iii) the variability of image features across the time-frame of 4D-PET. Thirty-one PET-features were involved. Three SUV discretization methods were applied: a constant width (SUV resolution) of the resampling bin (method RW), a constant number of bins (method RN) and RN on the image obtained after histogram equalization (method EqRN). The segmentation approaches evaluated were 40% of SUVmax and the contrast oriented algorithm (COA). Parameters derived from 4D-PET images were compared with values derived from the PET image obtained for (i) the static protocol used in our clinical routine (3D) and (ii) the 3D image post-resampled to the voxel size of the 4D image and PET image derived after modifying the reconstruction of the 3D image to comprise the voxel size of the 4D image. Results showed that TF complementarity with conventional indices was sensitive to the SUV discretization method. In the comparison of COA and 40% contours, despite the values not being interchangeable, all image features showed strong linear correlations (r > 0.91, p ≪ 0.001). Across the time-frames of 4D-PET, all image features followed a normal distribution in most patients. For our patient cohort, the
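
    The three SUV discretization schemes compared in the study can be sketched as follows; the bin width, bin count and the rank-based equalization step are illustrative choices, not necessarily those used in the paper.

```python
# Sketch of the three SUV discretization schemes: constant bin width (RW), constant
# number of bins (RN), and RN applied after histogram equalization (EqRN).
import numpy as np

def discretize_rw(suv, bin_width=0.5):
    """RW: resample with a fixed SUV resolution (constant bin width)."""
    return np.floor(suv / bin_width).astype(int)

def discretize_rn(suv, n_bins=64):
    """RN: resample onto a fixed number of bins between SUVmin and SUVmax."""
    edges = np.linspace(suv.min(), suv.max(), n_bins + 1)
    return np.clip(np.digitize(suv, edges[1:-1]), 0, n_bins - 1)

def discretize_eqrn(suv, n_bins=64):
    """EqRN: histogram-equalize (simple rank transform) first, then apply RN."""
    ranks = suv.ravel().argsort().argsort().reshape(suv.shape)
    return discretize_rn(ranks / ranks.max(), n_bins)

suv_roi = np.random.default_rng(0).gamma(2.0, 2.0, size=(32, 32, 16))  # toy SUV volume
codes_rw, codes_rn, codes_eq = (discretize_rw(suv_roi),
                                discretize_rn(suv_roi),
                                discretize_eqrn(suv_roi))
```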

  6. Distinguishing mixed quantum states: Minimum-error discrimination versus optimum unambiguous discrimination

    International Nuclear Information System (INIS)

    Herzog, Ulrike; Bergou, Janos A.

    2004-01-01

    We consider two different optimized measurement strategies for the discrimination of nonorthogonal quantum states. The first is ambiguous discrimination with a minimum probability of inferring an erroneous result, and the second is unambiguous, i.e., error-free, discrimination with a minimum probability of getting an inconclusive outcome, where the measurement fails to give a definite answer. For distinguishing between two mixed quantum states, we investigate the relation between the minimum-error probability achievable in ambiguous discrimination, and the minimum failure probability that can be reached in unambiguous discrimination of the same two states. The latter turns out to be at least twice as large as the former for any two given states. As an example, we treat the case where the state of the quantum system is known to be, with arbitrary prior probability, either a given pure state, or a uniform statistical mixture of any number of mutually orthogonal states. For this case we derive an analytical result for the minimum probability of error and perform a quantitative comparison with the minimum failure probability
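
    For two mixed states the quantities being compared have standard closed forms; in conventional notation (which may differ in detail from the paper's), the minimum-error (Helstrom) probability and the stated relation to the minimum failure probability of unambiguous discrimination read:

```latex
% Two mixed states \rho_1, \rho_2 occurring with prior probabilities p_1, p_2.
% Helstrom bound for minimum-error (ambiguous) discrimination:
P_E^{\min} = \tfrac{1}{2}\Bigl(1 - \bigl\lVert p_1\rho_1 - p_2\rho_2 \bigr\rVert_1\Bigr),
\qquad \lVert A \rVert_1 = \operatorname{Tr}\sqrt{A^{\dagger}A}.
% Relation to the minimum failure probability of optimum unambiguous discrimination:
Q_F^{\min} \;\ge\; 2\,P_E^{\min}.
```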

  7. Minimum alcohol pricing policies in practice: A critical examination of implementation in Canada.

    Science.gov (United States)

    Thompson, Kara; Stockwell, Tim; Wettlaufer, Ashley; Giesbrecht, Norman; Thomas, Gerald

    2017-02-01

    There is an interest globally in using Minimum Unit Pricing (MUP) of alcohol to promote public health. Canada is the only country to have both implemented and evaluated some forms of minimum alcohol prices, albeit in ways that fall short of MUP. To inform these international debates, we describe the degree to which minimum alcohol prices in Canada meet recommended criteria for being an effective public health policy. We collected data on the implementation of minimum pricing with respect to (1) breadth of application, (2) indexation to inflation and (3) adjustments for alcohol content. Some jurisdictions have implemented recommended practices with respect to minimum prices; however, the harm reduction potential of minimum pricing is not fully realised due to incomplete implementation. Key concerns include the following: (1) the exclusion of minimum prices for several beverage categories, (2) minimum prices below the recommended minima and (3) a lack of regular adjustment for inflation or alcohol content. We provide recommendations for best practices when implementing minimum pricing policy.

  8. Relation between the diffraction pattern visibility and dispersion of particle sizes in an ektacytometer

    International Nuclear Information System (INIS)

    Nikitin, Sergei Yu; Lugovtsov, Andrei E; Priezzhev, A V; Ustinov, V D

    2011-01-01

    We have calculated the angular distribution of the light intensity in the diffraction pattern arising upon scattering of a laser beam on a suspension of red blood cells in an ektacytometer. We have estimated the diffraction pattern visibility in the region of the first diffraction minimum and the first diffraction maximum as a function of particle size variation. It is shown that in this fragment of the diffraction pattern the visibility already decreases twofold when the standard deviation of the particle size equals 8% of the average value.
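
    The size-dispersion effect can be reproduced numerically with a generic Fraunhofer model: average circular-aperture (Airy) patterns over a Gaussian spread of diameters and evaluate the visibility (I_max − I_min)/(I_max + I_min) between the first minimum and the first secondary maximum. This sketch ignores cell deformation (cells in an ektacytometer are elongated), so it only illustrates the mechanism discussed above.

```python
# Generic Fraunhofer sketch: average Airy patterns over a Gaussian spread of particle
# diameters and measure the visibility around the first minimum/secondary maximum.
import numpy as np
from scipy.special import j1

def airy_intensity(theta, diameter, wavelength=0.633e-6):
    """Fraunhofer pattern of a circular particle of the given diameter (angle in rad)."""
    x = np.pi * diameter * np.sin(theta) / wavelength
    return (2 * j1(x) / x) ** 2

def averaged_pattern(theta, mean_d, rel_sigma, n_samples=2000, seed=0):
    """Average the pattern over a Gaussian distribution of particle diameters."""
    d = np.random.default_rng(seed).normal(mean_d, rel_sigma * mean_d, n_samples)
    return airy_intensity(theta[:, None], d[None, :]).mean(axis=1)

theta = np.linspace(1e-3, 0.25, 4000)                        # scattering angle, radians
I = averaged_pattern(theta, mean_d=7e-6, rel_sigma=0.08)     # 8 % size dispersion

dI = np.diff(I)
i_min = int(np.argmax(dI > 0))                   # first local minimum of the pattern
i_max = i_min + int(np.argmax(dI[i_min:] < 0))   # first secondary maximum after it
visibility = (I[i_max] - I[i_min]) / (I[i_max] + I[i_min])
print(visibility)
```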

  9. 19 CFR 144.33 - Minimum quantities to be withdrawn.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Minimum quantities to be withdrawn. 144.33 Section 144.33 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT... Warehouse § 144.33 Minimum quantities to be withdrawn. Unless by special authority of the Commissioner of...

  10. The energetics and structure of nickel clusters: Size dependence

    International Nuclear Information System (INIS)

    Cleveland, C.L.; Landman, U.

    1991-01-01

    The energetics of nickel clusters over a broad size range are explored within the context of the many-body potentials obtained via the embedded atom method. Unconstrained local minimum energy configurations are found for single crystal clusters consisting of various truncations of the cube or octahedron, with and without (110) faces, as well as some monotwinnings of these. We also examine multitwinned structures such as icosahedra and various truncations of the decahedron, such as those of Ino and Marks. These clusters range in size from 142 to over 5000 atoms. As in most such previous studies, such as those on Lennard-Jones systems, we find that icosahedral clusters are favored for the smallest cluster sizes and that Marks' decahedra are favored for intermediate sizes (all our atomic systems larger than about 2300 atoms). Of course very large clusters will be single crystal face-centered-cubic (fcc) polyhedra: the onset of optimally stable single-crystal nickel clusters is estimated to occur at 17 000 atoms. We find, via comparisons to results obtained via atomistic calculations, that simple macroscopic expressions using accurate surface, strain, and twinning energies can usefully predict energy differences between different structures even for clusters of much smaller size than expected. These expressions can be used to assess the relative energetic merits of various structural motifs and their dependence on cluster size

  11. Laboratory simulation of infrared astrophysical features

    International Nuclear Information System (INIS)

    Rose, L.A.

    1979-01-01

    Laboratory infrared emission and absorption spectra have been taken of terrestrial silicates, meteorites and lunar soils in the form of micrometer and sub-micrometer grains. The emission spectra were taken in a way that imitates telescopic observations. The purpose was to see which materials best simulate the 10 μm astrophysical feature. The emission spectra of dunite, fayalite and Allende give a good fit to the 10 μm broadband emission feature of comets Bennett and Kohoutek. A study of the effect of grain size on the presence of the 10 μm emission features of dunite shows that for particles larger than 37 μm no feature is seen. The emission spectrum of the Murray meteorite, a Type 2 carbonaceous chondrite, is quite similar to the intermediate resolution spectrum of comet Kohoutek in the 10 μm region. Hydrous silicates or amorphous magnesium silicates in combination with high-temperature condensates, such as olivine or anorthite, would yield spectra that match the intermediate resolution spectrum of comet Kohoutek in the 10 μm region. Glassy olivine and glassy anorthite in approximately equal proportions would also give a spectrum that is a good fit to the cometary 10 μm feature. (Auth.)

  12. A Minimum Spanning Tree Representation of Anime Similarities

    OpenAIRE

    Wibowo, Canggih Puspo

    2016-01-01

    In this work, a new way to represent Japanese animation (anime) is presented. We applied a minimum spanning tree to show the relations between anime. The distance between anime is calculated through three similarity measurements, namely crew, score histogram, and topic similarities. Finally, centralities are also computed to reveal the most significant anime. The result shows that the minimum spanning tree can be used to determine anime similarity. Furthermore, by using centralities c...
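
    The construction can be sketched in a few lines: combine the three similarity matrices into a single distance matrix, take its minimum spanning tree, and rank the nodes by a centrality measure. Equal weighting of the similarities, the choice of betweenness centrality, and the random data are assumptions made only for illustration.

```python
# Sketch: combine several similarity matrices into one distance matrix, take its
# minimum spanning tree, and rank nodes by a centrality measure.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 30                                             # number of anime titles (toy)
crew_sim, score_sim, topic_sim = (rng.random((n, n)) for _ in range(3))

def symmetrize(m):
    m = (m + m.T) / 2
    np.fill_diagonal(m, 1.0)
    return m

similarity = (symmetrize(crew_sim) + symmetrize(score_sim) + symmetrize(topic_sim)) / 3
distance = 1.0 - similarity                        # turn similarity into a distance

G = nx.from_numpy_array(distance)                  # complete weighted graph
mst = nx.minimum_spanning_tree(G, weight="weight")
centrality = nx.betweenness_centrality(mst, weight="weight")
most_central = max(centrality, key=centrality.get) # the "most significant" node
```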

  13. Start of Eta Car's X-ray Minimum

    Science.gov (United States)

    Corcoran, Michael F.; Liburd, Jamar; Hamaguchi, Kenji; Gull, Theodore; Madura, Thomas; Teodoro, Mairan; Moffat, Anthony; Richardson, Noel; Russell, Chris; Pollock, Andrew

    2014-01-01

    Analysis of Eta Car's X-ray spectrum in the 2-10 keV band using quicklook data from the X-Ray Telescope on Swift shows that the flux on July 30, 2014 was (4.9 ± 2.0) × 10^-12 erg s^-1 cm^-2. This flux is nearly equal to the X-ray minimum flux seen by RXTE in 2009, 2003.5, and 1998, and indicates that Eta Car has reached its X-ray minimum, as expected based on the 2024-day period derived from previous 2-10 keV observations with RXTE.

  14. The Impact Of Minimum Wage On Employment Level And ...

    African Journals Online (AJOL)

    This research work has been carried out to analyze the impact of the minimum wage on employment level and productivity in Nigeria. A brief literature review on wages and their determination is provided. Models of minimum wage effects are examined. This includes research work done by different economists analyzing it ...

  15. 30 CFR 77.606-1 - Rubber gloves; minimum requirements.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Rubber gloves; minimum requirements. 77.606-1... COAL MINES Trailing Cables § 77.606-1 Rubber gloves; minimum requirements. (a) Rubber gloves (lineman's gloves) worn while handling high-voltage trailing cables shall be rated at least 20,000 volts and shall...

  16. Do minimum wages reduce poverty? Evidence from Central America ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    In all three countries, these multiple minimum wages are negotiated among representatives of the central government, labour unions and the chambers of commerce. Minimum wage legislation applies to all private-sector employees, but in all three countries a large part of the work force is self-employed or works as unpaid ...

  17. The Minimum Wage, Restaurant Prices, and Labor Market Structure

    Science.gov (United States)

    Aaronson, Daniel; French, Eric; MacDonald, James

    2008-01-01

    Using store-level and aggregated Consumer Price Index data, we show that restaurant prices rise in response to minimum wage increases under several sources of identifying variation. We introduce a general model of employment determination that implies minimum wage hikes cause prices to rise in competitive labor markets but potentially fall in…

  18. Favouring Small and Medium Sized Enterprises with Directive 2014/24/EU

    OpenAIRE

    Trybus, Martin; Andrecka, Marta

    2017-01-01

    This article argues that the four main measures introduced in the 2014 reform of the Procurement Directives to promote Small and Medium-Sized Enterprises (SMEs) cannot be classified as measures favouring SMEs. A measure favours SMEs when it compromises the main objectives of competition, non-discrimination and value for money. The discussion covers the regimes on the division of larger contracts into lots, the European Single Procurement Document (ESPD), minimum turnover requirements, and direc...

  19. Popular Nutrition-Related Mobile Apps: A Feature Assessment.

    Science.gov (United States)

    Franco, Rodrigo Zenun; Fallaize, Rosalind; Lovegrove, Julie A; Hwang, Faustina

    2016-08-01

    A key challenge in human nutrition is the assessment of usual food intake. This is of particular interest given recent proposals of eHealth personalized interventions. The adoption of mobile phones has created an opportunity for assessing and improving nutrient intake as they can be used for digitalizing dietary assessments and providing feedback. In the last few years, hundreds of nutrition-related mobile apps have been launched and installed by millions of users. This study aims to analyze the main features of the most popular nutrition apps and to compare their strategies and technologies for dietary assessment and user feedback. Apps were selected from the two largest online stores of the most popular mobile operating systems-the Google Play Store for Android and the iTunes App Store for iOS-based on popularity as measured by the number of installs and reviews. The keywords used in the search were as follows: calorie(s), diet, diet tracker, dietician, dietitian, eating, fit, fitness, food, food diary, food tracker, health, lose weight, nutrition, nutritionist, weight, weight loss, weight management, weight watcher, and ww calculator. The inclusion criteria were as follows: English language, minimum number of installs (1 million for Google Play Store) or reviews (7500 for iTunes App Store), relation to nutrition (ie, diet monitoring or recommendation), and independence from any device (eg, wearable) or subscription. A total of 13 apps were classified as popular for inclusion in the analysis. Nine apps offered prospective recording of food intake using a food diary feature. Food selection was available via text search or barcode scanner technologies. Portion size selection was only textual (ie, without images or icons). All nine of these apps were also capable of collecting physical activity (PA) information using self-report, the global positioning system (GPS), or wearable integrations. Their outputs focused predominantly on energy balance between dietary

  20. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    Full Text Available The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation differs between study designs. The article describes in detail the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.
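
    For reference, the conventional per-group formulas behind such calculations for a two-arm randomized trial are reproduced below (standard forms with type I error α and power 1 − β; the article should be consulted for its exact notation and worked examples):

```latex
% Continuous primary outcome with common standard deviation \sigma and minimal
% clinically relevant difference \delta (sample size per group):
n = \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\delta^{2}}
% Binary primary outcome with anticipated proportions p_1 and p_2 (per group):
n = \frac{(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\bigl[p_1(1-p_1) + p_2(1-p_2)\bigr]}{(p_1 - p_2)^{2}}
```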

  1. Application of Steenbeck's minimum principle for three-dimensional modelling of DC arc plasma torches

    International Nuclear Information System (INIS)

    Li Heping; Pfender, E; Chen, Xi

    2003-01-01

    In this paper, physical/mathematical models for the three-dimensional, quasi-steady modelling of the plasma flow and heat transfer inside a non-transferred DC arc plasma torch are described in detail. Steenbeck's minimum principle (Finkelnburg W and Maecker H 1956 Electric arcs and thermal plasmas Encyclopedia of Physics vol XXII (Berlin: Springer)) is employed to determine the axial position of the anode arc-root at the anode surface. This principle postulates a minimum arc voltage for a given arc current, working gas flow rate, and torch configuration. The modelling results show that the temperature and flow fields inside the DC non-transferred arc plasma torch exhibit significant three-dimensional features. The anode arc-root attachment position and arc shape predicted by employing Steenbeck's minimum principle are reasonably consistent with experimental observations. The thermal efficiency and the torch power distribution are also calculated in this paper. The results show that the thermal efficiency of the torch always ranges from 30% to 45%, i.e. more than half of the total power input is taken away by the cathode and anode cooling water. The special heat transfer mechanisms at the plasma-anode interface, such as electron condensation, electron enthalpy and radiative heat transfer from the bulk plasma to the anode inner surface, are taken into account in this paper. The calculated results show that besides convective heat transfer, the contributions of electron condensation, electron enthalpy and radiation to the anode heat transfer are also important (∼30% for the parameter range of interest in this paper). Additional effects, such as the non-local thermodynamic equilibrium plasma state near the electrodes, the transient phenomena, etc, need to be considered in future physical/mathematical models, including corresponding measurements

  2. CT Imaging Features in the Characterization of Non-Growing Solid Pulmonary Nodules in Non-Smokers

    International Nuclear Information System (INIS)

    Perandini, Simone; Soardi, Gian Alberto; Motton, Massimiliano; Augelli, Raffaele; Zantedeschi, Lisa; Montemezzi, Stefania

    2016-01-01

    A disappearing or persistent solid pulmonary nodule is a neglected clinical entity that still poses serious interpretative issues to date. Traditional knowledge deriving from previous reports suggests particular features, such as smooth edges or regular shape, to be significantly associated with benignity. A large number of benign nodules are reported among smokers in lung cancer screening programmes. The aim of this single-center retrospective study was to correlate specific imaging features to verify whether traditional knowledge, as well as more recent findings regarding benign SPNs, can be considered reliable in a current case series of nodules collected in a non-smoker cohort of patients. Fifty-three solid SPNs proven as non-growing during follow-up imaging were analyzed with regard to their imaging features at thin-section CT, their predicted malignancy risk according to three major risk assessment models, minimum density analysis and contrast-enhanced CT in the respective subgroups of nodules that underwent such tests. Eleven nodules disappeared during follow-up, 29 showed volume loss and 16 had a VDT of 1121 days or higher. There were 48 nodules located peripherally (85.71%). Evaluation of the enhancement after contrast media (n=29) showed mean enhancement ±SD of 25.72±35.03 HU, median of 18 HU, ranging from 0 to 190 HU. Minimum density assessment (n=30) showed mean minimum HU ±SD of −28.27±47.86 HU, median of −25 HU, ranging from −144 to 68 HU. Mean malignancy risk ±SD was 15.05±26.69% for the BIMC model, 17.22±19.00% for the Mayo Clinic model and 19.07±33.16% for Gurney’s model. Our analysis suggests caution in using traditional knowledge when dealing with current small solid peripheral indeterminate SPNs and highlights how quantitative growth at follow-up should be the cornerstone of characterization

  3. Sizing through simulation of systems for photovoltaic solar energy applied to rural electrification

    International Nuclear Information System (INIS)

    Rodríguez‐Borges, Ciaddy Gina; Sarmiento‐Sera, Antonio

    2011-01-01

    The present work is based on a sizing method that simulates the energy behavior of photovoltaic systems, applied to rural electrification in regions far from the electric grid. Under- and over-sized systems are defined, and one particular case is analyzed in which two energy options with different qualities of electric service are considered and each option is valued economically, with the corresponding justification. The quality level is established by the fault index of the electricity service due to lack of energy in the batteries, together with the number of days of energy autonomy of the system. In conclusion, for under-sized systems with an established quality level of service, multiple sizing solutions exist; under certain conditions the systems with a higher quality of service are not always the most expensive, and a minimum-cost sizing can be found by simulation methods. (author)
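
    The simulation-based sizing idea can be illustrated with a toy daily energy-balance loop: for each candidate array/battery size, march through a year of production and demand, track the battery state of charge, and record the fraction of days on which the load cannot be served (the fault index); the cheapest candidate meeting the required quality level is then selected. All profiles, efficiencies and unit costs below are assumptions for illustration only.

```python
# Toy energy-balance simulation of the sizing idea: compute the loss-of-load ("fault")
# index for candidate PV array / battery sizes and pick the cheapest feasible one.
import numpy as np

def fault_index(pv_kwp, batt_kwh, daily_load_kwh, irradiation_kwh_m2, perf_ratio=0.75):
    """Fraction of days on which PV plus battery cannot cover the daily load."""
    soc, failures = batt_kwh, 0
    for g in irradiation_kwh_m2:                      # one irradiation value per day
        balance = pv_kwp * g * perf_ratio - daily_load_kwh
        if soc + balance < 0:
            failures += 1                             # energy lack in the battery
        soc = min(batt_kwh, max(0.0, soc + balance))  # clamp state of charge
    return failures / len(irradiation_kwh_m2)

rng = np.random.default_rng(0)
irradiation = np.clip(rng.normal(4.5, 1.5, 365), 0.5, None)    # kWh/m2/day, toy year

cost = lambda pv, bt: 900 * pv + 150 * bt                      # assumed unit costs
candidates = [(pv, bt) for pv in np.arange(0.2, 2.01, 0.1) for bt in range(1, 11)]
feasible = [(cost(pv, bt), pv, bt) for pv, bt in candidates
            if fault_index(pv, bt, 2.0, irradiation) <= 0.02]  # required quality level
print(min(feasible))                                           # minimum-cost sizing
```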

  4. Application of Minimum-time Optimal Control System in Buck-Boost Bi-linear Converters

    Directory of Open Access Journals (Sweden)

    S. M. M. Shariatmadar

    2017-08-01

    Full Text Available In this study, the theory of minimum-time optimal control of buck-boost bi-linear converters is described, so that output voltage regulation is achieved in minimum time. For this purpose, Pontryagin's Minimum Principle is applied to find the optimal switching level according to minimum-time optimal control rules. The results revealed that by utilizing an optimal switching level instead of classical switching patterns, output voltage regulation is carried out in minimum time. However, the transient energy index of increased overvoltage is significantly reduced in order to attain minimum-time optimal control at reduced output load. Laboratory results were used to verify the numerical simulations.

  5. The effect of gamma-enhancing binaural beats on the control of feature bindings.

    Science.gov (United States)

    Colzato, Lorenza S; Steenbergen, Laura; Sellaro, Roberta

    2017-07-01

    Binaural beats represent the auditory experience of an oscillating sound that occurs when two sounds with neighboring frequencies are presented to one's left and right ear separately. Binaural beats have been shown to impact information processing via their putative role in increasing neural synchronization. Recent studies of feature-repetition effects demonstrated interactions between perceptual features and action-related features: repeating only some, but not all features of a perception-action episode hinders performance. These partial-repetition (or binding) costs point to the existence of temporary episodic bindings (event files) that are automatically retrieved by repeating at least one of their features. Given that neural synchronization in the gamma band has been associated with visual feature bindings, we investigated whether the impact of binaural beats extends to the top-down control of feature bindings. Healthy adults listened to gamma-frequency (40 Hz) binaural beats or to a constant tone of 340 Hz (control condition) for ten minutes before and during a feature-repetition task. While the size of visuomotor binding costs (indicating the binding of visual and action features) was unaffected by the binaural beats, the size of visual feature binding costs (which refer to the binding between the two visual features) was considerably smaller during gamma-frequency binaural beats exposure than during the control condition. Our results suggest that binaural beats enhance selectivity in updating episodic memory traces and further strengthen the hypothesis that neural activity in the gamma band is critically associated with the control of feature binding.

  6. CT dose survey in adults: what sample size for what precision?

    International Nuclear Information System (INIS)

    Taylor, Stephen; Muylem, Alain van; Howarth, Nigel; Gevenois, Pierre Alain; Tack, Denis

    2017-01-01

    To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). Sample size ensuring CI95/med ≤ 10 %, ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times its actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)

  7. CT dose survey in adults: what sample size for what precision?

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)

    2017-01-15

    To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). Sample size ensuring CI95/med ≤ 10 %, ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times its actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
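
    The dependence of survey precision on sample size can be explored with a simple resampling sketch in the spirit of the study: draw samples of increasing size from a dose population and measure the width of the 95% confidence interval of the sample mean as a percentage of the median. The lognormal population below is only a stand-in for real CTDIvol or DLP data.

```python
# Resampling sketch: how does the precision of a dose survey (95 % CI of the sample
# mean, expressed as a percentage of the median) shrink as the sample size grows?
import numpy as np

rng = np.random.default_rng(0)
population = rng.lognormal(mean=2.0, sigma=0.5, size=20000)   # toy CTDIvol population
median = np.median(population)

def ci95_over_median(sample_size, n_draws=2000):
    means = np.array([rng.choice(population, sample_size, replace=False).mean()
                      for _ in range(n_draws)])
    lo, hi = np.percentile(means, [2.5, 97.5])
    return 100.0 * (hi - lo) / median        # CI width as a % of the population median

for n in (10, 20, 50, 100, 300, 900):
    print(n, round(ci95_over_median(n), 1))  # precision improves roughly as 1/sqrt(n)
```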

  8. Genome Size Dynamics and Evolution in Monocots

    Directory of Open Access Journals (Sweden)

    Ilia J. Leitch

    2010-01-01

    Full Text Available Monocot genomic diversity includes striking variation at many levels. This paper compares various genomic characters (e.g., range of chromosome numbers and ploidy levels, occurrence of endopolyploidy, GC content, chromosome packaging and organization, genome size) between monocots and the remaining angiosperms to discern just how distinctive monocot genomes are. One of the most notable features of monocots is their wide range and diversity of genome sizes, including the species with the largest genome so far reported in plants. This genomic character is analysed in greater detail, within a phylogenetic context. By surveying available genome size and chromosome data it is apparent that different monocot orders follow distinctive modes of genome size and chromosome evolution. Further insights into genome size evolution and dynamics were obtained using statistical modelling approaches to reconstruct the ancestral genome size at key nodes across the monocot phylogenetic tree. Such approaches reveal that while the ancestral genome size of all monocots was small (1C = 1.9 pg), there have been several major increases and decreases during monocot evolution. In addition, notable increases in the rates of genome size evolution were found in Asparagales and Poales compared with other monocot lineages.

  9. On the Minimum Cable Tensions for the Cable-Based Parallel Robots

    Directory of Open Access Journals (Sweden)

    Peng Liu

    2014-01-01

    Full Text Available This paper investigates the minimum cable tension distributions in the workspace for cable-based parallel robots to find out more information on their stability. First, the kinematic model of a cable-based parallel robot is derived based on the wrench matrix. Then, a noniterative polynomial-based optimization algorithm with a proper optimal objective function is presented based on convex optimization theory, in which the minimum cable tension at any pose is determined. Additionally, three performance indices are proposed to show the distributions of the minimum cable tensions in a specified region of the workspace. Importantly, the three performance indices can be used to evaluate the stability of cable-based parallel robots. Furthermore, a new workspace, the Specified Minimum Cable Tension Workspace (SMCTW), is introduced, within which all the minimum tensions exceed a specified value, therefore meeting the specified stability requirement. Finally, a camera robot driven in parallel by four cables for aerial panoramic photographing is selected to illustrate the distributions of the minimum cable tensions in the workspace and the relationship between the three performance indices and the stability.
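
    The feasibility question behind such tension indices can be illustrated with a small linear program: given the structure (wrench) matrix A and an external wrench w, find cable tensions t ≥ t_min satisfying static equilibrium while minimizing the largest tension. The planar four-cable geometry and the LP formulation below are illustrative only; the paper itself uses a noniterative polynomial-based convex method.

```python
# Illustrative convex (LP) tension-distribution problem for a planar point mass driven
# by four cables: satisfy static equilibrium with tensions above a lower bound while
# minimizing the largest tension.
import numpy as np
from scipy.optimize import linprog

anchors = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)  # cable exit points
pose = np.array([4.0, 3.0])                                            # end-effector position
w_ext = np.array([0.0, -50.0])                                         # external wrench

u = anchors - pose
A = (u / np.linalg.norm(u, axis=1, keepdims=True)).T     # 2 x 4 structure (wrench) matrix

n = A.shape[1]
t_min = 5.0                      # required minimum cable tension (stability margin)
# Variables [t_1 ... t_n, s]: minimize s with t_i <= s, A t = -w_ext, t_i >= t_min.
c = np.r_[np.zeros(n), 1.0]
A_ub = np.hstack([np.eye(n), -np.ones((n, 1))])
b_ub = np.zeros(n)
A_eq = np.hstack([A, np.zeros((A.shape[0], 1))])
b_eq = -w_ext
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(t_min, None)] * n + [(0, None)])
tensions = res.x[:n]
print(tensions, tensions.min())
```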

  10. Energy and environmental norms on Minimum Vital Flux

    International Nuclear Information System (INIS)

    Maran, S.

    2008-01-01

    By the end of the year, the recommendations on the Minimum Vital Flux will come into force, and operators of hydroelectric power plants will be required to release part of the water from their diversions in order to protect river ecosystems. In this article we discuss the major energy and environmental consequences of these rules, report some quantitative evaluations, and discuss proposals for overcoming the weaknesses of the approach used to estimate the Minimum Vital Flux. [it]

  11. MINIMUM BRACING STIFFNESS FOR MULTI-COLUMN SYSTEMS: THEORY

    OpenAIRE

    ARISTIZÁBAL-OCHOA, J. DARÍO

    2011-01-01

    A method that determines the minimum bracing stiffness required by a multi-column elastic system to achieve non-sway buckling conditions is proposed. Equations that evaluate the required minimum stiffness of the lateral and torsional bracings and the corresponding "braced" critical buckling load for each column of the story level are derived using the modified stability functions. The following effects are included: 1) the types of end connections (rigid, semirigid, and simple); 2) the bluepr...

  12. Towards a mathematical foundation of minimum-variance theory

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, Sussex University, Brighton (United Kingdom); Zhang Kewei [SMS, Sussex University, Brighton (United Kingdom); Wei Gang [Mathematical Department, Baptist University, Hong Kong (China)

    2002-08-30

    The minimum-variance theory, which accounts for arm and eye movements with noisy signal inputs, was proposed by Harris and Wolpert (1998 Nature 394 780-4). Here we present a detailed theoretical analysis and obtain analytical solutions of the theory. Furthermore, we propose a new version of the minimum-variance theory, which is more realistic for a biological system. For the new version we show numerically that the variance is considerably reduced. (author)

  13. Minimum Wage Policy and Country’s Technical Efficiency

    OpenAIRE

    Karim, Mohd Zaini Abd; Chan, Sok-Gee; Hassan, Sallahuddin

    2016-01-01

    Recently, the government has decided that Malaysia would introduce a minimum wage policy. However, some quarters argued against the idea of a nationwide minimum wage asserting that it will lead to an increase in the cost of doing business and thus will hurt Malaysian competitiveness. Although standard economic theory unambiguously implies that wage floors have a negative impact on employment, the existing empirical literature is not so clear. Some studies have found the expected negative impa...

  14. Anesthesiologists' perceptions of minimum acceptable work habits of nurse anesthetists.

    Science.gov (United States)

    Logvinov, Ilana I; Dexter, Franklin; Hindman, Bradley J; Brull, Sorin J

    2017-05-01

    Work habits are non-technical skills that are an important part of job performance. Although non-technical skills are usually evaluated on a relative basis (i.e., "grading on a curve"), the validity of evaluation on an absolute basis (i.e., a "minimum passing score") needs to be determined. Survey and observational study. The theme of "work habits" was assessed using a modification of Dannefer et al.'s 6-item scale, with scores ranging from 1 (lowest performance) to 5 (highest performance). E-mail invitations were sent to all consultant and fellow anesthesiologists at Mayo Clinic in Florida, Arizona, and Minnesota. Because work habits expectations can be generational, the survey was designed for adjustment based on all invited (responding or non-responding) anesthesiologists' year of graduation from residency. The overall mean±standard deviation of the score for anesthesiologists' minimum expectations of nurse anesthetists' work habits was 3.64±0.66 (N=48). Minimum acceptable scores were correlated with the year of graduation from anesthesia residency (linear regression P=0.004). Adjusting for survey non-response using all N=207 anesthesiologists, the mean of the minimum acceptable work habits adjusted for year of graduation was 3.69 (standard error 0.02). The minimum expectations for nurse anesthetists' work habits were compared with observational data obtained from the University of Iowa. Among 8940 individual nurse anesthetist work habits scores, only 2.6% fell below this minimum; the observed work habits scores were significantly greater than the Mayo estimate (3.69) of the minimum expectations. Work habits of nurse anesthetists within departments therefore should not be compared with an absolute minimum score (i.e., 3.69). Instead, work habits scores should be analyzed based on relative reporting among anesthetists. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Fermat and the Minimum Principle

    Indian Academy of Sciences (India)

    Arguably, least action and minimum principles were offered or applied much earlier. This (or these) principle(s) is/are among the fundamental, basic, unifying or organizing ones used to describe a variety of natural phenomena. It considers the amount of energy expended in performing a given action to be the least required ...

  16. Second Law Analysis of the Optimal Fin by Minimum Entropy Generation

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Based on the entropy generation concept of thermodynamics, this paper established a general theoretical model for the analysis of entropy generation to optimize fins, in which the minimum entropy generation was selected as the objective. The irreversibility due to heat transfer and friction was taken into account, so that the minimum entropy generation number was analyzed with respect to the second law of thermodynamics for forced cross-flow. The optimum dimensions of cylindrical pins were discussed. It is found that the minimum entropy generation number depends on parameters related to the fluid and the physical parameters of the fin. Variations of the minimum entropy generation number with different parameters were analyzed.
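
    A minimal sketch of the trade-off analyzed here, assuming an isothermal pin (cylinder) in cross-flow with a fixed heat duty: the entropy generation rate combines a heat-transfer term, Q·θ_b/T_∞², and a friction term, F_D·U_∞/T_∞, and sweeping the diameter locates the minimum. The duty, flow conditions, drag coefficient, and Nusselt correlation below are assumptions for illustration only.

```python
# Illustrative sketch: entropy generation rate of an isothermal pin fin in
# cross-flow (heat-transfer + friction terms), swept over diameter to locate
# the minimum.  Duty, flow conditions, C_D and the Nu correlation are assumptions.
import numpy as np

Q, L, U, T_inf = 2.0, 0.03, 10.0, 300.0       # W, m, m/s, K (assumed duty/geometry)
rho, nu, k_f, Pr = 1.16, 1.6e-5, 0.026, 0.71  # air properties near 300 K
C_D = 1.0                                     # rough cylinder drag coefficient

def s_gen(D):
    Re = U * D / nu
    # Churchill-Bernstein correlation for a cylinder in cross-flow
    num = 0.62 * Re**0.5 * Pr**(1 / 3)
    den = (1 + (0.4 / Pr)**(2 / 3))**0.25
    Nu = 0.3 + (num / den) * (1 + (Re / 282000)**0.625)**0.8
    h = Nu * k_f / D
    theta_b = Q / (h * np.pi * D * L)         # base temperature excess (isothermal fin)
    F_D = C_D * 0.5 * rho * U**2 * D * L      # drag force
    return Q * theta_b / T_inf**2 + F_D * U / T_inf

D = np.linspace(1e-3, 2e-2, 200)
S = np.array([s_gen(d) for d in D])
print(f"optimum diameter ~ {D[S.argmin()] * 1e3:.1f} mm, S_gen = {S.min():.2e} W/K")
```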

  17. The Effect of Minimum Wages on Adolescent Fertility: A Nationwide Analysis.

    Science.gov (United States)

    Bullinger, Lindsey Rose

    2017-03-01

    To investigate the effect of minimum wage laws on adolescent birth rates in the United States. I used a difference-in-differences approach and vital statistics data measured quarterly at the state level from 2003 to 2014. All models included state covariates, state and quarter-year fixed effects, and state-specific quarter-year nonlinear time trends, which provided plausibly causal estimates of the effect of minimum wage on adolescent birth rates. A $1 increase in minimum wage reduces adolescent birth rates by about 2%. The effects are driven by non-Hispanic White and Hispanic adolescents. Nationwide, increasing minimum wages by $1 would likely result in roughly 5000 fewer adolescent births annually.
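
    A hedged sketch of a two-way fixed-effects difference-in-differences regression of the kind described above, omitting the state-specific nonlinear trends for brevity; the file name, column names, and single covariate are hypothetical placeholders, and clustering standard errors by state is one common choice rather than the paper's exact specification.

```python
# Sketch of a two-way fixed-effects difference-in-differences specification;
# 'panel.csv' and its column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")   # state x quarter panel: birth_rate, min_wage, covariates

model = smf.ols(
    "birth_rate ~ min_wage + unemployment + C(state) + C(quarter)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})  # cluster SEs by state
print(model.params["min_wage"], model.bse["min_wage"])
```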

  18. Prediction of protein modification sites of pyrrolidone carboxylic acid using mRMR feature selection and analysis.

    Directory of Open Access Journals (Sweden)

    Lu-Lu Zheng

    Full Text Available Pyrrolidone carboxylic acid (PCA) is formed during a common post-translational modification (PTM) of extracellular and multi-pass membrane proteins. In this study, we developed a new predictor of PCA modification sites based on maximum relevance minimum redundancy (mRMR) and incremental feature selection (IFS). We incorporated 727 features belonging to 7 kinds of protein properties to predict the modification sites, including sequence conservation, residual disorder, amino acid factor, secondary structure and solvent accessibility, gain/loss of amino acid during evolution, propensity of amino acid to be conserved at protein-protein interfaces and the protein surface, and deviation of side chain carbon atom number. Among these 727 features, 244 were selected by mRMR and IFS as the optimized features for the prediction, with which the prediction model achieved a maximum MCC of 0.7812. Feature analysis showed that all feature types contributed to the modification process. Further site-specific feature analysis showed that features derived from PCA's surrounding sites contributed more to the determination of PCA sites than other sites. The detailed feature analysis in this paper might provide important clues for understanding the mechanism of PCA formation and guide relevant experimental validations.
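
    The mRMR-plus-IFS pipeline can be sketched as follows. This is a simplified illustration: redundancy is approximated here by absolute correlation rather than mutual information, the classifier is an arbitrary choice, and X and y are placeholder feature and label arrays rather than the paper's data.

```python
# Rough sketch of mRMR ranking followed by incremental feature selection (IFS),
# scored with the Matthews correlation coefficient.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.metrics import matthews_corrcoef

def mrmr_rank(X, y, n_select):
    relevance = mutual_info_classif(X, y)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_select:
        def score(j):
            if not selected:
                return relevance[j]
            # redundancy approximated by mean |correlation| with already-chosen features
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            return relevance[j] - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

def incremental_feature_selection(X, y, ranked):
    best_mcc, best_k = -1.0, 0
    for k in range(1, len(ranked) + 1):
        pred = cross_val_predict(SVC(), X[:, ranked[:k]], y, cv=5)
        mcc = matthews_corrcoef(y, pred)
        if mcc > best_mcc:
            best_mcc, best_k = mcc, k
    return ranked[:best_k], best_mcc
```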

  19. The debate on the economic effects of minimum wage legislation

    Directory of Open Access Journals (Sweden)

    Santos Miguel Ruesga-Benito

    2017-12-01

    Full Text Available The establishment of the minimum wage has its origin in the first third of the last century. Since its creation, it has been a focus of continuing controversy and an unfinished debate in economics. This work reviews the effects of the minimum wage on employment and other macroeconomic variables, from both theoretical and empirical perspectives. The method is based on a review of the literature and the main economic indicators. The central contribution of this paper is to provide a general reflection on theoretical and empirical analyses of the debate on the minimum wage and its effects. The results show that some labor policies are taking into account the effects of austerity strategies, shifting attention towards the implementation or updating of minimum wages in order to reduce the growing inequalities in the distribution of income, and even poverty levels.

  20. Understanding Legacy Features with Featureous

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Java programs called Featureous that addresses this issue. Featureous allows a programmer to easily establish feature-code traceability links and to analyze their characteristics using a number of visualizations. Featureous is an extension to the NetBeans IDE, and can itself be extended by third...

  1. Minimum power requirement for environmental control of aircraft

    International Nuclear Information System (INIS)

    Ordonez, Juan Carlos; Bejan, Adrian

    2003-01-01

    This paper addresses two basic issues in the thermodynamic optimization of environmental control systems (ECS) for aircraft: realistic limits for the minimal power requirement, and design features that facilitate operation at minimal power consumption. Four models are proposed and optimized. In the first, the ECS operates reversibly, the air stream in the cabin is mixed to one temperature, and the cabin experiences heat transfer with the ambient, across its insulation. The cabin temperature is fixed. In the second model, the fixed cabin temperature is assigned to the internal solid surfaces of the cabin, and a thermal resistance separates these surfaces from the air mixed in the cabin. In the third model, the ECS operates irreversibly, based on the bootstrap air cycle. The fourth model combines the ECS features of the third model with the cabin-environment interaction features of the second model. It is shown that in all models the temperature of the air stream that the ECS delivers to the cabin can be optimized for operation at minimal power. The effect of other design parameters and flying conditions is documented. The optimized air delivery temperature is relatively insensitive to the complexity of the model; for example, it is insensitive to the size of the heat exchanger used in the bootstrap air cycle. This study adds to the view that robustness is a characteristic of optimized complex flow systems, and that thermodynamic optimization results can be used for orientation in the pursuit of more complex and realistic designs

  2. Do minimum wages improve early life health? Evidence from developing countries.

    Science.gov (United States)

    Majid, Muhammad Farhan; Mendoza Rodríguez, José M; Harper, Sam; Frank, John; Nandi, Arijit

    2016-06-01

    The impact of legislated minimum wages on the early-life health of children living in low and middle-income countries has not been examined. For our analyses, we used data from the Demographic and Health Surveys (DHS) from 57 countries conducted between 1999 and 2013. Our analyses focus on height-for-age z scores (HAZ) for children under 5 years of age who were surveyed as part of the DHS. To identify the causal effect of minimum wages, we utilized plausibly exogenous variation in the legislated minimum wages during each child's year of birth, the identifying assumption being that mothers do not time their births around changes in the minimum wage. As a sensitivity exercise, we also made within-family comparisons (mother fixed-effects models). Our final analysis of 49 countries reveals that a 1% increase in minimum wages was associated with a 0.1% (95% CI = -0.2, 0) decrease in HAZ scores. Adverse effects of an increase in the minimum wage were observed among girls and for children of fathers who were less than 35 years old, mothers aged 20-29, parents who were married, parents who were less educated, and parents involved in manual work. We also explored heterogeneity by region and GDP per capita at baseline (1999). Adverse effects were concentrated in lower-income countries and were most pronounced in South Asia. By contrast, increases in the minimum wage improved children's HAZ in Latin America, and among children of parents working in a skilled sector. Our findings are inconsistent with the hypothesis that increases in the minimum wage unconditionally improve child health in lower-income countries, and highlight heterogeneity in the impact of minimum wages around the globe. Future work should involve country- and occupation-specific studies that explore not only other outcomes, such as infant mortality rates, but also the role of parental investments in shaping these effects. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Determination of crack size around rivet hole through neural network using ultrasonic Lamb wave

    International Nuclear Information System (INIS)

    Choi, Sang Woo; Lee, Joon Hyun

    1998-01-01

    Rivets are typical structural features that are potential initiation sites for fatigue cracks, due to the combination of local stress concentration around the rivet hole and moisture trapping. From the viewpoint of structural assurance, it is crucial to evaluate the size of cracks around rivets by appropriate nondestructive techniques. Guided waves, which direct wave energy along the plate, carry information about the material in their path and offer a potentially more efficient tool for nondestructive inspection of structural materials. Neural networks are considered well suited to pattern recognition and have been used by researchers in the NDE field to classify different types and sizes of flaws. In this study, the size of cracks around a rivet hole was determined through a neural network based on the back-propagation algorithm, by extracting features from time-domain waveforms of ultrasonic Lamb waves for an Al 2024-T3 aircraft skin panel. Special attention was paid to reducing the coupling effect between transducer and specimen by extracting features related only to the time-component data of the ultrasonic waveform. It was clearly demonstrated that feature extraction based on the time-component data of the time-domain Lamb-wave waveform was very useful for determining, through a neural network, the size of cracks initiated from a rivet hole.
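
    A rough sketch of this kind of approach, assuming a set of time-domain features extracted from measured A-scans together with known crack lengths; the specific features and network size below are invented for illustration and are not the authors' exact choices.

```python
# Sketch of a back-propagation network mapping time-domain Lamb-wave features to
# crack length; the feature set and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def time_features(waveform, fs):
    """A few coupling-insensitive, time-based descriptors of one A-scan."""
    env = np.abs(waveform)
    t = np.arange(len(waveform)) / fs
    arrival = t[np.argmax(env > 0.1 * env.max())]   # first-arrival time
    peak_time = t[env.argmax()]                      # time of peak amplitude
    centroid = (t * env).sum() / env.sum()           # temporal centroid
    return [arrival, peak_time, centroid]

# X: rows of time_features(...) from measured waveforms, y: known crack lengths (mm)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
# model.fit(X, y); model.predict(X_new)
```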

  4. Features in Microfluidic Paper-Based Devices Made by Laser Cutting: How Small Can They Be?

    Directory of Open Access Journals (Sweden)

    Md. Almostasim Mahmud

    2018-05-01

    Full Text Available In this paper, we determine the smallest feature size that enables fluid flow in microfluidic paper-based analytical devices (µPADs) fabricated by laser cutting. The smallest feature sizes fabricated from five commercially available paper types: Whatman filter paper grade 50 (FP-50), Whatman 3MM Chr chromatography paper (3MM Chr), Whatman 1 Chr chromatography paper (1 Chr), Whatman regenerated cellulose membrane 55 (RC-55), and Amersham Protran 0.45 nitrocellulose membrane (NC), were 139 ± 8 µm, 130 ± 11 µm, 103 ± 12 µm, 45 ± 6 µm, and 24 ± 3 µm, respectively, as determined experimentally by successful fluid flow. We found that the fiber width of the paper correlates with the smallest feature size that has the capacity for fluid flow. We also investigated the flow speed of Allura red dye solution through small-scale channels fabricated from the different paper types. We found that the flow speed is significantly slower through microscale features, confirming trends reported previously for millimeter-scale channels, namely that wider channels enable faster flow.

  5. Size Matters: Observed and Modeled Camouflage Response of European Cuttlefish (Sepia officinalis) to Different Substrate Patch Sizes during Movement.

    Science.gov (United States)

    Josef, Noam; Berenshtein, Igal; Rousseau, Meghan; Scata, Gabriella; Fiorito, Graziano; Shashar, Nadav

    2016-01-01

    Camouflage is common throughout the phylogenetic tree and is largely used to minimize detection by predators or prey. Cephalopods, and in particular Sepia officinalis cuttlefish, are common models for camouflage studies. Predator avoidance behavior is particularly important in this group of soft-bodied animals that lack significant physical defenses. While previous studies have suggested that immobile cephalopods selectively camouflage to objects in their immediate surroundings, the camouflage characteristics of cuttlefish during movement are largely unknown. In a heterogeneous environment, the visual background and substrate features change quickly as the animal swims across them, where a substrate patch is a distinctive, high-contrast patch of substrate in the animal's trajectory. In the current study, we examine the effect of substrate patch size on cuttlefish camouflage, and specifically the minimal size of an object that elicits an intensity-matching response while moving. Our results indicated that substrate patch size has a positive effect on the animal's reflectance change, and that the threshold patch size resulting in a camouflage response falls between 10 and 19 cm (width). These observations suggest that the animal's length (7.2-12.3 cm mantle length in our case) serves as a possible threshold filter below which objects are considered irrelevant for camouflage, reducing the frequency of reflectance changes, which may lead to detection. Accordingly, we constructed a computational model capturing the main features of the observed camouflaging behavior during movement.
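
    The threshold-filter idea can be illustrated with a toy model in which a substrate patch elicits a reflectance change only if its width exceeds the animal's mantle length; the response function and gain below are assumptions for illustration, not the authors' model.

```python
# Toy threshold-filter model of camouflage during movement: a patch triggers a
# reflectance change only if its width exceeds the animal's body length.  The
# 10-19 cm threshold band comes from the study; everything else is illustrative.
def reflectance_response(patch_width_cm, mantle_length_cm, gain=1.0):
    """Return the fractional reflectance change elicited by a substrate patch."""
    if patch_width_cm <= mantle_length_cm:
        return 0.0                      # sub-threshold patch: treated as irrelevant
    return gain * (1.0 - mantle_length_cm / patch_width_cm)

print(reflectance_response(19.0, 10.0))  # large patch -> response
print(reflectance_response(8.0, 10.0))   # sub-threshold patch -> no response
```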

  6. Robust Feature Selection from Microarray Data Based on Cooperative Game Theory and Qualitative Mutual Information

    Directory of Open Access Journals (Sweden)

    Atiyeh Mortazavi

    2016-01-01

    Full Text Available High dimensionality of microarray data sets may lead to low efficiency and overfitting. In this paper, a multiphase cooperative game theoretic feature selection approach is proposed for microarray data classification. In the first phase, due to the high dimension of microarray data sets, the features are reduced using one of two filter-based feature selection methods, namely mutual information and Fisher ratio. In the second phase, the Shapley index is used to evaluate the power of each feature. The main innovation of the proposed approach is to employ Qualitative Mutual Information (QMI) for this purpose. The use of Qualitative Mutual Information gives the selected features more stability, and this stability helps to deal with the problem of data imbalance and scarcity. In the third phase, a forward selection scheme is applied that uses a scoring function to weight each feature. The performance of the proposed method is compared with other popular feature selection algorithms such as Fisher ratio, minimum redundancy maximum relevance, and previous work on cooperative game-based feature selection. The average classification accuracy on eleven microarray data sets shows that the proposed method improves both average accuracy and average stability compared to other approaches.
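
    A sketch of the second phase, estimating a Shapley-style power index for each feature by Monte Carlo sampling of feature permutations. The characteristic function here is plain cross-validated accuracy rather than Qualitative Mutual Information, and the classifier and sampling budget are arbitrary choices; X and y are placeholder arrays.

```python
# Monte Carlo estimate of a Shapley-style 'power' index per feature, using
# cross-validated accuracy as the coalition value (a simplification).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def coalition_value(X, y, features):
    if not features:
        return 0.0
    return cross_val_score(GaussianNB(), X[:, list(features)], y, cv=3).mean()

def shapley_estimate(X, y, n_permutations=50, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    phi = np.zeros(d)
    for _ in range(n_permutations):
        order = rng.permutation(d)
        coalition, prev = [], 0.0
        for j in order:
            coalition.append(j)
            val = coalition_value(X, y, coalition)
            phi[j] += val - prev          # marginal contribution of feature j
            prev = val
    return phi / n_permutations
```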

  7. 77 FR 43196 - Minimum Internal Control Standards and Technical Standards

    Science.gov (United States)

    2012-07-24

    ... NATIONAL INDIAN GAMING COMMISSION 25 CFR Parts 543 and 547 Minimum Internal Control Standards ... SUPPLEMENTARY INFORMATION: Part 543 addresses minimum internal control standards (MICS) for Class II gaming operations. The regulations require tribes to establish controls and implement...

  8. Strong crystal size effect on deformation twinning

    DEFF Research Database (Denmark)

    Yu, Qian; Shan, Zhi-Wei; Li, Ju

    2010-01-01

    Deformation twinning [1-6] in crystals is a highly coherent inelastic shearing process that controls the mechanical behaviour of many materials, but its origin and spatio-temporal features are shrouded in mystery. Using micro-compression and in situ nano-compression experiments, here we find that the stress required for deformation twinning increases drastically with decreasing sample size of a titanium alloy single crystal [7, 8], until the sample size is reduced to one micrometre, below which the deformation twinning is entirely replaced by less correlated, ordinary dislocation plasticity. Accompanying the transition in deformation mechanism, the maximum flow stress of the submicrometre-sized pillars was observed to saturate at a value close to titanium's ideal strength [9, 10]. We develop a 'stimulated slip' model to explain the strong size dependence of deformation twinning.

  9. Size and spatial distribution of micropores in SBA-15 using CM-SANS

    International Nuclear Information System (INIS)

    Pollock, Rachel A.; Walsh, Brenna R.; Fry, Jason A.; Ghampson, Tyrone; Centikol, Ozgul; Melnichenko, Yuri B.; Kaiser, Helmut; Pynn, Roger; Frederick, Brian G.

    2011-01-01

    Diffraction intensity analysis of small-angle neutron scattering measurements of dry SBA-15 has been combined with nonlocal density functional theory (NLDFT) analysis of nitrogen desorption isotherms to characterize the micropore, secondary mesopore, and primary mesopore structure. The radial dependence of the scattering length density, which is sensitive to isolated surface hydroxyls, can only be modeled if the NLDFT pore size distribution is distributed relatively uniformly throughout the silica framework, not localized in a 'corona' around the primary mesopores. Contrast matching-small angle neutron scattering (CM-SANS) measurements, using water, decane, tributylamine, cyclohexane, and isooctane as direct probes of the size of micropores, indicate that the smallest pores in SBA-15 have diameters between 5.7 and 6.2 Å. Correlation of the minimum pore size with the onset of the micropore size distribution provides direct evidence that the shape of the smallest micropores is cylinder-like, which is consistent with their being due to unraveling of the polymer template.

  10. Young Debris Disks With Newly Discovered Emission Features

    Science.gov (United States)

    Ballering, N.

    2014-04-01

    We analyzed the Spitzer/IRS spectra of young A and F stars that host debris disks with previously unidentified silicate emission features. Such features probe small, warm dust grains in the inner regions of these young systems where terrestrial planet formation may be proceeding (Lisse et al. 2009). For most systems, these regions are too near their host star to be directly seen with high-contrast imaging and too warm to be imaged with submillimeter interferometers. Mid-infrared excess spectra - originating from the thermal emission of the debris disk dust - remain the best data to constrain the properties of the debris in these regions. For each target, we fit physically-motivated model spectra to the data. Typical spectra of unresolved debris disks are featureless and suffer severe degeneracies between the dust location and the grain properties; however, spectra with solid-state emission features provide significantly more information, allowing for a more accurate determination of the dust size, composition, and location (e.g. Chen et al. 2006; Olofsson et al. 2012). Our results shed light on the dynamical processes occurring in the terrestrial regions of these systems. For instance, the sizes of the smallest grains and the nature of the grain size distribution reveal whether the dust originates from steady-state collisional cascades or from stochastic collisions. The properties of the dust grains - such as their crystalline or amorphous structure - can inform us of grain processing mechanisms in the disk. The location of this debris illuminates where terrestrial planet forming activity is occurring. We used results from Beta Pictoris - which has a well-resolved debris disk with emission features (Li et al. 2012) - to place our results in context. References: Chen et al. 2006, ApJS, 166, 351; Li et al. 2012, ApJ, 759, 81; Lisse et al. 2009, ApJ, 701, 2019; Olofsson et al. 2012, A&A, 542, A90

  11. Polycystic ovarian disease: US features in 104 patients.

    Science.gov (United States)

    Yeh, H C; Futterweit, W; Thornton, J C

    1987-04-01

    An ultrasonographic (US) study was performed in 25 healthy women and 104 patients with polycystic ovarian disease (PCOD). Although the average size of the ovaries in the PCOD patients was much larger than that of the healthy women, 29.7% of ovaries in the PCOD patients were normal in size. The shapes of the ovaries (roundness index) in PCOD patients were not different from those of the healthy women. There was no significant correlation between the size and shape of the ovaries. Bilaterally enlarged, globular-shaped ovaries were rare and usually asymmetric in size. The most important feature of PCOD on US scans is the bilaterally increased number of developing follicles (0.5-0.8 cm in size), usually more than five in each ovary. Although maturing follicles (1.5-2.9 cm) are much rarer in PCOD patients (13.5%) than in healthy women (36%), the incidence of follicular cysts (greater than 3 cm) was about the same in both groups.

  12. Effect of feature size on dielectric nonlinearity of patterned PbZr{sub 0.52}Ti{sub 0.48}O{sub 3} films

    Energy Technology Data Exchange (ETDEWEB)

    Yang, J. I., E-mail: jxy194@psu.edu; Trolier-McKinstry, S. [Department of Material Science and Engineering, Pennsylvania State University, University Park, Pennsylvania 16802 (United States); Polcawich, R. G.; Sanchez, L. M. [U.S. Army Research Laboratory, Adelphi, Maryland 20783-1197 (United States)

    2015-01-07

    Lead zirconate titanate, PZT (52/48), thin films with a PbTiO{sub 3} seed layer were patterned into features of different widths, including various sizes of squares and 100 μm, 50 μm, and 10 μm serpentine designs, using argon ion beam milling. Patterns with different surface area/perimeter ratios were used to study the relative importance of damage produced by the patterning. It was found that as the pattern dimensions decreased, the remanent polarization increased, presumably due to the fact that the dipoles near the feature perimeter are not as severely clamped to the substrate. This investigation is in agreement with a model in which clamping produces deep wells, which do not allow some fraction of the spontaneous polarization to switch at high field. The domain wall mobility at modest electric fields was investigated using the Rayleigh law. Both the reversible, ε{sub init}, and irreversible, α, Rayleigh coefficients increased with decreasing serpentine line width for de-aged samples. For measurements made immediately after annealing, ε{sub init} of 500 μm square patterns was 1510 ± 13; with decreasing serpentine line width, ε{sub init} rose from 1520 ± 10 for the 100 μm serpentine to 1568 ± 23 for the 10 μm serpentine. The irreversible parameter, α, for the square patterns was 39.4 ± 3.2 cm/kV, and it increased to 44.1 ± 3.2 cm/kV as the lateral dimension was reduced. However, it was found that as the width of the serpentine features decreased, the aging rate rose. These observations are consistent with a model in which sidewall damage produces shallow wells that lower the Rayleigh constants of aged samples at small fields. These shallow wells can be overcome by the large fields used to measure the remanent polarization and the large unipolar electric fields typically used to drive thin film piezoelectric actuators.
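
    The Rayleigh analysis itself reduces to a linear fit: at sub-coercive fields the permittivity follows ε(E₀) = ε{sub init} + αE₀, so the intercept and slope of permittivity versus AC field amplitude give the reversible and irreversible coefficients. The data values below are invented for illustration, not taken from the paper.

```python
# Sketch of extracting Rayleigh coefficients from sub-coercive measurements:
# eps(E0) = eps_init + alpha * E0, fitted by linear regression.
import numpy as np

E0 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # AC field amplitude, kV/cm (illustrative)
eps = np.array([1555, 1598, 1640, 1684, 1727])  # relative permittivity (illustrative)

alpha, eps_init = np.polyfit(E0, eps, 1)        # slope (cm/kV), intercept
print(f"eps_init ~ {eps_init:.0f}, alpha ~ {alpha:.1f} cm/kV")
```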

  13. Minimum-error discrimination of entangled quantum states

    International Nuclear Information System (INIS)

    Lu, Y.; Coish, N.; Kaltenbaek, R.; Hamel, D. R.; Resch, K. J.; Croke, S.

    2010-01-01

    Strategies to optimally discriminate between quantum states are critical in quantum technologies. We present an experimental demonstration of minimum-error discrimination between entangled states, encoded in the polarization of pairs of photons. Although the optimal measurement involves projection onto entangled states, we use a result of J. Walgate et al. [Phys. Rev. Lett. 85, 4972 (2000)] to design an optical implementation employing only local polarization measurements and feed-forward, which performs at the Helstrom bound. Our scheme can achieve perfect discrimination of orthogonal states and minimum-error discrimination of nonorthogonal states. Our experimental results show a definite advantage over schemes not using feed-forward.
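
    For reference, the Helstrom bound itself is straightforward to evaluate: the minimum error probability for discriminating ρ₀ and ρ₁ with priors p₀ and p₁ is ½(1 − ‖p₀ρ₀ − p₁ρ₁‖₁). The sketch below uses example non-orthogonal single-qubit states, not the entangled photon states of the experiment.

```python
# Helstrom bound for two-state minimum-error discrimination.
import numpy as np

def helstrom_error(rho0, rho1, p0=0.5):
    gamma = p0 * rho0 - (1 - p0) * rho1
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()   # ||gamma||_1 for Hermitian gamma
    return 0.5 * (1.0 - trace_norm)

def pure(state):
    v = np.asarray(state, dtype=complex).reshape(-1, 1)
    v = v / np.linalg.norm(v)
    return v @ v.conj().T

theta = np.pi / 8
rho0 = pure([1, 0])
rho1 = pure([np.cos(theta), np.sin(theta)])   # non-orthogonal to rho0
print("minimum error probability:", helstrom_error(rho0, rho1))
```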

  14. Methodical Features of the Field Researches of the Anapa Bay-Bar Sediment Composition

    Science.gov (United States)

    Krylenko, Marina; Krylenko, Viacheslav; Gusakova, Anastasiya; Kosyan, Alisa

    2014-05-01

    Resort Anapa (Black Sea coast, Russia) holds a leading position in the Russian market for sanatorium resorts and children's recreation. The 50-200 m sandy beaches of the Anapa bay-bar are the main asset of the resort. The Anapa bay-bar is an extensive accumulative sandy body about 47 km long. In recent years, obvious signs of beach degradation have been observed, demanding immediate measures for protection and restoration. The main cause of the degradation is a deficiency of beach material. Organizing sediment studies of such an extensive natural object is a difficult challenge: the number of samples must be kept to a minimum, differences between separate bay-bar sites must be recorded, and comparable data must be obtained for different seasons and years. Our research showed that the grain-size composition of the sediment depends significantly on the position within the local relief; consequently, alongshore changes in sediment size are best studied at these morphological elements. Shelly detritus makes up to 30% of the total volume of beach sediments. It must be taken into account that the quantitative distribution of shells along the coast depends significantly on the configuration of the coastline and the underwater relief; along a cross-shore profile, the quantity of shells is greatest near the coastline. For identifying sediment sources and studying their fluxes, mineral markers (heavy minerals) are optimal. The maximum concentration of heavy minerals is characteristic of the 0.1-0.05 mm fraction at depths greater than 5 m; the content of this fraction within other morphological zones is either too low for analysis or excessively variable. Use of the revealed features allowed us to conduct representative field studies of the grain-size and mineral composition of sediments for all morphological zones of the underwater and onshore parts of the Anapa bay-bar. These methodical recommendations are applicable to studies of other coastal accumulative bodies. The work is

  15. UPGMA and the normalized equidistant minimum evolution problem

    OpenAIRE

    Moulton, Vincent; Spillner, Andreas; Wu, Taoyang

    2017-01-01

    UPGMA (Unweighted Pair Group Method with Arithmetic Mean) is a widely used clustering method. Here we show that UPGMA is a greedy heuristic for the normalized equidistant minimum evolution (NEME) problem, that is, finding a rooted tree that minimizes the minimum evolution score relative to the dissimilarity matrix among all rooted trees with the same leaf-set in which all leaves have the same distance to the root. We prove that the NEME problem is NP-hard. In addition, we present some heurist...
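
    For readers who want to experiment, UPGMA corresponds to 'average' linkage in SciPy's hierarchical clustering; the 4-taxon dissimilarity matrix below is a made-up example, not data from the paper.

```python
# UPGMA on a dissimilarity matrix via SciPy's 'average' linkage (greedy merges).
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

D = np.array([[0, 2, 6, 6],
              [2, 0, 6, 6],
              [6, 6, 0, 4],
              [6, 6, 4, 0]], dtype=float)   # example dissimilarities among 4 taxa

Z = linkage(squareform(D), method="average")   # UPGMA update rule
print(Z)   # each row: merged clusters, merge height, new cluster size
```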

  16. Pay equity, minimum wage and equality at work

    OpenAIRE

    Rubery, Jill

    2003-01-01

    Reviews the underlying causes of pay discrimination embedded within the organization of the labour market and structures of pay and reward. Discusses the need to focus on pay equity as part of a general strategy of promoting equity and decent work, and examines the case for using minimum wage policies, in comparison with more targeted equal pay policies, to reduce gender pay inequity. Identifies potential obstacles to, or support for, such policies and describes experiences of the use of minimum wages...

  17. A theory of compliance with minimum wage legislation

    OpenAIRE

    Jellal, Mohamed

    2012-01-01

    In this paper, we introduce firm heterogeneity in the context of a model of non-compliance with minimum wage legislation. The introduction of heterogeneity in the ease with which firms can be monitored for non-compliance allows us to show that non-compliance will persist in sectors which are relatively difficult to monitor, despite the government implementing non-stochastic monitoring. Moreover, we show that the incentive not to comply is an increasing function of the level of the minimum wag...

  18. The minimum yield in channeling

    International Nuclear Information System (INIS)

    Uguzzoni, A.; Gaertner, K.; Lulli, G.; Andersen, J.U.

    2000-01-01

    A first estimate of the minimum yield was obtained from Lindhard's theory, with the assumption of a statistical equilibrium in the transverse phase-space of channeled particles guided by a continuum axial potential. However, computer simulations have shown that this estimate should be corrected by a fairly large factor, C (approximately equal to 2.5), called the Barrett factor. We have shown earlier that the concept of a statistical equilibrium can be applied to understand this result, with the introduction of a constraint in phase-space due to planar channeling of axially channeled particles. Here we present an extended test of these ideas on the basis of computer simulation of the trajectories of 2 MeV α particles in Si. In particular, the gradual trend towards a full statistical equilibrium is studied. We also discuss the introduction of this modification of standard channeling theory into descriptions of the multiple scattering of channeled particles (dechanneling) by a master equation and show that the calculated minimum yields are in very good agreement with the results of a full computer simulation

  19. The Effects of Minimum Wage Throughout the Wage Distribution in Indonesia

    Directory of Open Access Journals (Sweden)

    Sri Gusvina Dewi

    2018-03-01

    Full Text Available The global financial crisis in 2007, followed by Indonesia's largest labor demonstrations in 2013, caused turmoil in the Indonesian labor market. This paper examines the effect of the minimum wage on the wage distribution in 2007 and 2014, and how the minimum wage increase in 2014 affected the distribution of wage differences between 2007 and 2014. This study employs the recentered influence function (RIF) regression method to estimate the wage function using unconditional quantile regression. Furthermore, to measure the effect of the minimum wage increase in 2014 on the distribution of wage differences, it uses the Oaxaca-Blinder decomposition method. Using balanced panel data from the Indonesian Family Life Survey (IFLS), we find that the minimum wage mitigated wage disparity in 2007 and 2014. The minimum wage policy in 2014 led to an increase in the wage difference between 2007 and 2014, with the largest wage difference in the middle of the distribution. DOI: 10.15408/sjie.v7i2.6125
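
    A hedged sketch of the RIF (unconditional quantile) regression step: compute RIF(y; q_τ) = q_τ + (τ − 1{y ≤ q_τ})/f̂(q_τ) and regress it on covariates by OLS. The file name, columns, and quantile are hypothetical placeholders, not the IFLS variables used in the paper.

```python
# Unconditional quantile (RIF) regression at the median; data and columns are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import gaussian_kde

df = pd.read_csv("wages.csv")            # columns: log_wage, min_wage, educ, ...
tau = 0.5
q = df["log_wage"].quantile(tau)
f_q = gaussian_kde(df["log_wage"])(q)[0]             # density estimate at the quantile
df["rif"] = q + (tau - (df["log_wage"] <= q)) / f_q  # recentered influence function

rif_model = smf.ols("rif ~ min_wage + educ", data=df).fit(cov_type="HC1")
print(rif_model.summary())
```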

  20. In situ droplet size and speed determination in a fluid-bed granulator.

    Science.gov (United States)

    Ehlers, Henrik; Larjo, Jussi; Antikainen, Osmo; Räikkönen, Heikki; Heinämäki, Jyrki; Yliruusi, Jouko

    2010-05-31

    The droplet size affects the final product in fluid-bed granulation and coating. In the present study, spray characteristics of an aqueous granulation liquid (purified water) were determined in situ in a fluid-bed granulator. Droplets were produced by a pneumatic nozzle. Diode laser stroboscopy (DLS) was used for droplet detection, and particle tracking velocimetry (PTV) was used for determination of droplet size and speed. Increased atomization pressure decreased the droplet size, and the effect was most strongly visible in the 90% size fractile. The droplets seemed to undergo coalescence, after which only slight evaporation occurred. Furthermore, the droplets were subjected to strong turbulence at the moment of atomization, after which the turbulence reached a minimum value in the lower half of the chamber. The turbulence increased as speed and droplet size decreased, due to the effects of the fluidizing air. The DLS and PTV system used was found to be a useful and rapid tool for determining spray characteristics and for monitoring and predicting nozzle performance. Copyright (c) 2010 Elsevier B.V. All rights reserved.