WorldWideScience

Sample records for uniform scaling approach

  1. Uniform Statistical Convergence on Time Scales

    Directory of Open Access Journals (Sweden)

    Yavuz Altin

    2014-01-01

We introduce the concepts of m- and (λ, m)-uniform density of a set and of m- and (λ, m)-uniform statistical convergence on an arbitrary time scale. We also define the m-uniform Cauchy function on a time scale. Furthermore, some relations between these new notions are obtained.

  2. Synthetic approaches to uniform polymers.

    Science.gov (United States)

    Ali, Monzur; Brocchini, Steve

    2006-12-30

Uniform polymers are characterised by a narrow molecular weight distribution (MWD). Uniformity is also defined by chemical structure in respect of (1) monomer orientation, sequence and stereo-regularity, (2) polymer shape and morphology and (3) chemical functionality. The function of natural polymers such as polypeptides and polynucleotides is related to their conformational structure (e.g. folded tertiary structure). This is only possible because of their high degree of uniformity. While completely uniform synthetic polymers are rare, polymers with broad structure and MWD are widely used in medicine and the biomedical sciences. They are integral components in final dosage forms, drug delivery systems (DDS) and in implantable devices. Increasingly, uniform polymers are being used to develop more complex medicines (e.g. delivery of biopharmaceuticals, enhanced formulations or DDSs for existing actives). In addition to the function imparted by any new polymer, it will be required to meet stringent specifications in terms of cost containment, scalability, biocompatibility and performance. Synthetic polymers with therapeutic activity are also being developed to exploit their polyvalent properties, something that is not possible with low molecular weight molecules. There is a need to utilise uniform polymers for applications where the polymer may interact with the systemic circulation, tissues or cellular environment. There are also potential applications (e.g. stimuli-responsive coatings) where uniform polymers may be used for their more defined property profile. While it is not yet practical to prepare synthetic polymers to the same high degree of uniformity as proteins, nature also effectively utilises many polymers with lower degrees of uniformity (e.g. polysaccharides, poly(amino acids), polyhydroxyalkanoates). In recent years it has become possible to prepare, with practical experimental protocols, sufficient quantities of polymers that display many aspects of uniformity.

  3. Dynamic Uniform Scaling for Multiobjective Genetic Algorithms

    DEFF Research Database (Denmark)

    Pedersen, Gerulf; Goldberg, David E.

    2004-01-01

Before Multiobjective Evolutionary Algorithms (MOEAs) can be used as a widespread tool for solving arbitrary real world problems there are some salient issues which require further investigation. One of these issues is how a uniform distribution of solutions along the Pareto non-dominated front can...

  5. Deviations from uniform power law scaling in nonstationary time series

    Science.gov (United States)

    Viswanathan, G. M.; Peng, C. K.; Stanley, H. E.; Goldberger, A. L.

    1997-01-01

A classic problem in physics is the analysis of highly nonstationary time series that typically exhibit long-range correlations. Here we test the hypothesis that the scaling properties of the dynamics of healthy physiological systems are more stable than those of pathological systems by studying beat-to-beat fluctuations in the human heart rate. We develop techniques based on the Fano factor and Allan factor functions, as well as on detrended fluctuation analysis, for quantifying deviations from uniform power-law scaling in nonstationary time series. By analyzing extremely long data sets of up to N = 10^5 beats for 11 healthy subjects, we find that the fluctuations in the heart rate scale approximately uniformly over several temporal orders of magnitude. By contrast, we find that in data sets of comparable length for 14 subjects with heart disease, the fluctuations grow erratically, indicating a loss of scaling stability.
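The detrended fluctuation analysis mentioned in this record can be sketched in a few lines (a minimal illustration of the general DFA procedure, not the authors' implementation; window sizes and the test signal are arbitrary):

```python
import numpy as np

def dfa(x, window_sizes):
    """Detrended fluctuation analysis: returns F(n) for each window size n.

    Uniform power-law scaling F(n) ~ n^alpha appears as a straight line in
    log-log coordinates; deviations from that line are the kind of scaling
    instability the record describes.
    """
    y = np.cumsum(x - np.mean(x))            # integrated profile
    fluctuations = []
    for n in window_sizes:
        rms = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, seg, 1)   # linear detrending per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluctuations.append(np.mean(rms))
    return np.array(fluctuations)

# Sanity check: white noise should give a scaling exponent alpha near 0.5.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
ns = np.array([16, 32, 64, 128, 256])
F = dfa(x, ns)
alpha = np.polyfit(np.log(ns), np.log(F), 1)[0]
```

Fitting log F(n) against log n over the full range, and checking residuals from that fit, quantifies how uniform the scaling is.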

  6. Non-uniform plastic deformation of micron scale objects

    DEFF Research Database (Denmark)

    Niordson, Christian Frithiof; Hutchinson, J. W.

    2003-01-01

Significant increases in apparent flow strength are observed when non-uniform plastic deformation of metals occurs at the scale ranging from roughly one to ten microns. Several basic plane strain problems are analyzed numerically in this paper based on a new formulation of strain gradient plasticity. The problems are the tangential and normal loading of a finite rectangular block of material bonded to rigid platens and having traction-free ends, and the normal loading of a half-space by a flat, rigid punch. The solutions illustrate fundamental features of plasticity at the micron scale that are not captured by conventional plasticity theory. These include the role of material length parameters in establishing the size dependence of strength and the elevation of resistance to plastic flow resulting from constraint on plastic flow at boundaries. Details of the finite element method employed…

  7. On locally uniformly linearizable high breakdown location and scale functionals

    NARCIS (Netherlands)

    Davies, P.L.

    1998-01-01

This article gives two constructions of a weighted mean which has a large domain, is affinely equivariant, has a locally high breakdown point and is locally uniformly linearizable. One construction is based on $M$-functionals with smooth defining $\psi$- and $\chi$-functions which are used to

  8. Evidence for large-scale uniformity of physical laws

    International Nuclear Information System (INIS)

    Tubbs, A.D.; Wolfe, A.M.

    1980-01-01

The coincidence of redshifts deduced from 21 cm and resonance transitions in absorbing gas detected in front of four quasi-stellar objects results in stringent limits on the variation of the product of three physical constants in both space and time. We find that α²g_p(m/M) is spatially uniform, to a few parts in 10⁴, throughout the observable universe. This uniformity holds subsequent to an epoch corresponding to less than 5% of the current age of the universe t₀. Moreover, time variations in α²g_p(m/M) are excluded to the same accuracy subsequent to an epoch corresponding to ≳ 0.20 t₀. These limits are largely model independent, relying only upon the cosmological interpretation of redshifts and the isotropy of the 3 K background radiation. That a quantity as complex as g_p, which depends on all the details of strong-interaction physics, is uniform throughout most of spacetime, even in causally disjoint regions, suggests that all physical laws are globally invariant.

  9. Integrated approach to improving local CD uniformity in EUV patterning

    Science.gov (United States)

    Liang, Andrew; Hermans, Jan; Tran, Timothy; Viatkina, Katja; Liang, Chen-Wei; Ward, Brandon; Chuang, Steven; Yu, Jengyi; Harm, Greg; Vandereyken, Jelle; Rio, David; Kubis, Michael; Tan, Samantha; Dusa, Mircea; Singhal, Akhil; van Schravendijk, Bart; Dixit, Girish; Shamma, Nader

    2017-03-01

Extreme ultraviolet (EUV) lithography is crucial to enabling technology scaling in pitch and critical dimension (CD). Currently, one of the key challenges of introducing EUV lithography to high volume manufacturing (HVM) is throughput, which requires high source power and high sensitivity chemically amplified photoresists. Important limiters of high sensitivity chemically amplified resists (CAR) are the effects of photon shot noise and resist blur on the number of photons received and of photoacids generated per feature, especially at the pitches required for 7 nm and 5 nm advanced technology nodes. These stochastic effects are reflected in via structures as hole-to-hole CD variation or local CD uniformity (LCDU). Here, we demonstrate a synergy of film stack deposition, EUV lithography, and plasma etch techniques to improve LCDU, which allows the use of high sensitivity resists required for the introduction of EUV HVM. Thus, to improve LCDU to a level required by 5 nm node and beyond, film stack deposition, EUV lithography, and plasma etch processes were combined and co-optimized to enhance LCDU reduction from synergies. Test wafers were created by depositing a pattern transfer stack on a substrate representative of a 5 nm node target layer. The pattern transfer stack consisted of an atomically smooth adhesion layer and two hardmasks and was deposited using the Lam VECTOR PECVD product family. These layers were designed to mitigate hole roughness, absorb out-of-band radiation, and provide additional outlets for etch to improve LCDU and control hole CD. These wafers were then exposed through an ASML NXE3350B EUV scanner using a variety of advanced positive tone EUV CAR. They were finally etched to the target substrate using Lam Flex dielectric etch and Kiyo conductor etch systems. Metrology methodologies to assess dimensional metrics as well as chip performance and defectivity were investigated to enable repeatable patterning process development.
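The photon shot-noise contribution to LCDU described above can be illustrated with a toy Poisson model (an illustrative sketch under the assumption that absorbed photons per hole are Poisson-distributed; the photon counts below are made up, not the paper's data):

```python
import numpy as np

# Toy model: the number of EUV photons absorbed per contact hole is
# Poisson-distributed, so the relative dose fluctuation scales as
# 1/sqrt(mean photon count). Higher-sensitivity resists need fewer
# photons per feature and therefore suffer larger local variation.
rng = np.random.default_rng(1)

def relative_dose_noise(mean_photons, samples=200_000):
    counts = rng.poisson(mean_photons, size=samples)
    return counts.std() / counts.mean()

low_dose = relative_dose_noise(100)    # fast, high-sensitivity resist
high_dose = relative_dose_noise(400)   # slower resist, 4x the photons
# Quadrupling the photon count roughly halves the relative fluctuation.
```

This is the throughput-versus-LCDU trade-off the record addresses: etch and film-stack co-optimization compensates for the noisier lithographic image of fast resists.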

  10. An efficient method based on the uniformity principle for synthesis of large-scale heat exchanger networks

    International Nuclear Information System (INIS)

    Zhang, Chunwei; Cui, Guomin; Chen, Shang

    2016-01-01

Highlights: • Two dimensionless uniformity factors are presented for heat exchanger networks. • The grouping of process streams reduces the computational complexity of large-scale HENS problems. • The optimal sub-network can be obtained by the Powell particle swarm optimization algorithm. • The method is illustrated by a case study involving 39 process streams, with a better solution. - Abstract: The optimal design of large-scale heat exchanger networks is a difficult task due to the inherent non-linear characteristics and the combinatorial nature of heat exchangers. To solve large-scale heat exchanger network synthesis (HENS) problems, two dimensionless uniformity factors are deduced that describe heat exchanger network (HEN) uniformity in terms of the temperature difference and the accuracy of process stream grouping. Additionally, a novel algorithm that combines deterministic and stochastic optimizations to obtain an optimal sub-network with a suitable heat load for a given group of streams is proposed, named the Powell particle swarm optimization (PPSO). As a result, the synthesis of large-scale heat exchanger networks is divided into two corresponding sub-parts, namely, the grouping of process streams and the optimization of sub-networks. This approach reduces the computational complexity and increases the efficiency of the proposed method. The robustness and effectiveness of the proposed method are demonstrated by solving a large-scale HENS problem involving 39 process streams, and the results obtained are better than those previously published in the literature.
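The stochastic half of the Powell-PSO hybrid can be sketched as a plain particle swarm optimizer on a toy objective (a generic PSO sketch; the deterministic Powell local search and the actual HEN cost model of the paper are omitted, and all parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: minimize objective over R^dim."""
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                   # personal bests
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()             # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.apply_along_axis(objective, 1, pos)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy objective (sphere function) standing in for a sub-network cost.
best_x, best_f = pso(lambda x: float(np.sum(x ** 2)), dim=4)
```

In the paper's scheme, each group of streams would get its own such optimization run, with the deterministic Powell step refining the swarm's best candidate.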

  11. Compressor Performance Scaling in the Presence of Non-Uniform Flow

    Science.gov (United States)

    Hill, David Jarrod

Fuselage-embedded engines in future aircraft will see increased flow distortions due to the ingestion of airframe boundary layers. This reduces the required propulsive power compared to podded engines. Inlet flow distortions mean that localized regions of flow within the fan and first stage compressor are operating at off-design conditions. It is important to weigh the benefit of increased vehicle propulsive efficiency against the resultant reduction in engine efficiency. High computational cost has limited most past research to single distortion studies. The objective of this thesis is to extract scaling laws for transonic compressor performance in the presence of various distortion patterns and intensities. The machine studied is the NASA R67 transonic compressor. Volumetric source terms are used to model rotor and stator blade rows. The modelling approach is an innovative combination of existing flow turning and loss models with a compressible flow correction. This approach allows a steady calculation to capture distortion transfer; as a result, the computational cost is reduced by two orders of magnitude. At peak efficiency, the rotor work coefficient and isentropic efficiency are matched within 1.4% of previously published experimental results. A key finding of this thesis is that, in non-uniform flow, the state-of-the-art loss model employed is unable to capture the impact of variations in local flow coefficient, limiting the analysis of local entropy generation. New insight explains the mechanism governing the interaction between a total temperature distortion and a compressor rotor. A parametric study comprising 16 inlet distortions reveals that, for total temperature distortions, upstream flow redistribution and rotor diffusion factor changes scale linearly with distortion severity. Linear diffusion factor scaling does not hold true for total pressure distortions.
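The rotor diffusion factor whose scaling is discussed above is, in its standard Lieblein form (a textbook definition, assumed here rather than quoted from the thesis):

```latex
D \;=\; 1 \;-\; \frac{V_2}{V_1} \;+\; \frac{\lvert \Delta V_\theta \rvert}{2\,\sigma\, V_1}
```

where V₁ and V₂ are the inlet and outlet relative velocities, ΔV_θ is the change in tangential velocity across the blade row, and σ is the blade solidity. A total temperature distortion shifts the local incidence and relative Mach number, which is why D responds to distortion severity.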

  12. Consequences of atomic layer etching on wafer scale uniformity in inductively coupled plasmas

    Science.gov (United States)

    Huard, Chad M.; Lanham, Steven J.; Kushner, Mark J.

    2018-04-01

    Atomic layer etching (ALE) typically divides the etching process into two self-limited reactions. One reaction passivates a single layer of material while the second preferentially removes the passivated layer. As such, under ideal conditions the wafer scale uniformity of ALE should be independent of the uniformity of the reactant fluxes onto the wafers, provided all surface reactions are saturated. The passivation and etch steps should individually asymptotically saturate after a characteristic fluence of reactants has been delivered to each site. In this paper, results from a computational investigation are discussed regarding the uniformity of ALE of Si in Cl2 containing inductively coupled plasmas when the reactant fluxes are both non-uniform and non-ideal. In the parameter space investigated for inductively coupled plasmas, the local etch rate for continuous processing was proportional to the ion flux. When operated with saturated conditions (that is, both ALE steps are allowed to self-terminate), the ALE process is less sensitive to non-uniformities in the incoming ion flux than continuous etching. Operating ALE in a sub-saturation regime resulted in less uniform etching. It was also found that ALE processing with saturated steps requires a larger total ion fluence than continuous etching to achieve the same etch depth. This condition may result in increased resist erosion and/or damage to stopping layers using ALE. While these results demonstrate that ALE provides increased etch depth uniformity, they do not show an improved critical dimension uniformity in all cases. These possible limitations to ALE processing, as well as increased processing time, will be part of the process optimization that includes the benefits of atomic resolution and improved uniformity.
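The saturation argument in this record can be illustrated with a toy model of one self-limited ALE step (an illustrative sketch: the exponential saturation law and all numbers are assumptions, not the paper's plasma model):

```python
import numpy as np

# Toy model of one self-limited ALE step: passivation coverage saturates
# as theta = 1 - exp(-phi / phi0), where phi is the delivered reactant
# fluence and phi0 a characteristic fluence. With a saturating dose, a
# +/-20% flux non-uniformity across the wafer barely changes the etch per
# cycle; a sub-saturated dose transfers the non-uniformity into the etch.
def coverage(fluence, phi0=1.0):
    return 1.0 - np.exp(-fluence / phi0)

flux_profile = np.array([0.8, 1.0, 1.2])     # +/-20% across the wafer

saturated = coverage(5.0 * flux_profile)     # long step: ~5 x phi0
starved = coverage(0.5 * flux_profile)       # short step: ~0.5 x phi0

sat_spread = np.ptp(saturated) / saturated.mean()
starved_spread = np.ptp(starved) / starved.mean()
```

The spread in etch per cycle collapses from tens of percent to about a percent once both steps are allowed to self-terminate, which is the uniformity benefit (and the extra-fluence cost) the simulations quantify.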

  13. On achieving a uniform approach to radiation control in Australia

    International Nuclear Information System (INIS)

    Swindon, T. N.

    1995-01-01

Legislation and the associated regulatory processes to control radiation exposure of persons in the workplace, of patients undergoing medical exposures and of members of the public have been in place in all Australian States, including the ACT and NT, for some decades. Most States have completely rewritten their original legislation and all have made minor modifications from time to time. As a consequence, the legislation and the regulatory processes and controls used in the States differ considerably, although they all have the same intent. For many years now, attempts have been made to overcome problems arising from the differences in radiation control legislation and practices. These have been made through the preparation of recommendations and codes of practice by the Radiation Health Standing Committee (RHSC) of the NHMRC and through discussions by State radiation control officers in the Radiation Control Implementation Panel, which reports to the RHSC. The recommendations and codes of practice can be utilised by the States in their radiation control activities, but this procedure can be restricted by different requirements in State legislation. Despite the efforts to overcome the problems, the main stumbling block to the implementation of uniform control derives from the legislation currently in use in each State. It is seen that changes will take a number of years to implement and that changes to legislation would be a top priority.

  14. Long-term dimensional stability and longitudinal uniformity of line scales made of glass ceramics

    International Nuclear Information System (INIS)

    Takahashi, Akira

    2010-01-01

Line scales are commonly used as a working standard of length for the calibration of optical measuring instruments such as profile projectors, measuring microscopes and video measuring systems. For high-precision calibration, line scales with low thermal expansion are commonly used. Glass ceramics have a very low coefficient of thermal expansion (CTE) and are widely used for precision line scales. From a previous study, it is known that glass ceramics decrease in length from the time of production or heat treatment. The line scale measurement method can evaluate more than one section of the line scale and is capable of evaluating the longitudinal uniformity of the secular change of glass ceramics. In this paper, an arithmetic model of the secular change of a line scale and its longitudinal uniformity is proposed. Six line scales made of Zerodur®, Clearceram® and synthetic quartz were manufactured at the same time. The dimensional changes of the six line scales were experimentally evaluated over 2 years using a line scale calibration system.

  15. A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations

    OpenAIRE

    Turney, Peter D.

    2008-01-01

    Recognizing analogies, synonyms, antonyms, and associations appear to be four distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks have been treated independently, using a wide variety of algorithms. These four semantic classes, however, are a tiny sample of the full range of semantic phenomena, and we cannot afford to create ad hoc algorithms for each semantic phenomenon; we need to seek a unified approach. We propose to subsume a broad range of phenomena under anal...

  16. A uniform approach for programming distributed heterogeneous computing systems.

    Science.gov (United States)

    Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas

    2014-12-01

    Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.
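The runtime DAG construction described above can be sketched abstractly (a hypothetical illustration in the spirit of the libWater runtime; the class names and methods below are invented for this sketch and are not libWater's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Command:
    """A device command plus the commands it must wait on (its events)."""
    name: str
    deps: list = field(default_factory=list)

class CommandDAG:
    """Collects event-synchronized commands and derives an execution order."""
    def __init__(self):
        self.commands = []

    def submit(self, name, wait_for=()):
        cmd = Command(name, list(wait_for))
        self.commands.append(cmd)
        return cmd

    def schedule(self):
        """Topological order: every command runs after its dependencies."""
        order, done = [], set()
        def visit(cmd):
            if id(cmd) in done:
                return
            for dep in cmd.deps:
                visit(dep)
            done.add(id(cmd))
            order.append(cmd.name)
        for cmd in self.commands:
            visit(cmd)
        return order

# Typical host->device->host pipeline expressed as dependent commands.
dag = CommandDAG()
h2d = dag.submit("copy host->device")
kernel = dag.submit("run kernel", wait_for=[h2d])
d2h = dag.submit("copy device->host", wait_for=[kernel])
order = dag.schedule()
```

On such a DAG a runtime can then apply the optimizations the paper names, e.g. removing a device-to-host copy whose result is immediately copied back to the same device.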

  17. Large-scale syntheses of uniform ZnO nanorods and ethanol gas sensors application

    International Nuclear Information System (INIS)

    Chen Jin; Li Jin; Li Jiahui; Xiao Guoqing; Yang Xiaofeng

    2011-01-01

Research highlights: → Uniform ZnO nanorods could be synthesized by a low-temperature, solution-based method. → The results showed that the sample had a uniform rod-like morphology with a narrow size distribution and high crystallinity. → Room-temperature photoluminescence spectra of these nanorods show an exciton emission around 382 nm and a weak deep-level emission, indicating the nanorods have high quality. → The sensor exhibited high sensitivity and fast response to ethanol gas at a working temperature of 400 °C. - Abstract: Uniform ZnO nanorods on a gram scale were prepared by a low-temperature, solution-based method. The samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM) and photoluminescence (PL). The results showed that the sample had a uniform rod-like morphology with a narrow size distribution and high crystallinity. Room-temperature PL spectra of these nanorods show an exciton emission around 382 nm and a negligible deep-level emission, indicating the nanorods have high quality. The gas-sensing properties of the materials have been investigated. The results indicate that the as-prepared nanorods show much better sensitivity and stability. The n-type semiconductor gas sensor exhibited high sensitivity and fast response to ethanol gas at a working temperature of 400 °C. ZnO nanorods are excellent potential candidates for highly sensitive gas sensors and ultraviolet lasers.

  18. Uniform functional structure across spatial scales in an intertidal benthic assemblage.

    Science.gov (United States)

    Barnes, R S K; Hamylton, Sarah

    2015-05-01

To investigate the causes of the remarkable similarity of emergent assemblage properties that has been demonstrated across disparate intertidal seagrass sites and assemblages, this study examined whether their emergent functional-group metrics are scale related, by testing the null hypothesis that functional diversity and the suite of dominant functional groups in seagrass-associated macrofauna are robust structural features of such assemblages and do not vary spatially across nested scales within a 0.4 ha area. This was carried out via a lattice of 64 spatially referenced stations. Although densities of individual components were patchily dispersed across the locality, rank orders of importance of the 14 functional groups present, their overall functional diversity and evenness, and the proportions of the total individuals contained within each showed, in contrast, statistically significant spatial uniformity, even at small areal scales. Analysis of the functional groups in their geospatial context also revealed weaker than expected levels of spatial autocorrelation, and then only at the smaller scales and amongst the most dominant groups, and only a small number of negative correlations occurred between the proportional importances of the individual groups. In effect, such patterning was a surface veneer overlying remarkable stability of assemblage functional composition across all spatial scales. Although assemblage species composition is known to be homogeneous in some soft-sediment marine systems over equivalent scales, this combination of patchy individual components yet basically constant functional-group structure seems as yet unreported. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Highly Uniform Carbon Nanotube Field-Effect Transistors and Medium Scale Integrated Circuits.

    Science.gov (United States)

    Chen, Bingyan; Zhang, Panpan; Ding, Li; Han, Jie; Qiu, Song; Li, Qingwen; Zhang, Zhiyong; Peng, Lian-Mao

    2016-08-10

Top-gated p-type field-effect transistors (FETs) have been fabricated in batch based on carbon nanotube (CNT) network thin films prepared from CNT solution, and present high yield and highly uniform performance, with a small threshold-voltage distribution (standard deviation of 34 mV). Based on the properties of these FETs, various logical and arithmetical gates, shifters, and d-latch circuits were designed and demonstrated with rail-to-rail output. In particular, a 4-bit adder consisting of 140 p-type CNT FETs was demonstrated with higher packing density and lower supply voltage than other published integrated circuits based on CNT films, which indicates that CNT-based integrated circuits can reach medium scale. In addition, a 2-bit multiplier has been realized for the first time. Benefiting from the high uniformity and suitable threshold voltage of the CNT FETs, all of the fabricated circuits can be driven by a single voltage as small as 2 V.
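The logic of the 4-bit adder demonstrated in the record can be sketched at gate level (a pure-logic ripple-carry model for illustration; the transistor count and the p-type-only circuit style of the actual chip are not modeled):

```python
# Gate-level model of a 4-bit ripple-carry adder: four chained full
# adders, each computing a sum bit and a carry-out from two input bits
# and a carry-in.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def adder4(x, y):
    """Add two 4-bit integers, returning a 5-bit result (with carry-out)."""
    carry, total = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total | (carry << 4)
```

Each full adder needs a handful of gates, which is why the complete 4-bit adder fits in on the order of a hundred FETs.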

  20. Effect of air gap on uniformity of large-scale surface-wave plasma

    International Nuclear Information System (INIS)

    Lan Chaohui; Hu Xiwei; Jiang Zhonghe; Liu Minghai

    2009-01-01

The effect of an air gap on the uniformity of large-scale surface-wave plasma (SWP) in a rectangular chamber device is studied using three-dimensional numerical analyses based on the finite difference time-domain (FDTD) approximation to Maxwell's equations and a plasma fluid model. The spatial distributions of the surface wave excited by a slot-antenna array and of plasma parameters such as electron density and temperature are presented. For different air gap thicknesses, the results show that the presence of an air gap severely weakens the excitation of the surface wave and thereby of the SWP. Thus the air gap should be eliminated completely in the design of an SWP source, which is opposite to earlier research results. (authors)

  1. Large-scale synthesis of reduced graphene oxides with uniformly coated polyaniline for supercapacitor applications.

    Science.gov (United States)

    Salunkhe, Rahul R; Hsu, Shao-Hui; Wu, Kevin C W; Yamauchi, Yusuke

    2014-06-01

    We report an effective route for the preparation of layered reduced graphene oxide (rGO) with uniformly coated polyaniline (PANI) layers. These nanocomposites are synthesized by chemical oxidative polymerization of aniline monomer in the presence of layered rGO. SEM, TEM, X-ray photoelectron spectroscopy (XPS), FTIR, and Raman spectroscopy analysis results demonstrated that reduced graphene oxide-polyaniline (rGO-PANI) nanocomposites are successfully synthesized. Because of synergistic effects, rGO-PANI nanocomposites prepared by this approach exhibit excellent capacitive performance with a high specific capacitance of 286 F g(-1) and high cycle reversibility of 94 % after 2000 cycles. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
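The specific capacitance figure quoted above follows from the standard galvanostatic charge-discharge relation C = I·Δt/(m·ΔV) (the relation is standard electrochemistry; the numbers below are illustrative, not the paper's measurement conditions):

```python
# Specific capacitance (F/g) from a galvanostatic discharge:
#   C = I * dt / (m * dV)
# with discharge current I (A), discharge time dt (s), active-material
# mass m (g), and voltage window dV (V).
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Hypothetical example: 1 mA discharge over 286 s, 1 mg of composite,
# 1 V window -> 286 F/g, the order of magnitude reported in the record.
c_sp = specific_capacitance(1e-3, 286.0, 1e-3, 1.0)
```

Cycle reversibility is then the ratio of the capacitance after cycling (here 2000 cycles) to the initial value, quoted as 94% in the record.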

  2. SCALING LAWS AND TEMPERATURE PROFILES FOR SOLAR AND STELLAR CORONAL LOOPS WITH NON-UNIFORM HEATING

    International Nuclear Information System (INIS)

    Martens, P. C. H.

    2010-01-01

The bulk of solar coronal radiative loss consists of soft X-ray emission from quasi-static loops at the cores of active regions. In order to develop diagnostics for determining the heating mechanism of these loops from observations by coronal imaging instruments, I have developed analytical solutions for the temperature structure and scaling laws of loop strands for a set of temperature- and pressure-dependent heating functions that encompass heating concentrated at the footpoints, uniform heating, and heating concentrated at the loop apex. Key results are that the temperature profile depends only weakly on the heating distribution (not sufficiently to be of significant diagnostic value) and that the scaling laws survive for this wide range of heating distributions, but with the constant of proportionality in the Rosner-Tucker-Vaiana scaling law (P₀L ∼ T³_max) depending on the specific heating function. Furthermore, quasi-static solutions do not exist for an excessive concentration of heating near the loop footpoints, a result in agreement with recent numerical simulations. It is demonstrated that a generalization of the results to a set of solutions for strands with a functionally prescribed variable diameter leads to only relatively small correction factors in the scaling laws and temperature profiles for constant diameter loop strands. A quintet of leading theoretical coronal heating mechanisms is shown to be captured by the formalism of this paper, and the differences in thermal structure between them may be verified through observations. Preliminary results from full numerical simulations demonstrate that, despite the simplifying assumptions, the analytical solutions from this paper are accurate and stable.
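For context, the classic Rosner-Tucker-Vaiana scaling laws referenced above take the form (quoted from the original RTV result for uniform heating, not from this paper's generalization):

```latex
T_{\max} \;\approx\; 1.4 \times 10^{3}\,(P_0 L)^{1/3},
\qquad
E_H \;\approx\; 10^{5}\, P_0^{7/6}\, L^{-5/6},
```

with T_max in K, base pressure P₀ in dyn cm⁻², loop half-length L in cm, and volumetric heating rate E_H in erg cm⁻³ s⁻¹. The first relation is equivalent to P₀L ∝ T³_max; the paper's result is that only the numerical prefactor changes as the heating distribution is varied.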

  3. Scale-free behavior of networks with the copresence of preferential and uniform attachment rules

    Science.gov (United States)

    Pachon, Angelica; Sacerdote, Laura; Yang, Shuyi

    2018-05-01

Complex networks in different areas exhibit degree distributions with a heavy upper tail. A preferential attachment mechanism in a growth process produces a graph with this feature. We herein investigate a variant of the simple preferential attachment model, whose modifications are interesting for two main reasons: to analyze more realistic models and to study the robustness of the scale-free behavior of the degree distribution. We introduce and study a model which takes into account two different attachment rules: a preferential attachment mechanism (with probability 1 - p) that stresses the rich-get-richer effect, and a uniform choice (with probability p) among the most recent nodes, i.e. the nodes belonging to a window of size w to the left of the last-born node. The latter highlights a trend to select one of the last added nodes when no information is available. The recent nodes can be either a given fixed number or a proportion (αn) of the total number of existing nodes. In the first case, we prove that this model exhibits an asymptotically power-law degree distribution. The same result is then illustrated through simulations in the second case. When the window of recent nodes has a constant size, we prove that the presence of the uniform rule delays the starting time from which the asymptotic regime starts to hold. The mean number of nodes of degree k and the asymptotic degree distribution are also determined analytically. Finally, a sensitivity analysis on the parameters of the model is performed.
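The growth process described above is easy to simulate (a minimal sketch of the two-rule attachment mechanism; the starting graph, parameters, and degree-weighted sampling trick are implementation choices, not the paper's exact construction):

```python
import random

def grow(n, p, w, seed=0):
    """Grow a tree-like graph of n nodes: each new node attaches
    uniformly to one of the w most recent nodes with probability p,
    otherwise preferentially (proportionally to degree)."""
    rng = random.Random(seed)
    degrees = [1, 1]        # start from a single edge 0--1
    endpoints = [0, 1]      # degree-weighted multiset of edge endpoints
    for new in range(2, n):
        if rng.random() < p:
            # uniform choice within the window of recent nodes
            target = rng.randrange(max(0, new - w), new)
        else:
            # preferential: sampling an endpoint is degree-proportional
            target = rng.choice(endpoints)
        degrees.append(1)
        degrees[target] += 1
        endpoints += [new, target]
    return degrees

deg = grow(20_000, p=0.2, w=50)
# The heavy upper tail survives the uniform component: the maximum
# degree sits far above the mean degree of about 2.
```

Sweeping p and w in such a simulation is exactly the kind of sensitivity analysis the paper performs analytically.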

  4. Uniform Distance Scaling Behavior of Planet-Satellite Nanostructures Made by Star Polymers.

    Science.gov (United States)

    Rossner, Christian; Tang, Qiyun; Glatter, Otto; Müller, Marcus; Vana, Philipp

    2017-02-28

Planet-satellite nanostructures from RAFT star polymers and larger (planet) as well as smaller (satellite) gold nanoparticles are analyzed in experiments and computer simulations regarding the influence of the number of star-polymer arms. A uniform scaling behavior of planet-satellite distances as a function of arm length was found both in the dried state (via transmission electron microscopy) after casting the nanostructures on surfaces and in the colloidally dispersed state (via simulations and small-angle X-ray scattering) when 2-, 3-, and 6-arm star polymers were employed. This indicates that the planet-satellite distances are mainly determined by the arm length of the star polymers. The observed discrepancy between TEM and simulated distances can be attributed to the difference in polymer configurations between the dried and dispersed states. Our results also show that these distances are controlled by the density of star-polymer end groups, and that the number of grabbed satellite particles is determined by the magnitude of the corresponding density. These findings demonstrate the feasibility of precisely controlling planet-satellite structures at the nanoscale.

  5. Acceptability of impregnated school uniforms for dengue control in Thailand: a mixed methods approach.

    Science.gov (United States)

    Murray, Natasha; Jansarikij, Suphachai; Olanratmanee, Phanthip; Maskhao, Pongsri; Souares, Aurélia; Wilder-Smith, Annelies; Kittayapong, Pattamaporn; Louis, Valérie R

    2014-01-01

    As current dengue control strategies have been shown to be largely ineffective in reducing dengue in school-aged children, novel approaches towards dengue control need to be studied. Insecticide-impregnated school uniforms represent an innovative approach with the theoretical potential to reduce dengue infections in school children. This study took place in the context of a randomised controlled trial (RCT) to test the effectiveness of permethrin-impregnated school uniforms (ISUs) for dengue prevention in Chachoengsao Province, Thailand. The objective was to assess the acceptability of ISUs among parents, teachers, and principals of school children involved in the trial. Quantitative and qualitative tools were used in a mixed methods approach. Class-clustered randomised samples of school children enrolled in the RCT were selected, and their parents completed 321 self-administered questionnaires. Descriptive statistics and logistic regression were used to analyse the quantitative data. Focus group discussions and individual semi-structured interviews were conducted with parents, teachers, and principals. Qualitative data analysis involved content analysis with coding and thematic development. Knowledge and experience of dengue were substantial, and the acceptability of ISUs was high. Parents (87.3%; 95% CI 82.9-90.8) would allow their child to wear an ISU, and 59.9% (95% CI 53.7-65.9) of parents would incur additional costs for an ISU over a normal uniform; this was significantly associated with the total monthly household income and the educational level of the respondent. Parents (62.5%; 95% CI 56.6-68.1) indicated they would be willing to recommend ISUs to other parents. Acceptability of the novel tool of ISUs was high, as defined by the lack of concern along with the willingness to pay and to recommend. Considering issues of effectiveness and scalability, assessing the acceptability of ISUs over time is recommended.

  6. Acceptability of impregnated school uniforms for dengue control in Thailand: a mixed methods approach

    Directory of Open Access Journals (Sweden)

    Natasha Murray

    2014-09-01

    Full Text Available Background: As current dengue control strategies have been shown to be largely ineffective in reducing dengue in school-aged children, novel approaches towards dengue control need to be studied. Insecticide-impregnated school uniforms represent an innovative approach with the theoretical potential to reduce dengue infections in school children. Objectives: This study took place in the context of a randomised controlled trial (RCT) to test the effectiveness of permethrin-impregnated school uniforms (ISUs) for dengue prevention in Chachoengsao Province, Thailand. The objective was to assess the acceptability of ISUs among parents, teachers, and principals of school children involved in the trial. Methodology: Quantitative and qualitative tools were used in a mixed methods approach. Class-clustered randomised samples of school children enrolled in the RCT were selected and their parents completed 321 self-administered questionnaires. Descriptive statistics and logistic regression were used to analyse the quantitative data. Focus group discussions and individual semi-structured interviews were conducted with parents, teachers, and principals. Qualitative data analysis involved content analysis with coding and thematic development. Results: The knowledge and experience of dengue was substantial. The acceptability of ISUs was high. Parents (87.3%; 95% CI 82.9–90.8) would allow their child to wear an ISU and 59.9% (95% CI 53.7–65.9) of parents would incur additional costs for an ISU over a normal uniform. This was significantly associated with the total monthly income of a household and the educational level of the respondent. Parents (62.5%; 95% CI 56.6–68.1) indicated they would be willing to recommend ISUs to other parents. Conclusions: Acceptability of the novel tool of ISUs was high as defined by the lack of concern along with the willingness to pay and recommend. Considering issues of effectiveness and scalability, assessing acceptability of ISUs over time is recommended.

  7. Measuring dissimilarity between respiratory effort signals based on uniform scaling for sleep staging

    International Nuclear Information System (INIS)

    Long, Xi; Fonseca, Pedro; Aarts, Ronald M; Yang, Jie; Weysen, Tim; Haakma, Reinder; Foussier, Jérôme

    2014-01-01

    Polysomnography (PSG) has been extensively studied for sleep staging, where sleep stages are usually classified as wake, rapid-eye-movement (REM) sleep, or non-REM (NREM) sleep (including light and deep sleep). Respiratory information has been proven to correlate with autonomic nervous activity, which is related to sleep stages. For example, it is known that the breathing rate and amplitude during NREM sleep, in particular during deep sleep, are steadier and more regular than during periods of wakefulness, which can be influenced by body movements, conscious control, or other external factors. However, respiratory morphology has not been well investigated across sleep stages. We therefore explore the dissimilarity of respiratory effort with respect to its signal waveform or morphology. The dissimilarity measure is computed between two respiratory effort signal segments with the same number of consecutive breaths using a uniform scaling distance. To capture this morphological dissimilarity, we propose a novel window-based feature in a sleep staging framework. Experiments were conducted on a data set of 48 healthy subjects using a linear discriminant classifier and ten-fold cross validation. The results reveal that this feature can help discriminate between sleep stages, with the exception of separating wake from REM sleep. When combining the new feature with 26 existing respiratory features, we achieved a Cohen’s Kappa coefficient of 0.48 for 3-stage classification (wake, REM sleep and NREM sleep) and of 0.41 for 4-stage classification (wake, REM sleep, light sleep and deep sleep), which outperforms the results obtained without this new feature. (paper)
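
    A uniform scaling distance compares two segments after stretching or shrinking one of them along the time axis and keeping the best match. The sketch below illustrates the general idea only; the prefix-stretching formulation, the z-normalisation, and the scale range are illustrative choices, not the paper's exact definition:

```python
import numpy as np

def uniform_scaling_distance(x, y, scale_range=(0.7, 1.0), n_scales=7):
    """Smallest z-normalised Euclidean distance between x and a
    uniformly time-scaled prefix of y (generic uniform scaling sketch)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    xz = (x - x.mean()) / x.std()
    best = np.inf
    for s in np.linspace(scale_range[0], scale_range[1], n_scales):
        m = int(round(n * s))          # length of the prefix of y to use
        if m < 2 or m > len(y):
            continue
        # stretch the length-m prefix of y onto n sample points
        yr = y[np.arange(n) * m // n]
        sd = yr.std()
        if sd == 0:
            continue
        yz = (yr - yr.mean()) / sd
        best = min(best, float(np.sqrt(np.sum((xz - yz) ** 2))))
    return best
```

    Identical segments give a distance of zero at scale 1.0, while morphologically different segments remain far apart for every candidate scale, which is the property exploited as a sleep-staging feature.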

  8. Large-scale uniform bilayer graphene prepared by vacuum graphitization of 6H-SiC(0001) substrates

    Science.gov (United States)

    Wang, Qingyan; Zhang, Wenhao; Wang, Lili; He, Ke; Ma, Xucun; Xue, Qikun

    2013-03-01

    We report the preparation of large-scale uniform bilayer graphene on nominally flat Si-polar 6H-SiC(0001) substrates by flash annealing in ultrahigh vacuum. The resulting graphene has a uniform thickness of a single bilayer and consists of regular terraces separated by triple SiC bilayer steps on the 6H-SiC(0001) substrate. In situ scanning tunneling microscopy reveals that suppression of pit formation on the terraces and uniformity of SiC decomposition at step edges are the key factors behind the uniform thickness. By studying surface morphologies prepared under different annealing rates, it is found that the annealing rate directly affects SiC decomposition, diffusion of the released Si/C atoms and strain relaxation, which together determine the final step structure and density of defects.

  9. Synthesis of Highly Uniform and Compact Lithium Zinc Ferrite Ceramics via an Efficient Low Temperature Approach.

    Science.gov (United States)

    Xu, Fang; Liao, Yulong; Zhang, Dainan; Zhou, Tingchuan; Li, Jie; Gan, Gongwen; Zhang, Huaiwu

    2017-04-17

    LiZn ferrite ceramics with high saturation magnetization (4πMs) and low ferromagnetic resonance linewidth (ΔH) represent a critical class of materials for microwave ferrite devices. Many existing approaches emphasize promoting grain growth (average size 10-50 μm) of ferrite ceramics to improve the gyromagnetic properties at relatively low sintering temperatures. This paper describes in detail a new strategy for obtaining uniform and compact LiZn ferrite ceramics (average grain size ∼2 μm) with enhanced magnetic performance by suppressing grain growth. LiZn ferrites with the formula Li0.415Zn0.27Mn0.06Ti0.1Fe2.155O4 were prepared by solid-state reaction routes with two new sintering strategies. Interestingly, the results show that uniform, compact, and pure spinel ferrite ceramics were synthesized at a low temperature (∼850 °C) without obvious grain growth. We also find that a fast second sintering treatment (FSST) can further improve the gyromagnetic properties, yielding higher 4πMs and lower ΔH. The two new strategies are facile and efficient for densification of LiZn ferrite ceramics via suppressed grain growth at low temperatures. The sintering strategy reported in this study also provides a reference for other ceramics, such as soft magnetic ferrite ceramics or dielectric ceramics.

  10. Weight gain in pregnancy and application of the 2009 IOM guidelines: toward a uniform approach.

    Science.gov (United States)

    Gilmore, L Anne; Redman, Leanne M

    2015-03-01

    There is an urgent need to adopt standardized nomenclature for gestational weight gain (GWG), a more uniform approach to calculating it, and hence to quantifying adherence to the 2009 Institute of Medicine (IOM) guidelines. This perspective highlights the varying methods used to estimate GWG and discusses the advantages and limitations of each. While these calculation differences arguably have minimal impact at the population level, at the patient level incorrectly estimating weight at conception can result in misclassification of preconception body mass index (BMI) and assignment of the wrong IOM guideline, which inherently affects the prospective management of weight gain (and potential outcomes) during the current pregnancy. This study recommends that preconception BMI and total GWG be determined objectively and that total GWG be adjusted for length of gestation before assessing adherence to the IOM GWG guidelines. © 2014 The Obesity Society.
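
    The sensitivity to preconception weight can be made concrete with a small sketch of guideline assignment. The gain ranges below are the published 2009 IOM total-GWG recommendations for singleton pregnancies, converted to kilograms; the function name and interface are hypothetical:

```python
def iom_gwg_range(weight_kg, height_m):
    """Return the 2009 IOM recommended total gestational weight gain
    range (kg) for a given preconception weight and height.  A small
    BMI error near a category boundary shifts the assigned range."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return (12.5, 18.0)   # underweight
    elif bmi < 25.0:
        return (11.5, 16.0)   # normal weight
    elif bmi < 30.0:
        return (7.0, 11.5)    # overweight
    return (5.0, 9.0)         # obese
```

    For a woman 1.65 m tall, an assumed preconception weight of 67 kg (BMI 24.6) and one of 69 kg (BMI 25.3) fall on opposite sides of the normal/overweight boundary and receive different recommended ranges, which is precisely the misclassification risk the abstract describes.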

  11. Controlling droplet-based deposition uniformity of long silver nanowires by micrometer scale substrate patterning

    International Nuclear Information System (INIS)

    Basu, Nandita; Cross, Graham L W

    2015-01-01

    We report control of droplet-deposit uniformity of long silver nanowires suspended in solution via microscopic influence on the liquid contact line. Substrates with microfabricated line patterns, with a pitch far smaller than the mean wire length, lead to improved deposit thickness uniformity compared with unpatterned substrates. For high boiling-point solvents, two significant effects were observed: the substrate patterns suppressed coffee-ring staining, and the wire deposits exhibited a common orientation, lying perpendicular across the top of the lines. The latter result is completely distinct from previously reported substrate groove channeling effects. This work shows that microscopic influence on the droplet contact-line geometry, including the contact angle through altered substrate wetting, allows significant and advantageous control of the deposition patterns of wire-like solutes as the drop dries. (paper)

  12. Wafer-scale fabrication of uniform Si nanowire arrays using the Si wafer with UV/Ozone pretreatment

    International Nuclear Information System (INIS)

    Bai, Fan; Li, Meicheng; Huang, Rui; Yu, Yue; Gu, Tiansheng; Chen, Zhao; Fan, Huiyang; Jiang, Bing

    2013-01-01

    The electroless etching technique combined with a UV/Ozone pretreatment is presented for wafer-scale fabrication of silicon nanowire (SiNW) arrays. The high uniformity of the SiNW arrays is evidenced by a relative standard deviation of the reflection spectra below 0.2 across the 4-in. wafer. The influence of the UV/Ozone pretreatment on the formation of the SiNW arrays is investigated. A very thin SiO2 layer produced by the UV/Ozone pretreatment improves the uniform nucleation of Ag nanoparticles (NPs) on the Si surface because of effective surface passivation. Meanwhile, the SiO2 located among adjacent Ag NPs obstructs their coalescence during growth, facilitating the deposition of uniform and dense Ag NP catalysts, which induces the formation of SiNW arrays with good uniformity and high filling ratio. Furthermore, remarkable antireflective and hydrophobic properties are observed for the SiNW arrays, which display great potential in self-cleaning antireflection applications.

  13. A mechanochemical approach to get stunningly uniform particles of magnesium-aluminum-layered double hydroxides

    Science.gov (United States)

    Zhang, Xiaoqing; Qi, Fenglin; Li, Shuping; Wei, Shaohua; Zhou, Jiahong

    2012-10-01

    A mechanochemical approach is developed for preparing a series of magnesium-aluminum-layered double hydroxides (Mg-Al-LDHs). The approach comprises a mechanochemical step, involving manual grinding of solid salts in an agate mortar, followed by a peptization step. To verify the LDH structure formed in the grinding process, X-ray diffraction (XRD) patterns, transmission electron microscopy (TEM) images and thermogravimetry/differential scanning calorimetry (TG-DSC) data of the product without peptization were obtained. The results show that amorphous particles with low crystallinity and poor thermal stability are produced at this stage, and that the role of peptization is to improve these properties: regular particles with high crystallinity and good thermal stability are obtained after peptization. Furthermore, the fundamental experimental parameters, including grinding time, the molar ratio of Mg to Al (defined as the R value) and the water content, were systematically examined in order to control the size and morphology of the LDH particles. Regular hexagonal particles or spherical nanostructures can be efficiently obtained, and the particle sizes were controlled in the range of 52-130 nm by carefully adjusting these parameters. Finally, stunningly uniform Mg-Al-LDH particles can be synthesized under proper R values, suitable grinding times and a high degree of supersaturation.

  14. The Scaled SLW model of gas radiation in non-uniform media based on Planck-weighted moments of gas absorption cross-section

    Science.gov (United States)

    Solovjov, Vladimir P.; Andre, Frederic; Lemonnier, Denis; Webb, Brent W.

    2018-02-01

    A Scaled SLW model for the prediction of radiative transfer in non-uniform gaseous media is presented. To keep the SLW method a simple and computationally efficient engineering tool, special attention is paid to explicit, non-iterative methods for calculating the scaling coefficient. The moments of the gas absorption cross-section weighted by the Planck blackbody emissive power (in particular, the first moment, the Planck mean, and the first inverse moment, the Rosseland mean) are used as the total characteristics of the absorption spectrum to be preserved by scaling. Generalized SLW modelling using these moments, in both the discrete gray gases and the continuous formulation, is presented. The use of a line-by-line look-up table for the corresponding ALBDF and inverse ALBDF distribution functions (so that no implicit equations need to be solved) keeps the method flexible and efficient. Predictions of radiative transfer using the Scaled SLW model are compared with line-by-line benchmark solutions and with predictions using the Rank Correlated SLW model and the SLW Reference Approach. Conclusions and recommendations regarding application of the Scaled SLW model are given.
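
    In generic notation (an illustrative sketch, not necessarily the paper's symbols), with C_η the spectral absorption cross-section and E_bη the Planck blackbody emissive power at temperature T, the two Planck-weighted moments mentioned above are:

```latex
\bar{C}_{P} \;=\; \frac{\int_{0}^{\infty} C_{\eta}\, E_{b\eta}(T)\, d\eta}
                       {\int_{0}^{\infty} E_{b\eta}(T)\, d\eta},
\qquad
\frac{1}{\bar{C}_{R}} \;=\; \frac{\int_{0}^{\infty} C_{\eta}^{-1}\, E_{b\eta}(T)\, d\eta}
                                 {\int_{0}^{\infty} E_{b\eta}(T)\, d\eta}
```

    The first is the Planck mean (first moment) and the second the Rosseland-type mean (first inverse moment); scaling is chosen so that these totals of the absorption spectrum are preserved.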

  15. A uniform approach for on-site training and qualification of health physics technicians

    International Nuclear Information System (INIS)

    Till, J.E.

    1977-01-01

    Estimates show that in the U.S. approx. 75% of the health physics technicians received their training through courses offered by their employer. The quality and the extent of this training vary considerably among nuclear facilities. This paper describes a uniform approach for on-site training and qualification of health physics technicians applicable to all nuclear facilities. The program consists of four levels of qualification: Health Physics Technician Trainee, Technician I, Technician II and Senior Technician. The training is divided into modules that are composed of formal lectures, practical factors, experience, and a comprehensive examination. The minimum time required from hiring of inexperienced trainees to qualification as Senior Technicians is approx. 24 months. A qualification guide lists each step a technician must complete in the training program and provides documentation which facilitates audits by internal and external groups. Although items in the program would differ between facilities, the program provides specific titles for technicians, based on their training and experience, which would be applicable throughout the nuclear industry. (author)

  16. Optimal Focusing and Scaling Law for Uniform Photo-Polymerization in a Thick Medium Using a Focused UV Laser

    Directory of Open Access Journals (Sweden)

    Jui-Teng Lin

    2014-02-01

    Full Text Available We present a modeling study of photoinitiated polymerization in a thick polymer-absorbing medium using a focused UV laser. Transient profiles of the initiator concentration at various focusing conditions are analyzed to define the polymerization boundary. Furthermore, we demonstrate optimal focusing conditions that yield more uniform polymerization over a larger volume than the collimated or non-optimal cases. Focusing too tightly, with a focal length f < f* (an optimal focal length), yields a fast process but provides a smaller polymerization volume at a given time than the optimal focusing case. Finally, a scaling law is derived which shows that f* is inversely proportional to the product of the extinction coefficient and the initial initiator concentration. The scaling law provides useful guidance for predicting optimal conditions for photoinitiated polymerization under focused UV laser irradiation. The focusing technique also provides a novel and unique means of obtaining uniform photo-polymerization within a limited irradiation time.
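
    The derived scaling law can be summarized as follows; the symbols here are illustrative (ε for the extinction coefficient, C₀ for the initial initiator concentration, k an unspecified proportionality constant), since the abstract states only the proportionality:

```latex
f^{*} \;=\; \frac{k}{\varepsilon\, C_{0}} \;\;\propto\;\; \frac{1}{\varepsilon\, C_{0}}
```

    In words: the more strongly the medium absorbs (larger ε C₀), the shorter the optimal focal length needed for uniform polymerization.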

  17. Creating a systems engineering approach for the manual on uniform traffic control devices.

    Science.gov (United States)

    2011-03-01

    The Manual on Uniform Traffic Control Devices (MUTCD) provides basic principles for use of traffic : control devices (TCD). However, most TCDs are not explicitly required, and the decision to use a given : TCD in a given situation is typically made b...

  18. Multi-scale modelling of non-uniform consolidation of uncured toughened unidirectional prepregs

    Science.gov (United States)

    Sorba, G.; Binetruy, C.; Syerko, E.; Leygue, A.; Comas-Cardona, S.; Belnoue, J. P.-H.; Nixon-Pearson, O. J.; Ivanov, D. S.; Hallett, S. R.; Advani, S. G.

    2018-05-01

    Consolidation is a crucial step in the manufacturing of composite parts with prepregs because its role is to eliminate inter- and intra-ply gaps and porosity. Some thermoset prepreg systems are toughened with thermoplastic particles. Depending on their size, thermoplastic particles can be either located in between plies or distributed within the inter-fibre regions. When subjected to transverse compaction, resin will bleed out of low-viscosity unidirectional prepregs along the fibre direction, whereas one would expect transverse squeeze flow to dominate for higher-viscosity prepregs. Recent experimental work showed that the consolidation of uncured toughened prepregs involves complex flow and deformation mechanisms where both bleeding and squeeze flow patterns are observed [1]. Micrographs of compacted and cured samples confirm these features, as shown in Fig. 1. A phenomenological model was proposed [2] in which bleeding flow and squeeze flow are combined, together with a criterion for the transition from shear flow to resin bleeding. However, the micrographs also reveal a resin-rich layer between plies which may contribute to the complex flow mechanisms during the consolidation process. In an effort to provide additional insight into these complex mechanisms, this work focuses on the 3D numerical modelling of the compaction of uncured toughened prepregs in the cross-ply configuration described in [1]. A transversely isotropic fluid model is used to describe the flow behaviour of the plies, coupled with inter-ply resin flow of an isotropic fluid. The multi-scale flow model used is based on [3, 4]. A numerical parametric study is carried out in which the resin viscosity, permeability and inter-ply thickness are varied to identify the role of important variables. The squeezing flow and the bleeding flow are compared for a range of process parameters to investigate the coupling and competition between the two flow mechanisms.
Figure 4 shows the predicted displacement of

  19. Low-Temperature Soft-Cover Deposition of Uniform Large-Scale Perovskite Films for High-Performance Solar Cells.

    Science.gov (United States)

    Ye, Fei; Tang, Wentao; Xie, Fengxian; Yin, Maoshu; He, Jinjin; Wang, Yanbo; Chen, Han; Qiang, Yinghuai; Yang, Xudong; Han, Liyuan

    2017-09-01

    Large-scale high-quality perovskite thin films are crucial for producing high-performance perovskite solar cells. However, for perovskite films fabricated by solvent-rich processes, film uniformity can be degraded by convection during thermal evaporation of the solvent. Here, a scalable low-temperature soft-cover deposition (LT-SCD) method is presented, in which thermal-convection-induced defects in perovskite films are eliminated through a strategy of surface tension relaxation. Compact, homogeneous perovskite films free of convection-induced defects are obtained on an area of 12 cm2, which enables a power conversion efficiency (PCE) of 15.5% in a solar cell with an area of 5 cm2, the highest efficiency at this large cell area. A PCE of 15.3% is also obtained for a flexible perovskite solar cell deposited on a polyethylene terephthalate substrate, owing to the advantage of the presented low-temperature processing. Hence, the present LT-SCD technology provides a new non-spin-coating route to the deposition of large-area uniform perovskite films for both rigid and flexible perovskite devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Favorable noise uniformity properties of Fourier-based interpolation and reconstruction approaches in single-slice helical computed tomography

    International Nuclear Information System (INIS)

    La Riviere, Patrick J.; Pan Xiaochuan

    2002-01-01

    Volumes reconstructed by standard methods from single-slice helical computed tomography (CT) data have been shown to have noise levels that are highly nonuniform relative to those in conventional CT. These noise nonuniformities can affect low-contrast object detectability and have also been identified as the cause of the zebra artifacts that plague maximum intensity projection (MIP) images of such volumes. While these spatially variant noise levels have their roots in the peculiarities of the helical scan geometry, there is also a strong dependence on the interpolation and reconstruction algorithms employed. In this paper, we seek to develop image reconstruction strategies that eliminate or reduce, at its source, the nonuniformity of noise levels in helical CT relative to conventional CT. We pursue two approaches, independently and in concert. We argue, and verify, that Fourier-based longitudinal interpolation approaches lead to more uniform noise ratios than do the standard 360LI and 180LI approaches. We also demonstrate that a Fourier-based fan-to-parallel rebinning algorithm, used as an alternative to fanbeam filtered backprojection for slice reconstruction, likewise leads to more uniform noise ratios, even when making use of the 180LI and 360LI interpolation approaches.
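
    The core idea of Fourier-based interpolation, as opposed to linear schemes such as 180LI/360LI, is to interpolate in the frequency domain by zero-padding the spectrum. The sketch below is a generic one-dimensional illustration of that principle for uniformly sampled periodic data, not the paper's longitudinal interpolation algorithm:

```python
import numpy as np

def fourier_interpolate(samples, factor):
    """Sinc (Fourier) interpolation of a uniformly sampled periodic
    signal: zero-pad the half-spectrum and invert on a finer grid."""
    n = len(samples)
    spec = np.fft.rfft(samples)
    # irfft zero-pads the missing high-frequency bins; the factor
    # rescales for the longer inverse transform so amplitude is kept
    return np.fft.irfft(spec, n * factor) * factor
```

    For a band-limited periodic signal this reproduces the underlying function exactly at the new sample points, which is why Fourier-based schemes avoid the position-dependent smoothing that linear interpolation introduces.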

  1. A Generic Approach for Reliability Predictions Considering Non-Uniform Deterioration Behaviour

    International Nuclear Information System (INIS)

    Krause, Jakob; Kabitzsch, Klaus

    2012-01-01

    Predictive maintenance offers the possibility of prognosticating the remaining time until a maintenance action for a machine has to be scheduled. Unfortunately, current predictive maintenance solutions are only suitable for very specific use cases, such as reliability predictions based on vibration monitoring. Furthermore, they do not consider the fact that machines may deteriorate non-uniformly, depending on external influences (e.g., the workpiece material in a milling machine or the changing fruit acid concentration in a bottling plant). In this paper, two concepts for a generic predictive maintenance solution that also considers non-uniform aging behaviour are introduced. The first concept is based on system models representing the health state of a technical system. As these models are usually static (i.e., they lack a time dimension), their coefficients are determined periodically and the resulting time series is used as an aging indicator. The second concept focuses on external influences (contexts) which change the behaviour of the aforementioned aging indicators, in order to increase the accuracy of reliability predictions. To this end, context-dependent time series models are determined and used to predict machine reliability. Both concepts were evaluated on data from an air ventilation system. It could be shown that they are suitable for determining aging indicators in a generic way and for incorporating external influences in the reliability prediction. Through this, the quality of reliability predictions can be significantly increased, which in practice leads to more accurate scheduling of maintenance actions. Furthermore, the generic character of the solution makes the concepts suitable for a wide range of aging processes.
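
    A minimal sketch of the first concept follows: fit a static health model once per time window, treat the coefficient series as an aging indicator, and extrapolate it to a failure threshold. All names, the per-window fitting interface, and the linear extrapolation are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def aging_indicator(windows, fit_coeff):
    """Fit a static model to each data window and return the resulting
    coefficient series, used as an aging indicator."""
    return np.array([fit_coeff(w) for w in windows])

def remaining_time(indicator, threshold):
    """Linearly extrapolate the indicator series to a failure
    threshold; returns windows remaining after the last observation."""
    t = np.arange(len(indicator))
    slope, intercept = np.polyfit(t, indicator, 1)
    if slope <= 0:
        return np.inf          # no measurable deterioration trend
    return (threshold - intercept) / slope - t[-1]
```

    A context-dependent variant (the second concept) would fit separate trend models per operating context before extrapolating.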

  2. Large-scale uniform ZnO tetrapods on catalyst free glass substrate by thermal evaporation method

    Energy Technology Data Exchange (ETDEWEB)

    Alsultany, Forat H., E-mail: foratusm@gmail.com [School of Physics, USM, 11800 Penang (Malaysia); Hassan, Z. [Institute of Nano-Optoelectronics Research and Technology Laboratory (INOR), USM, 11800 Penang (Malaysia); Ahmed, Naser M. [School of Physics, USM, 11800 Penang (Malaysia)

    2016-07-15

    Highlights: • Growth of ZnO tetrapods on a glass substrate by the thermal evaporation method is investigated. • The glass substrate requires no catalyst or seed layer. • The morphology was controlled by adjusting the temperatures of the source material and the substrate. • The glass substrate was placed vertically in the quartz tube. - Abstract: Here, we report for the first time the catalyst-free growth of large-scale ZnO tetrapods of uniform shape and size on a glass substrate via the thermal evaporation method. The three-dimensional networks of ZnO tetrapods have needle–wire junctions, an average leg length of 2.1–2.6 μm, and a diameter of 35–240 nm. The morphology and structure of the ZnO tetrapods were controlled through the preparation temperatures of the Zn powder and the glass substrate under O2 and Ar gases. The ZnO tetrapods were characterized using X-ray diffraction, field emission scanning electron microscopy, UV–vis spectrophotometry, and photoluminescence. The results show that the samples grow in the hexagonal wurtzite structure, preferentially oriented along the (002) direction, with good crystallinity and high transmittance. The band gap value is about 3.27 eV. The photoluminescence spectrum exhibits a very sharp peak at 378 nm and a weak, broad green emission.

  3. A novel non-uniform control vector parameterization approach with time grid refinement for flight level tracking optimal control problems.

    Science.gov (United States)

    Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua

    2018-02-01

    A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform-refinement CVP method, while the computational cost is lower. Two well-known flight level tracking problems and one minimum-time problem are tested as illustrations, with the uniform-refinement CVP method adopted as the comparative baseline. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
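
    The building block of any CVP scheme, uniform or not, is a piecewise-constant control held between grid points. The sketch below illustrates only this parameterization on a non-uniform grid; the HHT-based refinement that adapts the grid is not reproduced, and all names are illustrative:

```python
import numpy as np

def control_from_grid(t, grid, params):
    """Evaluate a piecewise-constant control: params[i] holds on the
    interval [grid[i], grid[i+1]).  grid has len(params) + 1 knots and
    may be non-uniform; refinement only changes the knot placement."""
    idx = np.clip(np.searchsorted(grid, t, side="right") - 1,
                  0, len(params) - 1)
    return np.asarray(params)[idx]
```

    A non-uniform refinement concentrates knots where the control varies rapidly, so fewer `params` entries are needed for the same control quality than with an evenly spaced grid.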

  4. A design approach to achieving the field uniformity requirements for the SSC dipole magnets

    International Nuclear Information System (INIS)

    Pavlik, D.; Krefta, M.P.; Johnson, D.C.

    1991-01-01

    This work describes a design approach for the calculation of the magnetic field quality in the SSC dipole magnets. A description of different analytical techniques, including two- and three-dimensional finite element, finite difference and closed-form methods, is presented, showing how each can be relevant to a portion of the problem. Sources of field quality error and their impact on magnet operation are presented, including geometric variations of the conductors, yoke and collar, variability in material properties, persistent currents, saturation effects and the influence of boundary conditions. An approach to integrating the analytical methods and codes into a comprehensive design plan and set of manufacturing specifications is described.

  5. Physicians' Non-Uniform Approach to Prescribing Drugs to Older Patients – A Qualitative Study

    DEFF Research Database (Denmark)

    Christensen, Line Due; Petersen, Janne; Andersen, Ove

    2017-01-01

    with 50 medical specialists in 23 different specialities throughout Denmark who had contact with older patients. Content analysis was performed to identify the relevant themes. Regardless of their medical or surgical background and how often they prescribed drugs for older patients in daily work, all...... that a cautious approach was needed when prescribing drugs for older people, there was no consensus about how to best accomplish this in practice. This article is protected by copyright. All rights reserved....

  6. Growth of uniform nanoparticles of platinum by an economical approach at relatively low temperature

    KAUST Repository

    Shah, M.A.

    2012-01-01

    Current chemical methods of synthesis, which involve environmentally harmful chemicals, have shown limited success in the fabrication of nanomaterials. Environmentally friendly synthesis requires alternative solvents, and it is expected that soft, green approaches may overcome these obstacles. Water, which is regarded as a benign solvent, has been used in the present work for the preparation of platinum nanoparticles. The average particle diameter is in the range of ∼13±5 nm and the particles are largely agglomerated. The advantages of preparing nanoparticles with this method include ease, flexibility and cost effectiveness. The prospects of the process are bright, and the technique could be extended to prepare many other important metal and metal oxide nanostructures. © 2012 Sharif University of Technology. Production and hosting by Elsevier B.V. All rights reserved.

  8. Scaling Consumers' Purchase Involvement: A New Approach

    Directory of Open Access Journals (Sweden)

    Jörg Kraigher-Krainer

    2012-06-01

    Full Text Available A two-dimensional scale, called the ECID Scale, is presented in this paper. The scale is based on a comprehensive model and captures the two antecedent factors of purchase-related involvement, namely whether motivation is intrinsic or extrinsic and whether risk is perceived as low or high. The procedure of scale development and item selection is described. The scale turns out to perform well in terms of validity, reliability, and objectivity despite the use of a small set of items – four each – allowing for simultaneous measurements of up to ten purchases per respondent. The procedure of administering the scale is described so that it can now easily be applied by both scholars and practitioners. Finally, managerial implications of data obtained from its application, which provide insights into possible strategic marketing conclusions, are discussed.

  9. Probabilistic uniformities of uniform spaces

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez Lopez, J.; Romaguera, S.; Sanchis, M.

    2017-07-01

    The theory of metric spaces in the fuzzy context has shown itself to be an interesting area of study, not only from a theoretical point of view but also for its applications. Nevertheless, it is usual to consider these spaces as classical topological or uniform spaces, and there are not many results about constructing fuzzy topological structures starting from a fuzzy metric. Höhle was perhaps the first to show how to construct a probabilistic uniformity and a Lowen uniformity from a probabilistic pseudometric [Hohle78, Hohle82a]. His method can be directly translated to the context of fuzzy metrics and allows one to characterize the categories of probabilistic uniform spaces or Lowen uniform spaces by means of certain families of fuzzy pseudometrics [RL]. On the other hand, other fuzzy uniformities can be constructed in a fuzzy metric space: a Hutton [0,1]-quasi-uniformity [GGPV06]; a fuzzifying uniformity [YueShi10], etc. The paper [GGRLRo] studies several methods of endowing a fuzzy pseudometric space with a probabilistic uniformity and a Hutton [0,1]-quasi-uniformity. In 2010, J. Gutiérrez García, S. Romaguera and M. Sanchis [GGRoSanchis10] proved that the category of uniform spaces is isomorphic to a category formed by sets endowed with a fuzzy uniform structure, i.e. a family of fuzzy pseudometrics satisfying certain conditions. We will show here that, by means of this isomorphism, we can obtain several methods to endow a uniform space with a probabilistic uniformity. Furthermore, these constructions allow us to obtain a factorization of some functors introduced in [GGRoSanchis10]. (Author)

  10. DISC Predictive Scales (DPS): Factor Structure and Uniform Differential Item Functioning Across Gender and Three Racial/Ethnic Groups for ADHD, Conduct Disorder, and Oppositional Defiant Disorder Symptoms

    OpenAIRE

    Wiesner, Margit; Kanouse, David E.; Elliott, Marc N.; Windle, Michael; Schuster, Mark A.

    2015-01-01

    The factor structure and potential uniform differential item functioning (DIF) among gender and three racial/ethnic groups of adolescents (African American, Latino, White) were evaluated for attention deficit/hyperactivity disorder (ADHD), conduct disorder (CD), and oppositional defiant disorder (ODD) symptom scores of the DISC Predictive Scales (DPS; Leung et al., 2005; Lucas et al., 2001). Primary caregivers reported on DSM–IV ADHD, CD, and ODD symptoms for a probability sample of 4,491 chi...

  11. DISC Predictive Scales (DPS): Factor structure and uniform differential item functioning across gender and three racial/ethnic groups for ADHD, conduct disorder, and oppositional defiant disorder symptoms.

    Science.gov (United States)

    Wiesner, Margit; Windle, Michael; Kanouse, David E; Elliott, Marc N; Schuster, Mark A

    2015-12-01

    The factor structure and potential uniform differential item functioning (DIF) among gender and three racial/ethnic groups of adolescents (African American, Latino, White) were evaluated for attention deficit/hyperactivity disorder (ADHD), conduct disorder (CD), and oppositional defiant disorder (ODD) symptom scores of the DISC Predictive Scales (DPS; Leung et al., 2005; Lucas et al., 2001). Primary caregivers reported on DSM-IV ADHD, CD, and ODD symptoms for a probability sample of 4,491 children from three geographical regions who took part in the Healthy Passages study (mean age = 12.60 years, SD = 0.66). Confirmatory factor analysis indicated that the expected 3-factor structure was tenable for the data. Multiple indicators multiple causes (MIMIC) modeling revealed uniform DIF for three ADHD and nine ODD item scores, but not for any of the CD item scores. Uniform DIF was observed predominantly as a function of child race/ethnicity, but minimally as a function of child gender. On the positive side, uniform DIF had little impact on latent mean differences of ADHD, CD, and ODD symptomatology among gender and racial/ethnic groups. Implications of the findings for researchers and practitioners are discussed. (c) 2015 APA, all rights reserved.

  12. Optimization of the plasma parameters for the high current and uniform large-scale pulse arc ion source of the VEST-NBI system

    International Nuclear Information System (INIS)

    Jung, Bongki; Park, Min; Heo, Sung Ryul; Kim, Tae-Seong; Jeong, Seung Ho; Chang, Doo-Hee; Lee, Kwang Won; In, Sang-Ryul

    2016-01-01

    Highlights: • A high power magnetic bucket-type arc plasma source for the VEST NBI system is developed with modifications based on the prototype plasma source for KSTAR. • Plasma parameters during the pulse duration are measured to characterize the plasma source. • High plasma density and good uniformity are achieved at a low operating pressure below 1 Pa. • The required ion beam current density is confirmed by analysis of plasma parameters and the results of a particle balance model. - Abstract: A large-scale hydrogen arc plasma source was developed at the Korea Atomic Energy Research Institute for the high power pulsed NBI system of VEST, a compact spherical tokamak at Seoul National University. One of the research targets of VEST is to study innovative tokamak operating scenarios. For this purpose, a high-current-density and uniform large-scale pulsed plasma source is required to achieve the target ion beam power efficiently. Therefore, the plasma parameters of the ion source, such as the electron density, temperature, and plasma uniformity, were optimized by changing the operating conditions of the plasma source. Furthermore, the ion species of the hydrogen plasma source are analyzed using a particle balance model to increase the monatomic fraction, another essential parameter for increasing the ion beam current density. Conclusively, efficient operating conditions are presented from the results of the optimized plasma parameters, and the extractable ion beam current is calculated.

  13. What is at stake in multi-scale approaches

    International Nuclear Information System (INIS)

    Jamet, Didier

    2008-01-01

    Full text of publication follows: Multi-scale approaches amount to analyzing physical phenomena at small space and time scales in order to model their effects at larger scales. This approach is very general in physics and engineering; one of the best examples of its success is certainly statistical physics, which allows one to recover classical thermodynamics and to determine the limits of application of classical thermodynamics. Getting access to small-scale information aims at reducing the models' uncertainty, but it has a cost: fine-scale models may be more complex than larger-scale models, and their resolution may require the development of specific and possibly expensive methods, numerical simulation techniques and experiments. For instance, in applications related to nuclear engineering, the application of computational fluid dynamics instead of cruder models is a formidable engineering challenge because it requires resorting to high performance computing. Likewise, in two-phase flow modeling, the techniques of direct numerical simulation, where all the interfaces are tracked individually and all turbulence scales are captured, are getting mature enough to be considered for averaged modeling purposes. However, resolving small-scale problems is a necessary step in a multi-scale approach, but it is not sufficient. An important modeling challenge is to determine how to treat small-scale data in order to get relevant information for larger-scale models. For some applications, such as single-phase turbulence or transfers in porous media, this up-scaling approach is known and is now used rather routinely. However, in two-phase flow modeling, the up-scaling approach is not as mature, and specific issues must be addressed that raise fundamental questions. This will be discussed and illustrated. (author)

  14. Towards a uniform and large-scale deposition of MoS2 nanosheets via sulfurization of ultra-thin Mo-based solid films.

    Science.gov (United States)

    Vangelista, Silvia; Cinquanta, Eugenio; Martella, Christian; Alia, Mario; Longo, Massimo; Lamperti, Alessio; Mantovan, Roberto; Basset, Francesco Basso; Pezzoli, Fabio; Molle, Alessandro

    2016-04-29

    Large-scale integration of MoS2 in electronic devices requires the development of reliable and cost-effective deposition processes, leading to uniform MoS2 layers on a wafer scale. Here we report on a detailed study of the heterogeneous vapor-solid reaction between a pre-deposited molybdenum solid film and sulfur vapor, resulting in a controlled growth of MoS2 films onto SiO2/Si substrates with a tunable thickness and cm²-scale uniformity. Based on Raman spectroscopy and photoluminescence, we show that the degree of crystallinity in the MoS2 layers is dictated by the deposition temperature and thickness. In particular, the MoS2 structural disorder observed at low temperature (<750 °C) and low thickness (two layers) evolves to a more ordered crystalline structure at high temperature (1000 °C) and high thickness (four layers). From an atomic force microscopy investigation prior to and after sulfurization, this parametric dependence is associated with the inherent granularity of the MoS2 nanosheet, which is inherited from the pristine morphology of the pre-deposited Mo film. This work paves the way to a closer control of the synthesis of wafer-scale and atomically thin MoS2, potentially extendable to other transition metal dichalcogenides and hence targeting massive and high-volume production for electronic device manufacturing.

  15. Fast-forward scaling theory for phase imprinting on a BEC: creation of a wave packet with uniform momentum density and loading to Bloch states without disturbance

    Science.gov (United States)

    Masuda, Shumpei; Nakamura, Katsuhiro; Nakahara, Mikio

    2018-02-01

    We study phase imprinting on Bose-Einstein condensates (BECs) with the fast-forward scaling theory, revealing a nontrivial scaling property in quantum dynamics. We introduce a wave packet with uniform momentum density (WPUM), which has peculiar properties but is short-lived. The fast-forward scaling theory is applied to derive the driving potential for the creation of WPUMs in a predetermined time. Fast manipulation is essential for the creation of WPUMs because of the instability of the state. We also study the loading of a BEC into a predetermined Bloch state in the lowest band from the ground state of a periodic potential. A controlled linear potential is not sufficient for the creation of Bloch states with large wavenumber because the change in the amplitude of the order parameter is not negligible. We derive the exact driving potential for the creation of predetermined Bloch states using the obtained theory.

  16. Structured ecosystem-scale approach to marine water quality management

    CSIR Research Space (South Africa)

    Taljaard, Susan

    2006-10-01

    Full Text Available and implement environmental management programmes. A structured ecosystem-scale approach for the design and implementation of marine water quality management programmes developed by the CSIR (South Africa) in response to recent advances in policies...

  17. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction

    International Nuclear Information System (INIS)

    Wels, Michael; Hornegger, Joachim; Zheng Yefeng; Comaniciu, Dorin; Huber, Martin

    2011-01-01

    We describe a fully automated method for tissue classification, i.e. the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebrospinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly, and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average
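    The unsupervised core of such a tissue classifier is EM for a Gaussian mixture. Below is a deliberately stripped-down 1-D sketch of that core, without the MRF prior, PBT potentials, or INU field of the full method; the initialization and two-class restriction are illustrative choices, not the paper's.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Plain EM for a two-component 1-D Gaussian mixture: the unsupervised
    core of tissue classification, without the MRF prior, discriminative
    (PBT) potentials, or INU field of the full method."""
    mu = np.percentile(x, [25, 75]).astype(float)   # crude initialization
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per sample
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances from responsibilities
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var
```

    In the full method the E-step responsibilities would additionally be shaped by the MRF clique potentials and the PBT-based unary terms described above.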

  18. Scaling laws for trace impurity confinement: a variational approach

    International Nuclear Information System (INIS)

    Thyagaraja, A.; Haas, F.A.

    1990-01-01

    A variational approach is outlined for the deduction of impurity confinement scaling laws. Given the forms of the diffusive and convective components to the impurity particle flux, we present a variational principle for the impurity confinement time in terms of the diffusion time scale and the convection parameter, which is a non-dimensional measure of the size of the convective flux relative to the diffusive flux. These results are very general and apply irrespective of whether the transport fluxes are of theoretical or empirical origin. The impurity confinement time scales exponentially with the convection parameter in cases of practical interest. (orig.)

  19. Generating Long Scale-Length Plasma Jets Embedded in a Uniform, Multi-Tesla Magnetic-Field

    Science.gov (United States)

    Manuel, Mario; Kuranz, Carolyn; Rasmus, Alex; Klein, Sallee; Fein, Jeff; Belancourt, Patrick; Drake, R. P.; Pollock, Brad; Hazi, Andrew; Park, Jaebum; Williams, Jackson; Chen, Hui

    2013-10-01

    Collimated plasma jets emerge in many classes of astrophysical objects and are of great interest to explore in the laboratory. In many cases, these astrophysical jets exist within a background magnetic field where the magnetic pressure approaches the plasma pressure. Recent experiments performed at the Jupiter Laser Facility utilized a custom-designed solenoid to generate the multi-tesla fields necessary to achieve proper magnetization of the plasma. Time-gated interferometry, Schlieren imaging, and proton radiography were used to characterize jet evolution and collimation under varying degrees of magnetization. Experimental results will be presented and discussed. This work is funded by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-NA0001840, by the National Laser User Facility Program, grant number DE-NA0000850, by the Predictive Sciences Academic Alliances Program in NNSA-ASC, grant number DEFC52-08NA28616, and by NASA through Einstein Postdoctoral Fellowship grant number PF3-140111 awarded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060.

  20. A structured ecosystem-scale approach to marine water quality ...

    African Journals Online (AJOL)

    These, in turn, created the need for holistic and integrated frameworks within which to design and implement environmental management programmes. A structured ecosystem-scale approach for the design and implementation of marine water quality management programmes developed by the CSIR (South Africa) in ...

  1. On the computation of the demagnetization tensor for uniformly magnetized particles of arbitrary shape. Part I: Analytical approach

    International Nuclear Information System (INIS)

    Tandon, S.; Beleggia, M.; Zhu, Y.; De Graef, M.

    2004-01-01

    A Fourier space formalism based on the shape amplitude of a particle is used to compute the demagnetization tensor field for uniformly magnetized particles of arbitrary shape. We provide a list of explicit shape amplitudes for important particle shapes, among others: the sphere, the cylindrical tube, an arbitrary polyhedral shape, a truncated paraboloid, and a cone truncated by a spherical cap. In Part I of this two-part paper, an analytical representation of the demagnetization tensor field for particles with cylindrical symmetry is provided, as well as expressions for the magnetostatic energy and the volumetric demagnetization factors
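    For cross-checking such computations, the classical elliptic-integral expression for the demagnetization factors of a general ellipsoid (a textbook result, used here only as a numerical sanity check, not the paper's Fourier shape-amplitude method) is easy to evaluate:

```python
import numpy as np

def demag_factors(a, b, c, n=200_000):
    """Demagnetization factors (Na, Nb, Nc) of a uniformly magnetized
    ellipsoid with semi-axes (a, b, c), from the classical formula
    N_a = (abc/2) * integral_0^inf ds / ((s+a^2)^(3/2) sqrt(s+b^2) sqrt(s+c^2)).
    The factors satisfy Na + Nb + Nc = 1; a sphere gives 1/3 each."""
    u = np.linspace(0.0, 1.0, n, endpoint=False)[1:]  # substitute s = u/(1-u)
    s = u / (1.0 - u)
    ds = 1.0 / (1.0 - u) ** 2                          # Jacobian ds/du

    def factor(p, q, r):
        g = ds / ((s + p ** 2) ** 1.5 * np.sqrt(s + q ** 2) * np.sqrt(s + r ** 2))
        # trapezoidal rule on the transformed, finite interval
        return 0.5 * a * b * c * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(u))

    return factor(a, b, c), factor(b, c, a), factor(c, a, b)
```

    The sum rule Na + Nb + Nc = 1 and the sphere limit 1/3 provide quick checks for any shape-amplitude-based code.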

  2. Multiple-scale approach for the expansion scaling of superfluid quantum gases

    International Nuclear Information System (INIS)

    Egusquiza, I. L.; Valle Basagoiti, M. A.; Modugno, M.

    2011-01-01

    We present a general method, based on a multiple-scale approach, for deriving the perturbative solutions of the scaling equations governing the expansion of superfluid ultracold quantum gases released from elongated harmonic traps. We discuss how to treat the secular terms appearing in the usual naive expansion in the trap asymmetry parameter ε and calculate the next-to-leading correction for the asymptotic aspect ratio, with significant improvement over the previous proposals.
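    The secular terms mentioned above are the classic target of the multiple-scale method. As a reminder of the mechanism, on the textbook weakly damped oscillator rather than the superfluid scaling equations themselves:

```latex
% Model problem: \ddot{x} + 2\epsilon\dot{x} + x = 0.
% The naive expansion x = x_0 + \epsilon x_1 + \dots yields the secular term
% x_1 = -t\cos t, which ruins the expansion for t \sim 1/\epsilon.
\begin{aligned}
&\text{Introduce a slow scale } T = \epsilon t,\quad x = X(t,T),\quad
  \frac{d}{dt} \to \partial_t + \epsilon\,\partial_T, \\
&O(1):\quad \partial_t^2 X_0 + X_0 = 0
  \;\Rightarrow\; X_0 = A(T)\,e^{it} + \mathrm{c.c.}, \\
&O(\epsilon):\quad \partial_t^2 X_1 + X_1
  = -2i\,(\partial_T A + A)\,e^{it} + \mathrm{c.c.}, \\
&\text{no secular growth} \;\Rightarrow\; \partial_T A + A = 0
  \;\Rightarrow\; A = A_0\,e^{-T}, \\
&\text{so } x(t) \simeq e^{-\epsilon t}\cos t \quad \text{uniformly in } t.
\end{aligned}
```

    Suppressing the resonant forcing at each order is exactly how the secular terms in the trap-asymmetry expansion are removed.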

  3. The Stokes number approach to support scale-up and technology transfer of a mixing process.

    Science.gov (United States)

    Willemsz, Tofan A; Hooijmaijers, Ricardo; Rubingh, Carina M; Frijlink, Henderik W; Vromans, Herman; van der Voort Maarschalk, Kees

    2012-09-01

    Transferring processes between different scales and types of mixers is a common operation in industry. Challenges within this operation include the existence of considerable differences in blending conditions between mixer scales and types. Obtaining the correct blending conditions is crucial for the ability to break up agglomerates in order to achieve the desired blend uniformity. Agglomerate break up is often an abrasion process. In this study, the abrasion rate potential of agglomerates is described by the Stokes abrasion (St(Abr)) number of the system. The St(Abr) number equals the ratio between the kinetic energy density of the moving powder bed and the work of fracture of the agglomerate. In this study, the St(Abr) approach demonstrates to be a useful tool to predict the abrasion of agglomerates during blending when technology is transferred between mixer scales/types. Applying the St(Abr) approach revealed a transition point between parameters that determined agglomerate abrasion. This study gave evidence that (1) below this transition point, agglomerate abrasion is determined by a combination of impeller effects and by the kinetic energy density of the powder blend, whereas (2) above this transition point, agglomerate abrasion is mainly determined by the kinetic energy density of the powder blend.
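    The abstract defines the Stokes abrasion number as the ratio of the moving powder bed's kinetic energy density to the agglomerate's work of fracture. A minimal sketch of that ratio, assuming the kinetic energy density is taken as ½ρv² with bulk density ρ and a characteristic bed (e.g. blade-tip) velocity v; the function name and this particular choice are illustrative, not from the paper:

```python
def stokes_abrasion_number(bulk_density, bed_velocity, work_of_fracture):
    """St_Abr = kinetic energy density of the moving powder bed (J/m^3)
    divided by the agglomerate's work of fracture per unit volume (J/m^3).
    Assumes kinetic energy density = 0.5 * rho * v^2 (illustrative choice)."""
    kinetic_energy_density = 0.5 * bulk_density * bed_velocity ** 2
    return kinetic_energy_density / work_of_fracture
```

    Matching St_Abr between mixer scales, for instance by adjusting impeller speed, is then a candidate criterion for transferring the blending step, consistent with the abrasion-rate argument above.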

  4. Compensating effect of sap velocity for stand density leads to uniform hillslope-scale forest transpiration across a steep valley cross-section

    Science.gov (United States)

    Renner, Maik; Hassler, Sibylle; Blume, Theresa; Weiler, Markus; Hildebrandt, Anke; Guderle, Marcus; Schymanski, Stan; Kleidon, Axel

    2016-04-01

    Roberts (1983) found that forest transpiration is relatively uniform across different climatic conditions and suggested that forest transpiration is a conservative process compensating for environmental heterogeneity. Here we test this hypothesis at a steep valley cross-section composed of European beech in the Attert basin in Luxembourg. We use sapflow, soil moisture, biometric and meteorological data from six sites along a transect to estimate site-scale transpiration rates. Despite opposing hillslope orientations, different slope angles and forest stand structures, we estimated relatively similar transpiration responses to atmospheric demand and seasonal transpiration totals. This similarity is related to a negative correlation between sap velocity and site-average sapwood area. At the south-facing sites with an old, even-aged stand structure and closed canopy layer, we observe significantly lower sap velocities but similar stand-average transpiration rates compared to the north-facing sites with an open canopy structure, tall dominant trees and dense understorey. This suggests that plant hydraulic co-ordination allows for flexible responses to environmental conditions, leading to similar transpiration rates close to the water and energy limits despite the apparent heterogeneity in exposition, stand density and soil moisture. References: Roberts, J. (1983). Forest transpiration: A conservative hydrological process? Journal of Hydrology 66, 133-141.

  5. Uniform Single Valued Neutrosophic Graphs

    Directory of Open Access Journals (Sweden)

    S. Broumi

    2017-09-01

    Full Text Available In this paper, we propose a new concept named the uniform single valued neutrosophic graph. An illustrative example and some properties are examined. Next, we develop an algorithmic approach for computing the complement of a single valued neutrosophic graph. A numerical example is demonstrated for computing the complement of single valued neutrosophic graphs and of a uniform single valued neutrosophic graph.

  6. Receptivity to Kinetic Fluctuations: A Multiple Scales Approach

    Science.gov (United States)

    Edwards, Luke; Tumin, Anatoli

    2017-11-01

    The receptivity of high-speed compressible boundary layers to kinetic fluctuations (KF) is considered within the framework of fluctuating hydrodynamics. The formulation is based on the idea that KF-induced dissipative fluxes may lead to the generation of unstable modes in the boundary layer. Fedorov and Tumin solved the receptivity problem using an asymptotic matching approach which utilized a resonant inner solution in the vicinity of the generation point of the second Mack mode. Here we take a slightly more general approach based on a multiple scales WKB ansatz which requires fewer assumptions about the behavior of the stability spectrum. The approach is modeled after the one taken by Luchini to study low speed incompressible boundary layers over a swept wing. The new framework is used to study examples of high-enthalpy, flat plate boundary layers whose spectra exhibit nuanced behavior near the generation point, such as first mode instabilities and near-neutral evolution over moderate length scales. The configurations considered exhibit supersonic unstable second Mack modes despite the temperature ratio Tw /Te > 1 , contrary to prior expectations. Supported by AFOSR and ONR.

  7. Two-scale approach to oscillatory singularly perturbed transport equations

    CERN Document Server

    Frénod, Emmanuel

    2017-01-01

    This book presents the classical results of the two-scale convergence theory and explains – using several figures – why it works. It then shows how to use this theory to homogenize ordinary differential equations with oscillating coefficients as well as oscillatory singularly perturbed ordinary differential equations. In addition, it explores the homogenization of hyperbolic partial differential equations with oscillating coefficients and linear oscillatory singularly perturbed hyperbolic partial differential equations. Further, it introduces readers to the two-scale numerical methods that can be built from the previous approaches to solve oscillatory singularly perturbed transport equations (ODE and hyperbolic PDE) and demonstrates how they can be used efficiently. This book appeals to master’s and PhD students interested in homogenization and numerics, as well as to the Iter community.

  8. Examining Similarity Structure: Multidimensional Scaling and Related Approaches in Neuroimaging

    Directory of Open Access Journals (Sweden)

    Svetlana V. Shinkareva

    2013-01-01

    Full Text Available This paper covers similarity analyses, a subset of multivariate pattern analysis techniques that are based on similarity spaces defined by multivariate patterns. These techniques offer several advantages and complement other methods for brain data analyses, as they allow for comparison of representational structure across individuals, brain regions, and data acquisition methods. Particular attention is paid to multidimensional scaling and related approaches that yield spatial representations or provide methods for characterizing individual differences. We highlight unique contributions of these methods by reviewing recent applications to functional magnetic resonance imaging data and emphasize areas of caution in applying and interpreting similarity analysis methods.
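    As a concrete illustration of the multidimensional scaling step these analyses rely on, here is a minimal classical (Torgerson) MDS in NumPy. This is the textbook algorithm, not any specific toolbox used in the reviewed studies:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions from an
    n x n matrix of pairwise dissimilarities D, preserving the distances
    as well as possible in the least-squares sense."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # keep the top-k components
    L = np.sqrt(np.maximum(w[idx], 0.0))  # clip small negative eigenvalues
    return V[:, idx] * L                  # n x k spatial configuration

# Three items on a line: the distances 1 and 2 are reproduced exactly
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
X = classical_mds(D, k=1)
```

    In neuroimaging applications, D would typically hold dissimilarities between multivariate activation patterns (e.g. 1 minus pattern correlations), and the low-dimensional configuration X is what gets compared across individuals or regions.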

  9. Statistical distance and the approach to KNO scaling

    International Nuclear Information System (INIS)

    Diosi, L.; Hegyi, S.; Krasznovszky, S.

    1990-05-01

    A new method is proposed for characterizing the approach to KNO scaling. The essence of our method lies in the concept of statistical distance between nearby KNO distributions which reflects their distinguishability in spite of multiplicity fluctuations. It is shown that the geometry induced by the distance function defines a natural metric on the parameter space of a certain family of KNO distributions. Some examples are given in which the energy dependences of distinguishability of neighbouring KNO distributions are compared in nondiffractive hadron-hadron collisions and electron-positron annihilation. (author) 19 refs.; 4 figs
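    One standard notion of statistical distance in this spirit is the Bhattacharyya/Wootters angle between discrete distributions, used here purely to illustrate distinguishability between nearby distributions; the paper's exact metric may differ:

```python
import math

def statistical_distance(p, q):
    """Statistical (Bhattacharyya/Wootters) angle between two discrete
    probability distributions: arccos(sum_i sqrt(p_i * q_i)).
    It is 0 for identical distributions and pi/2 for non-overlapping ones."""
    overlap = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return math.acos(min(1.0, overlap))  # clip rounding above 1
```

    Nearby KNO distributions separated by a small statistical distance are hard to distinguish given finite multiplicity fluctuations, which is exactly the notion of distinguishability invoked above.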

  10. Fast Laplace solver approach to pore-scale permeability

    Science.gov (United States)

    Arns, C. H.; Adler, P. M.

    2018-02-01

    We introduce a powerful and easily implemented method to calculate the permeability of porous media at the pore scale, using an approximation based on the Poiseuille equation to compute permeability to fluid flow with a Laplace solver. The method consists of calculating the Euclidean distance map of the fluid phase to assign local conductivities and lends itself naturally to the treatment of multiscale problems. We compare with analytical solutions as well as experimental measurements and lattice Boltzmann calculations of permeability for Fontainebleau sandstone. The solver is significantly more stable than the lattice Boltzmann approach, uses less memory, and is significantly faster. Permeabilities are in excellent agreement over a wide range of porosities.
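    The core idea — assign a local conductivity per voxel (in the paper, from the Euclidean distance map via a Poiseuille-type relation) and solve a Laplace problem for the pressure — can be shown in one dimension. The following finite-volume sketch with prescribed local conductivities is our own discretization, not the authors' 3-D implementation; for a layered medium it reproduces the harmonic-mean series limit:

```python
import numpy as np

def effective_conductivity_1d(k, dP=1.0):
    """Finite-volume solve of the steady 1-D Laplace/diffusion problem
    d/dx(k dP/dx) = 0 on n unit cells with local conductivities k and
    fixed pressures dP (left wall) and 0 (right wall); returns k_eff.
    Interface conductivities are harmonic means of the neighbors."""
    n = len(k)
    kf = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])  # cell-interface conductivities
    kl, kr = 2.0 * k[0], 2.0 * k[-1]              # half-cell couplings to the walls
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if i > 0:
            A[i, i - 1] -= kf[i - 1]
            A[i, i] += kf[i - 1]
        if i < n - 1:
            A[i, i + 1] -= kf[i]
            A[i, i] += kf[i]
    A[0, 0] += kl
    b[0] += kl * dP       # Dirichlet condition P = dP at the left wall
    A[-1, -1] += kr       # Dirichlet condition P = 0 at the right wall
    P = np.linalg.solve(A, b)
    flux = kl * (dP - P[0])   # steady flux through the left wall
    return flux * n / dP      # k_eff = flux * L / dP with dx = 1
```

    In the actual method, the local conductivity of each fluid voxel would be derived from the Euclidean distance map (a Poiseuille-like law, conductivity growing with distance from the solid), and the same Laplace solve is carried out in 3-D.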

  11. Parametric Approach in Designing Large-Scale Urban Architectural Objects

    Directory of Open Access Journals (Sweden)

    Arne Riekstiņš

    2011-04-01

    Full Text Available When all the disciplines of various science fields converge and develop, new approaches to contemporary architecture arise. The author looks at approaching digital architecture from a parametric viewpoint, revealing its generative capacity, originating from the fields of the aeronautical, naval, automobile and product-design industries. The author also goes explicitly through his design-cycle workflow for testing the latest methodologies in architectural design. The design process steps involved: extrapolating valuable statistical data about the site into three-dimensional diagrams, defining a certain materiality of what is being produced, ways of presenting structural skin and structure simultaneously, contacting the object with the ground, interior program definition of the building with floors and possible spaces, the logic of fabrication, and CNC milling of the prototype. The tool developed by the author and reviewed in this article features enormous performative capacity and is applicable to various architectural design scales. Article in English

  12. Giant monopole transition densities within the local scale ATDHF approach

    International Nuclear Information System (INIS)

    Dimitrova, S.S.; Petkov, I.Zh.; Stoitsov, M.V.

    1986-01-01

    Transition densities for 12 C, 16 O, 28 Si, 32 S, 40 Ca, 48 Ca, 56 Ni, 90 Zr, 208 Pb even-even nuclei corresponding to nuclear giant monopole resonances are obtained within a local-scale adiabatic time-dependent Hartree-Fock approach in terms of effective Skyrme-type forces SkM and S3. The approach, the particular form and all necessary coefficients of these transition densities are reported. They are of a simple analytical form and may be directly used, for example, in analyses of inelastic particle scattering on nuclei by the distorted wave method, thus allowing a test of the theoretical interpretation of giant monopole resonances.

  13. Whole-body voxel-based personalized dosimetry: Multiple voxel S-value approach for heterogeneous media with non-uniform activity distributions.

    Science.gov (United States)

    Lee, Min Sun; Kim, Joong Hyun; Paeng, Jin Chul; Kang, Keon Wook; Jeong, Jae Min; Lee, Dong Soo; Lee, Jae Sung

    2017-12-14

    Personalized dosimetry with high accuracy is becoming more important because of the growing interest in personalized medicine and targeted radionuclide therapy. Voxel-based dosimetry using dose point kernel or voxel S-value (VSV) convolution is available. However, these approaches do not consider medium heterogeneity. Here, we propose a new method for whole-body voxel-based personalized dosimetry for heterogeneous media with non-uniform activity distributions, referred to as the multiple VSV approach. Methods: Multiple (N) VSVs for media with different densities, covering the whole-body density range, were used instead of a single VSV for water. The VSVs were pre-calculated using GATE Monte Carlo simulation; these were convolved with the time-integrated activity to generate density-specific dose maps. Computed tomography-based segmentation was conducted to generate binary maps for each density region. The final dose map was acquired by the summation of N segmented density-specific dose maps. We tested several sets of VSVs with different densities: N = 1 (single water VSV), 4, 6, 8, 10, and 20. To validate the proposed method, phantom and patient studies were conducted and compared with direct Monte Carlo, which was considered the ground truth. Finally, patient dosimetry (10 subjects) was conducted using the multiple VSV approach and compared with the single VSV and organ-based dosimetry approaches. Errors at the voxel and organ levels were reported for eight organs. Results: In the phantom and patient studies, the multiple VSV approach showed significant improvements in voxel-level errors, especially for the lung and bone regions. As N increased, voxel-level errors decreased, although some overestimations were observed at lung boundaries. In the case of multiple VSVs (N = 8), we achieved voxel-level errors of 2.06%. In the dosimetry study, our proposed method showed much improved results compared to the single VSV and
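    The summation step described above — convolve the activity with each density-specific kernel, mask by density bin, and sum the N partial maps — can be sketched as follows. This is a schematic 2-D NumPy illustration with made-up kernels and bins, not the authors' GATE-derived VSVs:

```python
import numpy as np

def convolve_same(x, k):
    """'Same'-size correlation of map x with kernel k via zero padding
    (identical to convolution for the symmetric kernels used here)."""
    kx, ky = k.shape
    px, py = kx // 2, ky // 2
    xp = np.pad(x, ((px, px), (py, py)))
    out = np.zeros_like(x, dtype=float)
    for i in range(kx):
        for j in range(ky):
            out += k[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def multiple_vsv_dose(activity, density, vsv_kernels, density_bins):
    """Multiple-VSV dose: convolve the time-integrated activity with each
    density-specific VSV kernel, keep each result only where the CT density
    falls in that kernel's bin (binary map), and sum the N partial maps."""
    dose = np.zeros_like(activity, dtype=float)
    for kernel, (lo, hi) in zip(vsv_kernels, density_bins):
        mask = (density >= lo) & (density < hi)
        dose += mask * convolve_same(activity, kernel)
    return dose
```

    With N = 1 and a single water kernel this reduces to the conventional single-VSV convolution, which is why the masked sum is a strict generalization of it.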

  14. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach.

    Science.gov (United States)

    Nasir, Muhammad; Attique Khan, Muhammad; Sharif, Muhammad; Lali, Ikram Ullah; Saba, Tanzila; Iqbal, Tassawar

    2018-02-21

    Melanoma is the deadliest type of skin cancer, with the highest mortality rate. However, eradication at an early stage implies a high survival rate, so early diagnosis is essential. Conventional diagnosis methods are costly and cumbersome because they require experienced experts and a highly equipped environment. Recent advancements in computerized solutions for these diagnoses are highly promising, with improved accuracy and efficiency. In this article, we propose a method for the classification of melanoma and benign skin lesions. Our approach integrates preprocessing, lesion segmentation, feature extraction, feature selection, and classification. Preprocessing is executed in the context of hair removal by DullRazor, whereas lesion texture and color information are utilized to enhance the lesion contrast. In lesion segmentation, a hybrid technique has been implemented, and the results are fused using the additive law of probability. A serial-based method is applied subsequently to extract and fuse traits such as color, texture, and HOG (shape). The fused features are then selected by implementing a novel Boltzmann entropy method. Finally, the selected features are classified by a Support Vector Machine. The proposed method is evaluated on the publicly available PH2 data set. Our approach provides promising results of 97.7% sensitivity, 96.7% specificity, 97.5% accuracy, and 97.5% F-score, which are significantly better than the results of existing methods on the same data set. The proposed method detects and classifies melanoma significantly better than existing methods. © 2018 Wiley Periodicals, Inc.
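As an illustration of the pipeline's final stages, the sketch below concatenates per-lesion descriptor vectors ("serial" fusion) and trains an SVM. All arrays are synthetic stand-ins; the paper's DullRazor preprocessing, segmentation fusion, and Boltzmann entropy feature selection are not reproduced here:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-lesion descriptors (not real dermoscopy data)
color = rng.normal(size=(200, 16))      # e.g. color statistics
texture = rng.normal(size=(200, 24))    # e.g. texture features
hog = rng.normal(size=(200, 36))        # e.g. HOG shape descriptors
labels = rng.integers(0, 2, size=200)   # 0 = benign, 1 = melanoma (synthetic)

# "Serial" fusion: concatenate the descriptor vectors for each sample
fused = np.hstack([color, texture, hog])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(round(acc, 3))
```

On these random features the accuracy is near chance; the point is the fuse-then-classify structure, not the score.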

  15. Turbulent structure of three-dimensional flow behind a model car: 1. Exposed to uniform approach flow

    Science.gov (United States)

    Kozaka, Orçun E.; Özkan, Gökhan; Özdemir, Bedii I.

    2004-01-01

    The turbulent structure of the flow behind a model car is investigated with local velocity measurements, with emphasis on large structures and their relevance to aerodynamic forces. Results show that two counter-rotating helical vortices, which are formed within the inner wake region, play a key role in determining the flux of kinetic energy. The turbulence is generated within the outermost shear layers by instabilities, which also seem to be the basic drive for these relatively organized structures. The measured terms of the turbulent kinetic energy production, which are only part of the full expression, indicate that the vortex centres act like manifolds draining the energy in the streamwise direction. As the approach velocity increases, the streamwise convection becomes the dominant means of turbulent transport and, thus, the acquisition of turbulence by the relatively non-turbulent flow around the wake region is suppressed.

  16. A micromechanical approach of suffusion based on a length scale analysis of the grain detachment and grain transport processes.

    Science.gov (United States)

    Wautier, Antoine; Bonelli, Stéphane; Nicot, François

    2017-06-01

    Suffusion is the selective erosion of the finest particles of a soil subjected to an internal flow. Among the four types of internal erosion and piping identified today, suffusion is the least understood. Indeed, there is a lack of micromechanical approaches for identifying the critical microstructural parameters responsible for this process. Based on discrete element modeling of non-cohesive granular assemblies, specific micromechanical tools are developed in a unified framework to account for the first two steps of suffusion, namely the grain detachment and grain transport processes. Using an enhanced force-chain definition and autocorrelation functions, the typical length scales associated with grain detachment are characterized. From the definition of transport paths based on a graph description of the pore space, the typical length scales associated with grain transport are recovered. For a uniform grain size distribution, a separation of scales between these two processes exists for the finest particles of a soil.

  17. Scaling up biomass gasifier use: an application-specific approach

    International Nuclear Information System (INIS)

    Ghosh, Debyani; Sagar, Ambuj D.; Kishore, V.V.N.

    2006-01-01

    Biomass energy accounts for about 11% of the global primary energy supply, and it is estimated that about 2 billion people worldwide depend on biomass for their energy needs. Yet most biomass is used in a primitive and inefficient manner, primarily in developing countries, with a host of adverse implications for human health, the environment, workplace conditions, and social well-being. Therefore, the utilization of biomass in a clean and efficient manner to deliver modern energy services to the world's poor remains an imperative for the development community. One possible approach is the use of biomass gasifiers. Although significant efforts have been directed towards developing and deploying biomass gasifiers in many countries, scaling up their dissemination remains an elusive goal. Based on an examination of biomass gasifier development, demonstration, and deployment efforts in India, a country with more than two decades of experience in biomass gasifier development and dissemination, this article identifies a number of barriers that have hindered widespread deployment of biomass gasifier-based energy systems. It also suggests a possible approach for moving forward, which involves focusing on specific application areas that satisfy a set of criteria critical to the deployment of biomass gasifiers, and then tailoring the scaling-up strategy to the characteristics of the user groups for each application. Our technical, financial, economic and institutional analysis suggests that an initial focus on four categories of applications, namely small and medium enterprises, the informal sector, biomass-processing industries, and some rural areas, may be particularly feasible and fruitful.

  18. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    Full Text Available The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations, regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
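As a hedged sketch of the spectral-gradient idea, the snippet below applies plain Barzilai-Borwein steps to the ordinary least-squares function for a rectangular system. The cited work minimizes a different, non-quadratic convex function and adds a projection step for constrained problems; neither is reproduced here:

```python
import numpy as np

def bb_gradient(A, b, x0, iters=500):
    """Barzilai-Borwein (spectral) gradient steps on f(x) = 0.5*||Ax - b||^2."""
    x = np.asarray(x0, dtype=float)
    g = A.T @ (A @ x - b)                 # gradient of f at x
    alpha = 1.0                           # initial step length
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        denom = s @ y
        alpha = (s @ s) / denom if denom > 0 else 1.0   # BB1 step length
        x, g = x_new, g_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0], [0.0, 1.0]])   # rectangular system
b = A @ np.array([1.0, -2.0])                        # consistent right-hand side
x = bb_gradient(A, b, np.zeros(2))
print(np.round(x, 6))
```

Because the system is consistent, the least-squares minimizer is the exact solution [1, -2]; the BB step lengths are what make the method "spectral" (they lie between the reciprocals of the extreme eigenvalues of A^T A).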

  19. Bridging the PSI Knowledge Gap: A Multi-Scale Approach

    Energy Technology Data Exchange (ETDEWEB)

    Wirth, Brian D. [Univ. of Tennessee, Knoxville, TN (United States)

    2015-01-08

    Plasma-surface interactions (PSI) pose an immense scientific hurdle in magnetic confinement fusion, and our present understanding of PSI in confinement environments is highly inadequate; indeed, a recent Fusion Energy Sciences Advisory Committee report found that four of the top five fusion knowledge gaps were related to PSI. The time is appropriate to develop a concentrated and synergistic science effort that would expand, exploit and integrate the wealth of laboratory ion-beam and plasma research, as well as exciting new computational tools, towards the goal of bridging the PSI knowledge gap. This effort would broadly advance the plasma and material sciences while providing critical knowledge towards progress in fusion PSI. This project involves the development of a Science Center focused on a new approach to PSI science, one that exploits access to state-of-the-art PSI experiments and modeling as well as confinement devices. The organizing principle is to develop synergistic experimental and modeling tools that treat the truly coupled, multi-scale aspect of the PSI issues in confinement devices. This is motivated by the simple observation that while typical lab experiments and models allow independent manipulation of controlling variables, the confinement PSI environment is essentially self-determined, with few outside controls. This means that processes that may be treated independently in laboratory experiments, because they involve vastly different physical and time scales, will now affect one another in the confinement environment. Also, lab experiments cannot simultaneously match all exposure conditions found in confinement devices, typically forcing a linear extrapolation of lab results. At the same time, programmatic limitations prevent confinement experiments alone from answering many key PSI questions. The resolution to this problem is to usefully exploit access to PSI science in lab devices, while retooling our thinking from a linear and de

  20. Performance Analysis of Machine-Learning Approaches for Modeling the Charging/Discharging Profiles of Stationary Battery Systems with Non-Uniform Cell Aging

    Directory of Open Access Journals (Sweden)

    Nandha Kumar Kandasamy

    2017-06-01

    Full Text Available The number of Stationary Battery Systems (SBS connected to various power distribution networks across the world has increased drastically. The increase in the integration of renewable energy sources is one of the major contributors to the increase in the number of SBS. SBS are also used in other applications such as peak load management, load-shifting, voltage regulation and power quality improvement. Accurately modeling the charging/discharging characteristics of such SBS at various instances (charging/discharging profile is vital for many applications. Capacity loss due to the aging of the batteries is an important factor to be considered for estimating the charging/discharging profile of SBS more accurately. Empirical modeling is a common approach used in the literature for estimating capacity loss, which is further used for estimating the charging/discharging profiles of SBS. However, in the case of SBS used for renewable integration and other grid related applications, machine-learning (ML based models provide extreme flexibility and require minimal resources for implementation. The models can even leverage existing smart meter data to estimate the charging/discharging profile of SBS. In this paper, an analysis on the performance of different ML approaches that can be applied for lithium iron phosphate battery systems and vanadium redox flow battery systems used as SBS is presented for the scenarios where the aging of individual cells is non-uniform.
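As a toy illustration of an ML-based charging/discharging-profile estimator, the sketch below fits a regressor to synthetic battery telemetry, using cycle count as an aging proxy. The data-generating model, feature set, and model choice are invented for illustration and are not taken from the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic telemetry: [state of charge, current (C-rate), cycle count]
n = 2000
soc = rng.uniform(0.05, 0.95, n)
current = rng.uniform(-1.0, 1.0, n)     # + charging, - discharging
cycles = rng.uniform(0, 3000, n)        # aging proxy, non-uniform across cells
# Toy terminal-voltage model: OCV slope + resistive drop + fade term + noise
voltage = (3.2 + 0.8 * soc + 0.05 * current
           - 5e-5 * cycles * soc + rng.normal(0, 0.01, n))

X = np.column_stack([soc, current, cycles])
X_tr, X_te, y_tr, y_te = train_test_split(X, voltage, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
print(round(r2, 3))
```

The same shape of pipeline would apply to smart-meter data: features observable at the meter, a target drawn from the measured charging/discharging profile, and an aging covariate per cell or string.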

  1. Biodiversity conservation in agriculture requires a multi-scale approach.

    Science.gov (United States)

    Gonthier, David J; Ennis, Katherine K; Farinas, Serge; Hsieh, Hsun-Yi; Iverson, Aaron L; Batáry, Péter; Rudolphi, Jörgen; Tscharntke, Teja; Cardinale, Bradley J; Perfecto, Ivette

    2014-09-22

    Biodiversity loss--one of the most prominent forms of modern environmental change--has been heavily driven by terrestrial habitat loss and, in particular, the spread and intensification of agriculture. Expanding agricultural land-use has led to the search for strong conservation strategies, with some suggesting that biodiversity conservation in agriculture is best maximized by reducing local management intensity, such as fertilizer and pesticide application. Others highlight the importance of landscape-level approaches that incorporate natural or semi-natural areas in landscapes surrounding farms. Here, we show that both of these practices are valuable to the conservation of biodiversity, and that either local or landscape factors can be most crucial to conservation planning depending on which types of organisms one wishes to save. We performed a quantitative review of 266 observations taken from 31 studies that compared the impacts of localized (within farm) management strategies and landscape complexity (around farms) on the richness and abundance of plant, invertebrate and vertebrate species in agro-ecosystems. While both factors significantly impacted species richness, the richness of sessile plants increased with less-intensive local management, but did not significantly respond to landscape complexity. By contrast, the richness of mobile vertebrates increased with landscape complexity, but did not significantly increase with less-intensive local management. Invertebrate richness and abundance responded to both factors. Our analyses point to clear differences in how various groups of organisms respond to differing scales of management, and suggest that preservation of multiple taxonomic groups will require multiple scales of conservation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  2. Facile approach to synthesize uniform Au@mesoporous SnO{sub 2} yolk–shell nanoparticles and their excellent catalytic activity in 4-nitrophenol reduction

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Ya [Changchun University of Science and Technology, School of Chemistry & Environmental Engineering (China); Li, Lu; Wang, Chungang, E-mail: wangcg925@nenu.edu.cn [Northeast Normal University, Faculty of Chemistry (China); Wang, Tingting, E-mail: wangtt@cust.edu.cn [Changchun University of Science and Technology, School of Chemistry & Environmental Engineering (China)

    2016-01-15

    Monodispersed and uniform Au@mesoporous SnO{sub 2} yolk–shell nanoparticles (Au@mSnO{sub 2} yolk–shell NPs), composed of movable Au NP cores and mSnO{sub 2} shells, have been successfully fabricated via a facile and reproducible approach. The outer mSnO{sub 2} shells not only prevent the Au NPs from aggregation and corrosion by the reaction solution but also allow the Au NPs to contact reactant molecules easily through the mesoporous channels. The obtained Au@mSnO{sub 2} yolk–shell NPs are characterized by transmission electron microscopy, scanning electron microscopy, X-ray powder diffraction, X-ray photoelectron spectroscopy, and UV–vis absorption spectroscopy. The synthesized materials exhibit excellent catalytic performance and high stability towards the reduction of 4-nitrophenol with NaBH{sub 4} as a reducing agent, which may be ascribed to their high specific surface area and unique mesoporous structure. Moreover, the synthetic strategy reported in this paper can be extended to fabricate a series of multifunctional noble metal@metal oxide yolk–shell nanocomposite materials with unique properties for various applications.

  3. A Novel Approach of Using Ground CNTs as the Carbon Source to Fabricate Uniformly Distributed Nano-Sized TiCx/2009Al Composites.

    Science.gov (United States)

    Wang, Lei; Qiu, Feng; Ouyang, Licheng; Wang, Huiyuan; Zha, Min; Shu, Shili; Zhao, Qinglong; Jiang, Qichuan

    2015-12-17

    Nano-sized TiCx/2009Al composites (with 5, 7, and 9 vol% TiCx) were fabricated via combustion synthesis of the 2009Al-Ti-CNTs system combined with vacuum hot pressing followed by hot extrusion. In the present study, CNTs were used as the carbon source to synthesize nano-sized TiCx particles. An attempt was made to correlate the effect of grinding CNTs by milling with the distribution of the synthesized nano-sized TiCx particles in 2009Al, as well as with the tensile properties of the nano-sized TiCx/2009Al composites. Microstructure analysis showed that when ground CNTs were used, the synthesized nano-sized TiCx particles dispersed more uniformly in the 2009Al matrix. Moreover, when 2 h-milled CNTs were used, the 5, 7, and 9 vol% nano-sized TiCx/2009Al composites had the highest tensile properties, especially the 9 vol% composites. The results offer a new approach to improve the distribution of in situ nano-sized TiCx particles and the tensile properties of the composites.

  4. Noise pollution mapping approach and accuracy on landscape scales.

    Science.gov (United States)

    Iglesias Merchan, Carlos; Diaz-Balteiro, Luis

    2013-04-01

    Noise mapping allows the characterization of environmental variables, such as noise pollution or soundscape, depending on the task. Strategic noise mapping (as per Directive 2002/49/EC, 2002) is a tool intended for the assessment of noise pollution at the European level every five years. These maps are based on common methods and procedures intended for human exposure assessment in the European Union that could also be adapted for assessing environmental noise pollution in natural parks. However, given the size of such areas, there could be an alternative approach to soundscape characterization rather than using human noise exposure procedures. It is possible to optimize the size of the mapping grid used for such work by taking into account the attributes of the area to be studied and the desired outcome. This would then optimize the mapping time and cost. This type of optimization is important in noise assessment as well as in the study of other environmental variables. This study compares 15 models, using different grid sizes, to assess the accuracy of the noise mapping of road traffic noise at a landscape scale, with respect to noise and landscape indicators. In a study area located in the Manzanares High River Basin Regional Park in Spain, different accuracy levels (Kappa index values from 0.725 to 0.987) were obtained depending on the terrain and noise source properties. The time taken for the calculations and the noise mapping accuracy results reveal the potential for setting the map resolution in line with decision-makers' criteria and budget considerations. Copyright © 2013 Elsevier B.V. All rights reserved.
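The grid-agreement assessment reported above relies on the Kappa index. A minimal example of computing Cohen's kappa between a reference classification and a coarser-grid one follows; the noise classes and cell values are invented for illustration:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical noise classes (e.g. <45, 45-55, >55 dB) assigned to the same
# cells by a fine-grid reference map and by a coarser-grid model
reference = np.array([0, 0, 1, 1, 2, 2, 1, 0, 2, 1, 0, 2])
coarse = np.array([0, 0, 1, 2, 2, 2, 1, 0, 2, 1, 1, 2])

kappa = cohen_kappa_score(reference, coarse)
print(round(kappa, 3))  # agreement corrected for chance
```

Here 10 of 12 cells agree (observed agreement 0.833) against a chance agreement of 1/3, giving kappa = 0.75, the same kind of figure the study reports per grid size.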

  5. Quasi-uniform Space

    Directory of Open Access Journals (Sweden)

    Coghetto Roland

    2016-09-01

    Full Text Available In this article, using mostly Pervin [9], Kunzi [6], [8], [7], Williams [11] and Bourbaki [3] works, we formalize in Mizar [2] the notions of quasiuniform space, semi-uniform space and locally uniform space.

  6. Quasi-uniform Space

    OpenAIRE

    Coghetto Roland

    2016-01-01

    In this article, using mostly Pervin [9], Kunzi [6], [8], [7], Williams [11] and Bourbaki [3] works, we formalize in Mizar [2] the notions of quasiuniform space, semi-uniform space and locally uniform space.

  7. Spacetime transformations from a uniformly accelerated frame

    International Nuclear Information System (INIS)

    Friedman, Yaakov; Scarr, Tzvi

    2013-01-01

    We use the generalized Fermi–Walker transport to construct a one-parameter family of inertial frames which are instantaneously comoving to a uniformly accelerated observer. We explain the connection between our approach and that of Mashhoon. We show that our solutions of uniformly accelerated motion have constant acceleration in the comoving frame. Assuming the weak hypothesis of locality, we obtain local spacetime transformations from a uniformly accelerated frame K′ to an inertial frame K. The spacetime transformations between two uniformly accelerated frames with the same acceleration are Lorentz. We compute the metric at an arbitrary point of a uniformly accelerated frame. (paper)
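For background, a uniformly accelerated observer in special relativity (constant proper acceleration a, the standard textbook case rather than the paper's generalized Fermi–Walker construction) follows the hyperbolic worldline:

```latex
t(\tau) = \frac{c}{a}\,\sinh\!\left(\frac{a\tau}{c}\right), \qquad
x(\tau) = \frac{c^{2}}{a}\left[\cosh\!\left(\frac{a\tau}{c}\right) - 1\right],
```

parameterized by the observer's proper time \tau; the acceleration measured in the instantaneously comoving frame is the constant a, consistent with the abstract's statement about constant acceleration in the comoving frame.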

  8. School Uniforms Redux.

    Science.gov (United States)

    Dowling-Sendor, Benjamin

    2002-01-01

    Reviews a recent decision in "Littlefield" by the 5th Circuit upholding a school uniform policy. Advises board members who wish to adopt a school uniform policy to solicit input from parents and students, research the experiences of other school districts with uniform policies, and articulate the interests they wish to promote through uniform…

  9. Do School Uniforms Fit?

    Science.gov (United States)

    White, Kerry A.

    2000-01-01

    In 1994, Long Beach (California) Unified School District began requiring uniforms in all elementary and middle schools. Now, half of all urban school systems and many suburban schools have uniform policies. Research on uniforms' effectiveness is mixed. Tightened dress codes may be just as effective and less litigious. (MLH)

  10. Mandatory School Uniforms.

    Science.gov (United States)

    Cohn, Carl A.

    1996-01-01

    Shortly after implementing a mandatory school uniform policy, the Long Beach (California) Public Schools can boast 99% compliance and a substantial reduction in school crime. The uniforms can't be confused with gang colors, save parents money, and help identify outsiders. A sidebar lists ingredients for a mandatory uniform policy. (MLH)

  11. On approach to double asymptotic scaling at low x

    International Nuclear Information System (INIS)

    Choudhury, D.K.

    1994-10-01

    We obtain the finite-x corrections to the gluon structure function which exhibits double asymptotic scaling at low x. The technique used is the GLAP equation for the gluon, approximated at low x by a Taylor expansion. (author). 27 refs

  12. Validity of the Neuromuscular Recovery Scale: a measurement model approach.

    Science.gov (United States)

    Velozo, Craig; Moorhouse, Michael; Ardolino, Elizabeth; Lorenz, Doug; Suter, Sarah; Basso, D Michele; Behrman, Andrea L

    2015-08-01

    Objective: To determine how well the Neuromuscular Recovery Scale (NRS) items fit the Rasch, 1-parameter, partial-credit measurement model. Design: Confirmatory factor analysis (CFA) and principal components analysis (PCA) of residuals were used to determine dimensionality. The Rasch, 1-parameter, partial-credit rating scale model was used to determine rating scale structure, person/item fit, point-measure item correlations, item discrimination, and measurement precision. Setting: Seven NeuroRecovery Network clinical sites. Participants: Outpatients (N=188) with spinal cord injury. Interventions: Not applicable. Main outcome measure: NRS. Results: While the NRS met 1 of 3 CFA criteria, the PCA revealed that the Rasch measurement dimension explained 76.9% of the variance. Ten of 11 items and 91% of the patients fit the Rasch model, with 9 of 11 items showing high discrimination. Sixty-nine percent of the ratings met criteria. The items showed a logical item-difficulty order, with Stand retraining as the easiest item and Walking as the most challenging item. The NRS showed no ceiling or floor effects and separated the sample into almost 5 statistically distinct strata; individuals with an American Spinal Injury Association Impairment Scale (AIS) D classification showed the most ability, and those with an AIS A classification showed the least ability. Items not meeting the rating scale criteria appear to be related to the low frequency counts. Conclusions: The NRS met many of the Rasch model criteria for construct validity. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  13. Scaling production and improving efficiency in DEA: an interactive approach

    Science.gov (United States)

    Rödder, Wilhelm; Kleine, Andreas; Dellnitz, Andreas

    2017-10-01

    DEA models help a DMU to detect its (in)efficiency and to improve its activities, if necessary. Efficiency, however, is only one economic aim for a decision-maker; up- or downsizing might be a second one. Improving efficiency is the main topic in DEA; the long-term strategy towards the right production size should attract our attention as well. The management of a DMU does not always focus primarily on technical efficiency but may rather be interested in gaining scale effects. In this paper, a formula for returns to scale (RTS) is developed, and this formula is applicable even for interior points of the technology. Technically and scale-inefficient DMUs in particular need sophisticated instruments to improve their situation. Considering RTS as well as efficiency, we give advice for each DMU to find an economically reliable path from its actual situation to better activities and finally, perhaps, to most productive scale size (mpss). For realizing this path, we propose an interactive algorithm, thus harmonizing the scientific findings and the interests of the management. Small numerical examples illustrate such paths for selected DMUs; an empirical application in theatre management completes the contribution.
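A minimal sketch of the classical input-oriented CCR envelopment model that analyses like this build on; the data are invented, and the paper's RTS formula and interactive algorithm are not reproduced:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency (constant returns to scale) of DMU j0.

    X: inputs (m x n DMUs); Y: outputs (s x n DMUs). Envelopment form:
    minimize theta s.t. X@lam <= theta*x0, Y@lam >= y0, lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lam_1..lam_n]
    A_in = np.hstack([-X[:, [j0]], X])          # X@lam - theta*x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y@lam <= -y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Three DMUs, one input, one output (numbers are invented)
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[1.0, 2.0, 3.0]])
effs = [ccr_efficiency(X, Y, j) for j in range(3)]
print([round(e, 3) for e in effs])
```

With these numbers the first two DMUs lie on the CRS frontier (efficiency 1), while the third could produce its output with 75% of its input; steering such a DMU towards mpss is where RTS information enters.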

  14. Pulsed-coil magnet systems for applying uniform 10-30 T fields to centimeter-scale targets on Sandia's Z facility.

    Science.gov (United States)

    Rovang, D C; Lamppa, D C; Cuneo, M E; Owen, A C; McKenney, J; Johnson, D W; Radovich, S; Kaye, R J; McBride, R D; Alexander, C S; Awe, T J; Slutz, S A; Sefkow, A B; Haill, T A; Jones, P A; Argo, J W; Dalton, D G; Robertson, G K; Waisman, E M; Sinars, D B; Meissner, J; Milhous, M; Nguyen, D N; Mielke, C H

    2014-12-01

    Sandia has successfully integrated the capability to apply uniform, high magnetic fields (10-30 T) to high energy density experiments on the Z facility. This system uses an 8-mF, 15-kV capacitor bank to drive large-bore (5 cm diameter), high-inductance (1-3 mH) multi-turn, multi-layer electromagnets that slowly magnetize the conductive targets used on Z over several milliseconds (time to peak field of 2-7 ms). This system was commissioned in February 2013 and has been used successfully to magnetize more than 30 experiments up to 10 T that have produced exciting and surprising physics results. These experiments used split-magnet topologies to maintain diagnostic lines of sight to the target. We describe the design, integration, and operation of the pulsed coil system into the challenging and harsh environment of the Z Machine. We also describe our plans and designs for achieving fields up to 20 T with a reduced-gap split-magnet configuration, and up to 30 T with a solid magnet configuration in pursuit of the Magnetized Liner Inertial Fusion concept.

  15. Data report: the wake of a horizontal-axis wind turbine model, measurements in uniform approach flow and in a simulated atmospheric boundary layer

    NARCIS (Netherlands)

    Talmon, A.M.

    1985-01-01

    Wake effects will cause power loss when wind turbines are grouped in so-called wind turbine parks. Wind tunnel measurements of the wake of a wind turbine model are conducted in order to refine calculations of wake effects. Wake effects caused by tower and nacelle are studied in uniform flow. Wake

  16. Quantitative approach to small-scale nonequilibrium systems

    DEFF Research Database (Denmark)

    Dreyer, Jakob K; Berg-Sørensen, Kirstine; Oddershede, Lene B

    2006-01-01

    In a nano-scale system out of thermodynamic equilibrium, it is important to account for thermal fluctuations. Typically, the thermal noise contributes fluctuations, e.g., of distances that are substantial in comparison to the size of the system and typical distances measured. If the thermal...... propose an approximate but quantitative way of dealing with such an out-of-equilibrium system. The limits of this approximate description of the escape process are determined through optical tweezers experiments and comparison to simulations. Also, this serves as a recipe for how to use the proposed...

  17. An approach to an acute emotional stress reference scale.

    Science.gov (United States)

    Garzon-Rey, J M; Arza, A; de-la-Camara, C; Lobo, A; Armario, A; Aguilo, J

    2017-06-16

    Clinical diagnosis aims to identify the degree to which a patient's psycho-physical state is affected, as a guide to therapeutic intervention. In stress, the lack of a measurement tool based on a reference makes it difficult to quantitatively assess this degree of affectation. Objective: to define and perform a primary assessment of a standard reference for measuring acute emotional stress from the markers identified as indicators of its degree. Psychometric tests and biochemical variables are, in general, the stress measurements most accepted by the scientific community. Each of them probably responds to different and complementary processes related to the reaction to a stress stimulus. The proposed reference is a weighted mean of these indicators, with relative weights assigned in accordance with a principal components analysis. An experimental study was conducted on 40 healthy young people subjected to the psychosocial stress stimulus of the Trier Social Stress Test in order to perform a primary assessment and consistency check of the proposed reference. The proposed scale clearly differentiates between the induced relaxed and stressed states. Accepting the subjectivity of the definition and the lack of subsequent validation with new experimental data, the proposed standard differentiates between a relaxed state and an emotional stress state triggered by a moderate stress stimulus, as is the Trier Social Stress Test. The scale is robust: variations in the percentage composition slightly affect the score but do not affect the valid differentiation between states.
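The PCA-weighted composite described above can be sketched as follows, under our own illustrative assumption that the weights come from first-principal-component loadings of the standardized indicators; the actual marker set and weighting details are the paper's:

```python
import numpy as np

def pca_weighted_score(indicators):
    """Weighted mean of standardized indicators, weights from PC1 loadings."""
    Z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    w = np.abs(vt[0])      # first-principal-component loadings as weights
    w /= w.sum()           # normalize so the result is a weighted mean
    return Z @ w

rng = np.random.default_rng(0)
# Synthetic stand-ins for, e.g., one psychometric score and two biochemical
# markers measured on 40 subjects; a common latent "stress" drives all three
latent = rng.normal(size=40)
data = np.column_stack([latent + rng.normal(0, 0.3, 40) for _ in range(3)])
scores = pca_weighted_score(data)
print(scores.shape)
```

When the indicators share a common driver, the PC1-weighted composite tracks it closely, which is the rationale for weighting complementary markers this way.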

  18. School Uniforms. Research Brief

    Science.gov (United States)

    Walker, Karen

    2007-01-01

    Does clothing make the person, or does the person make the clothing? How does the attire a student wears to school affect his or her academic achievement? In 1996, President Clinton cited examples of school violence and discipline issues that might have been avoided had the students been wearing uniforms ("School uniforms: Prevention or suppression?").…

  19. Games Uniforms Unveiled

    Institute of Scientific and Technical Information of China (English)

    Linda

    2008-01-01

    The uniforms for the Beijing Olympics' workers, technical staff and volunteers have been unveiled to mark the 200-day countdown to the Games. The uniforms feature the key element of the clouds of promise and will be in three colors: red for Beijing Olympic Games Committee staff, blue

  20. Truncated conformal space approach to scaling Lee-Yang model

    International Nuclear Information System (INIS)

    Yurov, V.P.; Zamolodchikov, Al.B.

    1989-01-01

    A numerical approach to 2D relativistic field theories is suggested. Considering a field theory model as an ultraviolet conformal field theory perturbed by a suitable relevant scalar operator, one studies it in finite volume (on a circle). The perturbed Hamiltonian acts in the conformal field theory space of states, and its matrix elements can be extracted from the conformal field theory. Truncation of the space at a reasonable level results in a finite-dimensional problem for numerical analysis. The nonunitary field theory with the ultraviolet region controlled by the minimal conformal theory μ(2/5) is studied in detail. 9 refs.; 17 figs

  1. A simple approach to uniform PdAg alloy membranes: Comparative study of conventional and silver concentration-controlled co-plating

    KAUST Repository

    Zeng, Gaofeng

    2014-03-01

    An Ag-controlled co-plating method was developed for the preparation of palladium/silver alloy membranes on porous tubular alumina supports. By controlling the feed rate of Ag to the Pd bath, the concentration of the silver in the plating bath was restricted during the course of plating. As a result, preferential deposition of silver at the beginning was suppressed and uniform dispersion of silver inside the membrane with silver composition in the desired range was achieved. Ultrathin (∼2.5 μm) PdAg alloy membranes with uniform silver composition of ∼25% were successfully obtained. The membrane showed a hydrogen permeance of 0.88 mol m-2 s-1 and pure-gas H2/N2 selectivity of 2140 at 823 K with ΔP = 100 kPa. Only one hydride phase existed in the studied temperature range from 373 to 823 K with ΔP_H2 = 100 kPa. Direct comparisons with the conventional simply-mixed co-plating method showed that membranes made by the novel Ag-controlled co-plating method had much more uniform silver distribution, smoother surface, denser membrane structure, higher utilization rate of metal sources, and shorter alloying time. © 2013, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.

  2. A simple approach to uniform PdAg alloy membranes: Comparative study of conventional and silver concentration-controlled co-plating

    KAUST Repository

    Zeng, Gaofeng; Shi, Lei; Liu, Yunyang; Zhang, Yanfeng; Sun, Yuhan

    2014-01-01

    An Ag-controlled co-plating method was developed for the preparation of palladium/silver alloy membranes on porous tubular alumina supports. By controlling the feed rate of Ag to the Pd bath, the concentration of the silver in the plating bath was restricted during the course of plating. As a result, preferential deposition of silver at the beginning was suppressed and uniform dispersion of silver inside the membrane with silver composition in the desired range was achieved. Ultrathin (∼2.5 μm) PdAg alloy membranes with uniform silver composition of ∼25% were successfully obtained. The membrane showed a hydrogen permeance of 0.88 mol m-2 s-1 and pure-gas H2/N2 selectivity of 2140 at 823 K with ΔP = 100 kPa. Only one hydride phase existed in the studied temperature range from 373 to 823 K with ΔPH=100kPa. Direct comparisons with the conventional simply-mixed co-plating method showed that membranes made by the novel Ag-controlled co-plating method had much more uniform silver distribution, smoother surface, denser membrane structure, higher utilization rate of metal sources, and shorter alloying time. © 2013, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.

  3. PATTERN CLASSIFICATION APPROACHES TO MATCHING BUILDING POLYGONS AT MULTIPLE SCALES

    Directory of Open Access Journals (Sweden)

    X. Zhang

    2012-07-01

    Full Text Available Matching of building polygons with different levels of detail is crucial in the maintenance and quality assessment of multi-representation databases. Two general problems need to be addressed in the matching process: (1) Which criteria are suitable? (2) How to effectively combine different criteria to make decisions? This paper mainly focuses on the second issue and views data matching as a supervised pattern classification problem. Several classifiers (i.e. decision trees, Naive Bayes and support vector machines) are evaluated for the matching task. Four criteria (i.e. position, size, shape and orientation) are used to extract information for these classifiers. Evidence shows that these classifiers outperformed the weighted average approach.
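
    A minimal sketch of the classification step, assuming each candidate polygon pair has already been reduced to four similarity scores in [0, 1] for position, size, shape and orientation. The training pairs and the nearest-centroid classifier below are illustrative stand-ins for the decision trees, Naive Bayes and SVMs evaluated in the paper:

```python
import math

# Invented training pairs: four similarity scores per candidate pair,
# labelled match / non-match.
train = [
    ((0.95, 0.90, 0.88, 0.92), "match"),
    ((0.90, 0.85, 0.91, 0.89), "match"),
    ((0.30, 0.40, 0.35, 0.20), "non-match"),
    ((0.25, 0.20, 0.45, 0.30), "non-match"),
]

def centroids(samples):
    # Mean feature vector per class.
    sums, counts = {}, {}
    for x, y in samples:
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: tuple(s / counts[y] for s in acc) for y, acc in sums.items()}

def classify(x, cents):
    # Assign the class whose centroid is nearest in Euclidean distance.
    return min(cents, key=lambda y: math.dist(x, cents[y]))

cents = centroids(train)
label = classify((0.88, 0.80, 0.85, 0.90), cents)  # a new candidate pair
```

Any of the classifiers named in the abstract can be dropped in place of `classify` without changing the surrounding pipeline; that separation is what makes the "which classifier" comparison possible.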

  4. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Science.gov (United States)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP)-model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha-1 yr-1 depending on the spatial location on the hillslope and have only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be either due to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the 137Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha-1 yr-1 by the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha-1 yr-1 are significantly lower than results obtained at hillslope scale, confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model.

  5. OBJECT-ORIENTED CHANGE DETECTION BASED ON MULTI-SCALE APPROACH

    Directory of Open Access Journals (Sweden)

    Y. Jia

    2016-06-01

    Full Text Available The change detection of remote sensing images means analysing the change information quantitatively and recognizing the change types of the surface coverage data in different time phases. With the appearance of high-resolution remote sensing imagery, object-oriented change detection methods have emerged. In this paper, we investigate a multi-scale approach for high resolution images, which includes multi-scale segmentation, multi-scale feature selection and multi-scale classification. Experimental results show that this method has a clear advantage over the traditional single-scale method for change detection in high-resolution remote sensing images.

  6. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...

  7. Approaches to recreational landscape scaling of mountain resorts

    Science.gov (United States)

    Chalaya, Elena; Efimenko, Natalia; Povolotskaia, Nina; Slepih, Vladimir

    2013-04-01

    19 Hz, gamma 19 … 25Hz by 9-17%; the increase in adaptation layer of the organism by 21% and a versatility indicator of health - by 19%; the decrease in systolic (from 145 to 131 mm of mercury) and diastolic (from 96 to 82 mm of mercury) arterial pressure, the increase in indicators of carpal dynamometry (on the right hand from 27 to 36 kg, on the left hand from 25 to 34 kg), the increase in speed of thermogenesis (from 0.0633 to 0.0944 K/s) and quality of neurovascular reactivity (from 48% to 81%). On the whole, the patient's cenesthesia has improved. We have also studied the responses of adaptive reactions with the recipients at other options of RL, and research is still being carried out in this direction. The results will be used as a basis for RL scaling of North Caucasus mountain territories. This problem is interdisciplinary, multidimensional and deals with both medical and geophysical issues. The studies were performed with support of the Program "Basic Sciences for Medicine" and RFBR project No.10-05-01014_a.

  8. Instruction sequence based non-uniform complexity classes

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.

    2013-01-01

    We present an approach to non-uniform complexity in which single-pass instruction sequences play a key part, and answer various questions that arise from this approach. We introduce several kinds of non-uniform complexity classes. One kind includes a counterpart of the well-known non-uniform

  9. National-Scale Hydrologic Classification & Agricultural Decision Support: A Multi-Scale Approach

    Science.gov (United States)

    Coopersmith, E. J.; Minsker, B.; Sivapalan, M.

    2012-12-01

    Classification frameworks can help organize catchments exhibiting similarity in hydrologic and climatic terms. Focusing this assessment of "similarity" upon specific hydrologic signatures, in this case the annual regime curve, can facilitate the prediction of hydrologic responses. Agricultural decision-support over a diverse set of catchments throughout the United States depends upon successful modeling of the wetting/drying process without necessitating separate model calibration at every site where such insights are required. To this end, a holistic classification framework is developed to describe both climatic variability (humid vs. arid, winter rainfall vs. summer rainfall) and the draining, storing, and filtering behavior of any catchment, including ungauged or minimally gauged basins. At the national scale, over 400 catchments from the MOPEX database are analyzed to construct the classification system, with over 77% of these catchments ultimately falling into only six clusters. At individual locations, soil moisture models, receiving only rainfall as input, produce correlation values in excess of 0.9 with respect to observed soil moisture measurements. By deploying physical models for predicting soil moisture exclusively from precipitation that are calibrated at gauged locations, overlaying machine learning techniques to improve these estimates, then generalizing the calibration parameters for catchments in a given class, agronomic decision-support becomes available where it is needed rather than only where sensing data are located. [Figure: Classifications of 428 U.S. catchments on the basis of hydrologic regime data, Coopersmith et al., 2012.]
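
    The precipitation-only soil moisture modeling described above can be caricatured with a single-bucket water balance. The capacity and loss parameters below are illustrative placeholders, not the calibrated values of the study:

```python
# Minimal single-bucket soil-moisture model driven only by daily rainfall.
# Real models of this kind are calibrated per catchment class; the numbers
# here (150 mm capacity, 5%/day combined loss) are invented.
def simulate_moisture(rain_mm, capacity_mm=150.0, loss_rate=0.05, s0=75.0):
    s, series = s0, []
    for p in rain_mm:
        s = s + p                          # infiltration of daily rainfall
        s -= loss_rate * s                 # drainage + evapotranspiration
        s = min(max(s, 0.0), capacity_mm)  # bucket stays within bounds
        series.append(s)
    return series

daily_rain = [0, 12, 0, 0, 30, 5, 0, 0, 0, 18]
trace = simulate_moisture(daily_rain)
```

Calibrating only a handful of such parameters per cluster, instead of per site, is what makes the class-based generalization in the abstract workable.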

  10. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  11. Pellicle transmission uniformity requirements

    Science.gov (United States)

    Brown, Thomas L.; Ito, Kunihiro

    1998-12-01

    Controlling critical dimensions of devices is a constant battle for the photolithography engineer. Current DUV lithographic process exposure latitude is typically 12 to 15% of the total dose. A third of this exposure latitude budget may be used up by a variable related to masking that has not previously received much attention. The emphasis on pellicle transmission has been focused on increasing the average transmission; much less attention has been paid to transmission uniformity. This paper explores the total demand on the photospeed latitude budget and the causes of pellicle transmission nonuniformity, and examines reasonable expectations for pellicle performance. Modeling is used to examine how the two primary errors in pellicle manufacturing contribute to nonuniformity in transmission. World-class pellicle transmission uniformity standards are discussed and a comparison is made with specifications of other components in the photolithographic process. Specifications for other materials or parameters are used as benchmarks to develop a proposed industry standard for pellicle transmission uniformity.
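
    The budget arithmetic behind the opening paragraph is simple: a peak-to-valley transmission nonuniformity translates one-for-one into dose nonuniformity at the wafer, so its share of the exposure-latitude budget is just a ratio. The 4% figure below is an illustrative input, not a measured value:

```python
# Fraction of the process dose-latitude budget consumed by pellicle
# transmission nonuniformity (both quantities as percent of nominal dose).
def latitude_consumed(transmission_pv_pct, exposure_latitude_pct):
    return transmission_pv_pct / exposure_latitude_pct

# A hypothetical 4% peak-to-valley transmission variation set against the
# 12% lower end of the quoted exposure latitude:
fraction = latitude_consumed(4.0, 12.0)
```

This reproduces the "a third of the budget" claim in the abstract for that choice of inputs.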

  12. SEVERE CHRONIC ALLERGIC (AND RELATED) DISEASES: A UNIFORM APPROACH — A MEDALL-GA2LEN-ARIA POSITION PAPER IN COLLABORATION WITH THE WHO COLLABORATING CENTER FOR ASTHMA AND RHINITIS (ENGLISH & RUSSIAN VARIANTS)

    Directory of Open Access Journals (Sweden)

    J. Bousquet

    2011-01-01

    Full Text Available Concepts of disease severity, activity, control and responsiveness to treatment are linked but different. Severity refers to the loss of function of the organs induced by the disease process or to the occurrence of severe acute exacerbations. Severity may vary over time and needs regular follow up. Control is the degree to which therapy goals are currently met. These concepts have evolved over time for asthma in guidelines, task forces or consensus meetings. The aim of this paper is to generalize the approach of the uniform definition of severe asthma presented to WHO for chronic allergic and associated diseases (rhinitis, chronic rhinosinusitis, chronic urticaria, atopic dermatitis) in order to have a uniform definition of severity, control and risk, usable in most situations. It is based on the appropriate diagnosis, availability and accessibility of treatments, treatment responsiveness and associated factors such as co-morbidities and risk factors. This uniform definition will allow a better definition of the phenotypes of severe allergic (and related) diseases for clinical practice, research (including epidemiology), public health purposes, education and the discovery of novel therapies. Key words: IgE, allergy, severity, control, risk, asthma, rhinitis, rhinosinusitis, urticaria, atopic dermatitis.

  13. A new approach to designing reduced scale thermal-hydraulic experiments

    International Nuclear Information System (INIS)

    Lapa, Celso M.F.; Sampaio, Paulo A.B. de; Pereira, Claudio M.N.A.

    2004-01-01

    Reduced scale experiments are often employed in engineering because they are much cheaper than real scale testing. Unfortunately, though, it is difficult to design a thermal-hydraulic circuit or equipment in reduced scale capable of reproducing, both accurately and simultaneously, all the physical phenomena that occur at real scale and operating conditions. This paper presents a methodology for designing thermal-hydraulic experiments in reduced scale based on setting up a constrained optimization problem that is solved using genetic algorithms (GAs). In order to demonstrate the application of the proposed methodology, we performed some investigations in the design of a heater aimed at simulating the transport of heat and momentum in the core of a pressurized water reactor (PWR) at 100% of nominal power and non-accident operating conditions. The results obtained show that the proposed methodology is a promising approach for designing reduced scale experiments.
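
    A toy version of the GA-based design search, assuming a two-parameter design (channel diameter and velocity) and two invented dimensionless similarity targets in place of the paper's real thermal-hydraulic constraints:

```python
import random

random.seed(1)  # reproducible toy run

# The design vector is (channel diameter d [m], velocity v [m/s]); the two
# target groups below are invented stand-ins for the similarity criteria
# that a real reduced-scale design must match.
TARGET_RE, TARGET_Q = 1.0e5, 25.0

def fitness(design):
    d, v = design
    re = v * d / 1.0e-6       # Reynolds-like group, water-like viscosity
    q = v / (10.0 * d)        # second, invented similarity group
    return abs(re - TARGET_RE) / TARGET_RE + abs(q - TARGET_Q) / TARGET_Q

def evolve(pop_size=30, gens=60):
    pop = [(random.uniform(0.005, 0.05), random.uniform(0.5, 10.0))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]       # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)  # blend crossover + mutation
            children.append(tuple((x + y) / 2 * random.uniform(0.9, 1.1)
                                  for x, y in zip(a, b)))
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

The real problem adds constraints (materials, pumping power, instrumentation limits), which enter the GA either as penalty terms in `fitness` or as bounds on the design vector.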

  14. Uniform random number generators

    Science.gov (United States)

    Farr, W. R.

    1971-01-01

    Methods are presented for the generation of random numbers with uniform and normal distributions. Subprogram listings of Fortran generators for the Univac 1108, SDS 930, and CDC 3200 digital computers are also included. The generators are of the mixed multiplicative type, and the mathematical method employed is that of Marsaglia and Bray.
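
    A sketch of the generator family described: a mixed (multiplicative plus additive) congruential generator for uniforms, combined with the Marsaglia-Bray polar method for normals. The modulus and multiplier below are the widely used Numerical Recipes constants, not the ones from the 1971 subprograms:

```python
import math

# Mixed linear congruential generator: state' = (A*state + C) mod M.
M, A, C = 2**32, 1664525, 1013904223

def make_uniform(seed):
    state = seed % M
    def rand():
        nonlocal state
        state = (A * state + C) % M   # mixed congruential step
        return state / M              # uniform variate in [0, 1)
    return rand

def make_normal(rand):
    def normal():
        # Marsaglia-Bray polar method: rejection-sample a point in the
        # unit disk, then transform to a standard normal deviate.
        while True:
            u = 2.0 * rand() - 1.0
            v = 2.0 * rand() - 1.0
            s = u * u + v * v
            if 0.0 < s < 1.0:
                return u * math.sqrt(-2.0 * math.log(s) / s)
    return normal

rand = make_uniform(12345)
normal = make_normal(rand)
uniforms = [rand() for _ in range(1000)]
```

Each accepted disk point actually yields two independent normals (from `u` and `v`); the sketch discards the second for brevity, where a production generator would cache it.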

  15. Restricting uniformly open surjections

    Czech Academy of Sciences Publication Activity Database

    Kania, Tomasz; Rmoutil, M.

    2017-01-01

    Roč. 355, č. 9 (2017), s. 925-928 ISSN 1631-073X Institutional support: RVO:67985840 Keywords : Banach space * uniform spaces Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.396, year: 2016 http://www.sciencedirect.com/science/article/pii/S1631073X17302261?via%3Dihub

  16. Uniformly irradiated polymer film

    International Nuclear Information System (INIS)

    Fowler, S.L.

    1979-01-01

    Irradiated film having substantial uniformity in the radiation dosage profile is produced by irradiating the film within a trough having lateral deflection blocks disposed adjacent the film edges for deflecting electrons toward the surface of the trough bottom for further deflecting the electrons toward the film edge.

  17. Synthesis of highly uniform Cu2O spheres by a two-step approach and their assembly to form photonic crystals with a brilliant color.

    Science.gov (United States)

    Su, Xin; Chang, Jie; Wu, Suli; Tang, Bingtao; Zhang, Shufen

    2016-03-21

    Monodisperse semiconductor colloidal spheres with a high refractive index hold great potential for building photonic crystals with a strong band gap, but the difficulty in separating the nucleation and growth processes makes it challenging to prepare highly uniform semiconductor colloidal spheres. Herein, truly monodisperse Cu2O spheres were prepared via a hot-injection & heating-up two-step method using diethylene glycol as a milder reducing agent. The diameter of the as-prepared Cu2O spheres can be tuned precisely from 90 nm to 190 nm. The SEM images reveal that the obtained Cu2O spheres have a narrow size distribution, which permits their self-assembly to form photonic crystals. The effects of precursor concentration and heating rates on the size and morphology of the Cu2O spheres were investigated in detail. The results indicate that the key points of the method include burst nucleation to form seeds at a high temperature followed by rapid cooling to prevent agglomeration, and an appropriate precursor concentration as well as a moderate growth rate during the further growth process. Importantly, photonic crystal films exhibiting a brilliant structural color were fabricated with the obtained monodisperse Cu2O spheres as building blocks, proving the possibility of making photonic crystals with a strong band gap. The developed method was also successfully applied to prepare monodisperse CdS spheres with diameters in the range from 110 nm to 210 nm.
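
    For context on the structural color, the reflection peak of an opal-like assembly of spheres can be estimated from the Bragg-Snell relation. The refractive indices, the fcc filling fraction of 0.74, and the example diameter are textbook-style assumptions rather than values reported in the paper:

```python
import math

# Bragg-Snell estimate of the reflection peak of a close-packed (fcc)
# colloidal crystal viewed along (111). All material parameters here are
# generic assumptions (n_sphere ~ 2.7 for a cuprous-oxide-like material).
def bragg_peak_nm(diameter_nm, n_sphere=2.7, n_medium=1.0, theta_deg=0.0):
    d111 = 0.816 * diameter_nm                 # (111) plane spacing for fcc
    f = 0.74                                   # close-packed filling fraction
    n_eff_sq = f * n_sphere**2 + (1 - f) * n_medium**2
    sin2 = math.sin(math.radians(theta_deg)) ** 2
    return 2.0 * d111 * math.sqrt(n_eff_sq - sin2)

peak = bragg_peak_nm(150)   # a mid-range sphere diameter from the record
```

Because the peak scales linearly with sphere diameter, tuning the diameter across the 90-190 nm range sweeps the reflected color through the visible, consistent with the "brilliant structural color" claim.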

  18. Heat and mass transfer intensification and shape optimization a multi-scale approach

    CERN Document Server

    2013-01-01

    Is the heat and mass transfer intensification defined as a new paradigm of process engineering, or is it just a common and old idea, renamed and given the current taste? Where might intensification occur? How to achieve intensification? How does the shape optimization of thermal and fluidic devices lead to intensified heat and mass transfers? To answer these questions, Heat & Mass Transfer Intensification and Shape Optimization: A Multi-scale Approach clarifies the definition of intensification by highlighting the potential role of the multi-scale structures, the specific interfacial area, the distribution of driving force, the modes of energy supply and the temporal aspects of processes. A reflection on the methods of process intensification or heat and mass transfer enhancement in multi-scale structures is provided, including porous media, heat exchangers, fluid distributors, mixers and reactors. A multi-scale approach to achieve intensification and shape optimization is developed and clearly expla...

  19. A Dynamical System Approach Explaining the Process of Development by Introducing Different Time-scales.

    Science.gov (United States)

    Hashemi Kamangar, Somayeh Sadat; Moradimanesh, Zahra; Mokhtari, Setareh; Bakouie, Fatemeh

    2018-06-11

    A developmental process can be described as changes through time within a complex dynamic system. The self-organized changes and emergent behaviour during development can be described and modeled as a dynamical system. We propose a dynamical system approach to answer the main question in human cognitive development, i.e. whether the changes during development happen continuously or in discontinuous stages. Within this approach there is a concept, the size of time-scales, which can be used to address the aforementioned question. We introduce a framework, based on the concept of time-scale, in which "fast" and "slow" are defined by the size of time-scales. According to our suggested model, the overall pattern of development can be seen as one continuous function, with different time-scales in different time intervals.
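
    The fast/slow distinction can be made concrete with two variables relaxing toward the same target on time-scales an order of magnitude apart (simple Euler integration; all constants are illustrative, not taken from the paper's model):

```python
# Two first-order processes driven by the same input, differing only in
# their time constants. Over a fixed observation window the fast variable
# looks like a completed "stage" while the slow one barely moves.
def simulate(tau_fast=1.0, tau_slow=10.0, target=1.0, dt=0.01, steps=200):
    x_fast = x_slow = 0.0
    for _ in range(steps):
        x_fast += dt * (target - x_fast) / tau_fast   # fast relaxation
        x_slow += dt * (target - x_slow) / tau_slow   # slow relaxation
    return x_fast, x_slow

fast, slow = simulate()
```

Whether change looks "continuous" or "stage-like" then depends on how the observation window compares with each time constant, which is exactly the framing proposed in the abstract.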

  20. Approaches to large scale unsaturated flow in heterogeneous, stratified, and fractured geologic media

    International Nuclear Information System (INIS)

    Ababou, R.

    1991-08-01

    This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in radioactive waste geologic repository. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies toward large-scale flow simulations are assessed, including direct high-resolution simulation, and coarse-scale simulation based on auxiliary hydrodynamic models such as single equivalent continuum and dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs

  1. Uniform color space is not homogeneous

    Science.gov (United States)

    Kuehni, Rolf G.

    2002-06-01

    Historical data of chroma scaling and hue scaling are compared, and evidence is shown that we do not have a reliable basis in either case. Several data sets indicate explicitly or implicitly that the number of constant-sized hue differences between unique hues, as well as within the quadrants of the a*, b* diagram, differs, making what is commonly regarded as uniform color space inhomogeneous. This problem is also shown to affect the OSA-UCS space. A Euclidean uniform psychological or psychophysical color space appears to be impossible.

  2. The Multi-Scale Model Approach to Thermohydrology at Yucca Mountain

    International Nuclear Information System (INIS)

    Glascoe, L; Buscheck, T A; Gansemer, J; Sun, Y

    2002-01-01

    The Multi-Scale Thermo-Hydrologic (MSTH) process model is a modeling abstraction of the thermal hydrology (TH) of the potential Yucca Mountain repository at multiple spatial scales. The MSTH model as described herein was used for the Supplemental Science and Performance Analyses (BSC, 2001) and is documented in detail in CRWMS M and O (2000) and Glascoe et al. (2002). The model has been validated against a nested grid model in Buscheck et al. (In Review). The MSTH approach is necessary for modeling thermal hydrology at Yucca Mountain for two reasons: (1) varying levels of detail are necessary at different spatial scales to capture important TH processes and (2) a fully-coupled TH model of the repository which includes the necessary spatial detail is computationally prohibitive. The MSTH model consists of six "submodels" which are combined in a manner to reduce the complexity of modeling where appropriate. The coupling of these models allows for appropriate consideration of mountain-scale thermal hydrology along with the thermal hydrology of drift-scale discrete waste packages of varying heat load. Two stages are involved in the MSTH approach: first, the execution of submodels, and second, the assembly of submodels using the Multi-scale Thermohydrology Abstraction Code (MSTHAC). MSTHAC assembles the submodels in a five-step process culminating in the TH model output of discrete waste packages including a mountain-scale influence.

  3. A novel approach to the automatic control of scale model airplanes

    OpenAIRE

    Hua , Minh-Duc; Pucci , Daniele; Hamel , Tarek; Morin , Pascal; Samson , Claude

    2014-01-01

    This paper explores a new approach to the control of scale model airplanes as an extension of previous studies addressing the case of vehicles presenting a symmetry of revolution about the thrust axis. The approach is intrinsically nonlinear and, with respect to other contributions on aircraft nonlinear control, no small attack angle assumption is made in order to enlarge the controller's operating domain. Simulation results conducted on a simplified, but not overly ...

  4. A large-scale multi-objective flights conflict avoidance approach supporting 4D trajectory operation

    OpenAIRE

    Guan, Xiangmin; Zhang, Xuejun; Lv, Renli; Chen, Jun; Weiszer, Michal

    2017-01-01

    Recently, long-term conflict avoidance approaches based on large-scale flight scheduling have attracted much attention due to their ability to provide solutions from a global point of view. However, the current approaches, which focus only on a single objective with the aim of minimizing the total delay and the number of conflicts, cannot provide controllers with a variety of optional solutions representing different trade-offs. Furthermore, the flight track error is often overlooked i...

  5. Women in service uniforms

    OpenAIRE

    Hanna Karaszewska; Maciej Muskała

    2012-01-01

    The article discusses the problems of women who work in the uniformed services, with particular emphasis on the occupation of prison officer. It presents the legal issues relating to equal treatment of men and women in the workplace, formal factors influencing their employment, the status of women in the prison service, and the problems of carrying out their professional role. The article also presents the results of research conducted in Poland and all over the world, on th...

  6. Highly Uniform Atomic Layer-Deposited MoS2@3D-Ni-Foam: A Novel Approach To Prepare an Electrode for Supercapacitors.

    Science.gov (United States)

    Nandi, Dip K; Sahoo, Sumanta; Sinha, Soumyadeep; Yeo, Seungmin; Kim, Hyungjun; Bulakhe, Ravindra N; Heo, Jaeyeong; Shim, Jae-Jin; Kim, Soo-Hyun

    2017-11-22

    This article seeks to establish the potential of the atomic layer deposition (ALD) technique in the field of supercapacitors by preparing molybdenum disulfide (MoS2) as an electrode. While molybdenum hexacarbonyl [Mo(CO)6] serves as a novel precursor for the low-temperature synthesis of ALD-grown MoS2, H2S plasma helps to deposit its polycrystalline phase at 200 °C. Several ex situ characterizations, such as X-ray diffractometry (XRD), Raman spectroscopy, and X-ray photoelectron spectroscopy (XPS), are performed in detail to study the as-grown MoS2 film on a Si/SiO2 substrate. While stoichiometric MoS2 with a very negligible amount of C and O impurities was evident from XPS, the XRD and high-resolution transmission electron microscopy analyses confirmed the (002)-oriented polycrystalline h-MoS2 phase of the as-grown film. A comparative study of ALD-grown MoS2 as a supercapacitor electrode on 2-dimensional stainless steel and on 3-dimensional (3D) Ni-foam substrates clearly reflects the advantage and potential of ALD for growing a uniform and conformal electrode material on a 3D scaffold. Cyclic voltammetry measurements showed both double-layer capacitance and capacitance contributed by the faradaic reaction at the MoS2 electrode surface. The optimum number of ALD cycles for achieving maximum capacitance with such a MoS2@3D-Ni-foam electrode was also determined. A record high areal capacitance of 3400 mF/cm2 was achieved for MoS2@3D-Ni-foam grown by 400 ALD cycles at a current density of 3 mA/cm2. Moreover, the ALD-grown MoS2@3D-Ni-foam composite retains high areal capacitance even up to a high current density of 50 mA/cm2. Finally, this MoS2 electrode grown directly on 3D Ni-foam by ALD shows high cyclic stability (>80%) over 4500 charge-discharge cycles, which should encourage the research community to further explore the potential of ALD for such applications.
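
    As a consistency check on the quoted figures, areal capacitance from a galvanostatic discharge follows from C = j·Δt/ΔV per unit electrode area. The discharge time and voltage window below are invented to reproduce the order of magnitude, not taken from the paper:

```python
# Areal capacitance from a constant-current discharge:
#   C [mF/cm2] = j [mA/cm2] * dt [s] / dV [V]   (mA*s/V = mC/V = mF)
def areal_capacitance_mF_cm2(j_mA_cm2, t_discharge_s, v_window_V):
    return j_mA_cm2 * t_discharge_s / v_window_V

# Hypothetical discharge: 3 mA/cm2 sustained for 900 s over a 0.8 V window
# gives ~3400 mF/cm2, i.e. the order of magnitude reported above.
c = areal_capacitance_mF_cm2(3.0, 900.0, 0.8)
```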

  7. A uniform procedure for reimbursing the off-label use of antineoplastic drugs according to the value-for-money approach.

    Science.gov (United States)

    Messori, A; Fadda, V; Trippoli, S

    2011-04-01

    National healthcare systems as well as local institutions generally reimburse numerous off-label uses of anticancer drugs, but an explicit framework for managing these payments is still lacking. As in the case of on-label uses, an optimal management of off-label uses should aim at a direct proportionality between cost and clinical benefit. Within this framework, assessing the incremental cost/effectiveness ratio becomes mandatory, and measuring the magnitude of the clinical benefit (e.g. gain in overall survival or progression-free survival) is essential. This paper discusses how the standard principles of cost-effectiveness and value-for-money can be applied to manage the reimbursement of off-label treatments in oncology. It also describes a detailed operational scheme to appropriately implement this aim. Two separate approaches are considered: a) a trial-based approach, which is designed for situations where enough information is available from clinical studies about the expected effectiveness of the off-label treatment; b) an individualized payment-by-results approach, which is designed for situations in which adequate information on effectiveness is lacking; this latter approach requires that each patient receiving off-label treatment is followed up to determine individual outcomes and tailor the extent of payment to individual results. Some examples of the application of both approaches are presented in detail, extracted from a list of 184 off-label indications approved in 2010 by the Region of Tuscany in Italy. These examples support the feasibility of the two methods proposed. In conclusion, the scheme described in this paper represents an operational solution to an unsettled problem in the area of oncology drugs. © E.S.I.F.T. srl - Firenze
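
    The individualized payment-by-results idea can be sketched as a rule tying the reimbursement fraction to the share of the expected benefit actually observed in that patient. The linear rule and the cap below are assumptions for illustration, not the scheme adopted in Tuscany:

```python
# Hypothetical payment-by-results rule: the payer reimburses in proportion
# to the observed fraction of the expected clinical benefit (e.g. months of
# progression-free survival), capped at full price.
def reimbursement_fraction(observed_benefit_months, expected_benefit_months):
    if expected_benefit_months <= 0:
        raise ValueError("expected benefit must be positive")
    return min(observed_benefit_months / expected_benefit_months, 1.0)

# A patient achieving 3 months of benefit where 6 were expected:
fraction = reimbursement_fraction(3.0, 6.0)
```

The per-patient follow-up required by the abstract is what supplies `observed_benefit_months`; the trial-based approach instead fixes the payment up front from published effectiveness data.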

  8. Detecting Uniform Areas for Vicarious Calibration using Landsat TM Imagery: A Study using the Arabian and Saharan Deserts

    Science.gov (United States)

    Hilbert, Kent; Pagnutti, Mary; Ryan, Robert; Zanoni, Vicki

    2002-01-01

    This paper discusses a method for detecting the spatially uniform sites needed for radiometric characterization of remote sensing satellites. Such information is critical for scientific research applications of imagery having moderate to high resolutions. The African Saharan and Arabian deserts contained extremely uniform sites with respect to spatial characteristics. We developed an algorithm for detecting site uniformity and applied it to orthorectified Landsat Thematic Mapper (TM) imagery over eight uniform regions of interest. The algorithm's results were assessed using both medium-resolution (30-m GSD) Landsat 7 ETM+ and fine-resolution imagery. This research shows that Landsat TM products appear highly useful for detecting potential calibration sites for system characterization. In particular, the approach detected spatially uniform regions that frequently occur at multiple scales of observation.
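
    A minimal version of such a uniformity test flags an image window as uniform when its coefficient of variation (standard deviation over mean) falls below a threshold. The 3% threshold and the toy windows are assumptions for illustration, not the paper's actual algorithm:

```python
# Sliding-window uniformity test on digital-number values: a window is
# "uniform" when std/mean is below a chosen threshold.
def coeff_variation(window):
    vals = [v for row in window for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return (var ** 0.5) / mean

def is_uniform(window, threshold=0.03):
    return coeff_variation(window) < threshold

# Toy 3x3 windows standing in for desert and mixed-terrain scenes.
desert_like = [[100, 101, 99], [100, 100, 102], [99, 101, 100]]
mixed_scene = [[100, 180, 40], [90, 160, 55], [120, 30, 170]]
```

Running the same test with windows of different sizes is one simple way to check that a candidate site stays uniform "at multiple scales of observation."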

  9. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noë l; Lastovetsky, Alexey

    2014-01-01

    -scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel

  10. A multiscale analytical approach for bone remodeling simulations : linking scales from collagen to trabeculae

    NARCIS (Netherlands)

    Colloca, M.; Blanchard, R.; Hellmich, C.; Ito, K.; Rietbergen, van B.

    2014-01-01

    Bone is a dynamic and hierarchical porous material whose spatial and temporal mechanical properties can vary considerably due to differences in its microstructure and due to remodeling. Hence, a multiscale analytical approach, which combines bone structural information at multiple scales to the...

  11. How efficient is sliding-scale insulin therapy? Problems with a 'cookbook' approach in hospitalized patients.

    Science.gov (United States)

    Katz, C M

    1991-04-01

    Sliding-scale insulin therapy is seldom the best way to treat hospitalized diabetic patients. In the few clinical situations in which it is appropriate, close attention to detail and solidly based scientific principles are absolutely necessary. Well-organized alternative approaches to insulin therapy usually offer greater efficiency and effectiveness.

  12. Biocultural approaches to well-being and sustainability indicators across scales

    Science.gov (United States)

    Eleanor J. Sterling; Christopher Filardi; Anne Toomey; Amanda Sigouin; Erin Betley; Nadav Gazit; Jennifer Newell; Simon Albert; Diana Alvira; Nadia Bergamini; Mary Blair; David Boseto; Kate Burrows; Nora Bynum; Sophie Caillon; Jennifer E. Caselle; Joachim Claudet; Georgina Cullman; Rachel Dacks; Pablo B. Eyzaguirre; Steven Gray; James Herrera; Peter Kenilorea; Kealohanuiopuna Kinney; Natalie Kurashima; Suzanne Macey; Cynthia Malone; Senoveva Mauli; Joe McCarter; Heather McMillen; Pua’ala Pascua; Patrick Pikacha; Ana L. Porzecanski; Pascale de Robert; Matthieu Salpeteur; Myknee Sirikolo; Mark H. Stege; Kristina Stege; Tamara Ticktin; Ron Vave; Alaka Wali; Paige West; Kawika B. Winter; Stacy D. Jupiter

    2017-01-01

    Monitoring and evaluation are central to ensuring that innovative, multi-scale, and interdisciplinary approaches to sustainability are effective. The development of relevant indicators for local sustainable management outcomes, and the ability to link these to broader national and international policy targets, are key challenges for resource managers, policymakers, and...

  13. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    © 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy and hence more parallelism in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
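    The hierarchical idea can be sketched, under strong simplification, as a two-level split of the shared k-dimension of C = A·B. The NumPy toy below only emulates the partitioning arithmetic, not the actual MPI communication of Hierarchical SUMMA; group counts and panel counts are arbitrary example values:

```python
import numpy as np

def hierarchical_matmul(A, B, groups=2, blocks_per_group=2):
    """Toy two-level SUMMA-style multiply: the shared k-dimension is split
    among process groups (outer level), and each group iterates over its
    own panels (inner level). One outer accumulation per group stands in
    for the reduced number of inter-group communication steps."""
    k = A.shape[1]
    C = np.zeros((A.shape[0], B.shape[1]))
    for outer_range in np.array_split(np.arange(k), groups):
        partial = np.zeros_like(C)                            # group-local accumulator
        for inner_range in np.array_split(outer_range, blocks_per_group):
            partial += A[:, inner_range] @ B[inner_range, :]  # panel broadcast step
        C += partial                                          # one inter-group reduction
    return C

rng = np.random.default_rng(42)
A, B = rng.normal(size=(6, 8)), rng.normal(size=(8, 5))
C = hierarchical_matmul(A, B)
```

    The arithmetic result is identical to a flat multiply; the benefit claimed in the paper comes from fewer and better-structured message exchanges, which a serial sketch cannot show.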

  14. Approaching a universal scaling relationship between fracture stiffness and fluid flow

    Science.gov (United States)

    Pyrak-Nolte, Laura J.; Nolte, David D.

    2016-02-01

    A goal of subsurface geophysical monitoring is the detection and characterization of fracture alterations that affect the hydraulic integrity of a site. Achievement of this goal requires a link between the mechanical and hydraulic properties of a fracture. Here we present a scaling relationship between fluid flow and fracture-specific stiffness that approaches universality. Fracture-specific stiffness is a mechanical property dependent on fracture geometry that can be monitored remotely using seismic techniques. A Monte Carlo numerical approach demonstrates that a scaling relationship exists between flow and stiffness for fractures with strongly correlated aperture distributions, and continues to hold for fractures deformed by applied stress and by chemical erosion as well. This new scaling relationship provides a foundation for simulating changes in fracture behaviour as a function of stress or depth in the Earth and will aid risk assessment of the hydraulic integrity of subsurface sites.

  15. A Confirmatory Factor Analysis on the Attitude Scale of Constructivist Approach for Science Teachers

    Directory of Open Access Journals (Sweden)

    E. Evrekli

    2010-11-01

    Full Text Available Underlining the importance of teachers for the constructivist approach, the present study attempts to develop the “Attitude Scale of Constructivist Approach for Science Teachers (ASCAST)”. The pre-applications of the scale were administered to a total of 210 science teachers; however, the data obtained from 5 teachers were excluded from the analysis. Analysis of the pre-application data suggested that the scale could have a single-factor structure, which was tested using confirmatory factor analysis. In the initial confirmatory factor analysis, the fit values were examined and found to be low. Subsequently, by examining the modification indices, error covariance was added between items 23 and 24 and the model was tested once again. The added error covariance led to a significant improvement in the model, producing fit values within acceptable limits. Thus, it was concluded that the scale could be employed with a single factor. The explained variance of the single-factor scale was calculated to be 50.43% and its reliability was found to be .93. These results suggest that the scale possesses valid and reliable characteristics and could be used in further studies.
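    A reliability coefficient of the kind reported here (.93) for a single-factor scale is conventionally estimated with Cronbach's alpha. A minimal sketch on synthetic item scores (the data, item count, and noise level are made up, not the ASCAST data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Four synthetic items driven by one latent factor (single-factor structure).
rng = np.random.default_rng(7)
latent = rng.normal(size=200)
items = latent[:, None] + rng.normal(scale=0.5, size=(200, 4))
alpha = cronbach_alpha(items)
```

    With items dominated by a common latent factor, alpha lands near the high values a unidimensional scale is expected to show.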

  16. A feasible approach to implement a commercial scale CANDU fuel manufacturing plant in Egypt

    International Nuclear Information System (INIS)

    El-Shehawy, I.; El-Sharaky, M.; Yasso, K.; Selim, I.; Graham, N.; Newington, D.

    1995-01-01

    Many planning scenarios have been examined to assess and evaluate the economic estimates for implementing a commercial-scale CANDU fuel manufacturing plant in Egypt. The cost estimates indicated a strong influence of the annual capital costs on total fuel manufacturing cost; this is particularly evident in a small initial plant where the proposed design output is only sufficient to supply reload fuel for a single CANDU-6 reactor. A modular approach is investigated as a possible way to reduce the capital costs for a small initial fuel plant. In this approach the plant would perform fuel assembly operations only, and the remainder of the plant would be constructed and equipped in stages when higher production volumes can justify the capital expenses. Such an approach seems economically feasible for implementing a small-scale CANDU fuel manufacturing plant in developing countries such as Egypt, and further improvement could be achieved over the years of operation. (author)

  17. College students with Internet addiction decrease fewer Behavior Inhibition Scale and Behavior Approach Scale when getting online.

    Science.gov (United States)

    Ko, Chih-Hung; Wang, Peng-Wei; Liu, Tai-Ling; Yen, Cheng-Fang; Chen, Cheng-Sheng; Yen, Ju-Yu

    2015-09-01

    The aim of the study is to compare reinforcement sensitivity between online and offline interaction. The effects of gender, Internet addiction, depression, and online gaming on the difference in reinforcement sensitivity between online and offline interaction were also evaluated. The subjects were 2,258 college students (1,066 men and 1,192 women). They completed the Behavior Inhibition Scale and Behavior Approach Scale (BIS/BAS) according to their experience online and offline. Internet addiction, depression, and Internet activity type were evaluated simultaneously. The results showed that reinforcement sensitivity was lower when interacting online than when interacting offline. College students with Internet addiction showed a smaller decrease in BIS and BAS scores after getting online than did others. Higher reward and aversion sensitivity are associated with the risk of Internet addiction, and fun seeking online might contribute to its maintenance. This suggests that reinforcement sensitivity changes after getting online and contributes to the risk and maintenance of Internet addiction. © 2014 Wiley Publishing Asia Pty Ltd.

  18. Creep-fatigue crack initiation assessment on thick circumferentially notched 316L tubes under cyclic thermal shocks and uniform tension with the σd approach

    International Nuclear Information System (INIS)

    Michel, B.; Poette, C.

    1997-01-01

    For crack initiation assessment under creep-fatigue loading in high-temperature fast reactor components, specific approaches based on fracture mechanics analysis had to be developed. In the present paper the crack initiation assessment method proposed in the A16 document is presented. The so-called ''σd method'' is also validated against experimental results for tubular specimens with internal axisymmetric surface cracks. Experimental data are extracted from the TERFIS program carried out on a sodium test device at CEA Cadarache. Metallurgical examinations on TERFIS specimens confirm that the initiation assessment of the ''σd'' approach is conservative even for a geometry different from the CT specimen on which the method was set up. However, the conservatism is reduced when the creep residual stress field is relaxed during the hold time. An investigation concerning this last point is needed in order to know whether relaxing the stress, when using a lower bound of the mechanical properties, always preserves a safety margin. (author). 14 refs, 10 figs, 4 tabs

  19. A multi-scale metrics approach to forest fragmentation for Strategic Environmental Impact Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eunyoung, E-mail: eykim@kei.re.kr [Korea Environment Institute, 215 Jinheungno, Eunpyeong-gu, Seoul 122-706 (Korea, Republic of); Song, Wonkyong, E-mail: wksong79@gmail.com [Suwon Research Institute, 145 Gwanggyo-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-270 (Korea, Republic of); Lee, Dongkun, E-mail: dklee7@snu.ac.kr [Department of Landscape Architecture and Rural System Engineering, Seoul National University, 599 Gwanakro, Gwanak-gu, Seoul 151-921 (Korea, Republic of); Research Institute for Agriculture and Life Sciences, Seoul National University, Seoul 151-921 (Korea, Republic of)

    2013-09-15

    Forests are becoming severely fragmented as a result of land development. South Korea has responded to changing community concerns about environmental issues. The nation has developed and is extending a broad range of tools for use in environmental management. Although legally mandated environmental compliance requirements in South Korea have been implemented to predict and evaluate the impacts of land-development projects, these legal instruments are often insufficient to assess the subsequent impact of development on the surrounding forests. It is especially difficult to examine impacts on multiple (e.g., regional and local) scales in detail. Forest configuration and size, including forest fragmentation by land development, are considered on a regional scale. Moreover, forest structure and composition, including biodiversity, are considered on a local scale in the Environmental Impact Assessment process. Recently, the government amended the Environmental Impact Assessment Act, including the SEA, EIA, and small-scale EIA, to require an integrated approach. Therefore, the purpose of this study was to establish an impact assessment system that minimizes the impacts of land development using an approach that is integrated across multiple scales. This study focused on forest fragmentation due to residential development and road construction sites in selected Congestion Restraint Zones (CRZs) in the Greater Seoul Area of South Korea. Based on a review of multiple-scale impacts, this paper integrates models that assess the impacts of land development on forest ecosystems. The applicability of the integrated model for assessing impacts on forest ecosystems through the SEIA process is considered. On a regional scale, it is possible to evaluate the location and size of a land-development project by considering aspects of forest fragmentation, such as the stability of the forest structure and the degree of fragmentation. On a local scale, land-development projects should...

  20. A multi-scale approach of fluvial biogeomorphic dynamics using photogrammetry.

    Science.gov (United States)

    Hortobágyi, Borbála; Corenblit, Dov; Vautier, Franck; Steiger, Johannes; Roussel, Erwan; Burkart, Andreas; Peiry, Jean-Luc

    2017-11-01

    Over the last twenty years, significant technical advances have turned photogrammetry into a relevant tool for the integrated analysis of biogeomorphic cross-scale interactions within vegetated fluvial corridors, which will contribute greatly to the development and improvement of self-sustaining river restoration efforts. Here, we propose a cost-effective, easily reproducible approach based on stereophotogrammetry and the Structure from Motion (SfM) technique to study feedbacks between fluvial geomorphology and riparian vegetation at different nested spatiotemporal scales. We combined different photogrammetric methods and thus were able to investigate biogeomorphic feedbacks at all three spatial scales (i.e., corridor, alluvial bar and micro-site) and at three different temporal scales (i.e., present, recent past and long-term evolution) on a diversified riparian landscape mosaic. We evaluate the performance and the limits of photogrammetric methods by targeting a set of fundamental parameters necessary to study biogeomorphic feedbacks at each of the three nested spatial scales and, when possible, propose appropriate solutions. The RMSE varies between 0.01 and 2 m depending on spatial scale and photogrammetric method. Despite some remaining difficulties in applying these methods with current technologies under all circumstances in fluvial biogeomorphic studies (e.g., detecting vegetation density or landform topography under a dense vegetation canopy), we suggest that photogrammetry is a promising instrument for the quantification of biogeomorphic feedbacks at nested spatial scales within river systems and for developing appropriate river management tools and strategies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. A new scaling approach for the mesoscale simulation of magnetic domain structures using Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, B., E-mail: radhakrishnb@ornl.gov; Eisenbach, M.; Burress, T.A.

    2017-06-15

    Highlights: • Developed new scaling technique for dipole–dipole interaction energy. • Developed new scaling technique for exchange interaction energy. • Used scaling laws to extend atomistic simulations to micrometer length scale. • Demonstrated transition from mono-domain to vortex magnetic structure. • Simulated domain wall width and transition length scale agree with experiments. - Abstract: A new scaling approach has been proposed for the spin exchange and the dipole–dipole interaction energy as a function of the system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. The transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.
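    The atomistic Monte Carlo machinery underlying such simulations is the Metropolis update. A minimal 2D Ising sketch for orientation (a generic illustration with a hypothetical scaled exchange constant J, not the authors' scaled dipole–dipole model):

```python
import numpy as np

def metropolis_ising(L=16, J=1.0, T=1.5, steps=20000, seed=1):
    """Metropolis Monte Carlo on an L x L Ising lattice with periodic
    boundaries; J plays the role of a (scaled) exchange constant."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(steps):
        i, j = rng.integers(L, size=2)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2.0 * J * s[i, j] * nb          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]
    return s

spins = metropolis_ising()
m = abs(spins.mean())   # magnetization magnitude, grows as domains coarsen
```

    In the paper's setting the spins are continuous magnetic moments and the Hamiltonian includes the scaled dipole–dipole term, but the accept/reject loop has the same shape.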

  2. A practical approach to compute short-wave irradiance interacting with subgrid-scale buildings

    Energy Technology Data Exchange (ETDEWEB)

    Sievers, Uwe; Frueh, Barbara [Deutscher Wetterdienst, Offenbach am Main (Germany)

    2012-08-15

    A numerical approach for the calculation of short-wave irradiances at the ground as well as at the walls and roofs of buildings in an environment with unresolved built-up is presented. In this radiative parameterization scheme the properties of the unresolved built-up are assigned to settlement types, which are characterized by mean values of the volume density of the buildings and their wall area density; the scheme is therefore named the wall area approach. In the vertical direction the range of building heights may be subdivided into several layers. In the case of non-uniform building heights the shadowing of the lower roofs by the taller buildings is taken into account. The method includes the approximate calculation of sky view and sun view factors. For an idealized building arrangement it is shown that the obtained approximate factors are in good agreement with exact calculations, as is also the case for the comparison of calculated and measured effective albedo values. For arrangements with isolated single buildings the presented wall area approach yields better agreement with observations than similar methods in which the unresolved built-up is characterized by the aspect ratio of a representative street canyon (aspect ratio approach). In the limiting case where the built-up is well represented by an ensemble of idealized street canyons, both approaches become equivalent. The presented short-wave radiation scheme is part of the microscale atmospheric model MUKLIMO 3, where it contributes to the calculation of surface temperatures on the basis of energy-flux equilibrium conditions. (orig.)
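    The roof-shadowing step can be illustrated with elementary geometry (a hedged sketch: the heights, gap, and roof depth are invented, and the actual scheme works with statistical wall area densities rather than explicit buildings):

```python
import math

def shadow_length(height, solar_elevation_deg):
    """Horizontal shadow length cast by an obstacle of given height:
    L = H / tan(solar elevation)."""
    return height / math.tan(math.radians(solar_elevation_deg))

def lower_roof_shaded_fraction(h_tall, h_low, gap, roof_depth, elev_deg):
    """Fraction of a lower roof (height h_low, 'gap' metres behind a taller
    building of height h_tall) lying in the taller building's shadow."""
    s = shadow_length(h_tall - h_low, elev_deg)   # shadow measured at roof level
    return min(max((s - gap) / roof_depth, 0.0), 1.0)

frac = lower_roof_shaded_fraction(h_tall=20.0, h_low=10.0, gap=2.0,
                                  roof_depth=16.0, elev_deg=45.0)
```

    At a 45° solar elevation the height difference equals the shadow length, so in this invented layout half of the lower roof is shaded.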

  3. Women in service uniforms

    Directory of Open Access Journals (Sweden)

    Hanna Karaszewska

    2012-12-01

    Full Text Available The article discusses the problems of women who work in the uniformed services, with particular emphasis on the profession of the prison service. It presents the legal issues relating to equal treatment of men and women in the workplace, the formal factors influencing their employment, the status of women in prison, and the problems they encounter in their professional role. The article also presents the results of research conducted in Poland and worldwide on the functioning of women in prison and their relations with officers of the Prison Service, as well as with inmates.

  4. Uniform gradient expansions

    CERN Document Server

    Giovannini, Massimo

    2015-01-01

    Cosmological singularities are often discussed by means of a gradient expansion that can also describe, during a quasi-de Sitter phase, the progressive suppression of curvature inhomogeneities. While the inflationary event horizon is being formed, the two regimes coexist and a uniform expansion can be conceived and applied to the evolution of spatial gradients across the protoinflationary boundary. It is argued that conventional arguments addressing the preinflationary initial conditions are necessary but generally not sufficient to guarantee a homogeneous onset of the conventional inflationary stage.

  5. A multi-scale relevance vector regression approach for daily urban water demand forecasting

    Science.gov (United States)

    Bai, Yun; Wang, Pu; Li, Chuan; Xie, Jingjing; Wang, Yin

    2014-09-01

    Water is one of the most important resources for economic and social development. Daily water demand forecasting is an effective measure for scheduling urban water facilities. This work proposes a multi-scale relevance vector regression (MSRVR) approach to forecast daily urban water demand. The approach uses the stationary wavelet transform to decompose historical time series of daily water supplies into different scales. At each scale, the wavelet coefficients are used to train a machine-learning model using the relevance vector regression (RVR) method. The estimated coefficients of the RVR outputs for all of the scales are employed to reconstruct the forecasting result through the inverse wavelet transform. To better facilitate MSRVR forecasting, the chaos features of the daily water supply series are analyzed to determine the input variables of the RVR model. In addition, an adaptive chaos particle swarm optimization algorithm is used to find the optimal combination of the RVR model parameters. The MSRVR approach is evaluated using real data collected from two waterworks and is compared with recently reported methods. The results show that the proposed MSRVR method can forecast daily urban water demand much more precisely in terms of the normalized root-mean-square error, correlation coefficient, and mean absolute percentage error criteria.
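    The decompose-model-reconstruct loop can be sketched with a one-level undecimated Haar transform and an ordinary least-squares AR(1) forecaster standing in for the RVR model (both substitutions are simplifications; the paper uses a deeper stationary wavelet transform and relevance vector regression, and the demand series below is synthetic):

```python
import numpy as np

def haar_swt(x):
    """One-level undecimated (stationary) Haar transform with circular
    boundary: a[n] = (x[n] + x[n-1]) / 2, d[n] = (x[n] - x[n-1]) / 2,
    so x = a + d holds exactly at every sample (trivial reconstruction)."""
    shifted = np.roll(x, 1)
    return (x + shifted) / 2.0, (x - shifted) / 2.0

def ar1_forecast(series):
    """Ordinary least-squares AR(1) one-step-ahead forecast."""
    X = np.column_stack([series[:-1], np.ones(len(series) - 1)])
    coef, *_ = np.linalg.lstsq(X, series[1:], rcond=None)
    return coef[0] * series[-1] + coef[1]

demand = 100.0 + 10.0 * np.sin(np.arange(64) / 4.0)     # synthetic daily demand
approx, detail = haar_swt(demand)                       # decompose per scale
forecast = ar1_forecast(approx) + ar1_forecast(detail)  # model per scale, recombine
```

    The design point the sketch preserves is that each scale gets its own regression model and the per-scale forecasts are recombined through the inverse transform.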

  6. Practice-oriented optical thin film growth simulation via multiple scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Turowski, Marcus, E-mail: m.turowski@lzh.de [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); Jupé, Marco [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany); Melzig, Thomas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Moskovkin, Pavel [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Daniel, Alain [Centre for Research in Metallurgy, CRM, 21 Avenue du bois Saint Jean, Liège 4000 (Belgium); Pflug, Andreas [Fraunhofer Institute for Surface Engineering and Thin Films IST, Bienroder Weg 54e, Braunschweig 30108 (Germany); Lucas, Stéphane [Research Centre for Physics of Matter and Radiation (PMR-LARN), University of Namur (FUNDP), 61 rue de Bruxelles, Namur 5000 (Belgium); Ristau, Detlev [Laser Zentrum Hannover e.V., Hollerithallee 8, Hannover 30419 (Germany); QUEST: Centre of Quantum Engineering and Space-Time Research, Leibniz Universität Hannover (Germany)

    2015-10-01

    Simulation of the coating process is a very promising approach for the understanding of thin film formation. Nevertheless, this complex matter cannot be covered by a single simulation technique. To consider all mechanisms and processes influencing the optical properties of the growing thin films, various common theoretical methods have been combined into a multi-scale model approach. The simulation techniques have been selected in order to describe all processes in the coating chamber, especially the various mechanisms of thin film growth, and to enable the analysis of the resulting structural as well as optical and electronic layer properties. All methods are merged with adapted communication interfaces to achieve optimum compatibility of the different approaches and to generate physically meaningful results. The present contribution offers an approach for the full simulation of an Ion Beam Sputtering (IBS) coating process combining direct simulation Monte Carlo, classical molecular dynamics, kinetic Monte Carlo, and density functional theory. As an example, the simulation is performed for an existing IBS coating plant to validate the developed multi-scale approach. Finally, the modeled results are compared to experimental data. - Highlights: • A model approach for simulating an Ion Beam Sputtering (IBS) process is presented. • In order to combine the different techniques, optimized interfaces are developed. • The transport of atomic species in the coating chamber is calculated. • We modeled structural and optical film properties based on simulated IBS parameters. • The modeled and the experimental refractive index data fit very well.

  7. Bridging the Gap between the Nanometer-Scale Bottom-Up and Micrometer-Scale Top-Down Approaches for Site-Defined InP/InAs Nanowires.

    Science.gov (United States)

    Zhang, Guoqiang; Rainville, Christophe; Salmon, Adrian; Takiguchi, Masato; Tateno, Kouta; Gotoh, Hideki

    2015-11-24

    This work presents a method that bridges the gap between the nanometer-scale bottom-up and micrometer-scale top-down approaches for site-defined nanostructures, which has long been a significant challenge for applications that require low-cost and high-throughput manufacturing processes. We realized the bridging by controlling the seed indium nanoparticle position through a self-assembly process. Site-defined InP nanowires were then grown from the indium-nanoparticle array in the vapor-liquid-solid mode through a "seed and grow" process. The nanometer-scale indium particles do not always occupy the same locations within the micrometer-scale open window of an InP exposed substrate due to the scale difference. We developed a technique for aligning the nanometer-scale indium particles on the same side of the micrometer-scale window by structuring the surface of a misoriented InP (111)B substrate. Finally, we demonstrated that the developed method can be used to grow a uniform InP/InAs axial-heterostructure nanowire array. The ability to form a heterostructure nanowire array with this method makes it possible to tune the emission wavelength over a wide range by employing the quantum confinement effect and thus expand the application of this technology to optoelectronic devices. Successfully pairing a controllable bottom-up growth technique with a top-down substrate preparation technique greatly improves the potential for the mass-production and widespread adoption of this technology.

  8. A new approach for modeling and analysis of molten salt reactors using SCALE

    Energy Technology Data Exchange (ETDEWEB)

    Powers, J. J.; Harrison, T. J.; Gehin, J. C. [Oak Ridge National Laboratory, PO Box 2008, Oak Ridge, TN 37831-6172 (United States)

    2013-07-01

    The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options are put into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the valuation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)
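    The continuous feed-and-removal balance that such an equilibrium search must converge can be illustrated with a toy one-nuclide model, dN/dt = F − λN (hypothetical rates; this is not the SCALE/TRITON implementation, which iterates full depletion calculations):

```python
def equilibrium_inventory(feed_rate, removal_const, dt=0.1, tol=1e-8, max_steps=1_000_000):
    """Explicit Euler iteration of dN/dt = F - lambda * N until the update
    is negligible; the analytic steady state is N = F / lambda."""
    N = 0.0
    for _ in range(max_steps):
        dN = (feed_rate - removal_const * N) * dt
        N += dN
        if abs(dN) < tol:
            break
    return N

N_eq = equilibrium_inventory(feed_rate=5.0, removal_const=0.5)   # converges near F/lambda = 10.0
```

    The real analysis couples many nuclides through transmutation and per-element removal fractions, but the fixed-point structure (iterate until the operating state stops changing) is the same.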

  9. A new approach for modeling and analysis of molten salt reactors using SCALE

    International Nuclear Information System (INIS)

    Powers, J. J.; Harrison, T. J.; Gehin, J. C.

    2013-01-01

    The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options are put into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the valuation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)

  10. Scale-Dependence of Processes Structuring Dung Beetle Metacommunities Using Functional Diversity and Community Deconstruction Approaches

    Science.gov (United States)

    da Silva, Pedro Giovâni; Hernández, Malva Isabel Medina

    2015-01-01

    Community structure is driven by mechanisms linked to environmental, spatial and temporal processes, which have been successfully addressed using the metacommunity framework. The relative importance of processes shaping community structure can be identified using several different approaches. Two approaches that are increasingly being used are functional diversity and community deconstruction. Functional diversity is measured using various indices that incorporate distinct community attributes. Community deconstruction is a way to disentangle species responses to ecological processes by grouping species with similar traits. We used these two approaches to determine whether they are improvements over traditional measures (e.g., species composition, abundance, biomass) for identification of the main processes driving dung beetle (Scarabaeinae) community structure in a fragmented mainland-island landscape in southern Brazilian Atlantic Forest. We sampled five sites in each of four large forest areas, two on the mainland and two on the island. Sampling was performed in 2012 and 2013. We collected abundance and biomass data from 100 sampling points distributed over 20 sampling sites. We studied environmental, spatial and temporal effects on the dung beetle community across three spatial scales, i.e., between sites, between areas and mainland-island. The γ-diversity based on species abundance was mainly attributed to β-diversity as a consequence of the increase in mean α- and β-diversity between areas. Variation partitioning on abundance, biomass and functional diversity showed scale-dependence of the processes structuring dung beetle metacommunities. We identified two major groups of responses among 17 functional groups. In general, environmental filters were important at both local and regional scales. Spatial factors were important at the intermediate scale. Our study supports the notion of scale-dependence of environmental, spatial and temporal processes in the distribution...

  11. A cross-scale approach to understand drought-induced variability of sagebrush ecosystem productivity

    Science.gov (United States)

    Assal, T.; Anderson, P. J.

    2016-12-01

    Sagebrush (Artemisia spp.) mortality has recently been reported in the Upper Green River Basin (Wyoming, USA) of the sagebrush steppe of western North America. Numerous causes have been suggested, but recent drought (2012-13) is the likely mechanism of mortality in this water-limited ecosystem which provides critical habitat for many species of wildlife. An understanding of the variability in patterns of productivity with respect to climate is essential to exploit landscape scale remote sensing for detection of subtle changes associated with mortality in this sparse, uniformly vegetated ecosystem. We used the standardized precipitation index to characterize drought conditions and Moderate Resolution Imaging Spectroradiometer (MODIS) satellite imagery (250-m resolution) to characterize broad characteristics of growing season productivity. We calculated per-pixel growing season anomalies over a 16-year period (2000-2015) to identify the spatial and temporal variability in productivity. Metrics derived from Landsat satellite imagery (30-m resolution) were used to further investigate trends within anomalous areas at local scales. We found evidence to support an initial hypothesis that antecedent winter drought was most important in explaining reduced productivity. The results indicate drought effects were inconsistent over space and time. MODIS derived productivity deviated by more than four standard deviations in heavily impacted areas, but was well within the interannual variability in other areas. Growing season anomalies highlighted dramatic declines in productivity during the 2012 and 2013 growing seasons. However, large negative anomalies persisted in other areas during the 2014 growing season, indicating lag effects of drought. We are further investigating if the reduction in productivity is mediated by local biophysical properties. Our analysis identified spatially explicit patterns of ecosystem properties altered by severe drought which are consistent with
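The per-pixel growing-season anomalies described above are standardized departures from a multi-year mean. A toy sketch of that computation on a synthetic 16-year stack (array sizes, values, and the imposed drought signal are invented for illustration; this is not the MODIS processing chain itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stack of growing-season productivity composites:
# 16 years x 10 x 10 pixels (a stand-in for MODIS-derived values).
years, ny, nx = 16, 10, 10
stack = rng.normal(loc=0.6, scale=0.05, size=(years, ny, nx))

# Impose a drought-like productivity drop in year index 12 over one corner.
stack[12, :5, :5] -= 0.3

# Per-pixel standardized anomaly for each year: (x - mean) / std,
# with mean and std taken over the 16-year record.
mean = stack.mean(axis=0)
std = stack.std(axis=0, ddof=1)
anomalies = (stack - mean) / std

# Pixels deviating by more than ~2 standard deviations in the drought year.
impacted = anomalies[12] < -2.0
print(impacted.sum(), "pixels flagged as heavily impacted")
```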

  12. Scales

    Science.gov (United States)

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...

  13. Modeling Impact-induced Failure of Polysilicon MEMS: A Multi-scale Approach.

    Science.gov (United States)

    Mariani, Stefano; Ghisi, Aldo; Corigliano, Alberto; Zerbini, Sarah

    2009-01-01

    Failure of packaged polysilicon micro-electro-mechanical systems (MEMS) subjected to impacts involves phenomena occurring at several length-scales. In this paper we present a multi-scale finite element approach to properly allow for: (i) the propagation of stress waves inside the package; (ii) the dynamics of the whole MEMS; (iii) the spreading of micro-cracking in the failing part(s) of the sensor. Through Monte Carlo simulations, some effects of polysilicon micro-structure on the failure mode are elucidated.

  14. Time-dependent approach to collisional ionization using exterior complex scaling

    International Nuclear Information System (INIS)

    McCurdy, C. William; Horner, Daniel A.; Rescigno, Thomas N.

    2002-01-01

    We present a time-dependent formulation of the exterior complex scaling method that has previously been used to treat electron-impact ionization of the hydrogen atom accurately at low energies. The time-dependent approach solves a driven Schrödinger equation and scales more favorably with the number of electrons than the original formulation. The method is demonstrated in calculations for breakup processes in two dimensions (2D) and three dimensions for systems involving short-range potentials, and in 2D for electron-impact ionization in the Temkin-Poet model for electron-hydrogen atom collisions.
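For orientation, the standard exterior complex scaling transformation, and the driven-equation form it is commonly applied to, can be written as follows (this is the textbook form of the method, not reproduced from the paper; the exterior radius R_0 and rotation angle η are method parameters):

```latex
% Exterior complex scaling: coordinates are rotated into the complex
% plane only beyond a chosen radius R_0:
r \;\longrightarrow\; R(r) =
\begin{cases}
r, & r \le R_0,\\[2pt]
R_0 + (r - R_0)\,e^{i\eta}, & r > R_0 .
\end{cases}
% Driven (inhomogeneous) Schroedinger equation for the scattered part,
% with \Phi_0 the unperturbed initial state:
(E - H)\,\Psi_{\mathrm{sc}} = (H - E)\,\Phi_0 .
```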

  15. A probabilistic approach to quantifying spatial patterns of flow regimes and network-scale connectivity

    Science.gov (United States)

    Garbin, Silvia; Alessi Celegon, Elisa; Fanton, Pietro; Botter, Gianluca

    2017-04-01

    The temporal variability of river flow regime is a key feature structuring and controlling fluvial ecological communities and ecosystem processes. In particular, streamflow variability induced by climate/landscape heterogeneities or other anthropogenic factors significantly affects the connectivity between streams with notable implication for river fragmentation. Hydrologic connectivity is a fundamental property that guarantees species persistence and ecosystem integrity in riverine systems. In riverine landscapes, most ecological transitions are flow-dependent and the structure of flow regimes may affect ecological functions of endemic biota (i.e., fish spawning or grazing of invertebrate species). Therefore, minimum flow thresholds must be guaranteed to support specific ecosystem services, like fish migration, aquatic biodiversity and habitat suitability. In this contribution, we present a probabilistic approach aiming at a spatially-explicit, quantitative assessment of hydrologic connectivity at the network-scale as derived from river flow variability. Dynamics of daily streamflows are estimated based on catchment-scale climatic and morphological features, integrating a stochastic, physically based approach that accounts for the stochasticity of rainfall with a water balance model and a geomorphic recession flow model. The non-exceedance probability of ecologically meaningful flow thresholds is used to evaluate the fragmentation of individual stream reaches, and the ensuing network-scale connectivity metrics. A multi-dimensional Poisson Process for the stochastic generation of rainfall is used to evaluate the impact of climate signature on reach-scale and catchment-scale connectivity. The analysis shows that streamflow patterns and network-scale connectivity are influenced by the topology of the river network and the spatial variability of climatic properties (rainfall, evapotranspiration). 
The framework offers a robust basis for the prediction of the impact of
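The core quantity above, the non-exceedance probability of a minimum-flow threshold, can be estimated empirically from a daily flow series. A hedged sketch with synthetic gamma-distributed flows (the threshold, the distribution parameters, and the simple path-product connectivity metric are illustrative assumptions, not the authors' stochastic model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily streamflow for three reaches along a river path,
# modeled here (illustratively) as gamma-distributed flows in m^3/s.
n_days = 10_000
reaches = {
    "headwater": rng.gamma(shape=2.0, scale=1.0, size=n_days),  # mean ~2
    "mid":       rng.gamma(shape=2.0, scale=2.0, size=n_days),  # mean ~4
    "outlet":    rng.gamma(shape=2.0, scale=4.0, size=n_days),  # mean ~8
}

q_min = 1.0  # assumed ecologically meaningful minimum-flow threshold

# Non-exceedance probability P(Q < q_min): the fraction of days a reach
# is "fragmented" (below threshold).
p_frag = {name: float((q < q_min).mean()) for name, q in reaches.items()}

# A simple path-scale connectivity metric: the probability that every
# reach on the path is above threshold on the same day.
all_open = np.ones(n_days, dtype=bool)
for q in reaches.values():
    all_open &= q >= q_min
path_connectivity = float(all_open.mean())

print(p_frag, path_connectivity)
```

Smaller headwater reaches fall below the threshold most often, so they dominate the path-scale fragmentation, which is the qualitative behavior the probabilistic framework quantifies.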

  16. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  17. Polymer density functional theory approach based on scaling second-order direct correlation function.

    Science.gov (United States)

    Zhou, Shiqi

    2006-06-01

    A second-order direct correlation function (DCF) obtained by solving the polymer-RISM integral equation is scaled up or down by an equation of state for the bulk polymer; the resulting scaled second-order DCF is in better agreement with corresponding simulation results than the unscaled one. When the scaled second-order DCF is imported into a recently proposed LTDFA-based polymer DFT approach, an originally adjustable but mathematically meaningless parameter becomes mathematically meaningful, i.e., its numerical value now lies between 0 and 1. When the adjustable-parameter-free version of the LTDFA is used instead, i.e., the parameter is fixed at 0.5, the resulting parameter-free version of the scaled-LTDFA-based polymer DFT is also in good agreement with the corresponding simulation data for density profiles. The parameter-free version is employed to investigate the density profiles of a freely jointed tangent hard-sphere chain near a central hard sphere of variable size; again the predictions accurately reproduce the simulation results. The importance of the present parameter-free version lies in its combination with a recently proposed universal theoretical approach; in the resulting formalism, the contact theorem is still satisfied by the parameter associated with that approach.

  18. Environmental Remediation Full-Scale Implementation: Back to Simple Microbial Massive Culture Approaches

    Directory of Open Access Journals (Sweden)

    Agung Syakti

    2010-10-01

    Full Text Available Bioaugmentation and biostimulation approaches for bioremediation of contaminated soil were investigated and implemented at field scale. We combined the two approaches by massively culturing petrophilic indigenous microorganisms from chronically contaminated soil enriched with mixed manure. With these methods, bioremediation performance showed promising results in removing petroleum hydrocarbons, in contrast to approaches relying on metabolic by-products such as biosurfactants, specific enzymes and other extracellular products, which are considered difficult to apply and increase cost.

  19. The Universal Patient Centredness Questionnaire: scaling approaches to reduce positive skew

    Directory of Open Access Journals (Sweden)

    Bjertnaes O

    2016-11-01

    Full Text Available Oyvind Bjertnaes, Hilde Hestad Iversen, Andrew M Garratt Unit for Patient-Reported Quality, Norwegian Institute of Public Health, Oslo, Norway Purpose: Surveys of patients’ experiences typically show results that are indicative of positive experiences. Unbalanced response scales have reduced positive skew for responses to items within the Universal Patient Centeredness Questionnaire (UPC-Q. The objective of this study was to compare the unbalanced response scale with another unbalanced approach to scaling to assess whether the positive skew might be further reduced. Patients and methods: The UPC-Q was included in a patient experience survey conducted at the ward level at six hospitals in Norway in 2015. The postal survey included two reminders to nonrespondents. For patients in the first month of inclusion, UPC-Q items had standard scaling: poor, fairly good, good, very good, and excellent. For patients in the second month, the scaling was more positive: poor, good, very good, exceptionally good, and excellent. The effect of scaling on UPC-Q scores was tested with independent samples t-tests and multilevel linear regression analysis, the latter controlling for the hierarchical structure of data and known predictors of patient-reported experiences. Results: The response rate was 54.6% (n=4,970. Significantly lower scores were found for all items of the more positively worded scale: UPC-Q total score difference was 7.9 (P<0.001, on a scale from 0 to 100 where 100 is the best possible score. Differences between the four items of the UPC-Q ranged from 7.1 (P<0.001 to 10.4 (P<0.001. Multivariate multilevel regression analysis confirmed the difference between the response groups, after controlling for other background variables; UPC-Q total score difference estimate was 8.3 (P<0.001. Conclusion: The more positively worded scaling significantly lowered the mean scores, potentially increasing the sensitivity of the UPC-Q to identify differences over
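The group comparison reported above rests on independent-samples t-tests. A minimal sketch using Welch's unequal-variance variant of the test, computed by hand on simulated 0-100 scores (all values are made up; only the ~8-point mean difference mirrors the size of the reported effect):

```python
import math
import random

random.seed(3)

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Simulated UPC-Q total scores (0-100) for the two scaling variants.
standard = [min(100.0, max(0.0, random.gauss(78, 15))) for _ in range(2000)]
positive = [min(100.0, max(0.0, random.gauss(70, 15))) for _ in range(2000)]

t = welch_t(standard, positive)
print(round(t, 1))
```

With a mean difference this large relative to the standard errors, the statistic is far out in the tail, consistent with the P<0.001 results reported.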

  20. A fingerprinting mixing model approach to generate uniformly representative solutions for distributed contributions of sediment sources in a Pyrenean drainage basin

    Science.gov (United States)

    Palazón, Leticia; Gaspar, Leticia; Latorre, Borja; Blake, Will; Navas, Ana

    2014-05-01

    Spanish Pyrenean reservoirs are under pressure from high sediment yields in contributing catchments. Sediment fingerprinting approaches offer potential to quantify the contribution of different sediment sources, evaluate catchment erosion dynamics and develop management plans to tackle reservoir siltation problems. The drainage basin of the Barasona reservoir (1509 km²), located in the Central Spanish Pyrenees, is an alpine-prealpine agroforest basin supplying sediments to the reservoir at an annual rate of around 350 t km⁻², with implications for reservoir longevity. The climate is mountain type, wet and cold, with both Atlantic and Mediterranean influences. Steep slopes and the presence of deep and narrow gorges favour rapid runoff and large floods. The ability of geochemical fingerprint properties to discriminate between the sediment sources was investigated by conducting the nonparametric Kruskal-Wallis H-test and a stepwise discriminant function analysis (minimization of Wilks' lambda). This standard procedure selects potential fingerprinting properties as an optimum composite fingerprint to characterize and discriminate between sediment sources to the reservoir. Then the contribution of each potential sediment source was assessed by applying a Monte Carlo mixing model to obtain source proportions for the Barasona reservoir sediment samples. The Monte Carlo mixing model was written in the C programming language and designed to deliver a user-defined number of possible solutions. A Combinatorial Principles method was used to identify the most probable solution, with associated uncertainty based on source variability. The unique solution for each sample was characterized by the mean value and standard deviation of the generated solutions and the lowest goodness-of-fit value applied. This method is argued to guarantee a similar set of representative solutions in all unmixing cases based on likelihood of occurrence. Soil samples for the different potential sediment
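A Monte Carlo mixing model of the general kind described, sampling random source proportions on the simplex and retaining the best-fitting solutions, can be sketched as follows (tracer values, source names, and the goodness-of-fit form are invented for illustration; the authors' C implementation is not reproduced here):

```python
import random

random.seed(4)

# Illustrative two-tracer fingerprinting: three sources with known mean
# tracer concentrations, and one mixture (reservoir sediment) sample.
sources = {            # tracer A, tracer B (made-up values)
    "agricultural": (10.0, 2.0),
    "forest":       (4.0, 8.0),
    "channel":      (12.0, 12.0),
}
mixture = (9.0, 6.5)
names = list(sources)

def random_proportions(k):
    """Uniform random point on the k-simplex (proportions summing to 1)."""
    cuts = sorted(random.random() for _ in range(k - 1))
    edges = [0.0] + cuts + [1.0]
    return [edges[i + 1] - edges[i] for i in range(k)]

def gof(props):
    """Sum of squared relative errors between mixed and observed tracers."""
    err = 0.0
    for t in range(2):
        mixed = sum(p * sources[n][t] for p, n in zip(props, names))
        err += ((mixed - mixture[t]) / mixture[t]) ** 2
    return err

# Monte Carlo search: keep the solutions with the best goodness of fit,
# then characterize the retained set by its mean proportions.
candidates = [random_proportions(3) for _ in range(50_000)]
candidates.sort(key=gof)
best = candidates[:500]
means = [sum(s[i] for s in best) / len(best) for i in range(3)]
print({n: round(m, 2) for n, m in zip(names, means)})
```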

  1. Should School Nurses Wear Uniforms?

    Science.gov (United States)

    Journal of School Health, 2001

    2001-01-01

    This 1958 paper questions whether school nurses should wear uniforms (specifically, white uniforms). It concludes that white uniforms are often associated with the treatment of ill people, and since many people have a fear reaction to them, they are not necessary and are even undesirable. Since school nurses are school staff members, they should…

  2. Approaches to 30 Percent Energy Savings at the Community Scale in the Hot-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Thomas-Rees, S. [Building America Partnership for Improved Residential Construction (BA-PIRC), Cocoa, FL (United States); Beal, D. [Building America Partnership for Improved Residential Construction (BA-PIRC), Cocoa, FL (United States); Martin, E. [Building America Partnership for Improved Residential Construction (BA-PIRC), Cocoa, FL (United States)

    2013-03-01

    BA-PIRC has worked with several community-scale builders within the hot humid climate zone to improve performance of production, or community scale, housing. Tommy Williams Homes (Gainesville, FL), Lifestyle Homes (Melbourne, FL), and Habitat for Humanity (various locations, FL) have all been continuous partners of the Building America program and are the subjects of this report to document achievement of the Building America goal of 30% whole house energy savings packages adopted at the community scale. Key aspects of this research include determining how to evolve existing energy efficiency packages to produce replicable target savings, identifying what builders' technical assistance needs are for implementation and working with them to create sustainable quality assurance mechanisms, and documenting the commercial viability through neutral cost analysis and market acceptance. This report documents certain barriers builders overcame and the approaches they implemented in order to accomplish Building America (BA) Program goals that have not already been documented in previous reports.

  3. FEM × DEM: a new efficient multi-scale approach for geotechnical problems with strain localization

    Directory of Open Access Journals (Sweden)

    Nguyen Trung Kien

    2017-01-01

    Full Text Available The paper presents a multi-scale approach to modeling Boundary Value Problems (BVP) involving cohesive-frictional granular materials in the FEM × DEM multi-scale framework. On the DEM side, a 3D model is defined based on the interactions of spherical particles. This DEM model is built through a numerical homogenization process applied to a Volume Element (VE). It is then paired with a Finite Element code. Using this numerical tool that combines two scales within the same framework, we conducted simulations of biaxial and pressuremeter tests on a cohesive-frictional granular medium. In these cases, it is known that strain localization does occur at the macroscopic level, but since FEMs suffer from severe mesh dependency as soon as a shear band starts to develop, the second gradient regularization technique has been used. As a consequence, the objectivity of the computation with respect to mesh dependency is restored.

  4. “HABITAT MAPPING” GEODATABASE, AN INTEGRATED INTERDISCIPLINARY AND MULTI-SCALE APPROACH FOR DATA MANAGEMENT

    OpenAIRE

    Grande, Valentina; Angeletti, Lorenzo; Campiani, Elisabetta; Conese, Ilaria; Foglini, Federica; Leidi, Elisa; Mercorella, Alessandra; Taviani, Marco

    2016-01-01

    Abstract Historically, a number of different key concepts and methods dealing with marine habitat classifications and mapping have been developed to date. The EU CoCoNET project provides a new attempt in establishing an integrated approach on the definition of habitats. This scheme combines multi-scale geological and biological data, in fact it consists of three levels (Geomorphological level, Substrate level and Biological level) which in turn are divided into several h...

  5. Uniformity of cylindrical imploding underwater shockwaves at very small radii

    Science.gov (United States)

    Yanuka, D.; Rososhek, A.; Bland, S. N.; Krasik, Ya. E.

    2017-11-01

    We compare the convergent shockwaves generated from underwater, cylindrical arrays of copper wire exploded by multiple kilo-ampere current pulses on nanosecond and microsecond scales. In both cases, the pulsed power devices used for the experiments had the same stored energy (˜500 J) and the wire mass was adjusted to optimize energy transfer to the shockwave. Laser backlit framing images of the shock front were achieved down to a radius of 30 μm. It was found that even in the case of initial azimuthal asymmetry, the shock wave self-repairs in the final stages of its motion, leading to a highly uniform implosion. In both these and previous experiments, interference fringes have been observed in streak and framing images as the shockwave approached the axis. We have been able to accurately model the origin of the fringes, which is due to the propagation of the laser beam diffracting off the uniform converging shock front. The dynamics of the shockwave and its uniformity at small radii indicate that even with only 500 J of stored energy, this technique should produce pressures above 10¹⁰ Pa on the axis, with temperatures and densities ideal for warm dense matter research.

  6. Magnetostatics of the uniformly polarized torus

    DEFF Research Database (Denmark)

    Beleggia, Marco; De Graef, Marc; Millev, Yonko

    2009-01-01

    We provide an exhaustive description of the magnetostatics of the uniformly polarized torus and its derivative self-intersecting (spindle) shapes. In the process, two complementary approaches have been implemented, position-space analysis of the Laplace equation with inhomogeneous boundary condit...

  7. A Scale-up Approach for Film Coating Process Based on Surface Roughness as the Critical Quality Attribute.

    Science.gov (United States)

    Yoshino, Hiroyuki; Hara, Yuko; Dohi, Masafumi; Yamashita, Kazunari; Hakomori, Tadashi; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru

    2018-04-01

    Scale-up approaches for film coating process have been established for each type of film coating equipment from thermodynamic and mechanical analyses for several decades. The objective of the present study was to establish a versatile scale-up approach for film coating process applicable to commercial production that is based on critical quality attribute (CQA) using the Quality by Design (QbD) approach and is independent of the equipment used. Experiments on a pilot scale using the Design of Experiment (DoE) approach were performed to find a suitable CQA from surface roughness, contact angle, color difference, and coating film properties by terahertz spectroscopy. Surface roughness was determined to be a suitable CQA from a quantitative appearance evaluation. When surface roughness was fixed as the CQA, the water content of the film-coated tablets was determined to be the critical material attribute (CMA), a parameter that does not depend on scale or equipment. Finally, to verify the scale-up approach determined from the pilot scale, experiments on a commercial scale were performed. The good correlation between the surface roughness (CQA) and the water content (CMA) identified at the pilot scale was also retained at the commercial scale, indicating that our proposed method should be useful as a scale-up approach for film coating process.
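The pilot-to-commercial relationship between surface roughness (CQA) and water content (CMA) is the kind of association a simple Pearson correlation quantifies. An illustrative sketch on hypothetical paired data (the negative trend and all values are assumed for illustration, not taken from the study):

```python
import math
import random

random.seed(5)

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired measurements: tablet water content (%) during
# coating and resulting surface roughness (arbitrary units).
water = [random.uniform(1.0, 4.0) for _ in range(50)]
rough = [5.0 - 0.8 * w + random.gauss(0, 0.3) for w in water]

r = pearson_r(water, rough)
print(round(r, 2))
```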

  8. Approaches to 30% Energy Savings at the Community Scale in the Hot-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Thomas-Rees, S.; Beal, D.; Martin, E.; Fonorow, K.

    2013-03-01

    BA-PIRC has worked with several community-scale builders within the hot humid climate zone to improve performance of production, or community scale, housing. Tommy Williams Homes (Gainesville, FL), Lifestyle Homes (Melbourne, FL), and Habitat for Humanity (various locations, FL) have all been continuous partners of the BA Program and are the subjects of this report to document achievement of the Building America goal of 30% whole house energy savings packages adopted at the community scale. The scope of this report is to demonstrate achievement of these goals though the documentation of production-scale homes built cost-effectively at the community scale, and modeled to reduce whole-house energy use by 30% in the Hot Humid climate region. Key aspects of this research include determining how to evolve existing energy efficiency packages to produce replicable target savings, identifying what builders' technical assistance needs are for implementation and working with them to create sustainable quality assurance mechanisms, and documenting the commercial viability through neutral cost analysis and market acceptance. This report documents certain barriers builders overcame and the approaches they implemented in order to accomplish Building America (BA) Program goals that have not already been documented in previous reports.

  9. Skin carcinogenesis following uniform and non-uniform β irradiation

    International Nuclear Information System (INIS)

    Charles, M.W.; Williams, J.P.; Coggle, J.E.

    1989-01-01

    Where workers or the general public may be exposed to ionising radiation, the irradiation is rarely uniform. The risk figures and dose limits recommended by the International Commission on Radiological Protection (ICRP) are based largely on clinical and epidemiological studies of reasonably uniformly irradiated organs. The paucity of clinical or experimental data for highly non-uniform exposures has prevented the ICRP from providing adequate recommendations. This weakness has led on a number of occasions to the postulate that highly non-uniform exposures of organs could be 100,000 times more carcinogenic than ICRP risk figures would predict. This so-called ''hot-particle hypothesis'' found little support among reputable radiobiologists, but could not be clearly and definitively refuted on the basis of experiment. An experiment, based on skin tumour induction in mouse skin, is described which was developed to test the hypothesis. The skin of 1200 SAS/4 male mice has been exposed to a range of uniform and non-uniform sources of the β emitter ¹⁷⁰Tm (E_max ∼ 1 MeV). Non-uniform exposures were produced using arrays of 32 or 8 sources of 2 mm diameter distributed over the same 8-cm² area as a uniform control source. Average skin doses varied from 2 to 100 Gy. The results for the non-uniform sources show a 30% reduction in tumour incidence by the 32-point array at the lower mean doses compared with the response from uniform sources. The eight-point array showed an order-of-magnitude reduction in tumour incidence compared to uniform irradiation at low doses. These results, in direct contradiction to the ''hot particle hypothesis'', indicate that non-uniform exposures produce significantly fewer tumours than uniform exposures. (author)

  10. Exploring Multi-Scale Spatiotemporal Twitter User Mobility Patterns with a Visual-Analytics Approach

    Directory of Open Access Journals (Sweden)

    Junjun Yin

    2016-10-01

    Full Text Available Understanding human mobility patterns is of great importance for urban planning, traffic management, and even marketing campaigns. However, the capability of capturing detailed human movements with fine-grained spatial and temporal granularity is still limited. In this study, we extracted high-resolution mobility data from a collection of over 1.3 billion geo-located Twitter messages. To address concerns about infringement of individual privacy that arise with datasets such as mobile phone call records, which have restricted access, the dataset was collected from publicly accessible Twitter data streams. In this paper, we employed a visual-analytics approach to studying multi-scale spatiotemporal Twitter user mobility patterns in the contiguous United States during the year 2014. Our approach included a scalable visual-analytics framework to deliver efficiency and scalability in filtering large volumes of geo-located tweets, modeling and extracting Twitter user movements, generating space-time user trajectories, and summarizing multi-scale spatiotemporal user mobility patterns. We performed a set of statistical analyses to understand Twitter user mobility patterns across multi-level spatial scales and temporal ranges. In particular, Twitter user mobility patterns measured by the displacements and radius of gyrations of individuals revealed multi-scale or multi-modal Twitter user mobility patterns. By further studying such mobility patterns in different temporal ranges, we identified both consistency and seasonal fluctuations regarding the distance decay effects in the corresponding mobility patterns. At the same time, our approach provides a geo-visualization unit with an interactive 3D virtual globe web mapping interface for exploratory geo-visual analytics of the multi-level spatiotemporal Twitter user movements.

  11. Linking biogeomorphic feedbacks from ecosystem engineer to landscape scale: a panarchy approach

    Science.gov (United States)

    Eichel, Jana

    2017-04-01

    Scale is a fundamental concept in both ecology and geomorphology. Therefore, scale-based approaches are a valuable tool to bridge the disciplines and improve the understanding of feedbacks between geomorphic processes, landforms, material and organisms and ecological processes in biogeomorphology. Yet, linkages between biogeomorphic feedbacks on different scales, e.g. between ecosystem engineering and landscape scale patterns and dynamics, are not well understood. A panarchy approach sensu Holling et al. (2002) can help to close this research gap and explain how structure and function are created in biogeomorphic ecosystems. Based on results from previous biogeomorphic research in Turtmann glacier foreland (Switzerland; Eichel, 2017; Eichel et al. 2013, 2016), a panarchy concept is presented for lateral moraine slope biogeomorphic ecosystems. It depicts biogeomorphic feedbacks on different spatiotemporal scales as a set of nested adaptive cycles and links them by 'remember' and 'revolt' connections. On a small scale (cm² - m²; seconds to years), the life cycle of the ecosystem engineer Dryas octopetala L. is considered as an adaptive cycle. Biogeomorphic succession within patches created by geomorphic processes represents an intermediate scale adaptive cycle (m² - ha, years to decades), while geomorphic and ecologic pattern development at a landscape scale (ha - km², decades to centuries) can be illustrated by an adaptive cycle of 'biogeomorphic patch dynamics' (Eichel, 2017). In the panarchy, revolt connections link the smaller scale adaptive cycles to larger scale cycles: on lateral moraine slopes, the development of ecosystem engineer biomass and cover controls the engineering threshold of the biogeomorphic feedback window (Eichel et al., 2016) and therefore the onset of the biogeomorphic phase during biogeomorphic succession. In this phase, engineer patches and biogeomorphic structures can be created in the patch mosaic of the landscape. Remember connections

  12. Properties of small-scale interfacial turbulence from a novel thermography based approach

    Science.gov (United States)

    Schnieders, Jana; Garbe, Christoph

    2013-04-01

    Oceans cover nearly two thirds of the earth's surface, and exchange processes between the atmosphere and the ocean are of fundamental environmental importance. At the air-sea interface, complex interaction processes take place on a multitude of scales. Turbulence plays a key role in the coupling of momentum, heat and mass transfer [2]. Here we use high-resolution infrared imagery to visualize near-surface aqueous turbulence. Thermographic data is analyzed from a range of laboratory facilities and experimental conditions, with wind speeds ranging from 1 m s⁻¹ to 7 m s⁻¹ and various surface conditions. The surface heat pattern is formed by distinct structures on two scales - small-scale, short-lived structures termed fish scales and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: (1) the patterns show characteristic features on two scales; (2) the structure of these patterns changes with increasing wind stress and surface conditions. We present a new image-processing-based approach to the analysis of the spacing of cold streaks, based on a machine learning approach [4, 1] to classify the thermal footprints of near-surface turbulence. Our random forest classifier is based on classical features in image processing, such as gray-value gradients and edge-detecting features. The result is a pixel-wise classification of the surface heat pattern with a subsequent analysis of the streak spacing. This approach has been presented in [3] and can be applied to a wide range of experimental data. In spite of entirely different boundary conditions, the spacing of turbulent cells near the air-water interface seems to match the expected turbulent cell size for flow near a no-slip wall. The analysis of the spacing of cold streaks shows consistent behavior in a range of laboratory facilities when expressed as a function of water-sided friction velocity, u*. The scales
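The pixel-wise classification described rests on classical image features such as gray-value gradients. The sketch below computes such features on a toy "cold streak" image and uses a simple mean-based threshold as a stand-in for the trained random forest (the image, the threshold rule, and the streak geometry are all illustrative assumptions, not the paper's classifier):

```python
import numpy as np

# Toy "surface heat" image: a cold streak (low values) on a warm
# background, standing in for an IR frame; the paper's random forest
# consumes per-pixel features like the gradients computed here.
img = np.full((32, 32), 1.0)
img[:, 14:18] = 0.2                       # vertical cold streak
img += np.random.default_rng(6).normal(scale=0.02, size=img.shape)

# Classical per-pixel features: central-difference gradients and magnitude.
gy, gx = np.gradient(img)
grad_mag = np.hypot(gx, gy)

# Stand-in classifier: a pixel is "streak" if it is cold relative to the
# frame mean (a trained random forest would replace this rule).
streak = img < img.mean() - 0.3

# Streak-spacing proxy: mean width of the detected streak per image row.
widths = streak.sum(axis=1)
print(float(widths.mean()))
```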

  13. Graphene Conductance Uniformity Mapping

    DEFF Research Database (Denmark)

    Buron, Jonas Christian Due; Petersen, Dirch Hjorth; Bøggild, Peter

    2012-01-01

    We demonstrate a combination of micro four-point probe (M4PP) and non-contact terahertz time-domain spectroscopy (THz-TDS) measurements for centimeter-scale quantitative mapping of the sheet conductance of large-area chemical vapor deposited graphene films. Dual configuration M4PP measurements, demonstrated on graphene for the first time, provide valuable statistical insight into the influence of microscale defects on the conductance, while THz-TDS has potential as a fast, non-contact metrology method for mapping of the spatially averaged nanoscopic conductance on wafer-scale graphene with scan times … , dominating the microscale conductance of the investigated graphene film…

  14. A Uniform Approach to Type Theory

    Science.gov (United States)

    1989-01-01

    logical and statistical techniques. There is no comprehensive survey on implementation issues. Some partial aspects are described in… U. de Paris (1930). In: Ecrits logiques de Jacques Herbrand, PUF Paris (1968). [71] C. M. Hoffmann, M. J. O'Donnell. "Programming with Equations

  15. Disordering scaling and generalized nearest-neighbor approach in the thermodynamics of Lennard-Jones systems

    International Nuclear Information System (INIS)

    Vorob'ev, V.S.

    2003-01-01

    We suggest a concept of multiple disordering scaling of the crystalline state. Such a scaling procedure applied to a crystal leads to the liquid and (in the low-density limit) gas states. This approach provides an explanation for the high value of the configurational (common) entropy of liquefied noble gases, which can be deduced from experimental data. We use the generalized nearest-neighbor approach to calculate the free energy and pressure of Lennard-Jones systems after performing this scaling procedure. These thermodynamic functions depend on a single parameter characterizing the disordering. Condensed states of the system (liquid and solid) correspond to small values of this parameter. When this parameter tends to unity, we obtain an asymptotically exact equation of state for a gas involving the second virial coefficient. A reasonable choice of values for the disordering parameter (ranging between zero and unity) allows us to find the lines of coexistence between different phase states in Lennard-Jones systems, which are in good agreement with the available experimental data

  16. Assessment indices for uniform and non-uniform thermal environments

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Different assessment indices for thermal environments were compared and selected for proper assessment of indoor thermal environments. 30 subjects reported their overall thermal sensation, thermal comfort, and thermal acceptability in uniform and non-uniform conditions. The results show that these three assessment indices provide equivalent evaluations in uniform environments. However, overall thermal sensation differs from the other two indices and cannot be used as a proper index for the evaluation of non-uniform environments. The relationship between the percentage and the mean vote for each index is established.
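The study's fitted percentage-versus-mean-vote relationships are not reproduced in the abstract; the classical curve of this kind, against which uniform-environment results are usually compared, is Fanger's PPD-PMV relation from ISO 7730. The sketch below evaluates that standard formula, not the study's own fit.

```python
import numpy as np

def ppd(pmv):
    """Predicted Percentage of Dissatisfied as a function of the
    Predicted Mean Vote (Fanger's relation, ISO 7730)."""
    pmv = np.asarray(pmv, dtype=float)
    return 100.0 - 95.0 * np.exp(-(0.03353 * pmv**4 + 0.2179 * pmv**2))

votes = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
dissatisfied = ppd(votes)   # minimum of 5% at a neutral vote of 0
```

The curve is symmetric about neutrality and never drops below 5% dissatisfied, which is why a mean-vote index alone can mislead in non-uniform environments where local discomfort does not average out.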

  17. Technical and scale efficiency in public and private Irish nursing homes - a bootstrap DEA approach.

    Science.gov (United States)

    Ni Luasa, Shiovan; Dineen, Declan; Zieba, Marta

    2016-10-27

    This article provides methodological and empirical insights into the estimation of technical efficiency in the nursing home sector. Focusing on long-stay care and using primary data, we examine technical and scale efficiency in 39 public and 73 private Irish nursing homes by applying an input-oriented data envelopment analysis (DEA). We employ robust bootstrap methods to validate our nonparametric DEA scores and to integrate the effects of potential determinants in estimating the efficiencies. Both the homogenous and two-stage double bootstrap procedures are used to obtain confidence intervals for the bias-corrected DEA scores. Importantly, the application of the double bootstrap approach affords true DEA technical efficiency scores after adjusting for the effects of ownership, size, case-mix, and other determinants such as location, and quality. Based on our DEA results for variable returns to scale technology, the average technical efficiency score is 62 %, and the mean scale efficiency is 88 %, with nearly all units operating on the increasing returns to scale part of the production frontier. Moreover, based on the double bootstrap results, Irish nursing homes are less technically efficient, and more scale efficient than the conventional DEA estimates suggest. Regarding the efficiency determinants, in terms of ownership, we find that private facilities are less efficient than the public units. Furthermore, the size of the nursing home has a positive effect, and this reinforces our finding that Irish homes produce at increasing returns to scale. Also, notably, we find that a tendency towards quality improvements can lead to poorer technical efficiency performance.
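The envelopment-form linear program behind such scores can be sketched with one small LP per nursing home (DMU). The three-home data set below is illustrative, and the study's bootstrap bias-correction is omitted; this is only the conventional DEA estimator the authors start from.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y, vrs=True):
    """Input-oriented DEA efficiency scores, envelopment form.

    X: inputs, shape (m, n_dmus); Y: outputs, shape (s, n_dmus).
    vrs=True adds the convexity constraint sum(lambda) = 1
    (variable returns to scale); vrs=False gives the CRS model.
    """
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                      # minimise theta
        A_ub = np.vstack([np.c_[-X[:, [o]], X],          # sum lam_j x_j <= theta x_o
                          np.c_[np.zeros((s, 1)), -Y]])  # sum lam_j y_j >= y_o
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        A_eq = np.ones((1, n + 1)) if vrs else None
        if vrs:
            A_eq[0, 0] = 0.0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                      b_eq=[1.0] if vrs else None,
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return np.array(scores)

# three toy homes: one input (staff hours), one output (resident days)
X = np.array([[2.0, 4.0, 5.0]])
Y = np.array([[1.0, 3.0, 3.0]])
te_vrs = dea_input_oriented(X, Y, vrs=True)    # [1.0, 1.0, 0.8]
te_crs = dea_input_oriented(X, Y, vrs=False)
scale_eff = te_crs / te_vrs                    # < 1 when off the optimal scale
```

Here the first home is VRS-efficient but scale-inefficient (scale efficiency 2/3), i.e. it sits on the increasing-returns part of the frontier, the situation the abstract reports for most Irish homes.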

  18. Downsampling Non-Uniformly Sampled Data

    Directory of Open Access Journals (Sweden)

    Fredrik Gustafsson

    2007-10-01

    Full Text Available Decimating a uniformly sampled signal by a factor D involves low-pass anti-alias filtering with normalized cutoff frequency 1/D, followed by picking out every Dth sample. Alternatively, decimation can be done in the frequency domain using the fast Fourier transform (FFT) algorithm, after zero-padding the signal and truncating the FFT. We outline three approaches to decimating non-uniformly sampled signals, all based on interpolation. The interpolation is done in different domains, and the inter-sample behavior does not need to be known. The first approach interpolates the signal to a uniform sampling, after which standard decimation can be applied. The second interpolates a continuous-time convolution integral that implements the anti-alias filter, after which every Dth sample can be picked out. The third, frequency-domain approach computes an approximate Fourier transform, after which truncation and the IFFT give the desired result. Simulations indicate that the second approach is particularly useful. A thorough analysis is therefore performed for this case, under the assumption that the non-uniformly distributed sampling instants are generated by a stochastic process.
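The first of the three approaches (interpolate onto a uniform grid, then apply standard decimation) can be sketched as follows; the test signal and the random sampling process are illustrative.

```python
import numpy as np
from scipy.signal import decimate

rng = np.random.default_rng(0)

# non-uniform sampling instants of a 0.2 Hz sine over 10 s
t = np.sort(rng.uniform(0.0, 10.0, 2000))
x = np.sin(2 * np.pi * 0.2 * t)

# approach 1: interpolate onto a uniform grid, then standard decimation
D = 4
t_uniform = np.linspace(0.0, 10.0, 1000)
x_uniform = np.interp(t_uniform, t, x)       # linear interpolation
x_dec = decimate(x_uniform, D)               # anti-alias filter + every D-th sample
t_dec = t_uniform[::D]
```

`scipy.signal.decimate` applies a zero-phase low-pass filter before downsampling, so `x_dec[i]` estimates the signal at `t_dec[i]`; the interpolation error depends on the largest gap between the non-uniform instants.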

  19. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    Science.gov (United States)

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. Minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions that are present in all subnetworks, as well as reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but all minimum subnetworks satisfying the required properties.
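The flavor of such an MILP can be conveyed on a four-reaction toy network; this sketch is not the authors' formulation, and the network, bounds and big-M constant are illustrative. Binary indicators y_i switch reactions on, coupling constraints v_i ≤ M·y_i tie fluxes to the indicators, and the objective minimizes the number of active reactions subject to steady state and a growth requirement (uses `scipy.optimize.milp`, available in SciPy 1.9+).

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# toy network: R1: -> A, R2: A -> B, R3: A -> B (alternative), R4: B -> (growth)
S = np.array([[1.0, -1.0, -1.0, 0.0],   # metabolite A
              [0.0, 1.0, 1.0, -1.0]])   # metabolite B
n = S.shape[1]
M = 1000.0                               # big-M flux upper bound (illustrative)

# decision variables: fluxes v_1..v_n, then binary indicators y_1..y_n
c = np.r_[np.zeros(n), np.ones(n)]       # minimise the number of active reactions

steady_state = LinearConstraint(np.hstack([S, np.zeros_like(S)]), 0.0, 0.0)
coupling = LinearConstraint(np.hstack([np.eye(n), -M * np.eye(n)]),
                            -np.inf, 0.0)                # v_i - M*y_i <= 0
growth = LinearConstraint(np.eye(1, 2 * n, 3), 1.0, np.inf)  # v_4 >= 1

res = milp(c, constraints=[steady_state, coupling, growth],
           integrality=np.r_[np.zeros(n), np.ones(n)],
           bounds=Bounds(np.zeros(2 * n),
                         np.r_[np.full(n, M), np.ones(n)]))
n_active = int(round(res.fun))   # 3: R1, one of {R2, R3}, and R4
```

Two distinct minimum subnetworks exist ({R1, R2, R4} and {R1, R3, R4}); enumerating all of them, as the paper does, requires iteratively excluding found solutions with additional cuts.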

  20. A multi-scale spatial approach to address environmental effects of small hydropower development.

    Science.gov (United States)

    McManamay, Ryan A; Samu, Nicole; Kao, Shih-Chieh; Bevelhimer, Mark S; Hetrick, Shelaine C

    2015-01-01

    Hydropower development continues to grow worldwide in developed and developing countries. While the ecological and physical responses to dam construction have been well documented, translating this information into planning for hydropower development is extremely difficult. Very few studies have conducted environmental assessments to guide site-specific or widespread hydropower development. Herein, we propose a spatial approach for estimating environmental effects of hydropower development at multiple scales, as opposed to individual site-by-site assessments (e.g., environmental impact assessment). Because the complex, process-driven effects of future hydropower development may be uncertain or, at best, limited by available information, we invested considerable effort in describing novel approaches to represent environmental concerns using spatial data and in developing the spatial footprint of hydropower infrastructure. We then use two case studies in the US, one at the scale of the conterminous US and another within two adjoining river basins, to examine how environmental concerns can be identified and related to areas of varying energy capacity. We use combinations of reserve-design planning and multi-metric ranking to visualize tradeoffs among environmental concerns and potential energy capacity. Spatial frameworks, like the one presented, are not meant to replace more in-depth environmental assessments, but to identify information gaps and measure the sustainability of multi-development scenarios so as to inform policy decisions at the basin or national level. Most importantly, the approach should foster discussions among environmental scientists and stakeholders regarding solutions to optimize energy development and environmental sustainability.

  1. Pesticide fate at regional scale: Development of an integrated model approach and application

    Science.gov (United States)

    Herbst, M.; Hardelauf, H.; Harms, R.; Vanderborght, J.; Vereecken, H.

    As a result of agricultural practice, many soils and aquifers are contaminated with pesticides. In order to quantify the side-effects of these anthropogenic impacts on groundwater quality at the regional scale, a process-based, integrated model approach was developed. The Richards-equation-based numerical model TRACE calculates the three-dimensional saturated/unsaturated water flow. For the modeling of regional-scale pesticide transport we linked TRACE with the plant module SUCROS and with 3DLEWASTE, a hybrid Lagrangian/Eulerian approach to solving the convection/dispersion equation. We used measurements, standard methods like pedotransfer functions, or parameters from the literature to derive the model input for the process model. A first-step application of TRACE/3DLEWASTE to the 20 km² test area 'Zwischenscholle' for the period 1983-1993 reveals the behaviour of the pesticide isoproturon. The selected test area is characterised by intense agricultural use and shallow groundwater, resulting in a high vulnerability of the groundwater to pesticide contamination. The model results stress the importance of the unsaturated zone for the occurrence of pesticides in groundwater. Remarkable isoproturon concentrations in groundwater are predicted for locations with thin, layered and permeable soils. For four selected locations we used measured piezometric heads to validate predicted groundwater levels. In general, the model results are consistent and reasonable. Thus the developed integrated model approach is seen as a promising tool for quantifying the impact of agricultural practice on groundwater quality.
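The transport step solved by 3DLEWASTE can be illustrated in one dimension with an explicit upwind finite-difference scheme for the convection-dispersion equation with first-order pesticide decay, ∂c/∂t = D ∂²c/∂x² − u ∂c/∂x − k c. This is a toy stand-in for the hybrid Lagrangian/Eulerian solver, and all parameter values are illustrative, not those of the Zwischenscholle application.

```python
import numpy as np

# grid and (illustrative) transport parameters
nx, dx = 201, 0.1            # nodes, spacing in m
u, D, k = 0.1, 0.005, 0.01   # velocity m/d, dispersion m^2/d, decay 1/d
dt = 0.4                     # d; satisfies dt <= dx/u and dt <= dx^2/(2*D)

c = np.zeros(nx)
c[10] = 1.0                  # initial solute pulse near the surface

for _ in range(250):         # simulate 100 days
    adv = -u * (c[1:-1] - c[:-2]) / dx                   # upwind convection
    disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2    # dispersion
    c[1:-1] += dt * (adv + disp - k * c[1:-1])           # explicit update
    c[0] = 0.0               # fixed boundary concentrations
    c[-1] = 0.0

peak = int(np.argmax(c))     # pulse advected downstream, ~u*t = 10 m
```

With these parameters the scheme is monotone (all update coefficients are non-negative), so concentrations stay non-negative while the pulse advects, spreads and decays.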

  2. A multi-scale approach for high cycle anisotropic fatigue resistance: Application to forged components

    International Nuclear Information System (INIS)

    Milesi, M.; Chastel, Y.; Hachem, E.; Bernacki, M.; Loge, R.E.; Bouchard, P.O.

    2010-01-01

    Forged components exhibit good mechanical strength, particularly in terms of high cycle fatigue properties. This is due to the specific microstructure resulting from large plastic deformation, as in a forging process. The goal of this study is to account for critical phenomena such as the anisotropy of the fatigue resistance in order to perform high cycle fatigue simulations on industrial forged components. Standard high cycle fatigue criteria usually give good results for isotropic behaviors but are not suitable for components with anisotropic features. The aim is to represent this anisotropy explicitly at a scale lower than the process scale and to determine the local coefficients needed to simulate a real case. We developed a multi-scale approach by considering the statistical morphology and mechanical characteristics of the microstructure to represent each element explicitly. From stochastic experimental data, realistic microstructures were reconstructed in order to perform high cycle fatigue simulations on them with different orientations. The meshing was improved by a local refinement of each interface, and simulations were performed on each representative elementary volume. The local mechanical anisotropy is taken into account through the distribution of particles. Fatigue parameters identified at the microscale can then be used at the macroscale on the forged component. The link between these data and the process scale is the fiber vector and the deformation state, used to calculate the global mechanical anisotropy. Numerical results reveal the expected behavior compared to experimental tendencies. We demonstrated numerically the dependence of the endurance limit evolution on the anisotropy direction and the deformation state.

  3. A multi-scaled approach to evaluating the fish assemblage structure within southern Appalachian streams USA.

    Science.gov (United States)

    Kirsch, Joseph; Peterson, James T.

    2014-01-01

    There is considerable uncertainty about the relative roles of stream habitat and landscape characteristics in structuring stream-fish assemblages. We evaluated the relative importance of environmental characteristics on fish occupancy at the local and landscape scales within the upper Little Tennessee River basin of Georgia and North Carolina. Fishes were sampled using a quadrat sample design at 525 channel units within 48 study reaches during two consecutive years. We evaluated species–habitat relationships (local and landscape factors) by developing hierarchical, multispecies occupancy models. Modeling results suggested that fish occupancy within the Little Tennessee River basin was primarily influenced by stream topology and topography, urban land coverage, and channel unit types. Landscape scale factors (e.g., urban land coverage and elevation) largely controlled the fish assemblage structure at a stream-reach level, and local-scale factors (i.e., channel unit types) influenced fish distribution within stream reaches. Our study demonstrates the utility of a multi-scaled approach and the need to account for hierarchy and the interscale interactions of factors influencing assemblage structure prior to monitoring fish assemblages, developing biological management plans, or allocating management resources throughout a stream system.
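A stripped-down, single-species version of an occupancy model conveys the core idea behind the hierarchical models used above: occupancy probability ψ and detection probability p are estimated jointly from repeat-visit detection histories, so imperfect detection is not mistaken for absence. This sketch omits the multispecies structure and habitat covariates of the study; all numbers are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import comb

rng = np.random.default_rng(42)
n_sites, n_visits = 500, 5
psi_true, p_true = 0.6, 0.4

z = rng.random(n_sites) < psi_true                # latent occupancy state
y = rng.binomial(n_visits, p_true, n_sites) * z   # detections per site

def nll(params):
    """Negative log-likelihood of the single-season occupancy model."""
    psi, p = params
    # occupied-site contribution: psi * Binomial(y | n_visits, p)
    lik = psi * comb(n_visits, y) * p**y * (1 - p)**(n_visits - y)
    # sites with no detections may also simply be unoccupied
    lik = np.where(y > 0, lik, lik + (1 - psi))
    return -np.log(lik).sum()

res = minimize(nll, x0=[0.5, 0.5], bounds=[(1e-6, 1 - 1e-6)] * 2)
psi_hat, p_hat = res.x   # close to the simulated 0.6 and 0.4
```

In the hierarchical multispecies version, ψ and p become logit-linear functions of the local and landscape covariates (channel-unit type, elevation, urban land cover), with species-level random effects.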

  4. LIDAR-based urban metabolism approach to neighbourhood scale energy and carbon emissions modelling

    Energy Technology Data Exchange (ETDEWEB)

    Christen, A. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Geography; Coops, N. [British Columbia Univ., Vancouver, BC (Canada). Dept. of Forest Sciences; Canada Research Chairs, Ottawa, ON (Canada); Kellet, R. [British Columbia Univ., Vancouver, BC (Canada). School of Architecture and Landscape Architecture

    2010-07-01

    A remote sensing technology was used to model neighbourhood scale energy and carbon emissions in a case study set in Vancouver, British Columbia (BC). The study was used to compile and aggregate atmospheric carbon flux, urban form, and energy and emissions data in a replicable neighbourhood-scale approach. The study illustrated methods of integrating diverse emission and uptake processes on a range of scales and resolutions, and benchmarked comparisons of modelled estimates with measured energy consumption data obtained over a 2-year period from a research tower located in the study area. The study evaluated carbon imports, carbon exports and sequestration, and relevant emissions processes. Fossil fuel emissions produced in the neighbourhood were also estimated. The study demonstrated that remote sensing technologies such as LIDAR and multispectral satellite imagery can be an effective means of generating and extracting urban form and land cover data at fine scales. Data from the study were used to develop several emissions reduction and energy conservation scenarios. 6 refs.

  5. FOREWORD: Heterogenous nucleation and microstructure formation—a scale- and system-bridging approach

    Science.gov (United States)

    Emmerich, H.

    2009-11-01

    Scope and aim of this volume. Nucleation and initial microstructure formation play an important role in almost all aspects of materials science [1-5]. The relevance of predicting and controlling nucleation and the subsequent microstructure formation is fully accepted across many areas of modern surface and materials science and technology. One reason is that a large range of material properties, from mechanical ones such as ductility and hardness to electrical and magnetic ones such as electric conductivity and magnetic hardness, depend largely on the specific crystalline structure that forms during nucleation and the subsequent initial microstructure growth. A very demonstrative example of the latter is the so-called bamboo structure of an integrated circuit, in which a parallel alignment of grain boundaries perpendicular to the direction of current flow is most favorable for resistance against electromigration [6]. Despite the great relevance of predicting and controlling nucleation and the subsequent microstructure formation, and despite significant progress in the experimental analysis of the later stages of crystal growth in line with new theoretical computer simulation concepts [7], details of the initial stages of solidification are still far from being satisfactorily understood. This is particularly true when the nucleation event occurs as heterogeneous nucleation. The Priority Program SPP 1296 'Heterogenous Nucleation and Microstructure Formation—a Scale- and System-Bridging Approach' [8], sponsored by the German Research Foundation (DFG), intends to contribute to this open issue via a six-year research program that enables approximately twenty research groups in Germany to work together in an interdisciplinary manner towards this goal. Moreover, it enables the participants to embed themselves in the international community focused on this issue via internationally open joint workshops, conferences and summer schools. An outline of such activities can be found

  6. Ultrasonic transducer design for uniform insonation

    International Nuclear Information System (INIS)

    Harrison, G.H.; Balcer-Kubiczek, E.K.; McCulloch, D.

    1984-01-01

    Techniques used in transducer development for acoustical imaging have been evaluated for the purpose of producing broad, uniform ultrasonic fields from planar radiators. Such fields should be useful in hyperthermia, physical therapy, and ultrasonic bioeffects studies. Fourier inversion of the circ function yielded a source velocity distribution proportional to (P/r)·exp[(−ik/2Z)(2Z² + r²)]·J₁(krP/Z), where r is the radial source coordinate, k is the wave number, and P is the desired radius of uniform insonation at a depth Z in water. This source distribution can be truncated without significantly degrading the solution. A simpler solution consists of exponentially shading the edge of an otherwise uniformly excited disk transducer. This approach was successfully approximated experimentally
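The magnitude of the quoted source distribution can be evaluated directly (the exponential phase factor has unit modulus, so only the (P/r)·J₁ term shapes the amplitude); the frequency, aperture and depth values below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.special import j1

f, c_w = 1.0e6, 1480.0        # drive frequency (Hz), speed of sound in water (m/s)
k = 2 * np.pi * f / c_w       # wave number
P, Z = 0.02, 0.10             # target uniform radius and depth (m), illustrative

r = np.linspace(1e-4, 0.02, 400)               # radial source coordinate (m)
amplitude = (P / r) * np.abs(j1(k * r * P / Z))  # source velocity magnitude
```

The amplitude is finite at the axis (J₁(x) ~ x/2 cancels the 1/r factor) and falls off with radius, which is why the distribution can be truncated to a practical aperture without significantly degrading the field.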

  7. Large-scaled biomonitoring of trace-element air pollution: goals and approaches

    International Nuclear Information System (INIS)

    Wolterbeek, H.T.

    2000-01-01

    Biomonitoring is often used in multi-parameter approaches, especially in larger-scaled surveys. The information obtained may consist of thousands of data points, which can be processed in a variety of mathematical routines to permit a condensed and strongly smoothed presentation of results and conclusions. Although reports on larger-scaled biomonitoring surveys are 'easy-to-read' and often include far-reaching interpretations, it is not possible to obtain an insight into the real meaningfulness or quality of the survey performed. In any set-up, the aims of the survey should be put forward as clearly as possible. Is the survey to provide information on atmospheric element levels, or on total, wet and dry deposition? What should be the time or geographical scale and resolution of the survey? Which elements should be determined? Is the survey to give information on emission or immission characteristics? Answers to all these questions are of paramount importance, not only regarding the choice of the biomonitoring species and the necessary handling/analysis techniques, but also with respect to planning and personnel, and, not to forget, the expected/available means of data interpretation. In considering a survey set-up, rough survey dimensions may follow directly from the goals; in practice, however, they will be governed by other aspects such as available personnel, handling means/capacity, costs, etc. The sense in which, and the extent to which, these factors may cause the survey to drift away from the pre-set goals should receive ample attention: in extreme cases the survey should not be carried out. Bearing in mind the above considerations, the present paper focuses on the goals, quality and approaches of larger-scaled biomonitoring surveys of trace-element air pollution. The discussion comprises practical problems, options, decisions, analytical means, quality measures, and eventual survey results. (author)

  8. Multi-scale approach for predicting fish species distributions across coral reef seascapes.

    Directory of Open Access Journals (Sweden)

    Simon J Pittman

    Full Text Available Two of the major limitations to effective management of coral reef ecosystems are a lack of information on the spatial distribution of marine species and a paucity of data on the interacting environmental variables that drive distributional patterns. Advances in marine remote sensing, together with the novel integration of landscape ecology and advanced niche modelling techniques, provide an unprecedented opportunity to reliably model and map marine species distributions across many kilometres of coral reef ecosystems. We developed a multi-scale approach using three-dimensional seafloor morphology and across-shelf location to predict spatial distributions for five common Caribbean fish species. Seascape topography was quantified from high-resolution bathymetry at five spatial scales (5-300 m radii) surrounding fish survey sites. Model performance and map accuracy were assessed for two high-performing machine-learning algorithms: Boosted Regression Trees (BRT) and Maximum Entropy Species Distribution Modelling (MaxEnt). The three most important predictors were geographical location across the shelf, followed by a measure of topographic complexity. Predictor contribution differed among species, yet rarely changed across spatial scales. BRT provided 'outstanding' model predictions (AUC > 0.9) for three of five fish species. MaxEnt provided 'outstanding' model predictions for two of five species, with the remaining three models considered 'excellent' (AUC = 0.8-0.9). In contrast, MaxEnt spatial predictions were markedly more accurate (92% map accuracy) than BRT (68% map accuracy). We demonstrate that reliable spatial predictions for a range of key fish species can be achieved by modelling the interaction between the geographical location across the shelf and the topographic heterogeneity of seafloor structure. This multi-scale, analytic approach is an important new cost-effective tool to accurately delineate essential fish habitat and support

  9. Serbian translation of the 20-item toronto alexithymia scale: Psychometric properties and the new methodological approach in translating scales

    Directory of Open Access Journals (Sweden)

    Trajanović Nikola N.

    2013-01-01

    Full Text Available Introduction. Since the inception of the alexithymia construct in the 1970s, there has been a continuous effort to improve both its theoretical postulates and its clinical utility through the development, standardization and validation of assessment scales. Objective. The aim of this study was to validate the Serbian translation of the 20-item Toronto Alexithymia Scale (TAS-20) and to propose a new method for translating scales with the property of temporal stability. Methods. The scale was expertly translated by bilingual medical professionals and a linguist, and given to a sample of bilingual participants from the general population who completed both the English and the Serbian version of the scale one week apart. Results. The findings showed that the Serbian version of the TAS-20 had good internal consistency reliability for the total scale (α=0.86) and acceptable reliability for the three factors (α=0.71-0.79). Conclusion. The analysis confirmed the validity and consistency of the Serbian translation of the scale, with observed weaknesses of the factorial structure consistent with studies in other languages. The results also showed that the method of using a self-control bilingual subject is a useful alternative to the back-translation method, particularly for linguistically and structurally sensitive scales, or where a larger sample is not available. This method, dubbed 'forth-translation', could be used to translate psychometric scales measuring properties that have temporal stability over a period of at least several weeks.
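The internal consistency figures reported above (Cronbach's α=0.86 for the total scale) are computed from raw item scores as follows; the simulated data here are illustrative, not the study's responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total score
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))                  # latent score per respondent
items = trait + 0.5 * rng.normal(size=(200, 20))   # 20 noisy items, like the TAS-20
alpha = cronbach_alpha(items)                      # high alpha for consistent items
```

Perfectly parallel items give α = 1, and α falls as item-specific noise grows, which is why the subscale factors (fewer, noisier items) show lower reliability than the total scale.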

  10. Hybrid approaches to nanometer-scale patterning: Exploiting tailored intermolecular interactions

    International Nuclear Information System (INIS)

    Mullen, Thomas J.; Srinivasan, Charan; Shuster, Mitchell J.; Horn, Mark W.; Andrews, Anne M.; Weiss, Paul S.

    2008-01-01

    In this perspective, we explore hybrid approaches to nanometer-scale patterning, where the precision of molecular self-assembly is combined with the sophistication and fidelity of lithography. Two areas - improving existing lithographic techniques through self-assembly and fabricating chemically patterned surfaces - will be discussed in terms of their advantages, limitations, applications, and future outlook. The creation of such chemical patterns enables new capabilities, including the assembly of biospecific surfaces to be recognized by, and to capture analytes from, complex mixtures. Finally, we speculate on the potential impact and upcoming challenges of these hybrid strategies.

  11. Gravitation and Special Relativity from Compton Wave Interactions at the Planck Scale: An Algorithmic Approach

    Science.gov (United States)

    Blackwell, William C., Jr.

    2004-01-01

    In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.

  12. Scaling strength distributions in quasi-brittle materials from micro- to macro-scales: A computational approach to modeling Nature-inspired structural ceramics

    International Nuclear Information System (INIS)

    Genet, Martin; Couegnat, Guillaume; Tomsia, Antoni P.; Ritchie, Robert O.

    2014-01-01

    This paper presents an approach to predict the strength distribution of quasi-brittle materials across multiple length-scales, with emphasis on Nature-inspired ceramic structures. It permits the computation of the failure probability of any structure under any mechanical load, solely based on considerations of the microstructure and its failure properties by naturally incorporating the statistical and size-dependent aspects of failure. We overcome the intrinsic limitations of single periodic unit-based approaches by computing the successive failures of the material components and associated stress redistributions on arbitrary numbers of periodic units. For large size samples, the microscopic cells are replaced by a homogenized continuum with equivalent stochastic and damaged constitutive behavior. After establishing the predictive capabilities of the method, and illustrating its potential relevance to several engineering problems, we employ it in the study of the shape and scaling of strength distributions across differing length-scales for a particular quasi-brittle system. We find that the strength distributions display a Weibull form for samples of size approaching the periodic unit; however, these distributions become closer to normal with further increase in sample size before finally reverting to a Weibull form for macroscopic sized samples. In terms of scaling, we find that the weakest link scaling applies only to microscopic, and not macroscopic scale, samples. These findings are discussed in relation to failure patterns computed at different size-scales. (authors)
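The weakest-link scaling referenced above has a compact Monte Carlo check: if each of n links (microscopic volume elements) has Weibull(m, σ₀) strength, the strength of the chain is the minimum, which is again Weibull with shape m and scale σ₀·n^(−1/m). The parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
m, sigma0 = 5.0, 100.0          # Weibull modulus and scale of one link
n_links, n_samples = 16, 20000  # links per sample, Monte Carlo draws

links = sigma0 * rng.weibull(m, size=(n_samples, n_links))
strength = links.min(axis=1)    # the weakest link sets the sample strength

# weakest-link prediction: scale shrinks by n_links**(-1/m)
predicted_scale = sigma0 * n_links ** (-1.0 / m)
# characteristic strength = 63.2nd percentile of the minima
observed_scale = np.quantile(strength, 1.0 - np.exp(-1.0))
```

This is precisely the scaling the paper finds to hold only for microscopic samples; for macroscopic samples the stress redistribution after partial failures breaks the independent-link assumption.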

  13. Perceptually Uniform Motion Space.

    Science.gov (United States)

    Birkeland, Asmund; Turkay, Cagatay; Viola, Ivan

    2014-11-01

    Flow data is often visualized by animated particles inserted into a flow field. The velocity of a particle on the screen is typically linearly scaled by the velocities in the data. However, the perception of velocity magnitude in animated particles is not necessarily linear. We present a study on how different parameters affect relative motion perception. We have investigated the impact of four parameters. The parameters consist of speed multiplier, direction, contrast type and the global velocity scale. In addition, we investigated if multiple motion cues, and point distribution, affect the speed estimation. Several studies were executed to investigate the impact of each parameter. In the initial results, we noticed trends in scale and multiplier. Using the trends for the significant parameters, we designed a compensation model, which adjusts the particle speed to compensate for the effect of the parameters. We then performed a second study to investigate the performance of the compensation model. From the second study we detected a constant estimation error, which we adjusted for in the last study. In addition, we connect our work to established theories in psychophysics by comparing our model to a model based on Stevens' Power Law.
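A compensation model of the kind described can be sketched with Stevens' power law, under which perceived speed grows as vᵃ; inverting the law pre-distorts the on-screen particle speed so that perceived speed becomes proportional to the data velocity. The exponent a = 0.8 is an illustrative assumption, not the value fitted in the study.

```python
import numpy as np

A = 0.8  # illustrative Stevens exponent for motion-speed perception

def perceived(v, a=A):
    """Stevens' power law: perceived speed grows as v**a."""
    return np.asarray(v, dtype=float) ** a

def compensate(v_data, a=A):
    """Pre-distort the on-screen particle speed so that *perceived*
    speed is linear in the data velocity."""
    return np.asarray(v_data, dtype=float) ** (1.0 / a)

v = np.array([0.5, 1.0, 2.0, 4.0])
screen_speed = compensate(v)
# perceived(screen_speed) is proportional to v, so a doubling of the
# data velocity is also perceived as a doubling
```

The study's actual model additionally conditions on direction, contrast type and the global velocity scale, and corrects the constant estimation error found in the follow-up experiment.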

  14. UVIS Flat Field Uniformity

    Science.gov (United States)

    Quijano, Jessica Kim

    2009-07-01

    The stability and uniformity of the low-frequency flat fields {L-flat} of the UVIS detector will be assessed by using multiple-pointing observations of the globular clusters 47 Tucanae {NGC104} and Omega Centauri {NGC5139}, thus imaging moderately dense stellar fields. By placing the same star over different portions of the detector and measuring relative changes in its brightness, it will be possible to determine local variations in the response of the UVIS detector. Based on previous experience with STIS and ACS, it is deemed that a total of 9 different pointings will suffice to provide adequate characterization of the flat field stability in any given band. For each filter to be tested, the baseline consists of 9 pointings in a 3x3 box pattern with dither steps of about 25% of the FOV, or 40.5", in either the x or y direction {useful also for CTE measurements, if needed in the future}. During SMOV, the complement of filters to be tested is limited to the following 6 filters: F225W, F275W, F336W for Omega Cen, and F438W, F606W, and F814W for 47 Tuc. Three long exposures for each target are arranged such that the initial dither position is observed with the appropriate filters for that target within one orbit at a single pointing, so that filter-to-filter differences in the observed star positions can be checked. In addition to the 9 baseline exposures, two sets of short exposures will be taken: a} one short exposure will be taken of Omega Cen with each of the visible filters {F438W, F606W and F814W} in order to check the geometric distortion solution to be obtained with the data from proposal 11444; b} for each target, a single short exposure will be taken with each filter to facilitate the study of the PSF as a function of position on the detector by providing unsaturated images of sparsely spaced bright stars. This proposal corresponds to Activity Description ID WF39. It should execute only after the following proposal has executed: WF21 - 11434

  15. The ESI scale, an ethical approach to the evaluation of seismic hazards

    Science.gov (United States)

    Porfido, Sabina; Nappi, Rosa; De Lucia, Maddalena; Gaudiosi, Germana; Alessio, Giuliana; Guerrieri, Luca

    2015-04-01

    The dissemination of correct information about seismic hazard is an ethical duty of the scientific community worldwide. A proper assessment of an earthquake's severity and impact should not ignore the evaluation of its intensity, taking into account the effects on humans and man-made structures as well as on the natural environment. We illustrate the new macroseismic scale that measures intensity by taking into account the effects of earthquakes on the environment: the ESI 2007 (Environmental Seismic Intensity) scale (Michetti et al., 2007), ratified by INQUA (International Union for Quaternary Research) during the XVII Congress in Cairns (Australia). The ESI scale integrates and completes the traditional macroseismic scales, of which it represents the evolution, allowing the intensity parameter to be assessed even where buildings are absent or damage-based diagnostic elements saturate. Each degree reflects the corresponding strength of an earthquake and the role of ground effects, evaluating the intensity on the basis of the characteristics and size of primary effects (e.g. surface faulting and tectonic uplift/subsidence) and secondary effects (e.g. ground cracks, slope movements, liquefaction phenomena, hydrological changes, anomalous waves, tsunamis, tree shaking, dust clouds and jumping stones). This approach can be considered "ethical" because it helps to define the real scenario of an earthquake, regardless of a country's socio-economic conditions and level of development. Here lies the value and the relevance of macroseismic scales even today, one hundred years after the death of Giuseppe Mercalli, who conceived the homonymous scale for the evaluation of earthquake intensity. For an appropriate mitigation strategy in seismic areas, it is fundamental to consider the role played by seismically induced ground effects, such as active faults (their length and displacement) and secondary effects (the total area affected). With these perspectives, two different cases

  16. Multi-scale approach in numerical reservoir simulation; Uma abordagem multiescala na simulacao numerica de reservatorios

    Energy Technology Data Exchange (ETDEWEB)

    Guedes, Solange da Silva

    1998-07-01

    Advances in petroleum reservoir descriptions have provided an amount of data that cannot be handled directly during numerical simulations. This detailed geological information must be incorporated into a coarser model during multiphase fluid flow simulations by means of some upscaling technique. The most common approach is the use of pseudo relative permeabilities, and the most widely used method is that of Kyte and Berry (1975). In this work, a multi-scale computational model for multiphase flow is proposed that treats the upscaling implicitly, without using pseudo functions. By solving a sequence of local problems on subdomains of the refined scale it is possible to achieve results with a coarser grid without the expensive computations of a fine grid model. The main advantage of this new procedure is that it treats the upscaling step implicitly in the solution process, overcoming some practical difficulties related to the use of traditional pseudo functions. Results of two-dimensional two-phase flow simulations considering homogeneous porous media are presented. Some examples compare the results of this approach with those of the commercial upscaling program PSEUDO, a module of the reservoir simulation software ECLIPSE. (author)

  17. Evaluation of low impact development approach for mitigating flood inundation at a watershed scale in China.

    Science.gov (United States)

    Hu, Maochuan; Sayama, Takahiro; Zhang, Xingqi; Tanaka, Kenji; Takara, Kaoru; Yang, Hong

    2017-05-15

    Low impact development (LID) has attracted growing attention as an important approach for urban flood mitigation. Most studies evaluating LID performance for mitigating floods focus on the changes in peak flow and runoff volume. This paper assessed the performance of LID practices for mitigating flood inundation hazards as retrofitting technologies in an urbanized watershed in Nanjing, China. The findings indicate that LID practices are effective for flood inundation mitigation at the watershed scale, and especially for reducing inundated areas with a high flood hazard risk. Various scenarios of LID implementation levels can reduce total inundated areas by 2%-17% and areas with a high flood hazard level by 6%-80%. Permeable pavement shows better performance than rainwater harvesting in mitigating urban waterlogging. The most efficient scenario combines rainwater harvesting on rooftops with a cistern capacity of 78.5 mm and permeable pavement installed on 75% of non-busy roads and other impervious surfaces. Inundation modeling is an effective approach to obtaining the information necessary to guide decision-making for designing LID practices at watershed scales. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    This monograph presents recent research on the modeling and control of a large nuclear reactor using a three-time-scale approach, written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of the several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear, and of complex structure, not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for the mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller-order model in standard state space form, thus overcoming these difficulties. It further brings in innovative methods of controller design for systems exhibiting the multi-time-scale property, with emphasis on three-time-scale systems.

  19. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Science.gov (United States)

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. The resulting models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.

  20. An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana

    Directory of Open Access Journals (Sweden)

    Niladri Basu

    2015-09-01

    Full Text Available Artisanal and small-scale gold mining (ASGM is growing in many regions of the world including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, and to enable consensus-building and effective translation of scientific findings to stakeholders, help inform policies, and ultimately improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matters in public health, and in particular, environmental and occupational health, is quite limited despite their many benefits. The aim of the current paper was to describe specific activities undertaken and how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow for gold-mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally.

  1. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    Science.gov (United States)

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all this information to improve the ranking performance has become a new and challenging problem. Previous methods utilize only part of such information and attempt to rank graph nodes according to link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit the rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), and then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms algorithms that consider such graph information only partially.
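The SSP method above learns how node and edge features parameterize the random walk; its underlying machinery is PageRank-style propagation with a prior. As a minimal, hedged illustration of that machinery only (not the paper's algorithm), here is plain personalized PageRank by power iteration on a toy graph; the graph, prior, and damping factor are invented for the example.

```python
# Personalized PageRank by power iteration: a sketch of the propagation step
# that semi-supervised PageRank variants build on. Toy data, not the SSP model.

def personalized_pagerank(out_links, prior, damping=0.85, iters=100):
    nodes = list(out_links)
    rank = dict(prior)
    for _ in range(iters):
        # teleport mass goes to the prior distribution
        new = {n: (1 - damping) * prior[n] for n in nodes}
        for n in nodes:
            targets = out_links[n]
            if not targets:  # dangling node: redistribute its mass via the prior
                for m in nodes:
                    new[m] += damping * rank[n] * prior[m]
            else:
                share = damping * rank[n] / len(targets)
                for m in targets:
                    new[m] += share
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
prior = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}
rank = personalized_pagerank(graph, prior)
print(sum(rank.values()))  # ranks remain a probability distribution (~1.0)
```

A feature-parameterized variant would replace the uniform `prior` and the equal `share` per out-link with learned functions of node and edge features, which is the direction the abstract describes.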

  2. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Directory of Open Access Journals (Sweden)

    Marko Budinich

    Full Text Available Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at the community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. The resulting models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.

  3. An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana.

    Science.gov (United States)

    Basu, Niladri; Renne, Elisha P; Long, Rachel N

    2015-09-17

    Artisanal and small-scale gold mining (ASGM) is growing in many regions of the world including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, and to enable consensus-building and effective translation of scientific findings to stakeholders, help inform policies, and ultimately improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matters in public health, and in particular, environmental and occupational health, is quite limited despite their many benefits. The aim of the current paper was to describe specific activities undertaken and how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow for gold-mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally.

  4. Wine consumers’ preferences in Spain: an analysis using the best-worst scaling approach

    Directory of Open Access Journals (Sweden)

    Tiziana de-Magistris

    2014-06-01

    Full Text Available Research on wine consumers’ preferences has largely been explored in the academic literature, and the importance of wine attributes has been measured by rating or ranking scales. However, the most recent literature on wine preferences has applied the best-worst scaling approach to avoid the biased outcomes derived from using rating or ranking scales in surveys. This study investigates premium red wine consumers’ preferences in Spain by applying the best-worst scaling approach. To achieve this goal, a random parameter logit model is applied to assess the impacts of wine attributes on the probability of choosing premium quality red wine, using data from an ad-hoc survey conducted in a medium-sized Spanish city. The results suggest that some wine attributes related to past experience (i.e. it matches food), followed by some related to personal knowledge (i.e. the designation of origin), are valued as the most important, whereas other attributes related to the image of the New World (i.e. label or brand name) are perceived as the least important or indifferent.

  5. A watershed-scale goals approach to assessing and funding wastewater infrastructure.

    Science.gov (United States)

    Rahm, Brian G; Vedachalam, Sridhar; Shen, Jerry; Woodbury, Peter B; Riha, Susan J

    2013-11-15

    Capital needs during the next twenty years for public wastewater treatment, piping, combined sewer overflow correction, and storm-water management are estimated to be approximately $300 billion for the USA. Financing these needs is a significant challenge, as Federal funding for the Clean Water Act has been reduced by 70% during the last twenty years. There is an urgent need for new approaches to assist states and other decision makers to prioritize wastewater maintenance and improvements. We present a methodology for performing an integrated quantitative watershed-scale goals assessment for sustaining wastewater infrastructure. We applied this methodology to ten watersheds of the Hudson-Mohawk basin in New York State, USA that together are home to more than 2.7 million people, cover 3.5 million hectares, and contain more than 36,000 km of streams. We assembled data on 183 POTWs treating approximately 1.5 million m(3) of wastewater per day. For each watershed, we analyzed eight metrics: Growth Capacity, Capacity Density, Soil Suitability, Violations, Tributary Length Impacted, Tributary Capital Cost, Volume Capital Cost, and Population Capital Cost. These metrics were integrated into three goals for watershed-scale management: Tributary Protection, Urban Development, and Urban-Rural Integration. Our results demonstrate that the methodology can be implemented using widely available data, although some verification of data is required. Furthermore, we demonstrate substantial differences in character, need, and the appropriateness of different management strategies among the ten watersheds. These results suggest that it is feasible to perform watershed-scale goals assessment to augment existing approaches to wastewater infrastructure analysis and planning. Copyright © 2013 Elsevier Ltd. All rights reserved.
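The abstract above combines eight watershed metrics into three management goals. One plausible way such an aggregation can work (the paper's exact weighting scheme is not given here) is to min-max normalize each metric across watersheds and then take a weighted average per goal. The watershed names, metric values, and weights below are all invented for illustration; only the metric names come from the abstract.

```python
# Hedged sketch of metric-to-goal aggregation: normalize metrics across
# watersheds, then combine them with per-goal weights. All numbers invented.

def minmax(values):
    """Min-max normalize a list of metric values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

watersheds = ["Mohawk", "Schoharie", "Hoosic"]          # illustrative subset
metrics = {
    "Growth Capacity":           [0.2, 0.8, 0.5],
    "Violations":                [12, 3, 7],
    "Tributary Length Impacted": [150, 40, 90],          # km, invented
}
normalized = {name: minmax(vals) for name, vals in metrics.items()}

# e.g. a "Tributary Protection" goal weighting impact and violations heavily
weights = {"Growth Capacity": 0.2, "Violations": 0.4,
           "Tributary Length Impacted": 0.4}
scores = [sum(weights[m] * normalized[m][i] for m in weights)
          for i in range(len(watersheds))]
ranked = sorted(zip(watersheds, scores), key=lambda t: -t[1])
print(ranked[0][0])  # watershed with the greatest need under this goal
```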

  6. School Uniforms: Esprit de Corps.

    Science.gov (United States)

    Ryan, Rosemary P.; Ryan, Thomas E.

    1998-01-01

    The benefits of school uniforms far outweigh their short-term costs. School uniforms not only keep students safe, but they increase their self-esteem, promote a more positive attitude toward school, lead to improved student behavior, and help blur social-class distinctions. Students are allowed to wear their own political or religious messages,…

  7. Comments on Beckmann's Uniform Reducts

    OpenAIRE

    Cook, Stephen

    2006-01-01

    Arnold Beckmann defined the uniform reduct of a propositional proof system f to be the set of those bounded arithmetical formulas whose propositional translations have polynomial size f-proofs. We prove that the uniform reduct of f + Extended Frege consists of all true bounded arithmetical formulas iff f + Extended Frege simulates every proof system.

  8. Gene prediction in metagenomic fragments: A large scale machine learning approach

    Directory of Open Access Journals (Sweden)

    Morgenstern Burkhard

    2008-04-01

    Full Text Available Abstract Background Metagenomics is an approach to the characterization of microbial genomes via the direct isolation of genomic sequences from the environment without prior cultivation. The amount of metagenomic sequence data is growing fast, while computational methods for metagenome analysis are still in their infancy. In contrast to genomic sequences of single species, which can usually be assembled and analyzed by many available methods, a large proportion of metagenome data remains as unassembled anonymous sequencing reads. One of the aims of all metagenomic sequencing projects is the identification of novel genes. The short length of the reads (Sanger sequencing, for example, yields fragments of 700 bp on average) and the unknown phylogenetic origin of most fragments require approaches to gene prediction that are different from the currently available methods for genomes of single species. In particular, the large size of metagenomic samples requires fast and accurate methods with small numbers of false positive predictions. Results We introduce a novel gene prediction algorithm for metagenomic fragments based on a two-stage machine learning approach. In the first stage, we use linear discriminants for monocodon usage, dicodon usage and translation initiation sites to extract features from DNA sequences. In the second stage, an artificial neural network combines these features with open reading frame length and fragment GC-content to compute the probability that this open reading frame encodes a protein. This probability is used for the classification and scoring of gene candidates. With large scale training, our method provides fast single fragment predictions with good sensitivity and specificity on artificially fragmented genomic DNA. Additionally, this method is able to predict translation initiation sites accurately and distinguishes complete from incomplete genes with high reliability. 
Conclusion Large scale machine learning methods are well-suited for gene
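The two-stage scheme in the abstract can be reduced to a toy sketch: stage one extracts simple sequence features (here just GC content and ORF length, standing in for the codon-usage discriminants), and stage two combines them with a logistic unit, i.e. a one-neuron "network". The weights and bias below are invented for illustration, not trained values from the paper.

```python
# Toy two-stage gene scoring: feature extraction, then a logistic combiner.
# Weights are invented; a real model would train them on labelled fragments.

import math

def gc_content(seq):
    """Stage 1 feature: fraction of G and C bases in the sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def coding_probability(orf_seq, w_gc=4.0, w_len=0.01, bias=-3.0):
    """Stage 2: logistic combination of stage-1 features (toy weights)."""
    z = w_gc * gc_content(orf_seq) + w_len * len(orf_seq) + bias
    return 1.0 / (1.0 + math.exp(-z))

orf = "ATG" + "GCA" * 60 + "TAA"  # a GC-rich 186 bp open reading frame
p = coding_probability(orf)
print(round(p, 3))  # a probability in (0, 1), used to score the candidate
```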

  9. Traffic sign recognition based on a context-aware scale-invariant feature transform approach

    Science.gov (United States)

    Yuan, Xue; Hao, Xiaoli; Chen, Houjin; Wei, Xueye

    2013-10-01

    A new context-aware scale-invariant feature transform (CASIFT) approach is proposed, which is designed for use in traffic sign recognition (TSR) systems. The following issues remain in previous works in which SIFT is used for matching or recognition: (1) SIFT is unable to provide color information; (2) SIFT only focuses on local features while ignoring the distribution of global shapes; (3) selecting the template with the maximum number of matching points as the final result is unstable, especially for images with simple patterns; and (4) SIFT is liable to produce errors when different images share the same local features. In order to resolve these problems, a new CASIFT approach is proposed. The contributions of the work are as follows: (1) color angular patterns are used to provide color-distinguishing information; (2) a CASIFT which effectively combines local and global information is proposed; and (3) a method for computing the similarity between two images is proposed, which focuses on the distribution of the matching points, rather than using the traditional SIFT approach of selecting the template with the maximum number of matching points as the final result. The proposed approach is particularly effective in dealing with traffic signs which have rich colors and varied global shape distributions. Experiments are performed to validate the effectiveness of the proposed approach in TSR systems, and the experimental results are satisfying even for images containing traffic signs that have been rotated, damaged, altered in color, have undergone affine transformations, or were photographed under different weather or illumination conditions.

  10. A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data

    Science.gov (United States)

    Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.

    2014-12-01

    Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth Science observing satellites, and the magnitude of data from climate model output, is predicted to grow into the tens of petabytes, challenging current data analysis paradigms. The same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while

  11. Integrating macro and micro scale approaches in the agent-based modeling of residential dynamics

    Science.gov (United States)

    Saeedi, Sara

    2018-06-01

    With the advancement of computational modeling and simulation (M&S) methods as well as data collection technologies, urban dynamics modeling has improved substantially over the last several decades. Complex urban dynamics processes are most effectively modeled not at the macro-scale, but following a bottom-up approach, by simulating the decisions of individual entities, or residents. Agent-based modeling (ABM) provides the key to a dynamic M&S framework that is able to integrate socioeconomic with environmental models, and to operate at both micro and macro geographical scales. In this study, a multi-agent system is proposed to simulate residential dynamics by considering spatiotemporal land use changes. In the proposed ABM, macro-scale land use change prediction is modeled by an Artificial Neural Network (ANN) and deployed as the agent environment, while micro-scale residential dynamics behaviors are autonomously implemented by household agents. These two levels of simulation interact and jointly drive the urbanization process in an urban area of Tehran, Iran. The model simulates the behavior of individual households in finding ideal locations to dwell. The household agents are divided into three main groups based on their income rank, and they are further classified into different categories based on a number of attributes. These attributes determine the households' preferences for finding new dwellings and change with time. The ABM environment is represented by a land-use map in which the properties of the land parcels change dynamically over the simulation time. The outputs of this model are a set of maps showing the pattern of different groups of households in the city. These patterns can be used by city planners to find optimum locations for building new residential units or adding new services to the city. The simulation results show that combining macro- and micro-level simulation can give full play to the potential of the ABM to understand the driving
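The micro-scale step described above, households choosing a dwelling subject to preferences and constraints, can be sketched in a few lines. The grid values, incomes, and utility function below are invented for illustration; the actual model couples such a step with an ANN-predicted land-use map, which is not reproduced here.

```python
# Minimal household-agent sketch: each agent picks the affordable cell with the
# best (toy) utility of amenity vs. price. All data invented for illustration.

import random

random.seed(1)  # deterministic toy environment
GRID = 5
price = [[random.uniform(0, 1) for _ in range(GRID)] for _ in range(GRID)]
amenity = [[random.uniform(0, 1) for _ in range(GRID)] for _ in range(GRID)]

def choose_dwelling(income, w_amenity=0.7):
    """Return the (row, col) of the best affordable cell for one household."""
    best, best_u = None, float("-inf")
    for r in range(GRID):
        for c in range(GRID):
            if price[r][c] > income:  # affordability constraint
                continue
            u = w_amenity * amenity[r][c] - (1 - w_amenity) * price[r][c]
            if u > best_u:
                best, best_u = (r, c), u
    return best

households = [0.3, 0.6, 0.9]  # income ranks: low / middle / high
locations = [choose_dwelling(inc) for inc in households]
print(locations)
```

In a full simulation this choice step would run every tick, with the `price` and `amenity` layers updated from the macro-scale land-use prediction.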

  12. Data-Driven Approach for Analyzing Hydrogeology and Groundwater Quality Across Multiple Scales.

    Science.gov (United States)

    Curtis, Zachary K; Li, Shu-Guang; Liao, Hua-Sheng; Lusch, David

    2017-08-29

    Recent trends of assimilating water well records into statewide databases provide a new opportunity for evaluating the spatial dynamics of groundwater quality and quantity. However, these datasets are rarely analyzed rigorously to address larger scientific problems because they are massive and of relatively low quality. We develop an approach for utilizing well databases to analyze physical and geochemical aspects of groundwater systems, and apply it to a multiscale investigation of the sources and dynamics of chloride (Cl-) in the near-surface groundwater of the Lower Peninsula of Michigan. Nearly 500,000 static water levels (SWLs) were critically evaluated, extracted, and analyzed to delineate long-term, average groundwater flow patterns using a nonstationary kriging technique at the basin scale (i.e., across the entire peninsula). Two regions identified as major basin-scale discharge zones, the Michigan and Saginaw Lowlands, were further analyzed with regional- and local-scale SWL models. Groundwater valleys ("discharge" zones) and mounds ("recharge" zones) were identified for all models, and the proportions of wells with elevated Cl- concentrations in each zone were calculated, visualized, and compared. Concentrations in discharge zones, where groundwater is expected to flow primarily upwards, are consistently and significantly higher than those in recharge zones. A synoptic sampling campaign in the Michigan Lowlands revealed that concentrations generally increase with depth, a trend noted in previous studies of the Saginaw Lowlands. These strong, consistent SWL and Cl- distribution patterns across multiple scales suggest that a deep source (i.e., Michigan brines) is the primary cause of the elevated chloride concentrations observed in discharge areas across the peninsula. © 2017, National Ground Water Association.

  13. Mean-cluster approach indicates cell sorting time scales are determined by collective dynamics

    Science.gov (United States)

    Beatrici, Carine P.; de Almeida, Rita M. C.; Brunnet, Leonardo G.

    2017-03-01

    Cell migration is essential to cell segregation, playing a central role in tissue formation, wound healing, and tumor evolution. Considering random mixtures of two cell types, it is still not clear which cell characteristics define clustering time scales. The mass of diffusing clusters merging with one another is expected to grow as t^(d/(d+2)) when the diffusion constant scales with the inverse of the cluster mass. Cell segregation experiments deviate from that behavior. Explanations for this could arise from specific microscopic mechanisms or from collective effects typical of active matter. Here we consider a power law connecting the diffusion constant and cluster mass to propose an analytic approach to modeling cell segregation where we explicitly take into account finite-size corrections. The results are compared with active-matter model simulations and experiments available in the literature. To investigate the role played by different mechanisms we considered different hypotheses describing cell-cell interaction: the differential adhesion hypothesis and the different velocities hypothesis. We find that the simulations yield normal diffusion for long time intervals. Analytic and simulation results show that (i) cluster evolution clearly tends to a scaling regime, disrupted only at finite-size limits; (ii) cluster diffusion is greatly enhanced by cell collective behavior, such that, for a high enough tendency to follow neighbors, cluster diffusion may become independent of cluster size; (iii) the scaling exponent for cluster growth depends only on the mass-diffusion relation, not on the detailed local segregation mechanism. These results apply to active matter systems in general and, in particular, the mechanisms found underlying the increase in cell sorting speed certainly have deep implications in biological evolution as a selection mechanism.
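Point (iii) above, that the growth exponent depends only on the mass-diffusion relation, follows from the standard coarsening argument: for clusters with diffusion constant D ~ M^(-alpha) merging in d dimensions, combining the diffusive spread l^2 ~ D*t with the cluster size l^d ~ M gives M(t) ~ t^(d/(alpha*d + 2)). Setting alpha = 1 recovers the t^(d/(d+2)) law quoted in the abstract, while alpha = 0 (size-independent diffusion, the collectively enhanced regime) gives the faster t^(d/2) growth.

```python
# Growth exponent z in M(t) ~ t^z for diffusion-limited cluster coalescence
# with D ~ M^(-alpha) in d dimensions, from l^2 ~ D*t and M ~ l^d.

def growth_exponent(d, alpha):
    """Exponent z = d / (alpha*d + 2)."""
    return d / (alpha * d + 2)

print(growth_exponent(2, 1))  # classical case in 2D: 0.5, i.e. t^(d/(d+2))
print(growth_exponent(2, 0))  # size-independent diffusion in 2D: 1.0
```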

  14. Solution approach for a large scale personnel transport system for a large company in Latin America

    Energy Technology Data Exchange (ETDEWEB)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-07-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed against the results of an exact algorithm, and the two remained very close. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm still provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is distinct from that of other regions in the world. The general layout of the large cities in this region includes a small, usually antique, town center and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.

  15. Solution approach for a large scale personnel transport system for a large company in Latin America

    International Nuclear Information System (INIS)

    Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis

    2017-01-01

    The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed against the results of an exact algorithm, and the two remained very close. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm still provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is distinct from that of other regions in the world. The general layout of the large cities in this region includes a small, usually antique, town center and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.

  16. Solution approach for a large scale personnel transport system for a large company in Latin America

    Directory of Open Access Journals (Sweden)

    Eduardo-Arturo Garzón-Garnica

    2017-10-01

    Full Text Available Purpose: The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed against the results of an exact algorithm, and the two remained very close. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm still provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is distinct from that of other regions in the world. The general layout of the large cities in this region includes a small, usually antique, town center and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed an improved routing plan specific to the requirements of the region to be obtained.
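
The abstract does not detail the heuristic itself; the following is a hypothetical greedy route-construction sketch consistent with the one rule it does state, namely that each route is seeded at the point farthest from the destination center. All names and the unit-demand assumption are illustrative.

```python
import math

def nearest_neighbor_routes(depot, stops, capacity):
    """Greedy VRP sketch: seed each route at the stop farthest from the
    depot, then extend it with the nearest unvisited stop until the
    vehicle capacity (max stops per route, unit demand) is reached.
    `depot` and `stops` are (x, y) points in arbitrary units."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unvisited = list(stops)
    routes = []
    while unvisited:
        # Seed at the farthest point from the destination center.
        seed = max(unvisited, key=lambda p: dist(depot, p))
        unvisited.remove(seed)
        route = [seed]
        # Extend greedily towards the depot via nearest neighbors.
        while unvisited and len(route) < capacity:
            nxt = min(unvisited, key=lambda p: dist(route[-1], p))
            unvisited.remove(nxt)
            route.append(nxt)
        routes.append(route)
    return routes

routes = nearest_neighbor_routes((0, 0), [(1, 0), (2, 0), (10, 0), (11, 0)], 2)
print(routes)  # [[(11, 0), (10, 0)], [(2, 0), (1, 0)]]
```

In practice such a construction heuristic would be followed by an improvement phase (e.g. 2-opt), and its cost compared against an exact solver on the small instances, as the study describes.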

  17. An Interdisciplinary Approach to Developing Renewable Energy Mixes at the Community Scale

    Science.gov (United States)

    Gormally, Alexandra M.; Whyatt, James D.; Timmis, Roger J.; Pooley, Colin G.

    2013-04-01

    Renewable energy has risen on the global political agenda due to concerns over climate change and energy security. The European Union (EU) currently has a target of 20% renewable energy by the year 2020 and there is increasing focus on the ways in which these targets can be achieved. Here we focus on the UK context, which could be considered to be lagging behind other EU countries in terms of targets and implementation. The UK has a lower overall target of 15% renewable energy by 2020 and in 2011 reached only 3.8% (DUKES, 2012), one of the lowest progressions among EU Member States (European Commission, 2012). The UK's slow progress towards such targets could in part be due to its dependence on its current energy mix and a highly centralised electricity grid system, which does not lend itself easily to the adoption of renewable technologies. Additionally, increasing levels of demand and the need to raise energy awareness are key concerns in terms of achieving energy security in the UK. There is also growing concern from the public about increasing fuel and energy bills. One possible solution to some of these problems could be the adoption of small-scale distributed renewable schemes implemented at the community scale with local ownership or involvement, for example through energy co-operatives. The notion of the energy co-operative is well understood elsewhere in Europe but unfamiliar to many UK residents because of the country's centralised approach to energy provision. There are many benefits associated with engaging in distributed renewable energy systems. In addition to financial benefits, participation may raise energy awareness and can lead to positive responses towards renewable technologies. Here we briefly explore how a mix of small-scale renewables, including wind, hydro-power and solar PV, have been implemented and managed by a small island community in the Scottish Hebrides to achieve over 90% of their electricity needs from renewable

  18. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.

    Science.gov (United States)

    Shen, Lili; Guo, Jiming; Wang, Lei

    2018-06-06

    The network real-time kinematic (RTK) technique can provide centimeter-level real-time positioning solutions and play a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
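
To illustrate the idea of grouping online users under a cluster-radius threshold, here is a deliberately simplified greedy radius clustering, not the paper's SOSC algorithm: each user joins the first existing cluster whose center lies within the radius (the paper's safe threshold is 10 km), otherwise a new cluster is opened. Coordinates are assumed to be kilometres on a local plane.

```python
import math

def radius_cluster(users, radius_km):
    """Greedy spatial clustering sketch (illustrative, not SOSC).
    `users` is a list of (x, y) positions in km; returns cluster
    centers and the member lists, so the server can compute one
    correction per cluster instead of one per user."""
    centers, members = [], []
    for u in users:
        for i, c in enumerate(centers):
            if math.hypot(u[0] - c[0], u[1] - c[1]) <= radius_km:
                members[i].append(u)
                break
        else:
            # No cluster close enough: open a new one centered at u.
            centers.append(u)
            members.append([u])
    return centers, members

centers, members = radius_cluster([(0, 0), (3, 4), (40, 0)], 10)
print(len(centers))  # 2 clusters: (3, 4) is 5 km from (0, 0)
```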

  19. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    Science.gov (United States)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

    This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three-dimensional computer aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method reduces both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefited from this approach, through reduced development and design cycle time, include: creation of analysis models for the aerodynamic discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.

  20. An objective and parsimonious approach for classifying natural flow regimes at a continental scale

    Science.gov (United States)

    Archfield, S. A.; Kennen, J.; Carlisle, D.; Wolock, D.

    2013-12-01

    Hydroecological stream classification--the process of grouping streams by similar hydrologic responses and, thereby, similar aquatic habitat--has been widely accepted and is often one of the first steps towards developing ecological flow targets. Despite its importance, the last national classification of streamgauges was completed about 20 years ago. A new classification of 1,534 streamgauges in the contiguous United States is presented using a novel and parsimonious approach to understand similarity in ecological streamflow response. This new classification approach uses seven fundamental daily streamflow statistics (FDSS) rather than winnowing down an uncorrelated subset from 200 or more ecologically relevant streamflow statistics (ERSS) commonly used in hydroecological classification studies. The results of this investigation demonstrate that the distributions of 33 tested ERSS are consistently different among the classes derived from the seven FDSS. It is further shown that classification based solely on the 33 ERSS generally does a poorer job in grouping similar streamgauges than the classification based on the seven FDSS. This new classification approach has the additional advantages of overcoming some of the subjectivity associated with the selection of the classification variables and provides a set of robust continental-scale classes of US streamgauges.
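
The abstract does not enumerate the seven FDSS, so the following sketch computes a hypothetical set of fundamental daily streamflow statistics of the same flavor (mean, coefficient of variation, lag-1 autocorrelation, zero-flow fraction); in the paper's approach, gauges would then be clustered on the standardized vector of such statistics rather than on hundreds of ERSS.

```python
import statistics as st

def fdss(daily_flows):
    """Illustrative fundamental daily streamflow statistics for one
    gauge (names and selection are assumptions, not the paper's list).
    Returns a small feature vector suitable for standardization and
    clustering across gauges."""
    mu = st.mean(daily_flows)
    sigma = st.pstdev(daily_flows)
    # Lag-1 autocorrelation of the daily series.
    num = sum((a - mu) * (b - mu) for a, b in zip(daily_flows, daily_flows[1:]))
    den = sum((x - mu) ** 2 for x in daily_flows) or 1.0
    zero_frac = sum(1 for x in daily_flows if x == 0) / len(daily_flows)
    return {"mean": mu,
            "cv": sigma / mu if mu else float("nan"),
            "lag1": num / den,
            "zero_frac": zero_frac}

print(fdss([1.0, 2.0, 3.0, 2.0, 1.0]))
```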

  1. A computationally inexpensive CFD approach for small-scale biomass burners equipped with enhanced air staging

    International Nuclear Information System (INIS)

    Buchmayr, M.; Gruber, J.; Hargassner, M.; Hochenauer, C.

    2016-01-01

    Highlights: • Time-efficient CFD model to predict biomass boiler performance. • Boundary conditions for numerical modelling are provided by measurements. • Tars in the primary combustion products were considered. • Simulation results were validated by experiments on a real-scale reactor. • Very good accordance between experimental and simulation results. - Abstract: Computational Fluid Dynamics (CFD) is an upcoming technique for optimization and as a part of the design process of biomass combustion systems. So far, an accurate simulation of biomass combustion can only be provided with high computational effort. This work presents an accurate, time-efficient CFD approach for small-scale biomass combustion systems equipped with enhanced air staging. The model can handle the high amount of biomass tars in the primary combustion product at very low primary air ratios. Gas-phase combustion in the freeboard was performed by the Steady Flamelet Model (SFM) together with a detailed heptane combustion mechanism. The advantage of the SFM is that complex combustion chemistry can be taken into account at low computational effort because only two additional transport equations have to be solved to describe the chemistry in the reacting flow. Boundary conditions for the primary combustion product composition were obtained from the fuel bed by experiments. The fuel bed data were used as the fuel inlet boundary condition for the gas-phase combustion model. The numerical and experimental investigations were performed for different operating conditions and varying wood-chip moisture on a specially designed real-scale reactor. The numerical predictions were validated with experimental results and a very good agreement was found. With the presented approach accurate results can be provided within 24 h using a standard Central Processing Unit (CPU) consisting of six cores. Case studies e.g. for combustion geometry improvement can be realized effectively due to the short calculation

  2. A Concurrent Mixed Methods Approach to Examining the Quantitative and Qualitative Meaningfulness of Absolute Magnitude Estimation Scales in Survey Research

    Science.gov (United States)

    Koskey, Kristin L. K.; Stewart, Victoria C.

    2014-01-01

    This small "n" observational study used a concurrent mixed methods approach to address a void in the literature with regard to the qualitative meaningfulness of the data yielded by absolute magnitude estimation scaling (MES) used to rate subjective stimuli. We investigated whether respondents' scales progressed from less to more and…

  3. A new approach to motion control of torque-constrained manipulators by using time-scaling of reference trajectories

    Energy Technology Data Exchange (ETDEWEB)

    Moreno-Valenzuela, Javier; Orozco-Manriquez, Ernesto [Digital del IPN, CITEDI-IPN, Tijuana, (Mexico)

    2009-12-15

    We introduce a control scheme based on a trajectory-tracking controller and an algorithm for on-line time scaling of the reference trajectories. The reference trajectories are time-scaled according to the measured tracking errors and the detected torque/acceleration saturation. Experiments are presented to illustrate the advantages of the proposed approach

  4. A new approach to motion control of torque-constrained manipulators by using time-scaling of reference trajectories

    International Nuclear Information System (INIS)

    Moreno-Valenzuela, Javier; Orozco-Manriquez, Ernesto

    2009-01-01

    We introduce a control scheme based on a trajectory-tracking controller and an algorithm for on-line time scaling of the reference trajectories. The reference trajectories are time-scaled according to the measured tracking errors and the detected torque/acceleration saturation. Experiments are presented to illustrate the advantages of the proposed approach

  5. Sodium-cutting: a new top-down approach to cut open nanostructures on nonplanar surfaces on a large scale.

    Science.gov (United States)

    Chen, Wei; Deng, Da

    2014-11-11

    We report a new, low-cost and simple top-down approach, "sodium-cutting", to cut and open nanostructures deposited on a nonplanar surface on a large scale. The feasibility of sodium-cutting was demonstrated by successfully cutting open ∼100% of carbon nanospheres into nanobowls on a large scale from Sn@C nanospheres for the first time.

  6. Assessing a Top-Down Modeling Approach for Seasonal Scale Snow Sensitivity

    Science.gov (United States)

    Luce, C. H.; Lute, A.

    2017-12-01

    Mechanistic snow models are commonly applied to assess changes to snowpacks in a warming climate. Such assessments involve a number of assumptions about details of weather at daily to sub-seasonal time scales. Models of season-scale behavior can provide contrast for evaluating behavior at time scales more in concordance with climate warming projections. Such top-down models, however, involve a degree of empiricism, with attendant caveats about the potential of a changing climate to affect calibrated relationships. We estimated the sensitivity of snowpacks from 497 Snowpack Telemetry (SNOTEL) stations in the western U.S. based on differences in climate between stations (spatial analog). We examined the sensitivity of April 1 snow water equivalent (SWE) and mean snow residence time (SRT) to variations in Nov-Mar precipitation and average Nov-Mar temperature using multivariate local-fit regressions. We tested the modeling approach using a leave-one-out cross-validation as well as targeted two-fold non-random cross-validations contrasting, for example, warm vs. cold years, dry vs. wet years, and north vs. south stations. Nash-Sutcliffe Efficiency (NSE) values for the validations were strong for April 1 SWE, ranging from 0.71 to 0.90, and still reasonable, but weaker, for SRT, in the range of 0.64 to 0.81. From these ranges, we exclude validations where the training data do not represent the range of target data. A likely reason for differences in validation between the two metrics is that the SWE model reflects the influence of conservation of mass while using temperature as an indicator of the season-scale energy balance; in contrast, SRT depends more strongly on the energy balance aspects of the problem. Model forms with lower numbers of parameters generally validated better than more complex model forms, with the caveat that pseudoreplication could encourage selection of more complex models when validation contrasts were weak. Overall, the split sample validations
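
The Nash-Sutcliffe Efficiency used to score these validations is a standard skill metric: 1 minus the ratio of the model's squared error to the variance of the observations about their mean, so NSE = 1 is a perfect fit and NSE = 0 means the model is no better than predicting the observed mean.

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 - SSE / sum of squared deviations
    of the observations from their mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_obs = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_obs

obs = [10.0, 20.0, 30.0]
print(nash_sutcliffe(obs, obs))         # 1.0: perfect prediction
print(nash_sutcliffe(obs, [20.0] * 3))  # 0.0: no better than the mean
```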

  7. Preparing laboratory and real-world EEG data for large-scale analysis: A containerized approach

    Directory of Open Access Journals (Sweden)

    Nima eBigdely-Shamlo

    2016-03-01

    Full Text Available Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface (BCI) models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a containerized approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining, and (meta-)analysis. The EEG Study Schema (ESS) comprises three data Levels, each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw data (Data Level 1) to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. The ESS schema and tools are freely available at eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).

  8. Stage I surface crack formation in thermal fatigue: A predictive multi-scale approach

    International Nuclear Information System (INIS)

    Osterstock, S.; Robertson, C.; Sauzay, M.; Aubin, V.; Degallaix, S.

    2010-01-01

    A multi-scale numerical model is developed to predict the formation of stage I cracks under thermal fatigue loading conditions. The proposed approach comprises two distinct calculation steps. Firstly, the number of cycles to micro-crack initiation is determined in individual grains. The adopted initiation model depends on local stress-strain conditions, relative to sub-grain plasticity, grain orientation and grain deformation incompatibilities. Secondly, the formation of surface cracks 2-4 grains long (stage I) is predicted by accounting for micro-crack coalescence in three dimensions. The method described in this paper is applied to a 500-grain aggregate loaded in representative thermal fatigue conditions. Preliminary results provide quantitative insight regarding the position, density, spacing and orientations of stage I surface cracks and the subsequent formation of crack networks. The proposed method is fully deterministic, provided all grain crystallographic orientations and micro-crack linking thresholds are specified. (authors)

  9. A Person-Centered Approach to Financial Capacity Assessment: Preliminary Development of a New Rating Scale.

    Science.gov (United States)

    Lichtenberg, Peter A; Stoltman, Jonathan; Ficker, Lisa J; Iris, Madelyn; Mast, Benjamin

    2015-01-01

    Financial exploitation and financial capacity issues often overlap when a gerontologist assesses whether an older adult's financial decision is an autonomous, capable choice. Our goal is to describe a new conceptual model for assessing financial decisions using principles of person-centered approaches and to introduce a new instrument, the Lichtenberg Financial Decision Rating Scale (LFDRS). We created a conceptual model, convened meetings of experts from various disciplines to critique the model and provide input on content and structure, and selected the final items. We then videotaped administration of the LFDRS to five older adults and had 10 experts provide independent ratings. The LFDRS demonstrated good to excellent inter-rater agreement. The LFDRS is a new tool that allows gerontologists to systematically gather information about a specific financial decision and the decisional abilities in question.

  10. Modelling an industrial anaerobic granular reactor using a multi-scale approach

    DEFF Research Database (Denmark)

    Feldman, Hannah; Flores Alsina, Xavier; Ramin, Pedram

    2017-01-01

    The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1) extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark… simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at: 1) different times; and, 2) reactor heights. Finally

  11. Quantum scaling in many-body systems an approach to quantum phase transitions

    CERN Document Server

    Continentino, Mucio

    2017-01-01

    Quantum phase transitions are strongly relevant in a number of fields, ranging from condensed matter to cold atom physics and quantum field theory. This book, now in its second edition, approaches the problem of quantum phase transitions from a new and unifying perspective. Topics addressed include the concepts of scale and time invariance and their significance for quantum criticality, as well as brand new chapters on superfluid and superconductor quantum critical points, and quantum first order transitions. The renormalisation group in real and momentum space is also established as the proper language to describe the behaviour of systems close to a quantum phase transition. These phenomena introduce a number of theoretical challenges which are of major importance for driving new experiments. Being strongly motivated and oriented towards understanding experimental results, this is an excellent text for graduates, as well as theorists, experimentalists and those with an interest in quantum criticality.

  12. Burnout of pulverized biomass particles in large scale boiler - Single particle model approach

    Energy Technology Data Exchange (ETDEWEB)

    Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero [VTT Technical Research Centre of Finland, Box 1603, 40101 Jyvaeskylae (Finland); Soerensen, Lasse Holst [ReaTech/ReAddit, Frederiksborgsveij 399, Niels Bohr, DK-4000 Roskilde (Denmark); Clausen, Soennik [Risoe National Laboratory, DK-4000 Roskilde (Denmark); Berg, Mogens [ENERGI E2 A/S, A.C. Meyers Vaenge 9, DK-2450 Copenhagen SV (Denmark)

    2010-05-15

    The burning of coal and biomass particles is studied and compared by measurements in an entrained flow reactor and by modelling. The results are applied to study the burning of pulverized biomass in a large-scale utility boiler originally planned for coal. A simplified single-particle approach, where the particle combustion model is coupled with the one-dimensional equation of motion of the particle, is applied to calculate the burnout in the boiler. The particle size of biomass can be much larger than that of coal while still reaching complete burnout, due to its lower density and greater reactivity. The burner location and the trajectories of the particles might be optimised to maximise the residence time and burnout. (author)
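
A back-of-the-envelope way to see the particle-size effect is the classical d²-law for diffusion-limited char burnout (a textbook illustration, not the paper's coupled combustion/motion model): the squared particle diameter shrinks roughly linearly in time, so burnout time scales with the initial diameter squared divided by a fuel-dependent burning-rate constant K. The numbers below are purely illustrative.

```python
def burnout_time_d2(d0_mm: float, K_mm2_per_s: float) -> float:
    """d^2-law estimate: d(t)^2 = d0^2 - K*t, so complete burnout
    takes t_b = d0^2 / K. A larger K (greater reactivity, as for
    biomass char) offsets a larger initial diameter d0."""
    return d0_mm ** 2 / K_mm2_per_s

# With equal K, a 1 mm particle needs 4x the residence time of a 0.5 mm one:
print(burnout_time_d2(1.0, 0.01) / burnout_time_d2(0.5, 0.01))  # 4.0
# A 4x larger K would restore the same burnout time at double the diameter.
```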

  13. A semiparametric graphical modelling approach for large-scale equity selection.

    Science.gov (United States)

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
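
The rank-based bridge underlying such elliptical-copula graphical models is standard: for elliptical distributions the latent correlation can be recovered from Kendall's tau via rho = sin(pi * tau / 2). The sketch below shows that transform with a naive O(n²) tau (an illustration of the principle, not the paper's regularized estimator).

```python
import math
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau rank correlation (naive O(n^2) sketch, no ties)."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        concordant += s > 0
        discordant += s < 0
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs

def latent_correlation(x, y):
    """Sine transform mapping Kendall's tau to the latent Gaussian
    correlation of an elliptical copula: rho = sin(pi * tau / 2).
    Robust to monotone marginal transforms, unlike Pearson correlation."""
    return math.sin(math.pi * kendall_tau(x, y) / 2)

print(latent_correlation([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
```

A graphical-model estimator would assemble these pairwise latent correlations into a matrix and apply regularized inverse-covariance estimation to find near-independent stocks.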

  14. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    Science.gov (United States)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Adaptive behaviour towards flood risk reduction and the interaction between governments, insurers, and individuals have hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed, including agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
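
A minimal sketch of the kind of household decision rule such an agent-based model might contain (a stylized illustration with made-up numbers, not the paper's calibrated behaviour model): a household invests in a protective measure when the expected avoided flood damage exceeds the measure's annualized cost, which is also the lever through which a government subsidy or insurance premium discount can tip the decision.

```python
def adopts_measure(p_flood, damage, annual_cost, damage_reduction,
                   subsidy=0.0):
    """Stylized expected-value rule for one household agent:
    adopt if p_flood * damage * damage_reduction exceeds the
    annualized cost net of any subsidy. All inputs are per year."""
    expected_benefit = p_flood * damage * damage_reduction
    return expected_benefit > annual_cost - subsidy

# 1-in-100-year flood, 100k damage, measure avoids 40% of it:
print(adopts_measure(0.01, 100_000, 300, 0.4))               # True  (400 > 300)
print(adopts_measure(0.01, 100_000, 500, 0.4))               # False (400 < 500)
print(adopts_measure(0.01, 100_000, 500, 0.4, subsidy=150))  # True  (400 > 350)
```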

  15. Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions

    Science.gov (United States)

    Soltani, S. S.; Cvetkovic, V.; Destouni, G.

    2017-12-01

    The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion on two levels of morphological and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully-integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than the zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow
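
A common building block for such Peclet-number-based pathway models, shown here as an illustration since the abstract does not give the paper's exact formulation, is the inverse-Gaussian travel-time density for one-dimensional advection-dispersion along a pathway of mean advective travel time tau:

```python
import math

def travel_time_pdf(t, tau_mean, peclet):
    """Inverse-Gaussian first-passage density for 1-D advection-
    dispersion: f(t) = sqrt(Pe*tau/(4*pi*t^3)) *
    exp(-Pe*(t - tau)^2 / (4*t*tau)). Higher Peclet numbers
    (advection-dominated transport) concentrate mass near t = tau;
    the catchment-scale distribution would superpose such densities
    over the topographic pathway-length distribution."""
    if t <= 0:
        return 0.0
    return (math.sqrt(peclet * tau_mean / (4 * math.pi * t ** 3))
            * math.exp(-peclet * (t - tau_mean) ** 2 / (4 * t * tau_mean)))

# Density peaks near the mean travel time and decays in the tail:
print(travel_time_pdf(1.0, 1.0, 10.0) > travel_time_pdf(3.0, 1.0, 10.0))  # True
```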

  16. Development and Psychometric Evaluation of the School Bullying Scales: A Rasch Measurement Approach

    Science.gov (United States)

    Cheng, Ying-Yao; Chen, Li-Ming; Liu, Kun-Shia; Chen, Yi-Ling

    2011-01-01

    The study aims to develop three school bullying scales--the Bully Scale, the Victim Scale, and the Witness Scale--to assess secondary school students' bullying behaviors, including physical bullying, verbal bullying, relational bullying, and cyber bullying. The items of the three scales were developed from the viewpoints of bullies, victims, and…

  17. Fractal and multifractal approaches for the analysis of crack-size dependent scaling laws in fatigue

    Energy Technology Data Exchange (ETDEWEB)

    Paggi, Marco [Politecnico di Torino, Department of Structural Engineering and Geotechnics, Corso Duca degli Abruzzi 24, 10129 Torino (Italy)], E-mail: marco.paggi@polito.it; Carpinteri, Alberto [Politecnico di Torino, Department of Structural Engineering and Geotechnics, Corso Duca degli Abruzzi 24, 10129 Torino (Italy)

    2009-05-15

    The enhanced ability to detect and measure very short cracks, along with a great interest in applying fracture mechanics formulae to smaller and smaller crack sizes, has pointed out the so-called anomalous behavior of short cracks with respect to their longer counterparts. The crack-size dependencies of both the fatigue threshold and the Paris constant C are only two notable examples of these anomalous scaling laws. In this framework, a unified theoretical model seems to be missing and the behavior of short cracks can still be considered an open problem. In this paper, we propose a critical reexamination of the fractal models for the analysis of crack-size effects in fatigue. The limitations of each model are highlighted and removed. Finally, a new generalized theory based on fractal geometry is proposed, which permits a consistent interpretation of the short-crack-related anomalous scaling laws within a unified theoretical formulation. This approach is then used to interpret relevant experimental data on the crack-size dependence of the fatigue threshold in metals.
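For context, the classical (crack-size-independent) Paris law that the paper generalizes, da/dN = C(ΔK)^m with ΔK = YΔσ√(πa), can be integrated numerically to estimate fatigue life. The material constants below are hypothetical placeholder values, not data from the paper:

```python
import math

def paris_life(a0, ac, delta_sigma, C, m, Y=1.0, steps=20000):
    """Numerically integrate the Paris law da/dN = C*(dK)**m from initial
    crack size a0 to critical size ac; returns cycles to failure.
    Units: lengths in m, stress range in MPa, C consistent with MPa*sqrt(m)."""
    cycles = 0.0
    da = (ac - a0) / steps
    a = a0
    for _ in range(steps):
        dK = Y * delta_sigma * math.sqrt(math.pi * (a + 0.5 * da))
        cycles += da / (C * dK ** m)   # dN = da / (C * dK^m)
        a += da
    return cycles

# hypothetical steel-like parameters
N = paris_life(a0=1e-3, ac=1e-2, delta_sigma=100.0, C=1e-11, m=3.0)
```

The short-crack anomaly discussed in the abstract is precisely that C and the threshold cease to be constants when a0 becomes small, which this plain form does not capture.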

  18. Fractal and multifractal approaches for the analysis of crack-size dependent scaling laws in fatigue

    International Nuclear Information System (INIS)

    Paggi, Marco; Carpinteri, Alberto

    2009-01-01

    The enhanced ability to detect and measure very short cracks, along with a great interest in applying fracture mechanics formulae to smaller and smaller crack sizes, has pointed out the so-called anomalous behavior of short cracks with respect to their longer counterparts. The crack-size dependencies of both the fatigue threshold and the Paris constant C are only two notable examples of these anomalous scaling laws. In this framework, a unified theoretical model seems to be missing and the behavior of short cracks can still be considered an open problem. In this paper, we propose a critical reexamination of the fractal models for the analysis of crack-size effects in fatigue. The limitations of each model are highlighted and removed. Finally, a new generalized theory based on fractal geometry is proposed, which permits a consistent interpretation of the short-crack-related anomalous scaling laws within a unified theoretical formulation. This approach is then used to interpret relevant experimental data on the crack-size dependence of the fatigue threshold in metals.

  19. A computational approach to modeling cellular-scale blood flow in complex geometry

    Science.gov (United States)

    Balogh, Peter; Bagchi, Prosenjit

    2017-04-01

    We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.

  20. Multi-scale approach of plasticity mechanisms in irradiated austenitic steels

    International Nuclear Information System (INIS)

    Nogaret, Th.

    2007-12-01

    Plasticity in irradiated metals is characterized by the localization of deformation in clear, defect-free bands formed by dislocation passage. We investigated clear band formation using a multi-scale approach. Molecular dynamics simulations show that screw dislocations mainly unfault and absorb the defects as helical turns, are strongly pinned by these helical turns, and are re-emitted in new glide planes when they unpin, whereas edge dislocations mainly shear the defects at moderate stresses and can drag the helical turns. The interaction mechanisms were implemented in a discrete dislocation dynamics code in order to study clear band formation at the micron scale. As dislocations are emitted from grain boundaries, we consider a dislocation source located on a box border that emits dislocations when the dislocation nucleation stress is reached. The hardening was found to be mainly due to the screw dislocations, which are strongly pinned by helical turns. Edge dislocations are less pinned and glide over long distances, leaving long screw dislocation segments behind. As more dislocations are emitted, screw dislocation pile-ups form, which permits the unpinning of the screw dislocations. They unpin by activating dislocation segments in new glide planes, which broadens the clear band. When the segments activate, they create edge parts that sweep the screw dislocation lines by dragging the super-jogs away towards the box borders, where they accumulate; this clears the band. (author)

  1. Solving Large-Scale TSP Using a Fast Wedging Insertion Partitioning Approach

    Directory of Open Access Journals (Sweden)

    Zuoyong Xiang

    2015-01-01

    Full Text Available A new partitioning method, called Wedging Insertion, is proposed for solving the large-scale symmetric Traveling Salesman Problem (TSP). The idea of the proposed algorithm is to cut a TSP tour into four segments by the nodes' coordinates (not by rectangles, as in Strip, FRP, and Karp). Each node, apart from four particular nodes, is located in exactly one segment, and no segment twists around another. After the partitioning process, the algorithm applies a traditional construction method, the insertion method, to each segment to improve the quality of the tour, and then connects the starting node and ending node of each segment to obtain the complete tour. To test the performance of the proposed algorithm, we conduct experiments on various TSPLIB instances. The experimental results show that the proposed algorithm is more efficient for solving large-scale TSPs: it markedly reduces running time while losing only about 10% of solution quality.
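The per-segment construction step can be illustrated with a generic cheapest-insertion heuristic; this is a textbook sketch, not the authors' exact implementation:

```python
import math

def cheapest_insertion(points):
    """Cheapest-insertion construction heuristic: grow a tour by inserting,
    at each step, the city whose insertion increases tour length the least
    (the kind of per-segment construction step used after partitioning)."""
    d = lambda p, q: math.dist(p, q)
    tour = [0, 1] if len(points) > 1 else [0]
    rest = set(range(2, len(points)))
    while rest:
        best = None  # (extra cost, city, insertion position)
        for c in rest:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                cost = (d(points[a], points[c]) + d(points[c], points[b])
                        - d(points[a], points[b]))
                if best is None or cost < best[0]:
                    best = (cost, c, i + 1)
        _, c, pos = best
        tour.insert(pos, c)
        rest.remove(c)
    return tour

pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]
tour = cheapest_insertion(pts)
```

In the partitioning scheme, this construction would run independently on each of the four segments before the segment endpoints are reconnected.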

  2. Using scale and feather traits for module construction provides a functional approach to chicken epidermal development.

    Science.gov (United States)

    Bao, Weier; Greenwold, Matthew J; Sawyer, Roger H

    2017-11-01

    Gene co-expression network analysis has been a research method widely used in systematically exploring gene function and interaction. Using the Weighted Gene Co-expression Network Analysis (WGCNA) approach to construct a gene co-expression network using data from a customized 44K microarray transcriptome of chicken epidermal embryogenesis, we have identified two distinct modules that are highly correlated with scale or feather development traits. Signaling pathways related to feather development were enriched in the traditional KEGG pathway analysis and functional terms relating specifically to embryonic epidermal development were also enriched in the Gene Ontology analysis. Significant enrichment annotations were discovered from customized enrichment tools such as Modular Single-Set Enrichment Test (MSET) and Medical Subject Headings (MeSH). Hub genes in both trait-correlated modules showed strong specific functional enrichment toward epidermal development. Also, regulatory elements, such as transcription factors and miRNAs, were targeted in the significant enrichment result. This work highlights the advantage of this methodology for functional prediction of genes not previously associated with scale- and feather trait-related modules.

  3. An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs

    Science.gov (United States)

    Haaf, Ezra; Barthel, Roland

    2016-04-01

    When assessing hydrogeological conditions at the regional scale, the analyst is often confronted with uncertainty about structures, inputs and processes while having to base inference on scarce and patchy data. Haaf and Barthel (2015) proposed a concept for handling this predicament by developing a groundwater systems classification framework, in which information is transferred from similar, well-explored and better-understood systems to poorly described ones. The concept is based on the central hypothesis that similar systems react similarly to the same inputs, and vice versa. It is conceptually related to PUB (Prediction in Ungauged Basins), where organization of systems and processes by quantitative methods is intended and used to improve understanding and prediction. Furthermore, using the framework it is expected that regional conceptual and numerical models can be checked or enriched by ensemble-generated data from neighborhood-based estimators. In a first step, groundwater hydrographs from a large dataset in Southern Germany are compared in an effort to identify structural similarity in groundwater dynamics. A number of approaches to grouping hydrographs, mostly based on a similarity measure, can be found in the literature, though they have previously only been used in local-scale studies. These are tested alongside different global feature extraction techniques. The resulting classifications are then compared to a visual, "expert assessment"-based classification which serves as a reference. A ranking of the classification methods is carried out and differences are shown. Selected groups from the classifications are related to geological descriptors. Here we present the most promising results from a comparison of classifications based on series correlation, different series distances and series features, such as the coefficients of the discrete Fourier transform and the intrinsic mode functions of empirical mode decomposition. Additionally, we show examples of classes
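A minimal illustration of correlation-based hydrograph grouping of the kind compared here, using 1 − Pearson correlation as the series distance and a greedy threshold grouping; the threshold and the toy series are invented for illustration:

```python
import math

def corr_distance(x, y):
    """1 - Pearson correlation: a shape-based distance between two
    equal-length groundwater hydrographs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return 1.0 - cov / (sx * sy)

def group_by_threshold(series, max_dist=0.3):
    """Greedy grouping: assign each hydrograph to the first existing group
    whose representative is within max_dist, else start a new group."""
    groups = []
    for s in series:
        for g in groups:
            if corr_distance(s, g[0]) <= max_dist:
                g.append(s)
                break
        else:
            groups.append([s])
    return groups

h1 = [1, 2, 3, 4, 5, 4, 3, 2]
h2 = [2, 4, 6, 8, 10, 8, 6, 4]   # same shape as h1 -> distance ~ 0
h3 = [5, 4, 3, 2, 1, 2, 3, 4]   # inverted dynamics -> distance ~ 2
groups = group_by_threshold([h1, h2, h3])
```

A correlation distance is scale-invariant, so hydrographs with similar dynamics but different amplitudes (h1 and h2) land in the same group.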

  4. UOBPRM: A uniformly distributed obstacle-based PRM

    KAUST Repository

    Yeh, Hsin-Yi; Thomas, Shawna; Eppstein, David; Amato, Nancy M.

    2012-01-01

    This paper presents a new sampling method for motion planning that can generate configurations more uniformly distributed on C-obstacle surfaces than prior approaches. Here, roadmap nodes are generated from the intersections between C

  5. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexity. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads to power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important

  6. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexities. As a result the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. 
Each established solution is described in the form of a pattern that

  7. Characterizing Synergistic Water and Energy Efficiency at the Residential Scale Using a Cost Abatement Curve Approach

    Science.gov (United States)

    Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.

    2015-12-01

    Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use, and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy- and water-efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures were analyzed in a product-lifetime-adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.
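The construction of a cost abatement curve can be sketched as sorting measures by annualized cost per unit of resource saved and accumulating the savings; the measures and figures below are purely illustrative, not results from the study:

```python
# Each measure: (name, annualized_net_cost_usd, water_saved_m3_per_year)
measures = [
    ("low-flow showerhead",      -15.0,  9.0),   # negative cost = net savings
    ("efficient clothes washer",  25.0, 16.0),
    ("efficient dishwasher",      40.0,  3.0),
    ("solar water heater",        60.0,  0.0),   # saves energy, not water
]

def abatement_curve(measures):
    """Sort measures by cost per unit of water saved (cheapest first) and
    return (name, unit_cost, cumulative_savings) triples forming the curve."""
    priced = [(cost / saved if saved > 0 else float("inf"), name, cost, saved)
              for name, cost, saved in measures]
    curve, cumulative = [], 0.0
    for unit_cost, name, cost, saved in sorted(priced):
        cumulative += saved
        curve.append((name, unit_cost, cumulative))
    return curve

curve = abatement_curve(measures)
```

Reading the curve left to right gives the cheapest sequence of measures to reach any target cumulative reduction; measures with negative unit cost pay for themselves.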

  8. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    Science.gov (United States)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for the quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying at spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time- and labour-intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on estimates of the percent cover of target biota, the typical output of such monitoring programs. We also investigate the effect of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. Increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced, etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and
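The image-versus-point trade-off can be reproduced in a small Monte Carlo sketch: when cover varies between images, spreading effort across more images reduces the standard error more than scoring more points per image. The cover values and sample sizes are invented for illustration:

```python
import random

def estimate_cover(true_covers, n_images, n_points, rng):
    """Score n_points random points in each of n_images images (each with a
    randomly drawn true cover); return the estimated mean percent cover."""
    total = 0.0
    for _ in range(n_images):
        p = rng.choice(true_covers)                       # image-level cover
        hits = sum(rng.random() < p for _ in range(n_points))
        total += hits / n_points
    return total / n_images

def std_error(true_covers, n_images, n_points, reps=400, seed=1):
    """Standard error of the cover estimate over repeated surveys."""
    rng = random.Random(seed)
    ests = [estimate_cover(true_covers, n_images, n_points, rng)
            for _ in range(reps)]
    m = sum(ests) / reps
    return (sum((e - m) ** 2 for e in ests) / (reps - 1)) ** 0.5

covers = [0.05, 0.15, 0.25, 0.35]   # hypothetical between-image variation
se_more_images = std_error(covers, n_images=40, n_points=10)  # 400 pts total
se_more_points = std_error(covers, n_images=10, n_points=40)  # 400 pts total
```

Both designs score the same total number of points, but the between-image variance component shrinks only with the number of images, mirroring the study's conclusion.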

  9. INTEGRATED IMAGING APPROACHES SUPPORTING THE EXCAVATION ACTIVITIES. MULTI-SCALE GEOSPATIAL DOCUMENTATION IN HIERAPOLIS (TK)

    Directory of Open Access Journals (Sweden)

    A. Spanò

    2018-05-01

    Full Text Available The paper focuses on exploring the suitability and applicability of advanced integrated surveying techniques, mainly image-based approaches compared with, and integrated into, range-based ones, developed using cutting-edge solutions tested in the field. The investigated techniques integrate both technological devices for 3D data acquisition and editing and management systems for handling metric models and multi-dimensional data in a geospatial perspective, in order to innovate and speed up the extraction of information during archaeological excavation activities. These factors were tested in the outstanding site of the ancient city of Hierapolis of Phrygia (Turkey), following the 2017 surveying missions, in order to produce high-scale metric deliverables: high-detail Digital Surface Models (DSMs), 3D continuous surface models and high-resolution orthoimage products. In particular, the potential of UAV platforms for low-altitude acquisition in an aerial photogrammetric approach, together with terrestrial panoramic acquisition (Trimble V10 imaging rover), has been investigated through comparison with consolidated Terrestrial Laser Scanning (TLS) measurements. One of the main purposes of the paper is to evaluate the results offered by these technologies used independently and in integrated approaches. A section of the study is in fact specifically dedicated to experimenting with the union of dense clouds from different sensors: dense clouds derived from UAVs have been integrated with terrestrial LiDAR clouds to evaluate their fusion. Different test cases have been considered, representing typical situations encountered in archaeological sites.

  10. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems

    Directory of Open Access Journals (Sweden)

    Lili Shen

    2018-06-01

    Full Text Available The network real-time kinematic (RTK) technique can provide centimeter-level real-time positioning solutions and plays a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in supporting large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we propose a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution across different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) from the data set, whereas the grid approaches are all predefined. The side effects of the clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can safely be used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
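A greedy radius-threshold clustering of user positions, of the general kind described (not the authors' SOSC algorithm itself), can be sketched with a haversine distance and the 10 km threshold reported above; the coordinates are hypothetical:

```python
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    R = 6371.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def cluster_users(users, radius_km=10.0):
    """Greedy online clustering: join the first cluster whose seed position
    is within radius_km, else open a new cluster (seed = first member)."""
    clusters = []                      # each cluster: [seed, member, ...]
    for u in users:
        for c in clusters:
            if haversine_km(u, c[0]) <= radius_km:
                c.append(u)
                break
        else:
            clusters.append([u])
    return clusters

# hypothetical rover positions: two near each other, one ~400 km away
users = [(59.33, 18.06), (59.34, 18.07), (57.71, 11.97)]
clusters = cluster_users(users)
```

On the server side, one set of network RTK corrections would then be generated per cluster rather than per user, which is the source of the computational saving.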

  11. Modelling an industrial anaerobic granular reactor using a multi-scale approach.

    Science.gov (United States)

    Feldman, H; Flores-Alsina, X; Ramin, P; Kjellberg, K; Jeppsson, U; Batstone, D J; Gernaey, K V

    2017-12-01

    The objective of this paper is to show the results of an industrial project dealing with modelling of anaerobic digesters. A multi-scale mathematical approach is developed to describe reactor hydrodynamics, granule growth/distribution and microbial competition/inhibition for substrate/space within the biofilm. The main biochemical and physico-chemical processes in the model are based on the Anaerobic Digestion Model No 1 (ADM1) extended with the fate of phosphorus (P), sulfur (S) and ethanol (Et-OH). Wastewater dynamic conditions are reproduced and data frequency increased using the Benchmark Simulation Model No 2 (BSM2) influent generator. All models are tested using two plant data sets corresponding to different operational periods (#D1, #D2). Simulation results reveal that the proposed approach can satisfactorily describe the transformation of organics, nutrients and minerals, the production of methane, carbon dioxide and sulfide and the potential formation of precipitates within the bulk (average deviation between computer simulations and measurements for both #D1 and #D2 is around 10%). Model predictions suggest a stratified structure within the granule which is the result of: 1) applied loading rates, 2) mass transfer limitations and 3) specific (bacterial) affinity for substrate. Hence, inerts (X_I) and methanogens (X_ac) are situated in the inner zone, and this fraction lowers as the radius increases, favouring the presence of acidogens (X_su, X_aa, X_fa) and acetogens (X_c4, X_pro). Additional simulations show the effects on the overall process performance when operational (pH) and loading (S:COD) conditions are modified. Lastly, the effect of intra-granular precipitation on the overall organic/inorganic distribution is assessed at: 1) different times; and, 2) reactor heights. Finally, the possibilities and opportunities offered by the proposed approach for conducting engineering optimization projects are discussed. Copyright © 2017 Elsevier Ltd.

  12. Prediction and verification of centrifugal dewatering of P. pastoris fermentation cultures using an ultra scale-down approach.

    Science.gov (United States)

    Lopes, A G; Keshavarz-Moore, E

    2012-08-01

    Recent years have seen a dramatic rise in fermentation broth cell densities and a shift to extracellular product expression in microbial cells. As a result, dewatering characteristics during cell separation are of importance, as any liquor trapped in the sediment results in loss of product, and thus a decrease in product recovery. In this study, an ultra scale-down (USD) approach was developed to enable the rapid assessment of the dewatering performance of pilot-scale centrifuges with intermittent solids discharge. The results were then verified at scale for two types of pilot-scale centrifuges: a tubular bowl machine and a disk-stack centrifuge. Initial experiments showed that employing a laboratory-scale centrifugal mimic based on using a comparable feed concentration to that of the pilot-scale centrifuge does not successfully predict the dewatering performance at scale (P-value centrifuge. Initial experiments used Baker's yeast feed suspensions followed by fresh Pichia pastoris fermentation cultures. This work presents a simple and novel USD approach to predict dewatering levels in two types of pilot-scale centrifuges using small quantities of feedstock (centrifuge needs to be operated, reducing the need for repeated pilot-scale runs during early stages of process development. Copyright © 2012 Wiley Periodicals, Inc.

  13. Canopy structure and topography effects on snow distribution at a catchment scale: Application of multivariate approaches

    Directory of Open Access Journals (Sweden)

    Jenicek Michal

    2018-03-01

    Full Text Available Knowledge of snowpack distribution at the catchment scale is important for predicting snowmelt runoff. The objective of this study is to select and quantify the most important factors governing snowpack distribution, with special interest in the role of different canopy structures. We applied a simple distributed sampling design with measurements of snow depth and snow water equivalent (SWE) at the catchment scale. We selected eleven predictors related to the character of specific localities (such as elevation, slope orientation and leaf area index) and to winter meteorological conditions (such as irradiance, sum of positive air temperatures and sum of new snow depth). The forest canopy structure was described using parameters calculated from hemispherical photographs. A degree-day approach was used to calculate melt factors. Principal component analysis, cluster analysis and Spearman rank correlation were applied to reduce the number of predictors and to analyze the measured data. The SWE at forest sites was 40% lower than in open areas, but this value depended on the canopy structure. Snow ablation in large openings was on average almost two times faster than at forest sites. Snow ablation in the forest was 18% faster after forest defoliation (due to the bark beetle). The results from the multivariate analyses showed that leaf area index was a better predictor of the SWE distribution during the accumulation period, while irradiance was the better predictor during the snowmelt period. Despite some uncertainty, parameters derived from hemispherical photographs may replace measured incoming solar radiation if this meteorological variable is not available.
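The degree-day approach mentioned above reduces to M = f · max(T − T0, 0). A minimal sketch with hypothetical melt factors for open and forested sites, chosen only to reflect the roughly two-fold ablation difference reported:

```python
def degree_day_melt(temps_c, melt_factor, threshold_c=0.0):
    """Daily snowmelt (mm/day) from a degree-day model:
    M = f * max(T - T0, 0), with melt factor f in mm per deg C per day."""
    return [melt_factor * max(t - threshold_c, 0.0) for t in temps_c]

daily_t = [-3.0, 0.5, 2.0, 4.5]   # daily mean air temperature (deg C)
open_melt = sum(degree_day_melt(daily_t, melt_factor=4.0))    # open area
forest_melt = sum(degree_day_melt(daily_t, melt_factor=2.0))  # under canopy
```

Calibrating separate melt factors per canopy class (e.g. from the hemispherical-photograph parameters) is how such a model would capture the canopy effect on ablation.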

  14. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows mainly involving the metamorphic geological formations outcropping in the area, triggered by the pluviometric event of 19 June 1996. In recent decades landslide hazard and risk analysis has been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions at a large scale of investigation (>1:10,000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. a slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. This approach has allowed the identification of potential source areas of debris-flow triggering, used to detect precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease in safety conditions for the simulation related to a 75-year return time rainfall event, corresponding to an estimated cumulated daily intensity of 280–330 mm. This value can be considered the hydrological triggering
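The infinite slope model used in the final step has a closed-form safety factor. A sketch of the standard formulation with relative saturated thickness m; the soil parameters below are hypothetical, not the calibrated values from the Vezza basin:

```python
import math

def infinite_slope_fs(c_eff, phi_deg, gamma, z, beta_deg, m, gamma_w=9.81):
    """Infinite slope safety factor:
    FS = [c' + (gamma - m*gamma_w) * z * cos^2(beta) * tan(phi')]
         / [gamma * z * sin(beta) * cos(beta)]
    with c' (kPa), phi' (deg), unit weight gamma (kN/m3), soil depth z (m),
    slope angle beta (deg) and saturation ratio m in [0, 1]."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c_eff + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# hypothetical colluvial soil: c' = 5 kPa, phi' = 32 deg, gamma = 18 kN/m3
fs_dry = infinite_slope_fs(5.0, 32.0, 18.0, 1.5, beta_deg=35.0, m=0.0)
fs_wet = infinite_slope_fs(5.0, 32.0, 18.0, 1.5, beta_deg=35.0, m=1.0)
```

Coupling m to rainfall of a given return period (via the hydrological model) is what turns this cell-by-cell FS computation into the hazard maps described above: cells with FS below 1 for a given event are flagged as potential source areas.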

  15. A New Approach to Adaptive Control of Multiple Scales in Plasma Simulations

    Science.gov (United States)

    Omelchenko, Yuri

    2007-04-01

    A new approach to temporal refinement of kinetic (Particle-in-Cell, Vlasov) and fluid (MHD, two-fluid) simulations of plasmas is presented: Discrete-Event Simulation (DES). DES adaptively distributes CPU resources in accordance with local time scales and enables asynchronous integration of inhomogeneous nonlinear systems with multiple time scales on meshes of arbitrary topologies. This removes computational penalties usually incurred in explicit codes due to the global Courant–Friedrichs–Lewy (CFL) restriction on a time-step size. DES stands apart from multiple time-stepping algorithms in that it requires neither selecting a global synchronization time step nor pre-determining a sequence of time-integration operations for individual parts of the system (local time increments need not bear any integer multiple relations). Instead, elements of a mesh-distributed solution self-adaptively predict and synchronize their temporal trajectories by directly enforcing local causality (accuracy) constraints, which are formulated in terms of incremental changes to the evolving solution. Together with flux-conservative propagation of information, this new paradigm ensures stable and fast asynchronous runs, where idle computation is automatically eliminated. DES is parallelized via a novel Preemptive Event Processing (PEP) technique, which automatically synchronizes elements with similar update rates. In this mode, events with close execution times are projected onto time levels, which are adaptively determined by the program. PEP allows reuse of standard message-passing algorithms on distributed architectures. For optimum accuracy, DES can be combined with adaptive mesh refinement (AMR) techniques for structured and unstructured meshes. Current examples of event-driven models range from electrostatic, hybrid particle-in-cell plasma systems to reactive fluid dynamics simulations. They demonstrate the superior performance of DES in terms of accuracy, speed and robustness.
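The causality-constrained asynchronous stepping described above can be caricatured with a priority queue of per-element events. This toy is not the DES code; the names and the relaxation test problem are invented. Each element advances with its own step, chosen so the local increment stays below a tolerance:

```python
import heapq

def des_run(cells, t_end, rate_of_change, max_delta):
    """Asynchronous, event-driven integration sketch.

    Each cell advances with its own local time step, chosen so that the
    predicted incremental change stays below `max_delta` (a local
    accuracy/causality constraint), instead of a global CFL-limited step.
    """
    times = [0.0] * len(cells)
    queue = [(0.0, i) for i in range(len(cells))]  # (next update time, cell)
    heapq.heapify(queue)
    while queue:
        t, i = heapq.heappop(queue)
        if t >= t_end:
            continue  # this cell is done
        rate = rate_of_change(i, cells[i])
        dt = max_delta / max(abs(rate), 1e-12)  # local accuracy constraint
        dt = min(dt, t_end - t)
        cells[i] += rate * dt  # explicit local update
        times[i] = t + dt
        heapq.heappush(queue, (times[i], i))
    return cells, times

# Two cells relaxing toward zero at rates differing by 100x: the fast
# cell receives ~100x more events, with no global step forced on both.
state, times = des_run([1.0, 1.0], 1.0,
                       lambda i, u: -u if i == 0 else -100.0 * u, 0.05)
```

The real method adds flux-conservative coupling between neighbours and the PEP grouping of near-simultaneous events; the queue-driven control flow is the part sketched here.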

  16. A new approach to inventorying bodies of water, from local to global scale

    Directory of Open Access Journals (Sweden)

    Bartout, Pascal

    2015-12-01

    Full Text Available Having reliable estimates of the number of water bodies on different geographical scales is of great importance to better understand biogeochemical cycles and to tackle the social issues related to the economic and cultural use of water bodies. However, limnological research suffers from a lack of reliable inventories; the available scientific references are predominantly based on water bodies of natural origin, large in size and preferentially located in previously glaciated areas. Artificial, small and randomly distributed water bodies, especially ponds, are usually not inventoried. Following Wetzel’s theory (1990), some authors included them in global inventories by using remote sensing or mathematical extrapolation, but fieldwork on the ground has been done on a very limited amount of territory. These studies have resulted in an explosive increase in the estimated number of water bodies, going from 8.44 million lakes (Meybeck 1995) to 3.5 billion water bodies (Downing 2010). These numbers raise several questions, especially about the methodology used for counting small-sized water bodies and the methodological treatment of spatial variables. In this study, we use inventories of water bodies for Sweden, Finland, Estonia and France to show incoherencies generated by the “global to local” approach. We demonstrate that one universal relationship does not suffice for generating the regional or global inventories of water bodies because local conditions vary greatly from one region to another and cannot be offset adequately by each other. The current paradigm for global estimates of water bodies in limnology, which is based on one representative model applied to different territories, does not produce sufficiently exact global inventories. The step-wise progression from the local to the global scale requires the development of many regional equations based on fieldwork; a specific equation that adequately reflects the actual relationship
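The sensitivity the authors criticise can be illustrated with the Pareto-type size-abundance law often used for global extrapolation, N(>A) = c·A^(−b). With invented coefficients, changing only the exponent fitted in different regions shifts the extrapolated count of small water bodies by an order of magnitude:

```python
def count_above(area_km2, c, b):
    """Pareto-type size-abundance law N(>A) = c * A**(-b).

    c and b are illustrative values, not fitted to any real inventory.
    """
    return c * area_km2 ** (-b)

# Number of water bodies larger than 0.001 km^2 (0.1 ha) under two
# plausible regional exponents, everything else held fixed:
n_low = count_above(1e-3, c=1e4, b=0.8)   # b fitted in one region
n_high = count_above(1e-3, c=1e4, b=1.2)  # b fitted in another region
print(n_high / n_low)  # the two "global" estimates differ ~16-fold
```

This is why the abstract argues for many regional, fieldwork-based equations rather than one universal relationship extrapolated worldwide.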

  17. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    International Nuclear Information System (INIS)

    Engelmann, Christian; Hukerikar, Saurabh

    2017-01-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage and their performance and power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and of the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across
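As a concrete illustration of what one entry in such a pattern catalog describes, here is a minimal sketch of the classic checkpoint/rollback pattern; the class and test workload are invented for illustration, not code from the paper:

```python
import copy

class CheckpointRollback:
    """Minimal checkpoint/rollback resilience pattern.

    Periodically snapshots application state; on a detected fault,
    restores the last snapshot and recomputes only the lost steps.
    """
    def __init__(self, state, interval):
        self.state = state
        self.interval = interval
        self.snapshot = copy.deepcopy(state)

    def run(self, steps, advance, fault_at=None):
        step = 0
        while step < steps:
            if fault_at is not None and step == fault_at:
                # Fault detected: roll back to the last consistent snapshot.
                self.state = copy.deepcopy(self.snapshot)
                step = (step // self.interval) * self.interval
                fault_at = None
                continue
            advance(self.state)
            step += 1
            if step % self.interval == 0:
                self.snapshot = copy.deepcopy(self.state)
        return self.state

# A toy computation: a fault at step 7 only costs the two steps
# recomputed since the checkpoint taken at step 5.
app = CheckpointRollback(state=[], interval=5)
result = app.run(10, advance=lambda s: s.append(len(s)), fault_at=7)
print(result)  # → list(range(10))
```

The catalog's point is that protection coverage (what state is snapshotted) and overhead (the interval) become explicit, tunable parameters once the technique is named as a pattern.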

  18. Effective modelling of percolation at the landscape scale using data-based approaches

    Science.gov (United States)

    Selle, Benny; Lischeid, Gunnar; Huwe, Bernd

    2008-06-01

    Process-based models have been extensively applied to assess the impact of landuse change on water quantity and quality at landscape scales. However, the routine application of those models suffers from large computational efforts, lack of transparency and the requirement of many input parameters. Data-based models such as Feed-Forward Multilayer Perceptrons (MLP) and Classification and Regression Trees (CART) may be used as effective models, i.e. simple approximations of complex process-based models. These data-based approaches can subsequently be applied for scenario analysis and as a transparent management tool, provided climatic boundary conditions and the basic model assumptions of the process-based models do not change dramatically. In this study, we apply MLP, CART and Multiple Linear Regression (LR) to model the spatially distributed and spatially aggregated percolation in soils using weather, groundwater and soil data. The percolation data is obtained via numerical experiments with Hydrus1D. Thus, the complex process-based model is approximated using simpler data-based approaches. The MLP model explains most of the percolation variance in time and space without using any soil information. This reflects the effective dimensionality of the process-based model and suggests that percolation in the study area may be modelled much more simply than with Hydrus1D. The CART model shows that soil properties play a negligible role for percolation under wet climatic conditions. However, they become more important if the conditions turn drier. The LR method does not yield satisfactory predictions for the spatially distributed percolation; however, the spatially aggregated percolation is well approximated. This may indicate that the soils behave more simply (i.e. more linearly) when percolation dynamics are upscaled.
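The surrogate idea can be sketched in a few lines with the simplest of the three methods, LR: fit a linear model by least squares to input-output samples of a "complex" model. The stand-in model below is invented; the study used Hydrus1D output:

```python
import random

def fit_linear_surrogate(xs, ys):
    """Ordinary least squares for y ~ a*x + b (normal equations)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def complex_model(rain):
    """Stand-in for the process-based model: percolation as a mildly
    nonlinear function of rainfall (illustrative only)."""
    return max(0.0, 0.6 * rain - 5.0 + 0.001 * rain ** 2)

random.seed(0)
rain = [random.uniform(20.0, 120.0) for _ in range(200)]
perc = [complex_model(r) for r in rain]
a, b = fit_linear_surrogate(rain, perc)

# R^2 of the surrogate against the "complex" model output:
ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(rain, perc))
ss_tot = sum((y - sum(perc) / len(perc)) ** 2 for y in perc)
print(round(1 - ss_res / ss_tot, 3))  # close to 1: the cheap model suffices
```

When the effective dimensionality of the process-based model is low, as the abstract argues, such cheap approximations explain most of the variance and can stand in for the full model during scenario analysis.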

  19. A novel design approach for small scale low enthalpy binary geothermal power plants

    International Nuclear Information System (INIS)

    Gabbrielli, Roberto

    2012-01-01

    Highlights: ► Off-design analysis of ORC geothermal power plants through the years and the days. ► Thermal degradation of the geothermal source largely reduces the plant performance. ► The plant capacity factor is low if the brine temperature is far from the design value. ► The performances through the life are more important than those at the design point. ► ORC geothermal power plants should be designed with the end-life brine temperature. - Abstract: In this paper a novel design approach for small scale low enthalpy binary geothermal power plants is proposed. After extraction, the hot water (brine) superheats an organic fluid (R134a) in a Rankine cycle and is then injected back underground. This causes the well-known thermal degradation of the geothermal resource over the years. Hence, binary geothermal power plants have to operate under conditions that vary largely during their life and, consequently, most of their operation takes place in off-design conditions. In the novel approach proposed here, the design temperature of the geothermal resource is therefore selected between its highest and lowest values, which correspond to the beginning and the end of the operative life of the geothermal power plant, respectively. Then, using a detailed off-design performance model, the optimal design point of the geothermal power plant is found by maximizing the total actualized cash flow from the incentives for renewable power generation. Under different renewable energy incentive scenarios, the power plant designed using the lowest temperature of the geothermal resource always proves to be the best option.
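The selection procedure amounts to evaluating whole-life output with an off-design model for several candidate design temperatures and taking the maximum. A toy version with invented physics and numbers, and no discounting or incentive weighting (which is why the midpoint, not the end-life temperature, wins in this simplification; the paper's actualized cash-flow objective shifts the optimum toward the end-life value):

```python
def annual_energy(t_brine, t_design, k_off=0.02):
    """Toy off-design model: output peaks when the brine temperature
    matches the design temperature and degrades quadratically away
    from it (illustrative numbers only)."""
    base = max(0.0, t_brine - 90.0)             # usable enthalpy proxy
    penalty = k_off * (t_brine - t_design) ** 2  # off-design losses
    return max(0.0, base - penalty)

def lifetime_output(t_design, t_start=150.0, t_end=120.0, years=30):
    """Brine cools linearly from t_start to t_end over the plant life."""
    out = 0.0
    for y in range(years):
        t_brine = t_start + (t_end - t_start) * y / (years - 1)
        out += annual_energy(t_brine, t_design)
    return out

# Compare design points: start-of-life, midpoint and end-of-life brine T.
candidates = {t: round(lifetime_output(t), 1) for t in (150.0, 135.0, 120.0)}
best = max(candidates, key=candidates.get)
```

The structure is the paper's: rank candidate design temperatures by an integral measure over the plant's life, not by performance at the initial resource temperature.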

  20. A long-term, continuous simulation approach for large-scale flood risk assessments

    Science.gov (United States)

    Falter, Daniela; Schröter, Kai; Viet Dung, Nguyen; Vorogushyn, Sergiy; Hundecha, Yeshewatesfa; Kreibich, Heidi; Apel, Heiko; Merz, Bruno

    2014-05-01

    The Regional Flood Model (RFM) is a process-based model cascade developed for flood risk assessments of large-scale basins. RFM consists of four model parts: the rainfall-runoff model SWIM, a 1D channel routing model, a 2D hinterland inundation model and the flood loss estimation model for residential buildings FLEMOps+r. The model cascade recently underwent a proof-of-concept study on the Elbe catchment (Germany) to demonstrate that flood risk assessments based on a continuous simulation approach, including rainfall-runoff, hydrodynamic and damage estimation models, are feasible for large catchments. The results of this study indicated that uncertainties are significant, especially for hydrodynamic simulations, basically as a consequence of low data quality and of disregarding dike breaches. Therefore, RFM was applied with a refined hydraulic model setup for the Elbe tributary Mulde. The study area, the Mulde catchment, comprises about 6,000 km² and 380 river-km. The inclusion of more reliable information on overbank cross-sections and dikes considerably improved the results. For the application of RFM to flood risk assessments, long-term climate input data are needed to drive the model chain. This model input was provided by a multi-site, multi-variate weather generator that produces sets of synthetic meteorological data reproducing the current climate statistics. The data set comprises 100 realizations of 100 years of meteorological data. With the proposed continuous simulation approach of RFM, we simulated a virtual period of 10,000 years covering the entire flood risk chain including hydrological, 1D/2D hydrodynamic and flood damage estimation models. This provided a record of around 2,000 inundation events affecting the study area, with spatially detailed information on inundation depths and damage to residential buildings at a resolution of 100 m. This serves as the basis for a spatially consistent flood risk assessment for the Mulde catchment presented in
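The role of the weather generator in this chain is to make event frequencies directly countable from a long synthetic record instead of being extrapolated from a short observed one. A stripped-down illustration with an invented Gumbel annual-maximum distribution (parameters are not fitted to the Mulde):

```python
import math
import random

random.seed(42)

def annual_max_discharge(loc=500.0, scale=120.0):
    """Toy annual-maximum discharge (m^3/s) drawn from a Gumbel
    distribution via inverse-transform sampling; invented parameters."""
    u = random.random()
    return loc - scale * math.log(-math.log(u))

# A 10,000-year synthetic record, mirroring the virtual period idea:
record = [annual_max_discharge() for _ in range(10_000)]

# With such a long record, rare events are counted directly rather
# than extrapolated from a few decades of observations.
threshold = 1000.0  # hypothetical damage-relevant discharge
events = sum(q > threshold for q in record)
return_period = len(record) / max(events, 1)  # empirical, in years
```

In RFM each synthetic year additionally drives the full hydrological, hydrodynamic and damage models, so what is counted is not a discharge exceedance but a spatially explicit inundation and loss event.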

  1. ELECTRONIC CIRCUIT BOARDS NON-UNIFORM COOLING SYSTEM MODEL

    Directory of Open Access Journals (Sweden)

    D. V. Yevdulov

    2016-01-01

    Full Text Available Abstract. The paper considers a mathematical model of non-uniform cooling of electronic circuit boards. The block diagram of the system implementing this approach, the method for calculating the temperature field of the electronic board, and the principle of optimizing its thermal performance are presented. In the scheme considered, the main heat removal from the electronic board is provided by a radiator system, while additional cooling of the most temperature-sensitive components is provided by thermoelectric batteries. The two-dimensional temperature fields of the electronic board under uniform and non-uniform cooling are given and compared. As the calculation results show, with uniform overall cooling of the electronic unit, energy is wasted on cooling parts of the board whose temperature would remain within the acceptable range even without the cooling system. This approach increases the cooling capacity required of the thermoelectric batteries beyond the desired values and thus largely reduces the efficiency of the heat removal system. Using non-uniform local heat removal for cooling electronic boards eliminates this disadvantage. The obtained dependences show that in this case the energy required to maintain a given temperature is smaller than with common uniform cooling; the temperature field of the electronic board is more uniform and the cooling is more efficient.

  2. A new generic approach for estimating the concentrations of down-the-drain chemicals at catchment and national scale

    Energy Technology Data Exchange (ETDEWEB)

    Keller, V.D.J. [Centre for Ecology and Hydrology, Hydrological Risks and Resources, Maclean Building, Crowmarsh Gifford, Wallingford OX10 8BB (United Kingdom)]. E-mail: vke@ceh.ac.uk; Rees, H.G. [Centre for Ecology and Hydrology, Hydrological Risks and Resources, Maclean Building, Crowmarsh Gifford, Wallingford OX10 8BB (United Kingdom); Fox, K.K. [University of Lancaster (United Kingdom); Whelan, M.J. [Unilever Safety and Environmental Assurance Centre, Colworth (United Kingdom)

    2007-07-15

    A new generic approach for estimating chemical concentrations in rivers at catchment and national scales is presented. Domestic chemical loads in waste water are estimated using gridded population data. River flows are estimated by combining predicted runoff with topographically derived flow direction. Regional scale exposure is characterised by two summary statistics: PEC_works, the average concentration immediately downstream of emission points, and PEC_area, the catchment-average chemical concentration. The method was applied to boron at national (England and Wales) and catchment (Aire-Calder) scales. Predicted concentrations were within 50% of measured mean values in the Aire-Calder catchment and in agreement with results from the GREAT-ER model. The concentration grids generated provide a picture of the spatial distribution of expected chemical concentrations at various scales, and can be used to identify areas of potentially high risk. - A new grid-based approach to predict spatially-referenced freshwater concentrations of domestic chemicals.
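Both summary statistics reduce to dilution arithmetic: the load surviving treatment divided by the river flow. A sketch of the PEC_works idea with invented per-capita loads and flows (the paper's actual values differ):

```python
def pec_works(population, per_capita_load_g_day, removal_fraction, flow_m3_s):
    """Concentration (mg/L) immediately downstream of an emission point:
    the load surviving treatment, diluted into the river flow."""
    load_g_day = population * per_capita_load_g_day * (1.0 - removal_fraction)
    flow_l_day = flow_m3_s * 1000.0 * 86400.0   # m^3/s -> L/day
    return load_g_day * 1000.0 / flow_l_day     # g -> mg, per litre

# 100,000 people, 0.6 g boron/person/day, no removal in treatment,
# diluted into a 5 m^3/s river (all values illustrative):
print(round(pec_works(100_000, 0.6, 0.0, 5.0), 3))  # → 0.139
```

The gridded method evaluates this at every emission point using population counts per cell and the topographically accumulated flow, then aggregates to PEC_area over each catchment.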

  3. A review of analogue modelling of geodynamic processes: Approaches, scaling, materials and quantification, with an application to subduction experiments

    Science.gov (United States)

    Schellart, Wouter P.; Strak, Vincent

    2016-10-01

    We present a review of the analogue modelling method, which has been used for 200 years, and continues to be used, to investigate geological phenomena and geodynamic processes. We particularly focus on the following four components: (1) the different fundamental modelling approaches that exist in analogue modelling; (2) the scaling theory and scaling of topography; (3) the different materials and rheologies that are used to simulate the complex behaviour of rocks; and (4) a range of recording techniques that are used for qualitative and quantitative analyses and interpretations of analogue models. Furthermore, we apply these four components to laboratory-based subduction models and describe some of the issues at hand with modelling such systems. Over the last 200 years, a wide variety of analogue materials have been used with different rheologies, including viscous materials (e.g. syrups, silicones, water), brittle materials (e.g. granular materials such as sand, microspheres and sugar), plastic materials (e.g. plasticine), visco-plastic materials (e.g. paraffin, waxes, petrolatum) and visco-elasto-plastic materials (e.g. hydrocarbon compounds and gelatins). These materials have been used in many different set-ups to study processes from the microscale, such as porphyroclast rotation, to the mantle scale, such as subduction and mantle convection. Despite the wide variety of modelling materials and great diversity in model set-ups and processes investigated, all laboratory experiments can be classified into one of three different categories based on three fundamental modelling approaches that have been used in analogue modelling: (1) The external approach, (2) the combined (external + internal) approach, and (3) the internal approach. 
In the external approach and combined approach, energy is added to the experimental system through the external application of a velocity, temperature gradient or a material influx (or a combination thereof), and so the system is open

  4. A multi-scale experimental and simulation approach for fractured subsurface systems

    Science.gov (United States)

    Viswanathan, H. S.; Carey, J. W.; Frash, L.; Karra, S.; Hyman, J.; Kang, Q.; Rougier, E.; Srinivasan, G.

    2017-12-01

    Fractured systems play an important role in numerous subsurface applications including hydraulic fracturing, carbon sequestration, geothermal energy and underground nuclear test detection. Fractures that range in scale from microns to meters, and their structure, control the behavior of these systems, which provide over 85% of our energy and 50% of US drinking water. Determining the key mechanisms in subsurface fractured systems has been impeded by the lack of sophisticated experimental methods to measure fracture aperture and connectivity, multiphase permeability, and chemical exchange capacities at the high temperature, pressure, and stresses present in the subsurface. In this study, we developed and used microfluidic and triaxial core flood experiments required to reveal the fundamental dynamics of fracture-fluid interactions. In addition, we have developed high fidelity fracture propagation and discrete fracture network flow models to simulate these fractured systems. We have also developed reduced order models of these fracture simulators in order to conduct uncertainty quantification for these systems. We demonstrate an integrated experimental/modeling approach that allows for a comprehensive characterization of fractured systems and develop models that can be used to optimize reservoir operating conditions over a range of subsurface conditions.

  5. Computational approach on PEB process in EUV resist: multi-scale simulation

    Science.gov (United States)

    Kim, Muyoung; Moon, Junghwan; Choi, Joonmyung; Lee, Byunghoon; Jeong, Changyoung; Kim, Heebom; Cho, Maenghyo

    2017-03-01

    For decades, downsizing has been a key issue for high performance and low cost of semiconductors, and extreme ultraviolet lithography is one of the promising candidates to achieve this goal. As a predominant process in extreme ultraviolet lithography in determining resolution and sensitivity, post exposure bake has mainly been studied by experimental groups, but development of its photoresist is at a breaking point because the underlying mechanism of the process remains unclear. Herein, we provide a theoretical approach to investigate the underlying mechanism of the post exposure bake process in chemically amplified resist, covering three important reactions during the process: acid generation by photo-acid generator dissociation, acid diffusion, and deprotection. Density functional theory calculation (quantum mechanical simulation) was conducted to quantitatively predict the activation energy and probability of the chemical reactions, and these were applied to molecular dynamics simulation to construct a reliable computational model. Then, the overall chemical reactions were simulated in the molecular dynamics unit cell, and the final configuration of the photoresist was used to predict the line edge roughness. The presented multiscale model unifies the phenomena of both quantum and atomic scales during the post exposure bake process, and it will be helpful for understanding critical factors affecting the performance of the resulting photoresist and designing the next-generation material.

  6. Neural ensemble communities: Open-source approaches to hardware for large-scale electrophysiology

    Science.gov (United States)

    Siegle, Joshua H.; Hale, Gregory J.; Newman, Jonathan P.; Voigts, Jakob

    2014-01-01

    One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is “open” or “closed”: that is, whether or not the system’s schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. PMID:25528614

  7. Quantifying in-stream retention of nitrate at catchment scales using a practical mass balance approach.

    Science.gov (United States)

    Schwientek, Marc; Selle, Benny

    2016-02-01

    As field data on in-stream nitrate retention are scarce at catchment scales, this study aimed at quantifying net retention of nitrate within the entire river network of a fourth-order stream. For this purpose, a practical mass balance approach combined with a Lagrangian sampling scheme was applied and seasonally repeated to estimate daily in-stream net retention of nitrate for a 17.4 km long, agriculturally influenced segment of the Steinlach River in southwestern Germany. This river segment represents approximately 70% of the length of the main stem and about 32% of the streambed area of the entire river network. Sampling days in spring and summer were biogeochemically more active than in autumn and winter. Results obtained for the main stem of the Steinlach River were subsequently extrapolated to the stream network in the catchment. It was demonstrated that, for baseflow conditions in spring and summer, in-stream nitrate retention could sum up to a relevant term of the catchment's nitrogen balance if the entire stream network was considered.
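The mass balance behind the estimate is simply load in minus load out for a water parcel followed downstream (the Lagrangian scheme samples the same parcel at both stations); a sketch with invented numbers:

```python
def net_retention(q_up, c_up, q_down, c_down, lateral_loads):
    """Daily net in-stream retention (kg/day) of a river segment.

    q in m^3/s, c in mg/L; lateral_loads is a list of tributary and
    effluent loads (kg/day) entering between the two stations.
    A positive result means the segment removed nitrate.
    """
    to_kg_day = 86.4  # (m^3/s * mg/L) -> kg/day
    load_in = q_up * c_up * to_kg_day + sum(lateral_loads)
    load_out = q_down * c_down * to_kg_day
    return load_in - load_out

# Parcel gains water but loses nitrate mass along the segment:
print(round(net_retention(1.2, 18.0, 1.5, 13.0, [150.0]), 1))  # → 331.4 kg/day
```

The seasonal repetition in the study amounts to re-running this balance on days with contrasting biogeochemical activity, then extrapolating from the main stem to the rest of the network.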

  8. A space and time scale-dependent nonlinear geostatistical approach for downscaling daily precipitation and temperature

    KAUST Repository

    Jha, Sanjeev Kumar

    2015-07-21

    A geostatistical framework is proposed to downscale daily precipitation and temperature. The methodology is based on multiple-point geostatistics (MPS), where a multivariate training image is used to represent the spatial relationship between daily precipitation and daily temperature over several years. Here, the training image consists of daily rainfall and temperature outputs from the Weather Research and Forecasting (WRF) model at 50 km and 10 km resolution for a twenty-year period ranging from 1985 to 2004. The data are used to predict downscaled climate variables for the year 2005. The result, for each downscaled pixel, is a daily time series of precipitation and temperature that are spatially dependent. Comparison of predicted precipitation and temperature against a reference dataset indicates that both the seasonal average climate response and the temporal variability are well reproduced. The explicit inclusion of time dependence is explored by considering the climate properties of the previous day as an additional variable. Comparison of simulations with and without inclusion of time dependence shows that the temporal dependence only slightly improves the daily prediction because the temporal variability is already well represented in the conditioning data. Overall, the study shows that the multiple-point geostatistics approach is an efficient tool for statistical downscaling to obtain local scale estimates of precipitation and temperature from General Circulation Models.

  9. Effective use of integrated hydrological models in basin-scale water resources management: surrogate modeling approaches

    Science.gov (United States)

    Zheng, Y.; Wu, B.; Wu, X.

    2015-12-01

    Integrated hydrological models (IHMs) consider surface water and subsurface water as a unified system, and have been widely adopted in basin-scale water resources studies. However, due to IHMs' mathematical complexity and high computational cost, it is difficult to implement them in an iterative model evaluation process (e.g., Monte Carlo Simulation, simulation-optimization analysis, etc.), which diminishes their applicability for supporting decision-making in real-world situations. Our studies investigated how to effectively use complex IHMs to address real-world water issues via surrogate modeling. Three surrogate modeling approaches were considered, including 1) DYCORS (DYnamic COordinate search using Response Surface models), a well-established response surface-based optimization algorithm; 2) SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), a response surface-based optimization algorithm that we developed specifically for IHMs; and 3) Probabilistic Collocation Method (PCM), a stochastic response surface approach. Our investigation was based on a modeling case study in the Heihe River Basin (HRB), China's second largest endorheic river basin. The GSFLOW (Coupled Ground-Water and Surface-Water Flow Model) model was employed. Two decision problems were discussed. One is to optimize, both in time and in space, the conjunctive use of surface water and groundwater for agricultural irrigation in the middle HRB region; and the other is to cost-effectively collect hydrological data based on a data-worth evaluation. Overall, our study results highlight the value of incorporating an IHM in making decisions of water resources management and hydrological data collection. An IHM like GSFLOW can provide great flexibility to formulating proper objective functions and constraints for various optimization problems. 
    On the other hand, it has been demonstrated that surrogate modeling approaches can pave the way for such incorporation in real

  10. Large scale atomistic approaches to thermal transport and phonon scattering in nanostructured materials

    Science.gov (United States)

    Savic, Ivana

    2012-02-01

    Decreasing the thermal conductivity of bulk materials by nanostructuring and dimensionality reduction, or by introducing some amount of disorder represents a promising strategy in the search for efficient thermoelectric materials [1]. For example, considerable improvements of the thermoelectric efficiency in nanowires with surface roughness [2], superlattices [3] and nanocomposites [4] have been attributed to a significantly reduced thermal conductivity. In order to accurately describe thermal transport processes in complex nanostructured materials and directly compare with experiments, the development of theoretical and computational approaches that can account for both anharmonic and disorder effects in large samples is highly desirable. We will first summarize the strengths and weaknesses of the standard atomistic approaches to thermal transport (molecular dynamics [5], Boltzmann transport equation [6] and Green's function approach [7]). We will then focus on the methods based on the solution of the Boltzmann transport equation, which are computationally too demanding, at present, to treat large scale systems and thus to investigate realistic materials. We will present a Monte Carlo method [8] to solve the Boltzmann transport equation in the relaxation time approximation [9], which enables computation of the thermal conductivity of ordered and disordered systems with a number of atoms up to an order of magnitude larger than feasible with straightforward integration. We will present a comparison between exact and Monte Carlo Boltzmann transport results for small SiGe nanostructures and then use the Monte Carlo method to analyze the thermal properties of realistic SiGe nanostructured materials. This work is done in collaboration with Davide Donadio, Francois Gygi, and Giulia Galli from UC Davis. [1] See e.g. A. J. Minnich, M. S. Dresselhaus, Z. F. Ren, and G. Chen, Energy Environ. Sci. 2, 466 (2009). [2] A. I. Hochbaum et al., Nature 451, 163 (2008).
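In the relaxation time approximation the lattice thermal conductivity is a mode sum, κ = (1/3) Σ_i C_i v_i² τ_i, and the Monte Carlo idea is to estimate this sum by sampling modes rather than enumerating all of them. A toy sketch with made-up mode data (uniform sampling only, not the method's actual scheme):

```python
import random

random.seed(1)

# Invented phonon mode data: (heat capacity, group velocity, lifetime).
modes = [(random.uniform(0.5, 1.5),          # C_i, arbitrary units
          random.uniform(1.0e3, 6.0e3),      # v_i, m/s
          random.uniform(1.0e-12, 1.0e-11))  # tau_i, s
         for _ in range(50_000)]

def kappa_exact(modes):
    """Direct enumeration: kappa = (1/3) * sum_i C_i * v_i**2 * tau_i."""
    return sum(c * v * v * tau for c, v, tau in modes) / 3.0

def kappa_monte_carlo(modes, n_samples):
    """Estimate the same sum by uniformly sampling modes; the cost is
    set by n_samples, not by the total number of modes."""
    sampled = sum(c * v * v * tau
                  for c, v, tau in (random.choice(modes)
                                    for _ in range(n_samples)))
    return sampled / 3.0 * len(modes) / n_samples

exact = kappa_exact(modes)
estimate = kappa_monte_carlo(modes, 5_000)
# With ~5,000 samples the estimate typically lands within a few percent.
```

The statistical error decays as 1/sqrt(n_samples) independently of system size, which is what allows systems an order of magnitude larger than direct integration can handle.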

  11. Uniform excitations in magnetic nanoparticles

    DEFF Research Database (Denmark)

    Mørup, Steen; Frandsen, Cathrine; Hansen, Mikkel Fougt

    2010-01-01

    We present a short review of the magnetic excitations in nanoparticles below the superparamagnetic blocking temperature. In this temperature regime, the magnetic dynamics in nanoparticles is dominated by uniform excitations, and this leads to a linear temperature dependence of the magnetization and the magnetic hyperfine field, in contrast to the Bloch T3/2 law in bulk materials. The temperature dependence of the average magnetization is conveniently studied by Mössbauer spectroscopy. The energy of the uniform excitations of magnetic nanoparticles can be studied by inelastic neutron scattering.

  12. Uniform excitations in magnetic nanoparticles

    Directory of Open Access Journals (Sweden)

    Steen Mørup

    2010-11-01

    Full Text Available We present a short review of the magnetic excitations in nanoparticles below the superparamagnetic blocking temperature. In this temperature regime, the magnetic dynamics in nanoparticles is dominated by uniform excitations, and this leads to a linear temperature dependence of the magnetization and the magnetic hyperfine field, in contrast to the Bloch T3/2 law in bulk materials. The temperature dependence of the average magnetization is conveniently studied by Mössbauer spectroscopy. The energy of the uniform excitations of magnetic nanoparticles can be studied by inelastic neutron scattering.
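The contrast drawn in this record can be written explicitly. For uniform (collective) excitations of a particle with magnetic anisotropy constant K and volume V, thermal averaging over the anisotropy energy gives, to first order for k_B T ≪ KV, a linear decrease, whereas bulk spin waves follow the Bloch law:

```latex
% Uniform excitations in a nanoparticle (low-temperature expansion):
M(T) \simeq M_0\left(1 - \frac{k_B T}{2KV}\right)

% Spin-wave (Bloch) law in bulk material:
M(T) = M_0\left(1 - B\,T^{3/2}\right)
```

The 1/V in the first expression is why the effect is a finite-size signature: the linear term vanishes for macroscopic particles, recovering the bulk behaviour.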

  13. A multi-scale approach to monitor urban carbon-dioxide emissions in the atmosphere over Vancouver, Canada

    Science.gov (United States)

    Christen, A.; Crawford, B.; Ketler, R.; Lee, J. K.; McKendry, I. G.; Nesic, Z.; Caitlin, S.

    2015-12-01

    Measurements of long-lived greenhouse gases in the urban atmosphere are potentially useful to constrain and validate urban emission inventories, or space-borne remote-sensing products. We summarize and compare three different approaches, operating at different scales, that directly or indirectly identify, attribute and quantify emissions (and uptake) of carbon dioxide (CO2) in urban environments. All three approaches are illustrated using in-situ measurements in the atmosphere in and over Vancouver, Canada. Mobile sensing may be a promising way to quantify and map CO2 mixing ratios at fine scales across heterogeneous and complex urban environments. We developed a system for monitoring CO2 mixing ratios at street level using a network of mobile CO2 sensors deployable on vehicles and bikes. A total of 5 prototype sensors were built and used simultaneously in a measurement campaign across a range of urban land use types and densities within a short time frame (3 hours). The dataset is used to aid in fine-scale emission mapping in combination with simultaneous tower-based flux measurements. Overall, calculated CO2 emissions are realistic when compared against a spatially disaggregated emissions inventory. The second approach is based on mass flux measurements of CO2 using a tower-based eddy covariance (EC) system. We present a continuous 7-year long dataset of CO2 fluxes measured by EC at the 28 m tall flux tower 'Vancouver-Sunset'. We show how this dataset can be combined with turbulent source area models to quantify and partition different emission processes at the neighborhood scale. The long-term EC measurements are within 10% of a spatially disaggregated emissions inventory. Thirdly, at the urban scale, we present a dataset of CO2 mixing ratios measured using a tethered balloon system in the urban boundary layer above Vancouver. Using a simple box model, net city-scale CO2 emissions can be determined using the measured rate of change of CO2 mixing ratios.
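The "simple box model" of the third approach reduces to a one-line mass balance: with advection and entrainment neglected, the net surface flux equals the storage change in a well-mixed column of height h. A minimal sketch (all numbers illustrative; the near-surface molar density of air is the only physical constant assumed):

```python
def box_model_emission(dc_dt_ppm_per_h, mixed_layer_height_m,
                       air_molar_density_mol_m3=41.6):
    """Net surface CO2 flux from the rate of change of the mixing ratio
    in a well-mixed boundary-layer box (advection and entrainment
    neglected). Returns the flux in micromol CO2 m^-2 s^-1.
    41.6 mol m^-3 is the approximate molar density of air at the surface."""
    # ppm/h -> mol m^-3 s^-1, then scale by column height, then -> micromol
    dc_dt_mol = dc_dt_ppm_per_h * 1e-6 * air_molar_density_mol_m3 / 3600.0
    return dc_dt_mol * mixed_layer_height_m * 1e6

# e.g. mixing ratio rising 5 ppm per hour under a 300 m mixed layer
print(box_model_emission(5.0, 300.0))  # ~17.3 micromol m^-2 s^-1
```

Values of this order are typical of urban surface fluxes, which is why the box approach can give a useful city-scale constraint despite its simplifications.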

  14. What scaling means in wind engineering: Complementary role of the reduced scale approach in a BLWT and the full scale testing in a large climatic wind tunnel

    Science.gov (United States)

    Flamand, Olivier

    2017-12-01

    Wind engineering problems are commonly studied by wind tunnel experiments at a reduced scale. This introduces several limitations and calls for a careful planning of the tests and the interpretation of the experimental results. The talk first revisits the similitude laws and discusses how they are actually applied in wind engineering. It will also remind readers why different scaling laws govern in different wind engineering problems. Secondly, the paper focuses on the ways to simplify a detailed structure (bridge, building, platform) when fabricating the downscaled models for the tests. This will be illustrated by several examples from recent engineering projects. Finally, under the most severe weather conditions, manmade structures and equipment should remain operational. What “recreating the climate” means and aims to achieve will be illustrated through common practice in climatic wind tunnel modelling.

  15. Instantaneous variance scaling of AIRS thermodynamic profiles using a circular area Monte Carlo approach

    Science.gov (United States)

    Dorrestijn, Jesse; Kahn, Brian H.; Teixeira, João; Irion, Fredrick W.

    2018-05-01

    Satellite observations are used to obtain vertical profiles of variance scaling of temperature (T) and specific humidity (q) in the atmosphere. A higher spatial resolution nadir retrieval at 13.5 km complements previous Atmospheric Infrared Sounder (AIRS) investigations with 45 km resolution retrievals and enables the derivation of power law scaling exponents to length scales as small as 55 km. We introduce a variable-sized circular-area Monte Carlo methodology to compute exponents instantaneously within the swath of AIRS that yields additional insight into scaling behavior. While this method is approximate and some biases are likely to exist within non-Gaussian portions of the satellite observational swaths of T and q, this method enables the estimation of scale-dependent behavior within instantaneous swaths for individual tropical and extratropical systems of interest. Scaling exponents are shown to fluctuate between β = -1 and -3 at scales ≥ 500 km, while at scales ≤ 500 km they are typically near β ≈ -2, with q slightly lower than T at the smallest scales observed. In the extratropics, the large-scale β is near -3. Within the tropics, however, the large-scale β for T is closer to -1 as small-scale moist convective processes dominate. In the tropics, q exhibits large-scale β between -2 and -3. The values of β are generally consistent with previous works of either time-averaged spatial variance estimates, or aircraft observations that require averaging over numerous flight observational segments. The instantaneous variance scaling methodology is relevant for cloud parameterization development and the assessment of time variability of scaling exponents.
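The scaling exponents beta quoted above come from power-law fits of variance against length scale; the basic bookkeeping is an ordinary least-squares slope in log-log space. A stdlib-only sketch with synthetic data (not AIRS retrievals):

```python
import math

def fit_scaling_exponent(scales, variances):
    """Least-squares slope of log(variance) vs log(scale):
    variance ~ scale**beta, so beta is the slope in log-log space."""
    xs = [math.log(s) for s in scales]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# synthetic spectrum with beta = -2 (the mesoscale value quoted above)
scales = [55.0, 110.0, 220.0, 440.0]          # km
variances = [3.0 * s ** -2 for s in scales]
print(fit_scaling_exponent(scales, variances))  # slope is -2 up to roundoff
```

The instantaneous method in the abstract differs in how the variances are gathered (variable-sized circular areas within a swath), but the exponent extraction itself is this log-log regression.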

  16. A modular approach to large-scale design optimization of aerospace systems

    Science.gov (United States)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft

  17. a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    Science.gov (United States)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge in the planning and management of water resources is the development of multiobjective strategies for the operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high-dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year causes a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides, currently used as a preventive countermeasure, do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
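The off-line loop described here, evaluating a policy over stochastic inflow scenarios and then improving it until convergence, is classic policy iteration; ADP replaces the exact value function with function approximators to cope with large state spaces. A tabular toy version for a single reservoir (states, rewards, and the inflow distribution are all invented for illustration):

```python
# toy reservoir MDP: storage s in {0..4}, release decision a in {0,1,2}
S, A = range(5), range(3)
INFLOWS, PROBS = [0, 1, 2], [0.3, 0.4, 0.3]   # stochastic inflow distribution
GAMMA = 0.95
FLOOD_LEVEL = 4

def step(s, a, q):
    """Mass balance with spill at the top and a flood penalty."""
    release = min(a, s)                  # cannot release more than is stored
    s2 = min(s - release + q, max(S))    # inflow q, excess spills
    reward = float(release)              # hydropower value of the release
    if s2 >= FLOOD_LEVEL:
        reward -= 3.0                    # flood-risk cost
    return s2, reward

def q_value(s, a, V):
    """Expected reward-to-go of action a in state s under value function V."""
    total = 0.0
    for q, p in zip(INFLOWS, PROBS):
        s2, r = step(s, a, q)
        total += p * (r + GAMMA * V[s2])
    return total

def policy_iteration(max_iters=50):
    policy, V = {s: 0 for s in S}, {s: 0.0 for s in S}
    for _ in range(max_iters):
        for _ in range(200):             # policy evaluation sweeps
            V = {s: q_value(s, policy[s], V) for s in S}
        new = {s: max(A, key=lambda a: q_value(s, a, V)) for s in S}
        if new == policy:
            break                        # policy is stable: done
        policy = new
    return policy, V

policy, V = policy_iteration()
print(policy)  # larger releases as storage approaches the flood level
```

In the full problem the dictionary V is replaced by a function approximator, and the evaluation sweeps by simulation over inflow scenarios, but the evaluate/improve structure is the same.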

  18. Topographic mapping on large-scale tidal flats with an iterative approach on the waterline method

    Science.gov (United States)

    Kang, Yanyan; Ding, Xianrong; Xu, Fan; Zhang, Changkuan; Ge, Xiaoping

    2017-05-01

    Tidal flats, which are both a natural ecosystem and a type of landscape, are of significant importance to ecosystem function and land resource potential. Morphologic monitoring of tidal flats has become increasingly important with respect to achieving sustainable development targets. Remote sensing is an established technique for the measurement of topography over tidal flats; of the available methods, the waterline method is particularly effective for constructing a digital elevation model (DEM) of intertidal areas. However, application of the waterline method is more limited in large-scale, shifting tidal flat areas, where the tides are not synchronized and the waterline is not a quasi-contour line. For this study, a topographic map of the intertidal regions within the Radial Sand Ridges (RSR) along the Jiangsu Coast, China, was generated using an iterative approach on the waterline method. A series of 21 multi-temporal satellite images (18 HJ-1A/B CCD and three Landsat TM/OLI) of the RSR area, collected at different water levels within a five-month period (31 December 2013 to 28 May 2014), was used to extract waterlines based on feature extraction techniques and further manual modification. These 'remotely sensed waterlines' were combined with the corresponding water levels from the 'model waterlines' simulated by a hydrodynamic model with an initial generalized DEM of the exposed tidal flats. Based on the 21 heighted 'remotely sensed waterlines', a DEM was constructed using the ANUDEM interpolation method. Using this new DEM as the input data, the hydrodynamic model was re-run and a new round of water level assignment to the waterlines was performed. A third and final output DEM was generated, covering an area of approximately 1900 km2 of tidal flats in the RSR. The water level simulation accuracy of the hydrodynamic model was within 0.15 m based on five real-time tide stations, and the height accuracy (root mean square error) of the final DEM was 0.182 m.
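The iteration described above alternates between two operators: a hydrodynamic model that assigns water levels to waterlines, and an interpolator that turns the heighted waterlines into a DEM. A schematic sketch with toy stand-ins for both (the study used a calibrated hydrodynamic model and ANUDEM interpolation; everything below is illustrative):

```python
from collections import namedtuple

Waterline = namedtuple("Waterline", "points time")

def iterative_waterline_dem(waterlines, dem0, run_hydro_model, interpolate,
                            n_iter=3):
    """Each remotely sensed waterline is heighted with the water level
    simulated for its acquisition time by a hydrodynamic model driven by
    the current DEM; a new DEM is then interpolated from the heighted
    waterlines, and the loop repeats."""
    dem = dem0
    for _ in range(n_iter):
        heighted = [(wl.points, run_hydro_model(dem, wl.time))
                    for wl in waterlines]
        dem = interpolate(heighted)
    return dem

# --- toy stand-ins, purely illustrative ---
def toy_hydro_model(dem, t):
    # pretend the simulated level depends weakly on the current DEM
    return 0.5 * t + 0.1 * sum(dem.values()) / len(dem)

def toy_interpolate(heighted):
    # "nearest" interpolation: every point on a waterline takes its level
    return {p: z for points, z in heighted for p in points}

lines = [Waterline(points=(0, 1), time=1.0), Waterline(points=(2, 3), time=2.0)]
dem = iterative_waterline_dem(lines, {p: 0.0 for p in range(4)},
                              toy_hydro_model, toy_interpolate)
print(dem)  # heights stabilize as the model and the DEM become consistent
```

The point of iterating is exactly this feedback: the water levels assigned to the waterlines depend on the DEM, which depends on the heighted waterlines, so repeated passes drive the two toward mutual consistency.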

  19. Uniformity calibration for ICT image

    International Nuclear Information System (INIS)

    Zeng Gang; Liu Li; Que Jiemin; Zhang Yingping; Yin Yin; Wang Yanfang; Yu Zhongqiang; Yan Yonglian

    2004-01-01

    The uniformity of an ICT image is impaired by beam hardening and by inconsistencies in the responses of the detector units. The beam hardening and the nonlinearity of the detector output have been analyzed. The correction factors are determined experimentally from the detector responses at different absorption lengths. The artifacts in the CT image of a symmetrical aluminium cylinder were eliminated after calibration. (author)
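Determining correction factors from detector responses at different absorption lengths is typically a linearization: the measured polychromatic attenuation is mapped back, via the calibration curve, to the equivalent attenuation of a reference monochromatic beam. A schematic stdlib-only sketch (synthetic calibration data, not the authors' procedure):

```python
import math
from bisect import bisect_left

def make_linearization(lengths, intensities, i0, mu_ref):
    """Build a correction p -> mu_ref * L(p) from a calibration series of
    known absorption lengths and the intensities measured through them."""
    p = [-math.log(i / i0) for i in intensities]  # polychromatic attenuation
    def correct(p_meas):
        # piecewise-linear inversion of the (monotone) calibration curve
        j = min(max(bisect_left(p, p_meas), 1), len(p) - 1)
        frac = (p_meas - p[j - 1]) / (p[j] - p[j - 1])
        length = lengths[j - 1] + frac * (lengths[j] - lengths[j - 1])
        return mu_ref * length
    return correct

# synthetic polychromatic data: attenuation saturates with length (hardening)
lengths = [0.0, 1.0, 2.0, 3.0, 4.0]
intensities = [1000.0 * math.exp(-(0.5 * L - 0.03 * L * L)) for L in lengths]
correct = make_linearization(lengths, intensities, i0=1000.0, mu_ref=0.5)
print(correct(-math.log(intensities[3] / 1000.0)))  # -> 1.5 (= 0.5 * 3.0)
```

Applying such a per-detector mapping both removes the cupping artifact from beam hardening and equalizes the responses of the individual detector units.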

  20. School Uniforms: Guidelines for Principals.

    Science.gov (United States)

    Essex, Nathan L.

    2001-01-01

    Principals desiring to develop a school-uniform policy should involve parents, teachers, community leaders, and student representatives; beware restrictions on religious and political expression; provide flexibility and assistance for low-income families; implement a pilot program; align the policy with school-safety issues; and consider legal…

  1. Uniform peanut performance test 2017

    Science.gov (United States)

    The Uniform Peanut Performance Tests (UPPT) are designed to evaluate the commercial potential of advanced breeding peanut lines not formally released. The tests are performed in ten locations across the peanut production belt. In this study, 2 controls and 14 entries were evaluated at 8 locations.

  2. Approaches to Debugging at Scale on the Peregrine System | High-Performance

    Science.gov (United States)

    possible approaches. One approach provides those nodes as soon as possible. Approach 1: Run an Interactive Job. Submit an interactive job asking for the number of nodes you need; type exit to end the interactive job, and then type exit again to end the screen session. Approach 2: Request a

  3. Clean focus, dose and CD metrology for CD uniformity improvement

    Science.gov (United States)

    Lee, Honggoo; Han, Sangjun; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, DongYoung; Oh, Eungryong; Choi, Ahlin; Kim, Nakyoon; Robinson, John C.; Mengel, Markus; Pablo, Rovira; Yoo, Sungchul; Getin, Raphael; Choi, Dongsub; Jeon, Sanghuck

    2018-03-01

    Lithography process control solutions require more exacting capabilities as the semiconductor industry moves to 1x nm node DRAM device manufacturing. In order to continue scaling down device feature sizes, critical dimension (CD) uniformity requires continuous improvement to meet the required CD error budget. In this study we investigate using optical measurement technology to improve over CD-SEM methods in focus, dose, and CD metrology. One of the key challenges is measuring scanner focus on device patterns. There are focus measurement methods based on specially designed marks in the scribe line; however, one issue with this approach is that it reports the focus of the scribe line, which is potentially different from that of the real device pattern. In addition, scribe-line marks require additional design and troubleshooting steps that add complexity. In this study, we investigated focus measurement directly on the device pattern. Dose control is typically based on the linear correlation between dose and CD. The noise of CD measurement, based on CD-SEM for example, not only impacts accuracy but also makes it difficult to monitor the dose signature on product wafers. In this study we report direct dose metrology results using an optical metrology system with especially enhanced DUV spectral coverage to improve the signal-to-noise ratio. CD-SEM is often used to measure CD after the lithography step. This measurement approach has the advantage of easy recipe setup as well as the flexibility to measure critical feature dimensions; however, we observe that CD-SEM metrology has limitations. In this study, we demonstrate within-field CD uniformity improvement through the extraction of clean scanner slit and scan CD behavior by using optical metrology.

  4. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    Science.gov (United States)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias correction anomaly field that represents the ratio of the fine-scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
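The core of the rank-based idea can be sketched in a few lines: instead of one mean multiplicative anomaly, store the fine/coarse ratio observed at each rank of the coarse distribution, and apply the ratio whose rank matches the value being disaggregated. A schematic stdlib-only version (function names and data invented for illustration, not the paper's code):

```python
def rank_based_anomalies(fine_series, coarse_series):
    """Multiplicative anomalies (fine / coarse), sorted by the rank of the
    coarse value, so extremes keep the anomaly observed for extremes."""
    pairs = sorted(zip(coarse_series, fine_series))
    return [f / c if c > 0 else 1.0 for c, f in pairs]

def downscale(value, coarse_climatology, rank_anoms):
    """Apply the anomaly whose rank matches the rank of `value`."""
    rank = sum(c <= value for c in coarse_climatology) - 1
    rank = max(0, min(rank, len(rank_anoms) - 1))
    return value * rank_anoms[rank]

coarse = [1.0, 2.0, 3.0, 4.0]          # disaggregated GCM precipitation
fine = [0.5, 2.2, 3.9, 6.0]            # observed fine-scale precipitation
anoms = rank_based_anomalies(fine, coarse)
mean_anom = sum(anoms) / len(anoms)
print(downscale(4.0, coarse, anoms))   # -> 6.0, the ratio seen for extremes
print(round(4.0 * mean_anom, 2))       # -> 4.4, what a mean anomaly gives
```

The two printed values illustrate the limitation the paper targets: a time-mean anomaly field (here about 1.1) systematically understates the correction needed at the high end of the distribution.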

  5. Thermal fatigue of austenitic stainless steel: influence of surface conditions through a multi-scale approach

    International Nuclear Information System (INIS)

    Le-Pecheur, Anne

    2008-01-01

    Some cases of cracking of 304L austenitic stainless steel components due to thermal fatigue have been encountered, in particular on the Residual Heat Removal (RHR) circuits of Pressurized Water Reactors (PWR). EDF has initiated an R and D program to understand and assess the risk of damage in nuclear plant mixing zones. The INTHERPOL test developed at EDF is designed to perform pure thermal fatigue tests on tubular specimens under mono-frequency thermal load. These tests are carried out under various loadings, surface finish qualities and welding conditions in order to assess the influence of these parameters on crack initiation. The main topic of this study is the search for a fatigue criterion using a micro-macro modelling approach. The first part of the work deals with the characterization of the 304L stainless steel, emphasizing the specificities of the surface roughness and its link with a strong hardening gradient. The first characterization results on the surface show a strong work-hardening gradient within a 250 micron layer. This gradient does not evolve after thermal cycling. Micro-hardness measurements and TEM observations were used intensively to characterize this gradient. The second part is the macroscopic modelling of the INTHERPOL tests in order to determine the components of the stress and strain tensors due to thermal cycling. The third part evaluates the effect of surface roughness and the hardening gradient using a calculation at a finer scale, based on the evolution of dislocation density. A goal for the future is the determination of a fatigue criterion mainly based on polycrystalline modelling; quantities such as stored energy or critical-plane measures are available, allowing a sound choice of criterion. (author)

  6. Assessing Weather-Yield Relationships in Rice at Local Scale Using Data Mining Approaches.

    Directory of Open Access Journals (Sweden)

    Sylvain Delerce

    Full Text Available Seasonal and inter-annual climate variability have become important issues for farmers, and climate change has been shown to increase them. Simultaneously farmers and agricultural organizations are increasingly collecting observational data about in situ crop performance. Agriculture thus needs new tools to cope with changing environmental conditions and to take advantage of these data. Data mining techniques make it possible to extract embedded knowledge associated with farmer experiences from these large observational datasets in order to identify best practices for adapting to climate variability. We introduce new approaches through a case study on irrigated and rainfed rice in Colombia. Preexisting observational datasets of commercial harvest records were combined with in situ daily weather series. Using Conditional Inference Forest and clustering techniques, we assessed the relationships between climatic factors and crop yield variability at the local scale for specific cultivars and growth stages. The analysis showed clear relationships in the various location-cultivar combinations, with climatic factors explaining 6 to 46% of spatiotemporal variability in yield, and with crop responses to weather being non-linear and cultivar-specific. Climatic factors affected cultivars differently during each stage of development. For instance, one cultivar was affected by high nighttime temperatures in the reproductive stage but responded positively to accumulated solar radiation during the ripening stage. Another was affected by high nighttime temperatures during both the vegetative and reproductive stages. Clustering of the weather patterns corresponding to individual cropping events revealed different groups of weather patterns for irrigated and rainfed systems with contrasting yield levels. Best-suited cultivars were identified for some weather patterns, making weather-site-specific recommendations possible. 
This study illustrates the potential of

  7. Integrated Approach for Improving Small Scale Market Oriented Dairy Systems in Pakistan: Economic Impact of Interventions

    Directory of Open Access Journals (Sweden)

    A. Ghaffar

    2010-02-01

    Full Text Available The International Atomic Energy Agency (IAEA) launched a Coordinated Research Program in 10 developing countries, including Pakistan, involving small-scale market-oriented dairy farmers to identify and prioritize the constraints and opportunities in the selected dairy farms, develop intervention strategies and assess the economic impact of the interventions. The interventions in animal health (control of mastitis at the sub-clinical stage and reduction in calf mortality), nutrition (balanced feed), reproduction (mineral supplementation) and general management (training of farmers) were identified and implemented in a participatory approach at the selected dairy farms. Calf mortality up to the age of 3 months was reduced from 35 to 13 percent. Use of Alfa Deval post-milking teat dips reduced the incidence of sub-clinical mastitis from 34 to 5%, showing the economic benefits of the interventions. The partial budget technique was used to analyze the impact in the registered herds. The farmers recorded monthly quantities of different feed ingredients and seasonal green fodder offered to the animals. From this data set, total metabolizable energy requirements and availability from feed were computed, which revealed that animals were deficient in metabolizable energy at all locations. This was also confirmed by seasonal variation in body condition scoring. At some selected farms a mineral mixture supplement was introduced, which increased milk yield by 5% in addition to shortening the service period by 30 days. Three training sessions were arranged to teach the farmers to care for newborn calves, manage the farm daily, and detect animals in heat efficiently, in order to enhance the overall income of the farmers. The overall farm income increased by 40%.

  8. Assessing Weather-Yield Relationships in Rice at Local Scale Using Data Mining Approaches.

    Science.gov (United States)

    Delerce, Sylvain; Dorado, Hugo; Grillon, Alexandre; Rebolledo, Maria Camila; Prager, Steven D; Patiño, Victor Hugo; Garcés Varón, Gabriel; Jiménez, Daniel

    2016-01-01

    Seasonal and inter-annual climate variability have become important issues for farmers, and climate change has been shown to increase them. Simultaneously farmers and agricultural organizations are increasingly collecting observational data about in situ crop performance. Agriculture thus needs new tools to cope with changing environmental conditions and to take advantage of these data. Data mining techniques make it possible to extract embedded knowledge associated with farmer experiences from these large observational datasets in order to identify best practices for adapting to climate variability. We introduce new approaches through a case study on irrigated and rainfed rice in Colombia. Preexisting observational datasets of commercial harvest records were combined with in situ daily weather series. Using Conditional Inference Forest and clustering techniques, we assessed the relationships between climatic factors and crop yield variability at the local scale for specific cultivars and growth stages. The analysis showed clear relationships in the various location-cultivar combinations, with climatic factors explaining 6 to 46% of spatiotemporal variability in yield, and with crop responses to weather being non-linear and cultivar-specific. Climatic factors affected cultivars differently during each stage of development. For instance, one cultivar was affected by high nighttime temperatures in the reproductive stage but responded positively to accumulated solar radiation during the ripening stage. Another was affected by high nighttime temperatures during both the vegetative and reproductive stages. Clustering of the weather patterns corresponding to individual cropping events revealed different groups of weather patterns for irrigated and rainfed systems with contrasting yield levels. Best-suited cultivars were identified for some weather patterns, making weather-site-specific recommendations possible. This study illustrates the potential of data mining for
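"Climatic factors explaining 6 to 46% of spatiotemporal variability in yield" is an explained-variance statement. The simplest single-factor version of that bookkeeping is the squared correlation below (toy numbers, and a linear fit rather than the Conditional Inference Forests the study actually used):

```python
def r_squared(x, y):
    """Fraction of yield variability explained by one climatic factor
    under a simple linear fit (the squared Pearson correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy * sxy / (sxx * syy)

# toy example: yields responding noisily to accumulated solar radiation
rad = [14, 16, 18, 20, 22, 24]              # MJ m^-2 d^-1, invented
yield_t = [4.1, 4.8, 4.0, 5.2, 4.3, 5.1]    # t/ha, invented
print(round(r_squared(rad, yield_t), 2))    # -> 0.23
```

The toy value happens to fall inside the 6-46% range reported; the forest-based analysis generalizes this to many factors and non-linear, cultivar-specific responses.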

  9. Thermal conductivity of granular porous media: A pore scale modeling approach

    Directory of Open Access Journals (Sweden)

    R. Askari

    2015-09-01

    Full Text Available The pore-scale modeling method has been widely used in petrophysical studies to estimate macroscopic properties (e.g. porosity, permeability, and electrical resistivity) of porous media with respect to their microstructures. Although there is an abundant literature on the application of the method to flow in porous media, there are fewer studies regarding its application to the characterization of thermal conduction and the estimation of effective thermal conductivity, which is a salient parameter in many engineering surveys (e.g. geothermal resources and heavy oil recovery). By considering thermal contact resistance, we demonstrate the robustness of the method for predicting the effective thermal conductivity. According to our results obtained from simulations of Utah oil sand samples, the simulation of thermal contact resistance is pivotal to obtaining reliable estimates of effective thermal conductivity. Our estimated effective thermal conductivities exhibit better agreement with the experimental data than some well-known experimental and analytical equations for the calculation of the effective thermal conductivity. In addition, we reconstruct a porous medium for an Alberta oil sand sample. By increasing roughness, we observe the effect of thermal contact resistance in the decrease of the effective thermal conductivity. The roughness effect becomes more noticeable, however, at higher solid-to-fluid thermal conductivity ratios. Moreover, by considering the thermal resistance in porous media with different grain sizes, we find that the effective thermal conductivity increases with grain size. Our observation is in reasonable accordance with experimental results. This demonstrates the usefulness of our modeling approach for further computational studies of heat transfer in porous media.

  10. A national scale flood hazard mapping methodology: The case of Greece - Protection and adaptation policy approaches.

    Science.gov (United States)

    Kourgialas, Nektarios N; Karatzas, George P

    2017-12-01

    The present work introduces a national-scale flood hazard assessment methodology, using multi-criteria analysis and artificial neural network (ANN) techniques in a GIS environment. The proposed methodology was applied in Greece, where flash floods are a relatively frequent phenomenon that has become more intense over the last decades, causing significant damage in rural and urban sectors. To identify the areas most prone to flooding, seven factor maps that are directly related to flood generation were combined in a GIS environment: a) Flow accumulation (F), b) Land use (L), c) Altitude (A), d) Slope (S), e) soil Erodibility (E), f) Rainfall intensity (R), and g) available water Capacity (C). The proposed method is named "FLASERC" after these factors. The flood hazard for each factor is classified into five categories: very low, low, moderate, high, and very high. The factors are combined and processed using an appropriate ANN algorithm. For the ANN training process, the spatial distribution of historical flooded points in Greece was combined with the five flood hazard categories of the aforementioned seven factor maps. In this way, the overall flood hazard map for Greece was determined. The final results are verified using additional historical flood events that have occurred in Greece over the last 100 years. In addition, flood protection measures and adaptation policy approaches are proposed for agricultural and urban areas located in very high flood hazard zones.

  11. Psychometric properties of the Epworth Sleepiness Scale: A factor analysis and item-response theory approach.

    Science.gov (United States)

    Pilcher, June J; Switzer, Fred S; Munc, Alec; Donnelly, Janet; Jellen, Julia C; Lamm, Claus

    2018-04-01

    The purpose of this study is to examine the psychometric properties of the Epworth Sleepiness Scale (ESS) in two languages, German and English. Students from a university in Austria (N = 292; 55 males, mean age = 18.71 ± 1.71 years; 237 females, mean age = 18.24 ± 0.88 years) and a university in the US (N = 329; 128 males, mean age = 18.71 ± 0.88 years; 201 females, mean age = 21.59 ± 2.27 years) completed the ESS. An exploratory factor analysis was conducted to examine the dimensionality of the ESS. Item response theory (IRT) analyses were used to characterize response patterns for the items on the ESS, and differential item functioning (DIF) analyses examined whether the items were interpreted differently between the two languages. The factor analyses suggest that the ESS measures two distinct sleepiness constructs, indicating that the ESS probes sleepiness in settings requiring active versus passive responding. The IRT analyses found that, overall, the items on the ESS perform well as a measure of sleepiness. However, Item 8, and to a lesser extent Item 6, were interpreted differently by respondents in comparison to the other items. In addition, the DIF analyses showed that the responses in German and English were very similar, indicating only minor measurement differences between the two language versions of the ESS. These findings suggest that the ESS provides a reliable measure of the propensity to sleepiness, albeit with a two-factor structure. Researchers and clinicians can use the German and English versions of the ESS but may wish to exclude Item 8 when calculating a total sleepiness score.

  12. Airframe Noise Prediction of a Full Aircraft in Model and Full Scale Using a Lattice Boltzmann Approach

    Science.gov (United States)

    Fares, Ehab; Duda, Benjamin; Khorrami, Mehdi R.

    2016-01-01

    Unsteady flow computations are presented for a Gulfstream aircraft model in landing configuration, i.e., flap deflected 39 deg and main landing gear deployed. The simulations employ the lattice Boltzmann solver PowerFLOW™ to simultaneously capture the flow physics and acoustics in the near field. Sound propagation to the far field is obtained using a Ffowcs Williams and Hawkings acoustic analogy approach. Two geometry representations of the same aircraft are analyzed: an 18% scale, high-fidelity, semi-span model at wind tunnel Reynolds number and a full-scale, full-span model at half-flight Reynolds number. Previously published and newly generated model-scale results are presented; all full-scale data are disclosed here for the first time. Reynolds number and geometrical fidelity effects are carefully examined to discern aerodynamic and aeroacoustic trends with a special focus on the scaling of surface pressure fluctuations and farfield noise. An additional study of the effects of geometrical detail on farfield noise is also documented. The present investigation reveals that, overall, the model-scale and full-scale aeroacoustic results compare rather well. Nevertheless, the study also highlights that finer geometrical details that are typically not captured at model scales can have a non-negligible contribution to the farfield noise signature.

  13. Phenomenology of scaled factorial moments and future approaches for correlation studies

    International Nuclear Information System (INIS)

    Seibert, D.

    1991-01-01

    We show that the definitions of the exclusive and inclusive scaled factorial moments are not equivalent, and propose the use of scaled factorial moments that reduce to the exclusive moments in the case of fixed multiplicity. We then present a new derivation of the multiplicity scaling law for scaled factorial moment data. This scaling law seems to hold, independent of collision energy, for events with fixed projectile and target. However, deviations from this scaling law indicate that correlations in S-Au collisions are 30 times as strong as correlations in hadronic collisions. Finally, we discuss 'split-bin' correlation functions, the most useful tool for future investigations of these anomalously strong hadronic correlations. (orig.)
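One common definition of the inclusive scaled factorial moment, F_q = ⟨n(n−1)···(n−q+1)⟩ / ⟨n⟩^q averaged over bins or events, can be sketched as follows. For uncorrelated (Poisson) multiplicities F_q ≈ 1 for all q, so deviations from unity signal correlations. The Poisson sampler and the parameters are illustrative; this is the textbook moment, not the paper's modified exclusive-reducing variant.

```python
import math, random

random.seed(1)

def scaled_factorial_moment(counts, q):
    # F_q = <n(n-1)...(n-q+1)> / <n>^q, averaged over bins/events
    def falling(n):
        prod = 1
        for i in range(q):
            prod *= (n - i)
        return prod
    mean = sum(counts) / len(counts)
    return sum(falling(n) for n in counts) / len(counts) / mean**q

def poisson(lam):
    # Knuth's Poisson sampler (stdlib-only)
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

counts = [poisson(5.0) for _ in range(200000)]
F2 = scaled_factorial_moment(counts, 2)  # near 1 for uncorrelated multiplicities
```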

  14. Approaching Repetitive Short Circuit Tests on MW-Scale Power Modules by means of an Automatic Testing Setup

    DEFF Research Database (Denmark)

    Reigosa, Paula Diaz; Wang, Huai; Iannuzzo, Francesco

    2016-01-01

    An automatic testing system to perform repetitive short-circuit tests on megawatt-scale IGBT power modules is presented and described in this paper, pointing out the advantages and features of such a testing approach. The developed system is based on a non-destructive short-circuit tester, which has...

  15. Stability and Control of Large-Scale Dynamical Systems A Vector Dissipative Systems Approach

    CERN Document Server

    Haddad, Wassim M

    2011-01-01

    Modern complex large-scale dynamical systems exist in virtually every aspect of science and engineering, and are associated with a wide variety of physical, technological, environmental, and social phenomena, including aerospace, power, communications, and network systems, to name just a few. This book develops a general stability analysis and control design framework for nonlinear large-scale interconnected dynamical systems, and presents the most complete treatment on vector Lyapunov function methods, vector dissipativity theory, and decentralized control architectures. Large-scale dynami

  16. Scale Economies and Industry Agglomeration Externalities: A Dynamic Cost Function Approach

    OpenAIRE

    Donald S. Siegel; Catherine J. Morrison Paul

    1999-01-01

    Scale economies and agglomeration externalities are alleged to be important determinants of economic growth. To assess these effects, the authors outline and estimate a microfoundations model based on a dynamic cost function specification. This model provides for the separate identification of the impacts of externalities and cyclical utilization on short- and long-run scale economies and input substitution patterns. The authors find that scale economies are prevalent in U.S. manufacturing; co...

  17. A watershed-scale approach to tracing metal contamination in the environment

    Science.gov (United States)

    Church, Stanley E

    1996-01-01

    Introduction: Public policy during the 1800s encouraged mining in the western United States. Mining on Federal lands played an important role in the growing economy, creating national wealth from our abundant and diverse mineral resource base. The common industrial practice from the early days of mining through about 1970 in the U.S. was for mine operators to dispose of the mine wastes and mill tailings in the nearest stream reach or lake. As a result of this contamination, many stream reaches below old mines, mills, and mining districts and some major rivers and lakes no longer support aquatic life. Riparian habitats within these affected watersheds have also been impacted. Often, the water from these affected stream reaches is not suitable for drinking, creating a public health hazard. The recent Department of Interior Abandoned Mine Lands (AML) Initiative is an effort on the part of the Federal Government to address the adverse environmental impact of these past mining practices on Federal lands. The AML Initiative has adopted a watershed approach to determine those sites that contribute the majority of the contaminants in the watershed. By remediating the largest sources of contamination within the watershed, the impact of metal contamination in the environment within the watershed as a whole is reduced, rather than focusing largely on those sites for which principal responsible parties can be found. The scope of the problem of metal contamination in the environment from past mining practices in the coterminous U.S. is addressed in a recent report by Ferderer (1996). Using the USGS 1:2,000,000-scale hydrologic drainage basin boundaries and the USGS Minerals Availability System (MAS) data base, he plotted the distribution of 48,000 past-producing metal mines on maps showing the boundaries of lands administered by the various Federal Land Management Agencies (FLMA). Census analysis of these data provided an initial screening tool for prioritization of

  18. Biodiversity conservation in Swedish forests: ways forward for a 30-year-old multi-scaled approach.

    Science.gov (United States)

    Gustafsson, Lena; Perhans, Karin

    2010-12-01

    A multi-scaled model for biodiversity conservation in forests was introduced in Sweden 30 years ago, which makes it a pioneer example of an integrated ecosystem approach. Trees are set aside for biodiversity purposes at multiple scale levels varying from individual trees to areas of thousands of hectares, with landowner responsibility at the lowest level and with increasing state involvement at higher levels. Ecological theory supports the multi-scaled approach, and retention efforts at every harvest occasion stimulate landowners' interest in conservation. We argue that the model has large advantages but that in a future with intensified forestry and global warming, development based on more progressive thinking is necessary to maintain and increase biodiversity. Suggestions for the future include joint planning for several forest owners, consideration of cost-effectiveness, accepting opportunistic work models, adjusting retention levels to stand and landscape composition, introduction of temporary reserves, creation of "receiver habitats" for species escaping climate change, and protection of young forests.

  19. Random noise attenuation of non-uniformly sampled 3D seismic data along two spatial coordinates using non-equispaced curvelet transform

    Science.gov (United States)

    Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi

    2018-04-01

    The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. First, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions; then the 2D non-equispaced fast Fourier transform (NFFT) is introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated using the spectral projected-gradient inversion algorithm for ℓ1-norm problems. Local threshold factors are then chosen for the uniform curvelet coefficients at each decomposition scale, and effective curvelet coefficients are obtained for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. The examples for synthetic and real data reveal the effectiveness of the proposed approach in noise attenuation for non-uniformly sampled data compared with the conventional FDCT method and wavelet transformation.
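The core idea, linking uniformly spaced transform coefficients to non-uniformly sampled data through a regularized inversion, can be sketched in one dimension with plain Fourier modes in place of curvelets. The signal, sample positions, regularization weight, and the ℓ2 least-squares solver are all illustrative assumptions; the paper's method operates on curvelet coefficients per time slice and uses an ℓ1 spectral projected-gradient solver instead.

```python
import cmath, math, random

random.seed(2)
N = 8    # uniform Fourier modes k = 0..N-1
M = 24   # non-uniform sample positions in [0, 1)
xs = sorted(random.random() for _ in range(M))

# test signal: two Fourier modes (k = 1 and k = 3) plus noise
def signal(x):
    return cmath.exp(2j * math.pi * 1 * x) + 0.5 * cmath.exp(2j * math.pi * 3 * x)

ys = [signal(x) + complex(random.gauss(0, 0.05), random.gauss(0, 0.05)) for x in xs]

# forward non-equispaced DFT operator A[j][k] = exp(2*pi*i*k*x_j)
A = [[cmath.exp(2j * math.pi * k * x) for k in range(N)] for x in xs]

# regularized normal equations: (A^H A + eps I) c = A^H y
eps = 1e-3
G = [[sum(A[m][i].conjugate() * A[m][j] for m in range(M)) + (eps if i == j else 0)
      for j in range(N)] for i in range(N)]
rhs = [sum(A[m][i].conjugate() * ys[m] for m in range(M)) for i in range(N)]

# complex Gaussian elimination with partial pivoting
for col in range(N):
    piv = max(range(col, N), key=lambda r: abs(G[r][col]))
    G[col], G[piv] = G[piv], G[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(col + 1, N):
        f = G[r][col] / G[col][col]
        G[r] = [a - f * b for a, b in zip(G[r], G[col])]
        rhs[r] -= f * rhs[col]
c = [0j] * N
for i in range(N - 1, -1, -1):
    c[i] = (rhs[i] - sum(G[i][j] * c[j] for j in range(i + 1, N))) / G[i][i]
# c now holds uniform-grid coefficients recovered from non-uniform samples
```

The recovered coefficient magnitudes should peak at modes 1 and 3, matching the signal, with the remaining modes near the noise level.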

  20. Climatic and physiographic controls on catchment-scale nitrate loss at different spatial scales: insights from a top-down model development approach

    Science.gov (United States)

    Shafii, Mahyar; Basu, Nandita; Schiff, Sherry; Van Cappellen, Philippe

    2017-04-01

    The dramatic increase in nitrogen circulating in the biosphere due to anthropogenic activities has resulted in impaired water quality in groundwater and surface water, causing eutrophication in coastal regions. Understanding the fate and transport of nitrogen from landscape to coastal areas requires exploring the drivers of nitrogen processes in both time and space, as well as the identification of appropriate flow pathways. Conceptual models can be used as diagnostic tools to provide insights into such controls. However, diagnostic evaluation of coupled hydrological-biogeochemical models is challenging. This research proposes a top-down methodology utilizing hydrochemical signatures to develop conceptual models for simulating the integrated streamflow and nitrate responses while taking into account dominant controls on nitrate variability (e.g., climate, soil water content, etc.). Our main objective is to seek appropriate model complexity that sufficiently reproduces multiple hydrological and nitrate signatures. Having developed a suitable conceptual model for a given watershed, we employ it in sensitivity studies to demonstrate the dominant process controls that contribute to the nitrate response at scales of interest. We apply the proposed approach to nitrate simulation in a range of small to large sub-watersheds in the Grand River Watershed (GRW) located in Ontario. Such a multi-basin modeling experiment will enable us to address process scaling and investigate the consequences of lumping processes in terms of models' predictive capability. The proposed methodology can be applied to the development of large-scale models that can help decision-making associated with nutrient management at the regional scale.

  1. Uniform risk functionals for characterization of strong earthquake ground motions

    International Nuclear Information System (INIS)

    Anderson, J.G.; Trifunac, M.D.

    1978-01-01

    A uniform risk functional (e.g., Fourier spectrum, response spectrum, duration, etc.) is defined so that the probability that it is exceeded by some earthquake during a selected period of time is independent of the frequency of seismic waves. Such a functional is derived by an independent calculation, at each frequency, for the probability that the quantity being considered will be exceeded. Different aspects of the seismicity can control the amplitude of a uniform risk functional in different frequency ranges, and a uniform risk functional does not necessarily describe the strong shaking from any single earthquake. To be useful for calculating uniform risk functionals, a scaling relationship must provide an independent estimate of amplitudes of the functional in several frequency bands. The scaling relationship of Trifunac (1976) for Fourier spectra satisfies this requirement and further describes the distribution of spectral amplitudes about the mean trend; here, it is applied to find uniform risk Fourier amplitude spectra. In an application to finding the uniform risk spectra at a realistic site, this method is quite sensitive to the description of seismicity. Distinct models of seismicity, all consistent with our current level of knowledge of an area, can give significantly different risk estimates
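A uniform risk functional can be sketched by inverting, frequency by frequency, a hazard curve so that every spectral ordinate has the same probability of being exceeded during the exposure period. The exponential hazard curves, rates, amplitudes, and the Poisson occurrence model below are purely illustrative assumptions, not the seismicity models of Trifunac (1976) or the paper.

```python
import math

# hypothetical per-frequency hazard: annual rate of exceeding amplitude a
# lam(a, f) = lam0 * exp(-a / a0(f)); lam0 and a0 are illustrative only
freqs = [0.5, 1.0, 2.0, 5.0, 10.0]                        # Hz
a0 = {0.5: 40, 1.0: 60, 2.0: 80, 5.0: 50, 10.0: 25}       # amplitude scales
lam0 = 0.2                                                 # events/year at a = 0
T, p = 50.0, 0.10                                          # 10% exceedance in 50 years

def prob_exceed(a, f):
    lam = lam0 * math.exp(-a / a0[f])
    return 1 - math.exp(-lam * T)      # Poisson occurrence model

def uniform_risk_amplitude(f):
    # bisection: find the amplitude whose exceedance probability equals p
    lo, hi = 0.0, 1000.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if prob_exceed(mid, f) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# every ordinate of this spectrum carries the same risk p over T years
spectrum = {f: uniform_risk_amplitude(f) for f in freqs}
```

Because each frequency is inverted independently, different aspects of the (hypothetical) seismicity can control different frequency bands, exactly the property the abstract describes.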

  2. Improved regional-scale Brazilian cropping systems' mapping based on a semi-automatic object-based clustering approach

    Science.gov (United States)

    Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha

    2018-06-01

    Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a prior stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large-area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
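The unsupervised clustering step can be sketched with a plain k-means on synthetic annual NDVI profiles, here two cropping patterns, single versus double cropping, each producing one or two green-up peaks. The profiles, k, initialization, and noise level are illustrative assumptions, not the authors' MODIS data or their exact algorithm.

```python
import math, random

random.seed(3)

def ndvi_profile(double_crop):
    # synthetic 23-step annual NDVI curve: one or two green-up peaks
    t = [i / 22 for i in range(23)]
    def peak(c, w):
        return [math.exp(-((x - c) / w) ** 2) for x in t]
    base = peak(0.35, 0.12)
    if double_crop:
        base = [a + b for a, b in zip(base, peak(0.75, 0.10))]
    return [0.2 + 0.6 * v + random.gauss(0, 0.02) for v in base]

objs = [ndvi_profile(i % 2 == 0) for i in range(100)]  # alternating patterns

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# plain k-means, k = 2, seeded with one object of each pattern
cents = [objs[0][:], objs[1][:]]
for _ in range(15):
    groups = [[], []]
    for o in objs:
        groups[0 if dist(o, cents[0]) < dist(o, cents[1]) else 1].append(o)
    for g in range(2):
        if groups[g]:
            cents[g] = [sum(col) / len(groups[g]) for col in zip(*groups[g])]

labels = [0 if dist(o, cents[0]) < dist(o, cents[1]) else 1 for o in objs]
```

With well-separated temporal signatures the two cropping patterns fall into two clean clusters, which is the behavior the object-based hyperclustering relies on.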

  3. Prospective and participatory integrated assessment of agricultural systems from farm to regional scales: Comparison of three modeling approaches.

    Science.gov (United States)

    Delmotte, Sylvestre; Lopez-Ridaura, Santiago; Barbier, Jean-Marc; Wery, Jacques

    2013-11-15

    Evaluating the impacts of the development of alternative agricultural systems, such as organic or low-input cropping systems, in the context of an agricultural region requires the use of specific tools and methodologies. They should allow a prospective (using scenarios), multi-scale (taking into account the field, farm and regional level), integrated (notably multicriteria) and participatory assessment, abbreviated PIAAS (for Participatory Integrated Assessment of Agricultural System). In this paper, we compare the possible contribution to PIAAS of three modeling approaches i.e. Bio-Economic Modeling (BEM), Agent-Based Modeling (ABM) and statistical Land-Use/Land Cover Change (LUCC) models. After a presentation of each approach, we analyze their advantages and drawbacks, and identify their possible complementarities for PIAAS. Statistical LUCC modeling is a suitable approach for multi-scale analysis of past changes and can be used to start discussion about the futures with stakeholders. BEM and ABM approaches have complementary features for scenarios assessment at different scales. While ABM has been widely used for participatory assessment, BEM has been rarely used satisfactorily in a participatory manner. On the basis of these results, we propose to combine these three approaches in a framework targeted to PIAAS. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. 46 CFR 310.11 - Cadet uniforms.

    Science.gov (United States)

    2010-10-01

    46 Shipping 8, 2010-10-01. Section 310.11, Shipping, MARITIME ADMINISTRATION, regulations for State, Territorial or Regional Maritime Academies and Colleges. § 310.11 Cadet uniforms. Cadet uniforms shall be supplied at the school in accordance with the uniform regulations of the School. Those...

  5. Measurement Invariance of the Passion Scale across Three Samples: An ESEM Approach

    Science.gov (United States)

    Schellenberg, Benjamin J. I.; Gunnell, Katie E.; Mosewich, Amber D.; Bailis, Daniel S.

    2014-01-01

    Sport and exercise psychology researchers rely on the Passion Scale to assess levels of harmonious and obsessive passion for many different types of activities (Vallerand, 2010). However, this practice assumes that items from the Passion Scale are interpreted with the same meaning across all activity types. Using exploratory structural equation…

  6. A pragmatic approach to modelling soil and water conservation measures with a catchment scale erosion model.

    NARCIS (Netherlands)

    Hessel, R.; Tenge, A.J.M.

    2008-01-01

    To reduce soil erosion, soil and water conservation (SWC) methods are often used. However, no method exists to model beforehand how implementing such measures will affect erosion at catchment scale. A method was developed to simulate the effects of SWC measures with catchment scale erosion models.

  7. Multi-scale modeling with cellular automata: The complex automata approach

    NARCIS (Netherlands)

    Hoekstra, A.G.; Falcone, J.-L.; Caiazzo, A.; Chopard, B.

    2008-01-01

    Cellular Automata are commonly used to describe complex natural phenomena. In many cases it is required to capture the multi-scale nature of these phenomena. A single Cellular Automata model may not be able to efficiently simulate a wide range of spatial and temporal scales. It is our goal to

  8. The stokes number approach to support scale-up and technology transfer of a mixing process

    NARCIS (Netherlands)

    Willemsz, T.A.; Hooijmaijers, R.; Rubingh, C.M.; Frijlink, H.W.; Vromans, H.; Voort Maarschalk, K. van der

    2012-01-01

    Transferring processes between different scales and types of mixers is a common operation in industry. Challenges within this operation include the existence of considerable differences in blending conditions between mixer scales and types. Obtaining the correct blending conditions is crucial for

  9. The Stokes number approach to support scale-up and technology transfer of a mixing process

    NARCIS (Netherlands)

    Willemsz, Tofan A; Hooijmaijers, Ricardo; Rubingh, Carina M; Frijlink, Henderik W; Vromans, Herman; van der Voort Maarschalk, Kees

    Transferring processes between different scales and types of mixers is a common operation in industry. Challenges within this operation include the existence of considerable differences in blending conditions between mixer scales and types. Obtaining the correct blending conditions is crucial for

  10. A study of flame spread in engineered cardboard fuelbeds: Part II: Scaling law approach

    Science.gov (United States)

    Brittany A. Adam; Nelson K. Akafuah; Mark Finney; Jason Forthofer; Kozo Saito

    2013-01-01

    In this second part of a two-part exploration of dynamic behavior observed in wildland fires, the time scales differentiating convective and radiative heat transfer are further explored. Scaling laws are considered for the two different types of heat transfer: radiation-driven fire spread and convection-driven fire spread, which can both occur during wildland fires. A new...

  11. Optimization Approach for Multi-scale Segmentation of Remotely Sensed Imagery under k-means Clustering Guidance

    Directory of Open Access Journals (Sweden)

    WANG Huixian

    2015-05-01

    Full Text Available In order to adapt segmentation to land cover at different scales, an optimized approach for multi-scale segmentation under the guidance of k-means clustering is proposed. First, small-scale segmentation and k-means clustering are used to process the original images; then the result of k-means clustering is used to guide the object-merging procedure, in which the Otsu threshold method is used to automatically select the impact factor of k-means clustering; finally we obtain segmentation results which are applicable to objects of different scales. The FNEA method is taken as an example and segmentation experiments are done using a simulated image and a real remote sensing image from the GeoEye-1 satellite; qualitative and quantitative evaluation demonstrates that the proposed method can obtain high-quality segmentation results.
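The automatic selection step relies on Otsu's method, which picks the threshold that maximizes the between-class variance of a histogram. A minimal histogram-based implementation might look as follows; the bin count and the bimodal sample data are illustrative assumptions.

```python
import random

def otsu_threshold(values, bins=256):
    # Otsu's method: pick the threshold maximizing between-class variance
    lo, hi = min(values), max(values)
    hist = [0] * bins
    for v in values:
        hist[min(bins - 1, int((v - lo) / (hi - lo) * bins))] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best, best_bin, w0, sum0 = -1.0, 0, 0, 0
    for i in range(bins):
        w0 += hist[i]           # weight of the lower class
        if w0 == 0:
            continue
        w1 = total - w0         # weight of the upper class
        if w1 == 0:
            break
        sum0 += i * hist[i]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1   # class means
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best:
            best, best_bin = between, i
    return lo + (best_bin + 0.5) * (hi - lo) / bins

random.seed(4)
# bimodal sample: two well-separated value clusters
vals = ([random.gauss(0.2, 0.05) for _ in range(500)]
        + [random.gauss(0.8, 0.05) for _ in range(500)])
t = otsu_threshold(vals)   # lands between the two modes
```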

  12. Uniformity across 200 mm silicon wafers printed by nanoimprint lithography

    International Nuclear Information System (INIS)

    Gourgon, C; Perret, C; Tallal, J; Lazzarino, F; Landis, S; Joubert, O; Pelzer, R

    2005-01-01

    Uniformity of the printing process is one of the key parameters of nanoimprint lithography. This technique has to be extended to large size wafers to be useful for several industrial applications, and the uniformity of micro and nanostructures has to be guaranteed on large surfaces. This paper presents results of printing on 200 mm diameter wafers. The residual thickness uniformity after printing is demonstrated at the wafer scale in large patterns (100 μm), in smaller lines of 250 nm and in sub-100 nm features. We show that a mould deformation occurs during the printing process, and that this deformation is needed to guarantee printing uniformity. However, the mould deformation is also responsible for the potential degradation of the patterns

  13. Transversals in 4-uniform hypergraphs

    DEFF Research Database (Denmark)

    Henning, Michael A; Yeo, Anders

    2016-01-01

    with maximum degree Δ(H) ≤ 3, then τ(H) ≤ n/4 + m/6, which proves a known conjecture. We show that an easy corollary of our main result is that if H is a 4-uniform hypergraph with n vertices and n edges, then τ(H) ≤ 3n/7, which was the main result of the Thomassé-Yeo paper [Combinatorica 27 (2007), 473...

  14. ESPRIT And Uniform Linear Arrays

    Science.gov (United States)

    Roy, R. H.; Goldburg, M.; Ottersten, B. E.; Swindlehurst, A. L.; Viberg, M.; Kailath, T.

    1989-11-01

    ESPRIT is a recently developed and patented technique for high-resolution estimation of signal parameters. It exploits an invariance structure designed into the sensor array to achieve a reduction in computational requirements of many orders of magnitude over previous techniques such as MUSIC, Burg's MEM, and Capon's ML, and in addition achieves performance improvement as measured by parameter estimate error variance. It is also manifestly more robust with respect to sensor errors (e.g. gain, phase, and location errors) than other methods as well. Whereas ESPRIT only requires that the sensor array possess a single invariance best visualized by considering two identical but otherwise arbitrary arrays of sensors displaced (but not rotated) with respect to each other, many arrays currently in use in various applications are uniform linear arrays of identical sensor elements. Phased array radars are commonplace in high-resolution direction finding systems, and uniform tapped delay lines (i.e., constant rate A/D converters) are the rule rather than the exception in digital signal processing systems. Such arrays possess many invariances, and are amenable to other types of analysis, which is one of the main reasons such structures are so prevalent. Recent developments in high-resolution algorithms of the signal/noise subspace genre, including total least squares (TLS) ESPRIT applied to uniform linear arrays, are summarized. ESPRIT is also shown to be a generalization of the root-MUSIC algorithm (applicable only to the case of uniform linear arrays of omni-directional sensors and unimodular cisoids). Comparisons with various estimator bounds, including Cramér-Rao bounds, are presented.

  15. Uniform-droplet spray forming

    Energy Technology Data Exchange (ETDEWEB)

    Blue, C.A.; Sikka, V.K. [Oak Ridge National Lab., TN (United States); Chun, Jung-Hoon [Massachusetts Institute of Technology, Cambridge, MA (United States); Ando, T. [Tufts Univ., Medford, MA (United States)

    1997-04-01

    The uniform-droplet process is a new method of liquid-metal atomization that produces single droplets that can be used to make mono-size powders or be sprayed onto substrates to produce near-net shapes with tailored microstructures. The mono-sized powder-production capability of the uniform-droplet process also has the potential of permitting engineered powder blends to produce components of controlled porosity. Metal and alloy powders are commercially produced by at least three different methods: gas atomization, water atomization, and rotating disk. All three methods produce powders with a broad size range and a very small yield of fine powders. The economic analysis has shown the process to have the potential of reducing capital cost by 50% and operating cost by 37.5% when applied to powder making. For the spray-forming process, a 25% savings is expected in both capital and operating costs. The project is jointly carried out at the Massachusetts Institute of Technology (MIT), Tufts University, and Oak Ridge National Laboratory (ORNL). Preliminary interactions with both finished-part and powder producers have shown a strong interest in the uniform-droplet process. Systematic studies are being conducted to optimize the process parameters, understand the solidification of droplets and spray deposits, and develop a uniform-droplet-system (UDS) apparatus appropriate for processing engineering alloys.

  16. Experience of Integrated Safeguards Approach for Large-scale Hot Cell Laboratory

    International Nuclear Information System (INIS)

    Miyaji, N.; Kawakami, Y.; Koizumi, A.; Otsuji, A.; Sasaki, K.

    2010-01-01

    The Japan Atomic Energy Agency (JAEA) has been operating a large-scale hot cell laboratory, the Fuels Monitoring Facility (FMF), located near the experimental fast reactor Joyo at the Oarai Research and Development Center (JNC-2 site). The FMF conducts post irradiation examinations (PIE) of fuel assemblies irradiated in Joyo. The assemblies are disassembled and non-destructive examinations, such as X-ray computed tomography tests, are carried out. Some of the fuel pins are cut into specimens and destructive examinations, such as ceramography and X-ray micro analyses, are performed. Following PIE, the tested material, in the form of a pin or segments, is shipped back to a Joyo spent fuel pond. In some cases, after reassembly of the examined irradiated fuel pins is completed, the fuel assemblies are shipped back to Joyo for further irradiation. For the IAEA to apply the integrated safeguards approach (ISA) to the FMF, a new verification system on material shipping and receiving process between Joyo and the FMF has been established by the IAEA under technical collaboration among the Japan Safeguard Office (JSGO) of MEXT, the Nuclear Material Control Center (NMCC) and the JAEA. The main concept of receipt/shipment verification under the ISA for JNC-2 site is as follows: under the IS, the FMF is treated as a Joyo-associated facility in terms of its safeguards system because it deals with the same spent fuels. Verification of the material shipping and receiving process between Joyo and the FMF can only be applied to the declared transport routes and transport casks. The verification of the nuclear material contained in the cask is performed with the method of gross defect at the time of short notice random interim inspections (RIIs) by measuring the surface neutron dose rate of the cask, filled with water to reduce radiation. 
The JAEA performed a series of preliminary tests with the IAEA, the JSGO and the NMCC, and confirmed from the standpoint of the operator that this

  17. Multidisciplinary approach and multi-scale elemental analysis and separation chemistry

    International Nuclear Information System (INIS)

    Mariet, Clarisse

    2014-01-01

    The development of methods for the analysis of trace elements is an important component of my research activities, whether for radiometric measurement or mass spectrometric detection. Many studies raise the question of the chemical signature of a sample or a process: the eruptive behavior of a volcano, an indicator of pollution, ion exchange in vesicle vectors of active principles, etc. Each time, highly sensitive, accurate and multi-elemental analytical procedures, as well as the development of specific protocols, were needed. Neutron activation analysis has often been used as a reference procedure and allowed validation of the chemical lixiviation and the measurement by ICP-MS. Analysis of radioactive samples requires skills in trace analysis but also in separation chemistry. Two separation methods occupy an important place in the separation chemistry of radionuclides: chromatography and liquid-liquid extraction. The study of the extraction of lanthanide(III) by octyl(phenyl)-N,N-diisobutyl-carbamoylmethyl phosphine oxide (CMPO) and a calixarene-CMPO led to a better understanding and quantification of the influence of operating conditions on their extraction performance and selectivity. The high concentration of salts in aqueous solutions required reasoning in terms of thermodynamic activities, relying on a comprehensive approach to the quantification of deviations from ideality. In order to reduce the amount of waste generated and the costs, alternatives to hydrometallurgical extraction processes were considered, using low-temperature ionic liquids as alternative solvents in biphasic processes. Following the same logic of effluent reduction, the miniaturization of liquid-liquid extraction is also studied, so as to exploit the characteristics of the microscopic scale (very large specific surface, short diffusion distances). The miniaturization of chromatographic separations pursues the same goals of reducing volumes of wastes and reagents. The miniaturization of the separation of uranium

  18. Comparative study of random and uniform models for the distribution of TRISO particles in HTR-10 fuel elements

    International Nuclear Information System (INIS)

    Rosales, J.; Perez, J.; Garcia, C.; Munnoz, A.; Lira, C. A. B. O.

    2015-01-01

    TRISO particles are the distinguishing feature of the HTR-10 and, more generally, of HTGR reactors. Their heterogeneity and random arrangement in the graphite matrix of these reactors create a significant modeling challenge. In MCNPX simulations of spherical fuel elements, repetitive structures based on uniform distribution models are usually created. The use of these repetitive structures introduces two major approximations: the non-randomness of the TRISO particles inside the pebbles and the intersection of the pebble surface with the TRISO particles. These approximations could significantly affect the multiplicative properties of the core. In order to study their influence on the multiplicative properties, the k-infinity value was estimated in one pebble with white boundary conditions using four different configurations for the distribution of the TRISO particles inside the pebble: a uniform hexagonal model, a uniform cubic model, a uniform cubic model without the cutting effect, and a random distribution model. The impact of these models at core scale was studied by solving problem B1 from the Benchmark Problems presented in a Coordinated Research Program of the IAEA. (Author)
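A random distribution model of the kind compared here can be sketched with random sequential addition: non-overlapping particle centers are drawn inside the pebble until a target count is reached. The radii, the particle count (a real pebble holds on the order of 10^4 particles), and the fueled-zone size below are illustrative assumptions, not the HTR-10 specification.

```python
import math, random

random.seed(5)
R_pebble = 25.0   # mm, fueled-zone radius (illustrative)
r_p = 0.46        # mm, TRISO particle outer radius (illustrative)
target = 500      # far fewer particles than a real pebble, for speed

def random_point_in_sphere(R):
    # rejection sampling of a uniform point inside a sphere of radius R
    while True:
        x, y, z = (random.uniform(-R, R) for _ in range(3))
        if x * x + y * y + z * z <= R * R:
            return (x, y, z)

placed, attempts = [], 0
while len(placed) < target and attempts < 20000:
    attempts += 1
    c = random_point_in_sphere(R_pebble - r_p)   # keep particles fully inside
    # random sequential addition: accept only non-overlapping centers
    if all((c[0] - q[0]) ** 2 + (c[1] - q[1]) ** 2 + (c[2] - q[2]) ** 2
           >= (2 * r_p) ** 2 for q in placed):
        placed.append(c)

# packing fraction of the fueled zone
frac = len(placed) * r_p ** 3 / R_pebble ** 3
```

By construction no particle overlaps another or cuts the pebble surface, which is exactly the pair of approximations the uniform repetitive-structure models violate.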

  19. General Biology and Current Management Approaches of Soft Scale Pests (Hemiptera: Coccidae).

    Science.gov (United States)

    Camacho, Ernesto Robayo; Chong, Juang-Horng

    We summarize the economic importance, biology, and management of soft scales, focusing on pests of agricultural, horticultural, and silvicultural crops in outdoor production systems and urban landscapes. We also provide summaries on voltinism, crawler emergence timing, and predictive models for crawler emergence to assist in developing soft scale management programs. Phloem-feeding soft scale pests cause direct (e.g., injuries to plant tissues and removal of nutrients) and indirect damage (e.g., reduction in photosynthesis and aesthetic value by honeydew and sooty mold). Variations in life cycle, reproduction, fecundity, and behavior exist among congenerics due to host, environmental, climatic, and geographical variations. Sampling of soft scale pests involves sighting the insects or their damage, and assessing their abundance. Crawlers of most univoltine species emerge in the spring and the summer. Degree-day models and plant phenological indicators help determine the initiation of sampling and treatment against crawlers (the life stage most vulnerable to contact insecticides). The efficacy of cultural management tactics, such as fertilization, pruning, and irrigation, in reducing soft scale abundance is poorly documented. A large number of parasitoids and predators attack soft scale populations in the field; therefore, natural enemy conservation by using selective insecticides is important. Systemic insecticides provide greater flexibility in application method and timing, and have longer residual longevity than contact insecticides. Application timing of contact insecticides that coincides with crawler emergence is most effective in reducing soft scale abundance.
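The degree-day models mentioned above accumulate heat units above a developmental threshold to time crawler emergence. A minimal sketch using the simple averaging method follows; base temperatures and emergence thresholds vary by species, and the values in the test are hypothetical.

```python
def degree_days(daily_min_max_temps, base_temp):
    """Accumulate growing degree-days with the simple averaging method.

    daily_min_max_temps: iterable of (t_min, t_max) daily temperatures
    base_temp: developmental threshold below which no development occurs
    """
    total = 0.0
    for t_min, t_max in daily_min_max_temps:
        daily_mean = (t_min + t_max) / 2.0
        total += max(0.0, daily_mean - base_temp)
    return total

def crawlers_expected(accumulated_dd, emergence_threshold_dd):
    """Flag whether accumulated degree-days have reached an emergence threshold."""
    return accumulated_dd >= emergence_threshold_dd
```

Sampling and contact-insecticide applications would then begin once the accumulated total crosses the species-specific threshold, or when the associated plant phenological indicator is observed.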

  20. Evolution of feeding specialization in Tanganyikan scale-eating cichlids: a molecular phylogenetic approach

    Directory of Open Access Journals (Sweden)

    Nishida Mutsumi

    2007-10-01

    Background: Cichlid fishes in Lake Tanganyika exhibit remarkable diversity in their feeding habits. Among them, seven species in the genus Perissodus are known for their unique feeding habit of scale eating, with specialized feeding morphology and behaviour. Although the origin of the scale-eating habit has long been questioned, its evolutionary process is still unknown. In the present study, we conducted interspecific phylogenetic analyses for all nine known species in the tribe Perissodini (seven Perissodus and two Haplotaxodon species) using amplified fragment length polymorphism (AFLP) analyses of the nuclear DNA. On the basis of the resultant phylogenetic frameworks, the evolution of their feeding habits was traced using data from analyses of stomach contents, habitat depths, and observations of oral jaw tooth morphology. Results: AFLP analyses resolved the phylogenetic relationships of the Perissodini, strongly supporting monophyly for each species. The character reconstruction of feeding ecology based on the AFLP tree suggested that scale eating evolved from general carnivorous feeding to highly specialized scale eating. Furthermore, scale eating is suggested to have evolved in deepwater habitats in the lake. Oral jaw tooth shape was also estimated to have diverged in step with specialization for scale eating. Conclusion: The present evolutionary analyses of feeding ecology and morphology based on the obtained phylogenetic tree demonstrate for the first time the evolutionary process leading from generalised to highly specialized scale eating, with diversification in feeding morphology and behaviour among species.

  1. A GIS-based approach to prevent contamination of groundwater at regional scale

    Science.gov (United States)

    Balderacchi, M.; Vischetti, C.; di Guardo, A.; Trevisan, M.

    2009-04-01

    Sustainable development is a fundamental objective of the European Union. Since 1991, numerical models have been used to assess the environmental fate of pesticides (Directive 91/414/EEC). Since then, new approaches to assessing pesticide contamination have been developed. This is an ongoing process, with approaches coming increasingly close to reality. The current challenge is to integrate the most advanced and cost-effective monitoring strategies with simulation models so that reliable indicators of unsaturated flow and transport can be suitably mapped and coupled with other indicators related to productivity and sustainability. The most relevant role of GIS in the analysis of pesticide fate in soil is in processing input data together with the results of model-based simulations of pesticide transport. FitoMarche is a GIS-based software tool that estimates pesticide movement in the unsaturated zone using MACRO 5; it is able to simulate complex, realistic crop rotations at the regional scale. Crop rotation involves the sequential production of different plant species on the same land; each crop is characterized by different agricultural practices that involve the use of different pesticides at different doses. FitoMarche extracts MACRO input data from a series of geographic data sets (shapefiles) and an internal database, writes the MACRO input files, executes the simulation, and extracts solute and water fluxes from the MACRO output files. The study was performed in the Marche region, located in central Italy along the Adriatic coast. Soil, climate, and land-use shapefiles were provided by public authorities; crop rotation schemes were estimated at municipality level from the 5th agricultural census database of ISTAT (the national statistics institute), with agricultural practices following local customs. Two herbicides were tested: "A" is employed on maize, and "B" on maize, sunflower, and sugar beet. In the

  2. A REGION-BASED MULTI-SCALE APPROACH FOR OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    T. Kavzoglu

    2016-06-01

    Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness, and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter is crucially important to increase the classification accuracy, which depends on image resolution, image object size, and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate, and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest-neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation, carried out on the classified images, showed that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
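The LV-RoC graphs mentioned above plot local variance (LV) against segmentation scale together with its rate of change (RoC); peaks in RoC suggest candidate scale parameters. A sketch of the RoC computation follows — the LV values themselves come from the segmentation software, and the numbers in the test are hypothetical.

```python
def rate_of_change(local_variance):
    """ESP-style rate of change (%) of local variance between consecutive scales."""
    if len(local_variance) < 2:
        return []
    return [
        100.0 * (local_variance[i] - local_variance[i - 1]) / local_variance[i - 1]
        for i in range(1, len(local_variance))
    ]

def candidate_scales(scales, local_variance, top_n=3):
    """Return the scales with the largest RoC peaks as candidate scale parameters."""
    roc = rate_of_change(local_variance)
    ranked = sorted(zip(roc, scales[1:]), reverse=True)
    return [scale for _, scale in ranked[:top_n]]
```

In the region-based strategy, this ranking would be repeated per sub-region to yield fine, moderate, and coarse candidates for each.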

  3. An advanced online monitoring approach to study the scaling behavior in direct contact membrane distillation

    KAUST Repository

    Lee, Jung Gil; Jang, Yongsun; Fortunato, Luca; Jeong, Sanghyun; Lee, Sangho; Leiknes, TorOve; Ghaffour, NorEddine

    2017-01-01

    scaling was performed by using various analytical methods, especially an in-situ monitoring technique using optical coherence tomography (OCT) to observe a cross-sectional view of the membrane surface during operation. Different concentrations of Ca

  4. Understanding protected area resilience: a multi-scale, social-ecological approach

    Science.gov (United States)

    Cumming, Graeme S.; Allen, Craig R.; Ban, Natalie C.; Biggs, Duan; Biggs, Harry C.; Cumming, David H.M; De Vos, Alta; Epstein, Graham; Etienne, Michel; Maciejewski, Kristine; Mathevet, Raphael; Moore, Christine; Nenadovic, Mateja; Schoon, Michael

    2015-01-01

    Protected areas (PAs) remain central to the conservation of biodiversity. Classical PAs were conceived as areas that would be set aside to maintain a natural state with minimal human influence. However, global environmental change and growing cross-scale anthropogenic influences mean that PAs can no longer be thought of as ecological islands that function independently of the broader social-ecological system in which they are located. For PAs to be resilient (and to contribute to broader social-ecological resilience), they must be able to adapt to changing social and ecological conditions over time in a way that supports the long-term persistence of populations, communities, and ecosystems of conservation concern. We extend Ostrom's social-ecological systems framework to consider the long-term persistence of PAs, as a form of land use embedded in social-ecological systems, with important cross-scale feedbacks. Most notably, we highlight the cross-scale influences and feedbacks on PAs that exist from the local to the global scale, contextualizing PAs within multi-scale social-ecological functional landscapes. Such functional landscapes are integral to understand and manage individual PAs for long-term sustainability. We illustrate our conceptual contribution with three case studies that highlight cross-scale feedbacks and social-ecological interactions in the functioning of PAs and in relation to regional resilience. Our analysis suggests that while ecological, economic, and social processes are often directly relevant to PAs at finer scales, at broader scales, the dominant processes that shape and alter PA resilience are primarily social and economic.

  5. Solid-state electrochemistry on the nanometer and atomic scales: the scanning probe microscopy approach

    Science.gov (United States)

    Strelcov, Evgheni; Yang, Sang Mo; Jesse, Stephen; Balke, Nina; Vasudevan, Rama K.; Kalinin, Sergei V.

    2016-01-01

    Energy technologies of the 21st century require understanding and precise control over ion transport and electrochemistry at all length scales – from single atoms to macroscopic devices. This short review provides a summary of recent works dedicated to methods of advanced scanning probe microscopy for probing electrochemical transformations in solids at the meso-, nano- and atomic scales. The discussion presents the advantages and limitations of several techniques and a wealth of examples highlighting peculiarities of nanoscale electrochemistry. PMID:27146961

  6. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback: The PCN Incubation-Panarctic Thermal (PInc-PanTher) Scaling Approach

    Science.gov (United States)

    Koven, C. D.; Schuur, E.; Schaedel, C.; Bohn, T. J.; Burke, E.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A.; Marchenko, S. S.; McGuire, A. D.; Natali, S.; Nicolsky, D.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M. R.

    2015-12-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a 3-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100.
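The pool-based decomposition described above can be sketched as first-order decay of fast, slow, and passive pools whose rate constants respond to soil temperature. A Q10 form is assumed here purely for illustration; PInc-PanTher's actual trajectories come from a 3-pool model fitted to incubation data per soil-horizon type. Frozen soil is represented by switching decomposition off below 0 °C, matching the stated assumption that frozen C stocks do not decompose at all.

```python
def decompose(pools, rate_constants, q10s, soil_temp_c, years,
              ref_temp_c=5.0, dt=0.05):
    """First-order decay of multiple soil C pools with Q10 temperature scaling.

    pools: initial carbon stocks per pool (e.g. fast, slow, passive)
    rate_constants: decay rates per pool at the reference temperature, 1/yr
    q10s: Q10 temperature sensitivities per pool
    Returns the remaining stock in each pool after `years`.
    """
    if soil_temp_c <= 0.0:
        return list(pools)  # frozen: no decomposition at all
    pools = list(pools)
    steps = int(years / dt)
    for _ in range(steps):
        for i, k_ref in enumerate(rate_constants):
            k = k_ref * q10s[i] ** ((soil_temp_c - ref_temp_c) / 10.0)
            pools[i] -= k * pools[i] * dt
    return pools
```

The soil temperature driving such a calculation would come from the thermal modules of the ecosystem model simulations, not from the decomposition model itself.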

  7. An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging

    Science.gov (United States)

    Linares, R.; Furfaro, R.

    The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals into sensor and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training neural-network-based DRL approaches exist, but most are not suitable for high-dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high-dimensional systems, and this work leverages these results and applies the approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for the tasking of a single telescope. Since the number of space objects (SOs) is relatively high, each sensor will have a large number of possible actions at any given time; therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high-dimensional data: for example, DRL methods have been applied to image processing for autonomous driving, where a 256x256 RGB image has 196,608 input dimensions (256*256*3 = 196,608) and deep learning approaches routinely take such images as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high-dimensional problem. This work has the potential to solve, for the first time, the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.
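The core of the tasking decision is a stochastic policy over a large discrete action set (which object to observe next). A minimal sketch of sampling from a softmax policy, as the actor in an actor-critic method like A3C would; the preference scores below stand in for a policy network's outputs and are placeholders, not part of the paper's method.

```python
import math
import random

def softmax(preferences):
    """Numerically stable softmax over action preference scores."""
    peak = max(preferences)
    exps = [math.exp(p - peak) for p in preferences]
    total = sum(exps)
    return [e / total for e in exps]

def sample_action(preferences, rng=None):
    """Sample an action index proportionally to its softmax probability."""
    rng = rng or random.Random()
    probs = softmax(preferences)
    draw = rng.random()
    cumulative = 0.0
    for action, p in enumerate(probs):
        cumulative += p
        if draw <= cumulative:
            return action
    return len(probs) - 1  # guard against floating-point rounding
```

For a catalog of over 22,000 objects, `preferences` would be a vector of that length produced by the policy network at each decision epoch — exactly the kind of high-dimensional output the abstract argues A3C can handle.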

  8. Fast and accurate approaches for large-scale, automated mapping of food diaries on food composition tables

    DEFF Research Database (Denmark)

    Lamarine, Marc; Hager, Jörg; Saris, Wim H M

    2018-01-01

    the EuroFIR resource. Two approaches were tested: the first was based solely on food name similarity (fuzzy matching). The second used a machine learning approach (C5.0 classifier) combining both fuzzy matching and food energy. We tested mapping food items using their original names and also an English...... not lead to any improvements compared to the fuzzy matching. However, it could increase substantially the recall rate for food items without any clear equivalent in the FCTs (+7 and +20% when mapping items using their original or English-translated names). Our approaches have been implemented as R packages...... and are freely available from GitHub. Conclusion: This study is the first to provide automated approaches for large-scale food item mapping onto FCTs. We demonstrate that both high precision and recall can be achieved. Our solutions can be used with any FCT and do not require any programming background...
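The fuzzy name matching in the first approach can be sketched with the standard library's `difflib`. This is an illustration of the idea only — the study's implementation is distributed as R packages, its similarity measure may differ, and the food names in the test are invented.

```python
import difflib

def best_fct_match(item_name, fct_names, cutoff=0.6):
    """Return the FCT entry whose name is most similar to a food-diary item.

    Similarity is difflib's ratio on lowercased strings; below `cutoff`
    the item is reported as unmatched (None).
    """
    best_name, best_score = None, 0.0
    for name in fct_names:
        score = difflib.SequenceMatcher(None, item_name.lower(), name.lower()).ratio()
        if score > best_score:
            best_name, best_score = name, score
    if best_score < cutoff:
        return None, best_score
    return best_name, best_score
```

The machine-learning variant in the paper augments such a similarity score with food energy as an extra feature before classification.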

  9. UOBPRM: A uniformly distributed obstacle-based PRM

    KAUST Repository

    Yeh, Hsin-Yi

    2012-10-01

    This paper presents a new sampling method for motion planning that can generate configurations more uniformly distributed on C-obstacle surfaces than prior approaches. Here, roadmap nodes are generated from the intersections between C-obstacles and a set of uniformly distributed fixed-length segments in C-space. The results show that this new sampling method yields samples that are more uniformly distributed than previous obstacle-based methods such as OBPRM, Gaussian sampling, and Bridge test sampling. UOBPRM is shown to have nodes more uniformly distributed near C-obstacle surfaces and also requires the fewest nodes and edges to solve challenging motion planning problems with varying narrow passages. © 2012 IEEE.
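The segment-intersection idea can be illustrated in a 2-D configuration space with a single circular C-obstacle: draw fixed-length segments uniformly at random, keep those whose endpoints straddle the obstacle boundary, and locate the crossing point by bisection. This is a toy sketch; UOBPRM itself works with general C-obstacles and collision detectors rather than an analytic circle.

```python
import math
import random

def sample_surface_points(center, radius, n, seg_len=1.0, bound=3.0,
                          seed=0, max_tries=500000):
    """Sample points on a circular C-obstacle boundary via segment intersections.

    A segment whose endpoints lie on opposite sides of the obstacle boundary
    must cross it; bisection then localizes the crossing point.
    """
    rng = random.Random(seed)

    def inside(p):
        return math.dist(p, center) <= radius

    points = []
    tries = 0
    while len(points) < n and tries < max_tries:
        tries += 1
        x, y = rng.uniform(-bound, bound), rng.uniform(-bound, bound)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        p0 = (x, y)
        p1 = (x + seg_len * math.cos(theta), y + seg_len * math.sin(theta))
        if inside(p0) == inside(p1):
            continue  # segment does not straddle the boundary
        lo, hi = p0, p1
        for _ in range(40):  # bisection down to ~seg_len / 2**40
            mid = ((lo[0] + hi[0]) / 2.0, (lo[1] + hi[1]) / 2.0)
            if inside(mid) == inside(lo):
                lo = mid
            else:
                hi = mid
        points.append(lo)
    return points
```

Because the segments are uniformly distributed in C-space, the accepted crossing points inherit a near-uniform distribution over the obstacle surface, which is the property the paper measures against OBPRM, Gaussian, and Bridge-test sampling.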

  10. Development and Initial Validation of the Need Satisfaction and Need Support at Work Scales: A Validity-Focused Approach

    Directory of Open Access Journals (Sweden)

    Susanne Tafvelin

    2018-01-01

    Although the relevance of employee need satisfaction and manager need support has been examined, the integration of self-determination theory (SDT) into work and organizational psychology has been hampered by the lack of validated measures. The purpose of the current study was to develop and validate measures of employees' perception of need satisfaction (NSa-WS) and need support (NSu-WS) at work that were grounded in SDT. We used three Swedish samples (total N = 1,430) to develop and validate our scales. We used a confirmatory approach including expert panels to assess item content relevance, confirmatory factor analysis for factorial validity, and associations with theoretically warranted outcomes to assess criterion-related validity. Scale reliability was also assessed. We found evidence of content, factorial, and criterion-related validity of our two scales of need satisfaction and need support at work. Further, the scales demonstrated high internal consistency. Our newly developed scales may be used in research and practice to further our understanding regarding how satisfaction and support of employee basic needs influence employee motivation, performance, and well-being. Our study makes a contribution to the current literature by providing (1) scales that are specifically designed for the work context, (2) an example of how expert panels can be used to assess content validity, and (3) testing of theoretically derived hypotheses that, although SDT is built on them, have not been examined before.

  11. Climate change, livelihoods and the multiple determinants of water adequacy: two approaches at regional to global scale

    Science.gov (United States)

    Lissner, Tabea; Reusser, Dominik

    2015-04-01

    Inadequate access to water is already a problem in many regions of the world, and processes of global change are expected to further exacerbate the situation. Many aspects determine the adequacy of water resources: besides actual physical water stress, where the resource itself is limited, economic and social water stress can be experienced where access to the resource is limited by inadequate infrastructure or by political or financial constraints. To assess the adequacy of water availability for human use, integrated approaches are needed that allow the multiple determinants to be viewed in conjunction and provide sound results as a basis for informed decisions. This contribution proposes two parts of an integrated approach for examining the multiple dimensions of water scarcity at regional to global scale, developed in a joint project with the German Development Agency (GIZ). It first outlines the AHEAD approach to measure Adequate Human livelihood conditions for wEll-being And Development, implemented at global scale and at national resolution. This first approach allows viewing impacts of climate change, e.g. changes in water availability, within the wider context of AHEAD conditions. A specific focus lies on the uncertainties in projections of climate change and future water availability. As adequate water access is not determined by water availability alone, in a second step we develop an approach to assess the water requirements of different sectors in more detail, including aspects of quantity, quality, and access, in an integrated way. This more detailed approach is exemplified at regional scale in Indonesia and South Africa. Our results show that water scarcity is a limitation to AHEAD conditions in many countries, regardless of differing modelling outputs. The more detailed assessments highlight the relevance of additional aspects for assessing the adequacy of water for human use, showing that in many regions, quality and

  12. Economies of scale in the Korean district heating system: A variable cost function approach

    International Nuclear Information System (INIS)

    Park, Sun-Young; Lee, Kyoung-Sil; Yoo, Seung-Hoon

    2016-01-01

    This paper aims to investigate the cost efficiency of South Korea’s district heating (DH) system by using a variable cost function and cost-share equation. We employ a seemingly unrelated regression model, with quarterly time-series data from the Korea District Heating Corporation (KDHC)—a public utility that covers about 59% of the DH system market in South Korea—over the 1987–2011 period. The explanatory variables are price of labor, price of material, capital cost, and production level. The results indicate that economies of scale are present and statistically significant. Thus, expansion of its DH business would allow KDHC to obtain substantial economies of scale. According to our forecasts vis-à-vis scale economies, the KDHC will enjoy cost efficiency for some time yet. To ensure a socially efficient supply of DH, it is recommended that the KDHC expand its business proactively. With regard to informing policy or regulations, our empirical results could play a significant role in decision-making processes. - Highlights: • We examine economies of scale in the South Korean district heating sector. • We focus on Korea District Heating Corporation (KDHC), a public utility. • We estimate a translog cost function, using a variable cost function. • We found economies of scale to be present and statistically significant. • KDHC will enjoy cost efficiency and expanding its supply is socially efficient.
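A variable cost function of the kind estimated here is commonly specified in translog form. One illustrative specification — not necessarily the paper's exact one — with variable cost $VC$, input prices $p_i$ (labor, material), output $y$, and quasi-fixed capital $k$ is:

```latex
\ln VC = \alpha_0 + \sum_i \alpha_i \ln p_i + \beta_y \ln y + \beta_k \ln k
  + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij}\,\ln p_i \ln p_j
  + \tfrac{1}{2}\gamma_{yy}(\ln y)^2 + \tfrac{1}{2}\gamma_{kk}(\ln k)^2
  + \sum_i \delta_{iy}\,\ln p_i \ln y + \sum_i \delta_{ik}\,\ln p_i \ln k
  + \gamma_{yk}\,\ln y \ln k
```

With a quasi-fixed capital stock, scale economies are then commonly read from $SCE = \left(1 - \partial \ln VC / \partial \ln k\right) / \left(\partial \ln VC / \partial \ln y\right)$, with $SCE > 1$ indicating economies of scale; the cost-share equations estimated jointly in the seemingly unrelated regression follow from Shephard's lemma.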

  13. A comparative study of modern and fossil cone scales and seeds of conifers: A geochemical approach

    Science.gov (United States)

    Stankiewicz, B. Artur; Mastalerz, Maria; Kruge, M.A.; Van Bergen, P. F.; Sadowska, A.

    1997-01-01

    Modern cone scales and seeds of Pinus strobus and Sequoia sempervirens, and their fossil (Upper Miocene, c. 6 Ma) counterparts Pinus leitzii and Sequoia langsdorfi have been studied using pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS), electron microprobe and scanning electron microscopy. Microscopic observations revealed only minor microbial activity and high-quality structural preservation of the fossil material. The pyrolysates of both modern genera showed the presence of ligno-cellulose characteristic of conifers. However, the abundance of (alkylated)phenols and 1,2-benzenediols in modern S. sempervirens suggests the presence of non-hydrolysable tannins or abundant polyphenolic moieties not previously reported in modern conifers. The marked differences between the pyrolysis products of the two modern genera are suggested to be of chemosystematic significance. The fossil samples also contained ligno-cellulose, which exhibited only partial degradation, primarily of the carbohydrate constituents. Comparison between the fossil cone scale and seed pyrolysates indicated that the ligno-cellulose complex present in the seeds is chemically more resistant than that in the cone scales. Principal component analysis (PCA) of the pyrolysis data allowed for the determination of the discriminant functions used to assess the extent of degradation and the chemosystematic differences between the genera and between cone scales and seeds. Elemental composition (C, O, S), obtained using an electron microprobe, corroborated the pyrolysis results. Overall, the combination of chemical, microscopic and statistical methods allowed for a detailed characterization and chemosystematic interpretation of modern and fossil conifer cone scales and seeds.

  14. Scaling of cratering experiments: an analytical and heuristic approach to the phenomenology

    International Nuclear Information System (INIS)

    Killian, B.G.; Germain, L.S.

    1977-01-01

    The phenomenology of cratering can be thought of as consisting of two phases. The first phase, where the effects of gravity are negligible, consists of the energy source dynamically imparting its energy to the surroundings, rock and air. As illustrated in this paper, the first phase can be scaled if: radiation effects are negligible, experiments are conducted in the same rock material, time and distance use the same scaling factor, and distances scale as the cube root of the energy. The second phase of cratering consists of the rock, with its already developed velocity field, being thrown out. It is governed by the ballistics equation, and gravity is of primary importance. This second phase of cratering is examined heuristically by examples of the ballistics equation which illustrate the basic phenomena in crater formation. When gravity becomes significant, in addition to the conditions for scaling imposed in the first phase, distances must scale inversely as the ratio of gravities. A qualitative relationship for crater radius is derived and compared with calculations and experimental data over a wide range of energy sources and gravities
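The two scaling rules stated above — distances scale as the cube root of energy, and, once gravity is significant, distances also scale inversely as the ratio of gravities — can be combined into a simple illustrative estimate. This is a sketch of the paper's qualitative relationship, not its exact derivation.

```python
def scaled_crater_radius(r_ref, e_ref, energy, g_ref=9.81, g=9.81):
    """Scale a reference crater radius to a new energy and gravity.

    Cube-root energy scaling, with distances additionally scaling
    inversely as the ratio of gravities when gravity is significant.
    """
    return r_ref * (energy / e_ref) ** (1.0 / 3.0) * (g_ref / g)
```

An event with eight times the reference energy at the same gravity yields twice the radius; doubling the gravity at fixed energy halves it.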

  15. Multi-scale approach to radiation damage induced by ion beams: complex DNA damage and effects of thermal spikes

    International Nuclear Information System (INIS)

    Surdutovich, E.; Yakubovich, A.V.; Solov'yov, A.V.

    2010-01-01

    We present the latest advances of the multi-scale approach to radiation damage caused by irradiation of tissue with energetic ions, and report calculations of complex DNA damage and of the effects of thermal spikes on biomolecules. The multi-scale approach aims to quantify the most important physical, chemical, and biological phenomena taking place during and following irradiation with ions, and to provide better means for clinically necessary calculations with adequate accuracy. We suggest a way of quantifying complex clustered damage, one of the most important features of the radiation damage caused by ions. This quantification allows the study of how the clustering of DNA lesions affects the lethality of the damage. We discuss the first results of molecular dynamics simulations of ubiquitin in the environment of thermal spikes, which are predicted to occur in tissue in the vicinity of ion tracks for a short time after an ion's passage. (authors)

  16. Validation of a plant-wide phosphorus modelling approach with minerals precipitation in a full-scale WWTP

    DEFF Research Database (Denmark)

    Mbamba, Christian Kazadi; Flores Alsina, Xavier; Batstone, Damien John

    2016-01-01

    The focus of modelling in wastewater treatment is shifting from the single-unit to the plant-wide scale. Plant-wide modelling approaches provide opportunities to study the dynamics and interactions of different transformations in water and sludge streams. Towards developing more general and robust ... approach describing ion speciation and ion pairing with kinetic multiple-minerals precipitation. Model performance is evaluated against data sets from a full-scale wastewater treatment plant, assessing the capability to describe water and sludge lines across the treatment process under steady-state operation ... plant. Dynamic influent profiles were generated using a calibrated influent generator and were used to study the effect of long-term influent dynamics on plant performance. Model-based analysis shows that minerals precipitation strongly influences composition in the anaerobic digesters, but also impacts ...

  17. Applying the global RCP-SSP-SPA scenario framework at sub-national scale: A multi-scale and participatory scenario approach.

    Science.gov (United States)

    Kebede, Abiy S; Nicholls, Robert J; Allan, Andrew; Arto, Iñaki; Cazcarro, Ignacio; Fernandes, Jose A; Hill, Chris T; Hutton, Craig W; Kay, Susan; Lázár, Attila N; Macadam, Ian; Palmer, Matthew; Suckall, Natalie; Tompkins, Emma L; Vincent, Katharine; Whitehead, Paul W

    2018-09-01

    To better anticipate potential impacts of climate change, diverse information about the future is required, including climate, society and economy, and adaptation and mitigation. To address this need, a global scenario framework combining RCPs (Representative Concentration Pathways), SSPs (Shared Socio-economic Pathways), and SPAs (Shared climate Policy Assumptions) — the RCP-SSP-SPA framework — has been developed in connection with the Intergovernmental Panel on Climate Change Fifth Assessment Report (IPCC AR5). Application of this full global framework at sub-national scales introduces two key challenges: added complexity in capturing the multiple dimensions of change, and issues of scale. Perhaps for this reason, there are few applications of this new framework at such scales. Here, we present an integrated multi-scale hybrid scenario approach that combines both expert-based and participatory methods. The framework has been developed and applied within the DECCMA project with the purpose of exploring migration and adaptation in three deltas across West Africa and South Asia: (i) the Volta delta (Ghana), (ii) the Mahanadi delta (India), and (iii) the Ganges-Brahmaputra-Meghna (GBM) delta (Bangladesh/India). Using a climate scenario that encompasses a wide range of impacts (RCP8.5) combined with three SSP-based socio-economic scenarios (SSP2, SSP3, SSP5), we generate highly divergent and challenging scenario contexts across multiple scales against which the robustness of the human and natural systems within the deltas is tested. In addition, we consider four distinct adaptation policy trajectories: Minimum intervention, Economic capacity expansion, System efficiency enhancement, and System restructuring, which describe alternative future bundles of adaptation actions/measures under different socio-economic trajectories. The paper highlights the importance of multi-scale (combined top-down and bottom-up) and participatory (joint expert-stakeholder) scenario methods for addressing uncertainty in adaptation decision

  18. Incorruptible uniform random-sample voting (Vote par sondage uniforme incorruptible)

    OpenAIRE

    Blanchard , Nicolas

    2016-01-01

    Introduced in 2012 by David Chaum, random-sample voting ("vote par sondage uniforme") is a voting protocol based on the choice of a representative sub-population, which makes it possible to limit costs while offering numerous advantages, especially when coupled with other techniques such as ThreeBallot. We analyse a potential corruptibility problem in which voters can sell their vote to the highest bidder, and propose a variation of the protocol that remed...

  19. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    Science.gov (United States)

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  20. Thermodynamic modeling of small scale biomass gasifiers: Development and assessment of the ''Multi-Box'' approach.

    Science.gov (United States)

    Vakalis, Stergios; Patuzzi, Francesco; Baratieri, Marco

    2016-04-01

    Modeling can be a powerful tool for designing and optimizing gasification systems. Modeling of small scale/fixed bed biomass gasifiers is of particular interest because of their growing commercial deployment. Fixed bed gasifiers operate over a wide range of conditions and are multi-zoned processes. The reactants are distributed in different phases, and the products from each zone influence the subsequent process steps and thus the composition of the final products. The present study aims to improve conventional 'Black-Box' thermodynamic modeling by developing multiple intermediate 'boxes' that calculate two-phase (solid-vapor) equilibria in small scale gasifiers; the model is therefore named 'Multi-Box'. Experimental data from a small scale gasifier have been used for validation of the model. The returned results are significantly closer to the actual case study measurements than those of single-stage thermodynamic modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Density Fluctuations in Uniform Quantum Gases

    International Nuclear Information System (INIS)

    Bosse, J.; Pathak, K. N.; Singh, G. S.

    2011-01-01

    Analytical expressions are given for the static structure factor S(k) and the pair correlation function g(r) of uniform ideal Bose-Einstein and Fermi-Dirac gases at all temperatures. In the vicinity of the Bose-Einstein condensation (BEC) temperature, g(r) becomes long ranged and remains so in the condensed phase. In the dilute gas limit, g(r) of bosons and fermions does not coincide with that of the Maxwell-Boltzmann gas but exhibits bunching and anti-bunching effects, respectively. The width of these functions depends on the temperature and scales as √(inverse atomic mass). Our numerical results provide precise quantitative values for the suppression/enhancement (antibunching/bunching) of density fluctuations at small distances in ideal quantum gases, in qualitative agreement with experimental observations for almost non-trapped dilute gases.
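
    For context, the textbook closed form in the nearly classical (dilute, high-temperature) limit — not the paper's full finite-temperature expressions — already exhibits the bunching/anti-bunching and mass scaling described above:

```latex
g(r) \simeq 1 \pm e^{-2\pi r^{2}/\lambda_T^{2}},
\qquad
\lambda_T = \frac{h}{\sqrt{2\pi m k_B T}},
```

    with the plus sign for ideal bosons (bunching) and the minus sign for spin-polarized ideal fermions (anti-bunching). The range of the correlation is set by the thermal de Broglie wavelength λ_T ∝ 1/√(mT), consistent with the √(inverse atomic mass) scaling of the width noted in the abstract.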

  3. Evaluation model of project complexity for large-scale construction projects in Iran - A Fuzzy ANP approach

    Directory of Open Access Journals (Sweden)

    Aliyeh Kazemi

    2016-09-01

    Construction projects have always been complex, and as this complexity grows, implementation of large-scale construction becomes harder. Evaluating and understanding these complexities is therefore critical: a correct evaluation of project complexity gives executives and managers a sound basis for decisions. Fuzzy analytic network process (ANP) is a logical and systematic approach to definition, evaluation, and ranking; it allows complex systems to be analyzed and their complexity to be quantified. In this study, fuzzy ANP is used to identify and prioritize the indexes that drive complexity in large-scale construction projects in Iran. The results show that socio-political complexity, project-system interdependencies, and technological complexity rank as the top three indexes. Furthermore, in a comparison of three major projects (commercial-administrative, hospital, and skyscraper), the hospital project was evaluated as the most complex. This model is beneficial for professionals managing large-scale projects.
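
    ANP derives priorities from pairwise comparison matrices. As an illustrative sketch only, a crisp (non-fuzzy) priority vector can be computed by power iteration — the matrix entries below are hypothetical, and the fuzzy extension and network supermatrix used in the paper are not reproduced:

```python
import numpy as np

# Illustrative pairwise comparison matrix (crisp, not fuzzy) for three
# hypothetical complexity indexes; A[i, j] = importance of index i over j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1.0 / 3.0, 1.0, 2.0],
    [1.0 / 5.0, 1.0 / 2.0, 1.0],
])

def priority_vector(M, iters=100):
    """Principal eigenvector of M via power iteration, normalized to sum 1."""
    w = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(iters):
        w = M @ w
        w /= w.sum()
    return w

w = priority_vector(A)
print(w)  # the most important index receives the largest weight
```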

  4. An advanced online monitoring approach to study the scaling behavior in direct contact membrane distillation

    KAUST Repository

    Lee, Jung Gil

    2017-10-12

    One of the major challenges in membrane distillation (MD) desalination is scaling, mainly by CaSO4 and CaCO3. In this study, to achieve a better understanding and establish a strategy for controlling scaling, a detailed investigation of MD scaling was performed using various analytical methods, in particular an in-situ monitoring technique based on optical coherence tomography (OCT) that observes a cross-sectional view of the membrane surface during operation. Different concentrations of CaSO4, CaCO3, and NaCl were tested separately and in different mixed feed solutions. Results showed that when CaSO4 alone was employed in the feed solution, the mean permeate flux (MPF) dropped significantly at a lower volume concentration factor (VCF) than for the other feed solutions, and this critical point was influenced by the solubility changes of CaSO4 at the various inlet feed temperatures. Although the inlet feed and permeate flow rates contributed to the initial MPF value, the VCF at which the sharp MPF decline occurred was not affected. Real-time OCT observation clearly showed that scaling on the membrane surface, due to crystal growth in the bulk and deposition of aggregated crystals on the surface, appeared abruptly close to the critical VCF. In contrast, a NaCl + CaSO4 mixed feed solution resulted in a linear MPF decline as VCF increased and delayed the critical point to higher VCF values. In addition, CaCO3 alone in the feed solution did not cause scaling; however, when CaSO4 was added to CaCO3, the initial MPF decline appeared and the critical VCF was reached earlier. In summary, calcium scaling crystals formed under different conditions influenced the filtration dynamics and MD performance.
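
    For a batch-concentration run, the VCF used above is commonly defined as the initial feed volume divided by the remaining feed volume. A minimal sketch under that assumption (the definition and units are assumptions, not taken from the paper):

```python
def vcf(v_feed_initial_L, v_permeate_L):
    """Volume concentration factor, assuming VCF = V_feed,0 / V_feed,remaining."""
    remaining = v_feed_initial_L - v_permeate_L
    if remaining <= 0.0:
        raise ValueError("permeate volume cannot reach the initial feed volume")
    return v_feed_initial_L / remaining

def mean_permeate_flux(v_permeate_L, area_m2, hours):
    """Mean permeate flux in L m^-2 h^-1 (LMH) over the elapsed time."""
    return v_permeate_L / (area_m2 * hours)

print(vcf(10.0, 5.0))                        # 2.0: half the feed recovered
print(mean_permeate_flux(5.0, 0.05, 10.0))   # 10.0 LMH
```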

  5. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and an evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. To consider simultaneously the cost and risk of a business investment, the average and deviation of the life cycle cost (LCC) of the integrated energy system are formulated. To solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision-making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports simulation results obtained for a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.
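
    The final sizing is selected from the Pareto-optimal solutions. As a minimal illustration of that last step only, the sketch below filters non-dominated points for two minimized objectives (mean LCC and LCC deviation) with hypothetical numbers — MGSOACC and the ER weighting are not reproduced:

```python
def pareto_front(points):
    """Return the non-dominated subset for two minimized objectives."""
    front = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical candidate sizings: (mean LCC, LCC deviation), both minimized.
candidates = [(100, 9), (90, 12), (105, 11), (110, 8), (120, 7)]
print(pareto_front(candidates))  # (105, 11) is dominated by (100, 9)
```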

  6. Creation of Nuclear Data Base up to 150 MeV and corresponding scaling approach for ADS

    International Nuclear Information System (INIS)

    Shubin, Y. N.; Gai, E. V.; Ignatyuk, A. V.; Lunev, V. P.

    1997-01-01

    The status of nuclear data in the energy region up to 150 MeV is outlined. The specific physical reasons for detailed investigations of nuclear structure effects are pointed out. The necessity of developing a Nuclear Data System for ADS is stressed. A program for the creation of a nuclear data base up to 150 MeV and a corresponding scaling approach for ADS is proposed. (Author) 14 refs

  7. Correct approach to consideration of experimental resolution in parametric analysis of scaling violation in deep inelastic lepton-nucleon interaction

    International Nuclear Information System (INIS)

    Ammosov, V.V.; Usubov, Z.U.; Zhigunov, V.P.

    1990-01-01

    A problem of parametric analysis of scaling violation in deep inelastic lepton-nucleon interactions in the framework of quantum chromodynamics (QCD) is considered. For a correct treatment of the experimental resolution we use the χ²-method, which is demonstrated by numerical experiments and by analysis of the 15-foot bubble chamber neutrino data. The model parameters obtained with this approach differ noticeably from those obtained earlier. (orig.)

  8. A comparative analysis of ecosystem services valuation approaches for application at the local scale and in data scarce regions

    OpenAIRE

    Pandeya, B.; Buytaert, W.; Zulkafli, Z.; Karpouzoglou, T.; Mao, F.; Hannah, D.M.

    2016-01-01

    Despite significant advances in the development of the ecosystem services concept across the science and policy arenas, the valuation of ecosystem services to guide sustainable development remains challenging, especially at a local scale and in data scarce regions. In this paper, we review and compare major past and current valuation approaches and discuss their key strengths and weaknesses for guiding policy decisions. To deal with the complexity of methods used in different valuation approa...

  9. A Remote Sensing Approach for Regional-Scale Mapping of Agricultural Land-Use Systems Based on NDVI Time Series

    Directory of Open Access Journals (Sweden)

    Beatriz Bellón

    2017-06-01

    In response to the need for generic remote sensing tools to support large-scale agricultural monitoring, we present a new approach for regional-scale mapping of agricultural land-use systems (ALUS) based on object-based Normalized Difference Vegetation Index (NDVI) time series analysis. The approach consists of two main steps. First, to obtain relatively homogeneous land units in terms of phenological patterns, a principal component analysis (PCA) is applied to an annual MODIS NDVI time series, and an automatic segmentation is performed on the resulting high-order principal component images. Second, the resulting land units are classified into the crop agriculture domain or the livestock domain based on their land-cover characteristics. The crop agriculture domain land units are further classified into different cropping systems based on the correspondence of their NDVI temporal profiles with the phenological patterns associated with the cropping systems of the study area. A map of the main ALUS of the Brazilian state of Tocantins was produced for the 2013–2014 growing season with the new approach, and a significant coherence was observed between the spatial distribution of the cropping systems in the final ALUS map and in a reference map extracted from the official agricultural statistics of the Brazilian Institute of Geography and Statistics (IBGE). This study shows the potential of remote sensing techniques to provide valuable baseline spatial information for supporting agricultural monitoring and for large-scale land-use systems analysis.
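
    The first step — PCA on an annual NDVI time series — can be sketched with plain NumPy on synthetic phenological profiles (synthetic data, not MODIS; the segmentation and classification steps are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
dates = np.linspace(0.0, 1.0, 23)             # ~16-day composites over 1 year
single_crop = np.sin(np.pi * dates) ** 2      # one green-up peak
double_crop = np.sin(2 * np.pi * dates) ** 2  # two peaks per season
pixels = np.vstack(
    [single_crop + 0.02 * rng.standard_normal(23) for _ in range(50)]
    + [double_crop + 0.02 * rng.standard_normal(23) for _ in range(50)]
)

X = pixels - pixels.mean(axis=0)              # center each date (band)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * S                                # principal-component "images"
explained = S**2 / np.sum(S**2)
print(explained[:2])  # the leading component captures the phenological contrast
```

    The first principal-component scores separate the two synthetic cropping patterns, which is what makes segmentation on the high-order component images meaningful.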

  10. A new approach to ductile tearing assessment of pipelines under large-scale yielding

    Energy Technology Data Exchange (ETDEWEB)

    Ostby, Erling [SINTEF Materials and Chemistry, N-7465, Trondheim (Norway)]. E-mail: Erling.Obstby@sintef.no; Thaulow, Christian [Norwegian University of Science and Technology, N-7491, Trondheim (Norway); Nyhus, Bard [SINTEF Materials and Chemistry, N-7465, Trondheim (Norway)

    2007-06-15

    In this paper we focus on the issue of ductile tearing assessment for cases with global plasticity, relevant for example to strain-based design of pipelines. A proposal for a set of simplified strain-based driving force equations is used as a basis for calculation of ductile tearing. We compare the traditional approach using the tangency criterion to predict unstable tearing, with a new alternative approach for ductile tearing calculations. A criterion to determine the CTOD at maximum load carrying capacity in the crack ligament is proposed, and used as the failure criterion in the new approach. Compared to numerical reference simulations, the tangency criterion predicts conservative results with regard to the strain capacity. The new approach yields results in better agreement with the reference numerical simulations.
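
    For reference, the tangency criterion mentioned above takes the standard textbook form (written here in CTOD terms; this is the generic statement, not the paper's specific strain-based driving-force equations). Unstable tearing is predicted at the load where the driving-force curve touches the resistance curve with equal slope:

```latex
\delta(a;\,\varepsilon) = \delta_R(\Delta a),
\qquad
\frac{\partial \delta(a;\,\varepsilon)}{\partial a} = \frac{d\delta_R}{d(\Delta a)},
```

    where δ is the applied CTOD at crack size a under applied strain ε and δ_R(Δa) is the material's tearing resistance curve. The new approach replaces this with a failure criterion based on the CTOD at maximum load carrying capacity in the crack ligament.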

  11. Scaling Watershed Models: Modern Approaches to Science Computation with MapReduce, Parallelization, and Cloud Optimization

    Science.gov (United States)

    Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...

  12. Biogem: an effective tool based approach for scaling up open source software development in bioinformatics

    NARCIS (Netherlands)

    Bonnal, R.J.P.; Smant, G.; Prins, J.C.P.

    2012-01-01

    Biogem provides a software development environment for the Ruby programming language, which encourages community-based software development for bioinformatics while lowering the barrier to entry and encouraging best practices. Biogem, with its targeted modular and decentralized approach, software

  13. Large-scale identification of polymorphic microsatellites using an in silico approach

    NARCIS (Netherlands)

    Tang, J.; Baldwin, S.J.; Jacobs, J.M.E.; Linden, van der C.G.; Voorrips, R.E.; Leunissen, J.A.M.; Eck, van H.J.; Vosman, B.

    2008-01-01

    Background - Simple Sequence Repeat (SSR) or microsatellite markers are valuable for genetic research. Experimental methods to develop SSR markers are laborious, time consuming and expensive. In silico approaches have become a practicable and relatively inexpensive alternative during the last

  14. A moni-modelling approach to manage groundwater risk to pesticide leaching at regional scale.

    Science.gov (United States)

    Di Guardo, Andrea; Finizio, Antonio

    2016-03-01

    Historically, the approach used to manage the risk of chemical contamination of water bodies has been based on monitoring programmes, which provide a snapshot of the presence/absence of chemicals in water bodies. Monitoring is required in current EU regulations, such as the Water Framework Directive (WFD), as a tool to record temporal variation in the chemical status of water bodies. More recently, a number of models have been developed and used to forecast chemical contamination of water bodies. These models combine information on chemical properties, their use, and environmental scenarios. Both approaches are useful for risk assessors in decision processes; however, in our opinion, each shows flaws and strengths when taken alone. This paper proposes an integrated approach (the moni-modelling approach) in which monitoring data and modelling simulations work together to provide a common decision framework for the risk assessor. This approach would be particularly useful for the risk management of pesticides at a territorial level, and it fulfils the requirements of the recent Sustainable Use of Pesticides Directive. In fact, the moni-modelling approach could be used to identify sensitive areas where mitigation measures or limitations on pesticide use should be implemented, and even to re-design future monitoring networks more effectively or to better calibrate the pedo-climatic input data for environmental fate models. A case study is presented in which the moni-modelling approach is applied in the Lombardy region (northern Italy) to identify groundwater areas vulnerable to pesticides. The approach was applied to six active substances with different leaching behaviour, in order to highlight the advantages of the proposed methodology. Copyright © 2015 Elsevier B.V. All rights reserved.
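
    The paper couples monitoring with environmental fate models. As a much simpler stand-in that shows how substance properties rank leaching behaviour, the classical Gustafson (GUS) screening index can be computed from half-life and sorption (an illustrative index, not the paper's method; the property values below are hypothetical):

```python
import math

def gus_index(dt50_days, koc_mL_per_g):
    """Gustafson Ubiquity Score: GUS = log10(DT50) * (4 - log10(Koc))."""
    return math.log10(dt50_days) * (4.0 - math.log10(koc_mL_per_g))

def leaching_class(gus):
    # Conventional GUS thresholds for groundwater leaching potential.
    if gus > 2.8:
        return "leacher"
    if gus < 1.8:
        return "non-leacher"
    return "transition"

# Hypothetical property values for two contrasting substances.
print(leaching_class(gus_index(dt50_days=60.0, koc_mL_per_g=50.0)))   # persistent, mobile
print(leaching_class(gus_index(dt50_days=5.0, koc_mL_per_g=5000.0)))  # short-lived, sorbed
```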

  15. Optimal control for power-off landing of a small-scale helicopter : a pseudospectral approach

    NARCIS (Netherlands)

    Taamallah, S.; Bombois, X.; Hof, Van den P.M.J.

    2012-01-01

    We derive optimal power-off landing trajectories for the case of a small-scale helicopter UAV. These open-loop optimal trajectories represent the solution to the minimization of a cost objective, given the system dynamics and control and state equality and inequality constraints. The plant dynamics

  16. Relaxing the weak scale: A new approach to the hierarchy problem

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Recently, a new mechanism to generate a naturally small electroweak scale has been proposed. This is based on the idea that a dynamical evolution during the early universe can drive the Higgs mass to a value much smaller than the UV cutoff of the SM. In this talk I will present this idea, its explicit realizations, potential problems, and experimental consequences.

  17. Iterative approach to modeling subsurface stormflow based on nonlinear, hillslope-scale physics

    NARCIS (Netherlands)

    Spaaks, J.H.; Bouten, W.; McDonnell, J.J.

    2009-01-01

    Soil water transport in small, humid, upland catchments is often dominated by subsurface stormflow. Recent studies of this process suggest that at the plot scale, generation of transient saturation may be governed by threshold behavior, and that transient saturation is a prerequisite for lateral

  18. A multiple-time-scale approach to the control of ITBs on JET

    Energy Technology Data Exchange (ETDEWEB)

    Laborde, L.; Mazon, D.; Moreau, D. [EURATOM-CEA Association (DSM-DRFC), CEA Cadarache, 13 - Saint Paul lez Durance (France); Moreau, D. [Culham Science Centre, EFDA-JET, Abingdon, OX (United Kingdom); Ariola, M. [EURATOM/ENEA/CREATE Association, Univ. Napoli Federico II, Napoli (Italy); Cordoliani, V. [Ecole Polytechnique, 91 - Palaiseau (France); Tala, T. [EURATOM-Tekes Association, VTT Processes (Finland)

    2005-07-01

    The simultaneous real-time control of the current and temperature gradient profiles could lead to the steady-state sustainment of an internal transport barrier (ITB) and thus to a stationary optimized plasma regime. Recent experiments on JET have demonstrated significant progress in achieving such control: different current and temperature gradient target profiles have been reached and sustained for several seconds using a controller based on a static linear model. It is worth noting that the inverse safety factor profile evolves on a slow time scale (the resistive time) while the normalized electron temperature gradient reacts on a faster one (the confinement time). Moreover, these experiments have shown that the controller was sensitive to rapid plasma events, such as transient ITBs during the safety factor profile evolution or MHD instabilities which modify the pressure profiles on the confinement time scale. In order to take into account the different dynamics of the controlled profiles and to react better to rapid plasma events, the control technique is being improved by using a multiple-time-scale approximation. The paper describes the theoretical analysis and closed-loop simulations using a control algorithm based on a two-time-scale state-space model. These closed-loop simulations, which use the full dynamic but linear model from the controller design to simulate the plasma response, have demonstrated that the new controller allows the normalized electron temperature gradient target profile to be reached faster than with the one used in previous experiments. (A.C.)
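
    A two-time-scale (singular-perturbation) state-space model of the kind referred to here can be written in the standard generic form (a textbook form, not the actual JET model):

```latex
\dot{x}_s = A_{11} x_s + A_{12} x_f + B_1 u,
\qquad
\varepsilon\, \dot{x}_f = A_{21} x_s + A_{22} x_f + B_2 u,
```

    where x_s collects the slow states (safety factor profile, resistive time scale), x_f the fast states (normalized electron temperature gradient, confinement time scale), u the actuator inputs, and ε ≪ 1 the ratio of the two time scales.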

  19. A Multidimensional Scaling Approach to Developmental Dimensions in Object Permanence and Tracking Stimuli.

    Science.gov (United States)

    Townes-Rosenwein, Linda

    This paper discusses a longitudinal, exploratory study of developmental dimensions related to object permanence theory and explains how multidimensional scaling techniques can be used to identify developmental dimensions. Eighty infants, randomly assigned to one of four experimental groups and one of four counterbalanced orders of stimuli, were…

  1. A Heuristic Approach to Author Name Disambiguation in Bibliometrics Databases for Large-scale Research Assessments

    NARCIS (Netherlands)

    D'Angelo, C.A.; Giuffrida, C.; Abramo, G.

    2011-01-01

    National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because

  2. Received signal strength in large-scale wireless relay sensor network: a stochastic ray approach

    NARCIS (Netherlands)

    Hu, L.; Chen, Y.; Scanlon, W.G.

    2011-01-01

    The authors consider a point percolation lattice representation of a large-scale wireless relay sensor network (WRSN) deployed in a cluttered environment. Each relay sensor corresponds to a grid point in the random lattice and the signal sent by the source is modelled as an ensemble of photons that

  3. A high-level and scalable approach for generating scale-free graphs using active objects

    NARCIS (Netherlands)

    K. Azadbakht (Keyvan); N. Bezirgiannis (Nikolaos); F.S. de Boer (Frank); Aliakbary, S. (Sadegh)

    2016-01-01

    The Barabasi-Albert model (BA) is designed to generate scale-free networks using the preferential attachment mechanism. In the preferential attachment (PA) model, new nodes are sequentially introduced to the network and they attach preferentially to existing nodes. PA is a classical
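
    The preferential attachment mechanism can be sketched sequentially in plain Python (an illustrative sketch; the paper's contribution is a scalable active-object implementation, which is not reproduced here):

```python
import random

def barabasi_albert(n, m, seed=42):
    """Sequential BA sketch: each new node attaches to m distinct existing
    nodes, chosen with probability proportional to their current degree."""
    random.seed(seed)
    # Start from a small complete core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # A node appears in this list once per incident edge, so a uniform choice
    # from it implements degree-proportional (preferential) attachment.
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges

g = barabasi_albert(200, 2)
print(len(g))  # 397 edges: 3 in the initial core + 2 per each of 197 added nodes
```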

  4. Assessing heterogeneity in soil nitrogen cycling: a plot-scale approach

    Science.gov (United States)

    Peter Baas; Jacqueline E. Mohan; David Markewitz; Jennifer D. Knoepp

    2014-01-01

    The high level of spatial and temporal heterogeneity in soil N cycling processes hinders our ability to develop an ecosystem-wide understanding of this cycle. This study examined how incorporating an intensive assessment of spatial variability for soil moisture, C, nutrients, and soil texture can better explain ecosystem N cycling at the plot scale. Five sites...

  5. Scale-up of a mixer-settler extractor using a unit operations approach

    International Nuclear Information System (INIS)

    Lindholm, D.C.; Bautista, R.G.

    1976-01-01

    The results of scale-up studies on a continuous, multistage horizontal mixer-settler extractor are presented. The chemical and mechanical system involves the separation of lanthanum from a mixture of rare earth chlorides using di(2-ethylhexyl) phosphoric acid as the solvent and dilute HCl as a scrub solution in a bench scale extractor. Each stage has a hold-up of 2.6 l. A single stage unit is utilized for scale-up studies. Results are obtained on four sizes of geometrically similar units, the largest being six times the volume of the original bench size. A unit operations technique is chosen so that mixing and settling can be examined independently. Variables examined include type of continuous phase, flow rate of inlet streams, and power input to the mixer. Inlet flow-rate ratios are kept constant for all tests. Two potential methods of unbaffled pump-mixer scale-up are explored; the maintenance of constant impeller tip speed and constant power input. For the settler, the previously successful method of basing design on constant flow-rate per unit cross-sectional area is used
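
    The two pump-mixer scale-up rules can be written down directly. The sketch below assumes geometric similarity and the turbulent-stirring power relation P = Np·ρ·N³·D⁵ with constant power number Np (a textbook correlation, not taken from the paper), with hypothetical bench-scale numbers:

```python
def speed_constant_tip(n1_rps, d1_m, d2_m):
    """Keep tip speed pi*N*D constant: N2 = N1 * (D1 / D2)."""
    return n1_rps * d1_m / d2_m

def speed_constant_power(n1_rps, d1_m, d2_m):
    """Keep power N**3 * D**5 constant: N2 = N1 * (D1 / D2)**(5/3)."""
    return n1_rps * (d1_m / d2_m) ** (5.0 / 3.0)

# Largest unit is 6x the bench volume, so lengths scale by 6**(1/3).
ratio = 6.0 ** (1.0 / 3.0)
n1, d1 = 5.0, 0.10           # hypothetical bench impeller: 5 rev/s, 0.10 m
d2 = d1 * ratio
print(speed_constant_tip(n1, d1, d2))    # slower at the larger diameter
print(speed_constant_power(n1, d1, d2))  # slower still under constant power
```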

  6. Review of broad-scale drought monitoring of forests: Toward an integrated data mining approach

    Science.gov (United States)

    Steve Norman; Frank H. Koch; William W. Hargrove

    2016-01-01

    Efforts to monitor the broad-scale impacts of drought on forests often come up short. Drought is a direct stressor of forests as well as a driver of secondary disturbance agents, making a full accounting of drought impacts challenging. General impacts  can be inferred from moisture deficits quantified using precipitation and temperature measurements. However,...

  7. Predicting ecosystem functioning from plant traits: Results from a multi-scale ecophsiological modeling approach

    NARCIS (Netherlands)

    Wijk, van M.T.

    2007-01-01

    Ecosystem functioning is the result of processes working at a hierarchy of scales. The representation of these processes in a model that is mathematically tractable and ecologically meaningful is a big challenge. In this paper I describe an individual based model (PLACO, PLAnt COmpetition) that

  8. The Classroom Process Scale (CPS): An Approach to the Measurement of Teaching Effectiveness.

    Science.gov (United States)

    Anderson, Lorin W.; Scott, Corinne C.

    The purpose of this presentation is to describe the Classroom Process Scale (CPS) and its usefulness for the assessment of teaching effectiveness. The CPS attempts to ameliorate weaknesses in existing classroom process measures by including a coding of student involvement in learning, objectives being pursued, and methods used to pursue attainment…

  9. A comparative study of two approaches to analyse groundwater recharge, travel times and nitrate storage distribution at a regional scale

    Science.gov (United States)

    Turkeltaub, T.; Ascott, M.; Gooddy, D.; Jia, X.; Shao, M.; Binley, A. M.

    2017-12-01

    Understanding deep percolation, travel time processes and nitrate storage in the unsaturated zone at a regional scale is crucial for sustainable management of many groundwater systems. Recently, global hydrological models have been developed to quantify the water balance at such scales and beyond. However, the coarse spatial resolution of the global hydrological models can be a limiting factor when analysing regional processes. This study compares simulations of water flow and nitrate storage based on regional and global scale approaches. The first approach was applied over the Loess Plateau of China (LPC) to investigate the water fluxes and nitrate storage and travel time to the LPC groundwater system. Using raster maps of climate variables, land use data and soil parameters enabled us to determine fluxes by employing Richards' equation and the advection - dispersion equation. These calculations were conducted for each cell on the raster map in a multiple 1-D column approach. In the second approach, vadose zone travel times and nitrate storage were estimated by coupling groundwater recharge (PCR-GLOBWB) and nitrate leaching (IMAGE) models with estimates of water table depth and unsaturated zone porosity. The simulation results of the two methods indicate similar spatial groundwater recharge, nitrate storage and travel time distribution. Intensive recharge rates are located mainly at the south central and south west parts of the aquifer's outcrops. Particularly low recharge rates were simulated in the top central area of the outcrops. However, there are significant discrepancies between the simulated absolute recharge values, which might be related to the coarse scale that is used in the PCR-GLOBWB model, leading to smoothing of the recharge estimations. Both models indicated large nitrate inventories in the south central and south west parts of the aquifer's outcrops and the shortest travel times in the vadose zone are in the south central and east parts of the
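
    The two governing equations named in the first approach have the standard 1-D vertical forms (generic textbook forms with symbols as usually defined, not the study's exact parameterization):

```latex
\frac{\partial \theta}{\partial t}
  = \frac{\partial}{\partial z}\!\left[ K(h)\left(\frac{\partial h}{\partial z} + 1\right)\right] - S,
\qquad
\frac{\partial (\theta c)}{\partial t}
  = \frac{\partial}{\partial z}\!\left(\theta D \frac{\partial c}{\partial z}\right)
  - \frac{\partial (q c)}{\partial z},
```

    where θ is volumetric water content, h pressure head, K(h) unsaturated hydraulic conductivity, S a sink term, c nitrate concentration, D the dispersion coefficient, and q the Darcy flux.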

  10. Importance of ecohydrological modelling approaches in the prediction of plant behaviour and water balance at different scales

    Science.gov (United States)

    García-Arias, Alicia; Ruiz-Pérez, Guiomar; Francés, Félix

    2017-04-01

    Vegetation plays a central role in the water balance of most hydrological systems, yet the effects of interception and evapotranspiration have rarely been considered for hydrological modelling purposes. In recent years many authors have advocated ecohydrological approaches instead of traditional strategies. This contribution aims to demonstrate the pivotal role of vegetation in ecohydrological models and that a better understanding of hydrological systems can be achieved by considering the appropriate plant-related processes. The study is performed at two scales: the plot scale and the reach scale. At the plot scale, only zonal vegetation was considered, while at the reach scale both zonal and riparian vegetation were taken into account. To ensure that water is the main driver of vegetation development, semiarid environments were selected for the case studies. Results show an improved ability to predict plant behaviour and water balance when interception and evapotranspiration are included in the soil water balance.
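
    How interception and evapotranspiration enter a soil water balance can be shown with a minimal daily bucket sketch (a generic illustration with hypothetical parameters, not the study's model):

```python
def daily_step(soil_mm, rain_mm, pet_mm,
               canopy_capacity_mm=1.5, soil_capacity_mm=120.0):
    """One day of a minimal bucket water balance with canopy interception.

    Returns (new_soil_storage, interception, transpiration, runoff), all in mm.
    """
    # Interception: rain caught by the canopy and evaporated, limited by
    # canopy storage and by the day's evaporative demand.
    interception = min(rain_mm, canopy_capacity_mm, pet_mm)
    throughfall = rain_mm - interception
    # Transpiration meets the remaining demand, scaled by relative soil wetness.
    demand = pet_mm - interception
    transpiration = min(demand * soil_mm / soil_capacity_mm, soil_mm)
    soil = soil_mm + throughfall - transpiration
    runoff = max(0.0, soil - soil_capacity_mm)   # saturation excess
    return min(soil, soil_capacity_mm), interception, transpiration, runoff

state = 60.0
for rain in (0.0, 12.0, 0.0):   # hypothetical dry-wet-dry day sequence
    state, i, t, r = daily_step(state, rain, pet_mm=5.0)
print(round(state, 2))          # soil storage (mm) after the three days
```

    Dropping the interception term routes all rain to the soil and overstates both soil storage and transpiration, which is the kind of bias the abstract argues against.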

  11. Deep learning-based subdivision approach for large scale macromolecules structure recovery from electron cryo tomograms.

    Science.gov (United States)

    Xu, Min; Chai, Xiaoqi; Muthakana, Hariank; Liang, Xiaodan; Yang, Ge; Zeev-Ben-Mordehai, Tzviya; Xing, Eric P

    2017-07-15

    Cellular Electron CryoTomography (CECT) enables 3D visualization of cellular organization in a near-native state and at sub-molecular resolution, making it a powerful tool for analyzing the structures of macromolecular complexes and their spatial organization inside single cells. However, the high degree of structural complexity together with practical imaging limitations makes the systematic de novo discovery of structures within cells challenging. It would likely require averaging and classifying millions of subtomograms potentially containing hundreds of highly heterogeneous structural classes. Although it is no longer difficult to acquire CECT data containing such numbers of subtomograms thanks to advances in data acquisition automation, existing computational approaches have very limited scalability or discrimination ability, making them incapable of processing such amounts of data. To complement existing approaches, in this article we propose a new approach for subdividing subtomograms into smaller but relatively homogeneous subsets. The structures in these subsets can then be separately recovered using existing computation-intensive methods. Our approach is based on supervised structural feature extraction using deep learning, in combination with unsupervised clustering and reference-free classification. Our experiments show that, compared with existing unsupervised rotation-invariant feature and pose-normalization based approaches, our new approach achieves significant improvements in both discrimination ability and scalability. More importantly, our new approach is able to discover new structural classes and recover structures that do not exist in the training data. Source code freely available at http://www.cs.cmu.edu/∼mxu1/software . mxu1@cs.cmu.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  12. A study of safeguards approach for the area of plutonium evaporator in a large scale reprocessing plant

    International Nuclear Information System (INIS)

    Sakai, Hirotada; Ikawa, Koji

    1994-01-01

    A preliminary study on a safeguards approach for the chemical processing area in a large scale reprocessing plant has been carried out. In this approach, plutonium inventory at the plutonium evaporator will not be taken, but containment and surveillance (C/S) measures will be applied to ensure the integrity of an area specifically defined to include the plutonium evaporator. The plutonium evaporator area consists of the evaporator itself and two accounting points, i.e., one before the plutonium evaporator and the other after the plutonium evaporator. For newly defined accounting points, two alternative measurement methods, i.e., accounting vessels with high accuracy and flow meters, were examined. Conditions to provide the integrity of the plutonium evaporator area were also examined as well as other technical aspects associated with this approach. The results showed that an appropriate combination of NRTA and C/S measures would be essential to realize a cost effective safeguards approach to be applied for a large scale reprocessing plant. (author)

  13. Measurement and Comparison of Variance in the Performance of Algerian Universities using models of Returns to Scale Approach

    Directory of Open Access Journals (Sweden)

    Imane Bebba

    2017-08-01

    Full Text Available This study aimed to measure and compare the performance of forty-seven Algerian universities using models of the returns-to-scale approach, which is based primarily on the Data Envelopment Analysis (DEA) method. In order to achieve the objective of the study, a set of variables was chosen to represent the dimension of teaching. There were three input variables: the total number of students at the undergraduate level, the number of students at the postgraduate level, and the number of permanent professors. The output variable was the total number of students holding degrees at the two levels. Four basic DEA models were applied: input-oriented and output-oriented constant returns to scale, and input-oriented and output-oriented variable returns to scale. After the analysis of data, results revealed that eight universities achieved full efficiency under constant returns to scale in both input and output orientations. Seventeen universities achieved full efficiency under the input-oriented variable returns to scale model, and sixteen under the output-oriented variable returns to scale model. Therefore, during performance measurement, the size of the university, competition, financial and infrastructure constraints, and the process of resource allocation within the university should be taken into consideration. Also, multiple input and output variables reflecting the dimensions of teaching, research, and community service should be included when measuring and assessing the performance of Algerian universities, rather than two variables which do not reflect the actual performance of these universities. Keywords: Performance of Algerian universities, Data envelopment analysis method, Constant returns to scale, Variable returns to scale, Input-orientation, Output-orientation.
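    The input-oriented constant-returns-to-scale (CCR) model used in this kind of analysis can be sketched as a small linear program (a generic textbook formulation with toy data, not the study's actual dataset): each unit's efficiency is the smallest factor θ by which its inputs could be shrunk while a convex combination of peer units still matches its outputs.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CRS (CCR) efficiency of unit j0.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, m = X.shape
    _, s = Y.shape
    # decision variables: [theta, lambda_1 .. lambda_n]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                           # sum(l*x_i) <= theta * x_i0
        A_ub.append(np.r_[-X[j0, i], X[:, i]]); b_ub.append(0.0)
    for r in range(s):                           # sum(l*y_r) >= y_r0
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[j0, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# toy data: 4 "universities", inputs = (students, staff), output = graduates
X = np.array([[100., 10.], [120., 12.], [200., 30.], [150., 15.]])
Y = np.array([[80.], [96.], [100.], [90.]])
for j in range(4):
    print(j, round(dea_ccr_input(X, Y, j), 3))
```

Units whose output/input ratios lie on the efficient frontier score 1.0; the others receive the proportional input reduction needed to reach it. The variable-returns (BCC) variant adds the constraint that the λ weights sum to one.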

  14. [Dimensional approach of emotion in psychiatry: validation of the Positive and Negative Emotionality scale (EPN-31)].

    Science.gov (United States)

    Pélissolo, A; Rolland, J-P; Perez-Diaz, F; Jouvent, R; Allilaire, J-F

    2007-01-01

    This paper reports the first validation study of the EPN-31 scale (Positive and Negative Emotionality scale, 31 items) in a French psychiatric sample. This questionnaire was adapted by Rolland from an emotion inventory developed by Diener, and is also in accordance with Watson and Clark's tripartite model of affects. Respondents were asked to rate the frequency with which they had experienced each affect (31 basic emotional states) during the last month. The answer format was a 7-point scale, ranging from 1 "Not experienced at all" to 7 "Experienced this affect several times each day". Three main scores were calculated (positive affects, negative affects, and surprise affects), as well as six sub-scores (joy, tenderness, anger, fear, sadness, shame). Four hundred psychiatric patients were included in this study and completed the EPN-31 scale and the Hospital Anxiety and Depression (HAD) scale. The Global Assessment of Functioning (GAF) scale was rated, as well as DSM-IV diagnostic criteria. We performed a principal component analysis with Varimax orthogonal transformation, and explored the factorial structure of the questionnaire, the internal consistency of each dimension, and the correlations between EPN-31 scores and HAD scores. The factorial structure of the EPN-31 was well defined as expected, with a three-factor (positive, negative and surprise affects) solution accounting for 58.2% of the variance of the questionnaire. No correlation was obtained between positive and negative affects EPN-31 scores (r=0.006). All Cronbach alpha coefficients were between 0.80 and 0.95 for main scores, and between 0.72 and 0.90 for sub-scores. GAF scores were significantly correlated with EPN-31 positive affects scores (r=0.21; p=0.001) and with EPN-31 negative affects scores (r=-0.45; p=0.001). We obtained significant correlations between the positive affects score and the HAD depression score (r=-0.45) and with measures of negative emotionality. Significantly higher EPN-31 positive affect mean scores

  15. Synergistic soil moisture observation - an interdisciplinary multi-sensor approach to yield improved estimates across scales

    Science.gov (United States)

    Schrön, M.; Fersch, B.; Jagdhuber, T.

    2017-12-01

    The representative determination of soil moisture across different spatial ranges and scales is still an important challenge in hydrology. While in situ measurements are trusted methods at the profile or point scale, cosmic-ray neutron sensors (CRNS) are renowned for providing volume averages over several hectares and depths of tens of decimeters. On the other hand, airborne remote sensing enables coverage of regional scales, although limited to the top few centimeters of the soil. Common to all of these methods is a challenging data-processing step, often requiring calibration with independent data. We investigated the performance and potential of three complementary observational methods for the determination of soil moisture below grassland in an alpine front-range river catchment (Rott, 55 km2) of southern Germany. We employ the TERENO preAlpine soil moisture monitoring network, along with additional soil samples taken throughout the catchment. Spatial soil moisture products have been generated using surveys of a car-mounted mobile CRNS (rover) and an aerial acquisition by the polarimetric synthetic aperture radar (F-SAR) of DLR. The study assesses (1) the viability of the different methods to estimate soil moisture at their respective scales and extents, and (2) how each method could support an improvement of the others. We found that in situ data can provide valuable information to calibrate the CRNS rover and to train the vegetation-removal part of the polarimetric SAR (PolSAR) retrieval algorithm. Vegetation correction is mandatory to obtain the sub-canopy soil moisture patterns. While CRNS rover surveys can be used to evaluate the F-SAR product across scales, vegetation-related PolSAR products in turn can support the spatial correction of CRNS products for biomass water. Despite the different physical principles, the synthesis of the methods can provide reasonable soil moisture information by integrating from the plot to the landscape scale.

  16. Scaling approach in predicting the seatbelt loading and kinematics of vulnerable occupants: How far can we go?

    Science.gov (United States)

    Nie, Bingbing; Forman, Jason L; Joodaki, Hamed; Wu, Taotao; Kent, Richard W

    2016-09-01

    Occupants with extreme body size and shape, such as small females or the obese, have been reported to sustain a high risk of injury in motor vehicle crashes (MVCs). Dimensional scaling approaches are widely used in injury biomechanics research, based on the assumption of geometrical similarity, but the valid scope of their application has not been quantified. The objective of this study is to determine the valid range of scaling approaches in predicting the impact response of occupants, with a focus on vulnerable populations. The present analysis was based on a data set consisting of 60 previously reported frontal crash tests in the same sled buck representing a typical mid-size passenger car. The tests included two categories of human surrogates: 9 postmortem human surrogates (PMHS) of different anthropometries (stature range: 147-189 cm; weight range: 27-151 kg) and 5 anthropomorphic test devices (ATDs). The impact response considered included the restraint loads and the kinematics of multiple body segments. For each category of human surrogate, a mid-size occupant was selected as a baseline and the impact response was scaled to another subject based on either body mass (body shape) or stature (overall body size). To identify the valid range of the scaling approach, the scaled response was compared to the experimental results using assessment scores for the peak value, peak timing (the time when the peak value occurred), and the overall curve shape, ranging from 0 (extremely poor) to 1 (perfect match). Scores of 0.7 to 0.8 and 0.8 to 1.0 indicate fair and acceptable prediction, respectively. For both ATDs and PMHS, the scaling factor derived from body mass proved an overall good predictor of the peak timing for the shoulder belt (0.868, 0.829) and the lap belt (0.858, 0.774) and of the peak value of the lap belt force (0.796, 0.869). Scaled kinematics based on body stature provided fair or acceptable prediction on the overall head
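    The geometrical-similarity assumption behind such scaling can be illustrated with the conventional equal-density rules (a hypothetical sketch with illustrative numbers, not the study's actual factors): a mass ratio yields a characteristic length ratio λ = (m/m_ref)^(1/3), from which time, force, and acceleration factors follow.

```python
# Hypothetical sketch of conventional dimensional scaling under equal-density,
# equal-stiffness assumptions: from a mass ratio, the characteristic length
# ratio is lam = (m / m_ref) ** (1/3); time scales by lam, force by lam**2,
# acceleration by 1 / lam. Numbers below are purely illustrative.
def scale_factors_from_mass(m_subject, m_ref):
    lam = (m_subject / m_ref) ** (1.0 / 3.0)
    return {"length": lam, "time": lam,
            "force": lam ** 2, "acceleration": 1.0 / lam}

# scale a mid-size (77 kg) occupant's belt-force peak to a small (47 kg) occupant
f = scale_factors_from_mass(47.0, 77.0)
peak_force_ref = 8.0  # kN, illustrative baseline value
print(round(peak_force_ref * f["force"], 2), "kN")
```

The study's point is precisely that such factors predict some quantities well (e.g. belt-force peak timing) and others poorly, so the factors should be applied only within their validated range.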

  17. Breaking the theoretical scaling limit for predicting quasiparticle energies: the stochastic GW approach.

    Science.gov (United States)

    Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi

    2014-08-15

    We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to the density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear scaling GW calculations breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_{e}>3000 electrons.
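    The stochastic-orbital idea is closely related to classical stochastic trace estimation: sums over explicit orbitals are replaced by averages over random probe vectors. A generic Hutchinson-estimator sketch (illustrative only, not the sGW implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
A = A @ A.T                      # symmetric test matrix standing in for an operator

# Hutchinson estimator: tr(A) ~ mean of chi^T A chi over random +-1 vectors chi.
# Each sample needs only matrix-vector products, which is what makes this kind
# of stochastic resolution scale gently with system size.
def stochastic_trace(A, n_samples, rng):
    n = A.shape[0]
    est = 0.0
    for _ in range(n_samples):
        chi = rng.choice([-1.0, 1.0], size=n)
        est += chi @ A @ chi
    return est / n_samples

exact = np.trace(A)
approx = stochastic_trace(A, 2000, rng)
print(abs(approx - exact) / abs(exact) < 0.05)  # True: within 5% relative error
```

The statistical error decays as the inverse square root of the number of samples, so accuracy is traded directly against cost, analogous to the trade-off the sGW formalism exploits.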

  18. A systems approach to predict oncometabolites via context-specific genome-scale metabolic networks.

    Directory of Open Access Journals (Sweden)

    Hojung Nam

    2014-09-01

    Full Text Available Altered metabolism in cancer cells has been viewed as a passive response required for a malignant transformation. However, this view has changed through the recently described metabolic oncogenic factors: mutated isocitrate dehydrogenases (IDH), succinate dehydrogenase (SDH), and fumarate hydratase (FH), which produce oncometabolites that competitively inhibit epigenetic regulation. In this study, we demonstrate in silico predictions of oncometabolites that have the potential to dysregulate epigenetic controls in nine types of cancer by incorporating massive-scale genetic mutation information (collected from more than 1,700 cancer genomes), expression profiling data, and deploying Recon 2 to reconstruct context-specific genome-scale metabolic models. Our analysis predicted 15 compounds and 24 substructures of potential oncometabolites that could result from loss-of-function and gain-of-function mutations of metabolic enzymes, respectively. These results suggest a substantial potential for discovering unidentified oncometabolites in various forms of cancer.

  19. On the Construct Validity of the Academic Motivation Scale: a CFA and Rasch Analysis approach

    DEFF Research Database (Denmark)

    Andersen, Martin Stolpe; Nielsen, Tine

    subscales measuring Extrinsic Motivation (EM) and one scale measuring Amotivation (AM), each with 4 items. The AMS was translated into Danish and data was collected from psychology students (N = 607) at two Danish universities in 6 different study terms. The construct validity of the seven scales was first investigated using confirmatory factor analysis, with mixed results of some acceptable and some non-acceptable fit indices for the model. Secondly, Rasch analyses were conducted for each of the seven subscales, using the partial credit model (PCM) and graphical loglinear Rasch models (GLLRM). This resulted in fit to the PCM in the case of IM to Accomplish (retaining three out of four items), and fit to GLLRMs in two cases: 1) IM to Know, with evidence of local dependence between all four items; 2) AM (retaining three out of four items), with evidence of gender-based differential item functioning, which

  20. The average carbon-stock approach for small-scale CDM AR projects

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Quijano, J.F.; Muys, B. [Katholieke Universiteit Leuven, Laboratory for Forest, Nature and Landscape Research, Leuven (Belgium); Schlamadinger, B. [Joanneum Research Forschungsgesellschaft mbH, Institute for Energy Research, Graz (Austria); Emmer, I. [Face Foundation, Arnhem (Netherlands); Somogyi, Z. [Forest Research Institute, Budapest (Hungary); Bird, D.N. [Woodrising Consulting Inc., Belfountain, Ontario (Canada)

    2004-06-15

    In many afforestation and reforestation (AR) projects harvesting with stand regeneration forms an integral part of the silvicultural system and satisfies local timber and/or fuelwood demand. Especially clear-cut harvesting will lead to an abrupt and significant reduction of carbon stocks. The smaller the project, the more significant the fluctuations of the carbon stocks may be. In the extreme case a small-scale project could consist of a single forest stand. In such case, all accounted carbon may be removed during a harvesting operation and the time-path of carbon stocks will typically look as in the hypothetical example presented in the report. For the aggregate of many such small-scale projects there will be a constant benefit to the atmosphere during the projects, due to averaging effects.

  1. A Multi-scale, Multi-disciplinary Approach for Assessing the Technological, Economic, and Environmental Performance of Bio-based Chemicals

    DEFF Research Database (Denmark)

    Herrgard, Markus; Sukumara, Sumesh; Campodonico Alt, Miguel Angel

    2015-01-01

    , the Multi-scale framework for Sustainable Industrial Chemicals (MuSIC) was introduced to address this issue by integrating modelling approaches at different scales, ranging from cellular to ecological scales. This framework can be further extended by incorporating modelling of the petrochemical value chain … towards a sustainable chemical industry.

  2. Partitioned based approach for very large scale database in Indian nuclear power plants

    International Nuclear Information System (INIS)

    Tiwari, Sachin; Upadhyay, Pushp; Sengupta, Nabarun; Bhandarkar, S.G.; Agilandaeswari

    2012-01-01

    This paper presents a partition-based approach for handling very large tables with sizes running from gigabytes to terabytes. The scheme is developed from our experience in handling the large signal storage required in various computer-based data acquisition and control room operator information systems, such as the Distribution Recording System (DRS) and the Computerised Operator Information System (COIS). Whenever there is a disturbance in an operating nuclear power plant, it triggers an action in which a large volume of data from multiple sources is generated, and this data needs to be stored. Concurrency issues, as data arrives from multiple sources, and the very large amount of data are the problems addressed in this paper by applying a partition-based approach. Advantages of the partition-based approach over other techniques are discussed. (author)
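    The core idea, routing records into range partitions so that queries and purges touch only the partitions overlapping the requested interval, can be sketched as follows (a minimal in-memory illustration with hypothetical boundaries, not the DRS/COIS implementation):

```python
import bisect
from collections import defaultdict

# Minimal sketch of range partitioning: each record is routed to a partition
# by its timestamp relative to sorted boundary values, so a time-interval
# query scans only the partitions that can contain matching records.
class RangePartitionedStore:
    def __init__(self, boundaries):
        self.boundaries = sorted(boundaries)      # partition upper bounds
        self.partitions = defaultdict(list)

    def insert(self, timestamp, record):
        idx = bisect.bisect_left(self.boundaries, timestamp)
        self.partitions[idx].append((timestamp, record))

    def query(self, t_from, t_to):
        lo = bisect.bisect_left(self.boundaries, t_from)
        hi = bisect.bisect_left(self.boundaries, t_to)
        return [r for i in range(lo, hi + 1)
                for r in self.partitions[i] if t_from <= r[0] <= t_to]

store = RangePartitionedStore([100, 200, 300])
for t in (50, 150, 250, 350):
    store.insert(t, f"signal@{t}")
print(store.query(120, 260))   # only the two middle partitions are scanned
```

In a real database the same routing is done by the engine's declarative partitioning, and old partitions can be dropped wholesale, which is much cheaper than row-by-row deletion from one monolithic table.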

  3. A method for real time detecting of non-uniform magnetic field

    Science.gov (United States)

    Marusenkov, Andriy

    2015-04-01

    The principle of measuring magnetic signatures for observing diverse objects is widely used in near-surface work (unexploded ordnance (UXO); engineering and environmental; archaeology) as well as in security and vehicle detection systems. As a rule, the magnitude of the signals to be measured is much lower than that of the quasi-uniform Earth magnetic field. Usually magnetometers for these purposes contain two or more spatially separated sensors to estimate the full tensor gradient of the magnetic field or, more frequently, only partial gradient components. Both types (scalar and vector) of magnetic sensors can be used. The identity of the scale factors and proper alignment of the sensitivity axes of the vector sensors are very important for deep suppression of the ambient field and detection of weak target signals. As a rule, a periodic calibration procedure is used to keep the sensors' parameters matched as closely as possible. In the present report we propose a technique for detecting magnetic anomalies which is almost insensitive to imperfect matching of the sensors. This method is based on the idea that the difference signal between two sensors behaves very differently when the instrument is rotated or moved in uniform and non-uniform fields. Due to a misfit of calibration parameters, the difference signal observed during rotation in a uniform field is similar to the total signal - the sum of the signals of both sensors. Zero change of the difference and total signals is expected if the instrument moves in a uniform field along a straight line. In contrast, the same move in a non-uniform field produces a response from each of the sensors. If one measures dB/dx and moves along the x direction, the sensor signals are shifted in time with a lag proportional to the distance between the sensors and the speed of the move. This means that the difference signal looks like the derivative of the total signal during movement in a non-uniform field. So, using quite simple
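    The behaviour described, a near-zero difference signal while moving through a uniform field and a clear derivative-like response near an anomaly, can be illustrated with a simple simulation (all field parameters below are assumed for illustration):

```python
import numpy as np

# Two sensors a distance d apart move along x through a field consisting of
# a uniform background plus a local Gaussian anomaly (parameters assumed).
d = 0.5                                              # sensor separation [m]
x = np.linspace(0.0, 10.0, 1001)                     # instrument positions
field = lambda p: 50000.0 + 20.0 * np.exp(-(p - 5.0) ** 2)   # field in nT

diff = field(x) - field(x - d)    # gradiometer (difference) signal
total = field(x) + field(x - d)   # total signal (sum of both sensors)

print(round(abs(diff[100]), 3))   # 0.0: uniform-field region gives ~no response
print(np.max(np.abs(diff)))       # clear peak while passing the anomaly
```

Far from the anomaly the difference signal cancels the uniform background regardless of its magnitude, while near the anomaly it approximates d·dB/dx, i.e. a scaled derivative of the total signal, which is exactly the signature the proposed detection method keys on.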

  4. Coastal Foredune Evolution, Part 2: Modeling Approaches for Meso-Scale Morphologic Evolution

    Science.gov (United States)

    2017-03-01

    by Margaret L. Palmsten, Katherine L. Brodie, and Nicholas J. Spore (U.S. Army Engineer Research and Development Center, Coastal and Hydraulics Laboratory, Duck, NC; ERDC/CHL CHETN-II-57, March 2017). PURPOSE: This Coastal and Hydraulics Engineering Technical Note describes modeling approaches for meso-scale morphologic evolution of coastal foredunes, which are of interest to coastal managers because foredunes provide ecosystem services and can reduce storm damages to coastal infrastructure, both of which increase coastal resiliency.

  5. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    OpenAIRE

    Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  6. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    Z. Rahnamaei

    2012-01-01

    Full Text Available We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  7. Perturbation approach to scaled type Markov renewal processes with infinite mean

    OpenAIRE

    Pajor-Gyulai, Zsolt; Szász, Domokos

    2010-01-01

    Scaled type Markov renewal processes generalize classical renewal processes: renewal times come from a one-parameter family of probability laws and the sequence of the parameters is the trajectory of an ergodic Markov chain. Our primary interest here is the asymptotic distribution of the Markovian parameter at time t → ∞. The limit, of course, depends on the stationary distribution of the Markov chain. The results, however, are essentially different depending on whether the expectation...

  8. Electrodisintegration of few body systems at SLAC and the Y scaling approach

    International Nuclear Information System (INIS)

    Meziani, Z.E.

    1986-10-01

    It is proposed that extraction of the scaling function F(y) from the transverse and longitudinal response functions in inclusive quasi-elastic electron scattering from 3He and 4He is a powerful method either to study the validity regime of the impulse approximation, by allowing access to the high nucleon momentum components in these nuclei, or to study the electromagnetic properties of bound nucleons. 19 refs., 4 figs

  9. Development of a new body image assessment scale in urban Cameroon: an anthropological approach.

    Science.gov (United States)

    Cohen, Emmanuel; Pasquet, Patrick

    2011-01-01

    Develop and validate body image scales (BIS) presenting real human bodies adapted to the macroscopic phenotype of urban Cameroonian populations. Quantitative and qualitative analysis. Yaoundé, capital city of Cameroon. Four samples with balanced sex-ratio: the first (n=16) aged 18 to 65 years (qualitative study), the second (n=30) aged 25 to 40 years (photo database), the third (n=47) and fourth (n=181) aged ≥18 years (validation study). Construct validity, test-retest reliability, concurrent and convergent validity of BIS. The body image scales present six Cameroonians of each sex arranged according to the main body mass index (BMI) categories, from underweight up to morbid obesity (≥40 kg/m2). Test-retest reliability correlations for current body size (CBS), desired body size and the current-desirable discrepancy (body self-satisfaction index) on the BIS were never below .90. In addition, for the concurrent validity, we observed a significant correlation (r=0.67), indicating that the assessment of body size perceptions is acceptable. The body image scales are adapted to the phenotypic characteristics of urban Cameroonian populations. They are reliable and valid to assess body size perceptions and culturally adapted to the Cameroonian context.

  10. Evaluation of Scaling Approaches for the Oceanic Dissipation Rate of Turbulent Kinetic Energy in the Surface Ocean

    Science.gov (United States)

    Esters, L. T.; Ward, B.; Sutherland, G.; Ten Doeschate, A.; Landwehr, S.; Bell, T. G.; Christensen, K. H.

    2016-02-01

    The air-sea exchange of heat, gas and momentum plays an important role in the Earth's weather and global climate. The exchange processes between ocean and atmosphere are influenced by the prevailing surface ocean dynamics. The surface ocean is a highly turbulent region with enhanced production of turbulent kinetic energy (TKE). The dissipation rate of TKE (ɛ) in the surface ocean is an important process governing the depth of both the mixing and mixed layers, which are important length scales for many aspects of ocean research. However, there exist very limited observations of ɛ under open ocean conditions, and consequently our understanding of how to model the dissipation profile is very limited. The existing approaches to modelling profiles of ɛ differ by orders of magnitude depending on their underlying theoretical assumptions and included physical processes. Therefore, scaling ɛ is not straightforward and requires open ocean measurements of ɛ to validate the respective scaling laws. Validated scaling of ɛ is, for example, required to produce accurate mixed layer depths in global climate models. Errors in the depth of the ocean surface boundary layer can lead to biases in sea surface temperature. Here, we present open ocean measurements of ɛ from the Air-Sea Interaction Profiler (ASIP) collected during several cruises in different ocean basins. ASIP is an autonomous upwardly rising microstructure profiler allowing undisturbed profiling up to the ocean surface. These direct measurements of ɛ under various types of atmospheric and oceanic conditions, along with measurements of atmospheric fluxes and wave conditions, allow us to make a unique assessment of several scaling approaches based on wind, wave and buoyancy forcing. This will allow us to assess the most appropriate ɛ-based parameterisation for air-sea exchange.
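    One of the wind-based scalings alluded to above is the classical "law of the wall" profile, ɛ(z) = u*³/(κz), against which such measurements are often compared; a minimal sketch with illustrative values (assumed numbers, not ASIP measurements):

```python
import numpy as np

# Law-of-the-wall scaling for the dissipation profile: eps(z) = u*^3 / (kappa * z),
# with waterside friction velocity u_star and von Karman constant kappa ~ 0.4.
# The values below are illustrative only.
kappa = 0.4
u_star = 0.01                       # waterside friction velocity [m/s]
z = np.array([1.0, 5.0, 10.0])      # depth below the surface [m]
eps = u_star ** 3 / (kappa * z)     # dissipation rate [W/kg]
print(eps)                          # decays as 1/z away from the surface
```

Wave-breaking and buoyancy-driven scalings predict profiles that deviate from this 1/z shape by orders of magnitude near the surface, which is why direct profiling measurements are needed to discriminate between them.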

  11. Decidability of uniform recurrence of morphic sequences

    OpenAIRE

    Durand , Fabien

    2012-01-01

    We prove that the uniform recurrence of morphic sequences is decidable. For this we show that the number of derived sequences of uniformly recurrent morphic sequences is bounded. As a corollary we obtain that uniformly recurrent morphic sequences are primitive substitutive sequences.

  12. A multi-scale qualitative approach to assess the impact of urbanization on natural habitats and their connectivity

    Energy Technology Data Exchange (ETDEWEB)

    Scolozzi, Rocco, E-mail: rocco.scolozzi@fmach.it [Sustainable Agro-ecosystems and Bioresources Department, IASMA Research and Innovation Centre, Fondazione Edmund Mach, Via E. Mach 1, 38010 San Michele all'Adige (Italy); Geneletti, Davide, E-mail: geneletti@ing.unitn.it [Department of Civil and Environmental Engineering, University of Trento, Trento (Italy)

    2012-09-15

    Habitat loss and fragmentation are often concomitant with land conversion and urbanization. Simple application of GIS-based landscape pattern indicators may not be sufficient to support meaningful biodiversity impact assessment. A review of the literature reveals that habitat definition and habitat fragmentation are frequently inadequately considered in environmental assessment, notwithstanding the increasing number of tools and approaches reported in the landscape ecology literature. This paper presents an approach for assessing impacts on habitats on a local scale, where availability of species data is often limited, developed for an alpine valley in northern Italy. The perspective of the methodology is multiple-scale and species-oriented, and provides both qualitative and quantitative definitions of impact significance. A qualitative decision model is used to assess ecological values in order to support land-use decisions at the local level. Building on recent studies in the same region, the methodology integrates various approaches, such as landscape graphs, object-oriented rule-based habitat assessment and expert knowledge. The results provide insights into future habitat loss and fragmentation caused by land-use changes, and aim at supporting decision-making in planning and suggesting possible ecological compensation. - Highlights: ► Many environmental assessments inadequately consider habitat loss and fragmentation. ► Species-perspective for defining habitat quality and connectivity is claimed. ► Species-based tools are difficult to apply with limited availability of data. ► We propose a species-oriented and multiple scale-based qualitative approach. ► Advantages include being species-oriented and providing value-based information.

  13. A multi-scale qualitative approach to assess the impact of urbanization on natural habitats and their connectivity

    International Nuclear Information System (INIS)

    Scolozzi, Rocco; Geneletti, Davide

    2012-01-01

    Habitat loss and fragmentation are often concomitant with land conversion and urbanization. Simple application of GIS-based landscape pattern indicators may not be sufficient to support meaningful biodiversity impact assessment. A review of the literature reveals that habitat definition and habitat fragmentation are frequently inadequately considered in environmental assessment, notwithstanding the increasing number of tools and approaches reported in the landscape ecology literature. This paper presents an approach for assessing impacts on habitats on a local scale, where availability of species data is often limited, developed for an alpine valley in northern Italy. The perspective of the methodology is multiple-scale and species-oriented, and provides both qualitative and quantitative definitions of impact significance. A qualitative decision model is used to assess ecological values in order to support land-use decisions at the local level. Building on recent studies in the same region, the methodology integrates various approaches, such as landscape graphs, object-oriented rule-based habitat assessment and expert knowledge. The results provide insights into future habitat loss and fragmentation caused by land-use changes, and aim at supporting decision-making in planning and suggesting possible ecological compensation. - Highlights: ► Many environmental assessments inadequately consider habitat loss and fragmentation. ► Species-perspective for defining habitat quality and connectivity is claimed. ► Species-based tools are difficult to apply with limited availability of data. ► We propose a species-oriented and multiple scale-based qualitative approach. ► Advantages include being species-oriented and providing value-based information.

  14. The climate-smart village approach: framework of an integrative strategy for scaling up adaptation options in agriculture

    Directory of Open Access Journals (Sweden)

    Pramod K. Aggarwal

    2018-03-01

Full Text Available Increasing weather risks threaten agricultural production systems and food security across the world. Maintaining agricultural growth while minimizing climate shocks is crucial to building a resilient food production system and meeting developmental goals in vulnerable countries. Experts have proposed several technological, institutional, and policy interventions to help farmers adapt to current and future weather variability and to mitigate greenhouse gas (GHG) emissions. This paper presents the climate-smart village (CSV) approach as a means of performing agricultural research for development that robustly tests technological and institutional options for dealing with climatic variability and climate change in agriculture using participatory methods. It aims to scale up and scale out the appropriate options and draw out lessons for policy makers from local to global levels. The approach incorporates evaluation of climate-smart technologies, practices, services, and processes relevant to local climatic risk management, identifies opportunities for maximizing adaptation gains from synergies across different interventions, and recognizes potential maladaptation and trade-offs. It ensures that these are aligned with local knowledge and linked into development plans. This paper describes early results in Asia, Africa, and Latin America to illustrate different examples of the CSV approach in diverse agroecological settings. Results from initial studies indicate that the CSV approach has a high potential for scaling out promising climate-smart agricultural technologies, practices, and services. Climate analog studies indicate that the lessons learned at the CSV sites would be relevant to adaptation planning in a large part of global agricultural land even under scenarios of climate change. Key barriers and opportunities for further work are also discussed.

  15. Uniform magnetic excitations in nanoparticles

    DEFF Research Database (Denmark)

    Mørup, Steen; Hansen, Britt Rosendahl

    2005-01-01

We have used a spin-wave model to calculate the temperature dependence of the (sublattice) magnetization of magnetic nanoparticles. The uniform precession mode, corresponding to a spin wave with wave vector q=0, is predominant in nanoparticles and gives rise to an approximately linear temperature dependence of the (sublattice) magnetization well below the superparamagnetic blocking temperature for ferro-, ferri-, and antiferromagnetic particles alike. This is in accordance with the results of a classical model for collective magnetic excitations in nanoparticles. In nanoparticles of antiferromagnetic materials, quantum effects give rise to a small deviation from the linear temperature dependence of the (sublattice) magnetization at very low temperatures. The complex nature of the excited precession states of nanoparticles of antiferromagnetic materials, with deviations from antiparallel orientation...

  16. Multi-scale Modeling Approach for Design and Optimization of Oleochemical Processes

    DEFF Research Database (Denmark)

    Jones, Mark Nicholas; Forero-Hernandez, Hector Alexander; Sarup, Bent

    2017-01-01

The primary goal of this work is to present a systematic methodology and software framework for a multi-level approach ranging from process synthesis and modeling through property prediction to sensitivity analysis, property parameter tuning and optimization. This framework is applied to the follow...

  17. An approach to large scale identification of non-obvious structural similarities between proteins

    Science.gov (United States)

    Cherkasov, Artem; Jones, Steven JM

    2004-01-01

    Background A new sequence independent bioinformatics approach allowing genome-wide search for proteins with similar three dimensional structures has been developed. By utilizing the numerical output of the sequence threading it establishes putative non-obvious structural similarities between proteins. When applied to the testing set of proteins with known three dimensional structures the developed approach was able to recognize structurally similar proteins with high accuracy. Results The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies the potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of the standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence. PMID:15147578
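The core comparison step described in this record, correlating the threading scores of two primary sequences against a common fold library, can be sketched in a few lines. The score vectors below are hypothetical stand-ins for sequence-structure threading outputs, not data from the paper.

```python
import numpy as np

def threading_similarity(scores_a, scores_b):
    """Pearson correlation between two proteins' threading-score vectors.

    Each vector holds one threading score per fold in a shared fold
    library; a high correlation suggests structural similarity even
    when sequence identity is too low for alignment-based methods.
    """
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical scores against a 6-fold library (illustrative only):
pathogen_protein = [12.1, 3.4, 8.7, 1.2, 9.9, 2.0]
host_protein     = [11.5, 2.9, 9.1, 1.5, 10.2, 2.4]
print(round(threading_similarity(pathogen_protein, host_protein), 3))
```

In practice a similarity threshold on this correlation would be tuned against proteins of known structure, as the sensitivity/specificity figures in the abstract suggest.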

  18. National-scale strategic approaches for managing introduced plants: insights from Australian acacias in South Africa

    CSIR Research Space (South Africa)

    van Wilgen, BW

    2011-09-01

    Full Text Available A range of approaches and philosophies underpin national-level strategies for managing invasive alien plants. This study presents a strategy for the management of taxa that both have value and do harm. Insights were derived from examining Australian...

  19. An agent-based approach to model land-use change at a regional scale

    NARCIS (Netherlands)

    Valbuena, D.F.; Verburg, P.H.; Bregt, A.K.; Ligtenberg, A.

    2010-01-01

    Land-use/cover change (LUCC) is a complex process that includes actors and factors at different social and spatial levels. A common approach to analyse and simulate LUCC as the result of individual decisions is agent-based modelling (ABM). However, ABM is often applied to simulate processes at local

  20. An approach to large scale identification of non-obvious structural similarities between proteins

    Directory of Open Access Journals (Sweden)

    Cherkasov Artem

    2004-05-01

    Full Text Available Abstract Background A new sequence independent bioinformatics approach allowing genome-wide search for proteins with similar three dimensional structures has been developed. By utilizing the numerical output of the sequence threading it establishes putative non-obvious structural similarities between proteins. When applied to the testing set of proteins with known three dimensional structures the developed approach was able to recognize structurally similar proteins with high accuracy. Results The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies the potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of the standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence.

  1. The role of mechanical heterogeneities during continental breakup: a 3D lithospheric-scale modelling approach

    Science.gov (United States)

    Duclaux, Guillaume; Huismans, Ritske S.; May, Dave

    2015-04-01

How and why do continents break? More than two decades of analogue and 2D plane-strain numerical experiments have shown that, regardless of the origin of the forces driving extension, the geometry of continental rifts falls into three categories, or modes: narrow rift, wide rift, or core complex. The mode of extension is strongly influenced by the rheology (and rheological behaviour) of the modelled layered system. In every model, an initial thermal or mechanical heterogeneity, such as a weak seed or a notch, is imposed to help localise the deformation and avoid uniform stretching of the lithosphere by pure shear. While it is widely accepted that structural inheritance is a key parameter controlling rift localisation - as implied by the Wilson Cycle - modelling the effect of lithospheric heterogeneities on the long-term tectonic evolution of an extending plate in full 3D remains challenging. Recent progress in finite-element methods applied to computational tectonics, along with improved access to high-performance computers, now makes it possible to switch from plane-strain thermo-mechanical experiments to full 3D high-resolution experiments. Here we investigate the role of mechanical heterogeneities in rift opening, linkage and propagation during extension of a layered lithospheric system with pTatin3d, a geodynamics modelling package utilising the material-point method for tracking material composition, combined with a multigrid finite-element method to solve heterogeneous, incompressible visco-plastic Stokes problems. The initial model setup consists of a box 1200 km wide horizontally by 250 km deep. It includes a 35 km layer of continental crust, underlain by 85 km of sub-continental lithospheric mantle, and an asthenospheric mantle. Crust and mantle have visco-plastic rheologies with pressure-dependent yielding, which includes strain weakening, and a temperature-, stress- and strain-rate-dependent viscosity based on a wet quartzite rheology for the crust, and wet

  2. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    Science.gov (United States)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and
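The distribution-function idea in this record, representing sub-daily rainfall intensity variation statistically inside a daily time-step model, can be illustrated with a minimal sketch. The exponential intensity distribution and all parameter values are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def daily_runoff(daily_depth_mm, wet_hours, infil_mm_per_h, n=10000):
    """Infiltration-excess runoff from a daily rainfall total, assuming
    within-day intensity follows an exponential distribution (an
    illustrative stand-in for the paper's general cdf approach)."""
    mean_i = daily_depth_mm / wet_hours          # mean intensity (mm/h)
    rng = np.random.default_rng(0)
    i = rng.exponential(mean_i, n)               # sampled sub-daily intensities
    # Runoff is generated only when intensity exceeds infiltration capacity,
    # a nonlinearity a daily-average intensity would miss entirely.
    excess = np.maximum(i - infil_mm_per_h, 0.0)
    return excess.mean() * wet_hours             # mm/day

# 40 mm falling over 8 wet hours against a 5 mm/h infiltration capacity:
print(round(daily_runoff(40.0, 8.0, 5.0), 1))
```

Note that with a constant 5 mm/h intensity (the daily-lumped view) this storm would produce zero runoff, which is exactly the temporal-scale mismatch the record describes.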

  3. Look-up-table approach for leaf area index retrieval from remotely sensed data based on scale information

    Science.gov (United States)

    Zhu, Xiaohua; Li, Chuanrong; Tang, Lingli

    2018-03-01

Leaf area index (LAI) is a key structural characteristic of vegetation and plays a significant role in global change research. Several methods and types of remotely sensed data have been evaluated for LAI estimation. This study aimed to evaluate the suitability of the look-up-table (LUT) approach for crop LAI retrieval from Satellite Pour l'Observation de la Terre (SPOT)-5 data and to establish an LUT approach for LAI inversion based on scale information. The LAI inversion result was validated against in situ LAI measurements, indicating that the LUT generated with the PROSAIL model (the PROSPECT leaf optical properties model coupled with the SAIL canopy model, Scattering by Arbitrarily Inclined Leaves) was suitable for crop LAI estimation, with a root mean square error (RMSE) of ~0.31 m²/m² and a determination coefficient (R²) of 0.65. The scale effect of crop LAI was analyzed based on Taylor expansion theory, indicating that when the SPOT data were aggregated over 200 × 200 pixels, the relative error became significant at 13.7%. Finally, an LUT method integrated with scale information is proposed in this article, improving the inversion accuracy to an RMSE of 0.20 m²/m² and an R² of 0.83.
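An LUT inversion of this kind can be sketched in a few lines: precompute reflectances for a grid of candidate LAI values with a forward model, then pick the entry closest to the observation. The toy forward model below merely stands in for PROSAIL, and its coefficients are illustrative.

```python
import numpy as np

def toy_canopy_reflectance(lai):
    """Toy forward model standing in for PROSAIL: NIR reflectance
    saturating with LAI (Beer-Lambert-like); coefficients illustrative."""
    soil, veg, k = 0.10, 0.45, 0.6
    return veg + (soil - veg) * np.exp(-k * lai)

# Build the look-up table over a grid of candidate LAI values
lai_grid = np.linspace(0.0, 8.0, 801)
lut_refl = toy_canopy_reflectance(lai_grid)

def retrieve_lai(observed_refl):
    """Invert by choosing the LUT entry whose simulated reflectance is
    closest (least-squares) to the observation."""
    cost = (lut_refl - observed_refl) ** 2
    return lai_grid[np.argmin(cost)]

true_lai = 3.0
obs = toy_canopy_reflectance(true_lai)
print(round(retrieve_lai(obs), 2))   # recovers ~3.0
```

A real retrieval would use multiple bands and a cost function weighted by measurement noise; the structure of the search is the same.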

  4. A novel dendrochronological approach reveals drivers of carbon sequestration in tree species of riparian forests across spatiotemporal scales.

    Science.gov (United States)

    Rieger, Isaak; Kowarik, Ingo; Cherubini, Paolo; Cierjacks, Arne

    2017-01-01

Aboveground carbon (C) sequestration in trees is important in global C dynamics, but reliable techniques for its modeling in highly productive and heterogeneous ecosystems are limited. We applied an extended dendrochronological approach to disentangle the functioning of drivers from the atmosphere (temperature, precipitation), the lithosphere (sedimentation rate), the hydrosphere (groundwater table, river water level fluctuation), the biosphere (tree characteristics), and the anthroposphere (dike construction). Carbon sequestration in aboveground biomass of riparian Quercus robur L. and Fraxinus excelsior L. was modeled (1) over time using boosted regression tree analysis (BRT) on cross-datable trees characterized by equal annual growth ring patterns and (2) across space using a subsequent classification and regression tree analysis (CART) on cross-datable and non-cross-datable trees. While C sequestration of cross-datable Q. robur responded to precipitation and temperature, cross-datable F. excelsior also responded to a low Danube river water level. However, CART revealed that C sequestration over time is governed by tree height and parameters that vary over space (magnitude of fluctuation in the groundwater table, vertical distance to mean river water level, and longitudinal distance to the upstream end of the study area). Thus, a uniform response to climatic drivers of aboveground C sequestration in Q. robur was only detectable in trees of an intermediate height class and in taller trees (>21.8 m) on sites where the groundwater table fluctuated little (≤0.9 m). The detection of climatic drivers and the river water level in F. excelsior depended on sites at lower altitudes above the mean river water level (≤2.7 m) and along a less dynamic downstream section of the study area. Our approach indicates unexploited opportunities for understanding the interplay of different environmental drivers in aboveground C sequestration. Results may support species-specific and

  5. Thermo-mechanical behaviour modelling of particle fuels using a multi-scale approach

    International Nuclear Information System (INIS)

    Blanc, V.

    2009-12-01

Particle fuels are made of a few thousand spheres, about one millimetre in diameter, composed of uranium oxide coated with confinement layers and embedded in a graphite matrix to form the fuel element. The aim of this study is to develop a new simulation tool for the thermo-mechanical behaviour of these fuels under irradiation, able to finely predict the local loadings on the particles. We chose the FE² (finite element squared) method, in which two different discretization scales are used: a macroscopic homogeneous structure whose properties at each integration point are computed on a second, heterogeneous microstructure, the Representative Volume Element (RVE). The first part of this work concerns the definition of this RVE. A morphological indicator based on the minimal distance between sphere centres permits the selection of random sets of microstructures. The elastic macroscopic response of the RVE, computed by finite elements, has been compared to an analytical model. Thermal and mechanical representativeness indicators of the local loadings have been built from the particle failure modes. A statistical study of these criteria on a hundred RVEs showed the importance of choosing a representative microstructure. To this end, an empirical model linking the morphological indicator to the mechanical indicator has been developed. The second part of the work deals with the scale-transition method, which is based on periodic homogenization. Considering a steady-state linear thermal problem with a heat source, it was shown that the heterogeneity of the heat source requires a second-order method to finely localize the thermal field. The non-linear mechanical problem was treated using the iterative Cast3M algorithm, substituting a finite element computation on the RVE for the integration of the behaviour law. This algorithm has been validated and coupled with the thermal resolution in order to compute an irradiation loading. A computation on a complete fuel element
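The morphological indicator mentioned in this record, the minimal distance between sphere centres in a candidate RVE, is straightforward to compute. The particle coordinates below are hypothetical.

```python
import numpy as np

def min_center_distance(centers):
    """Morphological indicator for screening candidate RVEs: the minimal
    distance between sphere centres (small values mean closely packed
    particles, which concentrates local thermo-mechanical loadings)."""
    c = np.asarray(centers, dtype=float)
    # Pairwise distances via broadcasting; mask the zero diagonal
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    d[np.diag_indices_from(d)] = np.inf
    return float(d.min())

# Hypothetical 3-particle microstructure (unit-cube coordinates):
centers = [[0.2, 0.2, 0.2], [0.8, 0.8, 0.8], [0.2, 0.8, 0.2]]
print(round(min_center_distance(centers), 3))   # closest pair is 0.6 apart
```

Candidate random microstructures whose indicator falls outside a target range would then be discarded before the finite element stage.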

  6. A compact to revitalise large-scale irrigation systems: A ‘theory of change’ approach

    Directory of Open Access Journals (Sweden)

    Bruce A. Lankford

    2016-02-01

    Full Text Available In countries with transitional economies such as those found in South Asia, large-scale irrigation systems (LSIS with a history of public ownership account for about 115 million ha (Mha or approximately 45% of their total area under irrigation. In terms of the global area of irrigation (320 Mha for all countries, LSIS are estimated at 130 Mha or 40% of irrigated land. These systems can potentially deliver significant local, regional and global benefits in terms of food, water and energy security, employment, economic growth and ecosystem services. For example, primary crop production is conservatively valued at about US$355 billion. However, efforts to enhance these benefits and reform the sector have been costly and outcomes have been underwhelming and short-lived. We propose the application of a 'theory of change' (ToC as a foundation for promoting transformational change in large-scale irrigation centred upon a 'global irrigation compact' that promotes new forms of leadership, partnership and ownership (LPO. The compact argues that LSIS can change by switching away from the current channelling of aid finances controlled by government irrigation agencies. Instead it is for irrigators, closely partnered by private, public and NGO advisory and regulatory services, to develop strong leadership models and to find new compensatory partnerships with cities and other river basin neighbours. The paper summarises key assumptions for change in the LSIS sector including the need to initially test this change via a handful of volunteer systems. Our other key purpose is to demonstrate a ToC template by which large-scale irrigation policy can be better elaborated and discussed.

  7. "Non-cold" dark matter at small scales: a general approach

    Science.gov (United States)

    Murgia, R.; Merle, A.; Viel, M.; Totzauer, M.; Schneider, A.

    2017-11-01

Structure formation at small cosmological scales provides an important frontier for dark matter (DM) research. Scenarios with small DM particle masses, large momenta or hidden interactions tend to suppress the gravitational clustering at small scales. The details of this suppression depend on the DM particle nature, allowing for a direct link between DM models and astrophysical observations. However, most of the astrophysical constraints obtained so far refer to a very specific shape of the power suppression, corresponding to thermal warm dark matter (WDM), i.e., candidates with a Fermi-Dirac or Bose-Einstein momentum distribution. In this work we introduce a new analytical fitting formula for the power spectrum, which is simple yet flexible enough to reproduce the clustering signal of large classes of non-thermal DM models, which are not at all adequately described by the oversimplified notion of WDM. We show that the formula is able to fully cover the parameter space of sterile neutrinos (whether resonantly produced or from particle decay), mixed cold and warm models, fuzzy dark matter, as well as other models suggested by the effective theory of structure formation (ETHOS). Based on this fitting formula, we perform a large suite of N-body simulations and we extract important nonlinear statistics, such as the matter power spectrum and the halo mass function. Finally, we present first preliminary astrophysical constraints, based on linear theory, from both the number of Milky Way satellites and the Lyman-α forest. This paper is a first step towards a general and comprehensive modeling of small-scale departures from the standard cold DM model.
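A fitting formula of the kind described can be sketched as follows. The functional form T(k) = [1 + (αk)^β]^γ and the parameter values used here are illustrative assumptions for the sketch, not necessarily the paper's exact parametrization.

```python
import numpy as np

def transfer_squared(k, alpha, beta, gamma):
    """Squared transfer function T^2(k) = P_nCDM(k) / P_CDM(k) for a
    generalized suppression T(k) = [1 + (alpha*k)^beta]^gamma.
    Parameter values below are illustrative, not fitted."""
    return (1.0 + (alpha * k) ** beta) ** (2.0 * gamma)

k = np.logspace(-1, 2, 400)                      # wavenumbers in h/Mpc
t2 = transfer_squared(k, alpha=0.05, beta=2.0, gamma=-5.0)

# Half-mode scale: where the power is suppressed to half the CDM value,
# a common summary statistic for comparing suppression shapes.
k_half = k[np.argmin(np.abs(t2 - 0.5))]
print(f"k_1/2 ~ {k_half:.1f} h/Mpc")
```

Varying β and γ independently changes how sharply the suppression sets in, which is what lets a three-parameter form cover model classes that a one-parameter thermal WDM mass cannot.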

  8. An Automated Approach to Map Winter Cropped Area of Smallholder Farms across Large Scales Using MODIS Imagery

    Directory of Open Access Journals (Sweden)

    Meha Jain

    2017-06-01

Full Text Available Fine-scale agricultural statistics are an important tool for understanding trends in food production and their associated drivers, yet these data are rarely collected in smallholder systems. These statistics are particularly important for smallholder systems given the large amount of fine-scale heterogeneity in production that occurs in these regions. To overcome the lack of ground data, satellite data are often used to map fine-scale agricultural statistics. However, doing so is challenging for smallholder systems because of (1) complex sub-pixel heterogeneity; (2) little to no available calibration data; and (3) high amounts of cloud cover, as most smallholder systems occur in the tropics. We develop an automated method termed the MODIS Scaling Approach (MSA) to map smallholder cropped area across large spatial and temporal scales using MODIS Enhanced Vegetation Index (EVI) satellite data. We use this method to map winter cropped area, a key measure of cropping intensity, across the Indian subcontinent annually from 2000–2001 to 2015–2016. The MSA defines a pixel as cropped based on winter growing season phenology and scales the percent of cropped area within a single MODIS pixel based on observed EVI values at peak phenology. We validated the result with eleven high-resolution scenes (spatial scale of 5 × 5 m² or finer) that we classified into cropped versus non-cropped maps using training data collected by visual inspection of the high-resolution imagery. The MSA had moderate to high accuracies when validated using these eleven scenes across India (R² ranging between 0.19 and 0.89, with an overall R² of 0.71 across all sites). This method requires no calibration data, making it easy to implement across large spatial and temporal scales, with 100% spatial coverage due to the compositing of EVI to generate cloud-free data sets. The accuracies found in this study are similar to those of other studies that map crop production using automated methods
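The scaling step of an approach like the MSA can be sketched as follows: classify a pixel as cropped from its winter-season phenology, then scale the sub-pixel cropped fraction from the peak EVI value. The thresholds and the linear scaling below are illustrative assumptions, not the calibration used in the study.

```python
import numpy as np

def winter_cropped_fraction(peak_evi, evi_min=0.2, evi_max=0.6):
    """Scale a per-pixel cropped fraction from peak winter-season EVI.

    Pixels below evi_min are treated as non-cropped (fraction 0), pixels
    at or above evi_max as fully cropped (fraction 1), with linear
    scaling in between. Thresholds here are illustrative only.
    """
    evi = np.asarray(peak_evi, dtype=float)
    frac = (evi - evi_min) / (evi_max - evi_min)
    return np.clip(frac, 0.0, 1.0)

peak = np.array([0.1, 0.3, 0.5, 0.7])
print(winter_cropped_fraction(peak))   # fractions 0, 0.25, 0.75, 1
```

Summing these fractions times the pixel area over a district would give the kind of fine-scale cropped-area statistic the abstract describes.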

  9. Experimental demonstration of a tailored-width microchannel heat exchanger configuration for uniform wall temperature

    International Nuclear Information System (INIS)

    Riera, S; Barrau, J; Rosell, J I; Omri, M; Fréchette, L G

    2013-01-01

In this work, an experimental study of a novel microfabricated heat sink configuration that tends to make the wall temperature uniform, even as the flow temperature increases, is presented. The design consists of a series of microchannel sections with stepwise varying width. This scheme counteracts the increase in flow temperature by reducing the local thermal resistance along the flow path. A test apparatus with uniform heat flux and distributed wall-temperature measurements was developed for microchannel heat exchanger characterisation. The energy balance is checked and the temperature distribution is analysed for each test. The results show that the wall temperature decreases slightly along the flow path while the fluid temperature increases, highlighting the strong impact of this approach. For a flow rate of 16 ml/s, the mean thermal resistance of the heat sink is 2.35·10⁻⁵ m²·K/W, nearly a three-fold improvement over millimetre-scale channels. For the same flow rate and a heat flux of 50 W/cm², the temperature uniformity, expressed as the standard deviation of the wall temperature, is around 6 °C.
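The two reported metrics, mean thermal resistance and wall-temperature uniformity, can be computed directly from distributed wall-temperature readings. The readings below are hypothetical, not the paper's data.

```python
import statistics

def thermal_resistance(t_wall_mean_c, t_fluid_in_c, heat_flux_w_cm2):
    """Mean thermal resistance R'' = (T_wall - T_in) / q'' in m^2*K/W."""
    q = heat_flux_w_cm2 * 1e4            # convert W/cm^2 -> W/m^2
    return (t_wall_mean_c - t_fluid_in_c) / q

# Hypothetical distributed wall-temperature readings along the flow path
# (slightly decreasing, as the stepwise-width scheme is designed to give):
t_wall = [41.2, 40.8, 40.1, 39.6, 39.0]

uniformity = statistics.pstdev(t_wall)   # std. dev. of wall temperature
r = thermal_resistance(statistics.mean(t_wall), 28.0, 50.0)
print(f"R'' = {r:.2e} m^2*K/W, uniformity = {uniformity:.2f} C")
```

With real data the energy balance (heater power versus coolant enthalpy rise) would be checked first, as the abstract notes, before trusting either metric.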

  10. Multiscale approach including microfibril scale to assess elastic constants of cortical bone based on neural network computation and homogenization method.

    Science.gov (United States)

    Barkaoui, Abdelwahed; Chamekh, Abdessalem; Merzouki, Tarek; Hambli, Ridha; Mkaddem, Ali

    2014-03-01

The complexity and heterogeneity of bone tissue require multiscale modeling to understand its mechanical behavior and its remodeling mechanisms. In this paper, a novel multiscale hierarchical approach including the microfibril scale, based on hybrid neural network (NN) computation and homogenization equations, was developed to link nanoscopic and macroscopic scales to estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained NN simulation. Finite element calculation is performed at nanoscopic levels to provide a database to train an in-house NN program; and (iii) in steps 2-10, from fibril to continuum cortical bone tissue, homogenization equations are used to perform the computation at the higher scales. The NN outputs (elastic properties of the microfibril) are used as inputs for the homogenization computation to determine the properties of the mineralized collagen fibril. The mechanical and geometrical properties of the bone constituents (mineral, collagen, and cross-links) as well as the porosity were taken into consideration. This paper aims to predict analytically the effective elastic constants of cortical bone by modeling its elastic response at these different scales, ranging from the nanostructural to mesostructural levels. The outputs of the lowest scale were well integrated with the higher levels and served as inputs for the modeling at the next higher scale. Good agreement was obtained between our predicted results and literature data. Copyright © 2013 John Wiley & Sons, Ltd.
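Step 0 of the described pipeline averages the upper and lower Hill bounds. A minimal sketch of such a Voigt-Reuss-Hill estimate for a two-phase composite modulus is shown below; the moduli and volume fraction are illustrative, not the paper's bone-constituent values.

```python
def voigt_reuss_hill(e1, e2, f1):
    """Hill estimate for a two-phase composite modulus: the average of
    the Voigt (uniform-strain, upper) and Reuss (uniform-stress, lower)
    bounds. f1 is the volume fraction of phase 1."""
    f2 = 1.0 - f1
    e_voigt = f1 * e1 + f2 * e2               # upper bound
    e_reuss = 1.0 / (f1 / e1 + f2 / e2)       # lower bound
    return 0.5 * (e_voigt + e_reuss)

# e.g. a stiff mineral-like phase (100 GPa) dispersed in a soft
# water-like phase (2.2 GPa) at 40% volume fraction (illustrative):
print(round(voigt_reuss_hill(100.0, 2.2, 0.4), 2))
```

The wide gap between the two bounds for strongly contrasted phases is why the higher steps of the pipeline replace simple bound averaging with trained-NN and homogenization computations.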

  11. Systems approach to monitoring and evaluation guides scale up of the Standard Days Method of family planning in Rwanda

    Science.gov (United States)

    Igras, Susan; Sinai, Irit; Mukabatsinda, Marie; Ngabo, Fidele; Jennings, Victoria; Lundgren, Rebecka

    2014-01-01

    There is no guarantee that a successful pilot program introducing a reproductive health innovation can also be expanded successfully to the national or regional level, because the scaling-up process is complex and multilayered. This article describes how a successful pilot program to integrate the Standard Days Method (SDM) of family planning into existing Ministry of Health services was scaled up nationally in Rwanda. Much of the success of the scale-up effort was due to systematic use of monitoring and evaluation (M&E) data from several sources to make midcourse corrections. Four lessons learned illustrate this crucially important approach. First, ongoing M&E data showed that provider training protocols and client materials that worked in the pilot phase did not work at scale; therefore, we simplified these materials to support integration into the national program. Second, triangulation of ongoing monitoring data with national health facility and population-based surveys revealed serious problems in supply chain mechanisms that affected SDM (and the accompanying CycleBeads client tool) availability and use; new procedures for ordering supplies and monitoring stockouts were instituted at the facility level. Third, supervision reports and special studies revealed that providers were imposing unnecessary medical barriers to SDM use; refresher training and revised supervision protocols improved provider practices. Finally, informal environmental scans, stakeholder interviews, and key events timelines identified shifting political and health policy environments that influenced scale-up outcomes; ongoing advocacy efforts are addressing these issues. The SDM scale-up experience in Rwanda confirms the importance of monitoring and evaluating programmatic efforts continuously, using a variety of data sources, to improve program outcomes. PMID:25276581

  12. Sub-bottom profiling for large-scale maritime archaeological survey An experience-based approach

    DEFF Research Database (Denmark)

    Grøn, Ole; Boldreel, Lars Ole

    2013-01-01

and wrecks partially or wholly embedded in the sea-floor sediments demands the application of high-resolution sub-bottom profilers. This paper presents a strategy for the cost-effective large-scale mapping of unknown sediment-embedded sites such as submerged Stone Age settlements or wrecks, based on sub...... of the submerged cultural heritage. Elements such as archaeological wreck sites exposed on the sea floor are mapped using side-scan and multi-beam techniques. These can also provide information on bathymetric patterns representing potential Stone Age settlements, whereas the detection of such archaeological sites

  13. 75 FR 71344 - Uniform Compliance Date for Food Labeling Regulations

    Science.gov (United States)

    2010-11-23

    .... FSIS-2010-0031] RIN 0583-AD Uniform Compliance Date for Food Labeling Regulations AGENCY: Food Safety... regulations that require changes in the labeling of meat and poultry food products. Many meat and poultry... for new food labeling regulations is consistent with FDA's approach in this regard. FDA is also...

  14. Authormagic – An Approach to Author Disambiguation in Large-Scale Digital Libraries

    CERN Document Server

    Weiler, Henning; Mele, Salvatore

    2011-01-01

    A collaboration of leading research centers in the field of High Energy Physics (HEP) has built INSPIRE, a novel information infrastructure, which comprises the entire corpus of about one million documents produced within the discipline, including a rich set of metadata, citation information and half a million full-text documents, and offers a unique opportunity for author disambiguation strategies. The presented approach features extended metadata comparison metrics and a three-step unsupervised graph clustering technique. The algorithm aided in identifying 200'000 individuals from 6'500'000 author signatures. Preliminary tests based on knowledge of external experts and a pilot of a crowd-sourcing system show a success rate of more than 96% within the selected test cases. The obtained author clusters serve as a recommendation for INSPIRE users to further clean the publication list in a crowd-sourced approach.

  15. Modeling of scale-dependent bacterial growth by chemical kinetics approach.

    Science.gov (United States)

    Martínez, Haydee; Sánchez, Joaquín; Cruz, José-Manuel; Ayala, Guadalupe; Rivera, Marco; Buhse, Thomas

    2014-01-01

    We applied the so-called chemical kinetics approach to complex bacterial growth patterns that were dependent on the liquid-surface-area-to-volume ratio (SA/V) of the bacterial cultures. The kinetic modeling was based on current experimental knowledge in terms of autocatalytic bacterial growth, its inhibition by the metabolite CO2, and the relief of inhibition through the physical escape of the inhibitor. The model quantitatively reproduces kinetic data of SA/V-dependent bacterial growth and can discriminate between differences in the growth dynamics of enteropathogenic E. coli, E. coli JM83, and Salmonella typhimurium on one hand and Vibrio cholerae on the other hand. Furthermore, the data fitting procedures allowed predictions about the velocities of the involved key processes and the potential behavior in an open-flow bacterial chemostat, revealing an oscillatory approach to the stationary states.
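
    The SA/V-dependent kinetics described above can be illustrated with a minimal rate model: autocatalytic (logistic) growth inhibited by accumulated CO2, with the inhibitor escaping at a rate proportional to SA/V. All rate constants and initial values below are hypothetical, chosen only to show the qualitative effect, not fitted to the authors' data.

```python
def grow(sa_v, t_end=3.0, dt=0.001):
    """Integrate a toy autocatalytic-growth model by forward Euler.

    sa_v: surface-area-to-volume ratio controlling CO2 escape.
    Returns final biomass (arbitrary units). All parameters hypothetical.
    """
    k_growth, b_max = 1.5, 1.0   # autocatalytic rate, carrying capacity
    k_inhib = 0.5                # CO2 level that halves the growth rate
    k_escape = 1.0               # escape rate constant, scaled by SA/V
    b, c = 0.01, 0.0             # initial biomass and dissolved CO2
    for _ in range(int(t_end / dt)):
        rate = k_growth * b * (1.0 - b / b_max) / (1.0 + c / k_inhib)
        b += dt * rate                            # biomass growth
        c += dt * (rate - k_escape * sa_v * c)    # CO2 produced minus escaped
    return b

# A larger SA/V lets more CO2 escape, relieving inhibition:
shallow = grow(sa_v=2.0)   # large surface relative to volume
deep = grow(sa_v=0.2)      # small surface relative to volume
```

    Fitting rate constants of this kind to measured growth curves is what allows discrimination between species and predictions about chemostat behavior.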

  16. Modeling of Scale-Dependent Bacterial Growth by Chemical Kinetics Approach

    Directory of Open Access Journals (Sweden)

    Haydee Martínez

    2014-01-01

    Full Text Available We applied the so-called chemical kinetics approach to complex bacterial growth patterns that were dependent on the liquid-surface-area-to-volume ratio (SA/V) of the bacterial cultures. The kinetic modeling was based on current experimental knowledge in terms of autocatalytic bacterial growth, its inhibition by the metabolite CO2, and the relief of inhibition through the physical escape of the inhibitor. The model quantitatively reproduces kinetic data of SA/V-dependent bacterial growth and can discriminate between differences in the growth dynamics of enteropathogenic E. coli, E. coli JM83, and Salmonella typhimurium on one hand and Vibrio cholerae on the other hand. Furthermore, the data fitting procedures allowed predictions about the velocities of the involved key processes and the potential behavior in an open-flow bacterial chemostat, revealing an oscillatory approach to the stationary states.

  17. Geoscience Meets Social Science: A Flexible Data Driven Approach for Developing High Resolution Population Datasets at Global Scale

    Science.gov (United States)

    Rose, A.; McKee, J.; Weber, E.; Bhaduri, B. L.

    2017-12-01

    Leveraging decades of expertise in population modeling, and in response to growing demand for higher resolution population data, Oak Ridge National Laboratory is now generating LandScan HD at global scale. LandScan HD is conceived as a 90m resolution population distribution where modeling is tailored to the unique geography and data conditions of individual countries or regions by combining social, cultural, physiographic, and other information with novel geocomputation methods. Similarities among these areas are exploited in order to leverage existing training data and machine learning algorithms to rapidly scale development. Drawing on ORNL's unique set of capabilities, LandScan HD adapts highly mature population modeling methods developed for LandScan Global and LandScan USA, settlement mapping research and production in high-performance computing (HPC) environments, land use and neighborhood mapping through image segmentation, and facility-specific population density models. Adopting a flexible methodology to accommodate different geographic areas, LandScan HD accounts for the availability, completeness, and level of detail of relevant ancillary data. Beyond core population and mapped settlement inputs, these factors determine the model complexity for an area, requiring that for any given area, a data-driven model could support either a simple top-down approach, a more detailed bottom-up approach, or a hybrid approach.

  18. Segmenting healthcare terminology users: a strategic approach to large scale evolutionary development.

    Science.gov (United States)

    Price, C; Briggs, K; Brown, P J

    1999-01-01

    Healthcare terminologies have become larger and more complex, aiming to support a diverse range of functions across the whole spectrum of healthcare activity. Prioritization of development, implementation and evaluation can be achieved by regarding the "terminology" as an integrated system of content-based and functional components. Matching these components to target segments within the healthcare community supports a strategic approach to evolutionary development and provides essential product differentiation to enable terminology providers and systems suppliers to focus on end-user requirements.

  19. Multi-Scale Approach to Understanding Source-Sink Dynamics of Amphibians

    Science.gov (United States)

    2015-12-01

    spotted salamander (A. maculatum) at Fort Leonard Wood (FLW), Missouri. We used a multi-faceted approach that combined intensive ecological field studies, genetic analyses, and spatial demographic networks to identify optimal locations for wetland construction and restoration. Ecological Applications. Walls, S. C., Ball, L. C

  20. Approaches for Scaling Back the Defense Department’s Budget Plans

    Science.gov (United States)

    2013-03-01

    of an overall strategy for curtailing defense costs, or some variation of that approach could be adopted instead. (Ways in which the general... tempo (activities such as steaming days for Navy ships and flying hours for the services' aviation components) of the units that remained in the...Mosher and Matthew S. Goldberg. Adam Talaber analyzed the costs to operate individual military units. David Berteau of the Center for Strategic and

  1. MURI: An Integrated Multi-Scale Approach for Understanding Ion Transport in Complex Heterogeneous Organic Materials

    Science.gov (United States)

    2017-09-30

    Thomas A. Witten, Matthew W. Liberatore, and Andrew M. Herring, Department of Chemical and Biological Engineering and Department of Chemistry...2) To fundamentally understand, with combined experimental and computational approaches, the interplay of chemistry, processing, and morphology on...Society, The International Society of Electrochemistry and The American Institute of Chemical Engineers to give oral and poster presentations. In

  2. Reliable solution processed planar perovskite hybrid solar cells with large-area uniformity by chloroform soaking and spin rinsing induced surface precipitation

    Directory of Open Access Journals (Sweden)

    Yann-Cherng Chern

    2015-08-01

    Full Text Available A solvent soaking and rinsing method, in which the solvent was allowed to soak all over the surface followed by a spinning for solvent draining, was found to produce perovskite layers with high uniformity on a centimeter scale and with much improved reliability. Besides the enhanced crystallinity and surface morphology due to the rinsing induced surface precipitation that constrains the grain growth underneath in the precursor films, large-area uniformity with film thickness determined exclusively by the rotational speed of rinsing spinning for solvent draining was observed. With chloroform as rinsing solvent, highly uniform and mirror-like perovskite layers of area as large as 8 cm × 8 cm were produced and highly uniform planar perovskite solar cells with power conversion efficiency of 10.6 ± 0.2% as well as much prolonged lifetime were obtained. The high uniformity and reliability observed with this solvent soaking and rinsing method were ascribed to the low viscosity of chloroform as well as its feasibility of mixing with the solvent used in the precursor solution. Moreover, since the surface precipitation forms before the solvent draining, this solvent soaking and rinsing method may be adapted to spinless process and be compatible with large-area and continuous production. With the large-area uniformity and reliability for the resultant perovskite layers, this chloroform soaking and rinsing approach may thus be promising for the mass production and commercialization of large-area perovskite solar cells.

  3. Constructivist and Behaviorist Approaches: Development and Initial Evaluation of a Teaching Practice Scale for Introductory Statistics at the College Level

    Directory of Open Access Journals (Sweden)

    Rossi A. Hassad

    2011-07-01

    Full Text Available This study examined the teaching practices of 227 college instructors of introductory statistics from the health and behavioral sciences. Using primarily multidimensional scaling (MDS) techniques, a two-dimensional, 10-item teaching-practice scale, TISS (Teaching of Introductory Statistics Scale), was developed. The two dimensions (subscales) are characterized as constructivist and behaviorist; they are orthogonal. Criterion validity of the TISS was established in relation to instructors' attitude toward teaching, and acceptable levels of reliability were obtained. A significantly higher level of behaviorist practice (less reform-oriented) was reported by instructors from the U.S., as well as instructors with academic degrees in mathematics and engineering, whereas those with membership in professional organizations tended to be more reform-oriented (or constructivist). The TISS, thought to be the first of its kind, will allow the statistics education community to empirically assess and describe the pedagogical approach (teaching practice) of instructors of introductory statistics in the health and behavioral sciences, at the college level, and determine what learning outcomes result from the different teaching-practice orientations. Further research is required in order to be conclusive about the structural and psychometric properties of this scale, including its stability over time.
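
    The multidimensional-scaling step used to derive a low-dimensional structure for such a scale can be sketched with classical (Torgerson) MDS, which recovers a point configuration from a matrix of pairwise dissimilarities. The four points below are synthetic, not TISS data:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed an (n, n) Euclidean distance matrix in k dims."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the k largest
    scale = np.sqrt(np.clip(w[idx], 0.0, None))
    return V[:, idx] * scale                 # (n, k) configuration

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D, k=2)                    # recovered 2-D configuration
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

    For exact Euclidean input the recovered configuration reproduces the distances up to rotation and reflection; on real dissimilarity data (as in the study), nonmetric variants are typically preferred.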

  4. New approach to small scale power could light up much of the developing world

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, J.

    2011-01-15

    The modern conveniences requiring electricity have been out of reach for almost half of the world's population because they live too far from the grid. Innovative technology combined with creative new business models could significantly improve the quality of life for millions of people. This article discussed a small scale renewable energy system that could ensure that villages all over the world have access to radios, lights, refrigeration and other critical technologies. The article also noted the potential implications in terms of health, education and the general standard of living for millions of people. The basic model involves setting up small solar panels in a good location in a village or on a farm. The panels can be used to charge up equipment that is either on-site or portable. This article described how to achieve economies of scale through mass production of many similar units. The project has been tested in Brazil and a donation to the project of $100,000 will be used to install solar-powered public infrastructure comprising water pumping, a school and an Internet station. The funds will also be used to provide 70 solar lanterns for children living in two villages on the Rio Tapajos, a tributary to the Amazon near Santarem. 1 fig.

  5. Introduction of an energy efficiency tool for small scale biomass gasifiers – A thermodynamic approach

    International Nuclear Information System (INIS)

    Vakalis, S.; Patuzzi, F.; Baratieri, M.

    2017-01-01

    Highlights: • Analysis of plants for electricity, heat and materials production. • Thermodynamic analysis by using exergy, entransy and statistical entropy. • Extrapolation of a single efficiency index by combining the thermodynamic parameters. • Application of methodology for two monitored small scale gasifiers. - Abstract: Modern gasification plants should be treated as poly-generation facilities because, alongside the production of electricity and heat, valuable or waste materials streams are generated. Thus, integrated methods should be introduced in order to account for the full range and the nature of the products. Application of conventional hybrid indicators that convert the output into monetary units or CO2 equivalents is a source of bias because of the inconsistency of the conversion factors and the unreliability of the available data. Therefore, this study introduces a novel thermodynamic-based method for assessing gasification plant performance by means of exergy, entransy and statistical entropy. A monitoring campaign has been implemented on two small scale gasifiers and the results have been applied to the proposed method. The energy plants are compared with respect to their individual thermodynamic parameters for energy production and materials distribution. In addition, the method returns one single value which is a resultant of all the investigated parameters and is a characteristic value of the overall performance of an energy plant.
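
    The exergy component of such an index can be illustrated for a heat stream: the useful work potential of heat Q delivered at temperature T, against a dead-state temperature T0, is Ex = Q·(1 − T0/T), the Carnot factor. The duties and temperatures below are hypothetical, not taken from the two monitored plants:

```python
def heat_exergy(q_kw, t_hot_k, t0_k=298.15):
    """Exergy (kW) of a heat stream q_kw delivered at temperature t_hot_k,
    relative to dead-state temperature t0_k (Carnot factor)."""
    return q_kw * (1.0 - t0_k / t_hot_k)

# Electricity is pure exergy; heat is discounted by its Carnot factor.
electric_kw = 45.0                       # hypothetical net electric output
heat_ex_kw = heat_exergy(100.0, 363.15)  # 100 kW of hot water at 90 °C
total_exergy_kw = electric_kw + heat_ex_kw
```

    This is why exergy-based indices weight CHP heat far below its raw energy content, avoiding the conversion-factor bias of monetary or CO2-equivalent indicators.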

  6. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    Science.gov (United States)

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology of the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference of each sub-system and simplify the controller design, the proposed model reference decentralized adaptive control scheme constructs a decoupled well-designed reference model first. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on the optimal analog control and prediction-based digital redesign technique for the sampled-data large-scale coupling system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply the iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
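
    The ILC refinement described, learning a feedforward input over repeated trials, can be sketched for a single first-order sampled plant with the classic P-type update u_{j+1}(t) = u_j(t) + γ·e_j(t+1). The plant, reference and gain below are illustrative only, not the paper's N-subsystem decentralized design:

```python
import math

def run_trial(u, a=0.5):
    """Simulate x(t+1) = a*x(t) + u(t), y = x, from x(0) = 0.
    Returns the outputs y(1..N) for the N-step input sequence u."""
    x, y = 0.0, []
    for u_t in u:
        x = a * x + u_t
        y.append(x)
    return y

N = 20
r = [math.sin(0.3 * (t + 1)) for t in range(N)]  # reference for y(1..N)
gamma = 0.8                                      # learning gain, |1 - gamma*CB| < 1
u = [0.0] * N
for _ in range(100):                             # repeated trials
    y = run_trial(u)
    e = [r[t] - y[t] for t in range(N)]          # tracking error this trial
    u = [u[t] + gamma * e[t] for t in range(N)]  # P-type ILC update

final_err = max(abs(r[t] - run_trial(u)[t]) for t in range(N))
```

    Because the first Markov parameter CB = 1 here, the error contraction factor per trial is |1 − γ·CB| = 0.2, so the tracking error at the sampling instants vanishes over the iterations.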

  7. Structural health monitoring using DOG multi-scale space: an approach for analyzing damage characteristics

    Science.gov (United States)

    Guo, Tian; Xu, Zili

    2018-03-01

    Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while suppressing noise simultaneously. In this work, a novel method is introduced to detect multiple damage in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate measurement noise-based interference. The effectiveness of the method is verified by numerical and experimental methods applied to different structural types. The results demonstrate that there are two advantages to the proposed method. First, damage features are extracted by the difference of the multi-scale representation; this step is taken such that the interference of noise amplification can be avoided. Second, a data fusion technique applied to the proposed method provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations are utilized to validate that the proposed method has a higher accuracy in damage detection.
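
    The core feature-extraction step, taking the difference of two Gaussian-smoothed (DOG) versions of a measured mode shape so that a localized damage kink stands out against the smooth global mode, can be sketched in one dimension. The mode shape, damage location and scales below are synthetic:

```python
import math

def gaussian_smooth(sig, sigma):
    """Convolve a 1-D signal with a Gaussian kernel (reflected edges)."""
    radius = int(4 * sigma)
    kern = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    norm = sum(kern)
    kern = [k / norm for k in kern]
    n = len(sig)
    out = []
    for i in range(n):
        acc = 0.0
        for j, k in enumerate(kern):
            idx = i + j - radius
            idx = abs(idx) if idx < 0 else idx          # reflect left edge
            idx = 2 * (n - 1) - idx if idx >= n else idx  # reflect right edge
            acc += k * sig[idx]
        out.append(acc)
    return out

n = 101
damage_at = 25
# First bending mode with a small localized dent at the damage site:
shape = [math.sin(math.pi * i / (n - 1))
         - 0.1 * math.exp(-((i - damage_at) / 1.0) ** 2) for i in range(n)]
dog = [a - b for a, b in zip(gaussian_smooth(shape, 2.0),
                             gaussian_smooth(shape, 4.0))]
# Band-pass response peaks at the damage; skip edge samples where the
# reflection padding itself produces a spurious response:
peak = max(range(12, n - 12), key=lambda i: abs(dog[i]))
```

    The smooth mode passes through both Gaussian filters almost unchanged, so it largely cancels in the difference, while the sharp local dent survives; the data-fusion stage of the paper then combines such responses across scales and sensors.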

  8. The use of scale-invariance feature transform approach to recognize and retrieve incomplete shoeprints.

    Science.gov (United States)

    Wei, Chia-Hung; Li, Yue; Gwo, Chih-Ying

    2013-05-01

    Shoeprints left at the crime scene provide valuable information in criminal investigation due to the distinctive patterns in the sole. Those shoeprints are often incomplete and noisy. In this study, scale-invariance feature transform is proposed and evaluated for recognition and retrieval of partial and noisy shoeprint images. The proposed method first constructs different scale spaces to detect local extrema in the underlying shoeprint images. Those local extrema are considered as useful key points in the image. Next, the features of those key points are extracted to represent their local patterns around key points. Then, the system computes the cross-correlation between the query image and each shoeprint image in the database. Experimental results show that full-size prints and prints from the toe area perform best among all shoeprints. Furthermore, this system also demonstrates its robustness against noise because there is a very slight difference in comparison between original shoeprints and noisy shoeprints. © 2013 American Academy of Forensic Sciences.
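
    The retrieval stage described, scoring a partial query print against every database print, can be sketched with plain normalized cross-correlation on tiny binary grids. The real system builds scale-space key points (SIFT-style) before correlating; the sole patterns below are toy examples:

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-size 2-D grids."""
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    ma = sum(flat_a) / len(flat_a)
    mb = sum(flat_b) / len(flat_b)
    num = sum((x - ma) * (y - mb) for x, y in zip(flat_a, flat_b))
    da = sum((x - ma) ** 2 for x in flat_a) ** 0.5
    db = sum((y - mb) ** 2 for y in flat_b) ** 0.5
    return num / (da * db)

# Three toy sole patterns (1 = ridge, 0 = background):
zigzag = [[1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]]
bars   = [[1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0]]
dots   = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
database = [zigzag, bars, dots]

# Partial print: the bottom half of the zigzag pattern is missing:
query = [row[:] for row in zigzag]
query[2] = [0, 0, 0, 0]
query[3] = [0, 0, 0, 0]

scores = [ncc(query, print_) for print_ in database]
best = max(range(len(scores)), key=lambda i: scores[i])
```

    Even with half the print missing, the correct pattern still scores highest, which mirrors the paper's finding that partial prints (especially from the toe area) remain retrievable.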

  9. Developing and validating the Youth Conduct Problems Scale-Rwanda: a mixed methods approach.

    Directory of Open Access Journals (Sweden)

    Lauren C Ng

    Full Text Available This study developed and validated the Youth Conduct Problems Scale-Rwanda (YCPS-R). Qualitative free listing (n = 74) and key informant interviews (n = 47) identified local conduct problems, which were compared to existing standardized conduct problem scales and used to develop the YCPS-R. The YCPS-R was cognitively tested by 12 youth and caregiver participants, and assessed for test-retest and inter-rater reliability in a sample of 64 youth. Finally, a purposive sample of 389 youth and their caregivers were enrolled in a validity study. Validity was assessed by comparing YCPS-R scores to conduct disorder, which was diagnosed with the Mini International Neuropsychiatric Interview for Children, and functional impairment scores on the World Health Organization Disability Assessment Schedule Child Version. ROC analyses assessed the YCPS-R's ability to discriminate between youth with and without conduct disorder. Qualitative data identified a local presentation of youth conduct problems that did not match previously standardized measures. Therefore, the YCPS-R was developed solely from local conduct problems. Cognitive testing indicated that the YCPS-R was understandable and required little modification. The YCPS-R demonstrated good reliability, construct, criterion, and discriminant validity, and fair classification accuracy. The YCPS-R is a locally-derived measure of Rwandan youth conduct problems that demonstrated good psychometric properties and could be used for further research.
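
    The ROC analysis reported reduces to estimating the probability that a randomly chosen case with the disorder scores higher than one without (the Mann-Whitney interpretation of the area under the curve). The scores below are made up to show the computation:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney probability P(pos > neg), ties counted half."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

perfect = roc_auc([0.9, 0.8], [0.2, 0.1])   # complete separation -> AUC 1.0
overlap = roc_auc([0.8, 0.4], [0.6, 0.2])   # partially overlapping scores
```

    "Fair classification accuracy" in such studies corresponds to an AUC well above chance (0.5) but short of perfect separation.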

  10. Qualitative approaches to large scale studies and students' achievements in Science and Mathematics - An Australian and Nordic Perspective

    DEFF Research Database (Denmark)

    Davidsson, Eva; Sørensen, Helene

    Large scale studies play an increasing role in educational politics and results from surveys such as TIMSS and PISA are extensively used in media debates about students' knowledge in science and mathematics. Although this debate does not usually shed light on the more extensive quantitative...... analyses, there is a lack of investigations which aim at exploring what is possible to conclude or not to conclude from these analyses. There is also a need for more detailed discussions about what trends could be discerned concerning students' knowledge in science and mathematics. The aim of this symposium...... is therefore to highlight and discuss different approaches to how data from large scale studies could be used for additional analyses in order to increase our understanding of students' knowledge in science and mathematics, but also to explore possible longitudinal trends, hidden in the data material...

  11. Emotion regulation in patients with rheumatic diseases: validity and responsiveness of the Emotional Approach Coping Scale (EAC)

    Directory of Open Access Journals (Sweden)

    Mowinckel Petter

    2009-09-01

    Full Text Available Abstract Background Chronic rheumatic diseases are painful conditions which are not entirely controllable and can place high emotional demands on individuals. Increasing evidence has shown that emotion regulation in terms of actively processing and expressing disease-related emotions is likely to promote positive adjustment in patients with chronic diseases. The Emotional Approach Coping Scale (EAC) measures active attempts to acknowledge, understand, and express emotions. Although tested in other clinical samples, the EAC has not been validated for patients with rheumatic diseases. This study evaluated the data quality, internal consistency reliability, validity and responsiveness of the Norwegian version of the EAC for this group of patients. Methods 220 patients with different rheumatic diseases were included in a cross-sectional study in which data quality and internal consistency were assessed. Construct validity was assessed through comparisons with the Brief Approach/Avoidance Coping Questionnaire (BACQ) and the General Health Questionnaire (GHQ-20). Responsiveness was tested in a longitudinal pretest-posttest study of two different coping interventions, the Vitality Training Program (VTP) and a Self-Management Program (SMP). Results The EAC had low levels of missing data. Results from principal component analysis supported two subscales, Emotional Expression and Emotional Processing, which had high Cronbach's alphas of 0.90 and 0.92, respectively. The EAC had correlations with approach-oriented items in the BACQ in the range 0.17-0.50. The EAC Expression scale had a significant negative correlation with the GHQ-20 of -0.13. As hypothesized, participation in the VTP significantly improved EAC scores, indicating responsiveness to change. Conclusion The EAC is an acceptable and valid instrument for measuring emotional processing and expression in patients with rheumatic diseases. The EAC scales were responsive to change in an intervention
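
    The internal-consistency figures quoted (Cronbach's alphas of 0.90 and 0.92) come from the standard formula α = k/(k−1)·(1 − Σ item variances / variance of the scale total). A sketch on made-up item responses, not EAC data:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # scale total per respondent
    item_var = sum(variance(scores) for scores in items)
    return (k / (k - 1)) * (1.0 - item_var / variance(totals))

# Hypothetical 4-item subscale answered by 6 respondents (rows = items);
# the items nearly agree, so alpha should be high:
consistent = [[5, 4, 2, 3, 5, 1],
              [5, 4, 2, 3, 5, 1],
              [4, 4, 2, 3, 5, 1],
              [5, 4, 2, 2, 5, 1]]
alpha = cronbach_alpha(consistent)
```

    When items covary strongly, the total's variance dwarfs the sum of item variances and alpha approaches 1; values around 0.9, as reported for the EAC subscales, indicate high internal consistency.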

  12. An efficient and general numerical method to compute steady uniform vortices

    Science.gov (United States)

    Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.

    2011-07-01

    Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.

  13. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    Science.gov (United States)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements, and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region as well as to present strengths and weaknesses of integrated modeling at such a large scale along with how it can be improved on the basis of the current modeling structure and results. 
The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential

  14. MEASURING GROCERY STORES SERVICE QUALITY IN INDONESIA: A RETAIL SERVICE QUALITY SCALE APPROACH

    Directory of Open Access Journals (Sweden)

    Leonnard Leonnard

    2017-12-01

    Full Text Available The growing number of modern grocery stores in Indonesia is a challenge for each grocery store to maintain and increase their number of consumers. The success of maintaining and improving service quality will affect long-term profitability and business sustainability. Therefore, in this study, we examined consumer perceptions of service quality in one of the modern grocery stores in Indonesia. Data were collected from 387 consumers of grocery stores in Jakarta, Bogor, Depok, Bekasi, Cibubur, and Subang. Structural Equation Modeling (SEM) through maximum likelihood and Bayesian estimation was employed to analyze the data. The finding indicated that the five indicators of the retail service quality scale consisting of physical aspects, reliability, personal interactions, problem solving and policies provided valid multi-item instruments in measuring consumer perceptions of service quality in grocery stores.

  15. Reexamining the domain of hypochondriasis: comparing the Illness Attitudes Scale to other approaches.

    Science.gov (United States)

    Fergus, Thomas A; Valentiner, David P

    2009-08-01

    The present study examined the utility of the Illness Attitudes Scale (IAS; [Kellner, R. (1986). Somatization and hypochondriasis. New York: Praeger Publishers]) in a non-clinical college sample (N=235). Relationships among five recently identified IAS dimensions (fear of illness and pain, symptom effects, treatment experience, disease conviction, and health habits) and self-report measures of several anxiety-related constructs (health anxiety, body vigilance, intolerance of uncertainty, anxiety sensitivity, and non-specific anxiety symptoms) were examined. In addition, this study investigated the incremental validity of the IAS dimensions in predicting medical utilization. The fear of illness and pain dimension and the symptom effects dimension consistently shared stronger relations with the anxiety-related constructs compared to the other three IAS dimensions. The symptom effects dimension, the disease conviction dimension, and the health habits dimension showed incremental validity over the anxiety-related constructs in predicting medical utilization. Implications for the IAS and future conceptualizations of hypochondriasis are discussed.

  16. Mechanical properties of granular materials: A variational approach to grain-scale simulations

    Energy Technology Data Exchange (ETDEWEB)

    Holtzman, R.; Silin, D.B.; Patzek, T.W.

    2009-01-15

    The mechanical properties of cohesionless granular materials are evaluated from grain-scale simulations. A three-dimensional pack of spherical grains is loaded by incremental displacements of its boundaries. The deformation is described as a sequence of equilibrium configurations. Each configuration is characterized by a minimum of the total potential energy. This minimum is computed using a modification of the conjugate gradient algorithm. Our simulations capture the nonlinear, path-dependent behavior of granular materials observed in experiments. Micromechanical analysis provides valuable insight into phenomena such as hysteresis, strain hardening and stress-induced anisotropy. Estimates of the effective bulk modulus, obtained with no adjustment of material parameters, are in agreement with published experimental data. The model is applied to evaluate the effects of hydrate dissociation in marine sediments. Weakening of the sediment is quantified as a reduction in the effective elastic moduli.
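
    The equilibrium-as-energy-minimum idea can be sketched for a one-dimensional column of three grains settling onto a floor under gravity, with a Hertzian overlap energy (2/5)·k·δ^(5/2) per contact. Plain gradient descent stands in here for the paper's modified conjugate-gradient minimizer, and every parameter below is made up:

```python
K, R, M, G = 1000.0, 1.0, 1.0, 10.0   # contact stiffness, radius, mass, gravity

def grad(h):
    """Gradient of the total potential energy w.r.t. the three grain heights.

    E = sum_i M*G*h[i] + (2/5)*K*delta^(5/2) for each active contact,
    so dE/dh picks up a Hertzian force K*delta^(3/2) per contact."""
    g = [M * G, M * G, M * G]                      # gravity terms
    d_floor = max(0.0, R - h[0])                   # floor-grain overlap
    g[0] -= K * d_floor ** 1.5
    for i in range(2):                             # grain-grain contacts
        d = max(0.0, 2 * R - (h[i + 1] - h[i]))
        f = K * d ** 1.5                           # Hertzian contact force
        g[i] += f
        g[i + 1] -= f
    return g

h = [R, 3 * R, 5 * R]                              # start just touching
for _ in range(5000):                              # descend to equilibrium
    gr = grad(h)
    h = [hi - 0.001 * gi for hi, gi in zip(h, gr)]

floor_force = K * max(0.0, R - h[0]) ** 1.5        # should carry all three weights
```

    At the minimum, each contact force balances the weight stacked above it, so the overlaps decrease with height; the same force-balance check generalizes to the three-dimensional packs simulated in the paper.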

  17. Decommissioning of nuclear reprocessing plants: French past experience and approach to future large scale operations

    International Nuclear Information System (INIS)

    Jean Jacques, M.; Maurel, J.J.; Maillet, J.

    1994-01-01

    Over the years, France has built up significant experience in dismantling nuclear fuel reprocessing facilities or various types of units representative of a modern reprocessing plant. However, only small or medium scale operations have been carried out so far. To prepare the future decommissioning of large size industrial facilities such as UP1 (Marcoule) and UP2 (La Hague), new technologies must be developed to maximize waste recycling and optimize direct operations by operators, taking the integrated dose and cost aspects into account. The decommissioning and dismantling methodology comprises: a preparation phase for inventory, choice and installation of tools and arrangement of working areas, a dismantling phase with decontamination, and a final contamination control phase. Detailed description of dismantling operations of the MA Pu finishing facility (La Hague) and of the RM2 radio metallurgical laboratory (CEA-Fontenay-aux-Roses) are given as examples. (J.S.). 3 tabs

  18. National, holistic, watershed-scale approach to understand the sources, transport, and fate of agricultural chemicals

    Science.gov (United States)

    Capel, P.D.; McCarthy, K.A.; Barbash, J.E.

    2008-01-01

    This paper is an introduction to the following series of papers that report on in-depth investigations that have been conducted at five agricultural study areas across the United States in order to gain insights into how environmental processes and agricultural practices interact to determine the transport and fate of agricultural chemicals in the environment. These are the first study areas in an ongoing national study. The study areas were selected, based on the combination of cropping patterns and hydrologic setting, as representative of nationally important agricultural settings to form a basis for extrapolation to unstudied areas. The holistic, watershed-scale study design that involves multiple environmental compartments and that employs both field observations and simulation modeling is presented. This paper introduces the overall study design and presents an overview of the hydrology of the five study areas. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.

  19. Modeling and Control of a Large Nuclear Reactor: A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady-state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear and of complex structure, not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller-order model in standard state-space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  20. Subjective evaluation with FAA criteria: A multidimensional scaling approach. [ground track control management

    Science.gov (United States)

    Kreifeldt, J. G.; Parkin, L.; Wempe, T. E.; Huff, E. F.

    1975-01-01

    Perceived orderliness in the ground tracks of five A/C during their simulated flights was studied. Dynamically developing ground tracks for five A/C from 21 separate runs were reproduced from computer storage and displayed on CRTs to professional pilots and controllers for their evaluations and preferences under several criteria. The ground tracks were developed in 20 seconds, as opposed to the 5 minutes of simulated flight, using speedup techniques for display. Metric and nonmetric multidimensional scaling techniques are being used to analyze the subjective responses in an effort to: (1) determine the meaningfulness of basing decisions on such complex subjective criteria; (2) compare pilot/controller perceptual spaces; (3) determine the dimensionality of the subjects' perceptual spaces; and thereby (4) determine objective measures suitable for comparing alternative traffic management simulations.

  1. AC losses in superconductors: a multi-scale approach for the design of high current cables

    International Nuclear Information System (INIS)

    Escamez, Guillaume

    2016-01-01

    The work reported in this PhD thesis deals with AC losses in superconducting materials for large-scale applications such as cables or magnets. Numerical models involving FEM or integral methods have been developed to solve the time-transient electromagnetic distributions of field and current density, with the peculiarity of the superconducting constitutive E-J relation. Two main conductors have been investigated. First, REBCO superconductors for applications operating at 77 K are studied, along with a new conductor architecture (round wires) for 3 kA cables. Secondly, for very high-current cables, 3-D simulations of MgB_2 wires are built and solved using FEM modeling. The following chapter introduces new developments used for the calculation of AC losses in DC cables with ripples. The thesis ends with the use of the developed numerical model on a practical example in the European BEST-PATHS project: a 10 kA MgB_2 demonstrator. (fr)

  2. Approaches to modeling landscape-scale drought-induced forest mortality

    Science.gov (United States)

    Gustafson, Eric J.; Shinneman, Douglas

    2015-01-01

    Drought stress is an important cause of tree mortality in forests, and drought-induced disturbance events are projected to become more common in the future due to climate change. Landscape Disturbance and Succession Models (LDSM) are becoming widely used to project climate change impacts on forests, including potential interactions with natural and anthropogenic disturbances, and to explore the efficacy of alternative management actions to mitigate negative consequences of global changes on forests and ecosystem services. Recent studies incorporating drought-mortality effects into LDSMs have projected significant potential changes in forest composition and carbon storage, largely due to differential impacts of drought on tree species and interactions with other disturbance agents. In this chapter, we review how drought affects forest ecosystems and the different ways drought effects have been modeled (both spatially and aspatially) in the past. Building on those efforts, we describe several approaches to modeling drought effects in LDSMs, discuss advantages and shortcomings of each, and include two case studies for illustration. The first approach features the use of empirically derived relationships between measures of drought and the loss of tree biomass to drought-induced mortality. The second uses deterministic rules of species mortality for given drought events to project changes in species composition and forest distribution. A third approach is more mechanistic, simulating growth reductions and death caused by water stress. Because modeling of drought effects in LDSMs is still in its infancy, and because drought is expected to play an increasingly important role in forest health, further development of modeling drought-forest dynamics is urgently needed.

  3. Sustainable Competitive Advantage (SCA) Analysis of Furniture Manufacturers in Malaysia: Normalized Scaled Critical Factor Index (NSCFI) Approach

    Directory of Open Access Journals (Sweden)

    Tasmin Rosmaini

    2016-06-01

    Full Text Available The purpose of this paper is to investigate the Malaysian furniture industry via a sustainable competitive advantage (SCA) approach. In this case study, the sense-and-respond method and the Normalized Scaled Critical Factor Index (NSCFI) are used to specify the distribution of companies' resources across different criteria and to detect the attributes that are critical based on the expectations and experience of companies' employees. Moreover, this study evaluates Malaysian furniture business strategy according to manufacturing strategy in terms of analyzer, prospector and defender. Finally, SCA risk levels are presented to show how much companies' resource allocations support their business strategy.

  4. Decoupling of parity- and SU(2)/sub R/-breaking scales: A new approach to left-right symmetric models

    International Nuclear Information System (INIS)

    Chang, D.; Mohapatra, R.N.; Parida, M.K.

    1984-01-01

    A new approach to left-right symmetric models is proposed, where the left-right discrete-symmetry- and SU(2)/sub R/-breaking scales are decoupled from each other. This changes the spectrum of physical Higgs bosons which leads to different patterns for gauge hierarchies in SU(2)/sub L/xSU(2)/sub R/xSU(4)/sub C/ and SO(10) models. Most interesting are two SO(10) symmetry-breaking chains with an intermediate U(1)/sub R/ symmetry. These are such as to provide new motivation to search for ΔB = 2 and right-handed current effects at low energies

  5. EMAPS: An Efficient Multiscale Approach to Plasma Systems with Non-MHD Scale Effects

    Energy Technology Data Exchange (ETDEWEB)

    Omelchenko, Yuri A. [Trinum Research, Inc., San Diego, CA (United States)

    2016-08-08

    Global interactions of energetic ions with magnetoplasmas and neutral gases lie at the core of many space and laboratory plasma phenomena ranging from solar wind entry into and transport within planetary magnetospheres and exospheres to fast-ion driven instabilities in fusion devices to astrophysics-in-lab experiments. The ability of computational models to properly account for physical effects that underlie such interactions, namely ion kinetic, ion cyclotron, Hall, collisional and ionization processes is important for the success and planning of experimental research in plasma physics. Understanding the physics of energetic ions, in particular their nonlinear resonance interactions with Alfvén waves, is central to improving the heating performance of magnetically confined plasmas for future energy generation. Fluid models are not adequate for high-beta plasmas as they cannot fully capture ion kinetic and cyclotron physics (e.g., ion behavior in the presence of magnetic nulls, shock structures, plasma interpenetration, etc.). Recent results from global reconnection simulations show that even in a MHD-like regime there may be significant differences between kinetic and MHD simulations. Therefore, kinetic modeling becomes essential for meeting modern day challenges in plasma physics. The hybrid approximation is an intermediate approximation between the fluid and fully kinetic approximations. It eliminates light waves, removes the electron inertial temporal and spatial scales from the problem and enables full-orbit ion kinetics. As a result, hybrid codes have become effective tools for exploring ion-scale driven phenomena associated with ion beams, shocks, reconnection and turbulence that control the large-scale behavior of laboratory and space magnetoplasmas. A number of numerical issues, however, make three-dimensional (3D) large-scale hybrid simulations of inhomogeneous magnetized plasmas prohibitively expensive or even impossible. To resolve these difficulties

  6. Validation of Sustainable Development Practices Scale Using the Bayesian Approach to Item Response Theory

    Directory of Open Access Journals (Sweden)

    Martin Hernani Merino

    2014-12-01

    Full Text Available There has been growing recognition of the importance of creating performance measurement tools for the economic, social and environmental management of micro and small enterprises (MSEs). In this context, this study aims to validate an instrument to assess perceptions of sustainable development practices of MSEs by means of a Graded Response Model (GRM) with a Bayesian approach to Item Response Theory (IRT). The results, based on a sample of 506 university students in Peru, suggest that a valid measurement instrument was achieved. At the end of the paper, methodological and managerial contributions are presented.

  7. Hybrid discrete PSO and OPF approach for optimization of biomass fueled micro-scale energy system

    International Nuclear Information System (INIS)

    Gómez-González, M.; López, A.; Jurado, F.

    2013-01-01

    Highlights: ► Method to determine the optimal location and size of biomass power plants. ► The proposed approach is a hybrid of a PSO algorithm and optimal power flow. ► Comparison between the proposed algorithm and other methods. ► Computational cost is much lower than that required for exhaustive search. - Abstract: This paper addresses generation of electricity in the specific aspect of finding the best location and sizing of biomass-fueled gas micro-turbine power plants, taking into account the variables involved in the problem, such as the local distribution of biomass resources, biomass transportation and extraction costs, operation and maintenance costs, power-loss costs, network operation costs, and technical constraints. In this paper a hybrid method is introduced employing discrete particle swarm optimization and optimal power flow. The approach can be applied to search for the best sites and capacities at which to connect biomass-fueled gas micro-turbine power systems in a distribution network, among a large number of potential combinations and considering the technical constraints of the network. A fair comparison between the proposed algorithm and other methods is performed.
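    As a rough illustration of the discrete-PSO half of such a hybrid, the sketch below runs a binary particle swarm over candidate siting decisions (bit = place a plant at that bus or not). The toy cost function, swarm parameters, and problem size are invented for illustration; in the method summarized above, each candidate layout would instead be scored by an optimal power flow.

```python
import math
import random

def toy_cost(layout):
    """Placeholder for the OPF-based evaluation (illustrative only):
    count mismatches against an arbitrary 'ideal' siting pattern."""
    target = [0, 0, 1, 0, 0, 1, 0, 0]
    return sum(a != b for a, b in zip(layout, target))

def binary_pso(cost, n_bits=8, n_particles=20, n_iters=60, seed=1):
    """Minimal binary PSO: velocities are real-valued, and a sigmoid maps
    each velocity to the probability that the corresponding bit is 1."""
    rng = random.Random(seed)
    x = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    v = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [list(p) for p in x]
    pbest_cost = [cost(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = list(pbest[g]), pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(n_bits):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                # sigmoid turns the velocity into a bit-sampling probability
                x[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-v[i][d])) else 0
            c = cost(x[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = list(x[i]), c
                if c < gbest_cost:
                    gbest, gbest_cost = list(x[i]), c
    return gbest, gbest_cost

best, best_cost = binary_pso(toy_cost)
print(best, best_cost)
```

    With the toy cost in place of an OPF, the swarm only needs a few hundred evaluations to locate a good layout, far fewer than the 256 exhaustive evaluations would scale to on realistic networks.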

  8. Multi-Contextual Segregation and Environmental Justice Research: Toward Fine-Scale Spatiotemporal Approaches

    Directory of Open Access Journals (Sweden)

    Yoo Min Park

    2017-10-01

    Full Text Available Many environmental justice studies have sought to examine the effect of residential segregation on unequal exposure to environmental factors among different social groups, but little is known about how segregation in non-residential contexts affects such disparity. Based on a review of the relevant literature, this paper discusses the limitations of traditional residence-based approaches in examining the association between socioeconomic or racial/ethnic segregation and unequal environmental exposure in environmental justice research. It emphasizes that future research needs to go beyond residential segregation by considering the full spectrum of segregation experienced by people in various geographic and temporal contexts of everyday life. Along with this comprehensive understanding of segregation, the paper also highlights the importance of assessing environmental exposure at a high spatiotemporal resolution in environmental justice research. The successful integration of a comprehensive concept of segregation, high-resolution data and fine-grained spatiotemporal approaches to assessing segregation and environmental exposure would provide more nuanced and robust findings on the associations between segregation and disparities in environmental exposure and their health impacts. Moreover, it would also contribute to significantly expanding the scope of environmental justice research.

  9. Large Scale Proteomic Data and Network-Based Systems Biology Approaches to Explore the Plant World.

    Science.gov (United States)

    Di Silvestre, Dario; Bergamaschi, Andrea; Bellini, Edoardo; Mauri, PierLuigi

    2018-06-03

    The investigation of plant organisms by means of data-derived systems biology approaches based on network modeling is mainly characterized by genomic data, while the potential of proteomics is largely unexplored. This delay is mainly caused by the paucity of plant genomic/proteomic sequences and annotations, which are fundamental to perform mass-spectrometry (MS) data interpretation. However, Next Generation Sequencing (NGS) techniques are contributing to filling this gap, and an increasing number of studies are focusing on plant proteome profiling and protein-protein interaction (PPI) identification. Interesting results were obtained by evaluating the topology of PPI networks in the context of organ-associated biological processes as well as plant-pathogen relationships. These examples foreshadow the benefits that these approaches may provide to plant research. Thus, in addition to providing an overview of the main omics technologies recently used on plant organisms, we will focus on studies that rely on the concepts of module, hub and shortest path, and how they can contribute to the plant discovery process. In this scenario, we will also consider gene co-expression networks, and some examples of integration with metabolomic data and genome-wide association studies (GWAS) to select candidate genes will be mentioned.

  10. Multi-Contextual Segregation and Environmental Justice Research: Toward Fine-Scale Spatiotemporal Approaches.

    Science.gov (United States)

    Park, Yoo Min; Kwan, Mei-Po

    2017-10-10

    Many environmental justice studies have sought to examine the effect of residential segregation on unequal exposure to environmental factors among different social groups, but little is known about how segregation in non-residential contexts affects such disparity. Based on a review of the relevant literature, this paper discusses the limitations of traditional residence-based approaches in examining the association between socioeconomic or racial/ethnic segregation and unequal environmental exposure in environmental justice research. It emphasizes that future research needs to go beyond residential segregation by considering the full spectrum of segregation experienced by people in various geographic and temporal contexts of everyday life. Along with this comprehensive understanding of segregation, the paper also highlights the importance of assessing environmental exposure at a high spatiotemporal resolution in environmental justice research. The successful integration of a comprehensive concept of segregation, high-resolution data and fine-grained spatiotemporal approaches to assessing segregation and environmental exposure would provide more nuanced and robust findings on the associations between segregation and disparities in environmental exposure and their health impacts. Moreover, it would also contribute to significantly expanding the scope of environmental justice research.

  11. Probabilistic Approach to Enable Extreme-Scale Simulations under Uncertainty and System Faults. Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Knio, Omar [Duke Univ., Durham, NC (United States). Dept. of Mechanical Engineering and Materials Science

    2017-05-05

    The current project develops a novel approach that uses a probabilistic description to capture the current state of knowledge about the computational solution. To effectively spread the computational effort over multiple nodes, the global computational domain is split into many subdomains. Computational uncertainty in the solution translates into uncertain boundary conditions for the equation system to be solved on those subdomains, and many independent, concurrent subdomain simulations are used to account for this boundary condition uncertainty. By relying on the fact that solutions on neighboring subdomains must agree with each other, a more accurate estimate for the global solution can be achieved. Statistical approaches in this update process make it possible to account for the effect of system faults in the probabilistic description of the computational solution, and the associated uncertainty is reduced through successive iterations. By combining all of these elements, the probabilistic reformulation allows splitting the computational work over very many independent tasks for good scalability, while being robust to system faults.
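    A stripped-down, deterministic illustration of the subdomain-iteration idea (neighboring subdomain solutions reconciled through their shared boundary values until they agree) is the classical alternating Schwarz method on a 1-D Laplace problem. The probabilistic description and fault-tolerance machinery of the report are omitted here, and the geometry and iteration count are invented for the example.

```python
# Two overlapping subdomains of [0, 1] solve u'' = 0 with guessed interface
# values, then exchange solutions at the interfaces; the iteration converges
# to the global solution u(x) = x for u(0) = 0, u(1) = 1.

def solve_laplace_1d(a, b, ua, ub):
    """Exact solution of u'' = 0 on [a, b]: the line through (a, ua), (b, ub).
    Returns a function evaluating that line."""
    slope = (ub - ua) / (b - a)
    return lambda x: ua + slope * (x - a)

def schwarz(n_iters=50):
    g1, g2 = 0.0, 1.0                               # guessed interface values
    for _ in range(n_iters):
        u1 = solve_laplace_1d(0.0, 0.6, 0.0, g1)    # left subdomain [0, 0.6]
        g2 = u1(0.4)                                # update right interface BC
        u2 = solve_laplace_1d(0.4, 1.0, g2, 1.0)    # right subdomain [0.4, 1]
        g1 = u2(0.6)                                # update left interface BC
    return g1, g2

g1, g2 = schwarz()
print(round(g1, 6), round(g2, 6))  # -> 0.6 0.4, i.e. the global line u(x) = x
```

    Each sweep contracts the interface error by a fixed factor set by the overlap, which is the same mechanism the report exploits, except that there the interface values carry a probability distribution rather than a single number.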

  12. On a Generalisation of Uniform Distribution and its Properties

    Directory of Open Access Journals (Sweden)

    K. Jayakumar

    2016-03-01

    Full Text Available Nadarajah et al. (2013) introduced a family of lifetime models using the truncated negative binomial distribution and derived some properties of the family of distributions. It is a generalization of the Marshall-Olkin family of distributions. In this paper, we introduce the Generalized Uniform Distribution (GUD) using the approach of Nadarajah et al. (2013). The shape properties of the density function and hazard function are discussed. Expressions for the moments, order statistics and entropies are obtained. The estimation procedure is also discussed. The GUD introduced here is a generalization of the Marshall-Olkin extended uniform distribution studied in Jose and Krishna (2011).

  13. Discovery of Uniformly Expanding Universe

    Directory of Open Access Journals (Sweden)

    Cahill R. T.

    2012-01-01

    Full Text Available Saul Perlmutter and the Brian Schmidt – Adam Riess teams reported that their Friedmann-model GR-based analysis of their supernovae magnitude-redshift data revealed a new phenomenon of “dark energy” which, it is claimed, forms 73% of the energy/matter density of the present-epoch universe, and which is linked to the further claim of an accelerating expansion of the universe. In 2011 Perlmutter, Schmidt and Riess received the Nobel Prize in Physics “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae”. Here it is shown that (i) a generic model-independent analysis of this data reveals a uniformly expanding universe, (ii) their analysis actually used Newtonian gravity, and finally (iii) the data, as well as the CMB fluctuation data, does not require “dark energy” nor “dark matter”, but instead reveals the phenomenon of a dynamical space, which is absent from the Friedmann model.

  14. Parallel processing and non-uniform grids in global air quality modeling

    NARCIS (Netherlands)

    Berkvens, P.J.F.; Bochev, Mikhail A.

    2002-01-01

    A large-scale global air quality model, running efficiently on a single vector processor, is enhanced to make more realistic and more long-term simulations feasible. Two strategies are combined: non-uniform grids and parallel processing. The communication through the hierarchy of non-uniform grids

  15. Understanding Greenland ice sheet hydrology using an integrated multi-scale approach

    International Nuclear Information System (INIS)

    Rennermalm, A K; Moustafa, S E; Mioduszewski, J; Robinson, D A; Chu, V W; Smith, L C; Forster, R R; Hagedorn, B; Harper, J T; Mote, T L; Shuman, C A; Tedesco, M

    2013-01-01

    Improved understanding of Greenland ice sheet hydrology is critically important for assessing its impact on current and future ice sheet dynamics and global sea level rise. This has motivated the collection and integration of in situ observations, model development, and remote sensing efforts to quantify meltwater production, as well as its phase changes, transport, and export. Particularly urgent is a better understanding of albedo feedbacks leading to enhanced surface melt, potential positive feedbacks between ice sheet hydrology and dynamics, and meltwater retention in firn. These processes are not isolated, but must be understood as part of a continuum of processes within an integrated system. This letter describes a systems approach to the study of Greenland ice sheet hydrology, emphasizing component interconnections and feedbacks, and highlighting research and observational needs. (letter)

  16. Multicontroller: an object programming approach to introduce advanced control algorithms for the GCS large scale project

    CERN Document Server

    Cabaret, S; Coppier, H; Rachid, A; Barillère, R; CERN. Geneva. IT Department

    2007-01-01

    The GCS (Gas Control System) project team at CERN uses a Model Driven Approach with a Framework - UNICOS (UNified Industrial COntrol System) - based on PLC (Programmable Logic Controller) and SCADA (Supervisory Control And Data Acquisition) technologies. The first UNICOS versions were able to provide a PID (Proportional Integral Derivative) controller, whereas the Gas Systems required more advanced control strategies. The MultiController is a new UNICOS object which provides the following advanced control algorithms: Smith Predictor, PFC (Predictive Function Control), RST* and GPC (Global Predictive Control). Its design is based on a monolithic entity with a global structure definition which is able to capture the desired set of parameters of any specific control algorithm supported by the object. The SCADA system - PVSS - supervises the MultiController operation. The PVSS interface provides users with a supervision faceplate; in particular it links any MultiController with recipes: the GCS experts are ab...

  17. Critical dynamics a field theory approach to equilibrium and non-equilibrium scaling behavior

    CERN Document Server

    Täuber, Uwe C

    2014-01-01

    Introducing a unified framework for describing and understanding complex interacting systems common in physics, chemistry, biology, ecology, and the social sciences, this comprehensive overview of dynamic critical phenomena covers the description of systems at thermal equilibrium, quantum systems, and non-equilibrium systems. Powerful mathematical techniques for dealing with complex dynamic systems are carefully introduced, including field-theoretic tools and the perturbative dynamical renormalization group approach, rapidly building up a mathematical toolbox of relevant skills. Heuristic and qualitative arguments outlining the essential theory behind each type of system are introduced at the start of each chapter, alongside real-world numerical and experimental data, firmly linking new mathematical techniques to their practical applications. Each chapter is supported by carefully tailored problems for solution, and comprehensive suggestions for further reading, making this an excellent introduction to critic...

  18. Load-based approaches for modelling visual clarity in streams at regional scale.

    Science.gov (United States)

    Elliott, A H; Davies-Colley, R J; Parshotam, A; Ballantine, D

    2013-01-01

    Reduction of visual clarity in streams by diffuse sources of fine sediment is a cause of water quality impairment in New Zealand and internationally. In this paper we introduce the concept of a load of optical cross section (LOCS), which can be used for load-based management of light-attenuating substances and for water quality models that are based on mass accounting. In this approach, the beam attenuation coefficient (units of m(-1)) is estimated from the inverse of the visual clarity (units of m) measured with a black disc. This beam attenuation coefficient can also be considered as an optical cross section (OCS) per volume of water, analogous to a concentration. The instantaneous 'flux' of cross section is obtained from the attenuation coefficient multiplied by the water discharge, and this can be accumulated over time to give an accumulated 'load' of cross section (LOCS). Moreover, OCS is a conservative quantity, in the sense that the OCS of two combined water volumes is the sum of the OCS of the individual water volumes (barring effects such as coagulation, settling, or sorption). The LOCS can be calculated for a water quality station using rating curve methods applied to measured time series of visual clarity and flow. This approach was applied to the sites in New Zealand's National Rivers Water Quality Network (NRWQN). Although the attenuation coefficient follows roughly a power relation with flow at some sites, more flexible loess rating curves are required at other sites. The hybrid mechanistic-statistical catchment model SPARROW (SPAtially Referenced Regressions On Watershed attributes), which is based on a mass balance for mean annual load, was then applied to the NRWQN dataset. Preliminary results from this model are presented, highlighting the importance of factors related to erosion, such as rainfall, slope, hardness of catchment rock types, and the influence of pastoral development on the load of optical cross section.
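    The accounting described above can be sketched in a few lines: the beam attenuation coefficient is taken as the inverse of black-disc visual clarity, multiplied by discharge to give an instantaneous flux of optical cross section, and integrated over time to give an accumulated LOCS. The function names and the trapezoidal accumulation are illustrative assumptions, not from the paper.

```python
def beam_attenuation(visual_clarity_m):
    """Beam attenuation coefficient (m^-1) estimated as the inverse of
    black-disc visual clarity (m), as described in the abstract."""
    return 1.0 / visual_clarity_m

def locs(times_s, clarity_m, discharge_m3s):
    """Accumulated load of optical cross section over a time series.

    Instantaneous flux = attenuation coefficient * discharge
    (m^-1 * m^3/s = m^2/s); integrating over time gives an accumulated
    optical cross section in m^2 (trapezoidal rule between samples)."""
    total = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        flux_prev = beam_attenuation(clarity_m[i - 1]) * discharge_m3s[i - 1]
        flux_curr = beam_attenuation(clarity_m[i]) * discharge_m3s[i]
        total += 0.5 * (flux_prev + flux_curr) * dt
    return total

# Example: constant 2 m clarity and 10 m^3/s discharge over one hour:
# 0.5 m^-1 * 10 m^3/s * 3600 s = 18000 m^2 of accumulated cross section.
print(locs([0, 3600], [2.0, 2.0], [10.0, 10.0]))  # -> 18000.0
```

    Because optical cross section is conservative in the sense the abstract describes, such per-station loads can be summed along a river network, which is what makes the quantity usable in a mass-balance model like SPARROW.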

  19. Influence of plant productivity over variability of soil respiration: a multi-scale approach

    Science.gov (United States)

    Curiel Yuste, J.

    2009-04-01

    To investigate the role of plant photosynthetic activity in the variations of soil respiration (SR), SR data obtained from manual sampling and automatic soil respiration chambers placed at eddy flux tower sites were used. Plant photosynthetic activity was represented as Gross Primary Production (GPP), calculated from the half-hourly continuous measurements of Net Ecosystem Exchange (NEE). The role of plant photosynthetic activity in the variation of SR was investigated at different time-scales: data averaged hourly, daily and weekly were used to study the photosynthetic effect on diel variations of SR (hourly data), 15-day variations (daily averages), monthly variations (daily and weekly averages) and seasonal variations (weekly data). Our results confirm the important role of plant photosynthetic activity in the variations of SR at each of the mentioned time-scales. The effect of photosynthetic activity on SR was strong at the hourly time-scale (diel variations of SR). At half of the studied ecosystems GPP was the best single predictor of diel variations of SR, although at most of the studied sites the combination of soil temperature and GPP was the best predictor of diel variations in SR. The effect of aboveground productivity on diel variations of SR lagged in the range of 5 to 15 hours, depending on the ecosystem. At daily to monthly time-scales, variations of SR were in general better explained by the combination of temperature and moisture variations. However, 'jumps' in average weekly SR during the growing season yielded anomalously high values of Q10, in some cases above 1000, which probably reflects synoptic changes in photosynthate translocation from plant activity. Finally, although seasonal changes of SR were in general very well explained by temperature and soil moisture, the seasonality of SR was better correlated with the seasonality of GPP than with the seasonality of soil temperature and/or soil moisture. Therefore the magnitude of the seasonal variation in SR was in
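    For reference, the Q10 temperature coefficient mentioned above can be computed in its standard two-point form (this is the textbook definition; the abstract does not specify the exact fitting procedure used to obtain its Q10 values).

```python
def q10(r1, t1, r2, t2):
    """Q10 temperature coefficient from two respiration rates r1, r2
    measured at temperatures t1, t2 (deg C):
        Q10 = (r2 / r1) ** (10 / (t2 - t1))
    A value of 2 means the rate doubles for every 10 deg C of warming."""
    return (r2 / r1) ** (10.0 / (t2 - t1))

# Respiration doubling from 2 to 4 units over a 10 deg C warming gives Q10 = 2.
print(q10(2.0, 10.0, 4.0, 20.0))  # -> 2.0
```

    The formula makes clear why photosynthate-driven 'jumps' inflate apparent Q10: a large rise in r2 that is unrelated to temperature is raised to the power 10/(t2 - t1), so a small temperature difference in the denominator can push Q10 into the hundreds or thousands.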

  20. Scaling-up permafrost thermal measurements in western Alaska using an ecotype approach

    Directory of Open Access Journals (Sweden)

    W. L. Cable

    2016-10-01

    Full Text Available Permafrost temperatures are increasing in Alaska due to climate change and in some cases permafrost is thawing and degrading. In areas where degradation has already occurred the effects can be dramatic, resulting in changing ecosystems, carbon release, and damage to infrastructure. However, in many areas we lack baseline data, such as subsurface temperatures, needed to assess future changes and potential risk areas. Besides climate, the physical properties of the vegetation cover and subsurface material have a major influence on the thermal state of permafrost. These properties are often directly related to the type of ecosystem overlaying permafrost. In this paper we demonstrate that classifying the landscape into general ecotypes is an effective way to scale up permafrost thermal data collected from field monitoring sites. Additionally, we find that within some ecotypes the absence of a moss layer is indicative of the absence of near-surface permafrost. As a proof of concept, we used the ground temperature data collected from the field sites to recode an ecotype land cover map into a map of mean annual ground temperature ranges at 1 m depth based on analysis and clustering of observed thermal regimes. The map should be useful for decision making with respect to land use and understanding how the landscape might change under future climate scenarios.
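    The map-recoding step described above amounts to a lookup from ecotype class to a ground-temperature range derived from the field sites. A minimal sketch, with ecotype names and MAGT ranges invented for illustration:

```python
# Ecotype -> (low, high) mean annual ground temperature at 1 m depth, deg C.
# The class names and ranges below are hypothetical placeholders, not the
# values reported for the western Alaska sites.
ECOTYPE_TO_MAGT = {
    "tussock_tundra": (-3.0, -1.0),
    "spruce_forest": (-1.0, 0.0),
    "wet_sedge_meadow": (0.0, 1.0),
}

def recode(ecotype_raster):
    """Replace each ecotype cell of a land-cover raster with its MAGT range
    (None where no field data exist for that class)."""
    return [[ECOTYPE_TO_MAGT.get(cell) for cell in row]
            for row in ecotype_raster]

raster = [["tussock_tundra", "spruce_forest"],
          ["wet_sedge_meadow", "tussock_tundra"]]
print(recode(raster))
```

    In practice the same lookup would be applied cell-by-cell to a classified raster with GIS tooling; the point is only that the thermal map inherits its spatial structure entirely from the ecotype classification.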

  1. Construct validity of the Beck Hopelessness Scale (BHS) among university students: A multitrait-multimethod approach.

    Science.gov (United States)

    Boduszek, Daniel; Dhingra, Katie

    2016-10-01

    There is considerable debate about the underlying factor structure of the Beck Hopelessness Scale (BHS) in the literature. An established view is that it reflects a unitary or bidimensional construct in nonclinical samples. There are, however, reasons to reconsider this conceptualization. Based on previous factor analytic findings from both clinical and nonclinical studies, the aim of the present study was to compare 16 competing models of the BHS in a large university student sample (N = 1,733). Sixteen distinct factor models were specified and tested using conventional confirmatory factor analytic techniques, along with confirmatory bifactor modeling. A 3-factor solution with 2 method effects (i.e., a multitrait-multimethod model) provided the best fit to the data. The reliability of this conceptualization was supported by McDonald's coefficient omega and the differential relationships exhibited between the 3 hopelessness factors ("feelings about the future," "loss of motivation," and "future expectations") and measures of goal disengagement, brooding rumination, suicide ideation, and suicide attempt history. The results provide statistical support for a 3-trait and 2-method factor model, and hence the 3 dimensions of hopelessness theorized by Beck. The theoretical and methodological implications of these findings are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. A structural approach in the study of bones: fossil and burnt bones at nanosize scale

    Science.gov (United States)

    Piga, Giampaolo; Baró, Maria Dolors; Escobal, Irati Golvano; Gonçalves, David; Makhoul, Calil; Amarante, Ana; Malgosa, Assumpció; Enzo, Stefano; Garroni, Sebastiano

    2016-12-01

    We review the different factors that significantly affect the mineral structure and composition of bones. In particular, it is shown that the micro-/nanostructural and chemical properties of skeletal bones change drastically during burning; the micro- and nanostructural changes attending those phases manifest themselves, amongst others, in observable alterations to the bones' colour, morphology, microstructure, mechanical strength and crystallinity. Intense changes involving the structure and chemical composition of bones also occur during the fossilization process. The bioapatite material is contaminated by a heavy fluorination process which, on a long time scale, appreciably reduces the volume of the original unit cell, mainly along the a-axis of the hexagonal P63/m space group. Moreover, the bioapatite suffers to varying degrees from phase contamination from the nearby environment, to the point that a fluorapatite single phase may rarely be found in the fossil bones examined here. TEM images supply precise and localized information on apatite crystal shape and dimension, and on the different processes that occur during thermal treatment or fossilization of ancient bone, complementary to that given by X-ray diffraction and Attenuated Total Reflection Infrared spectroscopy. We present a synthesis of XRD, ATR-IR and TEM results on the nanostructure of various modern, burnt and palaeontological bones.

  3. Catalonia's energy metabolism: Using the MuSIASEM approach at different scales

    International Nuclear Information System (INIS)

    Ramos-Martin, Jesus; Canellas-Bolta, Silvia; Giampietro, Mario; Gamboa, Gonzalo

    2009-01-01

    This paper applies the so-called Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM), based on Georgescu-Roegen's fund-flow model, to the Spanish region of Catalonia. It arrives at the conclusion that, within the context of the end of cheap oil, the current development model of the Catalan economy, based on the growth of low-productivity sectors such as services and construction, must be changed. The change is needed not only because of the increasing scarcity of affordable energy and the increasing environmental impact of present development, but also because of the aging population. Moreover, the situation experienced by Catalonia is similar to that of other European countries and many other developed countries. This implies that we can expect a wave of major structural changes in the economies of developed countries worldwide. To make things more challenging, according to current trends, the energy intensity and exosomatic energy metabolism of Catalonia will keep increasing in the near future. To avoid a reduction in the standard of living of Catalans due to a reduction in available energy, it is important that the Government of Catalonia implement major adjustments and conservation efforts in both the household and paid-work sectors.

  4. A Parametric Genetic Algorithm Approach to Assess Complementary Options of Large Scale Wind-solar Coupling

    Institute of Scientific and Technical Information of China (English)

    Mareda, Tim; Gaudard, Ludovic; Romerio, Franco

    2017-01-01

    The transitional path towards a highly renewable power system based on wind and solar energy sources is investigated considering their intermittent and spatially distributed characteristics. Using an extensive weather-driven simulation of hourly power mismatches between generation and load, we explore the interplay between geographical resource complementarity and energy storage strategies. Solar and wind resources are considered at variable spatial scales across Europe and related to the Swiss load curve, which serves as a typical demand-side reference. The optimal spatial distribution of renewable units is further assessed through a parameterized optimization method based on a genetic algorithm. It allows us to explore systematically the effective potential of combined integration strategies depending on the sizing of the system, with a focus on how overall performance is affected by the definition of network boundaries. Upper bounds on integration schemes are provided considering both renewable penetration and the required reserve power capacity. The quantitative trade-off between grid extension, storage and the optimal wind-solar mix is highlighted. This paper also provides insight into how the optimal geographical distribution of renewable units evolves as a function of renewable penetration and grid extent.

  5. Transport of particles in liquid foams: a multi-scale approach

    International Nuclear Information System (INIS)

    Louvet, N.

    2009-11-01

    Foam is used for the decontamination of radioactive tanks because it presents a large surface area with a small amount of liquid, and consequently requires less water to be decontaminated afterwards. We study experimentally different particle-transport configurations in the network of liquid micro-channels (Plateau borders) of aqueous foam. First, foam permeability is measured at the scale of a single channel and of the whole foam network for two soap solutions known for their significantly different interface mobility. The experimental data are well described by a model that takes into account the real geometry of the foam and assumes a constant value of the Boussinesq number for each soap solution. Second, the velocity of a single particle convected in a single foam channel is measured for different particle/channel aspect ratios. For small aspect ratios, a counterflow that takes place at the channel's corners slows the particle down. A recirculation model in the channel's foam films, introducing the Gibbs elasticity, is developed to describe this effect. The threshold between trapping and release of a particle in the liquid foam is then determined; it is deduced from the equilibrium of hydrodynamic and capillary forces. Finally, the case of a clogged foam node is addressed. (author)

  6. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been carried out. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers is presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values were generated by the Unified Danish Eulerian Model. The sensitivity study has been carried out for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more accurate apportionment of the inputs' influence and a more reliable interpretation of the model results.
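    The pick-and-freeze Monte Carlo estimation of first-order Sobol indices described above can be sketched as follows. This is a minimal illustration using plain pseudo-random sampling rather than Sobol sequences or Fibonacci lattice rules, and the toy model is an assumption, not the Unified Danish Eulerian Model:

```python
import numpy as np

def first_order_sobol(model, d, n=200_000, seed=1):
    """Pick-and-freeze Monte Carlo estimator (Saltelli scheme) of
    first-order Sobol indices for a model with d inputs uniform on [0, 1]."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))   # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # vary only input i
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy additive model: y = x0 + 2*x1, with x2 inert.
# Analytic first-order indices: S0 = 0.2, S1 = 0.8, S2 = 0.
toy = lambda X: X[:, 0] + 2.0 * X[:, 1]
S = first_order_sobol(toy, d=3)
```

    The same estimator applies unchanged if the A and B matrices are filled from a low-discrepancy Sobol sequence instead of a pseudo-random generator, which is where the variance-reduction benefits studied in the paper come from.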

  7. A multi-signature approach to low-scale sterile neutrino phenomenology

    CERN Document Server

    Ross-Lonergan, Mark

    2017-01-01

    Since the discovery of non-zero neutrino masses, through the observation of neutrino flavour oscillations, there has been a plethora of successful experiments making increasingly precise measurements of the mixing angles and mass differences that drive the phenomenon. In this thesis we highlight the fact that there is still significant room for new physics once one removes the assumption of unitarity of the 3x3 neutrino mixing matrix, an assumption inherent in the 3ν paradigm. We refit all global data to show just how much non-unitarity is currently allowed. The canonical way that such non-unitarity is introduced to the 3x3 neutrino mixing matrix is by the addition of extra neutral fermions, singlets under the Standard Model gauge group. These "Sterile Neutrinos" have a wide range of theoretical and phenomenological implications. Alongside the sensitivity that non-unitarity measurements have to sterile neutrinos, in this thesis we will study in detail two additional signatures of low-scale ...

  8. Scale-dependent mechanisms of habitat selection for a migratory passerine: an experimental approach

    Science.gov (United States)

    Donovan, Therese M.; Cornell, Kerri L.

    2010-01-01

    Habitat selection theory predicts that individuals choose breeding habitats that maximize fitness returns on the basis of indirect environmental cues at multiple spatial scales. We performed a 3-year field experiment to evaluate five alternative hypotheses regarding whether individuals choose breeding territories in heterogeneous landscapes on the basis of (1) shrub cover within a site, (2) forest land-cover pattern surrounding a site, (3) conspecific song cues during prebreeding settlement periods, (4) a combination of these factors, and (5) interactions among these factors. We tested hypotheses with playbacks of conspecific song across a gradient of landscape pattern and shrub density and evaluated changes in territory occupancy patterns in a forest-nesting passerine, the Black-throated Blue Warbler (Dendroica caerulescens). Our results support the hypothesis that vegetation structure plays a primary role during presettlement periods in determining occupancy patterns in this species. Further, both occupancy rates and territory turnover were affected by an interaction between local shrub density and amount of forest in the surrounding landscape, but not by interactions between habitat cues and social cues. Although previous studies of this species in unfragmented landscapes found that social postbreeding song cues played a key role in determining territory settlement, our prebreeding playbacks were not associated with territory occupancy or turnover. Our results suggest that in heterogeneous landscapes during spring settlement, vegetation structure may be a more reliable signal of reproductive performance than the physical location of other individuals.

  9. A scale space approach for unsupervised feature selection in mass spectra classification for ovarian cancer detection.

    Science.gov (United States)

    Ceccarelli, Michele; d'Acierno, Antonio; Facchiano, Angelo

    2009-10-15

    Mass spectrometry spectra, widely used in proteomics studies as a screening tool for protein profiling and for detecting discriminatory signals, are high-dimensional data. A large number of local maxima (a.k.a. peaks) have to be analyzed as part of computational pipelines aimed at the realization of efficient predictive and screening protocols. With data of this dimensionality and sample size, the risk of over-fitting and selection bias is pervasive. Therefore the development of bioinformatics methods based on unsupervised feature extraction can lead to general tools applicable to several fields of predictive proteomics. We propose a method for feature selection and extraction grounded in the theory of multi-scale spaces, for high-resolution spectra derived from the analysis of serum, and then use support vector machines for classification. In particular, we use a database containing 216 sample spectra, divided into 115 cancer and 91 control samples. The overall accuracy averaged over a large cross-validation study is 98.18%. The area under the ROC curve of the best selected model is 0.9962. We improve on previously known results on the same data, with the advantage that the proposed method has an unsupervised feature selection phase. All the developed code, as MATLAB scripts, can be downloaded from http://medeaserver.isa.cnr.it/dacierno/spectracode.htm.
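    As a rough illustration of the scale-space idea (not the authors' algorithm; the persistence rule and the amplitude gate below are simplifying assumptions), a peak in a spectrum can be kept only if a sufficiently strong local maximum survives near it at every Gaussian smoothing scale:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def persistent_peaks(spectrum, sigmas=(1, 2, 4, 8), rel_height=0.1):
    """Keep a local maximum only if a sufficiently strong coarse-scale
    maximum survives near it at every Gaussian smoothing scale."""
    x = np.asarray(spectrum, dtype=float)

    def local_max(v):
        return np.where((v[1:-1] > v[:-2]) & (v[1:-1] > v[2:]))[0] + 1

    peaks = set(local_max(x))
    for s in sigmas:
        sm = gaussian_filter1d(x, s)
        coarse = local_max(sm)
        coarse = coarse[sm[coarse] >= rel_height * sm.max()]  # amplitude gate
        tol = max(1, int(2 * s))   # allow peak drift as smoothing increases
        peaks = {p for p in peaks
                 if coarse.size and np.any(np.abs(coarse - p) <= tol)}
    return sorted(peaks)
```

    On a synthetic spectrum containing a broad bump and a one-sample spike, the bump survives all scales while the spike is filtered out once smoothing pushes it below the amplitude gate.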

  10. A hybrid classical-quantum approach for ultra-scaled confined nanostructures: modeling and simulation

    Directory of Open Access Journals (Sweden)

    Pietra Paola

    2012-04-01

    Full Text Available We propose a hybrid classical-quantum model to study the motion of electrons in ultra-scaled confined nanostructures. The transport of charged particles, considered as one-dimensional, is described by a quantum effective mass model in the active zone, coupled directly to a drift-diffusion problem in the rest of the device. We explain how this hybrid model takes into account the peculiarities due to the strong confinement, and we present numerical simulations for a simplified carbon nanotube.

  11. Uniform competency-based local feature extraction for remote sensing images

    Science.gov (United States)

    Sedaghat, Amin; Mohammadi, Nazila

    2018-01-01

    Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and Hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate its capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available from https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.
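    The core idea of a gridded, score-ranked selection can be sketched as follows. This is a simplified single-layer illustration: the real UC method combines robustness, saliency and scale measures into its competency ranking, which is collapsed here into one precomputed score per keypoint (an assumption for brevity):

```python
import numpy as np

def uniform_select(keypoints, scores, image_shape, grid=(8, 8), per_cell=5):
    """Rank candidate keypoints by a precomputed quality score and keep
    the best `per_cell` in each grid cell, so that the surviving
    features are spread uniformly over the image.
    keypoints: (N, 2) array of (row, col); scores: (N,) competency values."""
    rows = np.minimum((keypoints[:, 0] * grid[0] / image_shape[0]).astype(int),
                      grid[0] - 1)
    cols = np.minimum((keypoints[:, 1] * grid[1] / image_shape[1]).astype(int),
                      grid[1] - 1)
    cell = rows * grid[1] + cols                 # flat grid-cell index
    keep = []
    for c in np.unique(cell):
        idx = np.where(cell == c)[0]
        best = idx[np.argsort(scores[idx])[::-1][:per_cell]]
        keep.extend(best.tolist())
    return np.array(sorted(keep))
```

    The per-cell cap is what enforces the spatial uniformity; any detector's keypoints and scores can be fed in unchanged.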

  12. A hybrid approach to estimating national scale spatiotemporal variability of PM2.5 in the contiguous United States.

    Science.gov (United States)

    Beckerman, Bernardo S; Jerrett, Michael; Serre, Marc; Martin, Randall V; Lee, Seung-Jae; van Donkelaar, Aaron; Ross, Zev; Su, Jason; Burnett, Richard T

    2013-07-02

    Airborne fine particulate matter exhibits spatiotemporal variability at multiple scales, which presents challenges to estimating exposures for health effects assessment. Here we created a model to predict ambient particulate matter less than 2.5 μm in aerodynamic diameter (PM2.5) across the contiguous United States to be applied to health effects modeling. We developed a hybrid approach combining a land use regression model (LUR) selected with a machine learning method, and Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals. The PM2.5 data set included 104,172 monthly observations at 1464 monitoring locations, with approximately 10% of locations reserved for cross-validation. LUR models were based on remote sensing estimates of PM2.5, land use and traffic indicators. Normalized cross-validated R² values for LUR were 0.63 and 0.11 with and without remote sensing, respectively, suggesting remote sensing is a strong predictor of ground-level concentrations. In the models including the BME interpolation of the residuals, the cross-validated R² was 0.79 for both configurations; the model without remotely sensed data described more fine-scale variation than the model including remote sensing. Our results suggest that our modeling framework can predict ground-level concentrations of PM2.5 at multiple scales over the contiguous U.S.
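    The two-stage logic of such hybrid models, regression on covariates followed by spatial interpolation of the residuals added back at prediction sites, can be sketched as below. Inverse-distance weighting is used purely as a stand-in for the paper's Bayesian Maximum Entropy step, and all variable names are illustrative:

```python
import numpy as np

def hybrid_predict(X_train, coords_train, y_train, X_new, coords_new, power=2.0):
    """Stage 1: ordinary-least-squares 'LUR' fit on covariates.
    Stage 2: inverse-distance interpolation of the stage-1 residuals,
    added back at the prediction sites (IDW stands in for BME here)."""
    A = np.column_stack([np.ones(len(X_train)), X_train])
    beta, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    resid = y_train - A @ beta
    # pairwise distances between prediction and training sites
    d = np.linalg.norm(coords_new[:, None, :] - coords_train[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    resid_new = (w * resid).sum(axis=1) / w.sum(axis=1)
    A_new = np.column_stack([np.ones(len(X_new)), X_new])
    return A_new @ beta + resid_new
```

    When the covariates explain the signal perfectly, the residual stage contributes nothing; when they do not, the interpolated residual field recovers the spatially structured part of the error, which is the rationale for the hybrid design.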

  13. A multi-scale tensor voting approach for small retinal vessel segmentation in high resolution fundus images.

    Science.gov (United States)

    Christodoulidis, Argyrios; Hurtut, Thomas; Tahar, Houssem Ben; Cheriet, Farida

    2016-09-01

    Segmenting the retinal vessels from fundus images is a prerequisite for many CAD systems for the automatic detection of diabetic retinopathy lesions. So far, research efforts have concentrated mainly on the accurate localization of the large to medium diameter vessels. However, failure to detect the smallest vessels at the segmentation step can lead to false positive lesion detection counts in a subsequent lesion analysis stage. In this study, a new hybrid method for the segmentation of the smallest vessels is proposed. Line detection and perceptual organization techniques are combined in a multi-scale scheme. Small vessels are reconstructed from the perceptual-based approach via tracking and pixel painting. The segmentation was validated in a high resolution fundus image database including healthy and diabetic subjects using pixel-based as well as perceptual-based measures. The proposed method achieves 85.06% sensitivity rate, while the original multi-scale line detection method achieves 81.06% sensitivity rate for the corresponding images (p<0.05). The improvement in the sensitivity rate for the database is 6.47% when only the smallest vessels are considered (p<0.05). For the perceptual-based measure, the proposed method improves the detection of the vasculature by 7.8% against the original multi-scale line detection method (p<0.05). Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. A data-model integration approach toward improved understanding on wetland functions and hydrological benefits at the catchment scale

    Science.gov (United States)

    Yeo, I. Y.; Lang, M.; Lee, S.; Huang, C.; Jin, H.; McCarty, G.; Sadeghi, A.

    2017-12-01

    The wetland ecosystem plays a crucial role in improving hydrological function and ecological integrity for downstream waters and the surrounding landscape. However, the changing behaviour and functioning of wetland ecosystems are poorly understood and extremely difficult to characterize. Improved understanding of the hydrological behaviour of wetlands, considering their interaction with surrounding landscapes and their impacts on downstream waters, is an essential first step toward closing the knowledge gap. We present an integrated wetland-catchment modelling study that capitalizes on recently developed inundation maps and other geospatial data. The aim of the data-model integration is to improve spatial prediction of wetland inundation and to evaluate cumulative hydrological benefits at the catchment scale. In this paper, we highlight problems arising from data preparation, parameterization, and process representation in simulating wetlands within a distributed catchment model, and report recent progress on mapping wetland dynamics (i.e., inundation) using multiple remotely sensed data sets. We demonstrate the value of spatially explicit inundation information for developing site-specific wetland parameters and for evaluating model predictions at multiple spatial and temporal scales. This spatially integrated data-model framework is tested using the Soil and Water Assessment Tool (SWAT) with an improved wetland extension, and applied to an agricultural watershed in the Mid-Atlantic Coastal Plain, USA. This study illustrates the necessity of spatially distributed information and a data-integrated modelling approach for predicting wetland inundation and hydrologic function at the local landscape scale, where monitoring and conservation decision making take place.

  15. Estimating heterotrophic respiration at large scales: Challenges, approaches, and next steps

    Science.gov (United States)

    Bond-Lamberty, Ben; Epron, Daniel; Harden, Jennifer W.; Harmon, Mark E.; Hoffman, Forrest; Kumar, Jitendra; McGuire, Anthony David; Vargas, Rodrigo

    2016-01-01

    Heterotrophic respiration (HR), the aerobic and anaerobic processes mineralizing organic matter, is a key carbon flux but one impossible to measure at scales significantly larger than small experimental plots. This impedes our ability to understand carbon and nutrient cycles, benchmark models, or reliably upscale point measurements. Given that a new generation of highly mechanistic, genomic-specific global models is not imminent, we suggest that a useful step to improve this situation would be the development of “Decomposition Functional Types” (DFTs). Analogous to plant functional types (PFTs), DFTs would abstract and capture important differences in HR metabolism and flux dynamics, allowing modelers and experimentalists to efficiently group and vary these characteristics across space and time. We argue that DFTs should be initially informed by top-down expert opinion, but ultimately developed using bottom-up, data-driven analyses, and provide specific examples of potential dependent and independent variables that could be used. We present an example clustering analysis to show how annual HR can be broken into distinct groups associated with global variability in biotic and abiotic factors, and demonstrate that these groups are distinct from (but complementary to) already-existing PFTs. A similar analysis incorporating observational data could form the basis for future DFTs. Finally, we suggest next steps and critical priorities: collection and synthesis of existing data; more in-depth analyses combining open data with rigorous testing of analytical results; using point measurements and realistic forcing variables to constrain process-based models; and planning by the global modeling community for decoupling decomposition from fixed site data. These are all critical steps to build a foundation for DFTs in global models, thus providing the ecological and climate change communities with robust, scalable estimates of HR.

  16. Consumer preference of fertilizer in West Java using multi-dimensional scaling approach

    Science.gov (United States)

    Utami, Hesty Nurul; Sadeli, Agriani Hermita; Perdana, Tomy; Renaldy, Eddy; Mahra Arari, H.; Ajeng Sesy N., P.; Fernianda Rahayu, H.; Ginanjar, Tetep; Sanjaya, Sonny

    2018-02-01

    There are various fertilizer products on the market for farmers to use in their farming activities. Fertilizers supplement soil nutrients and build up soil fertility in order to support plant nutrition and increase plant productivity. Fertilizers supply nitrogen, phosphorus, potassium, micronutrients and other complex nutrients commonly used in agricultural activities to improve the quantity and quality of the harvest. Recently, market demand for fertilizer has increased dramatically; consequently, fertilizer companies need to develop strategies informed by consumer preferences. Consumer preference reflects the choices an individual makes, measured by the utility derived from the alternatives the market offers, and drives the final purchase decision. West Java is one of the main agricultural producing provinces and hence one of the potential consumers of fertilizer for farming activities. This research is a case study of nine districts in West Java province, i.e., Bandung, West Bandung, Bogor, Depok, Garut, Indramayu, Majalengka, Cirebon and Cianjur. The purpose of this research is to describe the attributes underlying consumer preference for fertilizers. Multi-dimensional scaling is used as a quantitative method to visualize the level of similarity of individual cases in a dataset and to map the resulting information. The attributes in this research are availability, nutrient content, price, form of fertilizer, decomposition speed, ease of use, label, packaging type, colour, design and size of packaging, hardening, and promotion. Two fertilizer brands tend to show similarity in availability, price, decomposition speed and hardening.
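    The kind of low-dimensional perceptual map used in such studies can be produced with classical (Torgerson) MDS from a dissimilarity matrix. A minimal numpy sketch, where the input data and dimensionality are illustrative rather than the survey's:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions from an
    (n, n) symmetric dissimilarity matrix D, so that inter-point
    distances approximate the given dissimilarities."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]       # keep the k largest eigenvalues
    L = np.sqrt(np.maximum(vals[order], 0.0))
    return vecs[:, order] * L
```

    For exactly Euclidean dissimilarities the embedding reproduces the distances; for rated (non-metric) dissimilarities, as in a preference survey, the map is an approximation whose quality is judged by stress.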

  17. EMAPS: An Efficient Multiscale Approach to Plasma Systems with Non-MHD Scale Effects

    Energy Technology Data Exchange (ETDEWEB)

    Omelchenko, Yuri A. [SciberQuest, Inc., Del Mar, CA (United States); Karimabadi, Homa [SciberQuest, Inc., Del Mar, CA (United States)

    2014-10-14

    Using Discrete-Event Simulation (DES) as a novel paradigm for time integration of large-scale physics-driven systems, we have achieved significant breakthroughs in simulations of multi-dimensional magnetized plasmas where ion kinetic, finite Larmor radius (FLR) and Hall effects play a crucial role. For these purposes we apply a unique asynchronous simulation tool: a parallel, electromagnetic Particle-in-Cell (PIC) code, HYPERS (Hybrid Particle Event-Resolved Simulator), which treats plasma electrons as a charge-neutralizing fluid and solves a self-consistent set of non-radiative Maxwell, electron-fluid and ion-particle equations on a structured computational grid. HYPERS enables adaptive local time steps for particles, fluid elements and electromagnetic fields. This ensures robustness (stability) and efficiency (speed) of highly dynamic and nonlinear simulations of compact plasma systems such as spheromaks, FRCs, ion beams and edge plasmas. HYPERS is a unique asynchronous code designed to serve as a test bed for developing multi-physics applications not only for laboratory plasma devices but across a number of plasma physics fields, including astrophysics, space physics and electronic devices. We have made significant improvements to the HYPERS core: (1) implemented a new asynchronous magnetic-field integration scheme that preserves local div B = 0 to within round-off errors; (2) improved staggered-grid discretizations of electric and magnetic fields. These modifications have significantly enhanced the accuracy and robustness of 3D simulations. We have conducted first-ever end-to-end 3D simulations of merging spheromak plasmas. The preliminary results show: (1) tilt-driven relaxation of a freely expanding spheromak to an m=1 Taylor helix configuration and (2) the possibility of forming a tilt-stable field-reversed configuration via merging and magnetic reconnection of two double-sided spheromaks with opposite helicities.

  18. Automated nodule location and size estimation using a multi-scale Laplacian of Gaussian filtering approach.

    Science.gov (United States)

    Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I

    2009-01-01

    Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods respectively.
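    The location/size estimation step can be illustrated in a few lines using scale-normalized LoG filtering. This is a simplified 2-D sketch of the general approach, not the authors' exact candidate-pruning rules; the candidate scales and search radius are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_blob_near_seed(image, seed, sigmas=(2, 3, 4, 6, 8), radius=10):
    """Filter at several scales and pick the (position, scale) with the
    strongest scale-normalized LoG response near the seed point.
    Bright blobs give a negative LoG response, hence the sign flip;
    the estimated blob radius is sqrt(2) * best sigma."""
    r0, c0 = seed
    best = None
    for s in sigmas:
        resp = -(s ** 2) * gaussian_laplace(image.astype(float), s)
        r_lo, c_lo = max(r0 - radius, 0), max(c0 - radius, 0)
        win = resp[r_lo:r0 + radius + 1, c_lo:c0 + radius + 1]
        i, j = np.unravel_index(np.argmax(win), win.shape)
        cand = (win[i, j], (r_lo + i, c_lo + j), s * np.sqrt(2.0))
        if best is None or cand[0] > best[0]:
            best = cand
    _, center, est_radius = best
    return center, est_radius
```

    Because the search is over a window around the seed rather than at the seed itself, the returned center is insensitive to moderate seed displacement, which is the stability property the evaluation above measures.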

  19. Comprehending Adverbs Of Doubt And Certainty In Health Communication: A Multidimensional Scaling Approach

    Directory of Open Access Journals (Sweden)

    Norman S. Segalowitz

    2016-05-01

    Full Text Available This research explored the feasibility of using multidimensional scaling (MDS) analysis, in novel combination with other techniques, to study comprehension of epistemic adverbs expressing doubt and certainty (e.g., evidently, obviously, probably) as they relate to health communication in clinical settings. In Study 1, Australian English speakers performed a dissimilarity-rating task with sentence pairs containing the target stimuli, presented as doctors' opinions. Ratings were analyzed using a combination of cultural consensus analysis (factor analysis across participants), weighted-data classical MDS, and cluster analysis. The analyses revealed strong within-community consistency for a 3-dimensional semantic space solution that took individual differences into account, strong statistical acceptability of the MDS results in terms of stress and explained variance, and semantic configurations that were interpretable in terms of linguistic analyses of the target adverbs. The results confirmed the feasibility of using MDS in this context. Study 2 replicated the results with Canadian English speakers on the same task. Semantic analyses and stress decomposition analysis were performed on the Australian and Canadian data sets, revealing similarities and differences between the two groups. Overall, the results support using MDS to study comprehension of words critical for health communication, including, in future studies, second-language-speaking patients and/or practitioners. More broadly, the results indicate that the techniques described should be promising for comprehension studies in many communicative domains, in clinical settings and beyond, including those targeting other aspects of language and focusing on comparisons across different speech communities.

  20. The Importance of Being Hybrid for Spatial Epidemic Models:A Multi-Scale Approach

    Directory of Open Access Journals (Sweden)

    Arnaud Banos

    2015-11-01

    Full Text Available This work addresses the spread of a disease within an urban system, defined as a network of interconnected cities. The first step consists of comparing two different approaches: a macroscopic one, based on a system of coupled Ordinary Differential Equation (ODE) Susceptible-Infected-Recovered (SIR) systems exploiting populations on nodes and flows on edges (a so-called metapopulation model), and a hybrid one, coupling ODE SIR systems on nodes with agents traveling on edges. Under homogeneous conditions (mean-field approximation), this comparison leads to similar results for the outputs on which we focus (the maximum intensity of the epidemic, its duration and the time of the epidemic peak). However, when it comes to setting up epidemic control strategies, results rapidly diverge between the two approaches, and it appears that the fully macroscopic model is not completely suited to these questions. In this paper, we focus on some control strategies, namely quarantine, avoidance and risk culture, to explore the differences, advantages and disadvantages of the two models, and we discuss the importance of being hybrid when modeling and simulating epidemic spread at the level of a whole urban system.
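    The macroscopic (metapopulation) alternative can be sketched as coupled SIR ODE systems integrated with a simple explicit Euler scheme. The coupling term and all parameter values below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def metapop_sir(adjacency, N, I0, beta=0.3, gamma=0.1,
                coupling=0.01, days=300, dt=0.1):
    """One SIR ODE system per city, coupled through infection pressure
    imported along network edges; explicit Euler time stepping."""
    S = N - I0
    I = I0.astype(float)
    R = np.zeros(len(N))
    for _ in range(int(days / dt)):
        # local force of infection plus pressure imported from neighbours
        force = beta * I / N + coupling * adjacency @ (I / N)
        new_inf = force * S
        rec = gamma * I
        S = S - dt * new_inf
        I = I + dt * (new_inf - rec)
        R = R + dt * rec
    return S, I, R
```

    With two coupled cities and an outbreak seeded in only one of them, the epidemic propagates along the edge and both populations are eventually affected, while S + I + R stays conserved in each city. The hybrid approach discussed above replaces the edge flows with discrete traveling agents, which is what makes individual-level control strategies expressible.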

  1. Plastic deformation in nano-scale multilayer materials — A biomimetic approach based on nacre

    Energy Technology Data Exchange (ETDEWEB)

    Lackner, Juergen M., E-mail: juergen.lackner@joanneum.at [JOANNEUM RESEARCH Forschungsges.m.b.H., Institute for Surface Technologies and Photonics, Functional Surfaces, Leobner Strasse 94, A-8712 Niklasdorf (Austria); Waldhauser, Wolfgang [JOANNEUM RESEARCH Forschungsges.m.b.H., Institute for Surface Technologies and Photonics, Functional Surfaces, Leobner Strasse 94, A-8712 Niklasdorf (Austria); Major, Boguslaw; Major, Lukasz [Polish Academy of Sciences, Institute of Metallurgy and Materials Sciences, IMIM-PAN, ul. Reymonta 25, PL-30059 Krakow (Poland); Kot, Marcin [University of Science and Technology, AGH, Aleja Adama Mickiewicza 30, 30-059 Krakow (Poland)

    2013-05-01

    The paper reports on a biomimetically motivated comparison between deformation in magnetron-sputtered multilayer coatings based on titanium (Ti), titanium nitride (TiN) and diamond-like carbon (DLC) layers and the deformation mechanisms in the nacre of mollusc shells. Nacre, a highly mineralized tissue, combines high stiffness and hardness with high toughness, enabling resistance to fracture and crack propagation during tensile loading. This behaviour is based on a combination of load transmission by tensile-stressed aragonite tablets and shearing in the layers between the tablets. Shearing in these polysaccharide and protein interlayers requires hydrated conditions; otherwise, nacre is as brittle as aragonite. To prevent shear failure, shear hardening occurs through progressive tablet locking due to the wavy, dovetail-like surface geometry of the tablets. Similar shearing and strain-hardening mechanisms were found for Ti interlayers between TiN and DLC layers in high-resolution transmission electron microscopy studies performed in deformed zones beneath spherical indentations. Ti films as thin as 7 nm are sufficient to strongly toughen the whole multilayered coating structure, providing a barrier against the propagation of cracks starting from the tensile-stressed, hard, brittle TiN or DLC layers. - Highlights: • Biomimetic approach to TiN-diamond-like carbon (DLC) multilayers by sputtering • Investigation of deformation in/around hardness indents by HR-TEM • Plastic deformation with shearing in 7-nm thick Ti interlayers in TiN–DLC multilayers • Biomimetically comparable to nacre deformation.

  2. Modelling the impact of increasing soil sealing on runoff coefficients at regional scale: a hydropedological approach

    Directory of Open Access Journals (Sweden)

    Ungaro Fabrizio

    2014-03-01

    Full Text Available Soil sealing is the permanent covering of the land surface by buildings, infrastructure or any impermeable artificial material. Besides the loss of fertile soils, with a direct impact on food security, soil sealing modifies the hydrological cycle. This can increase flooding risk, due to urban development in potential risk areas and to increased runoff volumes. This work estimates the increase in runoff due to sealing following urbanization and land take in the plain of Emilia-Romagna (Italy), using the Green-Ampt infiltration model for two rainfall return periods (20 and 200 years) in two different years, 1976 and 2008. To this end, a hydropedological approach was adopted to characterize soil hydraulic properties via locally calibrated pedotransfer functions (PTFs). PTF inputs were estimated via sequential Gaussian simulations coupled with simple kriging with varying local means, taking into account soil type and dominant land use. Results show that in the study area an average increment of 8.4% in sealed areas due to urbanization and sprawl induces an average increment in surface runoff of 3.5% and 2.7% for the 20- and 200-year return periods respectively, with a maximum > 20% for highly sealed coastal areas.
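
    The Green-Ampt model referred to above gives the infiltration capacity as f = K(1 + ψΔθ/F), with infiltration-excess runoff generated whenever rainfall intensity exceeds f. A minimal sketch with hypothetical soil parameters (not the study's calibrated PTF outputs):

```python
def green_ampt_rate(F, K, psi, dtheta):
    """Green-Ampt infiltration capacity f [mm/h] at cumulative depth F [mm]."""
    return K * (1.0 + psi * dtheta / F) if F > 0 else float("inf")

def runoff_depth(rain_rate, duration_h, K, psi, dtheta, dt=0.01):
    """Accumulate infiltration and Hortonian runoff under constant rainfall.
    Explicit time stepping; all parameter values here are illustrative only."""
    F, runoff = 1e-6, 0.0
    t = 0.0
    while t < duration_h:
        f = green_ampt_rate(F, K, psi, dtheta)
        if rain_rate <= f:          # all rain infiltrates
            F += rain_rate * dt
        else:                       # infiltration-excess runoff
            F += f * dt
            runoff += (rain_rate - f) * dt
        t += dt
    return F, runoff

# hypothetical loamy soil: K = 10 mm/h, psi = 110 mm, dtheta = 0.3
F, q = runoff_depth(rain_rate=40.0, duration_h=2.0, K=10.0, psi=110.0, dtheta=0.3)
```

    Sealing enters such a calculation by setting K near zero on the sealed fraction, so nearly all rainfall there becomes runoff.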

  3. School uniforms: tradition, benefit or predicament?

    OpenAIRE

    Van Aardt, Annette Marie; Wilken, Ilani

    2012-01-01

    This article focuses on the controversies surrounding school uniforms. Role players in this debate in South Africa are parents, learners and educators, and arguments centre on aspects such as identity, economy and the equalising effect of school uniforms, which are considered in the literature to be benefits. Opposing viewpoints highlight the fact that compulsory uniforms infringe on learners’ constitutional right to self-expression. The aim of this research was to determine the perspectives ...

  4. Phase diagram and tricritical behavior of a metamagnet in uniform and random fields

    International Nuclear Information System (INIS)

    Liang Yaqiu; Wei Guozhu; Xu Xiaojuan; Song Guoli

    2010-01-01

    A two-sublattice Ising metamagnet in both uniform and random fields is studied within the mean-field approach based on Bogoliubov's inequality for the Gibbs free energy. We show that the qualitative features of the phase diagrams depend on the model parameters and the uniform field values. A tricritical point and a reentrant phenomenon can be observed in the phase diagram. The reentrance is due to the competition between uniform and random interactions.
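
    The mean-field scheme mentioned above rests on Bogoliubov's variational inequality. A schematic sketch in standard textbook notation (the trial Hamiltonian and field symbols below are generic, not the paper's exact formulation):

```latex
G \le \Phi = G_0 + \langle H - H_0 \rangle_0 ,
\qquad
H_0 = -\,\eta_A \sum_{i \in A} \sigma_i \;-\; \eta_B \sum_{j \in B} \sigma_j .
```

    Minimizing \(\Phi\) with respect to the variational fields \(\eta_A, \eta_B\) yields coupled self-consistency equations for the sublattice magnetizations, schematically

```latex
m_A = \big\langle \tanh\!\big[\beta\,(zJ'm_A - zJm_B + H + h)\big]\big\rangle_h ,
```

    with the equation for \(m_B\) obtained by interchanging A and B; \(J'\) is the intra-sublattice and \(J\) the inter-sublattice coupling, \(H\) the uniform field, and \(\langle\cdot\rangle_h\) the average over the random-field distribution, whose competition with \(H\) drives the reentrance.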

  5. Effects of Energy Development on Hydrologic Response: a Multi-Scale Modeling Approach

    Science.gov (United States)

    Vithanage, J.; Miller, S. N.; Berendsen, M.; Caffrey, P. A.; Bellis, J.; Schuler, R.

    2013-12-01

    Potential impacts of energy development on surface hydrology in western Wyoming were assessed using spatially explicit hydrological models. Currently there are proposals to develop over 800 new oil and gas wells in the 218,000-acre LaBarge development area, which abuts the Wyoming Range and contributes runoff to the Upper Green River (approximately 1 well per 2 square miles). The intensity of development raises questions about impacts on the hydrological cycle, water quality, erosion and sedimentation. We developed landscape management scenarios reflecting current disturbance and the actions proposed by the energy operators to provide inputs to spatially explicit hydrologic models. Differences between the scenarios were derived to quantify the changes and analyse the impacts on the project area. To perform this research, the Automated Geospatial Watershed Assessment tool (AGWA) was enhanced by adding management practices suitable for the region, including the reclamation of disturbed lands over time. The AGWA interface was used to parameterize and execute two hydrologic models: the Soil and Water Assessment Tool (SWAT) and the KINEmatic Runoff and EROSion model (KINEROS2). We used freely available data, including SSURGO soils, Multi-Resolution Land Characteristics Consortium (MRLC) land cover, and 10 m resolution terrain data, to derive suitable initial parameters for the models. The SWAT model was calibrated manually at the monthly level using an innovative method; observed daily rainfall and temperature inputs were adjusted as a function of elevation to account for local climate effects. Calibration at higher temporal resolution was not possible due to a lack of adequate climate and runoff data. The Nash-Sutcliffe efficiencies of the two calibrated watersheds at the monthly scale exceeded 0.95. Results of the AGWA/SWAT simulations indicate a range of sensitivity to disturbance due to heterogeneous soil and terrain characteristics over a simulated period of 10 years.
The KINEROS
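
    The Nash-Sutcliffe efficiency used above to judge the monthly calibration compares the model error against the variance of the observations. A minimal sketch with synthetic monthly runoff values (not the study's data):

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; values <= 0 mean the model predicts no
    better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# synthetic monthly runoff depths (mm), for illustration only
obs = [12.0, 30.0, 55.0, 80.0, 60.0, 25.0]
sim = [14.0, 28.0, 52.0, 84.0, 58.0, 27.0]
nse = nash_sutcliffe(obs, sim)  # close to 1 for a good fit
```

    An NSE above 0.95, as reported for the two calibrated watersheds, indicates that almost all of the observed monthly variance is reproduced by the model.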

  6. Fast and Accurate Approaches for Large-Scale, Automated Mapping of Food Diaries on Food Composition Tables

    Directory of Open Access Journals (Sweden)

    Marc Lamarine

    2018-05-01

    Full Text Available Aim of Study: The use of weighed food diaries in nutritional studies provides a powerful method to quantify food and nutrient intakes. Yet mapping these records onto food composition tables (FCTs) is a challenging, time-consuming and error-prone process. Experts perform this task manually, and no automation has previously been proposed. Our study aimed to assess automated approaches to map food items onto FCTs. Methods: We used food diaries (~170,000 records pertaining to 4,200 unique food items) from the DiOGenes randomized clinical trial. We attempted to map these items onto six FCTs available from the EuroFIR resource. Two approaches were tested: the first was based solely on food-name similarity (fuzzy matching); the second used a machine learning approach (C5.0 classifier) combining both fuzzy matching and food energy. We tested mapping food items using both their original names and an English translation. Top matching pairs were reviewed manually to derive performance metrics: precision (the percentage of correctly mapped items) and recall (the percentage of mapped items). Results: The simpler approach, fuzzy matching, provided very good performance. Under a relaxed threshold (score > 50%), this approach mapped 99.49% of the items with a precision of 88.75%. With a slightly more stringent threshold (score > 63%), the precision improved significantly to 96.81% while keeping a recall rate > 95% (i.e., only 5% of the queried items would not be mapped). The machine learning approach did not lead to any improvement over fuzzy matching. However, it substantially increased the recall rate for food items without any clear equivalent in the FCTs (+7% and +20% when mapping items using their original or English-translated names, respectively). Our approaches have been implemented as R packages and are freely available from GitHub. Conclusion: This study is the first to provide automated approaches for large-scale food item mapping onto FCTs. We
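
    The fuzzy-matching stage described above can be sketched as name-similarity scoring against FCT entries with a percent threshold. The abstract does not specify the scoring function, so `difflib`'s ratio is used here as an illustrative stand-in, and the food names are invented:

```python
from difflib import SequenceMatcher

def fuzzy_score(a, b):
    """Similarity in [0, 100], akin to the percent scores in the abstract.
    The study's actual scoring function is not specified here; difflib's
    ratio is an illustrative stand-in."""
    return 100.0 * SequenceMatcher(None, a.lower(), b.lower()).ratio()

def map_to_fct(item, fct_entries, threshold=63.0):
    """Return the best-matching FCT entry, or None below the threshold."""
    best = max(fct_entries, key=lambda e: fuzzy_score(item, e))
    return best if fuzzy_score(item, best) > threshold else None

# toy food composition table, invented entries
fct = ["whole milk", "semi-skimmed milk", "wheat bread", "butter"]
match = map_to_fct("wholemeal bread", fct)
```

    Raising the threshold trades recall (items left unmapped) for precision (fewer wrong mappings), which is exactly the 50% vs 63% trade-off reported above.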

  7. Defect detection in industrial radiography: a multi-scale approach; Detection de defauts en radiographie industrielle: approches multiechelles

    Energy Technology Data Exchange (ETDEWEB)

    Lefevre, M

    1995-10-01

    Radiography is used by Electricite de France for pipe inspection in nuclear power plants in order to detect defects. For several years, the R&D Division of EDF has undertaken research to define image processing methods well adapted to radiographic images. The main issues raised by these images are their low contrast, their high level of noise, the presence of a trend and the variable size of the defects. A database of digitized radiographs of pipes has been gathered, and the statistical, topological and geometrical properties of all of these images have been analyzed. From this study, a global indicator of the presence of defects, and local features leading to a classification of images into areas with or without defects, have been extracted. The defect localisation problem has been considered in a multi-scale framework based on the creation of a family of images with increasing regularity, defined as the solution of a partial differential equation. From a choice of axioms, a set of equations may be deduced which define various multi-scale analyses. The study of the properties of such analyses, when applied to images altered with different types of noise, has led to the selection of the multi-scale analysis best adapted to the digitized radiographs. The segmentation process uses the geodesic information attached to defects via the connection-cost concept. The final decision is based on a summary of the information extracted at several scales; a fuzzy logic approach has been proposed for this part. We then developed methods and tools for expertise guidance and validated them on a complete database of images. Some global indicators have been extracted, and a detection and localisation process has been achieved for large defects. (author). 117 refs., 73 figs.
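
    The simplest member of the axiomatic PDE scale-space family mentioned above is the linear (heat-equation) analysis, which produces a family of increasingly regular images. A sketch on synthetic noise (the defect-adapted analyses selected in the thesis are more elaborate):

```python
import numpy as np

def heat_scale_space(image, n_scales=4, dt=0.2, steps_per_scale=5):
    """Linear (Gaussian) scale space via an explicit heat-equation solver,
    du/dt = Laplacian(u), with periodic boundaries. The simplest instance
    of a PDE-defined family of images with increasing regularity."""
    u = image.astype(float)
    family = [u.copy()]
    for _ in range(n_scales):
        for _ in range(steps_per_scale):
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                   + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
            u = u + dt * lap  # dt <= 0.25 for stability of this stencil
        family.append(u.copy())
    return family

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.0, size=(64, 64))  # stand-in for a noisy radiograph
family = heat_scale_space(noisy)
# smoothing removes noise, so variance decreases with scale
variances = [f.var() for f in family]
```

    Defect detection then combines evidence across the members of `family`, which is the multi-scale summary step the abstract describes.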

  8. A new proposed approach for future large-scale de-carbonization coal-fired power plants

    International Nuclear Information System (INIS)

    Xu, Gang; Liang, Feifei; Wu, Ying; Yang, Yongping; Zhang, Kai; Liu, Wenyi

    2015-01-01

    The post-combustion CO2 capture technology provides a feasible and promising method for large-scale CO2 capture in coal-fired power plants. However, large-scale CO2 capture in conventionally designed coal-fired power plants is confronted with various problems, such as the selection of the steam extraction point and steam parameter mismatch. To resolve these problems, an improved design for future coal-fired power plants with large-scale de-carbonization is proposed. A main characteristic of the proposed design is the adoption of a back-pressure steam turbine, which extracts suitable steam for CO2 capture and ensures the stability of the integrated system. A new let-down steam turbine generator is introduced to recover the surplus energy from the exhaust steam of the back-pressure steam turbine when CO2 capture is cut off. Results show that the net plant efficiency of the improved design is 2.56 percentage points higher than that of the conventional one when the CO2 capture ratio reaches 80%. Meanwhile, the net plant efficiency of the improved design remains at the same level as that of the conventional design when CO2 capture is cut off. Finally, the match between the extracted steam and the heat demand of the reboiler is significantly improved, which solves the steam parameter mismatch problem. The techno-economic analysis indicates that the proposed design is a cost-effective approach for large-scale CO2 capture in coal-fired power plants. - Highlights: • Problems caused by CO2 capture in the power plant are deeply analyzed. • An improved design idea for coal-fired power plants with CO2 capture is proposed. • Thermodynamic, exergy and techno-economic analyses are quantitatively conducted. • Energy-saving effects are found in the proposed coal-fired power plant design idea

  9. QAPgrid: a two level QAP-based approach for large-scale data analysis and visualization.

    Directory of Open Access Journals (Sweden)

    Mario Inostroza-Ponta

    Full Text Available BACKGROUND: The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain "hidden regularities", and a combined identification and visualization method should reveal these structures and present them in a way that aids analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. METHODOLOGY/PRINCIPAL FINDINGS: We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. CONCLUSIONS/SIGNIFICANCE: Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between the clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with a high degree of correlation with the score used by the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need for an ad hoc weighting of attributes.
Finally, our Gene Ontology-based study on
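
    The QAP objective underlying QAPgrid scores a layout by summing, over object pairs, their similarity times the grid distance between their assigned positions, so that low cost places similar objects close together. A toy sketch with brute force standing in for the paper's Memetic Algorithm (the instance below is invented):

```python
import itertools
import numpy as np

def qap_cost(similarity, distance, assignment):
    """QAP objective: sum over object pairs of similarity times the grid
    distance of their assigned positions. assignment[i] is the grid
    position of object i."""
    idx = np.asarray(assignment)
    return float((similarity * distance[np.ix_(idx, idx)]).sum())

# toy instance: 4 objects on a 2x2 grid (positions 0..3)
coords = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
distance = np.abs(coords[:, None, :] - coords[None, :, :]).sum(-1)  # Manhattan
similarity = np.array([[0, 5, 1, 0],
                       [5, 0, 0, 1],
                       [1, 0, 0, 5],
                       [0, 1, 5, 0]])
# brute force in lieu of the memetic algorithm (feasible only at toy size)
best = min(itertools.permutations(range(4)),
           key=lambda p: qap_cost(similarity, distance, p))
```

    At realistic sizes (hundreds of objects) the permutation space is astronomically large, which is why the paper resorts to a metaheuristic.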

  10. A scaling approach to project regional sea level rise and its uncertainties

    Directory of Open Access Journals (Sweden)

    M. Perrette

    2013-01-01

    Full Text Available Climate change causes global mean sea level to rise due to thermal expansion of seawater and loss of land ice from mountain glaciers, ice caps and ice sheets. Locally, sea level can deviate strongly from the global mean rise due to changes in wind and ocean currents. In addition, gravitational adjustments redistribute seawater away from shrinking ice masses. However, the land ice contribution to sea level rise (SLR) remains very challenging to model, and comprehensive regional sea level projections, which include appropriate gravitational adjustments, are still a nascent field (Katsman et al., 2011; Slangen et al., 2011). Here, we present an alternative approach to derive regional sea level changes for a range of emission and land ice melt scenarios, combining probabilistic forecasts of a simple climate model (MAGICC6) with the new CMIP5 general circulation models. The contribution from ice sheets varies considerably depending on the assumptions made for the ice sheet projections, and thus represents a sizeable uncertainty for future sea level rise. However, several consistent and robust patterns emerge from our analysis: at low latitudes, especially in the Indian Ocean and Western Pacific, sea level will likely rise more than the global mean (mostly by 10-20%). Around the northeastern Atlantic and northeastern Pacific coasts, sea level will rise less than the global average or, in some rare cases, even fall. In the northwestern Atlantic, along the American coast, a strong dynamic sea level rise is counteracted by gravitational depression due to Greenland ice melt; whether sea level will be above or below average will depend on the relative contribution of these two factors. Our regional sea level projections and the diagnosed uncertainties provide an improved basis for coastal impact analysis and infrastructure planning for adaptation to climate change.
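
    The scaling approach can be caricatured as combining a global-mean projection with fixed regional patterns: a dynamic pattern scaled by the thermal term, plus ice contributions weighted by gravitational fingerprints. All numbers below are hypothetical placeholders, not values from MAGICC6 or the CMIP5 models:

```python
def regional_slr(global_thermal, ice_contribs, dynamic_pattern, fingerprints):
    """Regional SLR (m) = dynamic pattern scaled by the global thermal term
    plus each ice contribution weighted by its gravitational fingerprint.
    A caricature of the scaling idea; coefficients are illustrative only."""
    local = dynamic_pattern * global_thermal
    for source, contrib in ice_contribs.items():
        local += fingerprints[source] * contrib
    return local

# hypothetical numbers for one coastal location (metres by 2100)
slr = regional_slr(
    global_thermal=0.25,
    ice_contribs={"greenland": 0.10, "antarctica": 0.08, "glaciers": 0.12},
    dynamic_pattern=1.15,              # above-average dynamic rise
    fingerprints={"greenland": 0.4,    # near-field gravitational depression
                  "antarctica": 1.1,
                  "glaciers": 0.95},
)
```

    A fingerprint below 1 near a melting ice sheet captures the gravitational depression effect the abstract describes for the northwestern Atlantic.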

  11. Scale-dependent approaches to modeling spatial epidemiology of chronic wasting disease.

    Science.gov (United States)

    Conner, Mary M.; Gross, John E.; Cross, Paul C.; Ebinger, Michael R.; Gillies, Robert; Samuel, Michael D.; Miller, Michael W.

    2007-01-01

    This e-book is the product of a second workshop that was funded and promoted by the United States Geological Survey to enhance cooperation between states in the management of chronic wasting disease (CWD). The first workshop addressed issues surrounding the statistical design and collection of surveillance data for CWD. The second workshop, from which this document arose, followed logically from the first and focused on appropriate methods for the analysis, interpretation and use of CWD surveillance and related epidemiological data. Consequently, the emphasis of this e-book is on modeling approaches to describe and gain insight into the spatial epidemiology of CWD. We designed this e-book for wildlife managers and biologists who are responsible for the surveillance of CWD in their state or agency. We chose spatial methods that are popular or common in the spatial epidemiology literature and evaluated them for their relevance to modeling CWD. Our opinion of the usefulness and relevance of each method was based on the type of field data commonly collected as part of CWD surveillance programs and on what we know about CWD biology, ecology and epidemiology. Specifically, we expected the field data to consist primarily of the infection status of a harvested or culled sample along with its date of collection (not date of infection), location, and demographic status. We evaluated methods in light of the fact that CWD does not appear to spread rapidly through wild populations, relative to more highly contagious viruses, and can be spread directly from animal to animal or indirectly through environmental contamination.

  12. Genome-scale identification of Legionella pneumophila effectors using a machine learning approach.

    Directory of Open Access Journals (Sweden)

    David Burstein

    2009-07-01

    Full Text Available A large number of highly pathogenic bacteria utilize secretion systems to translocate effector proteins into host cells. Using these effectors, the bacteria subvert host cell processes during infection. Legionella pneumophila translocates effectors via the Icm/Dot type-IV secretion system and to date, approximately 100 effectors have been identified by various experimental and computational techniques. Effector identification is a critical first step towards the understanding of the pathogenesis system in L. pneumophila as well as in other bacterial pathogens. Here, we formulate the task of effector identification as a classification problem: each L. pneumophila open reading frame (ORF was classified as either effector or not. We computationally defined a set of features that best distinguish effectors from non-effectors. These features cover a wide range of characteristics including taxonomical dispersion, regulatory data, genomic organization, similarity to eukaryotic proteomes and more. Machine learning algorithms utilizing these features were then applied to classify all the ORFs within the L. pneumophila genome. Using this approach we were able to predict and experimentally validate 40 new effectors, reaching a success rate of above 90%. Increasing the number of validated effectors to around 140, we were able to gain novel insights into their characteristics. Effectors were found to have low G+C content, supporting the hypothesis that a large number of effectors originate via horizontal gene transfer, probably from their protozoan host. In addition, effectors were found to cluster in specific genomic regions. Finally, we were able to provide a novel description of the C-terminal translocation signal required for effector translocation by the Icm/Dot secretion system. To conclude, we have discovered 40 novel L. pneumophila effectors, predicted over a hundred additional highly probable effectors, and shown the applicability of machine
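
    One of the genomic features named above, low G+C content, is straightforward to compute per ORF. A sketch with invented toy sequences and an arbitrary 0.35 threshold (the study combines many such features in its classifier, not this single rule):

```python
def gc_content(seq):
    """Fraction of G or C nucleotides in an ORF sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# toy ORF fragments, invented for illustration; the low-GC signal is only
# one feature among many, and the 0.35 threshold is purely illustrative
orfs = {
    "candidate_effector": "ATGAATAAATTAGCAATTTTAACTGAAAAT",
    "housekeeping_gene":  "ATGGCCGGCCTGGTGCGCGGCCAGCCGTGA",
}
low_gc = {name: gc_content(s) < 0.35 for name, s in orfs.items()}
```

    In the study, such per-ORF feature vectors (G+C content, taxonomical dispersion, regulatory data, and so on) feed a machine learning classifier rather than a fixed cutoff.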

  13. Acoustic emission monitoring from a lab scale high shear granulator--a novel approach.

    Science.gov (United States)

    Watson, N J; Povey, M J W; Reynolds, G K; Xu, B H; Ding, Y

    2014-04-25

    A new approach to the monitoring of granulation processes using passive acoustics, together with precise control over the granulation process, has highlighted the importance of particle-particle and particle-bowl collisions in acoustic emission. The results have shown that repeatable acoustic results could be obtained, but only when a spray-nozzle water addition system was used. Acoustic emissions were recorded from a transducer attached to the bowl and from an airborne transducer. It was found that the airborne transducer detected very little from the granulation and registered only small changes throughout the process. The results from the bowl transducer showed that during granulation the frequency content of the acoustic emission shifted towards lower frequencies. Results from the discrete element model indicate that when larger particles are used, the number of collisions the particles experience decreases. This is a consequence of the volume-conservation methodology used in this study: larger particles mean fewer particles. These simulation results, coupled with previous theoretical work on the frequency content of an impacting sphere, explain why the frequency content of the acoustic emissions decreases during granule growth. The acoustic system used was also clearly able to identify when large over-wetted granules were present in the system, highlighting its benefit for detecting undesirable operating conditions. High-speed photography was used to study whether visual changes in the granule properties could be linked with the changing acoustic emissions. High-speed photography was only possible towards the latter stages of the granulation process, and it was found that larger granules produced a higher magnitude of acoustic emission across a broader frequency range. Copyright © 2014 Elsevier B.V. All rights reserved.
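
    The reported shift of acoustic-emission energy towards lower frequencies during granule growth can be tracked with a spectral centroid. A sketch on synthetic sine signals (the sampling rate and frequencies below are hypothetical, not values from the study):

```python
import numpy as np

def spectral_centroid(signal, fs):
    """Amplitude-weighted mean frequency (Hz) of the signal's spectrum.
    A falling centroid indicates the low-frequency shift associated
    with granule growth in the abstract."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float((freqs * spectrum).sum() / spectrum.sum())

fs = 50_000                              # 50 kHz sampling rate, hypothetical
t = np.arange(5_000) / fs                # 0.1 s analysis window
early = np.sin(2 * np.pi * 12_000 * t)   # ungranulated powder: high-frequency AE
late = np.sin(2 * np.pi * 3_000 * t)     # grown granules: low-frequency AE
shift_down = spectral_centroid(late, fs) < spectral_centroid(early, fs)
```

    Tracking this single scalar over batch time gives a simple online indicator of granule growth, and a sudden broadband rise would flag the over-wetted condition mentioned above.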

  14. A system approach for reducing the environmental impact of manufacturing and sustainability improvement of nano-scale manufacturing

    Science.gov (United States)

    Yuan, Yingchun

    This dissertation develops an effective and economical system approach to reducing the environmental impact of manufacturing. The system approach is developed using a process-based holistic method for upstream analysis and source reduction of the environmental impact of manufacturing. It covers three components of a manufacturing system: technology, energy and material, and is useful for sustainable manufacturing as it establishes a clear link between the manufacturing system components and the system's overall sustainability performance, and provides a framework for environmental impact reduction. In this dissertation, the system approach is applied to reduce the environmental impact of a semiconductor nano-scale manufacturing system, with three case scenarios analyzed in depth: manufacturing process improvement, clean energy supply, and toxic chemical material selection. The analysis of manufacturing process improvement is conducted on Atomic Layer Deposition (ALD) of Al2O3 dielectric gates on semiconductor microelectronic devices. The sustainability performance and scale-up impact of the ALD technology in terms of environmental emissions, energy consumption, nano-waste generation and manufacturing productivity are systematically investigated, and ways to improve the sustainability of the ALD technology are successfully developed. The clean energy supply is studied using solar photovoltaic, wind, and fuel cell systems for electricity generation. Environmental savings of each clean energy supply over grid power are quantitatively analyzed, and the costs of greenhouse gas reduction for each clean energy supply are comparatively studied. For toxic chemical material selection, an innovative schematic method is developed as a visual decision tool for characterizing and benchmarking the human health impact of toxic chemicals, with a case study conducted on six chemicals commonly used as solvents in semiconductor manufacturing.
Reliability of

  15. Three-dimensional imaging of sediment cores: a multi-scale approach

    Science.gov (United States)

    Deprez, Maxim; Van Daele, Maarten; Boone, Marijn; Anselmetti, Flavio; Cnudde, Veerle

    2017-04-01

    Downscaling is a method used in building-material research, where several imaging methods are applied to obtain information on the petrological and petrophysical properties of materials from the centimetre to the sub-micrometre scale (De Boever et al., 2015). However, to reach better resolutions, the sample size necessarily has to be adjusted as well. If, for instance, X-ray micro-computed tomography (µCT) is applied to the material, the resolution can increase as the sample size decreases. In sedimentological research, X-ray computed tomography (CT) is a commonly used technique (Cnudde & Boone, 2013). The ability to visualise materials with different X-ray attenuations reveals structures in sediment cores that cannot be seen with the bare eye. This results in discoveries of sedimentary structures that can lead to a reconstruction of parts of the depositional history of a sedimentary basin (Van Daele et al., 2014). Up to now, most of the CT data used for this kind of research have been acquired with a medical CT scanner, of which the highest obtainable resolution is about 250 µm (Cnudde et al., 2006). As most sediment grains are smaller than 250 µm, much information concerning sediment fabric, grain size and shape is not obtained when using medical CT. Therefore, downscaling could be a useful method in sedimentological research. After identifying a region of interest within the sediment core with medical CT, a subsample several millimetres in diameter can be taken and imaged with µCT, allowing images with a resolution of a few micrometres. The subsampling process, however, needs to be considered thoroughly. As the goal is to image the structure and fabric of the sediments, deformation of the sediments during subsampling should be avoided as much as possible. After acquiring the CT data, image processing and analysis are performed in order to re